10 interesting stories served every morning and every evening.
OpenCiv3 (formerly known by the codename “C7”) is an open-source, cross-platform, mod-oriented, modernized reimagining of Civilization III, built by the fan community with the Godot Engine and C#, with capabilities inspired by the best of the 4X genre and lessons learned from modding Civ3. Our vision is to make Civ3 as it could have been, rebuilt for today’s modders and players: removing arbitrary limits, fixing broken features, expanding mod capabilities, and supporting modern graphics and platforms. A game that can go beyond C3C while retaining all of its gameplay and content.
OpenCiv3 is under active development and currently in an early pre-alpha state. It is playable in a rudimentary form but lacks many mechanics and late-game content, and errors are likely. Keep up with our development for the latest updates and opportunities to contribute!
New Players Start Here: An Introduction to OpenCiv3 at CivFanatics
NOTE: OpenCiv3 is not affiliated with civfanatics.com, Firaxis Games, BreakAway Games, Hasbro Interactive, Infogrames Interactive, Atari Interactive, or Take-Two Interactive Software. All trademarks are property of their respective owners.
The OpenCiv3 team is pleased to announce the first preview release of the v0.3 “Dutch” milestone. This is a major enhancement over the “Carthage” release, and our debut with standalone mode featuring placeholder graphics without the need for Civ3 media files. A local installation of Civ3 is still recommended for a more polished experience. See the release notes for a full list of new features in each version.
OpenCiv3 Dutch Preview 1 with the same game in Standalone mode (top) and with imported Civ3 graphics (bottom)
Download the appropriate zip file for your OS from the Dutch Preview 1 release
All official releases of OpenCiv3 along with more detailed release notes can be found on the GitHub releases page.
64-bit Windows, Linux, or macOS. Other platforms may be supported in future releases.
Minimum hardware requirements have not yet been identified. Please let us know if OpenCiv3 does not perform well on your system.
Recommended: A local copy of Civilization III files (the game itself does NOT have to run) from Conquests or the Complete edition. Standalone mode is available with placeholder graphics for those who do not have a copy.
Civilization III Complete is available for a pittance from Steam or GOG
This is a Windows 64-bit executable. OpenCiv3 will look for a local installation of Civilization III in the Windows registry automatically, or you may use an environment variable to point to the files.
If the download is blocked, you may need to unblock it: right-click the zip file, choose “Properties”, and check the “Unblock” checkbox in the “Security” section near the bottom buttons
If your Civilization III installation is not detected, you can set the environment variable CIV3_HOME pointing to it and restart OpenCiv3
This is an x86-64 Linux executable. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (sans “Complete” if your install was from pre-Complete CDs) and its contents to your Linux system, or install the game via Steam or GOG.
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.x86_64
To make this variable permanent, add it to your .profile or equivalent.
This is a universal 64-bit executable, so it should run on both Intel and M1 Macs. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (sans “Complete” if your install was from pre-Complete CDs) and its contents to your Mac system, or install the game via Steam or GOG.
Download the zip; your browser may complain bitterly, and you may have to tell it to keep the download instead of trashing it
Double click the zip file, and a folder with OpenCiv3.app and a json file will appear
If you try to open OpenCiv3.app it will tell you it’s damaged and try to trash it; it is not damaged
To unblock the downloaded app, from a terminal run xattr -cr /path/to/OpenCiv3.app; you can avoid typing the path out by typing xattr -cr and then dragging the OpenCiv3.app icon onto the terminal window
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.app with open /path/to/OpenCiv3.app, or again just type open and drag the OpenCiv3 icon onto the terminal window and press enter
OpenCiv3 uses many primitive placeholder assets; loading files from a local Civilization III install is recommended (see platform specific setup instructions above)
Support for playing Civ3 BIQ or SAV files is incomplete; some files will not load correctly and crashes may occur
For Mac:
Mac will try hard not to let you run this; it will tell you the app is damaged and can’t be opened and helpfully offer to trash it for you. From a terminal you can xattr -cr /path/to/OpenCiv3.app to enable running it.
The Mac build will crash if you hit any of the buttons that start a new game (New Game, Quick Start, Tutorial, or Load Scenario) because it can’t find the ‘new game’ save file we’re using as a stand-in for map generation. But you can use Load Game to load c7-static-map-save.json, or open a Civ3 SAV file to open that map
Other specific bugs will be tracked on the GitHub issues page.
© OpenCiv3 contributors. OpenCiv3 is free and open source software released under the MIT License.
...
Read the original on openciv3.org »
Full list of projects available here.
La Suite numérique (La Suite for short) is a full blown open-source digital workspace for online collaboration and teamwork.
La Suite is built by the French government agencies DINUM and ANCT. It is also the product of close European collaboration with the Dutch and German states.
Our code base is 100% open source and MIT licensed.
Come say hello on Matrix
...
Read the original on github.com »
Experimental - This project is still in development, and not ready for prime time.
A minimal, secure Python interpreter written in Rust for use by AI.
Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code.
Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds.
What Monty can do:
* Run a reasonable subset of Python code - enough for your agent to express what it wants to do
* Completely block access to the host environment: filesystem, env variables and network access are all implemented via external function calls the developer can control
* Call functions on the host - only functions you give it access to
* Run typechecking - Monty supports full modern Python type hints and comes with ty included in a single binary to run typechecking
* Be snapshotted to bytes at external function calls, meaning you can store the interpreter state in a file or database, and resume later
* Startup extremely fast (<1μs to go from code to execution result), and has runtime performance that is similar to CPython (generally between 5x faster and 5x slower)
* Be called from Rust, Python, or JavaScript - because Monty has no dependencies on CPython, you can use it anywhere you can run Rust
* Control resource usage - Monty can track memory usage, allocations, stack depth, and execution time and cancel execution if it exceeds preset limits
* Collect stdout and stderr and return it to the caller
* Run async or sync code, with external functions implemented as either async or sync code on the host
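The resource-limit item above can be illustrated with a toy host-side analogue that counts traced lines and aborts once a budget is exhausted. This is only a sketch of the idea in plain CPython; Monty enforces its limits (memory, allocations, stack depth, time) inside its own interpreter:

```python
import sys

class LimitExceeded(Exception):
    pass

def run_with_op_limit(code: str, max_ops: int) -> None:
    """Execute `code`, aborting once more than `max_ops` line events
    have fired. Toy illustration of Monty-style resource limits, not
    Monty's mechanism."""
    ops = 0

    def tracer(frame, event, arg):
        nonlocal ops
        if event == 'line':
            ops += 1
            if ops > max_ops:
                raise LimitExceeded(f'budget of {max_ops} ops exceeded')
        return tracer

    sys.settrace(tracer)
    try:
        exec(code, {})
    finally:
        sys.settrace(None)
```

A runaway loop trips the limit instead of hanging the host, which is the property Monty provides for untrusted LLM-generated code.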
What Monty cannot do:
* Use the standard library (except a few select modules: sys, typing, asyncio, dataclasses (soon), json (soon))
* Use third party libraries (like Pydantic); support for external Python libraries is not a goal
* Define classes (support should come soon)
* Use match statements (again, support should come soon)
In short, Monty is extremely limited and designed for one use case:
For motivation on why you might want to do this, see:
In very simple terms, the idea of all the above is that LLMs can work faster, cheaper and more reliably if they’re asked to write Python (or Javascript) code, instead of relying on traditional tool calling. Monty makes that possible without the complexity of a sandbox or risk of running code directly on the host.
Note: Monty will (soon) be used to implement codemode in Pydantic AI
Monty can be called from Python, JavaScript/TypeScript or Rust.
uv add pydantic-monty
from typing import Any

import pydantic_monty

code = """
async def agent(prompt: str, messages: Messages):
    while True:
        print(f'messages so far: {messages}')
        output = await call_llm(prompt, messages)
        if isinstance(output, str):
            return output
        messages.extend(output)

await agent(prompt, [])
"""

type_definitions = """
from typing import Any

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    raise NotImplementedError()

prompt: str = ''
"""

m = pydantic_monty.Monty(
    code,
    inputs=['prompt'],
    external_functions=['call_llm'],
    script_name='agent.py',
    type_check=True,
    type_check_stubs=type_definitions,
)

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    if len(messages) < 2:
        return [{'role': 'system', 'content': 'example response'}]
    else:
        return f'example output, message count {len(messages)}'

async def main():
    output = await pydantic_monty.run_monty_async(
        m,
        inputs={'prompt': 'testing'},
        external_functions={'call_llm': call_llm},
    )
    print(output)
    #> example output, message count 2

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())
Use start() and resume() to handle external function calls iteratively, giving you control over each call:
import pydantic_monty

code = """
data = fetch(url)
len(data)
"""

m = pydantic_monty.Monty(code, inputs=['url'], external_functions=['fetch'])

# Start execution - pauses when fetch() is called
result = m.start(inputs={'url': 'https://example.com'})
print(type(result))
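The pause-and-resume control flow is analogous to driving a Python generator by hand. Here is a toy analogy (not Monty's actual machinery): the "interpreter" yields an external-call request, pauses, and the host sends the call's return value back in to resume it:

```python
def toy_script(url):
    # Pause at the "external call", handing (name, args) to the host;
    # the host resumes us with the call's return value.
    data = yield ('fetch', url)
    return len(data)

gen = toy_script('https://example.com')
request = gen.send(None)        # runs until fetch() is "called"
print(request)                  # ('fetch', 'https://example.com')
try:
    gen.send('response data')   # resume with the host's result
except StopIteration as stop:
    print(stop.value)           # 13, i.e. len('response data')
```

Monty's start()/resume() gives you the same shape of control: your host code decides how, when, and whether each external call is answered.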
Both Monty and MontySnapshot can be serialized to bytes and restored later. This allows caching parsed code or suspending execution across process boundaries:
import pydantic_monty

# Serialize parsed code to avoid re-parsing
m = pydantic_monty.Monty('x + 1', inputs=['x'])
data = m.dump()

# Later, restore and run
m2 = pydantic_monty.Monty.load(data)
print(m2.run(inputs={'x': 41}))
#> 42

# Serialize execution state mid-flight
m = pydantic_monty.Monty('fetch(url)', inputs=['url'], external_functions=['fetch'])
progress = m.start(inputs={'url': 'https://example.com'})
state = progress.dump()

# Later, restore and resume (e.g., in a different process)
progress2 = pydantic_monty.MontySnapshot.load(state)
result = progress2.resume(return_value='response data')
print(result.output)
#> response data
use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};
let code = r#"
def fib(n):
    if n
MontyRun and RunProgress can be serialized using the dump() and load() methods:
use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};
// Serialize parsed code
...
Read the original on github.com »
This is a demo for how you can turn an ESP32-S3 microcontroller into a tiny instant-on PC with its own shell, editor, compiler, and online apps installer. Something like Raspberry Pi, minus the overhead of a full server/desktop grade OS. I think ESP32 is underrated in hobby maker community for this PC-like use case. This demo uses BreezyBox, my mini-shell ESP-IDF component.
First of all, seeing is believing (click to watch the video):
It started as a “cyberdeck” style crafting project. Then I got carried away with the software part. I chose ESP32-S3 for the base platform. It has the nostalgic appeal of the DOS era PCs, with similar resources, and elbow-deep-in-bytes coding experience, plus modern wireless comms.
ESP32-S3 can do everything those PCs did and more, but that is inconvenient out of the box, because that is not the commercial use case it is positioned for. It also forces away the code bloat. If you are like me, and love small elegant things, and technology that punches way above its weight, you ought to try it!
So anyway, I decided to try and package some key missing parts: a basic vterm, the current working directory (CWD) tracking, a few familiar UNIX-like commands, and an app installer. Believe it or not, the rest is already there in ESP-IDF components, including the elf_loader with dynamic linking.
The result is called “BreezyBox”, by analogy with the BusyBox commands suite. The name is just a light joke, it is not meant to be a full clone. You can import it with one command in your ESP-IDF project, and if you have some stdio going, even at “Hello World” level, it should mostly just work. I call it a “mini shell”, a naïve user might call it an OS (it is not, it runs on FreeRTOS), and you can also call it the userland layer.
The BreezyBox component leaves the display and other board configuration details to the user’s firmware project, providing mainly the vterm/vfs features, and some shell commands. This particular example/demo project supports only one specific dev board: Waveshare ESP32-S3-Touch-LCD-7B (no affiliation). But you can see how all the parts connect, and adapt it to your display/board, or just copy some code snippets from here.
I suggest just fork it, clone it, and try to make it work on your board. Mine was about 40€; you can start with some random $10 two inch LCD S3 dev board if you like. Hint: LVGL text label control is the easiest path to stdout on LCD that works almost everywhere. You can also start with a headless board over USB console, that takes zero code, and gives you free ANSI codes in standard IDF Monitor in VSCode (or in Tabby).
You do not have to write your own font renderer like I did here; that was just to push past 30 FPS on a display slightly too large for this chip.
This is free software under MIT License.
The best help is currently more testing beyond “works on my computer”, more shared examples and fun use cases:
More ELF apps — see the examples at my breezyapps repo, they are super easy to follow. Even a carefully written stdlib C program with no platform-specific bits may work sometimes, also with some ANSI codes. But be sure to verify on the actual ESP32-S3: the memory is tight, the larger PSRAM requires alignment, and there are other limits and quirks. You can publish and install the apps using your own repo.
More full example firmware repositories: for different boards, with different styles. Maybe you provide the basic LVGL text label example on some popular board. Maybe you prefer C++ to plain C. Maybe you embrace the GUI. Maybe you port some retro games. Maybe you even make it work on P4, or C6 (RISC-V, a completely different CPU). Maybe you attach some cool gadgets to it. Maybe you build an extra cool cyberdeck case. Or maybe you reproduce the exact same thing, and just share your setup experience and hands-on impressions.
It would be so cool to see more people using BreezyBox, and to have more ready-to-clone examples for everyone!
...
Read the original on github.com »
I don’t post a lot. But when I do, it’s because I think few people are saying out loud what I’m noticing.
I’ve been building a product from the ground up. Not the “I spun up a Next.js template” kind of ground up. I mean from network configuration to product design to pricing decisions. Truly end to end. And I’ve been doing it using frontier models and coding agents for hours and hours every single day, both on this project and in my full time work. I’ve been trying to stay away from the chaos and the hype, filtering hard for what is actually valuable.
Since December 2025, things have dramatically changed for the better. Many have noticed. Few are drawing the right conclusions.
Antirez likes to call it “automated programming”, and I really like that framing. It captures the essence far better than the shallow, almost dismissive label of “vibe coding”. Automation was at the core of most of the work and cultural revolutions of human history. The printing press, the loom, the assembly line. This one doesn’t differ much.
Most of my work is still there. I still have to deeply think about every important aspect of what I want to build. The architecture, the trade offs, the product decisions, the edge cases that will bite you at 3am. What’s gone is the tearing, exhausting manual labour of typing every single line of code.
At this point in time, models and tools, when put in a clean and maniacally well set up environment, can truly make the difference. I can be the architect without the wearing act of laying every single brick and spreading the mortar. I can design the dress without the act of cutting and sewing each individual piece of fabric. But I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time.
Automated programming especially allows me to build the tools I need so fast that every blacksmith that ever existed on this earth would envy me deeply: finally able to really focus on the things they had in mind, finally dedicating more of their craft to the art they conceive, not the sweat of the forge.
It’s been months now that I have this thought crystallized in my mind. It is so clear to me that I genuinely don’t understand why everyone is not screaming it to the world.
We can finally get rid of all that middle work. That adapting layer of garbage we blindly accepted during these years. A huge amount of frameworks and libraries and tooling that has completely polluted software engineering, especially in web, mobile and desktop development. Layers upon layers of abstractions that abstract nothing meaningful, that solve problems we shouldn’t have had in the first place, that create ten new problems for every one they claim to fix.
Think about what happened. We, as an industry, looked at the genuine complexity of building software and instead of sharpening our thinking, we bought someone else’s thinking off the shelf. We wrapped everything in frameworks like wrapping a broken leg in silk. It looks nice. The leg is still broken.
In my mind, besides the self declared objectives, frameworks solve three problems. Two explicit and one obvious but never declared.
“Simplification”. Software engineers are scared of designing things themselves. They would rather accept someone else’s structure, despite having to force fit it into their product, rather than taking the time to start from the goal and work backwards to create the perfect suit for their idea. Like an architect blindly accepting another architect’s blueprints and applying them regardless of the context, the needs, the terrain, the new technological possibilities. We decided to remove complexity not by sharpening our mental models around the products we build, but by buying a one size fits all design and applying it everywhere. That is not simplification. That is intellectual surrender.
Automation. This is the only point I can actually, more or less, understand and buy. Boilerplate is boring work. I hate it. And I especially hate using libraries that I then need to study, keep updated, be aware of vulnerabilities for, just for the purpose of removing the creation of duplicated but necessary code. Think about ORMs, CRUD management, code generation, API documentation and so on. The grunt work that nobody wants to do but everybody needs done. Fair enough. But hold that thought, because this is exactly the point where everything changes.
Labour cost. This is the quiet one. The one nobody puts on the conference slide. For companies, it is much better having Google, Meta, Vercel deciding for you how you build product and ship code. Adopt their framework. Pay the cost of lock in. Be enchanted by their cloud managed solution to host, deploy, store your stuff. And you unlock a feature that has nothing to do with engineering: you no longer need to hire a software engineer. You hire a React Developer. No need to train. Plug and play. Easy to replace. A cog in a machine designed by someone else, maintaining a system architected by someone else, solving problems defined by someone else. This is not engineering. This is operating.
In my opinion, software engineering, the true kind, is back.
I am not speaking from theory alone. I’ve been developing this way almost flawlessly for over two years at this point. But the true revolution clearly happened last year, and since December 2025 it has been obvious to anyone paying attention. From now on it will be even more so.
We have the chance again to get rid of useless complexity and keep working on the true and welcome complexity of our ideas, our features, our products. The complexity that matters. The complexity that is actually yours.
Automation and boilerplating have never been so cheap to overcome. I basically never write the same line of code twice. I’m instantly building the small tools I need, purpose built, exactly shaped around the problem at hand. I don’t need any fancy monorepo manager. A simple Makefile covers 100% of my needs for 99% of my use cases. If and when things get very complicated, I’ll think about it. But only then. Not a second before. This is engineering. You solve the problem you have, not the problem someone on a conference stage told you that you’ll eventually have.
Agents are really well prepared when it comes to basic tools. Tools that have been around not for months, but literally for decades. Bash was born in 1989, just preceding me by two months. The most mediocre model running at this time knows bash better than any person in the world. Bash is the universal adapter. It is not a coincidence that coding agents are shifting from complex and expensive MCP configurations to a simple agent loop with bash as a way to interact, literally, with the world. The oldest tool turned out to be the most future proof. There’s a lesson in there if you care to listen.
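The "agent loop with bash" described above can be sketched in a few lines. This is a hypothetical, minimal tool function, not any particular agent's implementation; a production loop would add sandboxing, allow-lists, and feed the output back into the next model turn:

```python
import subprocess

def run_bash_tool(command: str) -> str:
    """Run one model-proposed shell command and return its combined
    output for the next model turn. Illustrative sketch only; real
    agents constrain what may run and where."""
    result = subprocess.run(
        ['bash', '-c', command],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout + result.stderr
```

The point of the pattern: one universal, decades-old interface replaces a pile of bespoke tool-call schemas.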
Really think about it.
Why do you ever need, for most of the use cases you can think of, a useless, expensive, flawed, often vulnerable framework, and the parade of libraries that comes with it, that you probably use for only 10% of its capabilities? With all the costs associated with it. From the “least” expensive: operational costs like keeping everything updated because they once again found a critical vulnerability in your Next.js version. To the most expensive one: the cost to your Design Choices. The invisible cost. The one you pay every day without even realizing it, because you’ve been paying it so long you forgot what freedom felt like.
If you keep accepting this trade off, you are not only losing the biggest opportunity we’ve seen in software engineering in decades. You are probably not recognizing your own laziness in once again buying whatever the hyperscalers have decided for you. You’re letting Google and Meta and Vercel be your architect, your designer, your thinker. And in exchange, you get to be their operator.
The tools are here. The models are here. The revolution already happened and most people are still decorating the old house.
Stop wrapping broken legs in silk. Start building things that are yours.
...
Read the original on blog.alaindichiappari.dev »
This is a vocal technique reference covering 21 techniques across five categories. It’s designed as a learning companion — whether you’re a beginner finding your voice or an experienced singer expanding your toolkit.
The sticky bar below the title lets you jump between sections. Each colored dot matches its category:
— ways to shape and color your sound
How to read the table
Each row is one technique. Hover the technique name to see a short description. The difficulty dots (● ○ ○ ○ ○) show how advanced it is, from 1 to 5.
Some techniques show small dashed chips beneath the name — these are prerequisites. The chip color tells you which category the prerequisite belongs to. Hover a chip to see what that technique sounds like, or click it to jump straight to its row in the table.
Techniques marked with ⚠️ warnings can cause damage if done incorrectly. The golden rule: if it hurts, stop. Work with a vocal coach for anything rated 4–5 dots.
Use EN / DA to switch language and the theme button to cycle through five color schemes: Dark, Light, Midnight, Forest, and Ember. Your choices are saved automatically.
I hope this guide helps you on your vocal journey. If you have suggestions, found a bug, or just want to say hi — I’d love to hear from you.
Check your posture — feet shoulder-width, shoulders back and down, chin level.
Release tension — roll your neck, shrug and drop shoulders, shake out your arms.
No cold starts — never belt, distort, or push range without warming up first.
Breathing (1 min) — Inhale 4 counts into belly/sides/back. Exhale on “Sss” for 15–20 seconds. Repeat 3x. This activates your support system.
Lip Trills (1 min) — Blow air through closed lips to make them vibrate. Slide up and down your range. Keeps everything relaxed and connected.
Humming (1 min) — Hum on “Mmm” through 5-note scales, ascending. Feel the buzz in your face (mask resonance). Keep jaw and tongue loose.
Vowel Slides (1 min) — Sing “Mee-Meh-Mah-Moh-Moo” on a single note, then move up by half steps. Opens the vocal tract gradually.
Sirens (1 min) — Slide from bottom to top and back on “Woo” or “Wee.” Full range, gentle, no pushing. This bridges your registers.
Straw phonation — Sing through a straw (or into a cup of water with a straw). Creates back-pressure that balances airflow and fold closure. Best warm-up tool available.
Tongue trills — Roll your tongue on “Rr” while singing scales. Releases tongue tension (a common problem).
Arpeggios — 1-3-5-8-5-3-1 on “Nay” or “Gee” to work through your passaggio (break area).
A dome-shaped muscle beneath your lungs. When you inhale, it flattens downward, pulling air in. You don’t directly “sing from your diaphragm” — you use it to control the rate of exhalation. Think of it as an air pressure regulator, not a sound source.
Two small folds of tissue in your larynx. When air passes through, they vibrate and create sound. Thicker vibration = chest voice. Thinner vibration = head voice. Partial closure = falsetto/breathy. The space between them is called the glottis.
Your “voice box.” It can move up (bright, thin sound) or down (dark, warm sound). For most singing, a neutral or slightly lowered larynx is ideal. A high larynx under pressure = strain. Learn to keep it stable — yawning gently while singing helps find the right position.
Abdominals — Control exhalation pressure. They don’t push air out — they slow the collapse of the rib cage.
Intercostals — Muscles between your ribs. Keep your ribs expanded during singing. This is “appoggio” (leaning into the breath).
Back muscles — Often forgotten. Your lower back expands when breathing correctly. Engage it for support.
Stand with feet shoulder-width apart. Knees slightly bent (not locked). Shoulders relaxed, back and down. Chest comfortably open. Head balanced on top of the spine — not jutting forward. Imagine a string pulling you up from the crown of your head.
Hydration — Drink water consistently throughout the day, not just before singing. Your vocal folds need systemic hydration.
Steam inhalation — Breathe steam for 10 minutes before heavy singing. This directly hydrates the folds.
Rest — Your voice needs recovery time. Avoid talking loudly after intense sessions.
Common advice that’s misleading, incomplete, or outright harmful. If someone tells you any of these, be skeptical.
You can’t directly control your diaphragm — it’s an involuntary muscle on the inhale. What people mean is: use your abdominal and intercostal muscles to control exhalation. Saying “sing from your diaphragm” is like saying “digest from your stomach.” Technically involved, but not how you’d teach it.
“Drink tea with honey to fix your voice”
Tea and honey never touch your vocal folds — they go down your esophagus, not your trachea. They can soothe throat irritation and feel nice, but they don’t “fix” or “coat” your cords. What actually helps: steam inhalation and systemic hydration (water, hours in advance).
The sound isn’t produced in your chest. “Chest voice” refers to the thick vocal fold vibration pattern that creates sympathetic resonance you feel in your upper torso. The sound is always made at the vocal folds in your larynx.
“Falsetto is only for men”
Everyone with vocal folds can produce falsetto — it’s a mode of vibration where the folds don’t fully close. Women use it too, though the timbral difference from head voice may be less dramatic.
They give you a damaged voice. The “rasp” from smoking and alcohol comes from swollen, irritated, dehydrated folds. Healthy vocal distortion uses the false folds and arytenoids — structures above the true cords. One is controlled art, the other is permanent damage.
The exact opposite. High notes require less air, not more. Pushing more air at higher pitches forces the folds apart and creates strain. Think “less air, more compression” — let the folds do the work.
Artificial vibrato (jaw wobble, diaphragm pulse) sounds unnatural and creates tension. Real vibrato emerges naturally when breath support is solid and the throat is relaxed. If you don’t have vibrato yet, the fix is better technique — not manufacturing it.
“You’re either born with it or you’re not”
Singing is a motor skill. Some people have natural advantages (vocal fold length, resonance cavity size), but technique, pitch accuracy, tone quality, and range are all trainable. Most “natural” singers practiced obsessively as children.
Your vocal folds are tissue. They need increased blood flow and gradual stretching before heavy use — just like any other muscle. Cold singing is the fastest path to strain, nodules, and hemorrhages.
Pain means damage. Unlike skeletal muscles, vocal folds don’t grow stronger from micro-tears. Pain, burning, or persistent hoarseness = stop immediately. Rest. If it lasts more than a few days, see an ENT specialist.
This guide uses traditional vocal terminology (chest voice, head voice, mixed voice, etc.) because it’s the most widely understood framework worldwide. However, the most scientifically validated system is Complete Vocal Technique (CVT), developed by Cathrine Sadolin at the Complete Vocal Institute in Copenhagen.
CVT is built on laryngoscopic imaging, EGG measurements, and peer-reviewed acoustic research. Here’s how the two frameworks relate.
CVT classifies all singing into four modes based on vocal tract configuration — not felt vibration:
Support — Coordinated abdominal, waist, solar plexus, and back muscle engagement to control air pressure and airflow. (This guide: Breath Support)
Necessary Twang — Narrowing the epiglottic funnel for clearer, more efficient sound. CVT considers this foundational for all healthy singing, not just a style. (This guide: Twang)
Avoid protruding jaw & tightened lips — These trigger uncontrolled vocal cord constriction, especially in upper register. (Not explicitly covered in this guide)
“Overdrive” in CVT is a clean mode, not distortion
Vowel rules: CVT restricts specific vowels per mode. Traditional pedagogy uses general vowel modification. Both work, but CVT is more precise.
Metal & Density: CVT uses “degree of metal” (0–100%) and “density” (fuller vs. reduced) as parameters. Traditional pedagogy doesn’t have these concepts.
“Overdrive” means different things: In this guide, overdrive = heavy vocal distortion (like guitar overdrive). In CVT, Overdrive = a clean, shouty vocal mode.
Learn more: completevocalinstitute.com
Look for expandable annotations on technique cards throughout this guide.
The fundamental modes of vocal fold vibration. Every sound you make lives somewhere on this spectrum. Master these before anything else.
Ways of using your registers to create specific sounds. These define genres and artistic identity.
Textures and colors you add to your base tone. These are the seasoning — use them deliberately, not as defaults.
Decorative techniques that add flair, personality, and musicality to your phrasing.
The foundation everything else rests on. Control here is the difference between amateurs and professionals.
...
Read the original on jesperordrup.github.io »
Recent posts:
04 Aug 2025 »
When to Hire a Computer Performance Engineering Team (2025) part 1 of 2
17 Mar 2024 »
The Return of the Frame Pointers
19 Mar 2022 »
Why Don’t You Use …
Blog index
About
RSS
The staggering and fast-growing cost of AI datacenters is a call for performance engineering like no other in history; it’s not just about saving costs — it’s about saving the planet. I have joined OpenAI to work on this challenge directly, with an initial focus on ChatGPT performance. The scale is extreme and the growth is mind-boggling. As a leader in datacenter performance, I’ve realized that performance engineering as we know it may not be enough — I’m thinking of new engineering methods so that we can find bigger optimizations than we have before, and find them faster. It’s the opportunity of a lifetime and, unlike in mature environments of scale, it feels as if there are no obstacles — no areas considered too difficult to change. Do anything, do it at scale, and do it today.
Why OpenAI exactly? I had talked to industry experts and friends who recommended several companies, especially OpenAI. However, I was still a bit cynical about AI adoption. Like everyone, I was being bombarded with ads by various companies to use AI, but I wondered: was anyone actually using it? Everyday people with everyday uses? One day during a busy period of interviewing, I realized I needed a haircut (as it happened, it was the day before I was due to speak with Sam Altman).
Mia the hairstylist got to work, and casually asked what I do for a living. “I’m an Intel fellow, I work on datacenter performance.” Silence. Maybe she didn’t know what datacenters were or who Intel was. I followed up: “I’m interviewing for a new job to work on AI datacenters.” Mia lit up: “Oh, I use ChatGPT all the time!” While she was cutting my hair — which takes a while — she told me about her many uses of ChatGPT. (I, of course, was a captive audience.) She described uses I hadn’t thought of, and I realized how ChatGPT was becoming an essential tool for everyone. Just one example: She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
I had previously chatted to other random people about AI, including a realtor, a tax accountant, and a part-time beekeeper. All told me enthusiastically about their uses of ChatGPT; the beekeeper, for example, uses it to help with small business paperwork. My wife was already a big user, and I was using it more and more, e.g. to sanity-check quotes from tradespeople. Now my hairstylist, who recognized ChatGPT as a brand more readily than she did Intel, was praising the technology and teaching me about it. I stood on the street after my haircut and let it sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet. Joining OpenAI might be the biggest opportunity of my lifetime.
It’s nice to work on something big that many people recognize and appreciate. I felt this when working at Netflix, and I’d been missing that human connection when I changed jobs. But there are other factors to consider beyond a well-known product: what’s my role, who am I doing it with, and what is the compensation?
I ended up having 26 interviews and meetings (of course I kept a log) with various AI tech giants, so I learned a lot about the engineering work they are doing and the engineers who do it. The work itself reminds me of Netflix cloud engineering: huge scale, cloud computing challenges, fast-paced code changes, and freedom for engineers to make an impact. Lots of very interesting engineering problems across the stack. It’s not just GPUs, it’s everything.
The engineers I met were impressive: the AI giants have been very selective, to the point that I wasn’t totally sure I’d pass the interviews myself. Of the companies I talked to, OpenAI had the largest number of talented engineers I already knew, including former Netflix colleagues such as Vadim who was encouraging me to join. At Netflix, Vadim would bring me performance issues and watch over my shoulder as I debugged and fixed them. It’s a big plus to have someone at a company who knows you well, knows the work, and thinks you’ll be good at the work.
Some people may be excited by what it means for OpenAI to hire me, a well known figure in computer performance, and of course I’d like to do great things. But to be fair on my fellow staff, there are many performance engineers already at OpenAI, including veterans I know from the industry, and they have been busy finding important wins. I’m not the first, I’m just the latest.
AI was also an early dream of mine. As a child I was a fan of British SciFi, including Blake’s 7 (1978-1981) which featured a sarcastic, opinionated supercomputer named Orac. Characters could talk to Orac and ask it to do research tasks. Orac could communicate with all other computers in the universe, delegate work to them, and control them (this was very futuristic in 1978, pre-Internet as we know it).
Orac was considered the most valuable thing in the Blake’s 7 universe, and by the time I was a university engineering student I wanted to build Orac. So I started developing my own natural language processing software. I didn’t get very far, though: main memory at the time wasn’t large enough to store an entire dictionary plus metadata. I visited a PC vendor with my requirements and they laughed, telling me to buy a mainframe instead. I realized I needed it to distinguish hot versus cold data and leave cold data on disk, and maybe I should be using a database… and that was about where I left that project.
Last year I started using ChatGPT, and wondered if it knew about Blake’s 7 and Orac. So I asked:
ChatGPT’s response nails the character. I added it to Settings->Personalization->Custom Instructions, and now it always answers as Orac. I love it. (There’s also surprising news for Blake’s 7 fans: A reboot was just announced!)
I am now a Member of Technical Staff for OpenAI, working remotely from Sydney, Australia, and reporting to Justin Becker. The team I’ve joined is ChatGPT performance engineering, and I’ll be working with the other performance engineering teams at the company. One of my first projects is a multi-org strategy for improving performance and reducing costs.
There are so many interesting things to work on, things I have done before and things I haven’t. I’m already using Codex for more than just coding. Will I be doing more eBPF, Ftrace, PMCs? I’m starting with OpenAI’s needs and seeing where that takes me; but given those technologies are proven for finding datacenter performance wins, it seems likely — I can lead the way. (And if everything I’ve described here sounds interesting to you, OpenAI is hiring.)
I was at Linux Plumbers Conference in Tokyo in December, just after I announced leaving Intel, and dozens of people wanted to know where I was going next and why. I thought I’d write this blog post to answer everyone at once. I also need to finish part 2 of hiring a performance engineering team (it was already drafted before I joined OpenAI). I haven’t forgotten.
It took months to wrap up my prior job and start at OpenAI, so I was due for another haircut. I thought it’d be neat to ask Mia about ChatGPT now that I work on it, then realized it had been months and she could have changed her mind. I asked nervously: “Still using ChatGPT?” Mia responded confidently: “Twenty-four seven!”
I checked with Mia; she was thrilled to be mentioned in my post. This is also a personal post: no one asked me to write this.
...
Read the original on www.brendangregg.com »
Rebecca Guy, senior policy manager at the Royal Society for the Prevention of Accidents, said: “Regular vision checks are a sensible way to reduce risk as we age, but the priority must be a system that supports people to drive safely for as long as possible, while ensuring timely action is taken when health or eyesight could put them or others in danger.”
...
Read the original on www.bbc.com »
Browse by range of dating or by category, New Testament — Apocrypha — Gnostics — Church Fathers — Other, or use the search box.
Acts of Peter and the Twelve
Discourse on the Eighth and Ninth
On the Origin of the World
Also available on the Early Christian Writings CD-ROM
Early Christian Writings is the most complete collection of Christian texts before the Council of Nicaea in 325 AD. The site provides translations and commentary for these sources, including the New Testament, Apocrypha, Gnostics, Church Fathers, and some non-Christian references. The “Early Christian Writings: New Testament, Apocrypha, Gnostics, Church Fathers” site is copyright © Peter Kirby. Permission is given to link to any HTML file on the Early Christian Writings site.
Gospels
Gospel Fragments
Apostolic Acts
Acts of Peter and the Twelve
Martyrologies
Fifth and Sixth Books of Ezra
Dialogues with Jesus
Apocalypses
Acts
Acts of Peter and the Twelve
More Nag Hammadi
Discourse on the Eighth and the Ninth
Apostolic Fathers
Quoted Authors
More Quoted Authors
Pagan and Jewish
Jewish/Christian
...
Read the original on earlychristianwritings.com »
I was good at DevOps. This isn’t a story about escaping a job I hated. I spent five years as a DevSecOps engineer at two large financial services companies. I built things I was proud of. I learned a ton. I was respected by my team.
But somewhere around year four, something shifted. The work didn’t change. I did. After five years, I made a change that surprised everyone who knew me: I left for a sales-adjacent role. Today, I’m one year into working as a Solutions Engineer at Infisical, and I want to share what I’ve learned. I think there are other DevOps engineers out there who might be feeling the same thing I felt, even if they can’t quite name it yet.
For a long time, I couldn’t pinpoint what was wrong. The job was fine. I was good at it. But I started dreading the monotony of it all.
Part of it was repetition. My days had become predictable: check the dashboards, respond to tickets, debug whatever broke overnight, push some Terraform, go home. Maintain the HashiCorp Vault clusters, manage the secrets pipelines, answer the same support questions. Repeat. The work that used to feel engaging had become routine.
Part of it was stagnation. When I first started, I was learning constantly. Vault architecture, PKI fundamentals, secrets rotation, the politics of platform adoption in a large enterprise. But once I’d mastered the core toolset and codebase, the learning curve flattened. I wasn’t being challenged anymore. I was just keeping things running.
And part of it was isolation. Most days, it was just me and my pipelines. My primary relationships were with CI/CD tools and YAML files. The humans I did interact with were usually frustrated. They needed something from me, or something I owned was blocking them. I missed working with people, not just unblocking them from behind a ticket queue.
I didn’t have a word for what I was looking for. I just knew that what I was doing wasn’t it anymore.
I had no idea Solutions Engineering was a real career path.
I knew sales existed. I vaguely knew there were “technical sales” people. But I assumed those were salespeople who’d learned enough technical vocabulary to get by, not engineers who actually stayed technical while working with customers.
Some friends in sales pointed it out to me. They’d watched me get animated whenever someone asked how something worked, and one of them eventually said: “You know there’s a job where you explain technical stuff to people all day and help them solve problems, right?”
I’d never considered anything sales-adjacent because I assumed it meant leaving the technical world. But the more I learned about SE, the more I realized it might solve everything I was missing. New problems to solve every day. Constant learning. And more people.
I ended up joining Infisical. There’s some irony there: I spent years managing HashiCorp Vault, and now I work for a Vault competitor. But that background is exactly why the role made sense. I knew the space. I knew the pain points. I knew what it felt like to be the engineer on the other side, evaluating these tools.
The biggest change is the simplest one: I talk to people now. A lot of people. Every day.
In a single week, I might have a discovery call with a fintech startup, demo to a platform team at an aerospace company, help a healthcare organization troubleshoot their Kubernetes deployment, and run a workshop for a manufacturing company’s security team. Fintech to aerospace to healthcare to manufacturing, all with completely different stacks and problems.
And it’s not all over Zoom. I get to visit customers on-site, sit down with their teams face to face, and actually understand how they work. Over time, you build real relationships with these people. You become their trusted technical advisor, not just a vendor they talk to once. That’s something I never experienced in DevOps, where my “customers” were internal teams who mostly just wanted me to unblock them.
The contrast with my DevOps days is stark. Monday morning used to mean checking dashboards, then the ticket queue, then the same Slack questions I’d answered last week. Now Monday morning might mean prepping for a demo with a company I’ve never spoken to, or helping a long-time customer work through a new challenge. I genuinely don’t know what most days will look like until they start.
One thing I didn’t expect: I’m not just talking to customers. I’ve become a bridge between the people using our product and the people building it. Every customer conversation surfaces pain points, feature gaps, edge cases our docs don’t cover. I take that back to engineering and product. I’m actually influencing what we build next, which is something I never had in DevOps. I was always downstream of decisions, not shaping them.
My biggest fear going in was that I’d lose my technical edge. That I’d become “the sales guy” and slowly forget how things actually worked.
That didn’t happen. As an SE, I’m exposed to everything. Customers running Kubernetes, ECS, Lambda, bare metal, air-gapped environments. AWS, Azure, GCP, hybrid setups. CI/CD in Jenkins, GitHub Actions, GitLab, CircleCI. I have to understand their environment well enough to actually help them, so the learning is constant. The stagnation I felt in DevOps? Gone.
But all those years in DevOps weren’t just background. They’re the reason I’m useful in this role.
I get on calls and prospects describe problems I’ve literally lived. They talk about the pain of managing secrets at scale, and I can say “yeah, I’ve been there” and actually mean it. That shared experience changes the dynamic completely. I’m not a salesperson trying to manufacture urgency. I’m an engineer who dealt with the same problems and found something that helped.
Is it all upside? No. Demoing is a skill I had to build from scratch. The context-switching is intense. The stress is different, more human, more ambiguous. But it’s the kind of challenge that makes me better, not the kind that grinds me down.
This path isn’t for everyone. If you love going deep on a single system and optimizing it over years, SE might feel too scattered. If you genuinely prefer working alone with your tools, the constant human interaction might drain you.
But if any of this resonates, if you’re good at DevOps but feel stuck, if the work has become repetitive, if you miss collaborating with people, if you find yourself energized when explaining technical concepts or helping someone work through a problem, Solutions Engineering might be worth exploring.
I didn’t know this role existed until someone pointed it out to me. Now, one year in, I can’t imagine going back. Not because DevOps is bad. It’s critical work, and the people who do it well deserve more credit than they get.
But for me, it was missing something I didn’t know how to name until I found it: the chance to be technical and connected. To keep learning, to solve new problems every day, and to do it alongside people instead of behind a queue.
If you’re a DevOps engineer feeling something similar, even if you can’t quite articulate it, maybe this is what you’re looking for too.
I didn’t know this role existed until someone pointed it out to me. Consider this me pointing it out to you. We’re hiring at Infisical.
...
Read the original on infisical.com »