10 interesting stories served every morning and every evening.
Go full --yolo. We've got you.

LLMs are probabilistic — a 1% chance of disaster makes it a matter of when, not if. Safehouse makes this a 0% chance — enforced by the kernel. Safehouse denies write access outside your project directory. The kernel blocks the syscall before any file is touched. All agents work perfectly in their sandboxes, but can't impact anything outside them.

Agents normally inherit your full user permissions. Safehouse flips this — nothing is accessible unless explicitly granted.

Download a single shell script, make it executable, and run your agent inside it. No build step, no dependencies — just Bash and macOS.

Safehouse automatically grants read/write access to the selected workdir (git root by default) and read access to your installed toolchains. Most of your home directory — SSH keys, other repos, personal files — is denied by the kernel.

See it fail — proof the sandbox works

Try reading something sensitive inside Safehouse. The kernel blocks it before the process ever sees the data.

# Try to read your SSH private key — denied by the kernel
safehouse cat ~/.ssh/id_ed25519
# cat: /Users/you/.ssh/id_ed25519: Operation not permitted
# Try to list another repo — invisible
safehouse ls ~/other-project
# ls: /Users/you/other-project: Operation not permitted
# But your current project works fine
safehouse ls .
# README.md src/ package.json …

Add these to your shell config and every agent runs inside Safehouse automatically — you don't have to remember. To run without the sandbox, use `command claude` to bypass the function.

# ~/.zshrc or ~/.bashrc
safe() { safehouse --add-dirs-ro=~/mywork "$@"; }
# Sandboxed — the default. Just type the command name.
claude() { safe claude --dangerously-skip-permissions "$@"; }
codex() { safe codex --dangerously-bypass-approvals-and-sandbox "$@"; }
amp() { safe amp --dangerously-allow-all "$@"; }
gemini() { NO_BROWSER=true safe gemini --yolo "$@"; }
# Unsandboxed — bypass the function with `command`
# command claude — plain interactive session

Generate your own profile with an LLM

Use a ready-made prompt that tells Claude, Codex, Gemini, or another model to inspect the real Safehouse profile templates, ask about your home directory and toolchain, and generate a least-privilege `sandbox-exec` profile for your setup. The guide also tells the LLM to ask about global dotfiles, suggest a durable profile path like ~/.config/sandbox-exec.profile, offer a wrapper that grants the current working directory, and add shell shortcuts for your preferred agents.

Open the copy-paste prompt
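For a flavor of what such a profile looks like, here is a minimal deny-by-default sketch in macOS's `sandbox-exec` profile language. This is purely illustrative — it is not Safehouse's actual template, and the project path is a placeholder for your own setup:

```shell
# Write a hypothetical deny-by-default profile (NOT Safehouse's real template)
cat > /tmp/project.sb <<'EOF'
(version 1)
(deny default)
; allow reading system frameworks and toolchains
(allow file-read* (subpath "/usr") (subpath "/System") (subpath "/Library"))
; grant read/write only inside the project directory (placeholder path)
(allow file-read* file-write* (subpath "/Users/you/myproject"))
; agents need to spawn tools
(allow process-fork process-exec)
EOF

# Run a command inside the sandbox
sandbox-exec -f /tmp/project.sb ls /Users/you/myproject
```

Anything not explicitly allowed — `~/.ssh`, other repos, personal files — is denied before the process ever sees it.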
...
Read the original on agent-safehouse.dev »
A reimagined classic that’s only a little bit janky.

the first-gen macbook from ’06 is one of my favorite laptop designs ever, mostly because for the longest time it was one of the only macbooks u could get in black besides the powerbook g3. plus it was the first macbook i ever personally owned, although this was around 2015, so even by then the performance was pretty crummy.

what inspired me to do this project was reading articles and watching videos about people retrofitting old macs and old pcs with new guts (usually m1 minis). i got really motivated when i read that someone had already done something like this, and after watching f4mi’s video on converting an imac g5 to a fully-kitted monitor. and so, after doing lots of research into motherboards, display panels, and gathering everything i could think i would need for this project, in the wise words of NileRed: i decided to just go for it.

to begin, i ordered a few black polycarb macbooks (model a1181) from ebay. they were all pretty beat up and didn’t have their batteries, nor did they power on. i then found and ordered some oem parts of the outer chassis that i guess had never been made into an actual unit. i then followed an ifixit tutorial to completely take apart the macbooks, till i was down to just the bare chassis. my main idea was that the used macs were gonna be my test runs before i did anything with the oem parts, since those were the cleanest looking parts.

pretty much all of the parts of the macs i discarded, since they didn’t work and, even on their own, aren’t worth a lot if i did sell them.
i did keep only a few metal brackets that screwed into the bottom of the bottom chassis (after dremeling away the “middle” section for the old removable ram sticks), and another that’s screwed in at the top that actually holds the hinges for the top chassis.

here’s the guts i’m putting in the macbook:

and here’s some peripherals and other things i put in the mac

and away we go…

my first concern was if i could use the top case, or the Apple Internal Keyboard/Trackpad. fortunately i found an article that shows how to tap into the case’s circuitry and solder on a usb cable to use it as a keyboard and trackpad pretty much anywhere.

so, this was actually the very first time i soldered anything ever lol. i had watched plenty of soldering videos and stuff so i felt pretty confident, and when i finished soldering on a usb-c and plugged it into my main computer, it actually worked!

as a side note, the solder pads are quite small and fragile. i learned the hard way by accidentally yanking the cable and tearing the solder pads off the case’s pcb. :| so i got a new top case and soldered a new cable again.

to start putting the macbook together, i removed the original brass insert standoffs from the bottom case and replaced them with my own 3d printed standoffs. for my standoffs i just used gorilla glue to hold them in place. not super ideal, but idk much about welding plastics together and stuff like that, so; super glue ftw lol.

i reused the original screws used throughout the mac since they were all the same thickness - M2 size - and i started to slowly piece together where i was going to put everything.

come on in !

here’s an early picture i took where i figured out where to put most things. the mainboard i put slightly offcentered in the middle, mostly cause i wanted to center the fan’s exhaust out the back the best it could.
there is a beam in the middle of the exhaust for a screw to go in at the bottom of the mac, but i figured it’d be ok.

i mounted the speakers in the most obvious spots. they sound, fine. maybe a little better than the original macbook’s speakers.

i also separately got an original dead macbook battery that matched the laptop, and very, very carefully, removed the plastic side that held in the battery cells and just super glued it to the backside of the chassis to fill in the giant hole, since i had no real desire or way of using the removable batteries with the mainboard: the internal connectors for framework’s battery and apple’s battery are completely different. i also 3d printed a custom made “button” to fill in the hole left by the missing locking mechanism for the og battery.

one of the trickiest parts of the whole project was figuring out what to do with the I/O, namely the left side of the chassis, since that’s where all of the original ports were housed directly on the original logic board of the macbook. taking inspiration from the f4mi video from before, i ordered some usb hubs, stripped them out of their enclosures, and 3d printed some custom standoffs that hold them in place while still letting me easily remove them with screws.

i also worked quite hard on modeling an “I/O shield” for the left side since i didn’t want to try to work around the old holes for the original ports, so i dremeled that side out, took a scan of the aftermath, and meticulously remade the side of the chassis to perfectly fit the hubs’ new ports. i then just super glued the shield onto the chassis. again, not super ideal, but it does hold up quite well!

the right side was thankfully much easier than the left, since the old dvd drive’s slot was exactly the height of a usb c port.
so i ordered a usb c hub, designed a shield to fill in the gaps between the ports, and mounted it with some standoffs. better yet, this side has a mounting piece that not only holds down the ports and prevents them from lifting up, it also clips the top case down.

to connect the hubs to the mainboard, i used some small flat usb c cables and stripped them of their rubber coating to expose their fpc cables, did some folds to clean up the slack, and connected them.

to connect the top case and the webcam to the mainboard, i ended up getting this small usb module on the right that i soldered the connections to, which then feeds into this input shim that breaks out the connector into solderable points. the power button for the top case is a similar story: i carefully removed the original membrane button and replaced it with a small button that breaks out into two header pins. those pins then plug into two wires that run to the same input shim to turn the mainboard on.

now this wouldn’t be a macbook without the classic glowing apple logo on the back. and at first, i really didn’t know how to replicate the glowing logo.
the original display panel of the macbook was designed to allow the backlight of the panel itself to double as the backlight of the display, and as the light to shine through the logo, thus allowing the logo to glow. u can actually see the logo through the display if u have a black image over where the logo is.
my best idea was to find a panel thin enough to fit an led of some kind to turn on when the system is on. after searching on alibaba and talking with a seller, i ordered a custom made 7x7x0.28 cm led panel that i mounted (aka super glued) to the back of the top chassis, then ran the wires to the usb module from before, soldered them, and it worked!

speaking of which, to get the new display panel mounted, i very carefully centered the display in the macbook’s bezel, taped it down with some masking tape, carefully turned it over, and taped it all the way around with some strong aluminum tape, which turned out pretty good!

the webcam was kinda tricky to mount at the top of the macbook, since the original webcam module was much much smaller than the one i found and ended up using. i ended up just carefully dremeling away most of the plastic at the top of the chassis to make room for the module, and carefully dremeled a bigger hole to fit the lens through the top chassis’ metal bracket.

after finally getting everything to stay in place without snapping off, or possibly shorting everything, this is what the final look inside my framebook looks like. i added some padding around the battery to keep hot air away from it, and 3d printed that big rectangle to fill in some space. the wifi card managed to snugly fit under the right usb hub, with the antennas running up to the right side of the top chassis.

to connect the top case i added a male usb c port to the usb module to connect to the top case’s soldered on female usb c port. the reason i did it like this is so that if i disconnect the top case from the framebook, i can use any regular usb c m2m cable to use the top case anywhere i want.

overall this was a really fun and interesting project. from start to finish this took me around 3 months.
i learned a lot from this project, from how to solder to how to 3d model. this was a really nice way to learn these skills.

there are some things i would’ve liked to have done better or differently, namely making some custom pcbs in place of my usb hubs so i could have any i/o i want, and finding a better way to mount stuff instead of super glue lol.

thanks for reading all the way through about my amateur attempt at “retrofitting” my macbook! sorry if i glossed over or skipped some stuff. i didn’t really properly document things or even take photos along the way, so most of this article is just me recollecting what i did in semi-chronological order.
if you do have questions about my process, shoot me an email or dm me on bluesky.
i do have some very special thanks for some people that made this whole thing possible:

N3rding for sending me the input shim for the top case and power button

my friend Phillip for teaching me how to use blender to make my lil standoffs and the i/o shield.

and YOU, for reading this lil blog, article, thing, whatever !!! :P
...
Read the original on fb.edoo.gg »
Based on its own charter, OpenAI should surrender the race
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Interestingly, this is still hosted at https://openai.com/charter/, meaning it remains the official company policy.
At the same time, explicitly stated AGI timelines by Sam Altman are the following:
“Within the next ten years, AI systems will exceed expert skill level in most domains”
“By the time the end of this decade rolls around, the world will be in an unbelievably better place”
“I think in 5 years […] people are like, man, the AGI moment came and went”
“What are you excited about in 2025? - AGI”
“AGI will probably get developed during Trump’s term”
“By 2030, if we don’t have extraordinarily capable models that do things we can’t, I’d be very surprised”
“AGI kinda went whooshing by… okay fine, we built AGIs”
“We basically have built AGI” (later: “a spiritual statement, not a literal one”)
We can see that the timeline of AGI (let’s assume this is the timeline for a better-than-even chance) has accelerated and the median prediction since 2025 is around 2 years. Notably, in the latest interviews it’s claimed that AGI has been achieved, and we’re now racing towards ASI.
Finally, here’s a snapshot of the current overall Arena ranking of top 10 models.
Based on these, the flagship GPT-5.4 model is clearly trailing behind the competition. At least Anthropic’s and Google’s models are clearly safety-conscious, and probably value-aligned (whatever that means, but since the models are drop-in replacements for GPT, it should hold).
It can be debated whether arena.ai is a suitable metric for AGI; a strong case can probably be made that it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
Therefore, one can only conclude that we currently meet the stated example triggering condition of “a better-than-even chance of success in the next two years”. As per its charter, OpenAI should stop competing with the likes of Anthropic and Gemini and join forces, however that might look.
While this will never happen, I think it’s illustrative of some great points for pondering:
The impotence of naive idealism in the face of economic incentives.
The discrepancy between marketing points and practical actions.
The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.
...
Read the original on mlumiste.com »
We’re pleased to announce the release of LibreOffice 26.2, the newest version of the free and open source office suite trusted by millions of users around the world. This release makes it easier than ever for users to create, edit and share documents on their own terms. Designed for individuals and organizations alike, it continues to be a trusted alternative to proprietary office software.
LibreOffice 26.2 is focused on improvements that make a difference in daily work and brings better performance, smoother interaction with complex documents and improved compatibility with files created in other office software. Whether you’re writing reports, managing spreadsheets, or preparing presentations, the experience feels more responsive and reliable.
LibreOffice has always been about giving users control. LibreOffice 26.2 continues that tradition by strengthening support for open document standards, and ensuring long-term access to your files, without subscriptions, license restrictions, or data collection. Your documents stay yours — forever.
Behind this release there is a global community of contributors. Developers, designers, translators, QA testers, and volunteers from around the world worked together to deliver hundreds of fixes and refinements. Their efforts result in a suite that not only adds features, but also improves quality, consistency, and stability, release after release.
* Improved performance and responsiveness across the suite, making large documents open, edit, and save more smoothly.
* Enhanced compatibility with documents created in proprietary and open core office software, reducing formatting issues and surprises.
* Hundreds of bug fixes and stability improvements contributed by the global LibreOffice community.
See the Release Notes for the full list of new features.
Florian Effenberger, Executive Director of The Document Foundation, says:
LibreOffice 26.2 shows what happens when software is built around users, not business models, and how open source software can deliver a modern, polished productivity suite without compromising user freedom. This release is about speed, reliability, and giving people control over their documents.
LibreOffice 26.2 is available for Windows, macOS, and Linux, and supports over 120 languages out of the box. It can be used at home, in businesses, schools, and public institutions, with no licensing fees and no vendor lock-in.
You can download LibreOffice 26.2 today from the official LibreOffice website. We invite users to try the new release, share feedback, and join the community helping shape the future of LibreOffice. If they are happy, they can donate to support the independence and the future development of the project.
About LibreOffice and The Document Foundation
LibreOffice is a free, private and open source office suite used by millions of people, businesses, and public institutions worldwide. It is developed by an international community and supported by The Document Foundation, an independent non-profit organization that promotes open standards, digital sovereignty and user choice.
...
Read the original on blog.documentfoundation.org »
The European Commission has accepted our request, and starting from today — Friday March 6 — has added the Open Document Format ODS version of the spreadsheet to be used to provide the feedback. We are grateful to the people working at DG CONNECT, the Commission’s Directorate-General for Communications Networks, Content and Technology, for responding to our request within 24 hours. At this point, the rest of this message is no longer relevant, and the call for action is no longer necessary.
The European Commission has spent years advocating for open standards, vendor neutrality, and digital sovereignty. The European Interoperability Framework explicitly recommends open formats for public sector digital services. The EU’s own Open Source Software Strategy calls for reducing dependency on proprietary technologies, and the Cyber Resilience Act itself is designed to address systemic risks from unaccountable technology dependencies.
On March 3rd, 2026, the European Commission published a request for feedback on the guidance to be provided in relation to the CRA. Feedback must be submitted through the linked spreadsheet in .xlsx format, a proprietary format that makes interoperability extremely difficult due to its ever-changing and undocumented features.
This is not a minor procedural oversight. It is a structural bias built into the process which sends out a clear message: full participation in EU policymaking requires a Microsoft licence.
We ask the European Commission to lead by example by following its own guidance on interoperability and, at the very least, to provide — alongside the proprietary format generated by the proprietary software and services it uses — an Open Document Format (ODF) file, which is an actually interoperable and internationally recognised standard.
While the Commission evaluates plans to upgrade its infrastructure and services to Open Source solutions, with the aim of improving resiliency and reducing risky dependencies, it should implement in its standard procedures the release of documents in ODF format to allow all citizens, organisations and institutions to participate in the democratic processes.
We are writing to provide feedback on a procedural matter that, while perhaps appearing minor at first glance, carries significant implications for the principles underpinning EU digital policy — in particular the commitments to open standards, interoperability, and vendor neutrality that the Commission itself has championed in multiple legislative and strategic contexts.
The stakeholder feedback template for the Cyber Resilience Act Guidance document has been made available exclusively in Microsoft Excel format (.xlsx). This choice is, respectfully, difficult to reconcile with the Commission’s own stated commitments.
The .xlsx format is a proprietary format defined and controlled by Microsoft Corporation, a private entity incorporated in the United States. In fact, although OOXML (ISO/IEC 29500) has been approved as a standard, its implementation has never complied with the specifications of the standard itself, as widely documented in the literature on interoperability. Requiring participants to use this format as the sole vehicle for structured data entry effectively conditions participation in a public consultation on the availability or willingness to use software produced by a single supplier.
This stands in direct contradiction to several principles the EU has advanced:
• The European Interoperability Framework (EIF), which recommends the use of open standards in public sector digital services and the avoidance of lock-in to proprietary technologies.
• The Open Source Software Strategy 2020–2023 and its successor, which promote the use of open source and open standards across EU institutions.
• The spirit, and arguably the letter, of the very Cyber Resilience Act itself, which seeks to reduce systemic risk arising from dependency on unaccountable or opaque technology components.
A consultation process that requires respondents to use a proprietary format produces a structural bias: it disadvantages individuals, organisations, and public administrations that have made the entirely legitimate and EU-endorsed choice to operate on open source software and open formats. A citizen or small organisation using LibreOffice, for instance, may encounter compatibility issues when working with the provided .xlsx template. A government body that has migrated to ODF-based workflows faces an unnecessary obstacle.
The remedy is straightforward. Feedback templates of this kind should be provided in at minimum two formats: one open format (ODF spreadsheet, .ods, being the obvious choice, as it is a true ISO-standardised format with no proprietary ownership) and one widely-used proprietary format for those whose environments require it. Ideally, a plain-text or web-based form would supplement both, removing the spreadsheet dependency entirely for respondents who prefer it.
The Commission’s credibility on digital sovereignty, open standards, and vendor-independent infrastructure is undermined — symbolically but meaningfully — each time its own processes rely exclusively on proprietary formats from non-European technology vendors. The CRA is precisely the kind of legislation where procedural consistency with stated principles matters most.
We respectfully urge the Commission to review its template distribution practices and to adopt a format-neutral approach to stakeholder consultation as standard policy going forward.
...
Read the original on blog.documentfoundation.org »
ShadowBroker is a real-time, full-spectrum geospatial intelligence dashboard that aggregates live data from dozens of open-source intelligence (OSINT) feeds and renders them on a unified dark-ops map interface. It tracks aircraft, ships, satellites, earthquakes, conflict zones, CCTV networks, GPS jamming, and breaking geopolitical events — all updating in real time.
Built with Next.js, MapLibre GL, FastAPI, and Python, it’s designed for analysts, researchers, and enthusiasts who want a single-pane-of-glass view of global activity.
git clone https://github.com/BigBodyCobain/Shadowbroker.git
cd Shadowbroker
docker-compose up -d
* Carrier Strike Group Tracker — All 11 active US Navy aircraft carriers with OSINT-estimated positions
* Clustered Display — Ships cluster at low zoom with count labels, decluster on zoom-in
* Region Dossier — Right-click anywhere on the map for:
The repo includes a docker-compose.yml that builds both images locally.
git clone https://github.com/BigBodyCobain/Shadowbroker.git
cd Shadowbroker
# Add your API keys (optional — see Environment Variables below)
cp backend/.env.example backend/.env
# Build and start
docker-compose up -d --build
Custom ports or LAN access? The frontend auto-detects the backend at
. If you remap the backend to a different port (e.g. “9096:8000”), set NEXT_PUBLIC_API_URL before building:
NEXT_PUBLIC_API_URL=http://192.168.1.50:9096 docker-compose up -d --build
This is a build-time variable (Next.js limitation) — it gets baked into the frontend during npm run build. Changing it requires a rebuild.
If you just want to run the dashboard without dealing with terminal commands:
Go to the Releases tab on the right side of this GitHub page.
Download the latest .zip file from the release.
Extract the folder to your computer.
It will automatically install everything and launch the dashboard!
If you want to modify the code or run from source:
# Clone the repository
git clone https://github.com/your-username/shadowbroker.git
cd shadowbroker/live-risk-dashboard
# Backend setup
cd backend
python -m venv venv
venv\Scripts\activate # Windows
# source venv/bin/activate # macOS/Linux
pip install -r requirements.txt
# Create .env with your API keys
echo "AIS_API_KEY=your_aisstream_key" >> .env
echo "OPENSKY_CLIENT_ID=your_opensky_client_id" >> .env
echo "OPENSKY_CLIENT_SECRET=your_opensky_secret" >> .env
# Frontend setup
cd ../frontend
npm install
# From the frontend directory — starts both frontend & backend concurrently
npm run dev
All layers are independently toggleable from the left panel:
The platform is optimized for handling massive real-time datasets:
* Viewport Culling — Only features within the visible map bounds (+20% buffer) are rendered
* Clustered Rendering — Ships, CCTV, and earthquakes use MapLibre clustering to reduce feature count
# Required
AIS_API_KEY=your_aisstream_key # Maritime vessel tracking (aisstream.io)
# Optional (enhances data quality)
OPENSKY_CLIENT_ID=your_opensky_client_id # OAuth2 — higher rate limits for flight data
OPENSKY_CLIENT_SECRET=your_opensky_secret # OAuth2 — paired with Client ID above
LTA_ACCOUNT_KEY=your_lta_key # Singapore CCTV cameras
This is an educational and research tool built entirely on publicly available, open-source intelligence (OSINT) data. No classified, restricted, or non-public data sources are used. Carrier positions are estimates based on public reporting. The military-themed UI is purely aesthetic.
Do not use this tool for any operational, military, or intelligence purpose.
This project is for educational and personal research purposes. See individual API provider terms of service for data usage restrictions.
Built with ☕ and too many API calls
...
Read the original on github.com »
How I repurposed my old gaming PC to set up a home server for data storage, backups, and self-hosted apps.
For the longest time, I’ve procrastinated on finding a good backup and storage solution for my Fujifilm RAW files. My solution up until recently involved manually copying my photos across two external SSD drives. This was quite a hassle and I hadn’t yet figured out a good off-site backup strategy.
After hearing constant news updates of how hard drive prices have been surging due to AI data center buildouts, I finally decided to purchase some hard drives and set up a homelab to meet my storage and backup needs. I also used this opportunity to explore self-hosting some apps I’ve been eager to check out.
I repurposed my old gaming PC I built back in 2018 for this use case. This machine has the following specs:
I purchased the Western Digital hard drives over the winter holiday break. The other components were already installed on the machine when I originally built it.
On this machine I installed TrueNAS Community Edition on my NVMe drive. It’s a Linux-based operating system that is well-tailored for network-attached storage (NAS), file storage that is accessible to any device on your network.
For instance, TrueNAS allows you to create snapshots of your data. This is great for preventing data loss. If, for example, you accidentally deleted a file, you could recover it from a previous snapshot containing that file. In other words, a file is only truly gone once the system has no snapshots containing it.
I’ve set up my machine to take hourly, daily, and even weekly snapshots. I’ve also configured it to delete old snapshots after a given period of time to save storage space.
Most of my data is mirrored across the two 8 TB hard disks in a RAID 1 setup. This means that if one drive fails, the other drive will still have all of my data intact. The SSD is used to store data from services that I self-host that benefit from having fast read and write speeds.
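Under the hood, TrueNAS manages all of this with ZFS. As a rough sketch of what's happening — with hypothetical pool, dataset, and device names, since TrueNAS drives these operations from its web UI rather than the command line:

```shell
# Create a mirrored (RAID 1) pool from the two 8 TB disks
# (illustrative device names — use your actual disk identifiers)
zpool create tank mirror /dev/sda /dev/sdb

# Take a snapshot of a dataset — near-instant and space-efficient
zfs snapshot tank/photos@hourly-2026-02-27

# Snapshots are browsable via the hidden .zfs directory,
# so recovering a deleted file is just a copy
cp /mnt/tank/photos/.zfs/snapshot/hourly-2026-02-27/somefile.raf /mnt/tank/photos/

# List snapshots to see what's recoverable
zfs list -t snapshot tank/photos
```

Because ZFS snapshots are copy-on-write, keeping hourly, daily, and weekly snapshots costs only the space of the blocks that have changed since each one was taken.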
Not only is TrueNAS good for file storage, you can also host apps on it!
TrueNAS offers a catalog of apps, supported by the community, that you can install on your machine.
Scrutiny is a web dashboard for monitoring the health of your storage drives. Hard drives and SSDs have built-in firmware called S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) that continuously tracks health metrics like temperature, power-on hours, and read errors.
Scrutiny reads this data and presents it in a dashboard showing historical trends, making it easy to spot warning signs that a drive may fail soon.
Backrest is a web frontend for restic, a command-line tool used for creating file backups. I’ve set this up to save daily backups of my data to an object storage bucket on Backblaze B2.
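The underlying restic workflow against B2 looks roughly like this — bucket name and paths are placeholders, and Backrest drives these same operations from its web UI:

```shell
# One-time: initialize an encrypted restic repository in a B2 bucket
export B2_ACCOUNT_ID=your_key_id
export B2_ACCOUNT_KEY=your_application_key
restic -r b2:my-backup-bucket:homelab init

# Daily: back up the photo dataset (restic deduplicates, so
# unchanged files cost nothing after the first run)
restic -r b2:my-backup-bucket:homelab backup /mnt/tank/photos

# Prune old backups according to a retention policy
restic -r b2:my-backup-bucket:homelab forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```

Everything is encrypted client-side before it leaves the house, so the off-site copy is useless to anyone without the repository password.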
Immich is one of the most popular open-source self-hosted apps for managing photos and videos. I love that it also offers iOS and Android apps that allow you to back up photos and videos from your mobile devices. This is great if you want to rely less on services like Google Photos or iCloud. I’m currently using this to back up photos and videos from my phone.
Mealie is a recipe management tool that has made my meal prepping experience so much better! I’ve found it great for saving recipes I find on sites like NYT Cooking.
When importing recipes, you can provide the URL of the recipe and Mealie will scrape the ingredients and instructions from the page and save it in your recipe library. This makes it easier to keep track of recipes you find online and want to try out later.
Ollama is a backend for running various AI models. I installed it to try running large language models like qwen3.5:4b and gemma3:4b out of curiosity. I’ve also recently been exploring the world of vector embeddings such as qwen3-embedding:4b. All of these models are small enough to fit in the 8GB of VRAM my GPU provides. I like being able to offload the work of running models on my homelab instead of my laptop.
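Getting a model running is a couple of commands, and other machines on the network can then hit Ollama's HTTP API on its default port 11434 (the `homelab` hostname here is an example):

```shell
# Pull and chat with a model small enough for 8 GB of VRAM
ollama pull gemma3:4b
ollama run gemma3:4b "Explain ZFS snapshots in one sentence."

# Or call the HTTP API from any device on the network
curl http://homelab:11434/api/generate \
  -d '{"model": "gemma3:4b", "prompt": "Hello!", "stream": false}'
```

This is what makes the offloading convenient: the laptop just makes API calls while the homelab's GPU does the actual inference.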
When I’m not at home, I use Tailscale, a plug-and-play VPN service, to access my data and self-hosted apps remotely from any device. Tailscale builds on top of another tool called WireGuard to provide a secure tunnel into my home network.
The advantage here is that my homelab PC doesn’t need to be exposed to the public internet for this to work. Any device I want to use to access my homelab remotely needs to install the Tailscale app and be authenticated to my Tailscale network.
Right now, accessing my apps requires typing in the IP address of my machine (or Tailscale address) together with the app’s port number. Because all of my services share the same IP address, my password manager has trouble distinguishing which login to use for each one.
In the future I’ll look into figuring out how to assign custom domain names to all of my services.
...
Read the original on bryananthonio.com »
Literate programming is the idea that code should be intermingled with prose such that an uninformed reader could read a code base as a narrative, and come away with an understanding of how it works and what it does.
Although I have long been intrigued by this idea, and have found uses for it in a couple of different cases, I have found that in practice literate programming turns into a chore of maintaining two parallel narratives: the code itself, and the prose. This has obviously limited its adoption.
In practice, literate programming has most commonly taken the form of Jupyter notebooks in the data science community, where explanations live alongside calculations and their results in a web browser.
Frequent readers of this blog will be aware that Emacs Org Mode supports polyglot literate programming through its org-babel package, allowing execution of arbitrary languages with results captured back into the document, but this has remained a niche pattern for nerds like me.
Even for someone as enthusiastic about this pattern as I am, it becomes cumbersome to use Org as the source of truth for larger software projects, as the source code essentially becomes a compiled output, and after every edit in the Org file, the code must be re-extracted and placed into its destination (“tangled”, in Org Mode parlance). Obviously this can be automated, but it’s easy to get into annoying situations where you or your agent has edited the real source and it gets overwritten on the next tangle.
That said, I have had enough success with using literate programming for bookkeeping personal configuration that I have not been able to fully give up on the idea, even before the advent of LLMs.
For example: before coding agents, I had been adapting a pattern of using Org Mode for manual testing and note-taking: instead of working on the command line, I would write the commands into my editor, edit each one in place until the step was correct, and execute them right there, so that when I was done I would have a document recording exactly the steps that were taken, with no extra work or separate note-taking. Combining the act of creating the note with running the test gives you the notes for free once the test is complete.
This is even more exciting now that we have coding agents. Claude and Kimi and friends all have a great grasp of Org Mode syntax; it’s a forgiving markup language, and they are quite good with it. All the documentation is available online and was probably in the training data, and while a big downside of Org Mode is just how much syntax there is, that’s no problem at all for a language model.
Now when I want to test a feature, I ask the clanker to write me a runbook in Org. Then I can review it — the prose explains the model’s reflection of the intent for each step, and the code blocks are interactively executable once I am done reviewing, either one at a time or the whole file like a script. The results will be stored in the document, under the code, like a Jupyter notebook.
I can edit the prose and ask the model to update the code, or edit the code and have the model reflect the meaning upon the text. Or ask the agent to change both simultaneously. The problem of maintaining the parallel systems disappears.
The agent is told to handle tangling, and the problem of extraction goes away. The agent can be instructed with an AGENTS.md file to treat the Org Mode file as the source of truth, to always explain in prose what is going on, and to tangle before execution. The agent is very good at all of these things, and it never gets tired of re-explaining something in prose after a tweak to the code.
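To make this concrete, the three instructions mentioned above could be captured in a few lines of an AGENTS.md file. This is a hypothetical sketch, not the post's actual file; the filename `notes.org` is illustrative:

```markdown
# AGENTS.md

- The Org file (`notes.org`) is the single source of truth. Never edit the
  tangled source files directly.
- Every code block must be preceded by prose explaining its intent. After
  changing a block, update the surrounding prose to match, and vice versa.
- Always tangle (e.g. with `org-babel-tangle`) before executing anything,
  so the extracted code matches the document.
```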
The fundamental extra labor of literate programming, which I believe is why it is not widely practiced, is eliminated by the agent, and the workflow draws on the capabilities large language models are best at: translation and summarization.
As a benefit, the code base can now be exported into many formats for comfortable reading. This is especially important if the primary role of engineers is shifting from writing to reading.
I don’t have data to support this, but I also suspect that literate programming will improve the quality of generated code, because the prose explaining the intent of each code block will appear in context alongside the code itself.
I have not personally had the opportunity to try this pattern yet on a larger, more serious codebase. So far, I have only been using this workflow for testing and for documenting manual processes, but I am thrilled by its application there.
I also recognize that the Org format is a limiting factor, due to its tight integration with Emacs. However, I have long believed that Org should escape Emacs. I would promote something like Markdown instead, but Markdown lacks the ability to include metadata. As usual in my posts about Emacs, it’s not Emacs’s specific implementation of the idea that I’m arguing for, even though in this case Org’s implementation of literate programming is the one that excites me.
It is the idea itself that is exciting to me, not the tool.
With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?
...
Read the original on silly.business »
Short answer: because math. Longer answer: because prime numbers don’t divide into each other evenly.
To understand what follows, you need to know some facts about the physics of vibrating strings:
* When you pluck a guitar string, it vibrates to and fro. You can tell how fast the string is vibrating by listening to the pitch it produces.
* Shorter and higher-tension strings vibrate faster and make higher pitches. Longer and lower-tension strings vibrate slower and make lower pitches.
* The scientific term for the rate of the string’s vibration is its frequency. You measure frequency in hertz (Hz), a unit that just means “vibrations per second.” The standard tuning pitch, 440 Hz, is the pitch you hear when an object (like a tuning fork or guitar string) vibrates to and fro 440 times per second.
* Strings can vibrate in many different ways at once. In addition to the entire length of the string bending back and forth, the string can also vibrate in halves, in thirds, in quarters, and so on. These vibrations of string subsections are called harmonics (or overtones, or partials; they all mean the same thing).
If you watch slow-motion video of a guitar string vibrating, you’ll see a complex, evolving blend of squiggles. These squiggles are the mathematical sum of all of the string’s different harmonics. The weird and interesting thing about harmonics is that each one produces a different pitch. So when you play a note, you’re actually hearing many different pitches at once.
It’s not difficult to isolate the harmonics of a vibrating string and hear their individual pitches. Harmonics are very useful for tuning your guitar — here’s a handy guide for doing so. They are also the basis of the whole Western tuning system generally.
As a string vibrates, its longer subsections produce lower and louder harmonics, while its shorter subsections produce higher and quieter harmonics. Click the image below to hear the first six harmonics of a string:
Remember that in a real-world string, you are hearing all these harmonics blended together. However, you can isolate the harmonics of a guitar string by lightly touching it in certain places to deaden some of the vibrations.
* If you touch the vibrating string at its halfway point, that deadens the vibration along the string’s entire length, enabling you to hear it vibrating in halves.
* If you touch the string a third of the way along its length, that deadens the vibration both of the entire string and the halves of the string, so you can now hear it vibrating in thirds.
* If you touch the string a quarter of the way along its length, that deadens the vibration of the whole string, the halves, and the thirds, so you can now hear it vibrating in quarters.
Imagine that you have a guitar string tuned to play a note called “middle C,” which has a frequency of 1 Hz. (In reality, middle C has a frequency of 261.626 Hz, so if you want to think in terms of actual frequencies, just multiply all the numbers in the following paragraphs by 261.626.)
The first harmonic is the string vibrating along its entire length, otherwise known as the fundamental frequency. When we say that your C string is vibrating at 1 Hz, that really means that its fundamental has a frequency of 1 Hz. The other harmonics all have other frequencies, and we’ll get to those, but the fundamental is usually the loudest harmonic, and it’s usually the only one you’re aware of hearing.
The second harmonic is the one you get from the string vibrating in halves. Each half of the string vibrates at twice the frequency of the whole string. The 2:1 relationship between the pitches of the first and second harmonics is called an octave. (I know that the word suggests the number eight, not the number two. Don’t worry about it.) The pitch that’s an octave above middle C has a frequency of 2 Hz, and it is also called C. Both of these notes have the same letter name because in Western convention, notes an octave apart from each other are considered to be “the same note”. The important concept here is that you can move up an octave from any pitch by doubling its frequency. You can also move down an octave from any pitch by halving its frequency.
The third harmonic is the one you get from the string vibrating in thirds. Its frequency is three times the fundamental frequency. Since your C string’s fundamental is 1 Hz, the third harmonic has a frequency of 3 Hz, and it produces a note called G. The interval between C and G is called a perfect fifth, for reasons having nothing to do with harmonics. I know it’s confusing.
The fourth harmonic is the one you get from the string vibrating in quarters, at 4 Hz. This note is an octave higher than the second harmonic, and so is also called C. (The eighth harmonic will also play C, as will the sixteenth, and the thirty-second, and all the powers of two up to infinity.)
The fifth harmonic is the one you get from the string vibrating in fifths. Its frequency is 5 Hz, and it produces a note called E. The interval between C and E is called a major third, which is another name that has nothing to do with harmonics.
There are many more harmonics (infinitely many more, in theory) but these first five are the most audible ones.
The ancient Greeks figured out that if you have a set of strings, it sounds really good if you tune them following the pitch ratios from the natural harmonic series. In such tuning systems, you pick a starting frequency, and then multiply or divide it by ratios of whole numbers to generate more frequencies, the same way you figure out the frequencies of a single string’s harmonics. The best-sounding note combinations (to Western people) are the ones derived from the first few harmonics. In other words, you get the nicest harmony (for Western people) when you multiply and divide your frequencies by ratios of the smallest prime numbers: 2, 3, and 5.
So, let’s do it. Let’s make a tuning system based on the harmonics of your C string. First, we should find the C, G and E notes whose frequencies are as close to each other as possible.
* We’ve already got C at 1 Hz.
* We can bring our G at 3 Hz down an octave by dividing its frequency in half. This gives us a G at 3/2 Hz.
* We can also bring our E at 5 Hz down two octaves by dividing its frequency in half twice. This gives us an E at 5/4 Hz.
When you play 1 Hz, 5/4 Hz and 3/2 Hz at the same time, you get a lovely sound called a C major triad.
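The octave-reduction steps above are easy to check with exact fractions. A minimal sketch, with all frequencies in units of middle C as in the text:

```python
from fractions import Fraction

def reduce_to_octave(freq):
    """Halve the frequency until it lands within one octave of the fundamental."""
    while freq >= 2:
        freq /= 2
    return freq

c = Fraction(1)                     # first harmonic: the fundamental
g = reduce_to_octave(Fraction(3))   # third harmonic, brought down an octave
e = reduce_to_octave(Fraction(5))   # fifth harmonic, brought down two octaves

print(c, e, g)  # 1 5/4 3/2 -- the C major triad
```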
So far, so good. Let’s find some more notes!
We can extend our tuning system by thinking of G as our base note, and looking at its harmonics. When we do, we get two new notes. The third harmonic of G is D at 9 Hz. (Thanks to octave equivalency, we can also make Ds at 9/2 Hz, and 9/4 Hz, and 9/8 Hz, and 18 Hz, and 36 Hz, and so on.) The fifth harmonic of G is B at 15 Hz. (There are also Bs at 15/2 Hz, and 30 Hz, and so on.)
The notes C and G feel closely related to each other because of their shared harmonic relationship. The chords you get from their respective overtone series also feel related. If you alternate between C major and G major chords, it just about always sounds good.
Now let’s extend our tuning system further by treating D as our base note. The harmonics of D give us two more new notes: the third harmonic is A at 27 Hz (and 27/2 Hz and 27/4 Hz and 27/8 Hz), and the fifth harmonic is F-sharp at 45 Hz (and 45/2 Hz and 45/4 Hz and 45/8 Hz).
G major chords and D major chords have the same relationship as C major and G major chords, and they sound equally good when you alternate them. Also, C major, G major and D major chords all sound good as a group, in any order and any combination. Western people just really like the sound of shared harmonics. Last thing: notice that you can combine the harmonics of C, G and D to form a G major scale.
Now let’s make some more notes by treating A as our base and looking at its harmonics. The third harmonic of A is E at 81 Hz (and 81/2 Hz and 81/4 Hz etc).
But wait. We already had an E, at 5 Hz. If we put these two E’s in the same octave, then one of them is at 80/64 Hz, and the other is at 81/64 Hz. That may not seem like much of a difference, but even untrained listeners will be able to hear that they are out of tune with each other. Furthermore, if we use the E derived from C, then it will be out of tune with A. However, if we use the E derived from A, then it will be out of tune with C. This is going to be a problem.
Let’s forget about that conflict for a second. Instead, we’ll try a different method of expanding our tuning system, by going in the opposite direction from C. Let’s think about a note that contains C in its harmonic series. That would be F at 1/3 Hz. The third harmonic of F is C at 1 Hz, as expected. The fifth harmonic of F is A at 5/3 Hz.
Uh oh. This new A conflicts with the one we already had at 27 Hz. That is not good. But let’s bracket that and keep expanding.
We can push further left by finding the note whose overtone series contains F. That would be B-flat at 1/9 Hz. Its third harmonic is F at 1/3 Hz, and its fifth harmonic is D at 5/9 Hz. And now we have a new problem: this D clashes with our existing D at 9 Hz.
Can you see the pattern here? Anytime you want to use intervals based on third harmonics, you’re multiplying and dividing by 3, but anytime you want to use intervals based on fifth harmonics, you’re multiplying and dividing by 5. (Notice that the conflicting notes always conflict by the same amount, too, a ratio of 81/80.) Starting from C, it’s possible to produce any note if you multiply or divide your frequencies by 3 enough times, but those notes won’t be in tune with the notes you’d get multiplying or dividing your frequencies by 5, because 3 and 5 don’t mutually divide evenly. This is not just an abstract mathematical issue. It’s the reason that it’s impossible to have a guitar be in tune with itself.
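The 81/80 discrepancy can be reproduced directly. This sketch (frequencies again in units of C) compares the E reached by stacking four pure fifths, C → G → D → A → E, against the E taken straight from the fifth harmonic:

```python
from fractions import Fraction

def reduce_to_octave(freq):
    """Multiply or divide by 2 until the frequency lies in [1, 2)."""
    while freq >= 2:
        freq /= 2
    while freq < 1:
        freq *= 2
    return freq

# E reached via third harmonics alone: four factors of 3
e_from_fifths = reduce_to_octave(Fraction(3) ** 4)   # 81/64
# E taken directly from the fifth harmonic of C
e_from_harmonic = reduce_to_octave(Fraction(5))      # 5/4 = 80/64

print(e_from_fifths / e_from_harmonic)  # 81/80 -- the syntonic comma
```

No amount of octave shifting makes these agree: powers of 3 and powers of 5 never line up.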
Imagine that the guitar’s low E string has a frequency of 1 Hz. (It’s really 82.4069 Hz; feel free to multiply everything in this next section by that number if you want actual frequencies.) Ideally, you want your high E string to be tuned two octaves above the low one, at 4 Hz. Let’s see if you can get there by tuning the strings pairwise.
* The interval between E and A is a fifth, but it’s upside down, because we’re going down a fifth from E. In music theory terms, an upside down fifth is called a fourth. You go up a fourth by multiplying your frequency by 4/3 (it’s 3/2 upside down, doubled to bring it up an octave.) So your A string is now tuned to 4/3 Hz.
* The D string should be another fourth higher, so you can multiply by 4/3 again, giving you 16/9 Hz.
* The G string should be yet another fourth higher, so you multiply by 4/3 to get 64/27 Hz.
* The B string is a major third higher than G, which means multiplying by 5/4, and that puts you at 80/27 Hz.
* Finally, to get to high E, that’s another fourth, so you multiply by 4/3 again, giving you… uh… 320/81 Hz.
This is not good. We wanted the high E to be at 4 Hz, which is the same as 324/81 Hz. We’re 4/81 Hz flat! That difference is big enough to make your guitar tuning sound like warm garbage.
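The pairwise tuning walk above can be replayed with exact fractions (low E = 1, as in the text):

```python
from fractions import Fraction

FOURTH = Fraction(4, 3)       # up a perfect fourth
MAJOR_THIRD = Fraction(5, 4)  # up a major third

low_e = Fraction(1)
a = low_e * FOURTH            # 4/3
d = a * FOURTH                # 16/9
g = d * FOURTH                # 64/27
b = g * MAJOR_THIRD           # 80/27
high_e = b * FOURTH           # 320/81, not the 4 (= 324/81) we wanted

print(high_e)                 # 320/81
print(Fraction(4) - high_e)   # 4/81 -- how flat the high E lands
```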
Let’s try a different strategy. I said you should tune the B string a major third above G. However, you could just as easily retune the B string so it’s a fifth plus an octave above the low E string. You do this by multiplying 1 Hz by 3/2, and then doubling it, which puts your B at 3 Hz. Now the B string sounds perfectly in tune with the low E string at 1 Hz, and with the high E string at 4 Hz. Unfortunately, the B string is now out of tune with the G string at 64/27 Hz.
So maybe you should just retune the G string a major third below your new B, at 12/5 Hz. That makes the G and B strings sound great together. Unfortunately, now the G string is out of tune with the D string at 16/9 Hz.
You could retune the D string to be a fourth below G… but now the D string will be out of tune with the A string. If you retune the A string based on your new D, then it will be out of tune against the low E string. And if you retune the low E string based on your new A, then it will be out of tune with the high E string.
The bottom line: there is no way to tune the guitar so that every string is in tune with every other string.
The mathematical awkwardness of harmonics-based tuning systems has caused Western musicians a lot of pain over the past thousand years. Depending on your starting pitch, some intervals can be perfectly in tune, but others can’t be. And the more harmonically complex you want your music to be, the worse the tuning issues become.
In the 16th century, Chinese and Dutch musicians independently came up with an alternative system to harmonics-based tuning, called 12-tone equal temperament, or 12-TET. It’s the system that the entire Western world uses today. The idea behind 12-TET is to have everything be pretty much in tune, which you accomplish by having everything be a little bit out of tune. Is this a worthwhile compromise? Let’s do the math and find out.
In 12-TET, you divide up the octave into twelve equally-sized semitones (the interval between two adjacent piano keys or guitar frets). To go up a semitone from any note, you multiply its frequency by the 12th root of 2 (about 1.05946). To go down a semitone from any note, you divide its frequency by the 12th root of 2. If you go up by an octave (twelve semitones), you’re multiplying your frequency by the 12th root of 2 twelve times, which works out to 2. That’s a perfect octave, hooray! Unfortunately, you can’t exactly create the other harmonics-based intervals by adding up 12-TET semitones; you can only approximate them.
Remember that the pure fifth you get from harmonics is a frequency ratio of 3/2. In 12-TET, however, you make a fifth by adding up seven semitones. This means that you multiply your frequency by the 12th root of two seven times, which comes to about 1.498. That’s close to 3/2, but it’s not exact. As a result, fifths in 12-TET sound a little flat compared to what your ear is expecting from natural harmonics.
Major thirds are worse in 12-TET. Recall that the major third you get from the overtone series is a frequency ratio of 5/4. In 12-TET, you make a major third by adding four semitones, which means that you multiply your frequency by the 12th root of 2 four times. That comes to 1.25992, which is noticeably higher than 5/4. Thirds in 12-TET are quite sharp compared to what your ears are expecting from natural harmonics.
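Both approximations are quick to verify numerically:

```python
SEMITONE = 2 ** (1 / 12)             # the 12-TET step, about 1.05946

octave = SEMITONE ** 12              # exactly 2, up to floating point
fifth = SEMITONE ** 7                # ~1.4983, slightly flat of 3/2
major_third = SEMITONE ** 4          # ~1.2599, noticeably sharp of 5/4

print(round(fifth, 4), 3 / 2)        # 1.4983 vs 1.5
print(round(major_third, 4), 5 / 4)  # 1.2599 vs 1.25
```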
If thirds and fifths are so out of tune in 12-TET, why do we use it? The advantage is that all the thirds and fifths in all the keys are out of tune by the same amount. None of them sound perfect, but none of them sound terrible, either. You don’t have to worry about whether your notes are derived from the third harmonic of some note or the fifth harmonic of some other note; they all just work together, kind of. If you use a digital guitar tuner, you are tuning your strings to the 12-TET versions of E, A, D, G and B. None of them will be perfectly in tune with each other, but they will all be wrong by an acceptable amount. Also, songs in the key of E won’t sound any better or worse than songs in the key of F or E-flat.
Not everyone in history thought that 12-TET was an acceptable compromise. Johann Sebastian Bach thought we should use other tuning systems that made better-sounding thirds and fifths in some keys in exchange for worse-sounding thirds and fifths in others. In Bach’s preferred tuning, each key had its own distinctive blend of smoothness and harshness. However, Bach did not get his way. We as a civilization have collectively decided that we want all our keys to be interchangeable. There are good reasons to want this! In 12-TET, all intervals and chords are built from standardized, Lego-like parts. You don’t have to keep track of a complicated web of different-sized intervals in every key. If you move a song from C to C-sharp or D or anywhere else, you can be confident that it will still sound “the same.”
Some musicians refuse to accept 12-TET, insisting instead that we should continue to use pure intervals derived from harmonics the way God and Pythagoras intended. Harmonics-based tuning systems are collectively known as just intonation systems. This is a poetically apt term, because it implies fairness. By contrast, the implicit message of 12-TET is that life isn’t fair. Just intonation systems give you some lovely pure intervals, but you can’t change keys unless you retune all your instruments. In other world cultures, this is not necessarily a problem. Hindustani classical music uses just intonation over an omnipresent drone, so everything is always in the same “key.”
Meanwhile, a few Western oddballs and nerds have explored just intonation systems that use bigger prime numbers than 2, 3 and 5 to generate finer pure intervals. Harry Partch used the primes up to eleven to make a tuning system that divides up the octave into 43 pure parts rather than 12 impure ones. You can try the Partch 43-tone scale using the Wilsonic app or Audiokit Synth One. It’s extremely strange! But, I guess, it’s strange in a pure way. I have made some music of my own with exotic just intonation tunings.
Just intonation may also play a role in the blues. There is a theory that the blues originates from the natural overtone series of I and IV. If this is true, then the characteristic chords and scales of the blues are really 12-TET approximations of the original just intonation blues scale. It’s conventional to say that blues musicians and singers bend notes to make them go out of tune, but it may be that they are actually bending the 12-TET pitches to get them in tune instead.
Anyway, outside of the blues and the avant-garde, most Western musicians just live with everything being a little out of tune. If you’re a guitarist, you know that no matter how you tune your guitar, it won’t stay in tune for long anyway, so how much does any of this even matter? There’s a joke among guitarists: we spend half our lives tuning, and the other half wishing we were in tune. There are lots of reasons why tuning is hard: you might be hampered by having a poorly made guitar, or by having a guitar that’s not set up correctly, or by using old worn-out strings, or by changes in temperature or humidity, or just by a lack of patience or time. At least you can be secure in the knowledge that some of your tuning struggles are due to the basic unfairness of the universe, and not just the limitations of your ears or your equipment.
...
Read the original on www.ethanhein.com »