10 interesting stories served every morning and every evening.
🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament’s NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!
...
Read the original on fightchatcontrol.eu »
Tesla runs a bug bounty program that invites researchers to find security vulnerabilities in their vehicles. To participate, I needed the actual hardware, so I started looking for Tesla Model 3 parts on eBay. My goal was to get a Tesla car computer and touchscreen running on my desk, booting the car’s operating system.
The car computer consists of two parts - the MCU (Media Control Unit) and the autopilot computer (AP) layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500 page book and is covered in a water-cooled metal casing:
By searching for “Tesla Model 3 MCU” on eBay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are “salvage” companies that buy crashed cars, take them apart, and list the parts for sale individually. Sometimes they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.
To boot the car up and interact with it, I needed a few more things:
* A power supply
* The Model 3 touchscreen
* The display cable to connect them together
For the power supply, I went with an adjustable 0-30V model from Amazon. There was a 5A and a 10A version available; I figured it was safer to have some headroom and went with the 10A version. That turned out to be a very good decision: the full setup could consume up to 8A at peak. The Model 3 screens were surprisingly expensive on eBay, which I assume is because it is a popular part to replace. I found a pretty good deal for 175 USD.
The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were sold with their cables cut a few centimeters past the connector (interestingly, most sellers did this instead of just unplugging the cables).
This is when I discovered that Tesla publishes the wiring “Electrical Reference” for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:
Turns out the display uses a 6-pin cable (2 for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No eBay listings, nothing on AliExpress, essentially no search results at all.
After digging around a bit, I found that this cable is very similar to a more widely used automotive cable called “LVDS”, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match to my Rosenberger, so I placed an order:
The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to using the Tesla schematics & the few pictures online of people doing the same desk-MCU setup. Since the computer included the shortly cut cables, I was able to strip the relevant wires and attach the power supply’s clips to the right ones:
I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm’s previous research on GitHub I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.
This network does not have DHCP, so you have to set your IP address manually. The IP you select has to be in 192.168.90.X/24, and should be higher than 192.168.90.105 to avoid conflicting with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car, which shows the hosts that are normally associated with specific IPs:
@lewurm’s blog mentioned that SSH on port :22 and a webserver on :8080 was open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!
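On a Linux laptop, that manual network setup is a one-liner. This is only a sketch: the eth0 interface name and the .110 host address are placeholder choices of mine, not from the article.

```shell
# Static address on the car's 192.168.90.0/24 network (it has no DHCP).
# Anything free above 192.168.90.105 should avoid the car's own hosts.
sudo ip addr add 192.168.90.110/24 dev eth0
```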
I had already found 2 services to explore on the MCU:
* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances
This SSH server requires specially signed SSH keys which only Tesla is supposed to be able to generate.
Interestingly, Tesla offers a “Root access program” on their bug bounty program. Researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. — A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.
* A REST-like API on :8080 which returned a history of “tasks”
This service is called “ODIN” (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla’s diagnostics tool “Toolbox”.
Around this time, I also removed the metal shielding to see exactly what the boards look like inside. You can see the two different boards which were stacked on top of each other:
Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:
It was extremely hard to find the name/model of the chip that got burned, especially since part of the text printed on it had become unreadable due to the damage. To be able to continue with the project, I had to order a whole other car computer.
In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.
So I really did need that Rosenberger cable, there was no getting around it.
After having no luck finding it online and even visiting a Tesla service center in London (an odd encounter, to say the least), I had to accept what I had been trying to avoid: buying an entire Dashboard Wiring Harness.
Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. Looking at the cable which connects the MCU to the screen, the part number 1067960-XX-E shows up. Searching for it on eBay brings up this monstrosity:
Turns out that actual cars don’t have individual cables. Instead, they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is why I could not find the individual cable earlier: they simply don’t manufacture it. Unfortunately, I had no choice but to buy the entire loom for 80 USD.
Despite how bulky it was, the loom worked perfectly. The car booted, the touch screen started up, and I had a working car computer on my desk, running the car’s operating system!
Having the system running, I can now start playing with the user interface, interacting with the exposed network interfaces, exploring the CAN buses, and perhaps even attempting to extract the firmware.
...
Read the original on bugs.xdavidhu.me »
Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school, just around the time when we got our first smartphone and all photos since then were backed up online.
Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photographs, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures taken around the time my grandfather was in his mid-20s.
As I got done with grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like it was taken during my grandparents’ wedding but I didn’t know the chronological order they were taken because EXIF metadata didn’t exist around that time.
So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down, recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.
After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.
I split up the rest of the content into sections and filled them with everything I could verify: dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. Every photo placement came with a follow-up: writing a descriptive caption.
Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link things to real pages that provided wider context to things like venues, rituals, and the political climate around that time, like for instance a legal amendment that was relevant to the wedding ceremony.
In two evenings, I was able to document a full backstory for the photos into a neat article. These two evenings also made me realize just how powerful encyclopedia software is to record and preserve media and knowledge that would’ve otherwise been lost over time.
This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.
I got help from r/genealogy about how to approach recording oral history and I was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.
Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.
After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.
This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.
I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it would help with browsing through multiple photos at once.
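I don’t know the exact command the model settled on, but the contact-sheet step can be sketched with ImageMagick’s montage. The sample frames below are synthetic stand-ins, since real photo names will differ:

```shell
#!/bin/sh
# Sketch: tile many photos into one grid image so a model can scan
# dozens per request. Assumes ImageMagick is installed; the generated
# gray frames stand in for real photos.
mkdir -p photos
convert -size 100x100 xc:gray photos/IMG_0001.jpg
convert -size 100x100 xc:gray photos/IMG_0002.jpg
montage photos/*.jpg -tile 6x -geometry 200x200+2+2 contact_sheet.jpg
```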
A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones that I had forgotten by now. It picked up details on the modes of transportation we used to get between places just from what it could see.
After I had clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. Now that I had a detailed outline ready, the page still only had content based on the available data, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them into places where the narrative called for them.
The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro that included geographical coordinates as part of the EXIF metadata.
On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.
Here are some of the things it did across multiple runs:
* It cross-referenced my bank transactions with location data to ascertain the restaurants I went to.
* Some of the photos and videos showed me at a soccer match, but it wasn’t clear which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with the teams and the name of the tournament.
* It looked up my Uber trips to figure out travel times and exact pickup and drop-off locations.
* It used my Shazam tracks to write about the kinds of songs playing at a place, like Cuban songs at a Cuban restaurant.
* In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
The MediaWiki architecture worked well with the edits, since for every new data source it would make amendments like a real Wikipedia contributor would. I leaned heavily on features that already existed. Talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this, it was all just there.
What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.
So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.
The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.
This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life compiled from the data I already had lying around.
I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.
A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.
Oh and it’s genuinely fun! Putting together the encyclopedia felt like the early days of Facebook timeline, browsing through finished pages, following links between people and events, and stumbling on a detail I forgot.
But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known, her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data, it made me pay closer attention to the people in my life.
Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours, it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!
...
Read the original on whoami.wiki »
You did it! 🥳
European Parliament just decided that Chat Control 1.0 must stop.
This means on April 6, 2026, Gmail, LinkedIn, Microsoft and other Big Techs must stop scanning your private messages in the EU. #PrivacyWins 💪
[contains quote post or other embedded content]
...
Read the original on bsky.app »
ARC-AGI-3 is an interactive reasoning benchmark which challenges AI agents to explore novel environments, acquire goals on the fly, build adaptable world models, and learn continuously.
A 100% score means AI agents can beat every game as efficiently as humans.
Instead of solving static puzzles, agents must learn from experience inside each environment—perceiving what matters, selecting actions, and adapting their strategy without relying on natural-language instructions.
...
Read the original on arcprize.org »
Why do I file bug reports with Apple Feedback Assistant? I plead insanity. Or perhaps addiction. I seesaw between phases of abstinence and falling off the wagon. I’ve even tried organizing a public boycott of Feedback Assistant, with a list of demands to improve the experience for users, but the boycott never caught on with other developers. Regardless, an incentive still exists to file bug reports, because Apple actually fixes some of my bugs. My main complaint about the bug reporting process is not the unfixed bugs but rather the disrespect for bug reports and the people who file them. Apple intentionally wastes our time with no regrets, as if our time had no value, as if we had some kind of duty to serve Apple.
In March 2023, I filed FB12088655 “Privacy: Network filter extension TCP connection and IP address leak.” I mentioned this bug report at the time in a blog post, which included the same steps to reproduce and example Xcode project that I provided to Apple. In the three years since I filed the bug report, I received no response whatsoever from Apple… until a couple of weeks ago, when Apple asked me to “verify” the issue with macOS 26.4 beta 4 and update my bug report.
I install the WWDC betas every year in June but don’t run OS betas after September when the major OS updates are released. I don’t have enough time or indeed enough Apple devices to be an unpaid tester year round. Thus, verifying issues in betas is a hassle for me. I’ve been burned by such requests in the past, asked by Apple to verify issues in betas that were not fixed, so I asked Apple directly whether beta 4 fixed the bug: they should already know, since they have my steps to reproduce! However, their response was evasive, never directly answering my question. Moreover, they threatened to close my bug report and assume the bug is fixed if I didn’t verify within two weeks! Again, this is after Apple silently sat on my bug report for three years.
Although I didn’t install the beta myself, I spoke to the developers of Little Snitch, who do run the macOS betas, and they kindly informed me that in their testing, they could still reproduce my issue with macOS 26.4 beta 4. It was no surprise, then, that when I updated to macOS 26.4, released to the public yesterday by Apple, I could still reproduce the bug with my instructions and example project. It appears that Apple knowingly sent me on a wild goose chase, demanding that I “verify” a bug they did nothing to fix, perhaps praying that the bug had magically disappeared on its own, with no effort from Apple.
By the way, a few weeks ago I published a blog post about another bug, FB22057274 “Pinned tabs: slow-loading target="_blank" links appear in the wrong tab,” which is also 100% reproducible but nonetheless was marked by Apple with the resolution “Investigation complete - Unable to diagnose with current information.” On March 9, I updated the bug report, asking what additional information Apple needs from me (they never asked for more information), but I’ve yet to receive a response.
I can only assume that some bozos in Apple leadership incentivize underlings to close bug reports, no matter whether the bugs are fixed. Out of sight, out of mind. Apple’s internal metrics probably tell them that they have no software quality problem, because the number of open bug reports is kept lower artificially.
Ironically, the iPadOS 26.4 betas introduced a Safari crashing bug that I reported a month ago, but Apple failed to fix the bug before the public release. What’s the purpose of betas? As far as I can tell, the purpose is just to annoy people who file bugs, without doing anything useful.
Shortly after this blog post hit the front page of Hacker News yesterday, my “Investigation complete - Unable to diagnose with current information” Feedback FB22057274 was updated by Apple. What an amazing coincidence! Unfortunately, the update was not helpful, because Apple requested a sysdiagnose. For a user interface issue! This was precisely the fear I expressed in my earlier blog post:
I honestly don’t know what additional information Apple needs to diagnose it. I included not only steps to reproduce but also multiple screen recordings to illustrate. I have a suspicion that Apple did not even read my bug report, because I did not attach a sysdiagnose report. But a privacy-violating sysdiagnose would not be useful in this case!
The only trick in my bug report is that I used Little Snitch to simulate a slow loading link. This was just the easiest way I could think of to reliably reproduce the bug. There are of course other ways to simulate a slow loading link; if Apple Safari engineers of all people somehow can’t figure that out, then they aren’t qualified for their jobs. Again, however, the more likely explanation is that my feedback was ignored because it did not include a pro forma sysdiagnose, but who knows, because Apple did not request more information of any kind from me.
Here is my response this morning to Apple’s request:
You shouldn’t need a sysdiagnose, and I don’t know how a sysdiagnose would possibly be helpful for a user interface bug.
I found an easy way to reproduce the issue without Little Snitch: use the Network Link Conditioner preference pane from the Xcode Additional Tools download, and create a profile with Uplink Delay 3000 ms.
The Xcode Additional Tools, which include a number of useful utilities, can be found in the Apple Developer Downloads (sign-in required).
...
Read the original on lapcatsoftware.com »
There is a distinct, visceral kind of pain in watching an otherwise brilliant engineer hold down the Backspace key for six continuous seconds to fix a typo at the beginning of a line.
We’ve all been there. We learn ls, cd, and grep, and then we sort of… stop. The terminal becomes a place we live in, but we rarely bother to arrange the furniture. We accept that certain tasks take forty keystrokes, completely unaware that the shell authors solved our exact frustration sometime in 1989.
Here are some tricks that aren’t exactly secret, but aren’t always taught either. To keep the peace in our extended Unix family, I’ve split these into two camps: the universal tricks that work on almost any POSIX-ish shell (like sh on FreeBSD or ksh on OpenBSD), and the quality-of-life additions specific to interactive shells like Bash or Zsh.
These tricks rely on standard terminal line disciplines, generic Bourne shell behaviors, or POSIX features. If you SSH into an embedded router from 2009, a fresh OpenBSD box, or a minimal Alpine container, these will still have your back.
Why shuffle character-by-character when you can teleport? These are standard Emacs-style line-editing bindings (via Readline or similar), enabled by default in most modern shells.
CTRL + W: You’re typing /var/log/nginx/ but you actually meant /var/log/apache2/. You have two choices: hold down Backspace until your soul leaves your body, or hit CTRL + W to instantly delete the word before the cursor. Once you get used to this, holding Backspace feels like digging a hole with a spoon.
CTRL + U and CTRL + K: You typed out a beautifully crafted, 80-character rsync command, but suddenly realize you need to check if the destination directory actually exists first. You don’t want to delete it, but you don’t want to run it. Hit CTRL + U to cut everything from the cursor to the beginning of the line. Check your directory, and then hit CTRL + Y to paste (“yank”) your masterpiece right back into the prompt. (CTRL + K does the same thing, but cuts from the cursor to the end of the line.)
CTRL + A and CTRL + E: Jump instantly to the beginning (A) or end (E) of the line. Stop reaching for the Home and End keys; they are miles away from the home row anyway.
ALT + B and ALT + F: Move backward (B) or forward (F) one entire word at a time. It’s the arrow key’s much faster, much cooler sibling. (Mac users: you usually have to tweak your terminal settings to use Option as Meta for this to work).
reset (or stty sane): While strictly more of a terminal recovery tip than an interactive shell trick, it belongs here. We’ve all done it: you meant to cat a text file, but you accidentally cat a compiled binary or a compressed tarball. Suddenly, your terminal is spitting out ancient runes and Wingdings, and your prompt is completely illegible. Instead of closing the terminal window in shame, type reset (even if you can’t see the letters you’re typing) and hit enter. Your terminal will heal itself.
CTRL + C: Cancel the current command immediately. Your emergency exit when a command hangs, or you realize you’re tailing the wrong log file.
CTRL + D: Sends an EOF (End of File) signal. If you’re typing input to a command that expects it, this closes the stream. But if the command line is empty, it logs you out of the shell completely. Be careful where you press it.
CTRL + L: Your terminal is cluttered with stack traces, compiler spaghetti, and pure digital noise. Running the clear command works, but what if you’re already halfway through typing a new command? CTRL + L wipes the slate clean, throwing your current prompt right up to the top without interrupting your train of thought.
cd -: The classic channel-flipper. You’re deep down in /usr/local/etc/postfix and you need to check something in /var/log. You type cd /var/log, look at the logs, and now you want to go back. Instead of typing that long path again, type cd -. It switches you to your previous directory. Run it again, and you’re back in logs. Perfect for toggling back and forth.
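The toggle in action (any POSIX shell; directory names are just examples):

```shell
cd /etc        # somewhere you are working
cd /var        # hop over to check something
cd -           # prints /etc and takes you straight back
```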
pushd and popd: If cd - is a toggle switch, pushd is a stack. Need to juggle multiple directories? pushd /etc changes to /etc but saves your previous directory to a hidden stack. When you’re done, type popd to pop it off the stack and return exactly where you left off.
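A minimal stack round-trip (bash/zsh; the directories are arbitrary examples):

```shell
#!/bin/bash
cd /tmp
pushd /etc > /dev/null    # jump to /etc, saving /tmp on the stack
pwd                       # /etc
popd > /dev/null          # pop the stack: back to /tmp
pwd                       # /tmp
```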
> file.txt: This empties a file completely without deleting and recreating it. Why does this matter? It preserves file permissions, ownership, and doesn’t interrupt processes that already have the file open. It’s much cleaner than echo "" > file.txt (which actually leaves a newline character) or rm file && touch file.
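A quick check that truncation really keeps the file itself intact (the log name is arbitrary):

```shell
#!/bin/sh
echo "old noise" > app.log
chmod 640 app.log
> app.log           # truncate in place (sh/bash; in zsh use : > app.log)
wc -c < app.log     # 0 bytes; permissions and inode are untouched
```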
$_: In most shells, $_ expands to the last argument of the previous command. It’s especially useful interactively, or in simple scripts, when you need to operate on the same long path twice:
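For instance (bash/zsh; the demo path is arbitrary):

```shell
#!/bin/bash
mkdir -p demo/very/long/path
cd $_        # $_ expands to "demo/very/long/path"
pwd
```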
No more re-typing paths or declaring temporary variables to enter a directory you created a second ago.
If you are writing shell scripts, put these at the top immediately after your shebang. It will save you from deploying chaos to production.
* set -e: Exit on error. Very useful, but notoriously weird with edge cases (especially inside conditionals like if statements, while loops, and pipelines). Don’t rely on it blindly as it can create false confidence. (Pro-tip: consider set -euo pipefail for a more robust safety net, but learn its caveats first.)
* set -u: Treats referencing an unset variable as an error. This protects you from catastrophic disasters like rm -rf /usr/local/${MY_TYPO_VAR}/* accidentally expanding into rm -rf /usr/local/*.
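A tiny sketch of what set -u buys you; MY_TYPO_VAR is a deliberately unset placeholder, and the dangerous rm is only echoed, never run:

```shell
#!/bin/sh
set -u
# Without -u, ${MY_TYPO_VAR} would silently expand to "" and the rm
# below would target /usr/local/* . With -u, the expansion is a hard
# error, so the guarded subshell fails before anything is printed.
if ! (echo "rm -rf /usr/local/${MY_TYPO_VAR}/*") 2>/dev/null; then
    echo "refused: MY_TYPO_VAR is unset"
fi
```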
If you’re on a Linux box or using a modern interactive shell, these are the tools that make the CLI feel less like a rusty bicycle and more like something that actually responds when you steer.
CTRL + R: Reverse incremental search. Stop pressing the up arrow forty times to find that one awk command you used last Tuesday. Press CTRL + R, start typing a keyword from the command, and it magically pulls it from your history. Press CTRL + R again to cycle backwards through matches.
!!: This expands to the entirety of your previous command. Its most famous use case is the “Permission denied” walk of shame. You confidently type systemctl restart nginx, hit enter, and the system laughs at your lack of privileges. Instead of retyping it, run sudo !!, which expands to sudo systemctl restart nginx.
It’s your way of telling the shell, “Do what I said, but this time with authority.”
CTRL + X, then CTRL + E: You start typing a quick one-liner. Then you add a pipe. Then an awk statement. Soon, you’re editing a four-line monster inside your prompt and navigation is getting difficult. Hit CTRL + X followed by CTRL + E (in Bash; in Zsh, this needs configuring). This drops your current command into your default text editor (like Vim or Nano). You can edit it with all the power of a proper editor, save, and exit. The shell then executes the command instantly.
fc: The highly portable, traditional sibling to CTRL+X CTRL+E. Running fc opens your previous command in your $EDITOR. It works across most shells and is a fantastic hidden gem for fixing complex, multi-line commands that went wrong.
ESC + . (or ALT + .): Inserts the last argument of the previous command right at your cursor. Press it repeatedly to cycle further back through your history, dropping the exact filename or parameter you need right into your current command.
!$: The non-interactive sibling of ESC + .. Unlike ESC + . (which inserts the text live at your cursor for you to review or edit), !$ expands blindly at the exact moment you hit enter.
(Pro-Tip: For scripting or standard sh, use the $_ variable mentioned earlier instead!)
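In Bash, $_ holds the last argument of the previous command, so the classic make-a-directory-then-enter-it dance becomes (the directory name here is invented):

```shell
# $_ expands to the last argument of the previous command,
# so this works in scripts where !$ and ESC + . do not.
mkdir -p /tmp/new_project
cd "$_"    # same as: cd /tmp/new_project
pwd
```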
Brace expansion is pure magic for avoiding repetitive typing, especially when doing quick backups or renames.
The Backup Expansion: Need to edit a critical config file and want to make a quick backup first? Something like cp app.conf{,.bak} expands to cp app.conf app.conf.bak. The same trick handles renames: mv filename.{txt,md} expands to mv filename.txt filename.md. Fast, elegant, and makes you look like a wizard.
Need multiple directories? mkdir -p project/{src,tests,docs} creates all three at once.
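Prefixing any of these with echo is a safe way to preview exactly what the shell will run (the filenames are invented for the demo):

```shell
# Brace expansion happens before the command runs, so echo shows the result
echo mv filename.{txt,md}               # prints: mv filename.txt filename.md
echo cp app.conf{,.bak}                 # prints: cp app.conf app.conf.bak
echo mkdir -p project/{src,tests,docs}  # prints: mkdir -p project/src project/tests project/docs
```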
<(command) (Process Substitution): Treats the output of a command as if it were a file. Say you want to diff the sorted versions of two files. Traditionally, you’d sort them into temporary files, diff those, and clean up. Process substitution skips the middleman: diff <(sort file1.txt) <(sort file2.txt).
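A self-contained version of that diff (the file names and contents are invented for the demo):

```shell
printf 'banana\napple\n' > fruits_a.txt
printf 'apple\nbanana\n' > fruits_b.txt

# Each <(...) appears to diff as a readable file; no temp files, no cleanup
diff <(sort fruits_a.txt) <(sort fruits_b.txt) && echo "same contents once sorted"
```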
** (Globstar): find is a great command, but sometimes it feels like overkill. If you run shopt -s globstar in Bash (it’s enabled by default in Zsh), ** matches files recursively in all subdirectories. Need to find all JavaScript files in your current directory and everything beneath it?
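With globstar enabled, the answer is a glob rather than a find invocation (the demo directory tree is invented):

```shell
shopt -s globstar                      # Bash only; Zsh has ** enabled by default
mkdir -p demo/src/utils
touch demo/app.js demo/src/utils/helper.js
printf '%s\n' **/*.js                  # lists every .js file at any depth
```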
CTRL + Z, then bg, then disown: You started a massive, hour-long database import task, but you forgot to run it in tmux or screen. It’s tying up your terminal, and if your SSH connection drops, the process dies. Panic sets in. Instead:
Press CTRL + Z to suspend the job and get your prompt back.
Type bg to let it resume running in the background. Your prompt is free!
Type disown to detach it from your shell entirely. You can safely close your laptop, grab a coffee, and the process will survive.
command |& tee file.log: Standard pipes (|) only catch standard output (stdout). If a script throws an error (stderr), it skips the pipe and bleeds directly onto your screen, missing the log file. |& pipes both stdout and stderr (it’s a helpful shorthand for 2>&1 |).
Throw in tee, and you get to watch the output on your screen while simultaneously saving it to a log file. It’s the equivalent of watching live TV while recording it to your DVR.
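A quick sketch, simulating a noisy script with a command group (the log file name is invented):

```shell
# Both stdout and stderr flow through the pipe into tee and onto the screen
{ echo "progress: step 1"; echo "warning: disk slow" >&2; } |& tee run.log

cat run.log   # contains both the stdout line and the stderr line
```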
The shell is a toolbox, not an obstacle course. You don’t need to memorize all of these today. Pick just one trick, force it into your daily habits for a week, and then pick another. Stop letting the terminal push you around, and start rearranging the furniture. It’s your house now.
...
Read the original on blog.hofstede.it »
Today, we’re announcing an update on how GitHub will use data to deliver more intelligent, context-aware coding assistance. From April 24 onward, interaction data—specifically inputs, outputs, code snippets, and associated context—from Copilot Free, Pro, and Pro+ users will be used to train and improve our AI models unless they opt out. Copilot Business and Copilot Enterprise users are not affected by this update.
Not interested? Opt out in settings under “Privacy.” If you previously opted out of the setting allowing GitHub to collect this data for product improvements, your preference has been retained—your choice is preserved, and your data will not be used for training unless you opt in.
This approach aligns with established industry practices and will improve model performance for all users. By participating, you’ll help our models better understand development workflows, deliver more accurate and secure code pattern suggestions, and improve their ability to help you catch potential bugs before they reach production.
Our initial models were built using a mix of publicly available data and hand-crafted code samples. This past year, we’ve started incorporating interaction data from Microsoft employees and have seen meaningful improvements, including increased acceptance rates in multiple languages.
The improvements we’ve seen by incorporating Microsoft interaction data indicate we can improve model performance for a more diverse range of use cases by training on real-world interaction data. Should you decide to participate in this program, the interaction data we may collect and leverage includes:
Outputs accepted or modified by you
Inputs sent to GitHub Copilot, including code snippets shown to the model
This program does not use:
Interaction data from users who opt out of model training in their Copilot settings
Content from your issues, discussions, or private repositories at rest. We use the phrase “at rest” deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.
The data used in this program may be shared with GitHub affiliates, which are companies in our corporate family including Microsoft. This data will not be shared with third-party AI model providers or other independent service providers.
We believe the future of AI-assisted development depends on real-world interaction data from developers like you. It’s why we’re using Microsoft interaction data for model training and will begin using interaction data from GitHub employees as well.
If you choose to help us improve our models with your interaction data, thank you. Your contributions make a meaningful difference in building AI tools that serve the entire developer community. If you prefer not to participate, that’s fine too—you will still be able to take full advantage of the AI features you know and love.
Together, we can continue to build AI that accelerates your workflows and empowers you to build better, more secure software faster than ever.
If you have questions, visit our FAQ and related discussion.
Mario Rodriguez leads the GitHub Product team as Chief Product Officer. His core identity is being a learner and his passion is creating developer tools—so much so that he has spent the last 20 years living that mission in leadership roles across Microsoft and GitHub. Mario most recently oversaw GitHub’s AI strategy and the GitHub Copilot product line, launching and growing Copilot across thousands of organizations and millions of users. Mario spends time outside of GitHub with his wife and two daughters. He also co-chairs and founded a charter school in an effort to progress education in rural regions of the United States.
...
Read the original on github.blog »
A quick read on momentum, adoption, and where the current activity is clustering. These are the earliest observed public-era Claude Code commits we can verify after launch. Multiple same-day candidates may exist, so this is suggestive rather than definitive.

Change initial game setup to always have exactly 1 correct can
This improves the starting condition by ensuring players always begin with exactly
one can in the correct position, making the initial game state more consistent.
🤖 Generated with Claude Code
Co-Authored-By: Claude <[email protected]>

Original repos (non-forks) with their first observed Claude Code commit in the last 7 days
fix: enable adaptive threshold + add 22 bridge chunks for top failure protocols
- Enable adaptive threshold retry for ALL agency searches (not just high-accuracy)
- Add dense bridge chunks for top 20 failing protocols covering ~130 test failures
- Protocols: 1203 1209 1210 1211 1212 1213 1215 1216 1219 1220 1222 1223 1225 1229 1232 1237 1241 1244 1302 510 518 519

feat: add project statistics with charts (bugs created vs resolved, resolution time evolution)
- Backend: add getTimeSeries() to globalTaskStore for day-by-day created/resolved
counts, resolution time evolution, and open tickets over time
- Backend: add GET /api/agents/tasks/stats/timeseries?project=X&days=N endpoint
- Frontend: new ProjectStats component with Chart.js graphs (created vs resolved
bar chart, resolution time line chart, open tickets area chart, state durations)
- Frontend: integrate ProjectStats into ProjectsView - clicking any project card
opens the statistics panel with all charts
- Frontend: add getProjectTaskStats and getProjectTimeSeries API methods
(by CLAUDIO)

Use NHTSA instead of AutoAstat for VIN data loading in CarCaseDetail
Replace the removed fetchAutoAstat button with a new ‘Завантажити з NHTSA’ (‘Load from NHTSA’)
button that calls vinApi.decode(vin) and updates the case fields directly.
Shows warning toast when mock data is returned (VIN not in NHTSA DB).
https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

feat(all): add DBSCAN clustering, FFT plan caching, RBF/Akima interpolation, BFGS optimizer, and expand constants/special/stats
Cluster (+126 lines): DBSCAN density-based clustering with configurable eps
and min_samples, returning cluster labels with -1 for noise points.
Constants (+108 lines): Comprehensive physical constants module (speed of
light, Planck, Boltzmann, Avogadro, electron mass, proton mass, elementary
charge, gravitational constant, etc.) matching scipy.constants surface.
FFT (+94 lines): Plan caching for repeated FFT sizes, reducing planning
overhead for iterative algorithms. Adds real-to-complex (rfft/irfft)
optimized paths.
Interpolate (+216 lines): RBF (radial basis function) interpolation with
multiquadric/inverse_multiquadric/gaussian/linear/cubic kernels. Akima
interpolation (subspline with reduced overshoot). RegularGridInterpolator
for N-dimensional interpolation on regular grids.
Linalg (+32/-32): Refactors matrix operation signatures for consistency,
replacing ad-hoc parameter ordering with standardized (matrix, n, …) form.
Ndimage (+8/-8): Minor cleanup of filter axis validation messages.
Optimize (+128 lines): BFGS quasi-Newton optimizer with Wolfe line search,
inverse Hessian approximation, and gradient convergence detection.
Signal (+12/-12): Consistency fixes for window function parameter validation.
Special: Simplifies Airy function implementation (+48/-48), fixes
convenience function parameter passing, updates re-exports.
Stats (+6/-6): Minor fixes to distribution parameter edge cases.
Integrate (+4/-4): Quadrature tolerance consistency fixes.
Co-Authored-By: Claude Opus 4.6 (1M context) <>

Update CogPR-57 doctrine: mark all three defects as fixed (tic 108)

CogPR-57 mandate lifecycle fix: race guard, concurrency guard, reconcile-first
Three structural defect fixes (authorized at tic 107 review, not tech debt):
1. cgg-gate.sh: Re-validate mandate status before inline lightweight
consumption — prevents double-consumption race with mogul-runner
2. review SKILL.md: Concurrency guard at steps 5.5 and 8.5 — check
current.json status before writing mandates or spawning Mogul.
Running/pending mandates are not overwritten.
3. session-restore.sh: Reconcile-first cycle computation — read previous
mandate tic_context as primary schedule source, modulo as fallback
only when no previous context exists. Estate snapshot can add cycles
but not replace schedule. Eliminates recomputation drift.
Runtime parity: both hooks synced to ~/.claude/hooks/ and verified.

Disable AutoAstat button in CarCaseDetail
Remove the ‘Завантажити дані’ (‘Load data’) button that called fetch-autoastat
from the case detail page. AutoAstat is not used for now.
https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

feat: add Redis-backed multi-message buffering with whatsapp-agentkit skill
- New agent/buffer.py: Manages message buffering with Redis
- Groups multiple messages into coherent context
- Configurable timeout (default 2.5s)
- Automatic deduplication (webhook retries)
- Backpressure handling (max 15 messages/buffer)
- Age-based topic separation (5 min max)
- Structured JSON logging
- Updated agent/main.py:
- Integrates buffer_manager in webhook handler
- Single unified response instead of per-message
- Connects to Redis on startup
- Graceful degradation if Redis unavailable
- Improved structured logging
- Dependencies: Added redis + python-json-logger
- Config: New REDIS_URL, BUFFER_TIMEOUT_MS, MAX_BUFFER_AGE_MS env vars
Implements whatsapp-agentkit skill (RED-GREEN-REFACTOR tested)

Fix NHTSA VIN loading and add Consignor/Transport/PreviousDocs sections
- Add dedicated GET /api/vin/decode/{vin} endpoint that calls NHTSA
directly without requiring case creation first, eliminating the
two-step create-then-fetch flow that caused the loading error
- Update NewCarCase.tsx: use vinApi.decode() for VIN lookup (cleaner,
more specific error messages, shows brand/model/year on success)
- Add Consignor section: sender name, country, address, EORI number
- Add Transport section: type (sea/road/rail/air), vessel/vehicle ID,
flag country, crossing point, expected date, document type & number
- Add PreviousDocs (Graph 44) section: dynamic list of previous customs
documents (T1/T2/TIR/EUR-1/CMR/MRN) with type, number, date, issuer
- Add backend model fields + migration 014 for all new columns
- Update CarCaseCreate/Update/Response schemas with new fields
- Add vinApi to frontend api.ts and new fields to types/index.ts
https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

fix: prevent CDN caching of empty search results
Empty results were getting cached at CDN (Fastly) for 1 hour via
Cache-Control: public, max-age=3600. After deploying threshold/dictionary
fixes, old NO RESULTS responses kept being served from CDN cache.
Now: empty results get Cache-Control: no-store

Add README.md for repo and quick reference header to SKILL.md

fix(backend): add Dockerfile with python:3.11-slim to reduce image size
Replaces Nixpacks build (5.7GB) with explicit Dockerfile using slim base
image + CPU-only PyTorch to stay under Railway’s 4GB trial limit.
Co-Authored-By: Claude Sonnet 4.6 <>

fix: add missing New York State and Queens NY locale config files
These were registered in generate-locale-questions.ts in a prior commit
but the actual config files were never staged, causing TS2307 module
not found errors on Render.
Co-Authored-By: Claude Sonnet 4.6 <>

fix: lower similarity and quality thresholds for better recall
...
Read the original on www.claudescode.dev »
After an enormous amount of work and dedication from FreeCAD contributors we are delighted to announce that FreeCAD Version 1.1 is now released and available for download.
There are significant improvements and new features, including transparent Part Design previews, interactive draggers added to tools like Fillet and Chamfer, three-point lighting, a Clarify Selection tool, Assembly and FEM improvements and animations, a totally new CAM tool library system, and much more.
For a full list of changes and new features check out the Release Notes and if you want to support the ongoing development of FreeCAD then do consider making a donation!
...
Read the original on blog.freecad.org »