10 interesting stories served every morning and every evening.




1 1,374 shares, 55 trendiness

Protect Digital Privacy in the EU


🚨 The Conservatives (EPP) are at­tempt­ing to force a new vote on Thursday (26th), seek­ing to re­verse Parliament’s NO on in­dis­crim­i­nate scan­ning. This is a di­rect at­tack on democ­racy and bla­tant dis­re­gard for your right to pri­vacy. No means no. Take ac­tion now!

...

Read the original on fightchatcontrol.eu »

2 777 shares, 36 trendiness

Running Tesla Model 3's Computer on My Desk Using Parts From Crashed Cars

Tesla runs a bug bounty pro­gram that in­vites re­searchers to find se­cu­rity vul­ner­a­bil­i­ties in their ve­hi­cles. To par­tic­i­pate, I needed the ac­tual hard­ware, so I started look­ing for Tesla Model 3 parts on eBay. My goal was to get a Tesla car com­puter and touch­screen run­ning on my desk, boot­ing the car’s op­er­at­ing sys­tem.

The car com­puter con­sists of two parts - the MCU (Media Control Unit) and the au­topi­lot com­puter (AP) lay­ered on top of each other. In the car, the com­puter is lo­cated in front of the pas­sen­ger seat, roughly be­hind the glove­box. The part it­self is the size of an iPad and the thick­ness of a ~500 page book and is cov­ered in a wa­ter-cooled metal cas­ing:

By searching for Tesla Model 3 MCU on eBay, I found quite a lot of results in the $200 - $300 USD price range. Looking at the listings, I found that many of these sellers are “salvaging” companies who buy crashed cars, take them apart, and list all parts for sale individually. Sometimes, they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.

To boot the car up and interact with it, I needed a few more things:

* An adjustable bench power supply

* A Model 3 touchscreen

* The display cable to connect them together

For the power supply, I went with an adjustable 0-30V model from Amazon. There were a 5 ampere and a 10 ampere version available at the time; I figured it was safer to have some headroom and went with the 10A version. That turned out to be a very good decision, as the full setup could consume up to 8A at peak. The Model 3 screens were surprisingly expensive on eBay, I assume because they are a popular part to replace. I found a pretty good deal for 175 USD.

The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were being sold with the cables cut a few centimeters after the connector (interestingly, most sellers did that instead of just unplugging the cables).

This is when I discovered that Tesla publishes the wiring “Electrical Reference” for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:

Turns out the dis­play uses a 6-pin ca­ble (2 for 12V and ground, 4 for data) with a spe­cial Rosenberger 99K10D-1D5A5-D con­nec­tor. I soon dis­cov­ered that un­less you are a car man­u­fac­turer or­der­ing in bulk, there is no way you are buy­ing a sin­gle Rosenberger ca­ble like this. No Ebay list­ings, noth­ing on Aliexpress, es­sen­tially no search re­sults at all.

After dig­ging around a bit, I found that this ca­ble is very sim­i­lar to a more widely used au­to­mo­tive ca­ble called LVDS, which is used to trans­fer video in BMW cars. At first sight, the con­nec­tors looked like a per­fect match to my Rosenberger, so I placed an or­der:

The com­puter ar­rived first. To at­tempt to power it on, I looked up which pin of which con­nec­tor I needed to at­tach 12V and ground to us­ing the Tesla schemat­ics & the few pic­tures on­line of peo­ple do­ing the same desk-MCU setup. Since the com­puter in­cluded the shortly cut ca­bles, I was able to strip the rel­e­vant wires and at­tach the power sup­ply’s clips to the right ones:

I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm’s previous research on GitHub, I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.

This network does not have DHCP, so you have to manually set your IP address. The IP you select has to be in 192.168.90.X/24, and should be higher than 192.168.90.105 to not conflict with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car which shows the hosts that are normally associated with specific IPs:
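On a Linux laptop this looks something like the following; the interface name eth0 is a placeholder for whatever port the MCU is actually plugged into:

```shell
# assumption: the Ethernet port facing the MCU shows up as eth0
sudo ip addr add 192.168.90.110/24 dev eth0   # any address above .105 avoids conflicts
sudo ip link set eth0 up
ping -c 1 192.168.90.100                      # the MCU should answer here
```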

@lewurm’s blog men­tioned that SSH on port :22 and a web­server on :8080 was open on 192.168.90.100, the MCU. Was this still the case on newer mod­els? Yes!

I had already found 2 services to explore on the MCU:

* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances. This SSH server requires specially signed SSH keys which only Tesla is supposed to be able to generate. Interestingly, Tesla offers a “Root access program” on their bug bounty program: researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further. A nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.

* A REST-like API on :8080 which returned a history of “tasks”. This service is called ODIN (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla’s diagnostics tool “Toolbox”.

Around this time, I also re­moved the metal shield­ing to see ex­actly what the boards look like in­side. You can see the two dif­fer­ent boards which were stacked on top of each other:

Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:

It was ex­tremely hard to find the name/​model of the chip that got burned, es­pe­cially since part of the text printed on it had be­come un­read­able due to the dam­age. To be able to con­tinue with the pro­ject, I had to or­der a whole other car com­puter.

In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.

So I re­ally did need that Rosenberger ca­ble, there was no get­ting around it.

After hav­ing no luck find­ing it on­line and even vis­it­ing a Tesla ser­vice cen­ter in London (an odd en­counter, to say the least), I had to ac­cept what I had been try­ing to avoid: buy­ing an en­tire Dashboard Wiring Harness.

Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. For the cable which connects the MCU to the screen, the part number is 1067960-XX-E. Searching for it on eBay brings up this monstrosity:

Turns out that actual cars don’t have individual cables. Instead they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is the reason why I could not find the individual cable earlier: they simply don’t manufacture it. Unfortunately, I had no other choice but to buy the entire loom for 80 USD.

Despite how bulky it was, the loom worked per­fectly. The car booted, the touch screen started up, and I had a work­ing car com­puter on my desk, run­ning the car’s op­er­at­ing sys­tem!

Having the sys­tem run­ning, I can now start play­ing with the user in­ter­face, in­ter­act­ing with the ex­posed net­work in­ter­faces, ex­plor­ing the CAN buses, and per­haps even at­tempt­ing to ex­tract the firmware.

...

Read the original on bugs.xdavidhu.me »

3 634 shares, 69 trendiness

Personal Encyclopedias — whoami.wiki

Last year, I vis­ited my grand­moth­er’s house for the first time af­ter the pan­demic and came across a cup­board full of loose old pho­tos. I counted 1,351 of them span­ning all the way from my grand­par­ents in their early 20s, my mom as a baby, to me in mid­dle school, just around the time when we got our first smart­phone and all pho­tos since then were backed up on­line.

Everything was all over the place so I spent some time go­ing through them in­di­vid­u­ally and or­ga­niz­ing them into groups. Some of the ini­tial groups were based on the phys­i­cal at­trib­utes of the pho­to­graph like sim­i­lar as­pect ra­tios or film stock. For ex­am­ple, there was a group of black/​white 32mm square pic­tures that were taken around the time when my grand­fa­ther was in his mid 20s.

As I got done with group­ing all of them, I was able to see flashes of sto­ries in my head, but they were ephemeral and frag­ile. For in­stance, there was a group of pho­tos that looked like it was taken dur­ing my grand­par­ents’ wed­ding but I did­n’t know the chrono­log­i­cal or­der they were taken be­cause EXIF meta­data did­n’t ex­ist around that time.

So I sat down with my grand­mother and asked her to re­order the pho­tos and tell me every­thing she could re­mem­ber about her wed­ding. Her face lit up as she nar­rated the back­story be­hind the oc­ca­sion, go­ing from photo to photo, resur­fac­ing de­tails that had been dor­mant for decades. I wrote every­thing down, recorded the names of peo­ple in some of the pho­tos, some of whom I rec­og­nized as younger ver­sions of my un­cles and aunts.

After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.

I split up the rest of the con­tent into sec­tions and filled them with every­thing I could ver­ify like dates, names, places, who sat where. I scanned all the pho­tos and spent some time fig­ur­ing out what to place where. For every photo place­ment, there was a fol­low up to in­clude a de­scrip­tive cap­tion too.

Whenever I men­tioned a per­son, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link things to real pages that pro­vided wider con­text to things like venues, rit­u­als, and the po­lit­i­cal cli­mate around that time, like for in­stance a le­gal amend­ment that was rel­e­vant to the wed­ding cer­e­mony.

In two evenings, I was able to doc­u­ment a full back­story for the pho­tos into a neat ar­ti­cle. These two evenings also made me re­al­ize just how pow­er­ful en­cy­clo­pe­dia soft­ware is to record and pre­serve me­dia and knowl­edge that would’ve oth­er­wise been lost over time.

This was so much fun that I spent the fol­low­ing months writ­ing pages to ac­count for all the pho­tos that needed to be stitched to­gether.

I got help from r/genealogy about how to approach recording oral history and I was given resources to better conduct interviews - shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.

Over time, I man­aged to write a lot of pages con­nect­ing peo­ple to dif­fer­ent life events. The en­cy­clo­pe­dia for­mat made it easy to con­nect dots I would have never found on my own, like dis­cov­er­ing that one of the singers at my grand­par­ents’ wed­ding was the same nurse who helped de­liver me.

After find­ing all the sto­ries be­hind the phys­i­cal pho­tos, I started to work on dig­i­tal pho­tos and videos that I had stored on Google Photos. The won­der­ful thing about dig­i­tal pho­tos is that they come with EXIF meta­data that can re­veal ex­tra in­for­ma­tion like date, time, and some­times ge­o­graph­i­cal co­or­di­nates.

This time, with­out any in­ter­views, I wanted to see if I could use a lan­guage model to cre­ate a page based on just brows­ing through the pho­tos. As my first ex­per­i­ment, I cre­ated a folder with 625 pho­tos of a fam­ily trip to Coorg back in 2012.

I pointed Claude Code at the di­rec­tory and asked it to draft a wiki page by brows­ing through the im­ages. I hinted at us­ing ImageMagick to cre­ate con­tact sheets so it would help with brows­ing through mul­ti­ple pho­tos at once.
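The post doesn't show the exact command, but a contact sheet like this can be produced with ImageMagick's montage tool; the filenames and tiling below are assumptions:

```shell
# assumption: the trip photos are JPEGs in the current directory.
# Tile thumbnails 6 across; ImageMagick splits the output into
# contact_0.jpg, contact_1.jpg, ... once a sheet fills up.
montage *.jpg -thumbnail 320x320 -tile 6x -geometry +4+4 contact_%d.jpg
```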

A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones that I had forgotten by now. It picked up details on the modes of transportation we used to get between places just from what it could see.

After I had clar­i­fied who some of the peo­ple in the pic­tures were, it went on to iden­tify them au­to­mat­i­cally in the cap­tions. Now that I had a de­tailed out­line ready, the page still only had con­tent based on the avail­able data, so to fill in the gaps I shared a list of anec­dotes from my point of view and the model in­serted them into places where the nar­ra­tive called for them.

The Coorg trip only had pho­tos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 pho­tos and 343 videos with an iPhone 12 Pro that in­cluded ge­o­graph­i­cal co­or­di­nates as part of the EXIF meta­data.

On top of that, I ex­ported my lo­ca­tion time­line from Google Maps, my Uber trips, my bank trans­ac­tions, and Shazam his­tory. I would ask Claude Code to start with the pho­tos and then grad­u­ally give it ac­cess to the dif­fer­ent data ex­ports.

Here are some of the things it did across mul­ti­ple runs:

It cross-ref­er­enced my bank trans­ac­tions with lo­ca­tion data to as­cer­tain the restau­rants I went to.

Some of the pho­tos and videos showed me in at­ten­dance at a soc­cer match, how­ever, it was un­known which teams were play­ing. The model looked up my bank trans­ac­tions and found a Ticketmaster in­voice with in­for­ma­tion about the teams and name of the tour­na­ment.

It looked up my Uber trips to fig­ure out travel times and ex­act lo­ca­tions of pickup and drop.

It used my Shazam tracks to write about the kinds of songs that were play­ing at a place, like Cuban songs at a Cuban restau­rant.

In a fol­low-up, I men­tioned re­mem­ber­ing an evening din­ner with a gui­tarist play­ing in the back­ground. It fil­tered my me­dia to evening cap­tures, found a frame in a video with the gui­tarist, up­loaded it, and ref­er­enced the mo­ment in the page.

The MediaWiki ar­chi­tec­ture worked well with the ed­its, since for every new data source it would make amend­ments like a real Wikipedia con­trib­u­tor would. I leaned heav­ily on fea­tures that al­ready ex­isted. Talk pages to clar­ify gaps and con­sol­i­date re­search notes, cat­e­gories to group pages by theme, re­vi­sion his­tory to track how a page evolved as new data came in. I did­n’t have to build any of this, it was all just there.

What started as me help­ing the model fill in gaps from my mem­ory grad­u­ally in­verted. The model was now sur­fac­ing things I had com­pletely for­got­ten, cross-ref­er­enc­ing de­tails across data sources in ways I never would have done man­u­ally.

So I started point­ing Claude Code at other data ex­ports. My Facebook, Instagram, and WhatsApp archives held around 100k mes­sages and a cou­ple thou­sand voice notes ex­changed with close friends over a decade.

The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read as if written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.

This is when I re­al­ized I was no longer work­ing on a fam­ily his­tory pro­ject. What I had been build­ing, page by page, was a per­sonal en­cy­clo­pe­dia. A struc­tured, brows­able, in­ter­con­nected ac­count of my life com­piled from the data I al­ready had ly­ing around.

I’ve been work­ing on this as whoami.wiki. It uses MediaWiki as its foun­da­tion, which turns out to be a great fit be­cause lan­guage mod­els al­ready un­der­stand Wikipedia con­ven­tions deeply from their train­ing data. You bring your data ex­ports, and agents draft the pages for you to re­view.

A page about your grand­moth­er’s wed­ding works the same way as a page about a royal wed­ding. A page about your best friend works the same way as a page about a pub­lic fig­ure.

Oh and it’s gen­uinely fun! Putting to­gether the en­cy­clo­pe­dia felt like the early days of Facebook time­line, brows­ing through fin­ished pages, fol­low­ing links be­tween peo­ple and events, and stum­bling on a de­tail I for­got.

But more than the tech­nol­ogy, it’s the sto­ries that stayed with me. Writing about my grand­moth­er’s life sur­faced things I’d never known, her years as a sin­gle mother, the de­ci­sions she had to make, the re­silience it took. She was a stronger woman than I ever re­al­ized. Going through my friend­ships, I found mo­ments of en­dear­ment that I had nearly for­got­ten, the days friends went the ex­tra mile to be good to me. Seeing those mo­ments laid out on a page made me pick up the phone and call a few of them. The en­cy­clo­pe­dia did­n’t just or­ga­nize my data, it made me pay closer at­ten­tion to the peo­ple in my life.

Today I’m re­leas­ing whoami.wiki as an open source pro­ject. The en­cy­clo­pe­dia is yours, it runs on your ma­chine, your data stays with you, and any model can read it. The pro­ject is early and I’m still fig­ur­ing a lot of it out, but if this sounds in­ter­est­ing, you can get started here and tell me what you think!

...

Read the original on whoami.wiki »

4 513 shares, 100 trendiness

Tuta (@tuta.com)

You did it! 🥳

European Parliament just de­cided that Chat Control 1.0 must stop.

This means on April 6, 2026, Gmail, LinkedIn, Microsoft and other Big Techs must stop scan­ning your pri­vate mes­sages in the EU. #PrivacyWins 💪

[contains quote post or other em­bed­ded con­tent]

...

Read the original on bsky.app »

5 469 shares, 19 trendiness

ARC-AGI-3

ARC-AGI-3 is an in­ter­ac­tive rea­son­ing bench­mark which chal­lenges AI agents to ex­plore novel en­vi­ron­ments, ac­quire goals on the fly, build adapt­able world mod­els, and learn con­tin­u­ously.

A 100% score means AI agents can beat every game as ef­fi­ciently as hu­mans.

Instead of solv­ing sta­tic puz­zles, agents must learn from ex­pe­ri­ence in­side each en­vi­ron­ment—per­ceiv­ing what mat­ters, se­lect­ing ac­tions, and adapt­ing their strat­egy with­out re­ly­ing on nat­ural-lan­guage in­struc­tions.

...

Read the original on arcprize.org »

6 443 shares, 18 trendiness

Apple randomly closes bug reports unless you “verify” the bug remains unfixed

Why do I file bug re­ports with Apple Feedback Assistant? I plead in­san­ity. Or per­haps ad­dic­tion. I see­saw be­tween phases of ab­sti­nence and falling off the wagon. I’ve even tried or­ga­niz­ing a pub­lic boy­cott of Feedback Assistant, with a list of de­mands to im­prove the ex­pe­ri­ence for users, but the boy­cott never caught on with other de­vel­op­ers. Regardless, an in­cen­tive still ex­ists to file bug re­ports, be­cause Apple ac­tu­ally fixes some of my bugs. My main com­plaint about the bug re­port­ing process is not the un­fixed bugs but rather the dis­re­spect for bug re­ports and the peo­ple who file them. Apple in­ten­tion­ally wastes our time with no re­grets, as if our time had no value, as if we had some kind of duty to serve Apple.

In March 2023, I filed FB12088655 “Privacy: Network filter extension TCP connection and IP address leak.” I mentioned this bug report at the time in a blog post, which included the same steps to reproduce and example Xcode project that I provided to Apple. In the three years since I filed the bug report, I received no response whatsoever from Apple… until a couple of weeks ago, when Apple asked me to “verify” the issue with macOS 26.4 beta 4 and update my bug report.

I in­stall the WWDC be­tas every year in June but don’t run OS be­tas af­ter September when the ma­jor OS up­dates are re­leased. I don’t have enough time or in­deed enough Apple de­vices to be an un­paid tester year round. Thus, ver­i­fy­ing is­sues in be­tas is a has­sle for me. I’ve been burned by such re­quests in the past, asked by Apple to ver­ify is­sues in be­tas that were not fixed, so I asked Apple di­rectly whether beta 4 fixed the bug: they should al­ready know, since they have my steps to re­pro­duce! However, their re­sponse was eva­sive, never di­rectly an­swer­ing my ques­tion. Moreover, they threat­ened to close my bug re­port and as­sume the bug is fixed if I did­n’t ver­ify within two weeks! Again, this is af­ter Apple silently sat on my bug re­port for three years.

Although I didn’t install the beta myself, I spoke to the developers of Little Snitch, who do run the macOS betas, and they kindly informed me that in their testing, they could still reproduce my issue with macOS 26.4 beta 4. It was no surprise, then, that when I updated to macOS 26.4, released to the public yesterday by Apple, I could still reproduce the bug with my instructions and example project. It appears that Apple knowingly sent me on a wild goose chase, demanding that I “verify” a bug they did nothing to fix, perhaps praying that the bug had magically disappeared on its own, with no effort from Apple.

By the way, a few weeks ago I published a blog post about another bug, FB22057274 “Pinned tabs: slow-loading target="_blank" links appear in the wrong tab,” which is also 100% reproducible but nonetheless was marked by Apple with the resolution “Investigation complete - Unable to diagnose with current information.” On March 9, I updated the bug report, asking what additional information Apple needs from me (they never asked for more information), but I’ve yet to receive a response.

I can only as­sume that some bo­zos in Apple lead­er­ship in­cen­tivize un­der­lings to close bug re­ports, no mat­ter whether the bugs are fixed. Out of sight, out of mind. Apple’s in­ter­nal met­rics prob­a­bly tell them that they have no soft­ware qual­ity prob­lem, be­cause the num­ber of open bug re­ports is kept lower ar­ti­fi­cially.

Ironically, the iPa­dOS 26.4 be­tas in­tro­duced a Safari crash­ing bug that I re­ported a month ago, but Apple failed to fix the bug be­fore the pub­lic re­lease. What’s the pur­pose of be­tas? As far as I can tell, the pur­pose is just to an­noy peo­ple who file bugs, with­out do­ing any­thing use­ful.

Shortly after this blog post hit the front page of Hacker News yesterday, my “Investigation complete - Unable to diagnose with current information” Feedback FB22057274 was updated by Apple. What an amazing coincidence! Unfortunately, the update was not helpful, because Apple requested a sysdiagnose. For a user interface issue! This was precisely the fear I expressed in my earlier blog post:

I hon­estly don’t know what ad­di­tional in­for­ma­tion Apple needs to di­ag­nose it. I in­cluded not only steps to re­pro­duce but also mul­ti­ple screen record­ings to il­lus­trate. I have a sus­pi­cion that Apple did not even read my bug re­port, be­cause I did not at­tach a sys­di­ag­nose re­port. But a pri­vacy-vi­o­lat­ing sys­di­ag­nose would not be use­ful in this case!

The only trick in my bug re­port is that I used Little Snitch to sim­u­late a slow load­ing link. This was just the eas­i­est way I could think of to re­li­ably re­pro­duce the bug. There are of course other ways to sim­u­late a slow load­ing link; if Apple Safari en­gi­neers of all peo­ple some­how can’t fig­ure that out, then they aren’t qual­i­fied for their jobs. Again, how­ever, the more likely ex­pla­na­tion is that my feed­back was ig­nored be­cause it did not in­clude a pro forma sys­di­ag­nose, but who knows, be­cause Apple did not re­quest more in­for­ma­tion of any kind from me.

Here is my re­sponse this morn­ing to Apple’s re­quest:

You should­n’t need a sys­di­ag­nose, and I don’t know how a sys­di­ag­nose would pos­si­bly be help­ful for a user in­ter­face bug.

I found an easy way to re­pro­duce the is­sue with­out Little Snitch: use the Network Link Conditioner pref­er­ence pane from the Xcode Additional Tools down­load, and cre­ate a pro­file with Uplink Delay 3000 ms.

The Xcode Additional Tools, which in­clude a num­ber of use­ful util­i­ties, can be found in the Apple Developer Downloads (sign-in re­quired).

...

Read the original on lapcatsoftware.com »

7 364 shares, 38 trendiness

Shell Tricks That Actually Make Life Easier (And Save Your Sanity)

There is a dis­tinct, vis­ceral kind of pain in watch­ing an oth­er­wise bril­liant en­gi­neer hold down the Backspace key for six con­tin­u­ous sec­onds to fix a typo at the be­gin­ning of a line.

We’ve all been there. We learn ls, cd, and grep, and then we sort of… stop. The terminal becomes a place we live in, but we rarely bother to arrange the furniture. We accept that certain tasks take forty keystrokes, completely unaware that the shell authors solved our exact frustration sometime in 1989.

Here are some tricks that aren’t ex­actly se­cret, but aren’t al­ways taught ei­ther. To keep the peace in our ex­tended Unix fam­ily, I’ve split these into two camps: the uni­ver­sal tricks that work on al­most any POSIX-ish shell (like sh on FreeBSD or ksh on OpenBSD), and the qual­ity-of-life ad­di­tions spe­cific to in­ter­ac­tive shells like Bash or Zsh.

These tricks rely on stan­dard ter­mi­nal line dis­ci­plines, generic Bourne shell be­hav­iors, or POSIX fea­tures. If you SSH into an em­bed­ded router from 2009, a fresh OpenBSD box, or a min­i­mal Alpine con­tainer, these will still have your back.

Why shuf­fle char­ac­ter-by-char­ac­ter when you can tele­port? These are stan­dard Emacs-style line-edit­ing bind­ings (via Readline or sim­i­lar), en­abled by de­fault in most mod­ern shells.

CTRL + W: You’re typing /var/log/nginx/ but you ac­tu­ally meant /var/log/apache2/. You have two choices: hold down Backspace un­til your soul leaves your body, or hit CTRL + W to in­stantly delete the word be­fore the cur­sor. Once you get used to this, hold­ing Backspace feels like dig­ging a hole with a spoon.

CTRL + U and CTRL + K: You typed out a beau­ti­fully crafted, 80-char­ac­ter rsync com­mand, but sud­denly re­al­ize you need to check if the des­ti­na­tion di­rec­tory ac­tu­ally ex­ists first. You don’t want to delete it, but you don’t want to run it. Hit CTRL + U to cut every­thing from the cur­sor to the be­gin­ning of the line. Check your di­rec­tory, and then hit CTRL + Y to paste (“yank”) your mas­ter­piece right back into the prompt. (CTRL + K does the same thing, but cuts from the cur­sor to the end of the line.)

CTRL + A and CTRL + E: Jump in­stantly to the be­gin­ning (A) or end (E) of the line. Stop reach­ing for the Home and End keys; they are miles away from the home row any­way.

ALT + B and ALT + F: Move back­ward (B) or for­ward (F) one en­tire word at a time. It’s the ar­row key’s much faster, much cooler sib­ling. (Mac users: you usu­ally have to tweak your ter­mi­nal set­tings to use Option as Meta for this to work).

re­set (or stty sane): While strictly more of a ter­mi­nal re­cov­ery tip than an in­ter­ac­tive shell trick, it be­longs here. We’ve all done it: you meant to cat a text file, but you ac­ci­den­tally cat a com­piled bi­nary or a com­pressed tar­ball. Suddenly, your ter­mi­nal is spit­ting out an­cient runes and Wingdings, and your prompt is com­pletely il­leg­i­ble. Instead of clos­ing the ter­mi­nal win­dow in shame, type re­set (even if you can’t see the let­ters you’re typ­ing) and hit en­ter. Your ter­mi­nal will heal it­self.

CTRL + C: Cancel the cur­rent com­mand im­me­di­ately. Your emer­gency exit when a com­mand hangs, or you re­al­ize you’re tail­ing the wrong log file.

CTRL + D: Sends an EOF (End of File) signal. If you’re typing input to a command that expects it, this closes the stream. But if the command line is empty, it logs you out of the shell completely, so be careful where you press it.

CTRL + L: Your ter­mi­nal is clut­tered with stack traces, com­piler spaghetti, and pure dig­i­tal noise. Running the clear com­mand works, but what if you’re al­ready halfway through typ­ing a new com­mand? CTRL + L wipes the slate clean, throw­ing your cur­rent prompt right up to the top with­out in­ter­rupt­ing your train of thought.

cd -: The clas­sic chan­nel-flip­per. You’re deep down in /usr/local/etc/postfix and you need to check some­thing in /var/log. You type cd /var/log, look at the logs, and now you want to go back. Instead of typ­ing that long path again, type cd -. It switches you to your pre­vi­ous di­rec­tory. Run it again, and you’re back in logs. Perfect for tog­gling back and forth.

pushd and popd: If cd - is a tog­gle switch, pushd is a stack. Need to jug­gle mul­ti­ple di­rec­to­ries? pushd /etc changes to /etc but saves your pre­vi­ous di­rec­tory to a hid­den stack. When you’re done, type popd to pop it off the stack and re­turn ex­actly where you left off.
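In practice the pair looks like this (pushd and popd are built into Bash and Zsh, but are not POSIX):

```shell
pushd /etc > /dev/null   # jump to /etc, saving the previous directory on a stack
pwd                      # now in /etc; do some work here
popd > /dev/null         # pop the stack: back exactly where we started
pwd
```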

> file.txt: This empties a file completely without deleting and recreating it. Why does this matter? It preserves file permissions and ownership, and doesn’t interrupt processes that already have the file open. It’s much cleaner than echo "" > file.txt (which actually leaves a newline character) or rm file && touch file.
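A quick demonstration, using a hypothetical /tmp/app.log:

```shell
printf 'old contents\n' > /tmp/app.log   # a file with some content
chmod 640 /tmp/app.log
> /tmp/app.log        # truncated in place: still exists, same mode, zero bytes
wc -c < /tmp/app.log  # 0
```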

$_: In most shells, $_ ex­pands to the last ar­gu­ment of the pre­vi­ous com­mand-es­pe­cially use­ful in­ter­ac­tively or in sim­ple scripts when you need to op­er­ate on the same long path twice:
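For example (the path is made up for illustration):

```shell
mkdir -p /tmp/projects/2024/reports   # hypothetical long path
cd "$_"                               # $_ = last argument of the previous command
pwd                                   # now inside /tmp/projects/2024/reports
```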

No more re-typ­ing paths or de­clar­ing tem­po­rary vari­ables to en­ter a di­rec­tory you cre­ated a sec­ond ago.

If you are writ­ing shell scripts, put these at the top im­me­di­ately af­ter your she­bang. It will save you from de­ploy­ing chaos to pro­duc­tion.

* set -e: Exit on er­ror. Very use­ful, but no­to­ri­ously weird with edge cases (especially in­side con­di­tion­als like if state­ments, while loops, and pipelines). Don’t rely on it blindly as it can cre­ate false con­fi­dence. (Pro-tip: consider set -euo pipefail for a more ro­bust safety net, but learn its caveats first.)

* set -u: Treats ref­er­enc­ing an un­set vari­able as an er­ror. This pro­tects you from cat­a­strophic dis­as­ters like rm -rf /usr/local/${MY_TYPO_VAR}/* ac­ci­den­tally ex­pand­ing into rm -rf /usr/local/*.
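A minimal strict-mode header, plus a demonstration of set -u catching the typo scenario above (MY_TYPO_VAR is the article's hypothetical variable):

```shell
#!/usr/bin/env bash
set -euo pipefail    # exit on error, unset vars are errors, pipelines fail early

# With -u, an unset variable aborts the command instead of silently
# expanding to an empty string (no accidental rm -rf /usr/local/*).
if bash -c 'set -u; : "/usr/local/${MY_TYPO_VAR}/"' 2>/dev/null; then
  echo "typo slipped through"
else
  echo "unset variable caught"
fi
```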

If you’re on a Linux box or us­ing a mod­ern in­ter­ac­tive shell, these are the tools that make the CLI feel less like a rusty bi­cy­cle and more like some­thing that ac­tu­ally re­sponds when you steer.

CTRL + R: Reverse in­cre­men­tal search. Stop press­ing the up ar­row forty times to find that one awk com­mand you used last Tuesday. Press CTRL + R, start typ­ing a key­word from the com­mand, and it mag­i­cally pulls it from your his­tory. Press CTRL + R again to cy­cle back­wards through matches.

!!: This expands to the entirety of your previous command. Its most famous use case is the "Permission denied" walk of shame. You confidently type systemctl restart nginx, hit enter, and the system laughs at your lack of privileges. Instead of retyping it, run sudo !!, which expands to sudo systemctl restart nginx.

It’s your way of telling the shell, Do what I said, but this time with au­thor­ity.”

CTRL + X, then CTRL + E: You start typ­ing a quick one-liner. Then you add a pipe. Then an awk state­ment. Soon, you’re edit­ing a four-line mon­ster in­side your prompt and nav­i­ga­tion is get­ting dif­fi­cult. Hit CTRL + X fol­lowed by CTRL + E (in Bash; in Zsh, this needs con­fig­ur­ing). This drops your cur­rent com­mand into your de­fault text ed­i­tor (like Vim or Nano). You can edit it with all the power of a proper ed­i­tor, save, and exit. The shell then ex­e­cutes the com­mand in­stantly.

fc: The highly portable, tra­di­tional sib­ling to CTRL+X CTRL+E. Running fc opens your pre­vi­ous com­mand in your $EDITOR. It works across most shells and is a fan­tas­tic hid­den gem for fix­ing com­plex, multi-line com­mands that went wrong.

ESC + . (or ALT + .): Inserts the last ar­gu­ment of the pre­vi­ous com­mand right at your cur­sor. Press it re­peat­edly to cy­cle fur­ther back through your his­tory, drop­ping the ex­act file­name or pa­ra­me­ter you need right into your cur­rent com­mand.

!$: The non-in­ter­ac­tive sib­ling of ESC + .. Unlike ESC + . (which in­serts the text live at your cur­sor for you to re­view or edit), !$ ex­pands blindly at the ex­act mo­ment you hit en­ter.

(Pro-Tip: For script­ing or stan­dard sh, use the $_ vari­able men­tioned ear­lier in­stead!)

Brace ex­pan­sion is pure magic for avoid­ing repet­i­tive typ­ing, es­pe­cially when do­ing quick back­ups or re­names.

The Backup Expansion: Need to edit a critical config file and want to make a quick backup first? cp config.conf{,.bak} expands to cp config.conf config.conf.bak, leaving the original untouched.

The same trick renames: mv filename.{txt,md} expands to mv filename.txt filename.md. Fast, elegant, and makes you look like a wizard.

Need mul­ti­ple di­rec­to­ries? mkdir -p pro­ject/{​src,tests,docs} cre­ates all three at once.
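The same patterns in one runnable sketch (file names invented; run in Bash or Zsh):

```shell
cd "$(mktemp -d)"
touch notes.txt

mv notes.{txt,md}                  # rename: expands to mv notes.txt notes.md
cp notes.md{,.bak}                 # backup: expands to cp notes.md notes.md.bak

mkdir -p project/{src,tests,docs}  # three directories in one command
```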

<(command): Treats the output of a command as if it were a file. Say you want to diff the sorted versions of two files. Traditionally, you'd sort them into temporary files, diff those, and clean up. Process substitution skips the middleman:
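A sketch with two throwaway files (Bash syntax):

```shell
cd "$(mktemp -d)"
printf 'banana\napple\n' > one.txt
printf 'apple\nbanana\n' > two.txt

# <(cmd) makes each sort's output look like a file to diff; no temp files needed
diff <(sort one.txt) <(sort two.txt) && echo "same contents, different order"
```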

** (Globstar): find is a great com­mand, but some­times it feels like overkill. If you run shopt -s glob­star in Bash (it’s en­abled by de­fault in Zsh), ** matches files re­cur­sively in all sub­di­rec­to­ries. Need to find all JavaScript files in your cur­rent di­rec­tory and every­thing be­neath it?
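For instance, with a small invented tree:

```shell
cd "$(mktemp -d)"
mkdir -p src/components
touch app.js src/util.js src/components/button.js

shopt -s globstar          # Bash only; Zsh supports ** out of the box
printf '%s\n' **/*.js      # lists all three .js files, at every depth
```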

CTRL + Z, then bg, then dis­own: You started a mas­sive, hour-long data­base im­port task, but you for­got to run it in tmux or screen. It’s ty­ing up your ter­mi­nal, and if your SSH con­nec­tion drops, the process dies. Panic sets in.

Hit CTRL + Z to suspend the process, then type bg to let it resume running in the background. Your prompt is free!

Type dis­own to de­tach it from your shell en­tirely. You can safely close your lap­top, grab a cof­fee, and the process will sur­vive.
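CTRL + Z itself can't be scripted, but the survival part can be sketched (sleep stands in for the hour-long import):

```shell
sleep 300 &        # stand-in for the long-running import, already backgrounded
pid=$!

disown             # remove the job from this shell's table; no SIGHUP on logout
kill -0 "$pid"     # still running, now independent of the shell

kill "$pid"        # clean up the demo process
```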

com­mand |& tee file.log: Standard pipes (|) only catch stan­dard out­put (std­out). If a script throws an er­ror (stderr), it skips the pipe and bleeds di­rectly onto your screen, miss­ing the log file. |& pipes both std­out and stderr (it’s a help­ful short­hand for 2>&1 |).

Throw in tee, and you get to watch the out­put on your screen while si­mul­ta­ne­ously sav­ing it to a log file. It’s the equiv­a­lent of watch­ing live TV while record­ing it to your DVR.
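A minimal sketch (the "build" output is faked with echo):

```shell
cd "$(mktemp -d)"

# stdout and stderr both reach the screen *and* build.log
{ echo "compiling widget.c"; echo "warning: low disk space" >&2; } |& tee build.log
```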

The shell is a tool­box, not an ob­sta­cle course. You don’t need to mem­o­rize all of these to­day. Pick just one trick, force it into your daily habits for a week, and then pick an­other. Stop let­ting the ter­mi­nal push you around, and start re­ar­rang­ing the fur­ni­ture. It’s your house now.

...

Read the original on blog.hofstede.it »

8 332 shares, 11 trendiness

Updates to GitHub Copilot interaction data usage policy

Today, we’re an­nounc­ing an up­date on how GitHub will use data to de­liver more in­tel­li­gent, con­text-aware cod­ing as­sis­tance. From April 24 on­ward, in­ter­ac­tion data—specif­i­cally in­puts, out­puts, code snip­pets, and as­so­ci­ated con­text—from Copilot Free, Pro, and Pro+ users will be used to train and im­prove our AI mod­els un­less they opt out. Copilot Business and Copilot Enterprise users are not af­fected by this up­date.

Not interested? Opt out in settings under "Privacy." If you previously opted out of the setting allowing GitHub to collect this data for product improvements, your preference has been retained: your choice is preserved, and your data will not be used for training unless you opt in.

This ap­proach aligns with es­tab­lished in­dus­try prac­tices and will im­prove model per­for­mance for all users. By par­tic­i­pat­ing, you’ll help our mod­els bet­ter un­der­stand de­vel­op­ment work­flows, de­liver more ac­cu­rate and se­cure code pat­tern sug­ges­tions, and im­prove their abil­ity to help you catch po­ten­tial bugs be­fore they reach pro­duc­tion.

Our ini­tial mod­els were built us­ing a mix of pub­licly avail­able data and hand-crafted code sam­ples. This past year, we’ve started in­cor­po­rat­ing in­ter­ac­tion data from Microsoft em­ploy­ees and have seen mean­ing­ful im­prove­ments, in­clud­ing in­creased ac­cep­tance rates in mul­ti­ple lan­guages.

The im­prove­ments we’ve seen by in­cor­po­rat­ing Microsoft in­ter­ac­tion data in­di­cate we can im­prove model per­for­mance for a more di­verse range of use cases by train­ing on real-world in­ter­ac­tion data. Should you de­cide to par­tic­i­pate in this pro­gram, the in­ter­ac­tion data we may col­lect and lever­age in­cludes:

Outputs ac­cepted or mod­i­fied by you

Inputs sent to GitHub Copilot, in­clud­ing code snip­pets shown to the model

This pro­gram does not use:

Interaction data from users who opt out of model train­ing in their Copilot set­tings

Content from your is­sues, dis­cus­sions, or pri­vate repos­i­to­ries at rest. We use the phrase at rest” de­lib­er­ately be­cause Copilot does process code from pri­vate repos­i­to­ries when you are ac­tively us­ing Copilot. This in­ter­ac­tion data is re­quired to run the ser­vice and could be used for model train­ing un­less you opt out.

The data used in this pro­gram may be shared with GitHub af­fil­i­ates, which are com­pa­nies in our cor­po­rate fam­ily in­clud­ing Microsoft. This data will not be shared with third-party AI model providers or other in­de­pen­dent ser­vice providers.

We be­lieve the fu­ture of AI-assisted de­vel­op­ment de­pends on real-world in­ter­ac­tion data from de­vel­op­ers like you. It’s why we’re us­ing Microsoft in­ter­ac­tion data for model train­ing and will be­gin us­ing in­ter­ac­tion data from GitHub em­ploy­ees as well.

If you choose to help us im­prove our mod­els with your in­ter­ac­tion data, thank you. Your con­tri­bu­tions make a mean­ing­ful dif­fer­ence in build­ing AI tools that serve the en­tire de­vel­oper com­mu­nity. If you pre­fer not to par­tic­i­pate, that’s fine too—you will still be able to take full ad­van­tage of the AI fea­tures you know and love.

Together, we can con­tinue to build AI that ac­cel­er­ates your work­flows and em­pow­ers you to build bet­ter, more se­cure soft­ware faster than ever.

If you have ques­tions, visit our FAQ and re­lated dis­cus­sion.

Mario Rodriguez leads the GitHub Product team as Chief Product Officer. His core iden­tity is be­ing a learner and his pas­sion is cre­at­ing de­vel­oper tools—so much so that he has spent the last 20 years liv­ing that mis­sion in lead­er­ship roles across Microsoft and GitHub. Mario most re­cently over­saw GitHub’s AI strat­egy and the GitHub Copilot prod­uct line, launch­ing and grow­ing Copilot across thou­sands of or­ga­ni­za­tions and mil­lions of users. Mario spends time out­side of GitHub with his wife and two daugh­ters. He also co-chairs and founded a char­ter school in an ef­fort to progress ed­u­ca­tion in rural re­gions of the United States.


...

Read the original on github.blog »

9 328 shares, 14 trendiness

Claude's Code

A quick read on momentum, adoption, and where the current activity is clustering. These are the earliest observed public-era Claude Code commits we can verify after launch. Multiple same-day candidates may exist, so this is suggestive rather than definitive.

Change initial game setup to always have exactly 1 correct can

This im­proves the start­ing con­di­tion by en­sur­ing play­ers al­ways be­gin with ex­actly

one can in the cor­rect po­si­tion, mak­ing the ini­tial game state more con­sis­tent.

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>

Original repos (non-forks) with their first observed Claude Code commit in the last 7 days

fix: en­able adap­tive thresh­old + add 22 bridge chunks for top fail­ure pro­to­cols

- Enable adap­tive thresh­old retry for ALL agency searches (not just high-ac­cu­racy)

- Add dense bridge chunks for top 20 fail­ing pro­to­cols cov­er­ing ~130 test fail­ures

- Protocols: 1203 1209 1210 1211 1212 1213 1215 1216 1219 1220 1222 1223 1225 1229 1232 1237 1241 1244 1302 510 518 519feat: add pro­ject sta­tis­tics with charts (bugs cre­ated vs re­solved, res­o­lu­tion time evo­lu­tion)

- Backend: add get­Time­Series() to glob­al­Task­Store for day-by-day cre­ated/​re­solved

counts, res­o­lu­tion time evo­lu­tion, and open tick­ets over time

- Backend: add GET /api/agents/tasks/stats/timeseries?project=X&days=N end­point

- Frontend: new ProjectStats com­po­nent with Chart.js graphs (created vs re­solved

bar chart, res­o­lu­tion time line chart, open tick­ets area chart, state du­ra­tions)

- Frontend: in­te­grate ProjectStats into ProjectsView - click­ing any pro­ject card

opens the sta­tis­tics panel with all charts

- Frontend: add get­Pro­ject­TaskStats and get­Pro­ject­Time­Series API meth­ods

(by CLAUDIO)

Use NHTSA instead of AutoAstat for VIN data loading in CarCaseDetail

Replace the removed fetchAutoAstat button with a new "Завантажити з NHTSA" ("Load from NHTSA")

but­ton that calls vinApi.de­code(vin) and up­dates the case fields di­rectly.

Shows warn­ing toast when mock data is re­turned (VIN not in NHTSA DB).

https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

feat(all): add DBSCAN clustering, FFT plan caching, RBF/Akima interpolation, BFGS optimizer, and expand constants/special/stats

Cluster (+126 lines): DBSCAN den­sity-based clus­ter­ing with con­fig­urable eps

and min_sam­ples, re­turn­ing clus­ter la­bels with -1 for noise points.

Constants (+108 lines): Comprehensive phys­i­cal con­stants mod­ule (speed of

light, Planck, Boltzmann, Avogadro, elec­tron mass, pro­ton mass, el­e­men­tary

charge, grav­i­ta­tional con­stant, etc.) match­ing scipy.con­stants sur­face.

FFT (+94 lines): Plan caching for re­peated FFT sizes, re­duc­ing plan­ning

over­head for it­er­a­tive al­go­rithms. Adds real-to-com­plex (rfft/irfft)

op­ti­mized paths.

Interpolate (+216 lines): RBF (radial ba­sis func­tion) in­ter­po­la­tion with

mul­ti­quadric/​in­verse_­mul­ti­quadric/​gauss­ian/​lin­ear/​cu­bic ker­nels. Akima

in­ter­po­la­tion (subspline with re­duced over­shoot). RegularGridInterpolator

for N-dimensional in­ter­po­la­tion on reg­u­lar grids.

Linalg (+32/-32): Refactors ma­trix op­er­a­tion sig­na­tures for con­sis­tency,

re­plac­ing ad-hoc pa­ra­me­ter or­der­ing with stan­dard­ized (matrix, n, …) form.

Ndimage (+8/-8): Minor cleanup of fil­ter axis val­i­da­tion mes­sages.

Optimize (+128 lines): BFGS quasi-New­ton op­ti­mizer with Wolfe line search,

in­verse Hessian ap­prox­i­ma­tion, and gra­di­ent con­ver­gence de­tec­tion.

Signal (+12/-12): Consistency fixes for win­dow func­tion pa­ra­me­ter val­i­da­tion.

Special: Simplifies Airy func­tion im­ple­men­ta­tion (+48/-48), fixes

con­ve­nience func­tion pa­ra­me­ter pass­ing, up­dates re-ex­ports.

Stats (+6/-6): Minor fixes to dis­tri­b­u­tion pa­ra­me­ter edge cases.

Integrate (+4/-4): Quadrature tol­er­ance con­sis­tency fixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <>

Update CogPR-57 doctrine: mark all three defects as fixed (tic 108)

CogPR-57 mandate lifecycle fix: race guard, concurrency guard, reconcile-first

Three struc­tural de­fect fixes (authorized at tic 107 re­view, not tech debt):

1. cgg-gate.sh: Re-validate man­date sta­tus be­fore in­line light­weight

con­sump­tion — pre­vents dou­ble-con­sump­tion race with mogul-run­ner

2. re­view SKILL.md: Concurrency guard at steps 5.5 and 8.5 — check

cur­rent.json sta­tus be­fore writ­ing man­dates or spawn­ing Mogul.

Running/pending man­dates are not over­writ­ten.

3. ses­sion-re­store.sh: Reconcile-first cy­cle com­pu­ta­tion — read pre­vi­ous

man­date tic_­con­text as pri­mary sched­ule source, mod­ulo as fall­back

only when no pre­vi­ous con­text ex­ists. Estate snap­shot can add cy­cles

but not re­place sched­ule. Eliminates re­com­pu­ta­tion drift.

Runtime parity: both hooks synced to ~/.claude/hooks/ and verified.

Disable AutoAstat button in CarCaseDetail

Remove the "Завантажити дані" ("Load data") button that called fetch-autoastat

from the case de­tail page. AutoAstat is not used for now.

https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

feat: add Redis-backed multi-message buffering with whatsapp-agentkit skill

- New agent/​buffer.py: Manages mes­sage buffer­ing with Redis

- Groups mul­ti­ple mes­sages into co­her­ent con­text

- Configurable time­out (default 2.5s)

- Automatic dedu­pli­ca­tion (webhook re­tries)

- Backpressure han­dling (max 15 mes­sages/​buffer)

- Age-based topic sep­a­ra­tion (5 min max)

- Structured JSON log­ging

- Updated agent/​main.py:

- Integrates buffer­_­man­ager in web­hook han­dler

- Single uni­fied re­sponse in­stead of per-mes­sage

- Connects to Redis on startup

- Graceful degra­da­tion if Redis un­avail­able

- Improved struc­tured log­ging

- Dependencies: Added re­dis + python-json-log­ger

- Config: New REDIS_URL, BUFFER_TIMEOUT_MS, MAX_BUFFER_AGE_MS env vars

Implements whatsapp-agentkit skill (RED-GREEN-REFACTOR tested)

Fix NHTSA VIN loading and add Consignor/Transport/PreviousDocs sections

- Add ded­i­cated GET /api/vin/decode/{vin} end­point that calls NHTSA

di­rectly with­out re­quir­ing case cre­ation first, elim­i­nat­ing the

two-step cre­ate-then-fetch flow that caused the load­ing er­ror

- Update NewCarCase.tsx: use vinApi.de­code() for VIN lookup (cleaner,

more spe­cific er­ror mes­sages, shows brand/​model/​year on suc­cess)

- Add Consignor sec­tion: sender name, coun­try, ad­dress, EORI num­ber

- Add Transport sec­tion: type (sea/road/rail/air), ves­sel/​ve­hi­cle ID,

flag coun­try, cross­ing point, ex­pected date, doc­u­ment type & num­ber

- Add PreviousDocs (Graph 44) sec­tion: dy­namic list of pre­vi­ous cus­toms

doc­u­ments (T1/T2/TIR/EUR-1/CMR/MRN) with type, num­ber, date, is­suer

- Add back­end model fields + mi­gra­tion 014 for all new columns

- Update CarCaseCreate/Update/Response schemas with new fields

- Add vinApi to fron­tend api.ts and new fields to types/​in­dex.ts

https://claude.ai/code/session_01XNhJmFU2Jzmc5B1FZTwGWK

fix: prevent CDN caching of empty search results

Empty re­sults were get­ting cached at CDN (Fastly) for 1 hour via

Cache-Control: pub­lic, max-age=3600. After de­ploy­ing thresh­old/​dic­tio­nary

fixes, old NO RESULTS re­sponses kept be­ing served from CDN cache.

Now: empty results get Cache-Control: no-store

Add README.md for repo and quick reference header to SKILL.md

fix(backend): add Dockerfile with python:3.11-slim to reduce image size

Replaces Nixpacks build (5.7GB) with ex­plicit Dockerfile us­ing slim base

im­age + CPU-only PyTorch to stay un­der Railway’s 4GB trial limit.

Co-Authored-By: Claude Sonnet 4.6 <>

fix: add missing New York State and Queens NY locale config files

These were reg­is­tered in gen­er­ate-lo­cale-ques­tions.ts in a prior com­mit

but the ac­tual con­fig files were never staged, caus­ing TS2307 mod­ule

not found er­rors on Render.

Co-Authored-By: Claude Sonnet 4.6 <>

fix: lower similarity and quality thresholds for better recall

...

Read the original on www.claudescode.dev »

10 303 shares, 12 trendiness

FreeCAD Version 1.1 Released

After an enormous amount of work and dedication from FreeCAD contributors, we are delighted to announce that FreeCAD Version 1.1 is now released and available for download.

There are significant improvements and new features. These include transparent Part Design previews, interactive draggers added to tools like Fillet and Chamfer, three-point lighting, a Clarify Selection tool, Assembly and FEM improvements and animations, a totally new CAM tool library system, and much more.

For a full list of changes and new features, check out the Release Notes, and if you want to support the ongoing development of FreeCAD, consider making a donation!

...

Read the original on blog.freecad.org »
