10 interesting stories served every morning and every evening.




1 1,393 shares, 53 trendiness

Protect Digital Privacy in the EU


🚨 The Conservatives (EPP) are attempting to force a new vote on Thursday (26th), seeking to reverse Parliament’s NO on indiscriminate scanning. This is a direct attack on democracy and blatant disregard for your right to privacy. No means no. Take action now!

...

Read the original on fightchatcontrol.eu »

2 791 shares, 35 trendiness

Running Tesla Model 3's Computer on My Desk Using Parts From Crashed Cars

Tesla runs a bug bounty program that invites researchers to find security vulnerabilities in their vehicles. To participate, I needed the actual hardware, so I started looking for Tesla Model 3 parts on eBay. My goal was to get a Tesla car computer and touchscreen running on my desk, booting the car’s operating system.

The car computer consists of two parts - the MCU (Media Control Unit) and the autopilot computer (AP) layered on top of each other. In the car, the computer is located in front of the passenger seat, roughly behind the glovebox. The part itself is the size of an iPad and the thickness of a ~500-page book, and is covered in a water-cooled metal casing:

By searching for “Tesla Model 3 MCU” on eBay, I found quite a lot of results in the $200-$300 USD price range. Looking at the listings, I found that many of these sellers are “salvaging” companies who buy crashed cars, take them apart, and list all parts for sale individually. Sometimes, they even include a photo of the original crashed car and a way to filter their listings for parts extracted from the same vehicle.

To boot the car up and interact with it, I needed a few more things:

* A power supply

* The Model 3 touchscreen

* The display cable to connect them together

For the power supply, I went with an adjustable 0-30V model from Amazon. There was a 5 ampere and a 10A version available; at the time, I figured it’s safer to have some headroom and went with the 10A version - it was a very good decision, as it later turned out the full setup could consume up to 8A at peak times. The Model 3 screens were surprisingly expensive on eBay; I assume that is because it is a popular part to replace. I found a pretty good deal for 175 USD.

The last and most difficult part to order was the cable which connects the MCU to the screen. I needed this because both the computer and the screen were being sold with the cables cut a few centimeters after the connector (interestingly, most sellers did that instead of just unplugging the cables).

This is when I discovered that Tesla publishes the wiring “Electrical Reference” for all of its cars publicly. On their service website, you can look up a specific car model, search for a component (such as the display), and it will show you exactly how the part should be wired up, what cables/connectors are used, and even what the different pins are responsible for inside a single connector:

Turns out the display uses a 6-pin cable (2 for 12V and ground, 4 for data) with a special Rosenberger 99K10D-1D5A5-D connector. I soon discovered that unless you are a car manufacturer ordering in bulk, there is no way you are buying a single Rosenberger cable like this. No eBay listings, nothing on AliExpress, essentially no search results at all.

After digging around a bit, I found that this cable is very similar to a more widely used automotive cable called LVDS, which is used to transfer video in BMW cars. At first sight, the connectors looked like a perfect match to my Rosenberger, so I placed an order:

The computer arrived first. To attempt to power it on, I looked up which pin of which connector I needed to attach 12V and ground to, using the Tesla schematics and the few pictures online of people doing the same desk-MCU setup. Since the computer included the shortly-cut cables, I was able to strip the relevant wires and attach the power supply’s clips to the right ones:

I saw a couple of red LEDs start flashing, and the computer started up! Since I had no screen yet, there were not many ways to interact with the car. Reading @lewurm’s previous research on GitHub, I knew that, at least in older car versions, there was a network inside the car, with some components having their own webserver. I connected an Ethernet cable to the port next to the power connector and to my laptop.

This network does not have DHCP, so you have to manually set your IP address. The IP you select has to be 192.168.90.X/24, and should be higher than 192.168.90.105 to not conflict with other hosts on the network. On Reddit, I found the contents of an older /etc/hosts file from a car which shows the hosts that are normally associated with specific IPs:
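On Linux, a one-off static address like this can be added with `ip` - a sketch only, since the interface name (`eth0` here) varies by machine and the chosen host address is an arbitrary pick above 192.168.90.105:

```shell
# Give the laptop's wired interface an address on the car's subnet.
# "eth0" is a placeholder - check your interface name with: ip link
sudo ip addr add 192.168.90.110/24 dev eth0

# The MCU should now answer at 192.168.90.100
ping -c 1 192.168.90.100
```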

@lewurm’s blog mentioned that SSH on port :22 and a webserver on :8080 were open on 192.168.90.100, the MCU. Was this still the case on newer models? Yes!

I had already found 2 services to explore on the MCU:

* An SSH server which states “SSH allowed: vehicle parked” - quite funny given the circumstances. This SSH server requires specially signed SSH keys which only Tesla is supposed to be able to generate. Interestingly, Tesla offers a “Root access program” on their bug bounty program: researchers who find at least one valid “rooting” vulnerability will receive a permanent SSH certificate for their own car, allowing them to log in as root and continue their research further - a nice perk, as it is much easier to find additional vulnerabilities once you are on the inside.

* A REST-like API on :8080 which returned a history of “tasks”. This service is called ODIN (On-Board Diagnostic Interface Network), and is intentionally exposed to be used by Tesla’s diagnostics tool “Toolbox”.

Around this time, I also removed the metal shielding to see exactly what the boards look like inside. You can see the two different boards which were stacked on top of each other:

Once the screen and the BMW LVDS cable arrived, it unfortunately became clear that the connector was not going to fit. The BMW connector was much thicker on the sides, and it was not possible to plug it into the screen. This led to some super sketchy improvised attempts to strip the two original “tail” cables from the MCU and the screen and connect the individual wires together. The wires were really sensitive and thin. The setup worked for a couple of seconds, but caused wire debris to fall on the PCB and short it, burning one of the power controller chips:

It was extremely hard to find the name/model of the chip that got burned, especially since part of the text printed on it had become unreadable due to the damage. To be able to continue with the project, I had to order a whole other car computer.

In the meantime, my friend Yasser (@n3r0li) somehow pulled off the impossible and identified it as the “MAX16932CATIS/V+T” step-down controller, responsible for converting power down to lower voltages. We ordered the chip and took the board to a local PCB repair shop, where they successfully replaced it and fixed the MCU. Now I had two computers to work with.

So I really did need that Rosenberger cable; there was no getting around it.

After having no luck finding it online and even visiting a Tesla service center in London (an odd encounter, to say the least), I had to accept what I had been trying to avoid: buying an entire Dashboard Wiring Harness.

Back in the Tesla Electrical Reference, in addition to the connectors, one can find every part number. Looking at the cable which connects the MCU to the screen, the number 1067960-XX-E appears. Searching for it on eBay brings up this monstrosity:

Turns out that actual cars don’t have individual cables. Instead, they have these big “looms”, which bundle many cables from a nearby area into a single harness. This is the reason why I could not find the individual cable earlier: they simply don’t manufacture it. Unfortunately, I had no other choice but to buy this entire loom for 80 USD.

Despite how bulky it was, the loom worked perfectly. The car booted, the touchscreen started up, and I had a working car computer on my desk, running the car’s operating system!

With the system running, I can now start playing with the user interface, interacting with the exposed network interfaces, exploring the CAN buses, and perhaps even attempting to extract the firmware.

...

Read the original on bugs.xdavidhu.me »

3 682 shares, 67 trendiness

Personal Encyclopedias — whoami.wiki

Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them, spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school - just around the time when we got our first smartphone, after which all photos were backed up online.

Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photograph, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures that were taken around the time when my grandfather was in his mid-20s.

As I got done with grouping all of them, I was able to see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like it was taken during my grandparents’ wedding, but I didn’t know the chronological order they were taken in, because EXIF metadata didn’t exist around that time.

So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down and recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.

After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia, so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as reference and drafted a page starting with the classic infobox and the lead paragraph.

I split up the rest of the content into sections and filled them with everything I could verify, like dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. Every photo placement came with a follow-up task: writing a descriptive caption for it.

Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link to real pages that provided wider context - venues, rituals, and the political climate around that time, for instance a legal amendment that was relevant to the wedding ceremony.

In two evenings, I was able to document a full backstory for the photos in a neat article. These two evenings also made me realize just how powerful encyclopedia software is for recording and preserving media and knowledge that would’ve otherwise been lost over time.

This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.

I got help from r/genealogy on how to approach recording oral history and was given resources to better conduct interviews - shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.

Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.

After finding all the stories behind the physical photos, I started to work on the digital photos and videos I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.

This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos from a family trip to Coorg back in 2012.

I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets, so it could browse through multiple photos at once.
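For reference, a contact sheet like that can be produced with ImageMagick’s `montage` tool - a sketch only, with a hypothetical input path and grid size:

```shell
# Tile photos into 6x5 grids of small thumbnails. When there are more
# images than tiles, montage writes multiple numbered output sheets
# (sheet-0.jpg, sheet-1.jpg, ...). The input path is a placeholder.
montage ~/photos/coorg-2012/*.jpg -tile 6x5 -geometry 320x240+2+2 sheet.jpg
```

Each sheet packs 30 photos into a single image, which is far cheaper for a vision model to scan than 30 separate uploads.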

A few minutes (and more than a couple of tokens) later, it had created a compelling draft with a detailed account of everything we did during the trip, by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones that I had since forgotten. It picked up details on the modes of transportation we used to get between places just from what it could see.

After I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. Now that I had a detailed outline ready, the page still only had content based on the available data, so to fill in the gaps I shared a list of anecdotes from my point of view, and the model inserted them where the narrative called for them.

The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro, which included geographical coordinates as part of the EXIF metadata.

On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and my Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.

Here are some of the things it did across multiple runs:

* It cross-referenced my bank transactions with location data to ascertain the restaurants I went to.

* Some of the photos and videos showed me in attendance at a soccer match; however, it was unknown which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.

* It looked up my Uber trips to figure out travel times and exact pickup and drop-off locations.

* It used my Shazam tracks to write about the kinds of songs that were playing at a place, like Cuban songs at a Cuban restaurant.

* In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.

The MediaWiki architecture worked well with the edits: for every new data source, it would make amendments like a real Wikipedia contributor would. I leaned heavily on features that already existed - talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this; it was all just there.

What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.

So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.

The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read as if they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.

This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia: a structured, browsable, interconnected account of my life, compiled from the data I already had lying around.

I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.

A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.

Oh, and it’s genuinely fun! Putting together the encyclopedia felt like the early days of the Facebook timeline: browsing through finished pages, following links between people and events, and stumbling on details I had forgotten.

But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known - her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data; it made me pay closer attention to the people in my life.

Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours: it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!

...

Read the original on whoami.wiki »

4 540 shares, 82 trendiness

Tuta (@tuta.com)

You did it! 🥳

European Parliament just decided that Chat Control 1.0 must stop.

This means on April 6, 2026, Gmail, LinkedIn, Microsoft and other Big Techs must stop scanning your private messages in the EU. #PrivacyWins 💪


...

Read the original on bsky.app »

5 477 shares, 18 trendiness

ARC-AGI-3

ARC-AGI-3 is an interactive reasoning benchmark which challenges AI agents to explore novel environments, acquire goals on the fly, build adaptable world models, and learn continuously.

A 100% score means AI agents can beat every game as efficiently as humans.

Instead of solving static puzzles, agents must learn from experience inside each environment - perceiving what matters, selecting actions, and adapting their strategy without relying on natural-language instructions.

...

Read the original on arcprize.org »

6 449 shares, 18 trendiness

Apple randomly closes bug reports unless you “verify” the bug remains unfixed

Why do I file bug reports with Apple Feedback Assistant? I plead insanity. Or perhaps addiction. I seesaw between phases of abstinence and falling off the wagon. I’ve even tried organizing a public boycott of Feedback Assistant, with a list of demands to improve the experience for users, but the boycott never caught on with other developers. Regardless, an incentive still exists to file bug reports, because Apple actually fixes some of my bugs. My main complaint about the bug reporting process is not the unfixed bugs but rather the disrespect for bug reports and the people who file them. Apple intentionally wastes our time with no regrets, as if our time had no value, as if we had some kind of duty to serve Apple.

In March 2023, I filed FB12088655, “Privacy: Network filter extension TCP connection and IP address leak.” I mentioned this bug report at the time in a blog post, which included the same steps to reproduce and example Xcode project that I provided to Apple. In the three years since I filed the bug report, I received no response whatsoever from Apple… until a couple of weeks ago, when Apple asked me to “verify” the issue with macOS 26.4 beta 4 and update my bug report.

I install the WWDC betas every year in June but don’t run OS betas after September, when the major OS updates are released. I don’t have enough time, or indeed enough Apple devices, to be an unpaid tester year round. Thus, verifying issues in betas is a hassle for me. I’ve been burned by such requests in the past, asked by Apple to verify issues in betas that were not fixed, so I asked Apple directly whether beta 4 fixed the bug: they should already know, since they have my steps to reproduce! However, their response was evasive, never directly answering my question. Moreover, they threatened to close my bug report and assume the bug is fixed if I didn’t verify within two weeks! Again, this is after Apple silently sat on my bug report for three years.

Although I didn’t install the beta myself, I spoke to the developers of Little Snitch, who do run the macOS betas, and they kindly informed me that in their testing, they could still reproduce my issue with macOS 26.4 beta 4. It was no surprise, then, that when I updated to macOS 26.4, released to the public yesterday by Apple, I could still reproduce the bug with my instructions and example project. It appears that Apple knowingly sent me on a wild goose chase, demanding that I “verify” a bug they did nothing to fix, perhaps praying that the bug had magically disappeared on its own, with no effort from Apple.

By the way, a few weeks ago I published a blog post about another bug, FB22057274, “Pinned tabs: slow-loading target=“_blank” links appear in the wrong tab,” which is also 100% reproducible but nonetheless was marked by Apple with the resolution “Investigation complete - Unable to diagnose with current information.” On March 9, I updated the bug report, asking what additional information Apple needs from me - they never asked for more information - but I’ve yet to receive a response.

I can only assume that some bozos in Apple leadership incentivize underlings to close bug reports, no matter whether the bugs are fixed. Out of sight, out of mind. Apple’s internal metrics probably tell them that they have no software quality problem, because the number of open bug reports is kept artificially low.

Ironically, the iPadOS 26.4 betas introduced a Safari crashing bug that I reported a month ago, but Apple failed to fix the bug before the public release. What’s the purpose of betas? As far as I can tell, the purpose is just to annoy people who file bugs, without doing anything useful.

Shortly after this blog post hit the front page of Hacker News yesterday, my “Investigation complete - Unable to diagnose with current information” Feedback FB22057274 was updated by Apple. What an amazing coincidence! Unfortunately, the update was not helpful, because Apple requested a sysdiagnose. For a user interface issue! This was precisely the fear I expressed in my earlier blog post:

I honestly don’t know what additional information Apple needs to diagnose it. I included not only steps to reproduce but also multiple screen recordings to illustrate. I have a suspicion that Apple did not even read my bug report, because I did not attach a sysdiagnose report. But a privacy-violating sysdiagnose would not be useful in this case!

The only trick in my bug report is that I used Little Snitch to simulate a slow-loading link. This was just the easiest way I could think of to reliably reproduce the bug. There are of course other ways to simulate a slow-loading link; if Apple Safari engineers of all people somehow can’t figure that out, then they aren’t qualified for their jobs. Again, however, the more likely explanation is that my feedback was ignored because it did not include a pro forma sysdiagnose - but who knows, because Apple did not request more information of any kind from me.

Here is my response this morning to Apple’s request:

You shouldn’t need a sysdiagnose, and I don’t know how a sysdiagnose would possibly be helpful for a user interface bug.

I found an easy way to reproduce the issue without Little Snitch: use the Network Link Conditioner preference pane from the Xcode Additional Tools download, and create a profile with Uplink Delay 3000 ms.

The Xcode Additional Tools, which include a number of useful utilities, can be found in the Apple Developer Downloads (sign-in required).

...

Read the original on lapcatsoftware.com »

7 394 shares, 37 trendiness

Shell Tricks That Actually Make Life Easier (And Save Your Sanity)

There is a distinct, visceral kind of pain in watching an otherwise brilliant engineer hold down the Backspace key for six continuous seconds to fix a typo at the beginning of a line.

We’ve all been there. We learn ls, cd, and grep, and then we sort of… stop. The terminal becomes a place we live in, but we rarely bother to arrange the furniture. We accept that certain tasks take forty keystrokes, completely unaware that the shell authors solved our exact frustration sometime in 1989.

Here are some tricks that aren’t exactly secret, but aren’t always taught either. To keep the peace in our extended Unix family, I’ve split these into two camps: the universal tricks that work on almost any POSIX-ish shell (like sh on FreeBSD or ksh on OpenBSD), and the quality-of-life additions specific to interactive shells like Bash or Zsh.

These tricks rely on standard terminal line disciplines, generic Bourne shell behaviors, or POSIX features. If you SSH into an embedded router from 2009, a fresh OpenBSD box, or a minimal Alpine container, these will still have your back.

Why shuffle character-by-character when you can teleport? These are standard Emacs-style line-editing bindings (via Readline or similar), enabled by default in most modern shells.

CTRL + W: You’re typing /var/log/nginx/ but you actually meant /var/log/apache2/. You have two choices: hold down Backspace until your soul leaves your body, or hit CTRL + W to instantly delete the word before the cursor. Once you get used to this, holding Backspace feels like digging a hole with a spoon.

CTRL + U and CTRL + K: You typed out a beautifully crafted, 80-character rsync command, but suddenly realize you need to check if the destination directory actually exists first. You don’t want to delete it, but you don’t want to run it. Hit CTRL + U to cut everything from the cursor to the beginning of the line. Check your directory, and then hit CTRL + Y to paste (“yank”) your masterpiece right back into the prompt. (CTRL + K does the same thing, but cuts from the cursor to the end of the line.)

CTRL + A and CTRL + E: Jump instantly to the beginning (A) or end (E) of the line. Stop reaching for the Home and End keys; they are miles away from the home row anyway.

ALT + B and ALT + F: Move backward (B) or forward (F) one entire word at a time. It’s the arrow keys’ much faster, much cooler sibling. (Mac users: you usually have to tweak your terminal settings to use Option as Meta for this to work.)

reset (or stty sane): While strictly more of a terminal recovery tip than an interactive shell trick, it belongs here. We’ve all done it: you meant to cat a text file, but you accidentally cat a compiled binary or a compressed tarball. Suddenly, your terminal is spitting out ancient runes and Wingdings, and your prompt is completely illegible. Instead of closing the terminal window in shame, type reset (even if you can’t see the letters you’re typing) and hit Enter. Your terminal will heal itself.

CTRL + C: Cancel the current command immediately. Your emergency exit when a command hangs, or you realize you’re tailing the wrong log file.

CTRL + D: Sends an EOF (End of File). If you’re typing input to a command that expects it, this closes the stream. But if the command line is empty, it logs you out of the shell completely, so be careful where you press it.

CTRL + L: Your terminal is cluttered with stack traces, compiler spaghetti, and pure digital noise. Running the clear command works, but what if you’re already halfway through typing a new command? CTRL + L wipes the slate clean, throwing your current prompt right up to the top without interrupting your train of thought.

cd -: The classic channel-flipper. You’re deep down in /usr/local/etc/postfix and you need to check something in /var/log. You type cd /var/log, look at the logs, and now you want to go back. Instead of typing that long path again, type cd -. It switches you to your previous directory. Run it again, and you’re back in the logs. Perfect for toggling back and forth.

pushd and popd: If cd - is a toggle switch, pushd is a stack. Need to juggle multiple directories? pushd /etc changes to /etc but saves your previous directory to a hidden stack. When you’re done, type popd to pop it off the stack and return exactly where you left off.
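A quick sketch of the stack behavior (the directory names are throwaway examples):

```shell
mkdir -p /tmp/proj /tmp/logs
cd /tmp/proj

pushd /tmp/logs > /dev/null   # jump to /tmp/logs, pushing /tmp/proj onto the stack
pwd                           # → /tmp/logs

popd > /dev/null              # pop the stack: back where we started
pwd                           # → /tmp/proj
```

Note that pushd and popd are Bash/Zsh builtins, not POSIX, which is why they live in this half of the list.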

> file.txt: This empties a file completely without deleting and recreating it. Why does this matter? It preserves file permissions and ownership, and doesn’t interrupt processes that already have the file open. It’s much cleaner than echo "" > file.txt (which actually leaves a newline character) or rm file && touch file.
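A minimal demonstration (the file path is a throwaway). In scripts, the portable spelling is `: > file`, since a bare redirection with no command is not accepted by every shell:

```shell
echo "some log line" > /tmp/demo.log   # file now has content
: > /tmp/demo.log                      # truncate in place; ":" is the no-op command
wc -c < /tmp/demo.log                  # byte count is now 0 - same file, same inode
```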

$_: In most shells, $_ expands to the last argument of the previous command. It is especially useful interactively or in simple scripts when you need to operate on the same long path twice:
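For example, in bash or zsh (the path is an arbitrary illustration):

```shell
mkdir -p /tmp/demo/very/deeply/nested/dir
cd "$_"     # $_ holds the last argument of mkdir: the full path just created
pwd         # prints /tmp/demo/very/deeply/nested/dir
```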

No more re-typ­ing paths or de­clar­ing tem­po­rary vari­ables to en­ter a di­rec­tory you cre­ated a sec­ond ago.

If you are writ­ing shell scripts, put these at the top im­me­di­ately af­ter your she­bang. It will save you from de­ploy­ing chaos to pro­duc­tion.

* set -e: Exit on er­ror. Very use­ful, but no­to­ri­ously weird with edge cases (especially in­side con­di­tion­als like if state­ments, while loops, and pipelines). Don’t rely on it blindly as it can cre­ate false con­fi­dence. (Pro-tip: consider set -euo pipefail for a more ro­bust safety net, but learn its caveats first.)

* set -u: Treats ref­er­enc­ing an un­set vari­able as an er­ror. This pro­tects you from cat­a­strophic dis­as­ters like rm -rf /usr/local/${MY_TYPO_VAR}/* ac­ci­den­tally ex­pand­ing into rm -rf /usr/local/*.
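A minimal script header along these lines, as a sketch (the variable and paths are made up for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail   # -e: exit on error, -u: error on unset vars,
                    # pipefail: a pipeline fails if any stage fails

TARGET_DIR="/tmp/deploy"
mkdir -p "${TARGET_DIR}/cache"
echo "cleaning ${TARGET_DIR}/cache"
# A typo like ${TARGET_DIRR} above would abort the script under -u
# instead of silently expanding to "" and pointing rm at the wrong path.
rm -rf "${TARGET_DIR}/cache"
```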

If you’re on a Linux box or us­ing a mod­ern in­ter­ac­tive shell, these are the tools that make the CLI feel less like a rusty bi­cy­cle and more like some­thing that ac­tu­ally re­sponds when you steer.

CTRL + R: Reverse in­cre­men­tal search. Stop press­ing the up ar­row forty times to find that one awk com­mand you used last Tuesday. Press CTRL + R, start typ­ing a key­word from the com­mand, and it mag­i­cally pulls it from your his­tory. Press CTRL + R again to cy­cle back­wards through matches.

!!: This expands to the entirety of your previous command. Its most famous use case is the "Permission denied" walk of shame. You confidently type systemctl restart nginx, hit enter, and the system laughs at your lack of privileges. Instead of retyping it, run sudo !!, which expands to sudo systemctl restart nginx.

It's your way of telling the shell, "Do what I said, but this time with authority."

CTRL + X, then CTRL + E: You start typ­ing a quick one-liner. Then you add a pipe. Then an awk state­ment. Soon, you’re edit­ing a four-line mon­ster in­side your prompt and nav­i­ga­tion is get­ting dif­fi­cult. Hit CTRL + X fol­lowed by CTRL + E (in Bash; in Zsh, this needs con­fig­ur­ing). This drops your cur­rent com­mand into your de­fault text ed­i­tor (like Vim or Nano). You can edit it with all the power of a proper ed­i­tor, save, and exit. The shell then ex­e­cutes the com­mand in­stantly.

fc: The highly portable, tra­di­tional sib­ling to CTRL+X CTRL+E. Running fc opens your pre­vi­ous com­mand in your $EDITOR. It works across most shells and is a fan­tas­tic hid­den gem for fix­ing com­plex, multi-line com­mands that went wrong.

ESC + . (or ALT + .): Inserts the last ar­gu­ment of the pre­vi­ous com­mand right at your cur­sor. Press it re­peat­edly to cy­cle fur­ther back through your his­tory, drop­ping the ex­act file­name or pa­ra­me­ter you need right into your cur­rent com­mand.

!$: The non-in­ter­ac­tive sib­ling of ESC + .. Unlike ESC + . (which in­serts the text live at your cur­sor for you to re­view or edit), !$ ex­pands blindly at the ex­act mo­ment you hit en­ter.

(Pro-Tip: For script­ing or stan­dard sh, use the $_ vari­able men­tioned ear­lier in­stead!)

Brace ex­pan­sion is pure magic for avoid­ing repet­i­tive typ­ing, es­pe­cially when do­ing quick back­ups or re­names.

The Backup Expansion: Need to edit a critical config file and want to make a quick backup first? cp filename.txt{,.bak} expands to cp filename.txt filename.txt.bak. The same trick handles renames: mv filename.{txt,md} expands to mv filename.txt filename.md. Fast, elegant, and makes you look like a wizard.

Need mul­ti­ple di­rec­to­ries? mkdir -p pro­ject/{​src,tests,docs} cre­ates all three at once.

<(command) (Process Substitution): Treats the output of a command as if it were a file. Say you want to diff the sorted versions of two files. Traditionally, you'd sort them into temporary files, diff those, and clean up. Process substitution skips the middleman:
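A runnable sketch (the filenames are arbitrary; <(...) requires bash or zsh):

```shell
printf 'banana\napple\ncherry\n' > /tmp/list1.txt
printf 'cherry\nbanana\napple\n' > /tmp/list2.txt

# No temporary sorted files needed: each <(...) behaves like a file
diff <(sort /tmp/list1.txt) <(sort /tmp/list2.txt) && echo "identical once sorted"
rm /tmp/list1.txt /tmp/list2.txt
```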

** (Globstar): find is a great com­mand, but some­times it feels like overkill. If you run shopt -s glob­star in Bash (it’s en­abled by de­fault in Zsh), ** matches files re­cur­sively in all sub­di­rec­to­ries. Need to find all JavaScript files in your cur­rent di­rec­tory and every­thing be­neath it?
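For example (a throwaway directory tree built just for illustration):

```shell
#!/usr/bin/env bash
shopt -s globstar                      # bash only; zsh has ** enabled by default
mkdir -p /tmp/globdemo/src/utils
touch /tmp/globdemo/app.js /tmp/globdemo/src/utils/helpers.js
cd /tmp/globdemo
printf '%s\n' **/*.js                  # lists app.js and src/utils/helpers.js
cd /
rm -rf /tmp/globdemo
```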

CTRL + Z, then bg, then dis­own: You started a mas­sive, hour-long data­base im­port task, but you for­got to run it in tmux or screen. It’s ty­ing up your ter­mi­nal, and if your SSH con­nec­tion drops, the process dies. Panic sets in.

Hit CTRL + Z to suspend the process, then type bg to let it resume running in the background. Your prompt is free!

Type dis­own to de­tach it from your shell en­tirely. You can safely close your lap­top, grab a cof­fee, and the process will sur­vive.

com­mand |& tee file.log: Standard pipes (|) only catch stan­dard out­put (std­out). If a script throws an er­ror (stderr), it skips the pipe and bleeds di­rectly onto your screen, miss­ing the log file. |& pipes both std­out and stderr (it’s a help­ful short­hand for 2>&1 |).

Throw in tee, and you get to watch the out­put on your screen while si­mul­ta­ne­ously sav­ing it to a log file. It’s the equiv­a­lent of watch­ing live TV while record­ing it to your DVR.
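A toy demonstration (|& needs a reasonably modern bash; the messages are made up):

```shell
#!/usr/bin/env bash
# A command that writes to both stdout and stderr
{ echo "compiling..."; echo "warning: deprecated API" >&2; } |& tee /tmp/build.log
# /tmp/build.log now contains both lines; with a plain | the warning would
# have bled to the screen without ever reaching the log file
rm /tmp/build.log
```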

The shell is a tool­box, not an ob­sta­cle course. You don’t need to mem­o­rize all of these to­day. Pick just one trick, force it into your daily habits for a week, and then pick an­other. Stop let­ting the ter­mi­nal push you around, and start re­ar­rang­ing the fur­ni­ture. It’s your house now.

...

Read the original on blog.hofstede.it »

8 370 shares, 89 trendiness

Moving from GitHub to Codeberg, for lazy people

I’ve just started to mi­grate some repos­i­to­ries from GitHub to Codeberg. I’ve wanted to do this for a long time but have stalled on it be­cause I per­ceived Codeberg as not be­ing ready and the mi­gra­tion process as be­ing a lot of (boring) work.

It turns out that is only partially true and wildly depends on your project. If you're in a similar position as me, hopefully these notes serve as motivation and starting point. These solutions are not what I might stick around with long-term, but aimed at what I think is easiest to get started with when migrating from GitHub.

First, there’s the mi­gra­tion of is­sues, pull re­quests and re­leases along with their ar­ti­facts. This is ac­tu­ally the eas­i­est part since Codeberg of­fers repos­i­tory im­port from GitHub that just works, and all these fea­tures have a UI nearly iden­ti­cal to GitHub’s. The im­port pre­serves is­sue num­bers, la­bels, au­thor­ship. The user ex­pe­ri­ence is very much a step above the ex­tremely awk­ward hacks that peo­ple use to im­port from other is­sue track­ers into GitHub.

If you're using GitHub Pages you can use codeberg.page. There's a warning about it not offering any uptime SLO, but I haven't noticed any downtime at all, and for now it's fine. You push your HTML to a branch, very much like the old GitHub Pages.

Update 2025-09-22: Alternatively you may try https://grebedoc.dev or https://www.statichost.eu/

The by far nastiest part is CI. GitHub has done an excellent job luring people in with free macOS runners and infinite capacity for public repos. You will have to give up on both of those things. I recommend looking into cross-compilation for your programming language, and to self-host a runner for Forgejo Actions, to solve those problems respectively.

Why Forgejo Actions and not Woodpecker CI? Isn't Woodpecker on Codeberg more stable? Yes, absolutely; in fact the documentation for Forgejo Actions on Codeberg is out of date right now. But Forgejo Actions will just feel way more familiar coming from GitHub Actions. The UI and YAML syntax is almost identical, and the existing actions ecosystem mostly works as-is on Codeberg. For example, where my GitHub Actions workflow would say uses: dtolnay/rust-toolchain, my Forgejo Actions workflow would just change to uses: https://github.com/dtolnay/rust-toolchain.

If you ab­solutely need ma­cOS run­ners I’d rec­om­mend stick­ing with GitHub Actions on the GitHub repos­i­tory, mir­ror­ing all com­mits from Codeberg to GitHub and us­ing Forgejo Actions to poll the GitHub API and sync the CI sta­tus back to Codeberg. I haven’t tried this one yet, but I have tried some other CI providers of­fer­ing ma­cOS builds and I don’t think they’re eas­ier or cleaner to in­te­grate into Codeberg than GitHub Actions.

Finally, what to do with the old repo on GitHub? I’ve just up­dated the README and archived the repo.

You could tell Codeberg to push new commits to GitHub, but this allows users to still file PRs and comment on issues and commits. Some folks have dealt with this by disabling issues on the GitHub repo, but that is a really destructive action as it will 404 all issues, and pull requests cannot be disabled. Some repos like libvirt/libvirt have written a GitHub Action that automatically closes all pull requests.

...

Read the original on unterwaditzer.net »

9 350 shares, 62 trendiness

EU Parliament Stops Mass Surveillance in Voting Thriller – Paving the Way for Genuine Child Protection!

The con­tro­ver­sial mass sur­veil­lance of pri­vate mes­sages in Europe is com­ing to an end. After the European Parliament had al­ready re­jected the in­dis­crim­i­nate and blan­ket Chat Control by US tech com­pa­nies on 13 March, con­ser­v­a­tive forces at­tempted a de­mo­c­ra­t­i­cally highly ques­tion­able ma­neu­ver yes­ter­day to force a re­peat vote to ex­tend the law any­way.

However, in a true voting thriller today, the Parliament finally pulled the plug on this surveillance mania: With a razor-thin majority of just a single vote, the Parliament first rejected the automated assessment of unknown private photos and chat texts as "suspicious" or "unsuspicious". In the subsequent final vote, the amended remaining proposal clearly failed to reach a majority.

This means: As of 4 April, the EU dero­ga­tion will ex­pire for good. US cor­po­ra­tions like Meta, Google, and Microsoft must stop the in­dis­crim­i­nate scan­ning of the pri­vate chats of European cit­i­zens. The dig­i­tal pri­vacy of cor­re­spon­dence is re­stored!

This does not cre­ate a le­gal vac­uum—quite the op­po­site. Ending in­dis­crim­i­nate mass scan­ning clears the path for mod­ern, ef­fec­tive child pro­tec­tion. Fearmongering that in­ves­ti­ga­tors will be flying blind” is un­war­ranted: Recently, only 36% of sus­pi­cious ac­tiv­ity re­ports from US com­pa­nies orig­i­nated from the sur­veil­lance of pri­vate mes­sages any­way. Social me­dia and cloud stor­age ser­vices are be­com­ing in­creas­ingly rel­e­vant for in­ves­ti­ga­tions. Targeted telecom­mu­ni­ca­tions sur­veil­lance based on con­crete sus­pi­cion and a ju­di­cial war­rant re­mains fully per­mis­si­ble, as does the rou­tine scan­ning of pub­lic posts and hosted files. User re­port­ing also re­mains fully in­tact.

Digital free­dom fighter and for­mer Member of the European Parliament Patrick Breyer (Pirate Party) com­mented on to­day’s his­toric vic­tory:

"This historic day brings tears of joy! The EU Parliament has buried Chat Control — a massive, hard-fought victory for the unprecedented resistance of civil society and citizens! The fact that a single vote tipped the scales against the extremely error-prone text and image search shows: Every single vote in Parliament and every call from concerned citizens counted!

We have stopped a bro­ken and il­le­gal sys­tem. Once our in­ves­ti­ga­tors are no longer drown­ing in a flood of false and long-known sus­pi­cion re­ports from the US, re­sources will fi­nally be freed up to hunt down or­ga­nized abuse rings in a tar­geted and covert man­ner. Trying to pro­tect chil­dren with mass sur­veil­lance is like des­per­ately try­ing to mop up the floor while leav­ing the faucet run­ning. We must fi­nally turn off the tap! This means gen­uine child pro­tec­tion through a par­a­digm shift: Providers must tech­ni­cally pre­vent cy­ber­groom­ing from the out­set through se­cure app de­sign. Illegal ma­te­r­ial on the in­ter­net must be proac­tively tracked down and deleted di­rectly at the source. That is what truly pro­tects chil­dren.

But be­ware, we can only cel­e­brate briefly to­day: They will try again. The ne­go­ti­a­tions for a per­ma­nent Chat Control reg­u­la­tion are con­tin­u­ing un­der high pres­sure, and soon the planned age ver­i­fi­ca­tion for mes­sen­gers threat­ens to end anony­mous com­mu­ni­ca­tion on the in­ter­net. The fight for dig­i­tal free­dom must go on!”

The Next Battle: The Return of Chat Control and Mandatory ID

Despite to­day’s vic­tory, fur­ther pro­ce­dural steps by EU gov­ern­ments can­not be com­pletely ruled out. Most of all, the tri­logue ne­go­ti­a­tions on a per­ma­nent child pro­tec­tion reg­u­la­tion (Chat Control 2.0) are con­tin­u­ing un­der se­vere time pres­sure. There, too, EU gov­ern­ments con­tinue to in­sist on their de­mand for voluntary” in­dis­crim­i­nate Chat Control.

Furthermore, the next mas­sive threat to dig­i­tal civil lib­er­ties is al­ready on the agenda: Next up in the on­go­ing tri­logue, law­mak­ers will ne­go­ti­ate whether mes­sen­ger and chat ser­vices, as well as app stores, will be legally obliged to im­ple­ment age ver­i­fi­ca­tion. This would re­quire users to pro­vide ID doc­u­ments or sub­mit to fa­cial scans, ef­fec­tively mak­ing anony­mous com­mu­ni­ca­tion im­pos­si­ble and se­verely en­dan­ger­ing vul­ner­a­ble groups such as whistle­blow­ers and per­se­cuted in­di­vid­u­als.

Background: What ex­actly ex­pires on 3 April

An EU in­terim reg­u­la­tion (2021/1232), set to ex­pire on 3 April, cur­rently per­mits US cor­po­ra­tions such as Meta to carry out in­dis­crim­i­nate mass scan­ning of pri­vate mes­sages on a vol­un­tary ba­sis. Three types of chat con­trol are au­tho­rised: scan­ning for al­ready known im­ages and videos (so-called hash scan­ning, which gen­er­ates over 90% of re­ports); au­to­mated as­sess­ment of pre­vi­ously un­known im­ages and videos; and au­to­mated analy­sis of text con­tent in pri­vate chats.

The AI-based analysis of unknown images and texts is extremely error-prone. But the indiscriminate mass scanning for known material is highly controversial, too: beyond the unreliability of the algorithms documented by researchers, these scans rely on opaque foreign databases rather than European criminal law. The algorithms are blind to context and to the absence of criminal intent (e.g. consensual sexting between teenagers). As a result, vast numbers of private but criminally irrelevant chats are exposed.

The fact that today's decision by the EU Parliament was also technically imperative is proven by a newly published scientific study. Renowned IT security researchers analyzed the standard algorithm "PhotoDNA", which is used by tech companies for Chat Control. Their damning verdict: The software is "unreliable". The researchers proved that criminals can render illegal images invisible to the scanner through minimal alterations (e.g., adding a simple border), while harmless images can be easily manipulated so that innocent citizens are falsely reported to the police.

The Hard Facts: Why Chat Control Has Failed Spectacularly

The EU Commission’s 2025 eval­u­a­tion re­port on Chat Control reads like an ad­mis­sion of com­plete fail­ure:

* Data Giant Monopoly: Roughly 99% of all chat re­ports to po­lice in Europe come from a sin­gle US tech cor­po­ra­tion: Meta. US com­pa­nies acted as a pri­vate aux­il­iary po­lice force—with­out ef­fec­tive European over­sight.

* Massive Police Overload from Junk Data: The German Federal Criminal Police Office (BKA) re­ports that a stag­ger­ing 48% of the dis­closed chats are crim­i­nally ir­rel­e­vant. This flood of junk data ties up re­sources that are ur­gently needed for tar­geted in­ves­ti­ga­tions.

* Criminalization of Minors: According to crime sta­tis­tics, around 40% of in­ves­ti­ga­tions in Germany tar­get teenagers who thought­lessly share im­ages (e.g., con­sen­sual sex­ting).

* An Obsolete Model Due to Encryption: Because providers are in­creas­ingly tran­si­tion­ing to end-to-end en­cryp­tion for pri­vate mes­sages, the num­ber of chats re­ported to the po­lice has al­ready dropped by 50% since 2022.

* Failure in Child Protection: According to the Commission’s re­port, there is no mea­sur­able cor­re­la­tion be­tween the mass sur­veil­lance of pri­vate mes­sages and ac­tual con­vic­tions.

During the leg­isla­tive process, for­eign-funded lobby groups and au­thor­i­ties tried to pres­sure the Parliament through fear­mon­ger­ing. A com­par­i­son of their claims with re­al­ity:

Disinformation 1: The European Parliament is to blame for the col­lapse of the tri­logue ne­go­ti­a­tions.”

(Claimed by the lobby al­liance ECLAG and US tech com­pa­nies)

* Fact: It was the EU Council of Ministers that de­lib­er­ately let the ne­go­ti­a­tions fail. Leaked Council ca­bles re­veal that EU mem­ber states showed no will­ing­ness to com­pro­mise, fear­ing that any con­ces­sion could set a prece­dent for the per­ma­nent Chat Control 2.0 reg­u­la­tion. Parliament’s lead ne­go­tia­tor, Birgit Sippel, sharply crit­i­cized the Council: With their lack of flex­i­bil­ity, Member States have de­lib­er­ately ac­cepted that the in­terim reg­u­la­tion will ex­pire.”

Disinformation 2: Without in­dis­crim­i­nate Chat Control, law en­force­ment will be fly­ing blind.”

(Claimed by au­thor­i­ties in­clud­ing BKA President Holger Münch)

* Fact: Targeted sur­veil­lance re­mains al­lowed. The real prob­lem for au­thor­i­ties is their own re­fusal to re­move ma­te­r­ial from the in­ter­net. The Federation of German Criminal Investigators (BDK) warns that this mass sur­veil­lance pro­duces a flood of tips… of­ten with­out any ac­tual in­ves­tiga­tive lead.” Meanwhile, the BKA sys­tem­at­i­cally re­fuses to proac­tively have abuse ma­te­r­ial re­moved from the in­ter­net, as in­ves­tiga­tive re­port­ing by ARD has re­vealed.

Disinformation 4: The de­mand comes pri­mar­ily from vic­tims.”

(Implied by the ECLAG cam­paign)

* Fact: Actual sur­vivors are tak­ing le­gal ac­tion against the sur­veil­lance. Survivor Alexander Hanff writes: Taking away our right to pri­vacy means fur­ther harm­ing us.” To pre­serve safe spaces for vic­tims, a sur­vivor from Bavaria is cur­rently su­ing Meta. Who truly ben­e­fits was ex­posed in an in­ves­tiga­tive re­port by Balkan Insight: The US or­ga­ni­za­tion Thorn, which sells scan­ning soft­ware, in­vests mas­sively in EU lob­by­ing, while ECLAG mem­bers are funded by tech cor­po­ra­tions.

The European Parliament ad­vo­cates a gen­uine par­a­digm shift for fu­ture leg­is­la­tion, sup­ported by civil so­ci­ety, sur­vivor net­works, and IT se­cu­rity ex­perts:

Strict de­fault set­tings and pro­tec­tive mech­a­nisms (Security by Design) to make cy­ber­groom­ing tech­ni­cally harder from the out­set.

Proactive search by a new EU Child Protection Center and im­me­di­ate take­down oblig­a­tions for providers and law en­force­ment on the open in­ter­net and dark­net — il­le­gal ma­te­r­ial must be de­stroyed di­rectly at the source. There must be an end to law en­force­ment agen­cies de­clar­ing them­selves not com­pe­tent” for the re­moval of abuse ma­te­r­ial.

During the leg­isla­tive process, the mas­sive, ques­tion­able lob­by­ing ef­forts were ex­posed: The push for Chat Control is heav­ily dri­ven by for­eign-funded lobby groups and tech ven­dors. The US or­ga­ni­za­tion Thorn, which sells the ex­act type of scan­ning soft­ware in ques­tion, spends hun­dreds of thou­sands of eu­ros lob­by­ing in Brussels. The tech in­dus­try of­fi­cially lob­bied side-by-side with cer­tain or­ga­ni­za­tions for a law that does not pro­tect chil­dren, but rather se­cures their own prof­its and data ac­cess.

"Right up to the very end, the US tech industry and foreign- or government-funded lobby groups tried to panic Europe. But flooding our police with false positives and duplicates from mass surveillance doesn't save a single child from abuse. Today's definitive failure of Chat Control is a clear stop sign to this surveillance mania. Negotiators cannot ignore this verdict in the ongoing trilogue negotiations for a permanent regulation. Indiscriminate mass scanning of our private messages must finally give way to truly effective and targeted child protection that respects fundamental rights."

...

Read the original on www.patrick-breyer.de »

10 336 shares, 11 trendiness

Updates to GitHub Copilot interaction data usage policy

Today, we’re an­nounc­ing an up­date on how GitHub will use data to de­liver more in­tel­li­gent, con­text-aware cod­ing as­sis­tance. From April 24 on­ward, in­ter­ac­tion data—specif­i­cally in­puts, out­puts, code snip­pets, and as­so­ci­ated con­text—from Copilot Free, Pro, and Pro+ users will be used to train and im­prove our AI mod­els un­less they opt out. Copilot Business and Copilot Enterprise users are not af­fected by this up­date.

Not interested? Opt out in settings under "Privacy." If you previously opted out of the setting allowing GitHub to collect this data for product improvements, your preference has been retained: your choice is preserved, and your data will not be used for training unless you opt in.

This ap­proach aligns with es­tab­lished in­dus­try prac­tices and will im­prove model per­for­mance for all users. By par­tic­i­pat­ing, you’ll help our mod­els bet­ter un­der­stand de­vel­op­ment work­flows, de­liver more ac­cu­rate and se­cure code pat­tern sug­ges­tions, and im­prove their abil­ity to help you catch po­ten­tial bugs be­fore they reach pro­duc­tion.

Our ini­tial mod­els were built us­ing a mix of pub­licly avail­able data and hand-crafted code sam­ples. This past year, we’ve started in­cor­po­rat­ing in­ter­ac­tion data from Microsoft em­ploy­ees and have seen mean­ing­ful im­prove­ments, in­clud­ing in­creased ac­cep­tance rates in mul­ti­ple lan­guages.

The improvements we've seen by incorporating Microsoft interaction data indicate we can improve model performance for a more diverse range of use cases by training on real-world interaction data. Should you decide to participate in this program, the interaction data we may collect and leverage includes:

* Outputs accepted or modified by you

* Inputs sent to GitHub Copilot, including code snippets shown to the model

This program does not use:

* Interaction data from users who opt out of model training in their Copilot settings

* Content from your issues, discussions, or private repositories at rest. We use the phrase "at rest" deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.

The data used in this pro­gram may be shared with GitHub af­fil­i­ates, which are com­pa­nies in our cor­po­rate fam­ily in­clud­ing Microsoft. This data will not be shared with third-party AI model providers or other in­de­pen­dent ser­vice providers.

We be­lieve the fu­ture of AI-assisted de­vel­op­ment de­pends on real-world in­ter­ac­tion data from de­vel­op­ers like you. It’s why we’re us­ing Microsoft in­ter­ac­tion data for model train­ing and will be­gin us­ing in­ter­ac­tion data from GitHub em­ploy­ees as well.

If you choose to help us im­prove our mod­els with your in­ter­ac­tion data, thank you. Your con­tri­bu­tions make a mean­ing­ful dif­fer­ence in build­ing AI tools that serve the en­tire de­vel­oper com­mu­nity. If you pre­fer not to par­tic­i­pate, that’s fine too—you will still be able to take full ad­van­tage of the AI fea­tures you know and love.

Together, we can con­tinue to build AI that ac­cel­er­ates your work­flows and em­pow­ers you to build bet­ter, more se­cure soft­ware faster than ever.

If you have ques­tions, visit our FAQ and re­lated dis­cus­sion.

Mario Rodriguez leads the GitHub Product team as Chief Product Officer. His core iden­tity is be­ing a learner and his pas­sion is cre­at­ing de­vel­oper tools—so much so that he has spent the last 20 years liv­ing that mis­sion in lead­er­ship roles across Microsoft and GitHub. Mario most re­cently over­saw GitHub’s AI strat­egy and the GitHub Copilot prod­uct line, launch­ing and grow­ing Copilot across thou­sands of or­ga­ni­za­tions and mil­lions of users. Mario spends time out­side of GitHub with his wife and two daugh­ters. He also co-chairs and founded a char­ter school in an ef­fort to progress ed­u­ca­tion in rural re­gions of the United States.


...

Read the original on github.blog »
