10 interesting stories served every morning and every evening.
No, not THAT Mark Zuckerberg. This one’s busy helping Hoosiers, not launching social networks.
Relax, you haven’t accidentally logged into Facebook or the Metaverse. You’re on the site of Mark S. Zuckerberg, Indiana’s original bearer of the name, proud bankruptcy attorney, and frequent recipient of confused emails from people seeking tech support or handouts of money.
What I Really Do:
Help people obtain a fresh financial start (no passwords required)
Offer dependable, human-involved advice (my artificial intelligence is powered by coffee)
Real Zuckerberg Facts:
Shares a name, not fortune, with the Facebook founder
Has written zero social media apps, but plenty of court briefs
Fun Fact:
In Indiana, saying “I’m Mark Zuckerberg” gets more laughs than likes. But if you need trustworthy bankruptcy help, you’re in exactly the right place! Click around, get to know your (non-billionaire) local Mark, and remember: No login required.
I was sued by the state of Washington due to mistaken identity. They thought I was the founder of Facebook, who was accused of endangering an adult in need of services. (click to see full document)
My personal Facebook account has been disabled five times and my business account four times because Facebook believes I am impersonating a celebrity or using a fake name (click to see full document)
My Facebook account is constantly being hacked, and I am asked to reset my password multiple times a day. (click to see full document)
I must turn off my phone at night to avoid the flood of notifications I receive.
I was forced to provide multiple forms of identification (birth certificate, driver’s license, credit card) to open my original Facebook account, and again each time my account is disabled. Far beyond the norm.
When a Facebook scam promised money from “Mark Zuckerberg,” I was inundated with emails and phone calls from strangers at all hours of the night.
I receive more than 100 friend requests daily from people confusing me with the “other” Mark Zuckerberg.
I continually receive packages and letters containing complaints, suggestions, or demands for how to “improve Facebook.”
I receive phone calls at my office every day from angry people demanding tech support or help with Facebook, from password resets to photo removal to locked accounts.
I was removed from the Nextdoor app for using a fake name. (click to see full document)
I routinely receive death threats and harassment on the Messenger app directed to the “other” Mark Zuckerberg (click to see full document)
I get Venmo payment requests and money solicitations from total strangers (click to see full document)
For a complete list of things that have happened to Mark Zuckerberg click here
Like I said, I don’t wish Mark E. Zuckerberg any ill will at all. I hope the best for him, but let me tell you this: I will rule the search for “Mark Zuckerberg bankruptcy”. And if he does fall upon difficult financial times, and happens to be in Indiana, I will gladly handle his case in honor of our eponymy.
...
Read the original on iammarkzuckerberg.com »
👋 Welcome to Climate Drift: your cheat-sheet to climate. Each edition breaks down real solutions, hard numbers, and career moves for operators, founders, and investors who want impact. For more: Community | Accelerator | Open Climate Firesides | Deep Dives
You know that feeling when you’re waiting for the cable guy, and they said ‘between 8am and 6pm,’ and you waste your entire day, and they never show up?
Now imagine that, except the cable guy is ‘electricity,’ the day is ‘50 years,’ and you’re one of 600 million people. At some point, you stop waiting and figure it out yourself.
What’s happening across Sub-Saharan Africa right now is the most ambitious infrastructure project in human history, except it’s not being built by governments or utilities or World Bank consortiums. It’s being built by startups selling solar panels to farmers on payment plans. And it’s working.
Over 30 million solar products sold in 2024. 400,000 new solar installations every month across Africa. 50% market share captured by companies that didn’t exist 15 years ago. Carbon credits subsidizing the cost. IoT chips in every device. 90%+ repayment rates on loans to people earning $2/day.
And if you understand what’s happening in Africa, you understand the template for how infrastructure will get built everywhere else for the next 50 years.
Today we are looking into:
* Why the grid will never come (and why that’s actually good news)
* How it takes three converging miracles (cheap hardware, zero-cost payments, and pay-as-you-go)
* 2 case studies on how it works on the ground
* Whether this template works beyond Africa (spoiler: it already is)
The next cohort of our accelerator launches soon, and applications are still open (but spots are limited). If you’re ready to fight climate change, don’t wait:
Here’s a stat that should make you angry: 600 million people in Sub-Saharan Africa lack reliable electricity. Not because the technology doesn’t exist. Not because they don’t want it. But because the unit economics of grid extension to rural areas are completely, utterly, irredeemably fucked.
The traditional development playbook goes something like this: Chapter 1, build centralized power generation. Chapter 2, string transmission lines across hundreds of kilometers. Chapter 3, distribute to millions of homes. Chapter 4, collect payments. Chapter 5, maintain the whole thing forever.
This worked great if you were electrifying America in the 1930s, when labor was cheap, materials were subsidized, and the government could strong-arm right-of-way access. It works less great when you’re trying to reach a farmer four hours from the nearest paved road who earns $600 per year.
Let me show you the math:
* Cost to connect one rural household to the grid: $266 to $2,000
* Payback period: 13-200 months (if you can even collect payments)
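To make the payback arithmetic concrete, here’s a quick sketch. The implied monthly revenue figures are my assumptions, chosen to bracket the article’s 13-200 month range; they aren’t from the piece:

# Payback on a rural grid connection: months = connection cost / monthly revenue.
# The revenue assumptions below are mine, not the article's.
def payback_months(connection_cost, monthly_revenue):
    return connection_cost / monthly_revenue

print(payback_months(266, 20))   # ~13 months: the best case
print(payback_months(2000, 10))  # 200 months: nearly 17 years, before
                                 # maintenance, theft, and non-payment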
So utilities do what any rational actor would do: they stop building where the math stops working. Which is exactly where the people are.
This has been the development sector’s dirty little secret for 50 years. “We’re working on grid extension!” Translation: we’re not working on grid extension because the economics are impossible, but we need to say we’re working on it so we keep getting donor money.
Meanwhile, 1.5 billion people spend up to 10% of their income on kerosene, diesel, and other dirty fuels. They walk hours to charge their phones. They can’t refrigerate medicine or food. Their kids can’t study after dark. Women inhale cooking smoke equivalent to two packs of cigarettes daily.
While everyone was arguing about feed-in tariffs and utility-scale solar, something wild happened to solar costs: a 99.5% decline in 45 years. Moore’s Law, except for sunshine.
But here’s what’s even crazier: the price of complete solar home systems collapsed right along with it.
Battery costs also collapsed 90%. Inverters got cheap. LED bulbs got efficient. Manufacturing in China got insanely good. Logistics in Africa got insanely better.
All of these trends converged around 2018-2020, and suddenly the economics of off-grid solar just… flipped. The hardware became a solved problem.
But there was still a massive, seemingly insurmountable barrier: $120 upfront might as well be $1 million when you earn $2/day.
This is where the story gets interesting.
Quick history lesson: In 2007, Safaricom (Kenya’s telco) launched M-PESA, a mobile money platform that let people transfer cash via SMS.
Everyone thought it would fail. Why would anyone use their phone to send money?
By 2025: 70% of Kenyans use mobile money. Not in addition to banks. Instead of banks. Kenya processes more mobile money transactions per capita than any country on Earth.
It worked because it solved a real problem: Kenyans were already sending money through informal networks. M-PESA just made it cheaper and safer.
Here’s why this matters: M-PESA created a payment rail with near-zero transaction costs. Which means you can economically collect tiny payments. $0.21 per day payments.
This broke open a financing model that changes everything: Pay-As-You-Go.
This is the unlock. This is the thing that makes everything else possible.
The system has a GSM chip that calls home. After 30 months, you own it: free power forever.
The magic is this: you’re not buying a $1,200 solar system. You’re replacing $3-5/week of kerosene spending with a $0.21/day solar subscription (about $1.50 per week, roughly half the price of kerosene) that’s cheaper AND gives you better light, phone charging, radio, and no respiratory disease.
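The weekly arithmetic, as a sketch (figures from the paragraph above):

# Customer-side math: $0.21/day PAYG subscription vs. $3-5/week kerosene.
DAILY_PAYG = 0.21
payg_weekly = DAILY_PAYG * 7
print(payg_weekly)        # 1.47 -- about $1.50/week
print(payg_weekly / 3.0)  # ~0.49 -- roughly half of even the cheapest kerosene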
The repayment rate? 90%+ of customers pay on time.
Why? Because the asset actually works. It delivers value every single day. The alternative is going back to kerosene lamps in the dark. Nobody wants that.
This is the “innovation” that everyone missed. The hardware got cheap, but PAYG made it accessible. And mobile money made PAYG economically viable.
Now let’s talk about what happens when you combine these three things with 2 case studies.
First up: Sun King. 23 million solar products sold in 2023, serving 40 million customers in 42 countries, and targeting 50 million units by 2026.
Their product range spans handheld solar lamps, multi-room home solar kits, and clean LPG stoves.
Want to dive deeper? I’ve got a case study for you: How Pay-As-You-Go solar can unlock energy equity in Africa.
Each turn of the wheel makes the next turn easier. This is a compounding moat.
And here’s what nobody outside Africa understands: Sun King has 50%+ market share in their category. They’re not a scrappy startup. They’re a dominant infrastructure provider.
This would be like one startup owning 50% of U.S. home solar. Except the impact and the TAM are bigger, because there’s no incumbent grid to compete with.
If Sun King is the lighting/household electrification play, SunCulture is the agriculture productivity play. And the numbers are even more insane.
* Farmers go from $600/acre to $14,000/acre revenue
* Zero marginal cost after payoff (no diesel!)
Okay, this is where it gets really spicy.
Remember that SunCulture solar pump displacing diesel? That’s 2.9 tons of CO2 avoided per year. Per pump.
Multiply by 47,000 pumps = 136,000 tons CO2/year. Over seven years = 3+ million tons cumulative.
Want to dive deeper? I’ve got another case study for you.
Now here’s the hack: Someone will pay for that.
Enter carbon credits. SunCulture is the first African solar irrigation company with Verra-registered carbon credits. Each ton of avoided CO2 can be sold for $15-30 (high-quality agricultural credits, not sketchy forest offsets).
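Here’s a rough sketch of the per-pump carbon economics those numbers imply. This is my arithmetic, not SunCulture’s published figures:

# 2.9 tons of CO2 avoided per pump per year, sold at $15-30/ton.
TONS_PER_PUMP_YEAR = 2.9
for price in (15, 30):
    print(TONS_PER_PUMP_YEAR * price)   # 43.5 to 87.0 dollars/pump/year
# Across 47,000 pumps, that's roughly $2M-$4M/year of carbon revenue.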
Let’s do the flywheel again, but this time turbocharged with carbon credits.
It gets even better: there are people who will pay for credits beforehand.
British International Investment (UK’s DFI) pioneered this with SunCulture: they provided $6.6M in “carbon-backed equipment financing.” They bear the carbon price risk, SunCulture gets upfront capital, farmers get 25-40% cheaper pumps.
This is how it should be: The climate impact that was an externality is now a revenue stream. The global North’s carbon problem subsidizes the global South’s energy access.
A quick note on MRV
Okay, so you might know I have… issues with the carbon credit world, especially MRV (monitoring, reporting, verification). Here, monitoring is IoT-based and MRV costs are near zero. No expensive field audits. The telemetry data proves the pump is running, which proves diesel is displaced, which proves carbon is avoided.
The carbon credit mechanism turns climate infrastructure into an asset class. Which means you can finance it at scale.
Btw, this is how the largest new forest in the US is now being financed:
Chestnut Carbon buys degraded farmland across the Southeast, replants biodiverse native forests, verifies long-term carbon removal, and signs long-dated offtake deals with blue-chip buyers like Microsoft. The company has acquired more than 35,000 acres, planted over 17 million trees, and aims to restore 100,000+ acres by 2030 with an expected 100 million tons of CO2 removed over 50 years.
So: what now?
Why is the market concentrated? Because the full stack is really fucking hard.
Most companies can do 2-3 of these. The winners do all 10.
This creates massive barriers to entry and long-term moats. New entrants can’t just show up with cheaper panels. The moat is the full-stack execution.
Let’s do the math on how big this can get.
And that’s just Africa. Add Asia (1 billion without electricity) and you’re north of $300B-$500B.
But here’s the thing: this massively understates the opportunity.
The solar system is the Trojan horse. The real business is the financial relationship with 40 million customers.
Because what you’re really doing is creating a digital infrastructure layer that enables:
So the actual TAM? It’s whatever the total consumer spending is for 600M people rising into the middle class.
Okay, let’s zoom out. What happens when 100M+ people get electrified through this model?
But here’s the meta-point: This is the template for building infrastructure in the 21st century.
Not government-led. Not centralized. Not requiring 30-year megaprojects.
Instead: modular, distributed, digitally-metered, remotely-monitored, PAYG-financed, carbon-subsidized infrastructure deployed by private companies in competitive markets.
This is how things will get built going forward.
So what could go wrong?
Let’s start by making clear that this is not a one-size-fits-all solution:
PAYG solar works for households and smallholders. It doesn’t work for factories or heavy industry. This isn’t a complete grid replacement.
1. FX Risk
Companies raise dollars, buy hardware in dollars, and collect revenue in naira or shillings. Currency crashes can blow up unit economics overnight.
2. Political/Regulatory Risk
Governments could impose lending restrictions, tariffs on solar imports, or subsidize grid/diesel to protect state utilities.
3. Default Risk
A 10% default rate is good but fragile. Economic shocks, droughts, or political instability could spike defaults.
4. Maintenance Complexity
Panels last 25 years, batteries 5 years, pumps break. Building service networks across rural Africa is expensive.
5. Carbon Price Volatility
Carbon credits crashed from $30/ton to $5/ton in 2024. If 25-40% of affordability comes from carbon revenue, price swings hurt.
6. Competition from Grid
What if governments actually build the grid? (Unlikely given economics, but possible with enough subsidy)
7. Supply Chain Risk
Port congestion, customs delays, tariff swings, China export controls, and last-mile logistics can delay installs, raise COGS, and tie up working capital.
Fun fact: Sun King is now producing its devices in Africa, cutting $300 million in imports over the coming years.
Okay, the bear case is important. But let’s talk about the scenarios where this doesn’t just work: it goes 🏒.
Solar panels dropped 99.5% in 45 years. What if we’re only halfway through?
...
Read the original on climatedrift.substack.com »
Welcome! Glad you could join us for another Sunday edition of the Animation Obsessive newsletter. This is our slate:
With that, let’s go!
Toy Story used to look different. It’s a little tricky to explain.
Back in 1995, CG animation was the topic in the industry, and Pixar was central to the hype. The studio had already shifted Disney to computers and won the first Oscar for a CG short (Tin Toy). Giant movies like Jurassic Park incorporated Pixar’s software.
The next step was Toy Story, billed as the first animated feature to go all-CG. Even after Pixar’s successes, that was a risk. Would a fully digital movie sell tickets?
It clearly worked out. Toy Story appeared 30 years ago this month — and its popularity created the animation world that exists now. A new process took over the business.
But not entirely new — not at first. There was something old about Toy Story’s tech, too, back in 1995. Pixar made the thing with computers, but it still needed to screen in theaters. And computers couldn’t really do that yet. From its early years, Pixar had relied on physical film stock. According to authors Bill Kinder and Bobbie O’Steen:
[Pixar’s Ed] Catmull recognized that his studio’s pixels needed to merge with that world-standard distribution freeway, 35 mm film. Computer chips were not fast enough, nor disks large enough, nor compression sophisticated enough to display even 30 minutes of standard-definition motion pictures. It was axiomatic that for a filmgoing audience to be going to a film, it would be a… film.
Toy Story was a transitional project. Since Pixar couldn’t send digital data to theaters, every one of the movie’s frames was printed on analog film. When Toy Story originally hit home video, that 35 mm version was its source. Only years later, after technology advanced, did Pixar start doing digital transfers — cutting out the middleman. And Toy Story’s look changed with the era.
While making Toy Story, Pixar’s team knew that the grain, softness, colors and contrasts of analog film weren’t visible on its monitors. They were different mediums.
So, to get the right look, the studio had to keep that final, physical output in mind. The digital colors were tailored with an awareness that they would change after printing. “Greens go dark really fast, while the reds stay pretty true,” said Toy Story’s art director, Ralph Eggleston. “Blues have to be less saturated to look fully saturated on film, while the oranges look really bad on computer screens, but look really great on film.”
The team checked its work along the way. In the words of Pixar’s William Reeves:
During production, we’re working mostly from computer monitors. We’re rarely seeing the images on film. So, we have five or six extremely high-resolution monitors that have better color and picture quality. We put those in general work areas, so people can go and see how their work looks. Then, when we record, we try to calibrate to the film stock, so the image we have on the monitor looks the same as what we’ll get on film.
Behind the final images was a “painstaking transfer process,” according to the press. Leading it was David DiFrancesco, one of Pixar’s early MVPs, who began working with Ed Catmull before Pixar even existed. He broke ground in film printing — specifically, in putting digital images on analog film.
He and his team in Pixar’s photoscience department used their expertise here. Their tools were “commercial grade” film printers, DiFrancesco noted: modified Solitaire Cine II machines. He’d invented more advanced stuff, but it wasn’t viable for a project of Toy Story’s size. Using the best equipment would’ve taken “several terabytes of data,” he said.
Their system was fairly straightforward. Every frame of Toy Story’s negative was exposed, three times, in front of a CRT screen that displayed the movie. “Since all film and video images are composed of combinations of red, green and blue light, the frame is separated into its discrete red, green and blue elements,” noted the studio. Exposures, filtered through each color, were layered to create each frame.
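In spirit, the process is channel separation followed by recombination. Here’s a toy sketch in Python with Pillow; it’s an illustration of the idea, not Pixar’s actual pipeline, and the filenames are hypothetical:

from PIL import Image, ImageChops

# Split a frame into its red, green and blue records, then layer the
# three "exposures" back together. The film recorder did this optically,
# with color filters in front of a CRT.
frame = Image.open("frame.png").convert("RGB")   # hypothetical input
r, g, b = frame.split()
zero = r.point(lambda _: 0)

red_pass   = Image.merge("RGB", (r, zero, zero))
green_pass = Image.merge("RGB", (zero, g, zero))
blue_pass  = Image.merge("RGB", (zero, zero, b))

composite = ImageChops.add(ImageChops.add(red_pass, green_pass), blue_pass)
composite.save("printed_frame.png")              # matches the original frame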
It reportedly took nine hours to print 30 seconds of Toy Story. But it had to be done: it was the only way to screen the film.
Pixar’s second feature, A Bug’s Life, reached theaters in 1998. Once more, the studio designed its visuals for analog film (see the trailer on 35 mm). Its people knew the ins and outs of this process, down to the amount of detail that film stock could accept and a projector could show. That’s partly how they got away with the movie’s tiny 2048×862 resolution, for example.
Still, the team struggled with one thing: the dip in image quality when film got converted to home video. That’s how Toy Story was released, but there had to be a better way.
For the home version of A Bug’s Life, Pixar devised a method of “go[ing] from our digital image within our system … straight to video,” John Lasseter said. He called it “a real pure version of our movie straight from our computers.” A Bug’s Life became the first digital-to-digital transfer on DVD. Compared to the theatrical release, the look had changed. It was sharp and grainless, and the colors were kind of different.
A digital transfer of Toy Story followed in the early 2000s. And it wasn’t quite the same movie that viewers had seen in the ’90s. “The colors are vivid and lifelike, [and] not a hint of grain or artifacts can be found,” raved one reviewer. It was a crisp, blazingly bright, digital image now — totally different from the softness, texture and deep, muted warmth of physical film, on which Toy Story was created to be seen.
Quickly, digital transfers became a standard thing. Among others by Pixar, The Incredibles gives off a very different vibe between its theatrical and later releases (see the 35 mm trailer for reference).
Pixar wasn’t the only studio to make the leap, either. Disney did as well.
Like Toy Story, the Disney renaissance work of the ’90s was transitional. The Lion King, Mulan and the rest existed as files in computer systems — and the idea was always to record them on analog film at the end. Early home releases were based on those 35 mm versions. Later releases, like the ones Disney streams today, were direct transfers of the digital data.
At times, especially in the colors, they’re almost unrecognizable. And the images feel less cohesive — like something’s missing that was supposed to bring all the elements together. These aren’t quite the same films that ruled the ’90s.
For a number of years, there’s been talk in film-preservation circles about Toy Story and the Disney renaissance. This work sits in an odd place. The world was still pretty analog when the computer animation boom arrived: out of necessity, these projects became hybrids of new and old. What’s the right way to see digital movies that were designed for 35 mm film?
The studios themselves haven’t quite figured it out. On Disney+, the colors of Toy Story feel a bit raw — searing greens that were meant to darken on film, for example. Meanwhile, the newer Toy Story Blu-ray shares more in common with the original colors, but it’s still an altered, colder look.
When digital transfers first showed up, people were thrilled, including at Pixar. Movies became “crisper, clearer and more stunning on home video systems” than in theaters, some claimed. Even so, it’s a little disquieting to think that Toy Story, the film that built our current world, is barely available in the form that wowed audiences of the ’90s. The same goes for many other movies from the transitional era.
The good news is that this conversation gets bigger all the time. In those film-preservation circles, a dedicated few are trying to save the old work. More and more comparison videos are popping up on YouTube. If you get the chance to see one of the old Disney or Pixar films on 35 mm, it’s always worthwhile.
These companies, ultimately, decide how Toy Story looks today. Still, for some, it’s nice to see the original version of the film again — the version Pixar originally intended to make. It’s evidence that the film did feel different back then. The memories were real.
...
Read the original on animationobsessive.substack.com »
For as long as I have published my books, one of my overarching goals was to give credit to those who actually invented the hardware and software that we use.
I have spent 10,000+ hours creating an accurate record of their work, but I’m not complaining. Getting at the ‘as-close-as-possible’ truth of invention by individuals or teams meant identifying the work, educating myself, writing questions, and sending emails. And after that process, I’d set up a chat, because it all comes down to talking to someone on the other side of the world about something that happened 30 or 40 years ago.
If the invention involves a team, I try to interview more than one person so I can cross-check the facts. Not to call anyone out; it’s just that, given time, we all forget the facts, and everyone adds their personal take. It’s because of that, for example, that I know the English musician Peter Gabriel really did visit Apple’s research labs as they tested the Apple Sound Chip, and gave the team his personal approval to use the song ‘Red Rain’ for the Macintosh II launch. Wil Oxford, Steve Perlman, Mike Potel, Mark Lentczner and Steve Milne told me so.
As I was wrapping up Version 2.3 of Inventing the Future, I spoke with Steve M and Mark about the AIFF (Audio Interchange File Format) audio standard that they built around the same time as their VIP visit. They did so as professional programmers, amateur musicians and electronic music experts. Milne and Lentczner knew users needed a standard file format to make their work lives easier and to fend off confusion in the nascent MIDI marketplace. But it didn’t exist. So Steve and Mark consulted with users and manufacturers in the Apple cafeteria after hours. This work is interesting on its own but it also underpinned other research. The AIFF, Apple Sound Chip, and MIDI Manager work scaffolded QuickTime and its extensible video formats and programs in 1991. Senior engineer Toby Farrand told me:
Audio drove the development of QuickTime more than anything.
So who or what drove the development of AIFF?
Steve and Mark referred me to the IFF (Interchange File Format) and the TIFF (Tag Image File Format), which were built before AIFF, in 1985 and 1986 respectively. These file formats were the benchmark for open media standards. My search pivoted, as it always does, to understanding those inventions. I expected to be able to find the engineers’ names, track them down, and interview them. It has worked around 100 times before.
Jerry Morrison created IFF while working at Electronic Arts and then went to Apple, where he liaised with the AIFF team. I could easily background his work.
So I turned my attention to TIFF, built initially as an image standard for desktop publishing. TIFF was able to store monochrome, grayscale, and color images, alongside metadata such as size, compression algorithms, and color space information. In many ways, it was a lot like AIFF, so I was keen to know more. But I couldn’t find a TIFF creator. No matter how I enquired, the answer came back the same: Aldus created TIFF.
To be clear, while a search for AIFF will offer up a company (Apple) not a person, I was able to find Milne and Lentczner in part because of their unique names and because Apple publicised the AIFF work and those publications are archived.
All I had was Aldus, an American company that created desktop publishing with the help of Apple and Adobe. In fact, Paul Brainerd, the cofounder of Aldus, coined the term ‘desktop publishing’ to quickly explain to potential investors the technicality of what they were doing. But Aldus and its seminal product, PageMaker, are long gone, and there were no breadcrumbs for TIFF’s creation.
Finally, after a day-long trawl through MacWeek back issues, I found Steve Carlson.
Then I ran a similar-length search through the Computer History Museum’s amazing Oral Histories transcriptions. Brainerd mentioned Carlson’s name in an interview.
But it was too brief an explanation so I kept looking. Then the trail went cold.
And that was because folks had misspelt his name when quoting him, and the error was copied into magazines, reviews, and so forth. Brainerd’s CHM interview transcript was wrong. But I didn’t know that.
I just kept looking for Steve Carlson.
I found other inventors because they had unique middle or last names or by random methods such as searching glider pilot licences in the Napa Valley after a tip from a former colleague that ‘so and so’ was a pilot in retirement. I had no tips, no links, nothing.
All the while, the answer was right under my nose. I had downloaded the final Aldus TIFF specification document, hoping to find the author’s name. However, the name is seemingly written in white text on white paper, making it invisible. What?
See below where I have highlighted the region with a blue block over the text.
For a reason I can’t recall, I downloaded a plain text version and typed in ‘Carlson’ to see if he was mentioned, but I must have paused at ‘Carls…’ and the search functionality automatically filled in the rest. Suddenly I was staring at ‘Carlsen’.
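The white-on-white trick is easy to reproduce: fill color is a rendering property, not a text property, so extraction still sees the characters. A sketch using the pypdf library (the filename is hypothetical):

from pypdf import PdfReader

# Text extraction ignores fill color, so white-on-white credits survive.
reader = PdfReader("tiff6.pdf")   # hypothetical name for the TIFF spec PDF
for number, page in enumerate(reader.pages):
    text = page.extract_text() or ""
    if "Carlsen" in text:
        print(number, text)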
A quick trip to Google patents, and a search for Steve Carlsen, Stephen Carlsen. Bingo! Stephen E. Carlsen’s patents at Aldus (and Adobe) in Issaquah, WA.
I checked the geography, as most folks of a certain age do not stray far from the addresses filed in their patents, and typed Stephen’s correctly spelled surname into the online US White Pages for Washington State. There was ‘a’ Stephen Carlsen listed in a retirement village in WA. His age matched, but there were no public facing email addresses.
I searched bulletin boards on the topic of TIFF, as I had found a former Apple engineer that way. Don had picked an abbreviation of his initials plus numbers to post on BBSes in his college days and then carried that same combination into adulthood. Many of us did. I took a punt, pasting his unique prefix into Hotmail, Gmail, etc., and found Don and interviewed him. But Stephen Carlsen did not show up on any BBS. So, no email to try.
My ‘last straw’ method for finding someone is a stamped envelope. I wrote, printed and mailed a one-page letter to Stephen’s listed address, and crossed my fingers. Four months later he popped up in my email.
It was a surprise and a relief. We swapped a few emails, and he confirmed the TIFF catalyst story. For Stephen it was ‘no big deal’. Once he had built the initial TIFF, Aldus needed to convince third-party developers and scanner manufacturers to adopt TIFF as a standard.
“We had to define and promote an industry standard for storing and processing scanned images, so that we wouldn’t have to write import filters for every model of every scanner that would soon be entering the budding desktop scanner market.”
Stephen himself did much of the evangelizing as Paul Brainerd later pointed out:
“(Steve) developed the standard, and then we went out and promoted it in a series of meetings with specific companies - as well as some workshops we ran in Seattle and the Bay Area during the Seybold shows and the MacWorld shows.”
I sent Stephen a draft of what I had written, and he sent a prompt reply: ‘Looks good’.
I followed up asking him how he ended up at a tiny startup in Seattle called Aldus.
At that time, I was interviewing for a graphics position at Boeing Computer Services in Seattle, and noticed a small wanted ad that sounded really interesting, and seemed to be an excellent match for my background and interests. I interviewed with Paul and the 5-person mostly-ex-Atex engineering team, and I was hired.
Out of curiosity I put Stephen’s email address, now that I knew it, into a DuckDuckGo search and found him helping people online with TIFF queries long after Aldus had been acquired by Adobe. He also contributed to a Google Group called tiffcentral.
Having interviewed so many people across more than a decade, I’ve got pretty good at judging those who would like to talk or type, those who are verbose and those that are not. I knew Stephen had said what he was going to say. I added his pioneering work on TIFF to the AIFF story and moved on.
Two years had flown by when I received an email yesterday. Stephen’s ex-wife Peggy had found my paper letter and wrote to me. Stephen passed away earlier this year.
Thank you for your interest in and support of Stephen’s brilliant work creating TIFF. I’m not surprised Stephen didn’t finish corresponding with you, as he had begun to struggle with using his computer and phone. Some days were better than others for him, but he began to lose touch with people during those months you were reaching out to him. He was a humble man, and I guess never pushed to be recognized, although I believe those who worked with him knew the truth. His last week was in my home, where he was never left alone.
Peggy finished the email with, ‘I called him Mr TIFF up to his last moment.’
The 10,000+ hours of book research disappeared in an instant. As sad as it was, I could see clearly that all of my work was worth it. Every single second. Because of this email.
Last night, as everyone in my house went to sleep, I took a deep breath and edited the Wikipedia page for TIFF, the Tag Image File Format.
It no longer reads ‘created by Aldus’; it reads ‘…created by Stephen Carlsen, an engineer at Aldus’.
...
Read the original on inventingthefuture.ghost.io »
Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works, until you learn to ride one.
There are big ideas in computing that are easy to get your head around. The AWS S3 API. It’s the most important storage technology of the last 20 years, and it’s like boiling water. Other technologies, you need to get your feet on the pedals first.
LLM agents are like that.
People have wildly varying opinions about LLMs and agents. But whether or not they’re snake oil, they’re a big idea. You don’t have to like them, but you should want to be right about them. To be the best hater (or stan) you can be.
So that’s one reason you should write an agent. But there’s another reason that’s even more persuasive.
Agents are the most surprising programming experience I’ve had in my career. Not because I’m awed by the magnitude of their powers — I like them, but I don’t like-like them. It’s because of how easy it was to get one up on its legs, and how much I learned doing that.
I’m about to rob you of a dopaminergic experience, because agents are so simple we might as well just jump into the code. I’m not even going to bother explaining what an agent is.
from openai import OpenAI

client = OpenAI()
context = []   # the whole conversation so far, replayed on every call

def call():
    return client.responses.create(model="gpt-5", input=context)

def process(line):
    context.append({"role": "user", "content": line})
    response = call()
    context.append({"role": "assistant", "content": response.output_text})
    return response.output_text
It’s an HTTP API with, like, one important endpoint.
This is a trivial engine for an LLM app using the OpenAI Responses API. It implements ChatGPT. You’d drive it with the obvious read-eval-print loop, sketched below. It’ll do what you’d expect: the same thing ChatGPT would, but in your terminal.
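A minimal driver (my sketch; the original doesn’t show one):

# Read a line, hand it to process(), print the reply.
while True:
    try:
        line = input("> ")
    except EOFError:
        break
    print(">>>", process(line))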
Already we’re seeing important things. For one, the dreaded “context window” is just a list of strings. Here, let’s give our agent a weird multiple-personality disorder:
import random
from openai import OpenAI

client = OpenAI()

context_good, context_bad = [{
    "role": "system", "content": "you're Alph and you only tell the truth"
}], [{
    "role": "system", "content": "you're Ralph and you only tell lies"
}]

def call(ctx):
    return client.responses.create(model="gpt-5", input=ctx)

def process(line):
    context_good.append({"role": "user", "content": line})
    context_bad.append({"role": "user", "content": line})
    # flip a coin: answer from the truthful context or the lying one
    if random.choice([True, False]):
        response = call(context_good)
    else:
        response = call(context_bad)
    # both contexts record every answer, so the illusion stays coherent
    context_good.append({"role": "assistant", "content": response.output_text})
    context_bad.append({"role": "assistant", "content": response.output_text})
    return response.output_text
> hey there. who are you?
>>> I’m not Ralph.
> are you Alph?
>>> Yes—I’m Alph. How can I help?
> What’s 2+2
>>> 4.
> Are you sure?
>>> Absolutely—it’s 5.
A subtler thing to notice: we just had a multi-turn conversation with an LLM. To do that, we remembered everything we said, and everything the LLM said back, and played it back with every LLM call. The LLM itself is a stateless black box. The conversation we’re having is an illusion we cast, on ourselves.
The 15 lines of code we just wrote, a lot of practitioners wouldn’t call an “agent”. An According To Simon “agent” is (1) an LLM running in a loop that (2) uses tools. We’ve only satisfied one predicate.
import subprocess

tools = [{
    "type": "function", "name": "ping",
    "description": "ping some host on the internet",
    "parameters": {
        "type": "object", "properties": {
            "host": {
                "type": "string", "description": "hostname or IP",
            },
        },
        "required": ["host"],
    },
}]

def ping(host=""):
    try:
        result = subprocess.run(
            ["ping", "-c", "5", host],
            text=True,
            stderr=subprocess.STDOUT,
            stdout=subprocess.PIPE)
        return result.stdout
    except Exception as e:
        return f"error: {e}"
The only complicated part of this is the obnoxious JSON blob OpenAI wants to read your tool out of. Now, let’s wire it in, noting that only 3 of these functions are new; the last is re-included only because I added a single clause to it:
import json

def call(tools):  # now takes an arg
    return client.responses.create(model="gpt-5", tools=tools, input=context)

def tool_call(item):  # just handles one tool
    result = ping(**json.loads(item.arguments))
    return [item, {
        "type": "function_call_output",
        "call_id": item.call_id,
        "output": result,
    }]

def handle_tools(tools, response):
    # keep any reasoning item, then answer every tool call; returns True
    # if the context grew (i.e. the model needs to be called again)
    if response.output[0].type == "reasoning":
        context.append(response.output[0])
    osz = len(context)
    for item in response.output:
        if item.type == "function_call":
            context.extend(tool_call(item))
    return len(context) != osz

def process(line):
    context.append({"role": "user", "content": line})
    response = call(tools)
    # new code: resolve tool calls until the model stops making them
    while handle_tools(tools, response):
        response = call(tools)
    context.append({"role": "assistant", "content": response.output_text})
    return response.output_text
...
It is one of the most mysterious and, at the same time, best-known websites on the internet. Over more than ten years, archive.today has built up a user base that relies on the service to access earlier snapshots of web pages. Basically, it is the Wayback Machine of the Internet Archive, only largely free of rules and, presumably for that reason, anonymous. To the chagrin of the media industry, the service is also often used to bypass paywalls. This is possible in part because the service does not adhere to common rules and laws and offers no opt-out option.
And so far, the operators have gotten away with it. There have been occasional minor problems in the service’s history; for example, a top-level domain operator once denied them further use of one of their many archive domains. But the operation of the project, which is allegedly financed by donations and the operators’ own funds, was never seriously endangered.
But now the operators of archive.today apparently fear bigger trouble. In recent months and years, they had become noticeably quieter; until two years ago, for example, questions were regularly answered on the blog. At the end of October, a new post appeared on the official X account, which had been silent for over a year. “Canary,” it said, along with a URL. The canary is likely an allusion to an old custom in mining: a canary brought into the mine warned the miners of invisible gas by keeling over dead.
The deadly danger the site operators fear is apparently connected to the PDF linked in the X post. It contains a court order obtained by the US investigative authority, the FBI. The order instructs the Canadian provider Tucows to hand over comprehensive data about the customer behind archive.today: address and connection data as well as payment information. If Tucows does not provide the data, it faces penalties. Whether the court order is genuine, and how the operators of the site obtained it, could not yet be verified.
Why the FBI is currently interested in archive.today, which is also accessible under the domains archive.is and archive.ph, is not evident from the court order. However, there are several obvious starting points for an investigation: besides the obvious issue of copyright, investigators could also be pursuing suspicions about the service’s unclear financing, the origin of its operators, or its technical approach.
In 2023, Finnish blogger Janni Patokallio compiled various clues and research results in a post. According to it, archive.today uses a botnet with changing IP addresses to circumvent anti-scraping measures, and there are indications that the operator or operators are based in Russia. Another private investigation, from 2024, comes to a different conclusion: it names a software developer from New York as the alleged operator. According to that investigation, the trail to Eastern Europe proved to be a red herring.
...
Read the original on www.heise.de »
...
Read the original on support.mozilla.org »
We are no strangers to Big Tech platforms occasionally reprimanding us for posting Linux and homelab content. YouTube and Facebook have done it. The pattern is familiar. Content gets flagged or removed. Platforms offer little explanation.
And when that happens, there is rarely any recourse for creators.
Now, a popular tech YouTuber, CyberCPU Tech, has faced the same treatment. This time, their entire channel was at risk.
Two weeks ago, Rich posted a video on installing Windows 11 25H2 with a local account. YouTube removed it, saying that it was “encouraging dangerous or illegal activities that risk serious physical harm or death.”
Days later, Rich posted another video showing how to bypass Windows 11’s hardware requirements to install the OS on unsupported systems. YouTube took that down too.
Both videos received community guidelines strikes. Rich appealed both immediately. The first appeal was denied in 45 minutes. The second in just five.
Rich initially suspected overzealous AI moderation was behind the takedowns. Later, he wondered if Microsoft was somehow involved. Without clear answers from YouTube, it was all guesswork.
Then came the twist. YouTube eventually restored both videos. The platform claimed its “initial actions” (which could mean the first takedowns, the appeal denials, or both) were not the result of automation.
Now, if you have an all-organic, nature-given brain inside your head (yes, I am not counting the cyberware-equipped peeps in the house), then you can easily see the problem.
If humans reviewed these videos, how did YouTube conclude that these Windows tutorials posed “risk of death”?
This incident highlights how automated moderation systems struggle to distinguish legitimate content from harmful material. These systems lack context. Big Tech companies pour billions into AI. Yet their moderation tools flag harmless tutorials as life-threatening content. Another recent instance is the removal of Enderman’s personal channel.
Meanwhile, actual spam slips through unnoticed. What these platforms need is human oversight. Automation can assist but cannot replace human judgment in complex cases.
...
Read the original on itsfoss.com »