10 interesting stories served every morning and every evening.
OpenAI is now internally testing ‘ads’ inside ChatGPT that could redefine the web economy.
Up until now, the ChatGPT experience has been ad-free. While there are premium plans and models, you don’t see ChatGPT sell you products or show ads. Google Search, on the other hand, has ads that influence your buying behaviour.
As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”
This move could disrupt the web economy, as what most people don’t understand is that GPT likely knows more about users than Google.
For example, OpenAI could serve personalised ads on ChatGPT that promote products you genuinely want to buy. It could also slip sponsored results into search answers, much like Google Search ads.
The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.
ChatGPT has roughly 800 million people using it every week, up from 100 million weekly users in November 2023 and about 300 million weekly users in late 2024.
An OpenAI-backed study estimated 700 million users sending 18 billion messages per week by July 2025, which lines up with this growth, and other analysts now peg traffic at around 5–6 billion visits per month.
GPT handles about 2.5 billion prompts a day, and India has become the single biggest user base, ahead of the US.
ChatGPT has everything it needs for ads to succeed. What do you think?
...
Read the original on www.bleepingcomputer.com »
More than a decade ago, when I was applying to graduate school, I went through a period of deep uncertainty. I had tried the previous year and hadn’t gotten in anywhere. I wanted to try again, but I had a lot going against me.
I’d spent most of my undergrad building a student job-portal startup and hadn’t balanced it well with academics. My GPA needed explaining. My GMAT score was just okay. I didn’t come from a big-brand employer. And there was no shortage of people with similar or stronger profiles applying to the same schools.
Even though I had learned a few things from the first round, the second attempt was still difficult. There were multiple points after I submitted applications where I lost hope.
But during that stretch, a friend and colleague kept repeating one line to me:
“All it takes is for one to work out.”
He’d say it every time I spiraled. And as much as it made me smile, a big part of me didn’t fully believe it. Still, it became a little maxim between us. And eventually, he was right — that one did work out. And it changed my life.
I’ve thought about that framing so many times since then.
You don’t need every job to choose you. You just need the one that’s the right fit.
You don’t need every house to accept your offer. You just need the one that feels like home.
You don’t need every person to want to build a life with you. You just need the one.
You don’t need ten universities to say yes. You just need the one that opens the right door.
These processes — college admissions, job searches, home buying, finding a partner — can be emotionally brutal. They can get you down in ways that feel personal. But in those moments, that truth can be grounding.
All it takes is for one to work out.
And that one is all you need.
...
Read the original on alearningaday.blog »
Iceland has taken the rare step of treating a climate-linked ocean threat as a matter of national survival, launching a coordinated government response to one of the most feared potential tipping points in the climate system.
Officials say the shift reflects mounting evidence that a key Atlantic current system could be heading toward dangerous instability.
According to CNN, Iceland’s National Security Council formally labelled the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) a national security risk in September — the first time the country has applied such a designation to a climate impact.
The move followed a government briefing on new research that raised “grave concerns” about the system’s future stability.
Jóhann Páll Jóhannsson, Iceland’s minister for environment, energy and climate, said the risks extend far beyond weather.
“Our climate, economy and security are deeply tied to the stability of the ocean currents around us,” he told CNN.
He later described the threat as “an existential threat,” warning that a breakdown could disrupt transport, damage infrastructure and hit the country’s fishing industry.
The AMOC — often compared to a giant conveyor belt — carries warm water northward before it cools and sinks, helping regulate weather across the Atlantic basin.
CNN reported that scientists increasingly worry that warming temperatures and disrupted salinity levels are slowing the system.
Some studies suggest a tipping point could be reached this century, though the exact timeline remains uncertain.
Stefan Rahmstorf, an oceanographer at Potsdam University, told CNN that a collapse “cannot be considered a low likelihood risk anymore.”
The consequences, he said, would be dramatic: surging sea levels along US and European coasts, major monsoon disruptions across Africa and Asia, and a deep freeze across parts of Europe.
For Iceland, he said, the country “would be close to the center of a serious regional cooling,” with sea ice potentially surrounding the island.
The security designation means Iceland will now pursue a high-level, cross-government effort to analyse the threat and consider how to manage or reduce the consequences. Jóhannsson said the decision
“reflects the seriousness of the issue and ensures that the matter gets the attention it deserves.”
Rahmstorf praised Iceland’s stance, telling CNN that other nations should treat the risk with similar urgency.
Jóhannsson said the country is confronting a stark possibility: “What we do know is that the current climate might change so drastically that it could become impossible for us to adapt… this is not just a scientific concern — it’s a matter of national survival and security.”
...
Read the original on www.dagens.com »
In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company including YouTube and the bit of Cloud responsible for deploying AI capacity, so I’m quite well placed to have an opinion here.
The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you’ve not worked specifically in this area before, I’ll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious.
The first reason for doing this that seems to come up is abundant access to power in space. This really isn’t the case. You basically have two options: solar and nuclear. Solar means deploying a solar array with photovoltaic cells — something essentially equivalent to what I have on the roof of my house here in Ireland, just in space. It works, but it isn’t somehow magically better than installing solar panels on the ground — you don’t lose that much power through the atmosphere, so intuition about the area needed transfers pretty well. The biggest solar array ever deployed in space is that of the International Space Station (ISS), which at peak can deliver a bit over 200kW of power. It is important to mention that it took several Shuttle flights and a lot of work to deploy this system — it measures about 2500 square metres, over half the size of an American football field.
Taking the NVIDIA H200 as a reference, the per-GPU-device power requirements are on the order of 0.7kW per chip. These won’t work on their own, and power conversion isn’t 100% efficient, so in practice 1kW per GPU might be a better baseline. A huge, ISS-sized array could therefore power roughly 200 GPUs. This sounds like a lot, but let’s keep some perspective: OpenAI’s upcoming Norway datacenter is intending to house 100,000 GPUs, probably each more power hungry than the H200. To equal this capacity, you’d need to launch 500 ISS-sized satellites. In contrast, a single server rack (as sold by NVIDIA preconfigured) will house 72 GPUs, so each monster satellite is only equivalent to roughly three racks.
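As a sanity check, the power arithmetic above can be run directly. The figures are taken from the text; the 1kW-per-GPU allowance for conversion losses is the author's own rough baseline, not a measured number:

```python
# Back-of-envelope power budget using the article's figures.
ISS_ARRAY_KW = 200        # peak output of the ISS solar array
KW_PER_GPU = 1.0          # H200 (~0.7 kW) plus conversion/support overhead
GPUS_PER_RACK = 72        # NVIDIA preconfigured rack

# One ISS-sized solar array can feed this many GPUs:
gpus_per_iss_array = ISS_ARRAY_KW / KW_PER_GPU          # 200 GPUs
# ...which is only a few ground-based racks' worth:
racks_equivalent = gpus_per_iss_array / GPUS_PER_RACK   # ~2.8 racks

# Matching the planned 100,000-GPU Norway datacenter:
target_gpus = 100_000
satellites_needed = target_gpus / gpus_per_iss_array    # 500 ISS-sized satellites

print(gpus_per_iss_array, round(racks_equivalent, 1), satellites_needed)
```

The numbers are crude, but the conclusion is insensitive to the details: even doubling solar efficiency only halves the satellite count.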
Nuclear won’t help. We are not talking nuclear reactors here — we are talking about radioisotope thermal generators (RTGs), which typically have a power output of about 50W - 150W. So not enough to even run a single GPU, even if you can persuade someone to give you a subcritical lump of plutonium and not mind you having hundreds of chances to scatter it across a wide area when your launch vehicle explosively self-disassembles.
I’ve seen quite a few comments about this concept where people are saying things like, “Well, space is cold, so that will make cooling really easy, right?”
Really, really no.
Cooling on Earth is relatively straightforward. Air convection works pretty well: blowing air across a surface, particularly one designed to have a large surface-area-to-volume ratio like a heatsink, transfers heat to the air quite effectively. If you need more power density than can be directly cooled this way (and higher-power GPUs are definitely in that category), you can use liquid cooling to transfer heat from the chip to a larger radiator/heatsink elsewhere. In datacenters on Earth, it is common to set up cooling loops where machines are cooled by chilled coolant (usually water) pumped around racks, with the heat extracted and cold coolant returned to the loop. Typically the coolant is itself cooled by convection to the air, so one way or another this is how things work on Earth.
In space, there is no air. The environment is close enough to a hard, total vacuum as makes no practical difference, so convection just doesn’t happen. On the space engineering side, we typically think about thermal management, not just cooling. The thing is, space doesn’t really have a temperature as such; only materials have a temperature. It may come as a surprise, but in the Earth-Moon system the equilibrium temperature of pretty much any object is close to Earth’s average temperature, because both are set by the same balance of absorbed sunlight and radiated heat. If a satellite is rotating, a bit like a chicken on a rotisserie, it will tend toward a consistent temperature roughly similar to that of Earth’s surface. If it isn’t rotating, the side pointing away from the sun will get progressively colder, with a floor set by the cosmic microwave background at around 4 Kelvin, just a little bit above absolute zero. On the sunward side, things can get a bit cooked, hitting hundreds of degrees Celsius. Thermal management therefore requires very careful design, making sure that heat is carefully directed where it needs to go. Because there is no convection in a vacuum, heat can only be moved around the spacecraft by conduction or some kind of heat pump, and can only finally be rejected to space by radiation.
I’ve designed space hardware that has flown in space. In one particular case, I designed a camera system that needed to be very small and lightweight, whilst still providing science-grade imaging capabilities. Thermal management was front and centre in the design process — it had to be, because power is scarce in small spacecraft, and thermal management has to be achieved whilst keeping mass to a minimum. So no heat pumps or fancy stuff for me — I went in the other direction, designing the system to draw a maximum of about 1 watt at peak, dropping to around 10% of that when the camera was idle. All this electrical power turns into heat, so if I can draw 1 watt only while capturing an image, then turn the image sensor off as soon as the data is in RAM, I can halve the consumption, then when the image has been downloaded to the flight computer I can turn the RAM off and drop the power down to a comparative trickle. The only thermal management needed was bolting the edge of the board to the chassis so the internal copper planes in the board could transfer any heat generated.
Cooling even a single H200 will be an absolute nightmare. Clearly a heatsink and fan won’t do anything at all, but there is a liquid cooled H200 variant. Let’s say this was used. This heat would need to be transferred to a radiator panel — this isn’t like the radiator in your car, no convection, remember? — which needs to radiate heat into space. Let’s assume that we can point this away from the sun.
The Active Thermal Control System (ATCS) on the ISS is an example of such a thermal control system. It is a very complex system, using an ammonia cooling loop and a large thermal radiator panel system. It has a dissipation limit of 16kW, so roughly 16 H200 GPUs, a bit under a quarter of a ground-based rack. The thermal radiator panel system measures 13.6m x 3.12m, i.e., roughly 42.5 square metres. If we use 200kW as a baseline and assume all of that power is fed to GPUs, we’d need a system 12.5 times bigger, i.e., roughly 531 square metres, about a fifth of the area of the solar array feeding it. This is now going to be a very large satellite, dwarfing the ISS in area, all for the equivalent of three standard server racks on Earth.
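The radiator sizing follows from simple proportional scaling of the ATCS figures. This is a sketch using the article's numbers only; real radiator sizing also depends on panel temperature, emissivity, and view factors:

```python
# Scale the ISS Active Thermal Control System (ATCS) to a 200 kW load.
ATCS_DISSIPATION_KW = 16         # ATCS heat-rejection limit
ATCS_RADIATOR_M2 = 13.6 * 3.12   # ~42.4 m^2 of radiator panel

TARGET_KW = 200                  # the whole ISS-sized solar array's output,
                                 # all of which ends up as heat

scale = TARGET_KW / ATCS_DISSIPATION_KW       # 12.5x the ATCS
radiator_m2 = ATCS_RADIATOR_M2 * scale        # ~530 m^2 of radiator

# At the article's 1 kW-per-GPU baseline, the ATCS alone covers:
gpus_cooled_by_atcs = ATCS_DISSIPATION_KW / 1.0   # ~16 H200s

print(round(radiator_m2, 1), scale, gpus_cooled_by_atcs)
```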
This is getting into my PhD work now. Assuming you can both power and cool your electronics in space, you have the further problem of radiation tolerance.
The first question is where in space?
If you are in low Earth orbit (LEO), you are below the inner radiation belt, where the radiation dose is similar to that experienced by high-altitude aircraft — more than an airliner, but not terrible. Further out, in medium Earth orbit (MEO), where the GPS satellites live, spacecraft are not protected by the Van Allen belts — worse, that orbit is literally inside them. Outside the belts, you are essentially in deep space (details vary with how close to the Sun you happen to be, but the principles are similar).
There are two main sources of radiation in space — from our own star, the Sun, and from deep space. This basically involves charged particles moving at a substantial percentage of the speed of light, from electrons to the nuclei of atoms with masses up to roughly that of oxygen. These can cause direct damage, by smashing into the material from which chips are made, or indirectly, by travelling through the silicon die without hitting anything but still leaving a trail of charge behind them.
The most common consequence is a single-event upset (SEU): a direct impact or (more commonly) a particle passing through a transistor briefly (approx 600 picoseconds) causes a pulse where there shouldn’t be one. If that pulse flips a stored bit, we call it an SEU. Other than corrupting data, SEUs don’t cause permanent damage.
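A classic mitigation for SEUs (not described in the article, shown purely as illustration) is triple modular redundancy: keep three copies of every value and majority-vote on each read, so a single flipped bit in one copy is out-voted by the other two. A toy sketch — real radiation-hard designs do this at the circuit level, not in software:

```python
# Toy illustration of triple modular redundancy (TMR) masking an SEU.
def majority_vote(a: int, b: int, c: int) -> int:
    # Bitwise majority: each output bit is 1 iff at least two inputs agree.
    return (a & b) | (a & c) | (b & c)

stored = [0b1011_0010] * 3    # three redundant copies of one byte
stored[1] ^= 0b0001_0000      # a single-event upset flips one bit in one copy
corrected = majority_vote(*stored)
assert corrected == 0b1011_0010   # the upset is masked
```

Note the cost: three times the storage and logic for every protected value, which is exactly the kind of overhead that makes genuinely radiation-tolerant chips so much slower than their terrestrial cousins.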
Worse is single-event latch-up. This happens when a pulse from a charged particle causes a voltage to go outside the power rails powering the chip, causing a transistor essentially to turn on and stay on indefinitely. I’ll skip the semiconductor physics involved, but the short version is that if this happens in a bad way, you can get a pathway connected between the power rails that shouldn’t be there, burning out a gate permanently. This may or may not destroy the chip, but without mitigation it can make it unusable.
For longer duration missions, which would be the case with space based datacenters because they would be so expensive that they would have to fly for a long time in order to be economically viable, it’s also necessary to consider total dose effects. Over time, the performance of chips in space degrades, because repeated particle impacts make the tiny field-effect transistors switch more slowly and turn on and off less completely. In practice, this causes maximum viable clock rates to decay over time, and for power consumption to increase. Though not the hardest issue to deal with, this must still be mitigated or you tend to run into a situation where a chip that was working fine at launch stops working because either the power supply or cooling has become inadequate, or the clock is running faster than the chip can cope with. It’s therefore necessary to have a clock generator that can throttle down to a lower speed as needed — this can also be used to control power consumption, so rather than a chip ceasing to function it will just get slower.
The next FAQ is: can’t you just use shielding? No, not really, or only up to a point. Some kinds of shielding can make the problem worse: an impact on the shield can cause a shower of secondary particles that then cause multiple impacts at once, which is far harder to mitigate. The very strongest cosmic rays can pass through an astonishing amount of solid lead — since mass is always at a premium, it’s rarely possible to deploy significant amounts of shielding, so radiation tolerance must be built into the system (this is often described as Radiation Hardness By Design, RHBD).
GPUs and TPUs and the high bandwidth RAM they depend on are absolutely worst case for radiation tolerance purposes. Small geometry transistors are inherently much more prone both to SEUs and latch-up. The very large silicon die area also makes the frequency of impacts higher, since that scales with area.
Chips genuinely designed to work in space are taped out with different gate structures and much larger geometries. The processors that are typically used have the performance of roughly a 20-year-old PowerPC from 2005. Bigger geometries are inherently more tolerant, both to SEUs and total dose, and the different gate topologies are immune to latch up, whilst providing some degree of SEU mitigation via fine-grained redundancy at the circuit level. Taping out a GPU or TPU with this kind of approach is certainly possible, but the performance would be a tiny fraction of that of a current generation Earth-based GPU/TPU.
There is a you-only-live-once (my terminology!) approach, where you launch the thing and hope for the best. This is commonplace in small cubesats, and also why small cubesats often fail after a few weeks on orbit. Caveat emptor!
Most satellites communicate with the ground via radio, and it is difficult to get much more than about 1Gbps reliably. There is some interesting work on using lasers to communicate with satellites, but this depends on good atmospheric conditions. Contrast that with a typical server rack on Earth, where 100Gbps rack-to-rack interconnect would be considered low end, and it’s easy to see that this is another significant gap.
I suppose this is just about possible if you really want to do it, but I think I’ve demonstrated above that it would be extremely difficult to achieve, disproportionately costly in comparison with Earth-based datacenters, and would offer mediocre performance at best.
If you still think this is worth doing, good luck, space is hard. Myself, I think it’s a catastrophically bad idea, but you do you.
...
Read the original on taranis.ie »
Fed up with trillion-dollar companies exploiting your data? Forced to use their services? Your data held for ransom? Your data used to train their AI models? Opt-outs for data collection instead of opt-ins?
Join the movement to make companies more like Clippy. Set your profile picture to Clippy and make your voice heard.
Below is a video that explains the Be Like Clippy movement. It’s a call to action for developers, companies, and users alike to embrace a more open, transparent, and user-friendly approach to technology.
...
Read the original on be-clippy.com »
Americans have grown sour on one of the longtime key ingredients of the American dream.
Almost two-thirds of registered voters say that a four-year college degree isn’t worth the cost, according to a new NBC News poll, a dramatic decline over the last decade.
Just 33% agree a four-year college degree is “worth the cost because people have a better chance to get a good job and earn more money over their lifetime,” while 63% agree more with the concept that it’s “not worth the cost because people often graduate without specific job skills and with a large amount of debt to pay off.”
In 2017, U.S. adults surveyed were virtually split on the question — 49% said a degree was worth the cost and 47% said it wasn’t. When CNBC asked the same question in 2013 as part of its All American Economic Survey, 53% said a degree was worth it and 40% said it was not.
The eye-popping shift over the last 12 years comes against the backdrop of several major trends shaping the job market and the education world, from exploding college tuition prices to rapid changes in the modern economy — which seems once again poised for radical transformation alongside advances in AI.
“It’s just remarkable to see attitudes on any issue shift this dramatically, and particularly on a central tenet of the American dream, which is a college degree. Americans used to view a college degree as aspirational — it provided an opportunity for a better life. And now that promise is really in doubt,” said Democratic pollster Jeff Horwitt of Hart Research Associates, who conducted the poll along with the Republican pollster Bill McInturff of Public Opinion Strategies.
“What is really surprising about it is that everybody has moved. It’s not just people who don’t have a college degree,” Horwitt added.
National data from the Bureau of Labor Statistics shows that those with advanced degrees earn more and have lower unemployment rates than those with lower levels of education. That’s been true for years.
But what has shifted is the price of college. While there have been some small declines in tuition prices over the last decade, when adjusted for inflation, College Board data shows that the average, inflation-adjusted cost of public four-year college tuition for in-state students has doubled since 1995. Tuition at private, four-year colleges is up 75% over the same period.
Poll respondents who spoke with NBC News all emphasized those rising costs as a major reason why the value of a four-year degree has been undercut.
Jacob Kennedy, a 28-year-old server and bartender living in Detroit, told NBC News that while he believes “an educated populace is the most important thing for a country to have,” if people can’t use those degrees because of the debt they’re carrying, it undercuts the value.
Kennedy, who has a two-year degree, reflected on “the number of people who I’ve met working in the service industry who have four-year degrees and then within a year of graduating immediately quit their ‘grown-up jobs’ to go back to the jobs they had.”
“The cost overwhelms the value,” he continued. “You go to school with all that student debt — the jobs you get out of college don’t pay that debt, so you have to go find something else that can pay that debt.”
The 20-point decline over the last 12 years among those who say a degree is worth it — from 53% in 2013 to 33% now — is reflected across virtually every demographic group. But the shift in sentiment is especially striking among Republicans.
In 2013, 55% of Republicans called a college degree worth it, while 38% said it wasn’t worth it. In the new poll, just 22% of Republicans say the four-year degree is worth it, while 74% say it’s not.
Democrats have seen a significant shift too, but not to the same extent — a decline from 61% who said a degree was worth it in 2013 to 47% this year.
Over the same period, the composition of both parties has changed, with the Republican Party garnering new and deeper support from voters without college degrees, while the Democratic Party drew in more degree-holders.
Remarkably, less than half of voters with college degrees see those degrees as worth the cost: 46% now, down from 63% in 2013.
Those without a college degree were about split on the question in 2013. Now, 71% say a four-year degree is not worth the cost, while 26% say it is.
Preston Cooper, a senior fellow at the right-leaning American Enterprise Institute, said enough cracks have proliferated under the long-standing narrative that a college degree always pays off to create a serious rupture.
“Some people drop out, or sometimes people end up with a degree that is not worth a whole lot in the labor market, and sometimes people pay way too much for a degree relative to the value of what that credential is,” he said. “These cases have created enough exceptions to the rule that a bachelor’s degree always pays off, so that people are now more skeptical.”
The upshot is that interest in technical, vocational and two-year degree programs has soared.
“I think students are more wary about taking on the risk of a four-year or even a two-year degree,” he said. “They’re now more interested in any pathway that can get them into the labor force more quickly.”
Josiah Garcia, a 24-year-old in Virginia, said he recently enrolled in a program to receive a four-year engineering degree after working as an electrician’s apprentice. He said he was motivated to go back to school because he saw the degree as having a direct effect on his future earning potential.
But he added that he didn’t feel that those who sought other degrees in areas like art or theater could say the same.
“A lot of my friends who went to school for art or dance didn’t get the job they thought they could get after graduating,” he said, arguing that degrees for “softer skills” should be cheaper than those in STEM fields.
Jessica Burns, a 38-year-old Iowa resident and bachelor’s degree-holder who works for an insurance company, told NBC News that for her, the worth of a four-year-degree largely depends on the cost.
She went to a community college and then a state school to earn her degree, so she said she graduated without having to spend an “insane” amount of money.
But her husband went to a private college for his degree, and she quipped: “We are going to have student loan debt for him forever.”
Burns said she believes a college degree is “essential for a lot of jobs. You’re not going to get an interview if you don’t have a four-year degree for a lot of jobs in my field.”
But she framed the value of degrees more in terms of how society views them instead of intrinsic value.
“It’s not valuable because it’s brought a bunch of value added, it’s valuable because it’s the key to even getting in the door,” she said. “Our society needs to figure out that if we value it, we need to make it affordable.”
Burns said she believes that a lot more people in her millennial generation are “now saddled with a huge amount of debt, even as successful business professionals,” which will influence how her peers approach paying for college for their children.
There hasn’t just been a decline in the cost-benefit analysis of a degree. Gallup polling also shows a marked decline in public confidence in higher education over the last decade, albeit with a slight increase over the last year.
“This is a political problem. It’s also a real problem for higher education. Colleges and universities have lost that connection they’ve had with a large swath of the American people based on affordability,” Horwitt said. “They’re now seen as out of touch and not accessible to many Americans.”
The NBC News poll surveyed 1,000 registered voters Oct. 24-28 via a mix of telephone interviews and an online survey sent via text message. The margin of error is plus or minus 3.1 percentage points.
...
Read the original on www.nbcnews.com »
Let’s rip the Band-Aid off immediately: if your underlying business process is a mess, sprinkling “AI dust” on it won’t turn it into gold. It will just speed up the rate at which you generate garbage.

In the world of business IT, we get seduced by the shiny new toy. Right now, that toy is artificial intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning. Executives are frantically asking, “What is our AI strategy?”

Like every major technological shift before it, from the steam engine to the spreadsheet, AI does not inherently make an organization smarter. AI, like any other tool, only makes it faster. If you automate a stupid decision, you just make stupid decisions at light speed. If you apply an agentic AI workflow to a bureaucratic nightmare of an approval chain, you haven’t fixed the bureaucracy; you’ve just built a robot that hates its job as much as your employees do.

For decades, traditional software demanded structure: rows, columns, booleans, and fixed fields. If data didn’t fit the box, the computer couldn’t read it. Because computers couldn’t handle the mess, humans handled it (before AI). And humans don’t always follow a flow chart. These processes — like “handling a complex customer complaint” or “brainstorming a marketing campaign” — are often ad hoc, intuitive, and completely undocumented. They live in the heads of your senior staff, not in your SOPs.

If you want to use AI to process unstructured data, you must first bring structure to the workflow itself. You need to improve your process design to account for the ambiguity that AI handles. What is the transformation? What exactly is the human — or now the AI — supposed to extract or deduce from that mess?

The Old Way: an analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.

The AI Way: an AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.

Go back to the whiteboard. Map out your value chain, especially the messy, human-centric parts involving unstructured data that you previously ignored. Find the bottlenecks. Identify the waste. Technology changes.
The rules of business efficiency do not.
It’s always the process, stupid!
And that’s where current AI tools miss the point: they weren’t built for that.
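The “AI Way” contract review only works because “risk” has been given an explicit definition. As a toy illustration of what a rigorously defined transformation looks like — the risk terms and sample text here are hypothetical, and a real pipeline would use an LLM with an equally explicit definition rather than keyword matching:

```python
# Hypothetical sketch: a contract review where "risk" is explicitly defined
# as a fixed list of clause keywords (illustration only).
RISK_TERMS = ("indemnification", "liquidated damages",
              "auto-renewal", "unlimited liability")

def extract_risk_clauses(contract_text: str) -> list[str]:
    """Return the sentences containing any defined risk term."""
    sentences = [s.strip() for s in contract_text.split(".") if s.strip()]
    return [s for s in sentences
            if any(term in s.lower() for term in RISK_TERMS)]

sample = ("Either party may terminate with 30 days notice. "
          "The vendor assumes unlimited liability for data loss. "
          "This agreement is subject to auto-renewal each year.")
flagged = extract_risk_clauses(sample)
assert len(flagged) == 2   # the liability and auto-renewal clauses
```

The point is not the keyword matching; it is that the process had to be pinned down before any automation, fast or slow, could be trusted with it.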
Live long and prosper 😉🖖
Silicon Valley’s AI moat has a leak — it’s called open source
The myth of the impregnable fortress
In Silicon Valley’s strategy suites, people love to tell the story of the impregnable moats. The AI race, so the legend goes, is a game for giants with budgets the size of small nations. Only a handful of US tech corporations can play; the rest of the world watches.
What if the loudest participant in the room isn’t necessarily the leading one?
...
Read the original on its.promp.td »
For those unfamiliar, Zigtools was founded to support the Zig community, especially newcomers, by creating editor tooling such as ZLS, providing building blocks for language servers written in Zig with lsp-kit, working on tools like the Zigtools Playground, and contributing to Zig editor extensions like vscode-zig.
A couple weeks ago, a Zig resource called Zigbook was released with a bold claim of “zero AI” and an original “project-based” structure.
Unfortunately, even a cursory look at the nonsense chapter structure, book content, examples, generic website, or post-backlash issue-disabled repo reveals that the book is wholly LLM slop and the project itself is structured like some sort of sycophantic psy-op, with botted accounts and fake reactions.
We’re leaving out all direct links to Zigbook to not give them any more SEO traction.
We thought that the broad community backlash would be the end of the project, but Zigbook persevered, releasing just last week a brand new feature, a “high-voltage beta” Zig playground.
As we at Zigtools have our own Zig playground (repo, website), our interest was immediately piqued. The form and functionality looked pretty similar and Zigbook even integrated (in a non-functional manner) ZLS into their playground to provide all the fancy editor bells-and-whistles, like code completions and goto definition.
Knowing Zigbook’s history of deception, we immediately investigated the WASM blobs. Unfortunately, the WASM blobs are byte-for-byte identical to ours. This cannot be a coincidence given the two blobs (zig.wasm, a lightly modified version of the Zig compiler, and zls.wasm, ZLS with a modified entry point for WASI) are entirely custom-made for the Zigtools Playground.
We archived the WASM files for your convenience, courtesy of the great Internet Archive:
We proceeded to look at the JavaScript code, which we quickly determined was similarly copied, but with LLM distortions, likely to prevent the code from being completely identical. Still, certain sections were copied one-to-one, like the JavaScript worker data-passing structure and logging (original ZLS playground code, plagiarized Zigbook code).
The following code from both files is identical:
try {
    // @ts-ignore
    const exitCode = wasi.start(instance);
    postMessage({
        stderr: `\n\n---\nexit with exit code ${exitCode}\n---\n`,
    });
} catch (err) {
    postMessage({ stderr: `${err}` });
}
postMessage({
    done: true,
});

onmessage = (event) => {
    if (event.data.run) {
        run(event.data.run);
    }
};
The \n\n---\nexit with exit code ${exitCode}\n---\n is perhaps the most obviously copied string.
Funnily enough, despite copying many parts of our code, Zigbook didn’t copy the most important part of the ZLS integration code, the JavaScript ZLS API designed to work with the ZLS WASM binary’s API. That JavaScript code is absolutely required to interact with the ZLS binary which they did plagiarize. Zigbook either avoided copying that JavaScript code because they knew it would be too glaringly obvious, because they fundamentally do not understand how the Zigtools Playground works, or because they plan to copy more of our code.
To be clear, copying our code and WASM blobs is entirely permissible given that the playground and Zig are MIT licensed. Unfortunately, Zigbook has not complied with the terms of the MIT license at all, and seemingly claims the code and blobs as their own without correctly reproducing the license.
We sent Zigbook a neutral PR correcting the license violations, but they quickly closed it and deleted the description, seemingly to hide their misdeeds.
The original description (also available in the “edits” dropdown of the original PR comment) is reproduced below:
We (@zigtools) noticed you were using code from the Zigtools Playground, including byte-by-byte copies of our WASM blobs and excerpts of our JavaScript source code. This is a violation of the MIT license that the Zigtools Playground is licensed under, alongside a violation of the Zig MIT license (for the zig.wasm blob).

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
We’ve fixed this by adding the licenses in question to your repository. As your repository does not include a direct link to the *.wasm dependencies, we’ve added a license disclaimer on the playground page as well that mentions the licenses.
Zigbook’s aforementioned bad behavior and their continued violation of our license and unwillingness to fix the violation motivated us to write this blog post.
It’s sad that our first blog post is about the plagiarism of our coolest subproject. We challenged ourselves by creating a WASM-based client-side playground to enable offline usage, code privacy, and no server costs.
This incident has motivated us to invest more time into our playground and has generated a couple of ideas:
* We’d like to enable multifile support to allow more complex Zig projects to be run in the browser
* We'd like to collaborate with fellow Ziguanas to integrate the playground into their excellent Zig tutorials, books, and blog posts
* A perfect example usecase would be enabling folks to hop into Ziglings online with the playground
* The Zig website itself would be a great target as well!
* We’d like to support stack traces using DWARF debug info which is not yet emitted by the self-hosted Zig compiler
As Zig community members, we advise all other members of the Zig community to steer clear of Zigbook.
If you're looking to learn Zig, we strongly recommend the excellent official Zig learn page, which contains resources ranging from the previously mentioned Ziglings to Karl Seguin's Learning Zig.
We’re also using this opportunity to mention that we’re fundraising to keep ZLS sustainable for our only full-time maintainer, Techatrix. We’d be thrilled if you’d be willing to give just $5 a month. You can check out our OpenCollective or GitHub Sponsors.
...
Read the original on zigtools.org »
What can researchers do if they suspect that their manuscripts have been peer reviewed using artificial intelligence (AI)? Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work.
Graham Neubig, an AI researcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who received peer reviews that seemed to have been produced using large language models (LLMs). The reports, he says, were “very verbose with lots of bullet points” and requested analyses that were not “the standard statistical analyses that reviewers ask for in typical AI or machine-learning papers.”
But Neubig needed help proving that the reports were AI-generated. So, he posted on X (formerly Twitter) and offered a reward for anyone who could scan all the conference submissions and their peer reviews for AI-generated text. The next day, he got a response from Max Spero, chief executive of Pangram Labs in New York City, which develops tools to detect AI-generated text. Pangram screened all 19,490 studies and 75,800 peer reviews submitted for ICLR 2026, which will take place in Rio de Janeiro, Brazil, in April. Neubig and more than 11,000 other AI researchers will be attending.
Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use. The findings were posted online by Pangram Labs. “People were suspicious, but they didn’t have any concrete proof,” says Spero. “Over the course of 12 hours, we wrote some code to parse out all of the text content from these paper submissions,” he adds.
The conference organizers say they will now use automated tools to assess whether submissions and peer reviews breached policies on using AI in submissions and peer reviews. This is the first time that the conference has faced this issue at scale, says Bharath Hariharan, a computer scientist at Cornell University in Ithaca, New York, and senior programme chair for ICLR 2026. “After we go through all this process … that will give us a better notion of trust.”
The Pangram team used one of its own tools, which predicts whether text was generated or edited by LLMs. Pangram's analysis flagged 15,899 peer reviews that were fully AI-generated. But it also identified suspected AI-generated text in many manuscripts submitted to the conference: 199 manuscripts (about 1%) were fully AI-generated, 61% of submissions were mostly human-written, and 9% contained more than 50% AI-generated text.
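The reported shares are easy to sanity-check against the stated totals. A quick arithmetic check (not Pangram's methodology) using only the numbers in the article:

```javascript
// Sanity-check the reported proportions against the stated totals.
const totalReviews = 75800;  // peer reviews screened
const aiReviews = 15899;     // flagged fully AI-generated
const totalPapers = 19490;   // studies screened
const aiPapers = 199;        // flagged fully AI-generated

const reviewShare = aiReviews / totalReviews; // ≈ 0.21, i.e. "around 21%"
const paperShare = aiPapers / totalPapers;    // ≈ 0.01, i.e. "about 1%"

console.log((reviewShare * 100).toFixed(1) + "%", (paperShare * 100).toFixed(1) + "%");
```

Both figures line up with the percentages quoted in the piece.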
Pangram described the model in a preprint1, which it submitted to ICLR 2026. Of the four peer reviews received for the manuscript, one was flagged as fully AI-generated and another as lightly AI-edited, the team’s analysis found.
AI is transforming peer review — and many scientists are worried
For many researchers who received peer reviews for their submissions to ICLR, the Pangram analysis confirmed what they had suspected. Desmond Elliott, a computer scientist at the University of Copenhagen, says that one of three reviews he received seemed to have missed “the point of the paper”. His PhD student who led the work suspected that the review was generated by LLMs, because it mentioned numerical results from the manuscript that were incorrect and contained odd expressions.
When Pangram released its findings, Elliott adds, “the first thing I did was I typed in the title of our paper because I wanted to know whether my student’s gut instinct was correct”. The suspect peer review, which Pangram’s analysis flagged as fully AI-generated, gave the manuscript the lowest rating, leaving it “on the borderline between accept and reject”, says Elliott. “It’s deeply frustrating”.
...
Read the original on www.nature.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.