10 interesting stories served every morning and every evening.
We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now. We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story. The ecosystem is far more mature for systems programming, and many of our contributors already know the language. Going forward, we are rewriting parts of Ladybird in Rust.
When we originally evaluated Rust back in 2024, we rejected it because it’s not great at C++ style OOP. The web platform object model inherits a lot of 1990s OOP flavor, with garbage collection, deep inheritance hierarchies, and so on. Rust’s ownership model is not a natural fit for that.
But after another year of treading water, it’s time to make the pragmatic choice. Rust has the ecosystem and the safety guarantees we need. Both Firefox and Chromium have already begun introducing Rust into their codebases, and we think it’s the right choice for Ladybird too.
Our first target was LibJS, Ladybird’s JavaScript engine. The lexer, parser, AST, and bytecode generator are relatively self-contained and have extensive test coverage through test262, which made them a natural starting point.
I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output. Zero regressions across the board:
No performance regressions on any of the JS benchmarks we track either.
Beyond the test suites, I’ve done extensive testing by browsing the web in a lockstep mode where both the C++ and Rust pipelines run simultaneously, verifying that output is identical for every piece of JavaScript that flows through them.
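The lockstep setup can be pictured with a small sketch. This is not Ladybird’s actual code; the `compile_cpp`/`compile_rust` stubs and the byte-offset report are invented for illustration, standing in for calls into the two real pipelines:

```rust
// Hypothetical sketch of lockstep differential verification: both
// pipelines compile the same source, and the outputs are compared
// byte-for-byte. The stubs below fake a deterministic encoding; in a
// real setup they would call into the C++ generator (over FFI) and
// the new Rust one.

fn compile_cpp(source: &str) -> Vec<u8> {
    source.bytes().map(|b| b.wrapping_add(1)).collect()
}

fn compile_rust(source: &str) -> Vec<u8> {
    source.bytes().map(|b| b.wrapping_add(1)).collect()
}

/// Run both pipelines and fail loudly on the first divergence,
/// reporting the byte offset to aid debugging.
fn lockstep_check(source: &str) -> Result<(), String> {
    let a = compile_cpp(source);
    let b = compile_rust(source);
    if a == b {
        return Ok(());
    }
    let offset = a
        .iter()
        .zip(&b)
        .position(|(x, y)| x != y)
        .unwrap_or(a.len().min(b.len()));
    Err(format!("bytecode diverges at byte {offset}"))
}

fn main() {
    assert!(lockstep_check("let x = 1 + 2;").is_ok());
    println!("pipelines agree");
}
```

The point of running this on live browsing rather than only on test suites is that any divergence surfaces on real-world JavaScript, not just on inputs the tests happened to cover.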
If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time. New Rust code will coexist with existing C++ through well-defined interop boundaries.
We want to be deliberate about which parts get ported and in what order, so the porting effort is managed by the core team. Please coordinate with us before starting any porting work so nobody wastes their time on something we can’t merge.
I know this will be a controversial move, but I believe it’s the right decision for Ladybird’s future. :^)
...
Read the original on ladybird.org »
Brian Merchant, writing for Blood in the Machine, reports that people across the United States are dismantling and destroying Flock surveillance cameras, amid rising public anger that the license plate readers aid U.S. immigration authorities and deportations.
Flock is the Atlanta-based surveillance startup valued at $7.5 billion a year ago and a maker of license plate readers. It has faced criticism for allowing federal authorities access to its massive network of nationwide license plate readers and databases at a time when U.S. Immigration and Customs Enforcement is increasingly relying on data to raid communities as part of the Trump administration’s immigration crackdown.
Flock cameras allow authorities to track where people go and when by taking photos of their license plates from thousands of cameras located across the United States. Flock claims it doesn’t share data with ICE directly, but reports show that local police have shared their own access to Flock cameras and its databases with federal authorities.
While some communities are calling on their cities to end their contracts with Flock, others are taking matters into their own hands.
Merchant reports instances of broken and smashed Flock cameras in La Mesa, California, just weeks after the city council approved the continuation of Flock cameras deployed in the city, despite a clear majority of attendees favoring their shutdown. A local report cited strong opposition to the surveillance technology, with residents raising privacy concerns.
Other cases of vandalism have stretched from California and Connecticut to Illinois and Virginia. In Oregon, six license plate-scanning cameras on poles were cut down and at least one spray-painted. A note left at the base of the severed poles said, “Hahaha get wrecked ya surveilling fucks,” reports Merchant.
According to DeFlock, a project aimed at mapping license plate readers, there are close to 80,000 cameras across the United States. Dozens of cities have so far rejected the use of Flock’s cameras, and some police departments have since blocked federal authorities from using their resources.
A Flock spokesperson, when reached by TechCrunch, did not say whether the company keeps track of how many of its cameras have been destroyed since they were deployed.
...
Read the original on techcrunch.com »
On Christmas Eve, 9 “peer-reviewed” economics papers were quietly retracted by Elsevier, the world’s largest academic publisher.
This includes 7 papers in the International Review of Financial Analysis (a good journal—it has an 18% acceptance rate):
Plus two more retractions in Finance Research Letters (29% acceptance rate):
Two days later, three more papers were retracted at the International Review of Economics & Finance (30% acceptance rate):
All 12 papers had one thing in common: Brian M Lucey, Professor of International Finance and Commodities, Trinity College Dublin — the #1 ranked economics and business school in Ireland — as a co-author.
Lucey published 56 papers in 2025, one paper every 6.5 days. Lmao.
Lucey has published 44 papers in Finance Research Letters alone, an Elsevier journal he edited.
I emailed Lucey for comment, but he did not respond.
Brian Lucey… where have I heard that name before?
Oh yeah, he bullied me on Twitter in 2023.
The stated reason for the retractions was that “review of this submission was overseen, and the final decision was made, by the Editor Brian Lucey, despite his role as a co-author of the manuscript. This compromised the editorial process and breached the journal’s policies.”
In plain terms, Lucey was serving as editor while approving his own papers. The result was a complete bypass of peer review—an abuse of editorial authority that functioned as a citation-cartel scheme.
Apparently this was an open secret in the profession for many years, with EJMR comments going back 5+ years explicitly calling him out as a cheater:
Along with the 12 retractions, Lucey was removed as an editor at 5 journals: International Review of Financial Analysis, the International Review of Economics & Finance, Finance Research Letters, Financial Management, & Energy Finance.
Lucey remains as editor-in-chief at Wiley’s Journal of Economic Surveys.
I emailed Wiley, and they provided me with this statement:
We are aware of these concerns and have investigated Prof. Lucey’s activity on Journal of Economic Surveys. Our research integrity team did not find any concerns regarding conflict of interest or mishandling of papers, nor has Prof. Lucey published any papers in the journal since he joined the editorial team as a co-editor in 2024. We expect full commitment and adherence to our editorial practices and standards, and we will be monitoring the situation to ensure that there is no improper handling of papers at the journal.
In response to Wiley’s statement, one EJMR user wrote: “I am baffled how they could possibly still have confidence in him, given his serious and systematic ethical lapses in editorial positions. Sounds somewhat naive to expect ‘full adherence to our editorial practices and standards’!”
Until being purged from the leadership of these 5 journals, Lucey played a central role in coordinating Elsevier’s Finance Journals Ecosystem, which allows “participating journals to suggest transferring a rejected manuscript to another journal in the system without the need for resubmission and the associated cost.”
That system, and the editors involved, “came under fire last year when a preprint suggested it might facilitate citation stacking as a way to boost journal impact factors. The analysis in the preprint also suggested a citation ring involving Elsevier editors could be at work.”
I emailed the anonymous “Theophilos Nomos” who wrote this paper, but they did not respond to my email.
That pre-print names Samuel Vigne, a finance professor at Luiss Business School, former PhD student of Lucey, and prolific Lucey co-author (they have published at least 33 papers together) as a core node of Lucey’s citation cartel.
Multiple publications by Vigne and Lucey are flagged on PubPeer.
This example neatly illustrates how their co-authorship trading scheme operated:
It describes a draft uploaded to SSRN with three authors:
After submitting that draft to the Elsevier finance ecosystem, that draft was scrubbed from SSRN, and in the final published version, an additional author (Samuel Vigne) was added as a new author, with an “equal contribution” statement. The two versions are otherwise identical, containing the same figures, sections, and text.
Co-authorship trading is only one part of the operation. The other is citation stacking. In this model, a small, tightly linked group funnels an enormous volume of papers into the same handful of journals, then systematically stuffs those papers with citations to one another. The result is a rapid, artificial explosion in citation counts that makes them look like influential geniuses.
Take John Goodell, a professor at the University of Akron and a Lucey co-author. Goodell has published 68 papers in Finance Research Letters alone, a journal edited by Lucey. If each paper contains even a modest 50 references, that amounts to roughly 3,400 citations recycled through a single outlet. In 2024 alone, Goodell published 61 papers. He’s not doing research. He’s farming citations.
Following Lucey’s retractions, Samuel Vigne was removed as the editor-in-chief of International Review of Financial Analysis and Finance Research Letters.
In addition to that anonymous pre-print, there is also a 2025 paper written by actual professors with sophisticated econometric analysis & graph theory which describes the citation cartel in much more detail. The conclusion of that paper is: ”Elsevier ecosystem journals benefited from the creation of the ecosystem … Elsevier journals in the ecosystem have overlapping editors and Elsevier appoints these editors in coordination with a single academic [Brian Lucey] that manages the fleet of ecosystem journals.”
Brian Lucey posted a reply to this paper, which was extremely weak and does not contain any tables or figures. It mostly ignores the data and structural model of the citation ring and instead leans on Lucey’s “lived experience” as an editor (“we have experience shepherding…”), while also nitpicking semantics and phrasing, such as Lucey complaining that they called him a “professor of finance” instead of his full honorific, “professor of international finance and commodities.”
The Elsevier ecosystem web page went live on 4 November 2020, according to Lucey’s rebuttal. Below is a visualization of the network before and after this transition date, which shows a clear distortion of the citation network. During 2021–2025, ecosystem citations per article are 103% higher.
2020 is also the year in which Brian Lucey’s citation profile exhibits an exponential “J-curve”, a hallmark of citation rings. Did he suddenly become a well-respected genius in 2020? Or did he figure out how to cheat the system?
In a comment to Retraction Watch, Lucey further argued that citation cartels are not a crime, because everyone does it.
”Because here’s the thing: Elsevier are aware of [editors publishing in their own journals] as a pretty common practice in finance and economics. We’ve given them evidence of hundreds of instances of this. And nothing has happened, which does raise the question, you know, maybe they’re going to go back and go look at all these. Presumably, they will treat everything the same.” Lucey shared his list of such instances. It includes 240 articles, 133 of which are in Science of the Total Environment, which was delisted from Clarivate’s Web of Science in November.
Dr. Thorsten Beck, in a blog post, confirmed that no, not everyone does it, and yes, it is a crime.
This incident raises an important question: is this common practice across academic journals? And are there rules for editors publishing in ‘their’ journals? As I was editor across three journals for a total of 11 years, I can certainly speak to this (and clearly say NO). I don’t have formal confirmation but I have been told by several independent sources that ultimately even Elsevier realised that this editor was seriously damaging the reputation of the journal, appointing a second editor and then easing out the ‘doubtful’ editor from his responsibilities.
The fallout from the Lucey–Vigne era extends far beyond a handful of retracted PDFs. What it exposes is a structural weakness in how academic “excellence” is manufactured, measured, and monetized. By presiding over a coordinated cluster of journals, a small group of editors effectively gained the ability to print their own academic currency.
However, blaming Lucey and Vigne alone ignores the hand that fed them. Elsevier did not just “allow” this to happen; they engineered the environment for it to flourish, because of incentives: Elsevier’s internal metrics (Impact Factors) directly benefitted from this behavior. It was a symbiotic corruption: the editors received a fast-track to academic stardom, and Elsevier received a high-margin, high-volume production line of citable content.
This is the “paper mill” reimagined for the elite: not a basement operation in a third-world nation, but a polished, corporate-mandated factory within the halls of the world’s most powerful publisher. This is the natural result of a corporate mandate to maximize profits by bundling journals into monopoly-priced packages, forcing universities to pay for the very “prestige” that Elsevier’s own staff helped to dilute. As one EJMR commenter noted, “The tragedy isn’t that they cheated; it’s that the system was designed to let them thrive for a decade before anyone bothered to look at the data.”
The question now is whether Trinity College Dublin will fire Lucey.
They did not respond to my inquiry.
An editor of a psychology journal was offered $1,500 per accepted paper.
Richard Tol, a professor of economics at the University of Sussex, wrote that he was offered $5,000 per paper.
Muhammad Ali Nasir, a professor of Macroeconomics at Leeds University, wrote about how common selling papers is in European finance journals: “I had been made such offers from anonymous emails but I choose not to engage and in one case forwarded the email to EiC. I will be surprised if any editor is not approached by these people.”
This raises a multi-million-euro question: given their documented corruption, are the various “educational consultancies” and special-purpose vehicles operated by Brian Lucey and Samuel Vigne used to circulate ecosystem funds, conference fees, or “consultancy” payouts from authors seeking a shortcut to publication?
Here is a hypothetical outline of how such a cash-flow scheme could function: “Hello [unknown, distant institutions], we offer consulting services: €€€ for excellent advice on how to publish in top-tier finance journals. Our advice yields results.”
I’m not going to provide details on how to corruptly have a paper published. I’m just going to speculate on what could be going on in a situation like this. It could be based on “consultancy fees” for advice on publishing that you or your institution pay to one of those companies. They give some advice, including what papers to cite, etc, and if you follow their advice you are likely to be published in one of their journals. This could be attractive for researchers and institutions in, e.g., China and the Middle East.
Another anonymous economics professor I spoke to told me:
Universities in East and West Asia pay cash bonuses for publications. Some authors hire a broker (many advertise openly on Facebook), other authors contact the editor directly. The cash bonus is shared between the author, broker, and editor. Besides selling papers, they also sell special issues, which allow the guest editors to do what they want. And they sell positions on the editorial board, which are important for promotion to the next academic rank. Some payments are in cash, others in kind. Finally, they organize conferences. Registration fees more than cover the costs of putting on a conference. The conference name suggests it is organized by a society, but it really is Lucey who pockets the profits.
Brian Lucey and Samuel Vigne operate four private companies in Ireland and the UK classified under “other education,” likely functioning as consultancies or special-purpose vehicles for academic or policy work.
The existence of these consultancies warrants investigation into potential conflicts of interest and financial misconduct.
...
Read the original on www.chrisbrunet.com »
Why are 21st century products still protected with 1950s materials like expanded polystyrene (EPS) that persist in landfill for centuries? EPS now carries commercial risk through plastic taxes and reputational risk through environmental impact. There is a better way.
Mushroom® Packaging is grown from mycelium and agricultural by-products to form a high-performance protective material. It matches EPS for strength and cost while eliminating persistent plastic waste. The finished material is fully dried and biologically inactive before it leaves our facility, so it will not grow or sprout.
As Europe’s first industrial scale mycelium packaging manufacturer, MMC proves that sustainability can operate at scale and at cost parity.
Since 2020 we have produced millions of units, removing thousands of tonnes of EPS from supply chains. In 2026 alone we will manufacture around ten million more pieces, displacing thousands of additional tonnes.
Leading brands including BA Kitchens, Renais Gin, ICAX Heat Pumps, Tom Dixon, Raymarine and Flextronics trust MMC to protect their products and reputations.
Regulation is tightening. Customers are demanding change. Businesses still dependent on EPS risk being left behind.
...
Read the original on magicalmushroom.com »
Wildfire Games, an international group of volunteer game developers, proudly announces the release of 0 A.D. Release 28: “Boiorix”, the twenty-eighth version of 0 A.D., a free, open-source real-time strategy game of ancient warfare. The release is named after Boiorix, king of the Germanic Cimbri tribe.
Download and installation instructions are available for Windows, Linux, and macOS. 0 A.D. is free software. This means you can download, redistribute, modify and contribute to the application under the same licenses: GNU General Public License version 2 (GPL v2) for code and Creative Commons Attribution Share-Alike 3.0 (CC-BY-SA 3.0) for artwork. Although you might find some people selling copies of 0 A.D., either over the internet or on physical media, you will always have the option to download 0 A.D. completely gratis, directly from the developers. No “freemium” model, no in-game advertising, no catch.
Don’t forget to deactivate every mod before updating the game to avoid any risk of conflict. If you’re a mod creator, please look at this page on how to port your mod to the new version. As always, feel free to reach out to us for assistance.
Now is the time to contribute!
Release 28 is our first release without the Alpha label: our development process has matured, our releases are more frequent, and our commitment to quality has never been higher. Now is the time to join us and place 0 A.D. in the spotlight. We need your help to make the game flourish and to bring new features to life.
As you can see, this release unfortunately comes without a video trailer. It is difficult for the current team to spread the word about our beloved game. We are in sore need of contributors in the following areas:
Of course, we are also always looking for, and providing a welcoming contribution environment for:
Translators (get started right away on Transifex)
and of course, Developers and Artists — the team will gladly welcome contributions in all areas.
You can also support us by simply donating. This allows us to pay the server hosting fees for our multiplayer, websites, and development environments.
Terror Germanicus, the fear of the Germanic tribes migrating south, from the Jutland region, towards the Roman Republic, is coming to 0 A.D. in Release 28.
The Cimbri were a large group of Germanic peoples originally from the north of modern-day Denmark. In the late 2nd century BC, their migration south into Italy and France would spark the decade-long Cimbrian War against the Roman Republic. Accompanied by powerful armies and seeresses, Germanic convoys, in long trains of wagons, brought livestock, shelter and goods. The Cimbri placed great importance on animals for religious sacrifices.
In 0 A.D., we represent the nomadic coalition formed between the Cimbri, the Teutones, the Ambrones, and other Celto-German tribes simply as the “Germans”. The Germans are a semi-nomadic civilization with a flexible economy owing to Supply Wagons and Wagon Encampments, which can be fortified. The unique technologies “Wagon Trains” and “Migratory Resettlement” lean into this flexibility, reducing dependence on territorial boundaries. The Germans also feature an aggressive lineup of siege units, with a crush-dealing unit available in each phase. Between their economic flexibility and unique military units, Cimbrian raiders, Log Rams, and Seeresses, the Germans are a mysterious force to be reckoned with.
Play with this new faction, against their historical Roman foes, or turn history around by making them battle the 14 other factions of the game. Many other novelties await you in the new release of 0 A.D.!
In an effort to improve historical consistency, we have replaced the visual appearance of civilian units. Previously described as a “female citizen”, the basic economic unit is now called the “civilian” and has male and female models.
This enhancement was made possible by incremental improvements of the engine, which now allows a unit to have variants not only in its appearance, but also in its voice and in other gendered characteristics.
In the civilizations displayed in the game, women did not usually hold citizenship, which was a prized social status. The “female citizen” was a misnomer. It was also incorrect to display all men as soldiers and most women as servants. Instead, we want to describe the armies of 0 A.D. as followed by a group of minions of lower social status, able to support the soldiers in the army camp but not on the battlefield. Those are the new Civilians. Citizens, on the other hand, were soldiers, able to wage war as well as work, which we have always accurately described in the game with the citizen-soldier concept. The ambiguity of the term “citizen” is removed: this word now only describes citizen soldiers.
This change does not touch the balance of the game at all. The so-called “female citizens” keep all their statistics, only their appearance and name have changed. The citizen soldiers are not touched at all.
In order to display text, we used to pre-render fonts and load them into memory when starting the game. To display scripts such as Chinese, we needed to load a large atlas of thousands of characters into memory, which could overwhelm the players’ RAM. As a consequence, we were forced to provide East Asian languages as mods, an accessibility hurdle for users of those languages.
On top of memory management improvements, we now use the Freetype library in the engine to render fonts on the fly when the game runs. Modding the fonts also becomes far easier with this new feature.
This new rendering system also improves the text display with GUI scaling, for users with Hi-DPI screens or who simply wish to use a larger interface.
In the future, we hope to also use this feature to render ancient scripts, such as hieroglyphs and cuneiform.
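As a rough illustration of the render-on-demand idea (this is not 0 A.D.’s actual engine code; the `rasterize` stub stands in for a FreeType glyph-render call, and the types are invented):

```rust
use std::collections::HashMap;

// Sketch of on-demand glyph rendering with a cache: instead of
// pre-rendering an atlas of thousands of glyphs at startup, each
// glyph is rasterized the first time it is needed and then reused.

#[derive(Clone, PartialEq, Debug)]
struct Glyph {
    width: u32,
    height: u32,
    bitmap: Vec<u8>, // 8-bit alpha coverage, row-major
}

// Stand-in for a real rasterizer (e.g. a FreeType render call):
// produces a solid square at the requested pixel size.
fn rasterize(_ch: char, px_size: u32) -> Glyph {
    Glyph {
        width: px_size,
        height: px_size,
        bitmap: vec![0xFF; (px_size * px_size) as usize],
    }
}

struct GlyphCache {
    px_size: u32,
    cache: HashMap<char, Glyph>,
}

impl GlyphCache {
    fn new(px_size: u32) -> Self {
        Self { px_size, cache: HashMap::new() }
    }

    /// Return the cached glyph, rasterizing it on first use.
    fn get(&mut self, ch: char) -> &Glyph {
        let size = self.px_size;
        self.cache.entry(ch).or_insert_with(|| rasterize(ch, size))
    }
}

fn main() {
    let mut cache = GlyphCache::new(16);
    let first = cache.get('中').clone();
    assert_eq!(first.width, 16);
    // The second lookup hits the cache instead of re-rendering.
    assert_eq!(cache.get('中'), &first);
    println!("cached {} glyph(s)", cache.cache.len());
}
```

The memory win comes from only ever holding glyphs that actually appear on screen, rather than an entire script’s character set.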
New personalization options are available in the game setup screen.
You can remove some players entirely (removing all of the initial buildings and units in their starting zone) in Skirmish and Scenario games.
It is also possible to set the population limit per team:
Lastly, some code refactoring allowed us to fix outstanding bugs in the game setup. For instance, in Alpha 27, a recurring issue would create an unwanted flood event in games where the user had previously played a flood game. This issue has been fixed.
The multiplayer lobby received some quality of life improvements. Verifying TLS certificates is now enabled by default when connecting to the multiplayer lobby, reducing the risk of man-in-the-middle attacks. A secure connection to the lobby will become mandatory in future releases, so please check that TLS encryption and certificate verification are not disabled in your settings, and report any issue you may encounter.
It is also more straightforward now to host matches, as there is no need to decide whether to use STUN or not; and a bug causing freezes when joining a match got fixed.
We have decided to rename the main menu entry for playing with friends over LAN or by direct IP: now called Multiplayer > Connect by IP, it is still the same system for direct matchmaking without using the lobby.
In Release 28, we have upgraded the SpiderMonkey JavaScript engine to version 128. This upgrade drops support for Windows 7 and 8.1, and for macOS below 10.15. Windows 10 and 11 are now the only supported Windows versions, and we will try our best to keep supporting Windows 10 as long as possible.
Still on Windows, we now provide a long-awaited 64-bit build, which should address infrequent out-of-memory errors. The 64-bit version will become the default one for the next release, and the 32-bit build will eventually be deprecated in the future.
On Linux distributions, special care is always given to release bundles for package maintainers, but we also went the extra mile to provide an AppImage in official releases, starting with Release 28. We are also working closely with the maintainers of the Snap and Flatpak versions, so that you can enjoy the latest release as soon as we get it out.
Our contributor manowar has brought gifts for the history nerds among you with a dozen new quotes in the game load screen, and, together with Vantha, they have added new tips for both beginners and seasoned players.
Structure, Civil Center, and Fortress default (ungarrisoned) capture resistance increased from 0.5, 5, and 10 to 5, 30, and 45, respectively.
Civilians (formerly Women) given a capture attack of 1.0.
Units’ destinations are distributed around the endpoint, allowing groups to move cohesively without colliding and forming long lines.
Cataphract Champion Cavalry +2 Hack and Pierce armor, but speed decreased from 17.1 to 14.4.
3 traders are no longer required for researching Diaspora.
New civilization bonus: Stone gathering storehouse technologies are free and instant with each phase.
Mercenary refactoring and differentiation. The Celtic embassy trains sword cavalry and infantry. The Iberian embassy trains unique ranged infantry mercenaries. The Italic embassy trains spear cavalry and infantry.
Minister economy and building auras increased from 2% to 10%, but range reduced from 40 meters to 20 meters. These auras are no longer stackable.
Ministers and Ministry available in village phase instead of town phase.
The full list of changes can be found at the changelog page of the wiki.
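One of the movement changes above, distributing unit destinations around the endpoint so groups arrive cohesively instead of queuing into a long line, can be sketched as follows (an invented illustration; the spacing value and grid shape are not the engine’s actual formation logic):

```rust
// Sketch: spread `count` destinations on a near-square grid centred
// on the requested endpoint, so no two units target the same spot.

fn spread_destinations(end: (f32, f32), count: usize, spacing: f32) -> Vec<(f32, f32)> {
    let cols = (count as f32).sqrt().ceil() as usize;
    let rows = (count + cols - 1) / cols;
    let mut out = Vec::with_capacity(count);
    for i in 0..count {
        let (r, c) = (i / cols, i % cols);
        // Offset each grid cell relative to the grid's centre.
        let dx = (c as f32 - (cols as f32 - 1.0) / 2.0) * spacing;
        let dy = (r as f32 - (rows as f32 - 1.0) / 2.0) * spacing;
        out.push((end.0 + dx, end.1 + dy));
    }
    out
}

fn main() {
    let dests = spread_destinations((100.0, 100.0), 9, 2.0);
    assert_eq!(dests.len(), 9);
    // All nine destinations are distinct, so units no longer pile up.
    for i in 0..dests.len() {
        for j in (i + 1)..dests.len() {
            assert!(dests[i] != dests[j]);
        }
    }
    // The grid is centred on the endpoint: the middle unit lands on it.
    assert_eq!(dests[4], (100.0, 100.0));
    println!("spread ok");
}
```

Because every unit gets its own nearby cell, the group can stop in place without colliding, which is the behavior the changelog entry describes.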
After numerous contributions in many areas of the game, especially the user interface and the game simulation, Vantha has joined the team at the beginning of the preparation of Release 28. We are extremely happy to welcome him!
If you experience a technical problem with the game, please report it at gitea.wildfiregames.com. This is also the first address to visit when you wish to dedicate some of your time to help patch the code. Got any further questions or suggestions? Discuss them with other players and developers at the forum or talk with us directly in the IRC chat rooms: #0ad and #0ad-dev on QuakeNet.
See our LinkTree. For press/media inquiries, please DM play0ad@mastodon.social on Mastodon, or email webmaster at wildfiregames dot com.
...
Read the original on play0ad.com »
My old 2016 MacBook Pro has been collecting dust in a cabinet for some time now. The laptop suffers from a “flexgate” problem, and I don’t have any practical use for it. For quite some time, I’ve been thinking about repurposing it as a guinea pig to play with FreeBSD — an OS that I’d aspired to try for a long while, but had never had a real reason to.
During the recent holiday season, right after the FreeBSD 15 release, I finally found time to set the laptop up. When I did, I didn’t plan, or even imagine, that this would turn into a story about AI coding.
2016 MacBook Pro models use the Broadcom BCM4350 Wi-Fi chip, which FreeBSD doesn’t support natively. To get working Wi-Fi, the typical suggestion on FreeBSD forums is to run wifibox — a tiny Linux VM with the PCI Wi-Fi device passed through, which lets Linux manage the device through its brcmfmac driver.
Brcmfmac is a Linux driver (ISC licence) for a set of FullMAC chips from Broadcom. The driver offloads processing jobs, like 802.11 frame movement, WPA encryption and decryption, etc., to the firmware running inside the chip, while the driver and the OS do the high-level management work (see Broadcom brcmfmac (PCIe) in the Linux Wireless documentation).
Say we want to build a native FreeBSD kernel module for the BCM4350 chip. In theory, this separation of jobs between the firmware and the driver sounds perfect. The “management” part of the work is what FreeBSD already does for other supported Wi-Fi devices; we only need to port the existing “glue code” from Linux specifics to FreeBSD. If we ignore a lot of details, the problem doesn’t sound too complicated, right?
A level-zero idea, when one hears about “porting a bunch of existing code from A to B”, in 2026 is, of course, to use AI. So that was what I tried.
I cloned the brcmfmac subtree and asked Claude Code to make it work on FreeBSD. FreeBSD already has drivers that work through LinuxKPI, a compatibility layer for running Linux kernel drivers, so I specifically pointed Claude at the iwlwifi driver (a softmac driver for Intel wireless cards), asking it to “do as they did”. And, at first, it even looked like this could work — Claude told me so.
The module, indeed, compiled, but it didn’t do anything. Because, of course, the VM where we tested the module didn’t even have the hardware. After I passed the PCI device into the VM and attempted to load the driver against the chip, the challenges started popping up immediately. The kernel panicked, and after Claude fixed the panics, it discovered that the “module didn’t do anything”. Claude honestly tried to sift through the code, adding more and more #ifdef __FreeBSD__ wrappers here and there. It complained about missing features in LinuxKPI. The module kept causing panics, and the agent kept building FreeBSD-specific shims and callbacks, while warning me that this project would be very complicated and messy.
After a number of sessions, the diff produced by the agent started to look significantly larger than I’d hoped it would be. Even worse, the driver didn’t look anywhere close to working. This was right around the time when Armin Ronacher posted about his experience building a game from scratch with Claude Opus and the Pi agent.
Beyond the claim that working in the Pi coding agent feels more productive than in Claude Code, the video got me thinking that my approach to the task was too straightforward. The brcmfmac driver’s code is moderately large. The driver supports several generations of Wi-Fi adaptors, different capabilities, etc. But my immediate task was very narrow: one chip, PCI only, Wi-Fi client only.
Instead of continuing with the code, I spawned a fresh Pi session and asked the agent to write a detailed specification of how the brcmfmac driver works, with a focus on the BCM4350 Wi-Fi chip. I explicitly set the audience of the specification to be readers tasked with implementing it in a clean-room environment. I asked the agent to explain how things work “down to the bits”. I added some high-level details on how I wanted the specification to be laid out, and let the agent go brrrr.
After a couple of rounds, the agent produced a “book of 11 chapters” that honestly looked like a fine specification:
% ls --tree spec/
spec
├── 00-overview.md
├── 01-data-structures.md
├── 02-bus-layer.md
├── 03-protocol-layer.md
├── 04-firmware-interface.md
├── 05-event-handling.md
├── 06-cfg80211-operations.md
├── 07-initialization.md
├── 08-data-path.md
├── 09-firmware-commands.md
└── 10-structures-reference.md
Of course, one can’t just trust what AI has written.
To proofread the spec, I spawned a clean Pi session and — for fun — asked the Codex model to read the specification and flag any places where the text wasn’t aligned with the driver’s code (“Source code is the ground truth. The spec needs to be verified, and updated with any missing or wrong details”). The agent followed through, found several places to fix, and also proposed multiple improvements.
Of course, one can’t just trust what AI has written, even if this was in a proofreading session.
To double-proofread the fixes, I spawned another clean Pi session, asking the Opus model to verify whether the proposed changes were aligned with how things actually work in the driver’s code.
As a procrastination exercise, I tried this loop with a couple of coding models: Opus 4.5, Opus 4.6, Codex 5.2, Gemini 3 Pro preview. In my experience so far, Gemini hallucinated the most. That was quite sad, given that the model itself isn’t bad at simple coding tasks, and it’s free for limited use.
In theory, having a written specification should explain how the driver’s code interacts with the firmware.
I started a fresh project with nothing but the mentioned “spec”, and told the Pi agent that we were building a brand-new FreeBSD driver for the BCM4350 chip. I pointed the agent at the specification and asked it to come back with any important decisions we had to make, and details we had to outline, before jumping into “slopping the code”. The agent came back with questions and decision points like “Will the driver live in the kernel’s source tree?”, “Will we write the code in C?”, “Will we rely on LinuxKPI?”, “What are our high-level milestones?”, and so on. One influential bit, which turned out fairly productive going forward, was that I asked the agent to document all these decision points in the project’s docs, and to explicitly reference these decision docs in the project’s AGENTS.md.
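For illustration, a decision-doc setup along these lines could look like the fragment below. The file names and wording are hypothetical, not copied from the actual repository:

```markdown
# AGENTS.md (fragment)

## Project decisions

Before changing code, read the decision log. Do not revisit a settled
decision without asking the human first.

- docs/decisions/01-language.md — the driver is written in C
- docs/decisions/02-no-linuxkpi.md — native FreeBSD APIs only, no LinuxKPI
- docs/decisions/03-scope.md — BCM4350 only, PCIe only, Wi-Fi client only

## Progress

After finishing a milestone, record it in docs/progress.md before
starting the next one.
```

Keeping decisions in files the agent is told to re-read gives every fresh session the same context, instead of relying on what survives in a single chat transcript.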
It’s worth saying that, just like in any real project, not all decisions survived to the end. For example:
Initially, I asked the agent to build the driver using linuxkpi and linuxkpi_wlan. My naive thinking was that, since the spec was written from reading the Linux driver’s code, this might be simpler for the agent than building on top of FreeBSD’s native primitives. After a couple of sessions, it didn’t look like that was the case. I asked the agent to drop LinuxKPI from the code and refactor everything. The agent did it in one go, and updated the decision document.
With a specification, docs, and a plan, the workflow turned into a “boring routine”. The agent had SSH access to both the build host and a testing VM, which ran with the Wi-Fi PCI device passed through from the host. It methodically crunched through the backlog of its own milestones, iterating on the code, building and testing the module. Every time a milestone or a portion of one was finished, I asked the agent to record the progress in the docs. Occasionally, an iteration of the code crashed or hung the VM. When that happened, before fixing the problem, I asked the agent — in a forked Pi session — to summarize, investigate, and record the problem for its future self.
After many low-involvement sessions, I had a working FreeBSD kernel module for the BCM4350 Wi-Fi chip. The module supports Wi-Fi network scanning, 2.4 GHz/5 GHz connectivity, and WPA/WPA2 authentication.
The source code is in the repository github.com/narqo/freebsd-brcmfmac. I didn’t write any of the code there myself. There are several known issues, which I will eventually task the agent to resolve. Meanwhile, I strongly advise against using it for anything beyond a study exercise.
...
Read the original on vladimir.varank.in »
fmhy.net bills itself as “the largest collection of free stuff on the internet”: community-maintained guides to free movies, shows, music, games, books, software, mobile apps, and more, including how to block ads and trackers.
...
Read the original on fmhy.net »
Catch bugs before they make it to production
...
The Las Vegas Metropolitan Police Department (LVMPD) quietly entered an agreement in 2023 with Flock Security, an automated license plate reader company that uses cameras to collect vehicle information and cross-reference it with police databases.
But unlike many of the other police departments around the country that use the cameras in their police work, Metro funds the project with donor money funneled into a private foundation. It’s an arrangement that allows Metro to avoid soliciting public comment on the surveillance technology, which critics worry could be co-opted to track undocumented immigrants, political dissidents and abortion seekers, among others.
“It’s a short circuit of the democratic process,” Jay Stanley, a Washington, D.C.-based lawyer for the American Civil Liberties Union (ACLU) who works on how technology can infringe on individual privacy and civil liberties, said in an interview with The Nevada Independent.
The cameras scan license plates as well as vehicles’ identifying details — such as make, model and color — plugging that information into a national database that police can use to search the location of specific vehicles beyond their own jurisdictions. Flock operates more than 80,000 of these AI-powered cameras nationwide, and the company’s popularity has exploded in recent years, with police touting it as a tool to solve crime faster and boost public safety.
Although taxpayer dollars fund Flock cameras in other jurisdictions, most of the cameras in the Las Vegas area have been bought with money from the Horowitz Family Foundation, a philanthropy group connected to the Las Vegas-based venture capitalist Ben Horowitz, co-founder of the firm Andreessen Horowitz.
The Horowitz Family Foundation did not respond to a request for comment at the time of publication.
Metro told The Nevada Independent that it operates approximately 200 Flock license plate reader cameras on city or county infrastructure and it shares its Flock data with hundreds of state and local law enforcement agencies throughout the country.
Since late 2023, Las Vegas police have made more than 23,000 searches of vehicles, according to the website Have I Been Flocked, which compiles public audit logs of Flock data.
As the cameras were not bought with public funds, Metro does not have to hold meetings with the public to comment on the technology, something experts say leaves citizens without any input on the policing method.
In other cities, Stanley said Flock is often brought up and discussed during city council meetings or other public forums. It’s not required to be on public meeting agendas in the Las Vegas area.
“Police departments serve the community and are supposed to make life in the community better. Does the community want this technology imposed on it?” Stanley said.
Though Horowitz’s foundation donated additional funds for Flock cameras in October, it was not brought up at the Clark County Commission meeting that month, nor was their use discussed anytime in 2025, according to commission meeting minutes.
Some municipalities in Clark County, such as the City of Las Vegas, have license plate reader policies that include a public Flock policy with a dashboard showing how many license plates Flock picked up (about 185,000 in the past month in the city), how many cameras were in use (22 in Las Vegas), and how many searches had been done on a monthly basis (five in the past 30 days). In comparison, Metro’s policy is not publicly available online, though The Indy obtained a copy through a public records request.
Flock’s most recent contract with Metro, signed in 2023, stipulates that the company retains all rights in any recordings or data provided by the service and that Flock can use any of the data for “any purpose” at the company’s discretion. The agreement also says that Flock recordings are not stored for longer than 30 days.
Meanwhile, Metro policy says that department members will not seek or retain license plate reader information about individuals or an organization based solely on their citizenship, social views, race or other classifications protected by law. The policy states that retained license plate reader data does not include specific identification of individuals. Misuse of the data will result in disciplinary action up to termination, according to the policy.
But for many, including a former officer who spoke to The Indy on the condition of anonymity for fear of professional repercussions, such policies are not enough.
“It’s ripe for misuse,” the officer said, pointing to examples around the country of people using Flock to look for current and former romantic partners and track their movements. A police chief in Kansas used Flock to track his ex-girlfriend 228 times in four months. An officer in South Carolina used public cameras to monitor his wife, who he suspected was having an affair.
The former Metro officer said his major concern was not the technology itself, but the fact that there was little transparency on how the technology was being used or what the department’s policy was on Flock usage.
“If you look around the country where license plate readers are being used, there’s some kind of public meeting, there’s some kind of public process,” the officer said. “What’s happening here is on a very large scale — they’re putting out surveillance technology — and there’s no public disclosure.”
The Horowitz Foundation donation in October included a software subscription to Flock’s Nova feature, which allows officers to easily access private license plate information alongside other personal data, such as Social Security numbers, credit scores, property and occupancy information, as well as emails or social media handles.
Experts say this data could be used to identify undocumented immigrants, political protesters and people traveling across state lines to obtain abortions.
Athar Haseebullah, the executive director of the ACLU of Nevada, said that Flock not only poses a heightened risk for immigrants, but anyone engaged in actions that are found to be politically defiant. He pointed to a case in Texas where police conducted a nationwide search using Flock technology for a woman who self-induced an abortion.
“This could be ripe for abuse by ICE (Immigrations and Customs Enforcement), but it could also be ripe for abuse by other government entities,” Haseebullah said. In 2025, the ACLU pushed back against a measure that would allow local jurisdictions to use automated traffic cameras to crack down on speeding and red-light crossings, although the bill was never voted on.
Flock has received backlash nationwide for allowing federal agencies such as Customs and Border Protection to tap into their data. The company has said it does not work with ICE after evidence was found that the agency used Flock data for immigration investigations. Several cities have terminated or modified their Flock agreements after realizing they were inadvertently sharing their data with other agencies.
However, though Flock might not want to partner with ICE, it has little choice — Flock is obligated to fulfill subpoenas from ICE and can’t refuse a legal warrant, Andrew Ferguson, an attorney and a professor researching tech and police surveillance at George Washington University, said.
Flock’s surveillance cameras are meant to catch crime, though experts say they could also deter certain behaviors if citizens are aware they are being watched.
“There’s a chilling effect knowing that your government is essentially tracking you wherever you go,” Ferguson said. “It might be even more chilling if you put cameras in sensitive places, like a medical clinic, or a Gambler’s Anonymous meeting, or a church.”
In a city such as Las Vegas, known for drinking, gambling and a hearty party culture, surveillance is the last thing people are interested in, according to Ferguson.
“Things are happening in Vegas that are not going to stay in Vegas,” Ferguson said. “They’re going to be broadcast through Flock.”
As recently as October of last year, the Horowitz Family Foundation donated almost $1.9 million for Flock license plate readers and another $2.47 million for supporting software for Flock machines, according to the minutes of an LVMPD fiscal affairs committee meeting.
Because the donations aren’t coming directly to Metro, but to the nonprofit LVMPD foundation, also known as “Friends of Metro,” any discussions on the cameras’ use aren’t subject to Nevada’s open meeting laws.
The license plate readers and their supporting software are not the only gift that the Horowitz Family Foundation, led by Ben Horowitz’s wife, Felicia Horowitz, has donated to Las Vegas police. The foundation has also gifted drones, as well as Tesla Cybertrucks, to the agency.
Proponents have billed the gifts as morale boosters for police that help the agency stay on the cutting edge without tapping into limited taxpayer dollars. Critics, such as the Progressive Leadership Alliance of Southern Nevada, have suggested that the Cybertrucks show that Metro is “prioritizing corporate giveaways.”
Felicia Horowitz said she is focused on “creating the best community in America” in Las Vegas, according to her bio from a local nonprofit organization that she sits on the board of. Part of that is combating crime and keeping citizens safe. In a Wall Street Journal article, Felicia Horowitz emphasized how crime and weak policing had hurt Black communities across the country.
“The new policies — defund the police, don’t prosecute crime — are destroying the communities where I grew up,” Felicia Horowitz, who is Black, told the WSJ in 2024. Felicia Horowitz was raised in Los Angeles and the Horowitzes relocated to Las Vegas around 2021 and 2022 after decades in California.
So far, the foundation has not publicly commented on whether it will continue donating money for Flock services. Some experts think the donations might be a strategy called “penetration pricing,” where a company gives free or reduced products or services in order to hook consumers before charging them.
“There’s no question that there’s a financial interest in them proving that the Flock technology works in Las Vegas so that they can sell it to other places,” said Ferguson.
The former police officer said he was concerned about taxpayers having to cough up funds to continue Flock services if the Horowitz money ran dry.
“Once you start relying on a certain type of policing, it’s going to be hard to switch over, and then who will foot the bill?” the officer said.
...
Read the original on thenevadaindependent.com »
Meta, Amazon, Google, OpenAI, and other tech companies spent billions last year investing in AI. They’re expected to spend even more, roughly $700 billion, this year on dozens of new data centers to train and run their advanced models.
This spending frenzy has kept Wall Street buzzing and fueled a narrative that all this investment is helping prop up and even grow the U.S. economy.
President Donald Trump has cited that argument as a reason the industry should not face state-level regulations.
“Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World — But overregulation by the States is threatening to undermine this Growth Engine,” Trump wrote in a post on Truth Social in November. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”
Some prominent economists have also given credibility to this story with their analysis. Jason Furman, a Harvard economics professor, said in a post on X that investments in information processing equipment and software accounted for 92% of GDP growth in the first half of the year. Meanwhile, economists at the Federal Reserve Bank of St. Louis similarly estimated that AI-related investments made up 39% of GDP growth in the third quarter of 2025.
But now some Wall Street analysts are starting to rethink this narrative.
“It was a very intuitive story,” Joseph Briggs, a Goldman Sachs analyst, told The Washington Post on Monday. “That maybe prevented or limited the need to actually dig deeper into what was happening.”
Briggs’ colleague, Goldman Sachs Chief Economist Jan Hatzius, said in an interview with the Atlantic Council that AI investment spending has had “basically zero” contribution to U.S. GDP growth in 2025.
“We don’t actually view AI investment as strongly growth positive,” said Hatzius. “I think there’s a lot of misreporting, actually, of the impact AI investment had on U.S. GDP growth in 2025, and it’s much smaller than is often perceived.”
Hatzius said one major reason is that much of the equipment powering AI is imported. While U. S. companies are spending billions, importing chips and hardware offsets those investments in GDP calculations.
“A lot of the AI investment that we’re seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP, but not really that much to U.S. GDP,” he said.
On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth.
So far, many business leaders say AI hasn’t significantly improved productivity.
A recent survey of nearly 6,000 executives in the U.S., Europe, and Australia found that despite 70% of firms actively using AI, about 80% reported no impact on employment or productivity.
...
Read the original on gizmodo.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.