10 interesting stories served every morning and every evening.
When I moved to a new apartment with my family, the cable company we were used to wasn’t available. We had to settle for Dish Network. I wasn’t too happy about making that switch, but something on their website caught my attention. For an additional $5 a month, I could have access to DVR. I switched immediately.
This was 2007. DVR was not new, but it wasn’t commonly bundled with set-top boxes. TiVo was still the popular way to record, pause, and rewind live TV. We received two set-top boxes, one for each room with a TV, and three remotes. Two of the remotes were IR (infrared) and, surprisingly, one was RF (radio frequency).
After using the RF remote, I wondered: Why would anyone ever use an IR remote again? You didn’t need a direct line of sight with the device you were controlling. I could actually stand in the kitchen and control the TV. It was amazing. But with the convenience of RF came other problems that IR users never had to worry about. Interference.
After several months of enjoying my service, one of my neighbors, the loudest in the building, also switched to Dish Network. And he also got the RF remote. This was the type of neighbor who would leave the house with the TV on, volume blasting.
One day, I was in the living room watching TV when the channel just flipped. I must have accidentally hit a button, so I changed it back. But not a few seconds later, the channel changed again. Then the volume went up. I figured my sister must have had the RF remote and was messing with me. But no, the remote was in my hand. I assumed something was wrong with it.
The whole time I was watching TV, the channels kept randomly switching. I banged the remote on the table a couple of times, but it still switched. I removed the batteries from the remote; it still switched. I unplugged the device for a few minutes, plugged it back in, and… it still switched. Frustrated, I went through the device settings and disabled the RF remote. That’s when it finally stopped. I wasn’t happy with this solution, but it allowed me to watch TV until I figured something out.
One evening, when everyone was asleep and the neighbor was watching a loud TV show, I decided to diagnose the issue. The moment I pressed the power button on the RF remote, my TV and set-top box turned on, and the neighbor’s TV went silent. “Fuck!” I heard someone say. I was confused. Did I just do that? The TV turned back on, the volume went up. I walked to the window armed with the remote. I counted to three, then pressed the power button. My neighbor’s TV went silent. He growled.
I am the captain now.
Every time he turned the TV on, I pressed the power button again and his device went off. Well, what do you know? We had interference somehow. Our remotes were set up to operate at the same frequency. Each remote controlled both devices.
But I’m not that kind of neighbor. I wasn’t going to continue to mess with him. Instead, I decided I would pay him a visit in the morning and explain that our remotes are tuned to the same frequency. I would bring the RF remote with me just to show him a demo. I was going to be a good neighbor.
In the morning, I went downstairs, remote in hand. I knocked, and a gentleman in his forties answered the door. I had rehearsed my speech and presentation. This would be a good opportunity to build rapport and have a shared story. Maybe he would tell me how he felt when the TV went off, how he thought there was a ghost in the house or something. But that’s not what happened.
“Hi, I’m Ibrahim. Your upstairs neighbor…” I started, and was interrupted almost immediately. “Whatever you are selling,” he yelled, “I’m not buying,” and he closed the door in my face. I knocked a second time, because obviously there had been a misunderstanding. He never answered. Instead, the TV turned on and a movie played at high volume. So much for my prepared speech.
The RF setting on my set-top box remained turned off. My family never discovered its benefit anyway; they always pointed at the box when pressing the buttons, so it wasn’t much of an inconvenience. In fact, I later found in the manual that you could reprogram the device and remote to use a different frequency. I did not reprogram my remote. Instead, my family used the two IR remotes, and the RF remote moved into my bedroom, where it permanently remained on my nightstand.
Why the bedroom? Because I decided to teach my neighbor some good manners. Whenever he turned up his volume, I would simply turn off his device. I would hear his frustration and his attempts at solving the problem. Like a circus animal trainer, I remained consistent: if the volume of his TV went above what I imagined to be 15 to 20, I would press the power button. It became a routine for weeks. Some nights were difficult; I would keep the remote under my pillow, battling my stubborn neighbor all night.
One day, I noticed that I hadn’t pressed the button in days. I opened the window and I could still hear the faint sound of his TV. Through trial and error, he learned the lesson. If the volume remained under my arbitrary threshold, the TV would remain on. But as soon as he passed that threshold, the device would turn off.
Sometimes he would have company, and there would be noise coming out of his apartment. I used the one tool in my toolbox to send him a message: turn off the TV. All of a sudden, my neighbor and his guests would be reminded of the unspoken rules and become mindful of their neighbors.
Maybe somewhere on the web, in some obscure forum, someone asked the question: “Why does my set-top box turn off when I increase the volume?” Well, it might be 18 years too late, but there’s your answer. There is a man out there who religiously sets his volume to 18. He doesn’t quite know why. That’s Pavlovian conditioning at its best.
...
Read the original on idiallo.com »
In the past three years, I’ve been using LLMs for assisted coding. If you read this, you probably went through the same evolution: from copying and pasting code into ChatGPT, to Copilot auto-completions (which never worked for me), to Cursor, and finally the new breed of coding agent harnesses like Claude Code, Codex, Amp, Droid, and opencode that became our daily drivers in 2025.
I preferred Claude Code for most of my work. It was the first thing I tried back in April after using Cursor for a year and a half. Back then, it was much more basic. That fit my workflow perfectly, because I’m a simple boy who likes simple, predictable tools. Over the past few months, Claude Code has turned into a spaceship with 80% of functionality I have no use for. The system prompt and tools also change on every release, which breaks my workflows and changes model behavior. I hate that. Also, it flickers.
I’ve also built a bunch of agents over the years, of various complexity. For example, Sitegeist, my little browser-use agent, is essentially a coding agent that lives inside the browser. In all that work, I learned that context engineering is paramount. Exactly controlling what goes into the model’s context yields better outputs, especially when it’s writing code. Existing harnesses make this extremely hard or impossible by injecting stuff behind your back that isn’t even surfaced in the UI.
Speaking of surfacing things, I want to inspect every aspect of my interactions with the model. Basically no harness allows that. I also want a cleanly documented session format I can post-process automatically, and a simple way to build alternative UIs on top of the agent core. While some of this is possible with existing harnesses, the APIs smell like organic evolution. These solutions accumulated baggage along the way, which shows in the developer experience. I’m not blaming anyone for this. If tons of people use your shit and you need some sort of backwards compatibility, that’s the price you pay.
I’ve also dabbled in self-hosting, both locally and on DataCrunch. While some harnesses like opencode support self-hosted models, it usually doesn’t work well. Mostly because they rely on libraries like the Vercel AI SDK, which doesn’t play nice with self-hosted models for some reason, specifically when it comes to tool calling.
So what’s an old guy yelling at Claudes going to do? He’s going to write his own coding agent harness and give it a name that’s entirely un-Google-able, so there will never be any users. Which means there will also never be any issues on the GitHub issue tracker. How hard can it be?
To make this work, I needed to build:
* pi-ai: A unified LLM API with multi-provider support (Anthropic, OpenAI, Google, xAI, Groq, Cerebras, OpenRouter, and any OpenAI-compatible endpoint), streaming, tool calling with TypeBox schemas, thinking/reasoning support, seamless cross-provider context handoffs, and token and cost tracking.
* pi-agent-core: An agent loop that handles tool execution, validation, and event streaming.
* pi-tui: A minimal terminal UI framework with differential rendering, synchronized output for (almost) flicker-free updates, and components like editors with autocomplete and markdown rendering.
* pi-coding-agent: The actual CLI that wires it all together with session management, custom tools, themes, and project context files.
My philosophy in all of this was: if I don’t need it, it won’t be built. And I don’t need a lot of things.
I’m not going to bore you with the API specifics of this package. You can read it all in the README.md. Instead, I want to document the problems I ran into while creating a unified LLM API and how I resolved them. I’m not claiming my solutions are the best, but they’ve been working pretty well throughout various agentic and non-agentic LLM projects.
There are really only four APIs you need to speak in order to talk to pretty much any LLM provider: OpenAI’s Completions API, their newer Responses API, Anthropic’s Messages API, and Google’s Generative AI API.
They’re all pretty similar in features, so building an abstraction on top of them isn’t rocket science. There are, of course, provider-specific peculiarities you have to account for. That’s especially true for the Completions API, which is spoken by pretty much all providers, but each of them has a different understanding of what this API should do. For example, while OpenAI doesn’t support reasoning traces in its Completions API, other providers do in their versions of it. This is also true for inference engines like llama.cpp, Ollama, vLLM, and LM Studio.
For example, in openai-completions.ts (see the sketch after this list):
* Cerebras, xAI, Mistral, and Chutes don’t like the store field
* Mistral and Chutes use max_tokens instead of max_completion_tokens
* Cerebras, xAI, Mistral, and Chutes don’t support the developer role for system prompts
* Different providers return reasoning content in different fields (reasoning_content vs reasoning)
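To make that concrete, here is a rough sketch of how such quirks might be smoothed over when building a Completions request. The function, the type names, and the exact field handling are illustrative assumptions, not pi-ai’s actual code.

// Hypothetical helper: the provider/field handling mirrors the list above,
// but the shape of this code is an assumption, not pi-ai's implementation.
type CompletionsProvider = 'openai' | 'cerebras' | 'xai' | 'mistral' | 'chutes';

interface CompletionsRequest {
  model: string;
  messages: Array<{ role: string; content: string }>;
  store?: boolean;
  max_tokens?: number;
  max_completion_tokens?: number;
}

function buildRequest(provider: CompletionsProvider, model: string, system: string, user: string, maxTokens: number): CompletionsRequest {
  const dropStore = ['cerebras', 'xai', 'mistral', 'chutes'].includes(provider);
  const useLegacyMaxTokens = ['mistral', 'chutes'].includes(provider);
  // These providers reject the `developer` role, so fall back to `system`.
  const noDeveloperRole = ['cerebras', 'xai', 'mistral', 'chutes'].includes(provider);

  const request: CompletionsRequest = {
    model,
    messages: [
      { role: noDeveloperRole ? 'system' : 'developer', content: system },
      { role: 'user', content: user },
    ],
  };
  if (!dropStore) request.store = false;
  if (useLegacyMaxTokens) request.max_tokens = maxTokens;
  else request.max_completion_tokens = maxTokens;
  return request;
}

// Reasoning content can arrive in different fields depending on the provider.
function extractReasoning(delta: { reasoning?: string; reasoning_content?: string }): string | undefined {
  return delta.reasoning_content ?? delta.reasoning;
}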
To ensure all features actually work across the gazillion of providers, pi-ai has a pretty extensive test suite covering image inputs, reasoning traces, tool calling, and other features you’d expect from an LLM API. Tests run across all supported providers and popular models. While this is a good effort, it still won’t guarantee that new models and providers will just work out of the box.
Another big difference is how providers report tokens and cache reads/writes. Anthropic has the sanest approach, but generally it’s the Wild West. Some report token counts at the start of the SSE stream, others only at the end, making accurate cost tracking impossible if a request is aborted. To add insult to injury, you can’t provide a unique ID to later correlate with their billing APIs and figure out which of your users consumed how many tokens. So pi-ai does token and cache tracking on a best-effort basis. Good enough for personal use, but not for accurate billing if you have end users consuming tokens through your service.
Special shout-out to Google, who to this day don’t seem to support tool call streaming, which is extremely Google.
pi-ai also works in the browser, which is useful for building web-based interfaces. Some providers make this especially easy by supporting CORS, specifically Anthropic and xAI.
Context handoff between providers was a feature pi-ai was designed for from the start. Since each provider has their own way of tracking tool calls and thinking traces, this can only be a best-effort thing. For example, if you switch from Anthropic to OpenAI mid-session, Anthropic thinking traces are converted to content blocks inside assistant messages, delimited by tags. This may or may not be sensible, because the thinking traces returned by Anthropic and OpenAI don’t actually represent what’s happening behind the scenes.
These providers also insert signed blobs into the event stream that you have to replay on subsequent requests containing the same messages. This also applies when switching models within a provider. It makes for a cumbersome abstraction and transformation pipeline in the background.
I’m happy to report that cross-provider context handoff and context serialization/deserialization work pretty well in pi-ai.
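Here is a rough sketch of what a handoff could look like, built only from the calls shown elsewhere in this article (getModel, stream, result()). The model IDs and the exact way the assistant response is folded back into the message list are assumptions.

import { getModel, stream } from '@mariozechner/pi-ai';

// Model IDs are placeholders; the message/response shapes are assumed.
const anthropicModel = getModel('anthropic', 'claude-sonnet-4-5');
const openaiModel = getModel('openai', 'gpt-5.1-codex');

const messages: any[] = [{ role: 'user', content: 'Summarize the README' }];

// First turn on Anthropic.
const first = stream(anthropicModel, { messages });
const firstResponse = await first.result();

// Fold the assistant reply back into the context. At this point thinking
// traces would already be converted into tagged content blocks, and any
// signed blobs get replayed on the follow-up request.
messages.push(firstResponse);
messages.push({ role: 'user', content: 'Now write tests for it' });

// Second turn on OpenAI, continuing the same conversation.
const second = stream(openaiModel, { messages });
console.log((await second.result()).content);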
Speaking of models, I wanted a typesafe way of specifying them in the getModel call. For that I needed a model registry that I could turn into TypeScript types. I’m parsing data from both OpenRouter and models.dev (created by the opencode folks, thanks for that, it’s super useful) into models.generated.ts. This includes token costs and capabilities like image inputs and thinking support.
And if I ever need to add a model that’s not in the registry, I wanted a type system that makes it easy to create new ones. This is especially useful when working with self-hosted models, new releases that aren’t yet on models.dev or OpenRouter, or trying out one of the more obscure LLM providers:
import { Model, stream } from '@mariozechner/pi-ai';

const ollamaModel: Model
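As a rough idea of what an ad-hoc model for a self-hosted endpoint might look like, the sketch below fills in plausible fields (endpoint, costs, capability flags) based on how the registry is described above. The field names are assumptions; pi-ai’s actual Model type may differ.

// Field names are assumptions based on the registry description above
// (costs, capabilities); pi-ai's real Model type may differ.
const localModel: Model = {
  provider: 'openai-completions',        // any OpenAI-compatible endpoint
  id: 'qwen2.5-coder:32b',               // placeholder model name
  baseUrl: 'http://localhost:11434/v1',  // e.g. a local Ollama server
  contextWindow: 32768,
  cost: { input: 0, output: 0 },         // self-hosted, so no per-token cost
  capabilities: { images: false, thinking: false, toolCalls: true },
};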
Many unified LLM APIs completely ignore providing a way to abort requests. This is entirely unacceptable if you want to integrate your LLM into any kind of production system. Many unified LLM APIs also don’t return partial results to you, which is kind of ridiculous. pi-ai was designed from the beginning to support aborts throughout the entire pipeline, including tool calls. Here’s how it works:
import { getModel, stream } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-5.1-codex');
const controller = new AbortController();

// Abort after 2 seconds
setTimeout(() => controller.abort(), 2000);

const s = stream(model, {
  messages: [{ role: 'user', content: 'Write a long story' }],
  signal: controller.signal
});

for await (const event of s) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta);
  } else if (event.type === 'error') {
    console.log(`${event.reason === 'aborted' ? 'Aborted' : 'Error'}:`, event.error.errorMessage);
  }
}

// Get results (may be partial if aborted)
const response = await s.result();
if (response.stopReason === 'aborted') {
  console.log('Partial content:', response.content);
}
Another abstraction I haven’t seen in any unified LLM API is splitting tool results into a portion handed to the LLM and a portion for UI display. The LLM portion is generally just text or JSON, which doesn’t necessarily contain all the information you’d want to show in a UI. It also sucks hard to parse textual tool outputs and restructure them for display in a UI. pi-ai’s tool implementation allows returning both content blocks for the LLM and separate content blocks for UI rendering. Tools can also return attachments like images that get attached in the native format of the respective provider. Tool arguments are automatically validated using TypeBox schemas and AJV, with detailed error messages when validation fails:
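As an illustration of that split (not pi-ai’s actual tool interface, whose exact shape isn’t shown here), a tool might look roughly like this: a TypeBox schema for the arguments, plus separate payloads for the model and for the UI.

import { readdir } from 'node:fs/promises';
import { Type, type Static } from '@sinclair/typebox';

// Hypothetical tool shape; only the idea (schema-validated args, LLM content
// vs. UI content) is taken from the text above.
const ListDirParams = Type.Object({
  path: Type.String({ description: 'Directory to list' }),
});

const listDir = {
  name: 'list_dir',
  parameters: ListDirParams,
  async execute(args: Static<typeof ListDirParams>) {
    const entries = await readdir(args.path, { withFileTypes: true });
    return {
      // Compact text that goes into the model's context.
      llmContent: entries.map((e) => e.name).join('\n'),
      // Structured data a UI can render nicely (icons, sorting, grouping).
      uiContent: entries.map((e) => ({ name: e.name, isDirectory: e.isDirectory() })),
    };
  },
};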
What’s still lacking is tool result streaming. Imagine a bash tool where you want to display ANSI sequences as they come in. That’s currently not possible, but it’s a simple fix that will eventually make it into the package.
Partial JSON parsing during tool call streaming is essential for good UX. As the LLM streams tool call arguments, pi-ai progressively parses them so you can show partial results in the UI before the call completes. For example, you can display a diff streaming in as the agent rewrites a file.
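The core trick is simple enough to sketch: take the argument text streamed so far, close whatever strings and brackets are still open, and try to parse. This is a simplified stand-in, not pi-ai’s actual parser.

// Best-effort completion of partial JSON, in the spirit described above.
// Real implementations handle more edge cases (e.g. truncated literals like `tru`).
function parsePartialJson(partial: string): unknown | undefined {
  const closers: string[] = [];
  let inString = false;
  let escaped = false;
  for (const ch of partial) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === '\\') escaped = true;
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === '{') closers.push('}');
    else if (ch === '[') closers.push(']');
    else if (ch === '}' || ch === ']') closers.pop();
  }
  let candidate = partial;
  if (inString) candidate += '"';
  candidate += closers.reverse().join('');
  try {
    return JSON.parse(candidate);
  } catch {
    return undefined; // not enough structure yet, wait for more deltas
  }
}

// parsePartialJson('{"path": "src/ind')  ->  { path: 'src/ind' }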
Finally, pi-ai provides an agent loop that handles the full orchestration: processing user messages, executing tool calls, feeding results back to the LLM, and repeating until the model produces a response without tool calls. The loop also supports message queuing via a callback: after each turn, it asks for queued messages and injects them before the next assistant response. The loop emits events for everything, making it easy to build reactive UIs.
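Stripped of events and error handling, the loop described here boils down to something like the following. The content-block and tool-result shapes are assumptions for illustration; pi-agent-core’s real types differ.

import { stream } from '@mariozechner/pi-ai';

// Simplified sketch of the orchestration described above.
async function agentLoop(model: any, tools: Record<string, any>, messages: any[]) {
  while (true) {
    const s = stream(model, { messages, tools: Object.values(tools) });
    const response = await s.result();
    messages.push(response);

    const toolCalls = response.content.filter((block: any) => block.type === 'tool_call');
    if (toolCalls.length === 0) break; // no tool calls left: the agent is done

    for (const call of toolCalls) {
      const result = await tools[call.name].execute(call.arguments);
      messages.push({ role: 'tool', toolCallId: call.id, content: result.llmContent });
    }
    // Queued user messages would be injected here, before the next turn.
  }
  return messages;
}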
The agent loop doesn’t let you specify max steps or similar knobs you’d find in other unified LLM APIs. I never found a use case for that, so why add it? The loop just loops until the agent says it’s done. On top of the loop, however, pi-agent-core provides an Agent class with actually useful stuff: state management, simplified event subscriptions, message queuing with two modes (one-at-a-time or all-at-once), attachment handling (images, documents), and a transport abstraction that lets you run the agent either directly or through a proxy.
Am I happy with pi-ai? For the most part, yes. Like any unifying API, it can never be perfect due to leaky abstractions. But it’s been used in seven different production projects and has served me extremely well.
Why build this instead of using the Vercel AI SDK? Armin’s blog post mirrors my experience. Building on top of the provider SDKs directly gives me full control and lets me design the APIs exactly as I want, with a much smaller surface area. Armin’s blog gives you a more in-depth treatise on the reasons for building your own. Go read that.
I grew up in the DOS era, so terminal user interfaces are what I was raised on. From the fancy setup programs for Doom to Borland products, TUIs were with me until the end of the 90s. And boy was I fucking happy when I eventually switched to a GUI operating system. While TUIs are mostly portable and easily streamable, they also suck at information density. Having said all that, I thought starting with a terminal user interface for pi made the most sense. I could strap on a GUI later whenever I felt like I needed to.
So why build my own TUI framework? I’ve looked into the alternatives like Ink, Blessed, OpenTUI, and so on. I’m sure they’re all fine in their own way, but I definitely don’t want to write my TUI like a React app. Blessed seems to be mostly unmaintained, and OpenTUI is explicitly not production ready. Also, writing my own TUI framework on top of Node.js seemed like a fun little challenge.
Writing a terminal user interface is not rocket science per se. You just have to pick your poison. There are basically two ways to do it. One is to take ownership of the terminal viewport (the portion of the terminal contents you can actually see) and treat it like a pixel buffer. Instead of pixels you have cells that contain characters with background color, foreground color, and styling like italic and bold. I call these full-screen TUIs. Amp and opencode use this approach.
The drawback is that you lose the scrollback buffer, which means you have to implement custom search. You also lose scrolling, which means you have to simulate scrolling within the viewport yourself. While this is not hard to implement, it means you have to re-implement all the functionality your terminal emulator already provides. Mouse scrolling specifically always feels kind of off in such TUIs.
The second approach is to just write to the terminal like any CLI program, appending content to the scrollback buffer, only occasionally moving the “rendering cursor” back up a little within the visible viewport to redraw things like animated spinners or a text edit field. It’s not exactly that simple, but you get the idea. This is what Claude Code, Codex, and Droid do.
Coding agents have this nice property that they’re basically a chat interface. The user writes a prompt, followed by replies from the agent and tool calls and their results. Everything is nicely linear, which lends itself well to working with the “native” terminal emulator. You get to use all the built-in functionality like natural scrolling and search within the scrollback buffer. It also limits what your TUI can do to some degree, which I find charming because constraints make for minimal programs that just do what they’re supposed to do without superfluous fluff. This is the direction I picked for pi-tui.
If you’ve done any GUI programming, you’ve probably heard of retained mode vs immediate mode. In a retained mode UI, you build up a tree of components that persist across frames. Each component knows how to render itself and can cache its output if nothing changed. In an immediate mode UI, you redraw everything from scratch each frame (though in practice, immediate mode UIs also do caching, otherwise they’d fall apart).
pi-tui uses a simple retained mode approach. A Component is just an object with a render(width) method that returns an array of strings (lines that fit the viewport horizontally, with ANSI escape codes for colors and styling) and an optional handleInput(data) method for keyboard input. A Container holds a list of components arranged vertically and collects all their rendered lines. The TUI class is itself a container that orchestrates everything.
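In code, the core contract is about as small as it sounds. This is a paraphrase of the description above (render(width) returning lines, optional input handling, containers stacking children vertically), not the exact pi-tui source.

// Paraphrased from the description above; not the exact pi-tui definitions.
interface Component {
  render(width: number): string[];   // lines that fit the viewport width, ANSI included
  handleInput?(data: string): void;  // optional keyboard input handler
}

class Container implements Component {
  private children: Component[] = [];

  add(child: Component) {
    this.children.push(child);
  }

  render(width: number): string[] {
    // Stack children vertically by concatenating their rendered lines.
    return this.children.flatMap((child) => child.render(width));
  }
}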
When the TUI needs to update the screen, it asks each component to render. Components can cache their output: an assistant message that’s fully streamed doesn’t need to re-parse markdown and re-render ANSI sequences every time; it just returns the cached lines. Containers collect lines from all children. The TUI gathers all these lines and compares them to the lines it rendered for the previous component tree. It keeps a backbuffer of sorts, remembering what was written to the scrollback buffer.
Then it only redraws what changed, using a method I call differential rendering. I’m very bad with names, and this likely has an official name.
Here’s a simplified demo that illustrates what exactly gets redrawn.
First render: Just output all lines to the terminal
Width changed: Clear screen completely and re-render everything (soft wrapping changes)
Normal update: Find the first line that differs from what’s on screen, move the cursor to that line, and re-render from there to the end
There’s one catch: if the first changed line is above the visible viewport (the user scrolled up), we have to do a full clear and re-render. The terminal doesn’t let you write to the scrollback buffer above the viewport.
To prevent flicker during updates, pi-tui wraps all rendering in synchronized output escape sequences (CSI ?2026h and CSI ?2026l). This tells the terminal to buffer all the output and display it atomically. Most modern terminals support this.
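Putting the update rules and the synchronized-output wrapping together, a stripped-down version of the redraw step might look like this (cursor bookkeeping simplified, and the scrolled-above-viewport case from above omitted):

// Simplified sketch of the redraw rules described above.
const SYNC_START = '\x1b[?2026h'; // begin synchronized output
const SYNC_END = '\x1b[?2026l';   // end synchronized output

function redraw(previous: string[], next: string[], widthChanged: boolean) {
  let body: string;
  if (previous.length === 0 || widthChanged) {
    // First render or width change: clear and re-render everything.
    body = '\x1b[2J\x1b[H' + next.join('\n');
  } else {
    // Find the first line that differs from what is already on screen.
    let first = 0;
    while (first < previous.length && first < next.length && previous[first] === next[first]) {
      first++;
    }
    if (first === previous.length && first === next.length) return; // nothing changed
    if (first >= previous.length) {
      // Only new lines were appended; write them below the existing output.
      body = '\n' + next.slice(first).join('\n');
    } else {
      // Move the cursor up to the first changed line, erase to the end of the
      // screen, and re-render from there.
      const linesUp = previous.length - 1 - first;
      body = (linesUp > 0 ? `\x1b[${linesUp}A` : '') + '\r\x1b[0J' + next.slice(first).join('\n');
    }
  }
  process.stdout.write(SYNC_START + body + SYNC_END);
}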
How well does it work and how much does it flicker? In any capable terminal like Ghostty or iTerm2, this works brilliantly and you never see any flicker. In less fortunate terminal implementations like VS Code’s built-in terminal, you will get some flicker depending on the time of day, your display size, your window size, and so on. Given that I’m very accustomed to Claude Code, I haven’t spent any more time optimizing this. I’m happy with the little flicker I get in VS Code. I wouldn’t feel at home otherwise. And it still flickers less than Claude Code.
How wasteful is this approach? We store an entire scrollback buffer worth of previously rendered lines, and we re-render lines every time the TUI is asked to render itself. That’s alleviated with the caching I described above, so the re-rendering isn’t a big deal. We still have to compare a lot of lines with each other. Realistically, on computers younger than 25 years, this is not a big deal, both in terms of performance and memory use (a few hundred kilobytes for very large sessions). Thanks V8. What I get in return is a dead simple programming model that lets me iterate quickly.
I don’t need to explain what features you should expect from a coding agent harness. pi comes with most creature comforts you’re used to from other tools:
* Runs on Windows, Linux, and macOS (or anything with a Node.js runtime and a terminal)
* Message queuing while the agent is working
If you want the full rundown, read the README. What’s more interesting is where pi deviates from other harnesses in philosophy and implementation.
That’s it. The only thing that gets injected at the bottom is your AGENTS.md file: both the global one that applies to all your sessions and the project-specific one stored in your project directory. This is where you can customize pi to your liking. You can even replace the full system prompt if you want to. Compare that to, for example, Claude Code’s system prompt, Codex’s system prompt, or opencode’s model-specific prompts (the Claude one is a cut-down version of the original Claude Code prompt they copied).
You might think this is crazy. In all likelihood, the models have some training on their native coding harnesses, so using the native system prompt, or something close to it as opencode does, would seem ideal. But it turns out that all the frontier models have been RL-trained up the wazoo, so they inherently understand what a coding agent is. There does not appear to be a need for 10,000 tokens of system prompt, as we’ll find out later in the benchmark section, and as I’ve anecdotally found out by exclusively using pi for the past few weeks. Amp, while copying some parts of the native system prompts, seems to also do just fine with its own prompt.
Here are the tool definitions:
read
Read the contents of a file. Supports text files and images (jpg, png,
gif, webp). Images are sent as attachments. For text files, defaults to
first 2000 lines. Use offset/limit for large files.
- path: Path to the file to read (relative or absolute)
- offset: Line number to start reading from (1-indexed)
- limit: Maximum number of lines to read
write
Write content to a file. Creates the file if it doesn’t exist, overwrites
if it does. Automatically creates parent directories.
- path: Path to the file to write (relative or absolute)
- content: Content to write to the file
edit
Edit a file by replacing exact text. The oldText must match exactly
(including whitespace). Use this for precise, surgical edits.
- path: Path to the file to edit (relative or absolute)
- oldText: Exact text to find and replace (must match exactly)
- newText: New text to replace the old text with
bash
...
Read the original on mariozechner.at »
Rust is one of the most loved languages out there, is fast, and has an amazing community. Rust invented the concept of ownership as a solution to memory management issues without resorting to something slower like garbage collection or reference counting. But when you don’t need to be quite as low-level, it gives you utilities such as Rc, Arc, and Cow to do reference counting and “clone-on-write” in your code. And when you need to go lower-level still, you can use the unsafe system and access raw C pointers.
Rust also has a bunch of awesome features from functional languages like tagged enums, match expressions, first class functions and a powerful type system with generics.
Rust has an LLVM-based compiler which lets it compile to native code and WASM.
I’ve also been doing a bit of Swift programming for a couple of years now, and the more I learn Rust, the more I see a reflection of Swift. (I know that Swift stole a lot of ideas from Rust; I’m talking about my own perspective here.)
Swift, too, has awesome features from functional languages like tagged enums, match expressions and first-class functions. It too has a very powerful type system with generics.
Swift too gives you complete type-safety without a garbage collector. By default, everything is a value type with “copy-on-write” semantics. But when you need extra speed you can opt into an ownership system and “move” values to avoid copying. And if you need to go even lower level, you can use the unsafe system and access raw C pointers.
Swift has an LLVM-based compiler which lets it compile to native code and WASM.
You’re probably feeling like you just read the same paragraphs twice. This is no accident. Swift is extremely similar to Rust and has most of the same feature set. But there is a very big difference in perspective. If you consider the default memory model, this will start to make a lot of sense.
Rust is a low-level systems language at heart, but it gives you the tools to go higher level. Swift starts at a high level and gives you the ability to go low-level.
The most obvious example of this is the memory management model. Swift uses value types by default with copy-on-write semantics. This is the equivalent of using Cow<> for all your values in Rust. But defaults matter. Rust makes it easy to use “moved” and “borrowed” values but requires extra ceremony to use Cow<> values, as you need to “unwrap” them with .to_mut() to actually mutate the value within. Swift makes these copy-on-write values easy to use and instead requires extra ceremony for borrowing and moving. Rust is faster by default; Swift is simpler and easier by default.
Swift’s syntax is a masterclass in taking awesome functional language concepts and hiding them in C-like syntax to trick the developers into accepting them.
Consider match statements. This is what a match statement looks like in Rust:
Here’s how that same code would be written in Swift:
Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fallthrough”. It does pattern matching. It’s just a match expression with a different name and syntax.
In fact, Swift treats enums as more than just types and lets you put methods directly on them.
Rust doesn’t have null, but it does have None. Swift has nil, but it’s really just a None in hiding. Instead of an Option, Swift lets you use T?, but the compiler still forces you to check that the value is not nil before you can use it.
You get the same safety with more convenience since you can do this in Swift with an optional type:
let val: T?

if let val {
    // val is now of type `T`.
}
Also, you’re not forced to wrap every value with a Some(val) before returning it. The Swift compiler takes care of that for you. A T will transparently be converted into a T? when needed.
Rust doesn’t have try-catch. Instead it has a Result type which contains the success and error types.
Swift doesn’t have a try-catch either, but it does have do-catch and you have to use try before calling a function that could throw. Again, this is just deception for those developers coming from C-like languages. Swift’s error handling works exactly like Rust’s behind the scenes, but it is hidden in a clever, familiar syntax.
func usesErrorThrowingFunction() throws {
    let x = try thisFnCanThrow()
}

func handlesErrors() {
    do {
        let x = try thisFnCanThrow()
    } catch let err {
        // handle the `err` here.
    }
}
This is very similar to how Rust lets you use ? at the end of statements to automatically forward errors, but you don’t have to wrap your success values in Ok().
There are many common problems that Rust’s compiler will catch at compile time and even suggest solutions for you. The example that portrays this well is self-referencing enums.
Consider an enum that represents a tree. Since it is a recursive type, Rust will force you to use something like Box<> for referencing a type within itself.
This makes the problem explicit and forces you to deal with it directly. Swift is a little more automatic.
Note that you still have to annotate this enum with the indirect keyword to indicate that it is recursive. But once you’ve done that, Swift’s compiler takes care of the rest. You don’t have to think about Box<> or Rc<>. The values just work normally.
Swift was designed to replace Objective-C and needed to be able to interface with existing code, so it has made a lot of pragmatic choices that make it a much less “pure” and “minimalist” language. Swift is a pretty big language compared to Rust and has many more features built in. However, Swift is designed with “progressive disclosure” in mind, which means that just as you think you’ve learned the language, a little more of the iceberg pops out of the water.
Here are just some of the language features:
Swift is a far easier language to get started and productive with. The syntax is more familiar and a lot more is done for you automatically. But this really just makes Swift a higher-level language and it comes with the same tradeoffs.
By default, a Rust program is much faster than a Swift program. This is because Rust is fast by default, and lets you be slow, while Swift is easy by default and lets you be fast.
Based on this, I would say both languages have their uses. Rust is better for systems and embedded programming. It’s better for writing compilers and browser engines (Servo) and it’s better for writing entire operating systems.
Swift is better for writing UI and servers and some parts of compilers and operating systems. Over time I expect to see the overlap get bigger.
There is a perception that Swift is only a good language for Apple platforms. While this was once true, it is no longer the case, and Swift is increasingly becoming a good cross-platform language. Hell, Swift even compiles to WASM, and the forks made by the SwiftWasm team were merged back into Swift core earlier this year.
Swift on Windows is being used by The Browser Company to share code and bring the Arc browser to Windows. Swift on Linux has long been supported by Apple themselves in order to push “Swift on Server”. Apple is directly sponsoring the Swift on Server conference.
This year Embedded Swift was also announced which is already being used on small devices like the Panic Playdate.
Swift website has been highlighting many of these projects:
The Browser Company says that interoperability is Swift’s superpower.
And the Swift project has been trying to make working with Swift a great experience outside of Xcode, with projects like an open-source LSP and funding for the VS Code extension.
Compile times are (like Rust’s) quite bad. There is some amount of feature creep, and the language is larger than it should be. Not all syntax feels familiar. The package ecosystem isn’t nearly as rich as Rust’s.
But “Swift is only for Apple platforms” is an old and tired cliché at this point. Swift is already a cross-platform, ABI-stable language with no GC (it uses automatic reference counting instead) and the option to opt into ownership for even more performance. Swift packages increasingly work on Linux. Foundation was ported to Swift and open-sourced. It’s still early days for Swift as a good, more convenient Rust alternative for cross-platform development, but it is here now. It’s no longer a future to wait for.
...
You have limited time, but get more time for each animal listed. When the timer runs out, that’s game over.
No overlapping terms.
For example, if you list “bear” and “polar bear”, you get no point (or time bonus) for the latter. But you can still get a point for a second kind of bear. Order doesn’t matter.
...
Read the original on rose.systems »
A leader in the global fight against smallpox and a champion of vaccine science, William Foege died last Saturday.

The late physicians and health administrators William Foege (middle), J. Donald Millar (left) and J. Michael Lane (right), all of whom served in the Global Smallpox Eradication Program, in 1980.

William Foege, a leader in the global fight to eliminate smallpox, has died. Foege passed away on Saturday at the age of 89, according to the Task Force for Global Health, a public health organization he co-founded.

Foege headed the U.S. Centers for Disease Control and Prevention’s Smallpox Eradication Program in the 1970s. Before the disease was officially eradicated in 1980, it killed around one in three people who were infected. According to the CDC, there have been no new smallpox cases since 1977.

“If you look at the simple metric of who has saved the most lives, he is right up there with the pantheon,” former CDC director Tom Frieden told the Associated Press. “Smallpox eradication has prevented hundreds of millions of deaths.”

Foege went on to lead the CDC and served as a senior medical adviser and senior fellow at the Bill & Melinda Gates Foundation. In 2012 then-president Barack Obama awarded him the Presidential Medal of Freedom.

Foege was a vocal proponent of vaccines for public health, writing with epidemiologist Larry Brilliant in Scientific American in 2013 that the effort to eliminate polio “has never been closer” to success. “By working together,” they wrote, “we will soon relegate polio—alongside smallpox—to the history books.” Polio remains a “candidate for eradication,” according to the World Health Assembly.

And in 2025 Foege, alongside several other former CDC directors, spoke out against the policies of the current secretary of health and human services, Robert F. Kennedy, Jr. In a New York Times op-ed, they wrote that the top health official’s tenure was “unlike anything we had ever seen at the agency.”

In a statement, Task Force for Global Health CEO Patrick O’Carroll remembered Foege as an “inspirational” figure, both for early-career public health workers and veterans of the field. “Whenever he spoke, his vision and compassion would reawaken the optimism that prompted us to choose this field, and re-energize our efforts to make this world a better place,” O’Carroll said.
...
Read the original on www.scientificamerican.com »
For the last few months, I have been developing a new reporting application. Early on, I decided to add a --dry-run option to the run command. This turned out to be quite useful — I have used it many times a day while developing and testing the application.
The application will generate a set of reports every weekday. It has a loop that checks periodically if it is time to generate new reports. If so, it will read data from a database, apply some logic to create the reports, zip the reports, upload them to an sftp server, check for error responses on the sftp server, parse the error responses, and send out notification mails. The files (the generated reports, and the downloaded feedback files) are moved to different directories depending on the step in the process. A simple and straightforward application.
Early in the development process, when testing the incomplete application, I remembered that Subversion (the version control system after CVS and before Git) had a --dry-run option. Other Linux commands have this option too. If a command is run with the --dry-run argument, it prints what would happen when the command is run for real, but makes no changes. This lets the user see what will happen before running the command without --dry-run.
I remembered how helpful that was, so I decided to add it to my command as well. When I run the command with --dry-run, it prints out the steps that will be taken in each phase: which reports will be generated (and which will not), which files will be zipped and moved, which files will be uploaded to the sftp server, and which files will be downloaded from it (it logs on and lists the files).
Looking back at the project, I realized that I ended up using the --dry-run option pretty much every day.
I am surprised how useful I found it to be. I often used it as a check before getting started. Since I know --dry-run will not change anything, it is safe to run without thinking. I can immediately see that everything is accessible, that the configuration is correct, and that the state is as expected. It is a quick and easy sanity check.
I also used it quite a bit when testing the complete system. For example, if I changed a date in the report state file (the date for the last successful report of a given type), I could immediately see from the output whether it would now be generated or not. Without --dry-run, the actual report would also be generated, which takes some time. So I can test the behavior, and receive very quick feedback.
The downside is that the dryRun flag pollutes the code a bit. In all the major phases, I need to check if the flag is set and, if so, only print the action that would be taken without actually performing it. However, this doesn’t go very deep. For example, none of the code that actually generates the report needs to check it. I only need to check whether that code should be invoked in the first place.
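The post doesn’t include code, but the pattern it describes looks roughly like this (TypeScript here purely for illustration; the function and helper names are invented):

// Illustrative only: sftpUpload stands in for whatever does the real work.
declare function sftpUpload(file: string): Promise<void>;

async function uploadReports(files: string[], dryRun: boolean): Promise<void> {
  for (const file of files) {
    if (dryRun) {
      // Only report what would happen; make no changes.
      console.log(`[dry-run] would upload ${file} to the SFTP server`);
      continue;
    }
    await sftpUpload(file);
  }
}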
The type of application I have been writing is ideal for --dry-run. It is invoked by a command, and it may create some changes, for example generating new reports. More reactive applications (that wait for messages before acting) don’t seem to be a good fit.
I added --dry-run on a whim early on in the project. I was surprised at how useful I found it to be. Adding it early was also good, since I got the benefit of it while developing more functionality.
The --dry-run flag is not for every situation, but when it fits, it can be quite useful.
...
Read the original on henrikwarne.com »
Adventure Game Studio (AGS) is open-source software for creating graphical point-and-click adventure games. It is free, standalone, and requires no subscription.
The Windows-based IDE streamlines game creation by integrating tools for importing graphics, writing scripts, and testing. Games created with AGS can be played on multiple platforms, including Linux, iOS, and Android.
Suitable for all skill levels, AGS features an active community for support and socialising.
Showcase your games by uploading them to this website.
Rot your brain by consuming AI slop and services in this classic arcade style game created for the MAGS January 2026 game jam in the AGS forums. Move […]
You awaken alone on a cold, rocky shore beneath a moonless sky, dragged from the sea through a sewer pipe with no memory of who you are, how you […]
A dead man’s soul cries out against the force of a ferocious blizzard. He cries for help. He cries for answers. Then he screams as he is torn apart […]
The jury of Los Angeles County District has ruled in favor of four police officers accused of abusing their power against coloured citizen Rodney […]
AGS has an active and friendly community, with many ways of keeping in touch and getting help with your project or games made with AGS.
These include our local forums, Facebook page, Discord server, in-person meet-ups, and many more.
The AGS community is run by a team of dedicated volunteers, who put their time and efforts into keeping it running as a welcoming, friendly and informative place to be. The AGS server and forums are paid for out of our own pockets, so in effect it costs us money to provide a free service to AGS users.
If you appreciate the work we do, and would like to give a little something back, please use the below link to donate via PayPal. Any profit made after covering server costs will be put back into hosting community events such as Mittens.
...
Read the original on www.adventuregamestudio.co.uk »
Like many organizations, Wiki Education has grappled with generative AI, its impacts, opportunities, and threats, for several years. As an organization that runs large-scale programs to bring new editors to Wikipedia (we’re responsible for about 19% of all new active editors on English Wikipedia), we have deep understanding of what challenges face new content contributors to Wikipedia — and how to support them to successfully edit. As many people have begun using generative AI chatbots like ChatGPT, Gemini, or Claude in their daily lives, it’s unsurprising that people will also consider using them to help draft contributions to Wikipedia. Since Wiki Education’s programs provide a cohort of content contributors whose work we can evaluate, we’ve looked into how our participants are using GenAI tools.
We are choosing to share our perspective through this blog post because we hope it will help inform discussions of GenAI-created content on Wikipedia. In an open environment like the Wikimedia movement, it’s important to share what you’ve learned. In this case, we believe our learnings can help Wikipedia editors who are trying to protect the integrity of content on the encyclopedia, Wikipedians who may be interested in using generative AI tools themselves, other program leaders globally who are trying to onboard new contributors who may be interested in using these tools, and the Wikimedia Foundation, whose product and technology team builds software to help support the development of high-quality content on Wikipedia.
Our fundamental conclusion about generative AI is: Wikipedia editors should never copy and paste the output from generative AI chatbots like ChatGPT into Wikipedia articles.
Let me explain more.
Since the launch of ChatGPT in November 2022, we’ve been paying close attention to GenAI-created content, and how it relates to Wikipedia. We’ve spot-checked work of new editors from our programs, primarily focusing on citations to ensure they were real and not hallucinated. We experimented with tools ourselves, we led video sessions about GenAI for our program participants, and we closely tracked on-wiki policy discussions around GenAI. Currently, English Wikipedia prohibits the use of generative AI to create images or in talk page discussions, and recently adopted a guideline against using large language models to generate new articles.
As our Wiki Experts Brianda Felix and Ian Ramjohn worked with program participants throughout the first half of 2025, they found more and more text bearing the hallmarks of generative AI in article content, like bolded words or bulleted lists in odd places. But the use of generative AI wasn’t necessarily problematic, as long as the content was accurate. Wikipedia’s open editing process encourages stylistic revisions to factual text to better fit Wikipedia’s style.
This finding led us to invest significant staff time into cleaning up these articles — far more than these editors had likely spent creating them. Wiki Education’s core mission is to improve Wikipedia, and when we discover our program has unknowingly contributed to misinformation on Wikipedia, we are committed to cleaning it up. In the clean-up process, Wiki Education staff moved more recent work back to sandboxes, stub-ified articles that passed notability but mostly failed verification, and PRODed some articles that in our judgment weren’t salvageable. All these are ways of addressing Wikipedia articles with flawed content. (While there are many grumblings about Wikipedia’s deletion processes, we found several of the articles we PRODed for fully hallucinated GenAI content were then de-PRODed by other editors, showing the diversity of opinion about generative AI within the Wikipedia community.)
Given what we found through our investigation into the work from prior terms, and given the increasing usage of generative AI, we wanted to proactively address generative AI usage within our programs. Thanks to in-kind support from our friends at Pangram, we began running our participants’ Wikipedia edits, including in their sandboxes, through Pangram nearly in real time. This is possible because of the Dashboard course management platform Sage built, which tracks edits and generates tickets for our Wiki Experts based on on-wiki edits.
We created a brand-new training module on Using generative AI tools with Wikipedia. This training emphasizes where participants could use generative AI tools in their work, and where they should not. The core message of these trainings is, do not copy and paste anything from a GenAI chatbot into Wikipedia.
We crafted a variety of automated emails to participants who Pangram detected were adding text created by generative AI chatbots. Sage also recorded some videos, since many young people are accustomed to learning via video rather than reading text. We also provided opportunities for engagement and conversation with program participants.
In total, we had 1,406 AI edit alerts in the second half of 2025, although only 314 of these (or 22%) were in the article namespace on Wikipedia (meaning edits to live articles). In most cases, Pangram detected participants using GenAI in their sandboxes during early exercises, when we ask them to do things like choose an article, evaluate an article, create a bibliography, and outline their contribution.
Pangram struggled with false positives in a few sandbox scenarios:
* Bibliographies, which are often a combination of human-written prose (describing a source and its relevance) and non-prose text (the citation for a source, in some standard format)
* Outlines with a high portion of non-prose content (such as bullet lists, section headers, text fragments, and so on)
We also had a handful of cases where sandboxes were flagged for AI after a participant copied an AI-written section from an existing article to use as a starting point to edit or to expand. (This isn’t a flaw of Pangram, but a reminder of how much AI-generated content editors outside our programs are adding to Wikipedia!)
In broad strokes, we found that Pangram is great at analyzing plain prose — the kind of sentences and paragraphs you’ll find in the body of a Wikipedia article — but sometimes it gets tripped up by formatting, markup, and non-prose text. Early on, we disabled alert emails for participants’ bibliography and outline exercises, and throughout the end of 2025, we refined the Dashboard’s preprocessing steps to extract the prose portions of revisions and convert them to plain text before sending them to Pangram.
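As a rough illustration of that preprocessing idea (not the Dashboard’s actual code, which may well be in another language entirely), stripping wiki markup down to prose might look something like this:

// Simplified sketch: strip common wiki markup so only prose reaches the
// detector. Real preprocessing handles many more cases (nested templates,
// tables, file links, and so on).
function extractProse(wikitext: string): string {
  return wikitext
    .replace(/<ref[^>]*>[\s\S]*?<\/ref>/g, '')          // reference footnotes
    .replace(/<ref[^>]*\/>/g, '')                       // self-closing refs
    .replace(/\{\{[\s\S]*?\}\}/g, '')                   // templates and infoboxes
    .replace(/\[\[(?:[^|\]]*\|)?([^\]]*)\]\]/g, '$1')   // keep link display text
    .replace(/^[*#:;=].*$/gm, '')                       // lists, headers, indents
    .replace(/'{2,}/g, '')                              // bold/italic markup
    .replace(/\n{3,}/g, '\n\n')
    .trim();
}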
Many participants also reported “just using Grammarly to copy edit.” In our experience, however, the smallest fixes done with Grammarly never trigger Pangram’s detection, but if you use its more advanced content creation features, the resulting text registers as being AI generated.
But overwhelmingly, we were pleased with Pangram’s results. Our early interventions with participants who were flagged as using generative AI for exercises that would not enter mainspace seemed to head off their future use of generative AI. We supported 6,357 new editors in fall 2025, and only 217 of them (or 3%) had multiple AI alerts. Only 5% of the participants we supported had mainspace AI alerts. That means thousands of participants successfully edited Wikipedia without using generative AI to draft their content.
For those who did add GenAI-drafted text, we ensured that the content was reverted. In fact, participants sometimes self-reverted once they received our email letting them know Pangram had detected their contributions as being AI created. Instructors also jumped in to revert, as did some Wikipedians who found the content on their own. Our ticketing system also alerted our Wiki Expert staff, who reverted the text as soon as they could.
While some instructors in our Wikipedia Student Program had concerns about AI detection, we had a lot of success focusing the conversation on the concept of verifiability. If the instructor, as subject matter expert, could attest that the information was accurate and could find the specific facts in the cited sources, we permitted the text to come back to Wikipedia. However, the process of attempting to verify student-created work (which in many cases the students swore they’d written themselves) led many instructors to realize what we had found in our own assessment: in their current state, GenAI-powered chatbots cannot write factually accurate, verifiable text for Wikipedia.
We believe our Pangram-based detection interventions led to fewer participants adding GenAI-created content to Wikipedia. Following the trend lines, we anticipated about 25% of participants to add GenAI content to Wikipedia articles; instead, it was only 5%, and our staff were able to revert all problematic content.
I’m deeply appreciative of everyone who made this success possible this term: Participants who followed our recommendations, Pangram who gave us access to their detection service, Wiki Education staff who did the heavy lift of working with all of the positive detections, and the Wikipedia community, some of whom got to the problematic work from our program participants before we did.
So far, I’ve focused on the problems with generative AI-created content. But that’s not all these tools can do, and we did find some ways they were useful. Our training module encourages editors — if their institution’s policies permit it — to consider using generative AI tools for:
To evaluate the success of these use scenarios, we worked directly with 7 of the classes we supported in fall 2025 in our Wikipedia Student Program. We asked students to anonymously fill out a survey every time they used generative AI tools in their Wikipedia work. We asked what tool they used, what prompt they used, how they used the output, and whether they found it helpful. While some students filled the survey out multiple times, others filled it out once. We had 102 responses reporting usage at various stages of the project. Overwhelmingly, 87% of the responses said generative AI was helpful for the task at hand. The most popular tool by far was ChatGPT, with Grammarly a distant second and the others in the single digits. Respondents reported that the tools helped with:
* Identifying articles to work on that were relevant to the course they were taking
* Highlighting gaps within existing articles, including missing sections or more recent information that was missing
* Finding reliable sources that they hadn’t already located
* Pointing to which database a certain journal article could be found
* When prompted with the text they had drafted and the checklist of requirements, evaluating the draft against those requirements
* Identifying categories they could add to the article they’d edited
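The checklist use case above boils down to a structured prompt. Here is a minimal sketch of what that might look like using the openai Python SDK; the checklist items and model name are illustrative assumptions, not Wiki Education's actual assignment requirements or a recommended configuration.

```python
# A minimal sketch of the "evaluate my draft against the checklist" use case,
# using the openai Python SDK. The checklist items are illustrative examples,
# not Wiki Education's actual assignment requirements.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CHECKLIST = [
    "Written in a neutral, encyclopedic tone",
    "Every claim is attributed to a cited, reliable source",
    "No original analysis or opinion",
]


def review_draft(draft: str) -> str:
    """Ask the model to check a sandbox draft against each checklist item."""
    prompt = (
        "Review the Wikipedia draft below against each checklist item. "
        "For every item, answer met or not met, and explain briefly.\n\n"
        "Checklist:\n" + "\n".join(f"- {item}" for item in CHECKLIST)
        + "\n\nDraft:\n" + draft
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    # The output is a starting point for the student's own revision, not a verdict.
    return response.choices[0].message.content
```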
Critically, no participants reported using AI tools to draft text for their assignments. One student reported: “I pasted all of my writing from my sandbox and said ‘Put this in a casual, less academic tone’ … I figured I’d try this but it didn’t sound like what I normally write and I didn’t feel that it captured what I was trying to get across so I scrapped it.”
While this was an informal research project, we received enough positive feedback from it to believe using ChatGPT and other tools can be helpful in the research stage if editors then critically evaluate the output they get, instead of blindly accepting it. Even participants who found AI helpful reported that they didn’t use everything it gave them, as some was irrelevant. Undoubtedly, it’s crucial to maintain the human thinking component throughout the process.
My conclusion is that, at least as of now, generative AI-powered chatbots like ChatGPT should never be used to generate text for Wikipedia; too much of it will simply be unverifiable. Our staff would spend far more time attempting to verify facts in AI-generated articles than if we’d simply done the research and writing ourselves.
That being said, AI tools can be helpful in the research process, especially to help identify content gaps or sources, when used in conjunction with a human brain that carefully evaluates the information. Editors should never simply take a chatbot’s suggestion; instead, if they want to use a chatbot, they should use it as a brainstorm partner to help them think through their plans for an article.
To date, Wiki Education’s interventions as our program participants edit Wikipedia show promise for keeping unverifiable, GenAI-drafted content off Wikipedia. Based on our experiences in the fall term, we have high confidence in Pangram as a detector of AI content, at least in Wikipedia articles. We will continue our current strategy in 2026 (with more small adjustments to make the system as reliable as we can).
More generally, we found participants had less AI literacy than popular discourse might suggest. Because of this, we created a supplemental training on large language models, which we’ve offered as an optional module for all participants. Many participants indicated that they found our guidance regarding AI to be welcome and helpful as they attempt to navigate the new complexities created by AI tools.
We are also looking forward to more research on our work. A team of researchers — Francesco Salvi and Manoel Horta Ribeiro at Princeton University, Robert Cummings at the University of Mississippi, and Wiki Education’s Sage Ross — has been looking into Wiki Education’s Wikipedia Student Program editors’ use of generative AI over time. Preliminary results have backed up our anecdotal understanding, while also revealing nuances in how the text our students produce has changed with the introduction of GenAI chatbots. They also confirmed our belief in Pangram: After running student edits from 2015 up until the launch of ChatGPT through Pangram, without any date information involved, the team found that Pangram correctly identified 100% of it as human written. This research will continue into the spring, as the team explores ways of unpacking the effects of AI on different aspects of article quality.
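That check amounts to measuring a false-positive rate on known-human text. For the curious, here is a minimal sketch of that kind of evaluation; detect_ai is a hypothetical stand-in for the real detection call, and the texts would come from an archive of pre-ChatGPT student revisions.

```python
# A minimal sketch of a false-positive check on known-human text, in the spirit
# of the researchers' validation described above. detect_ai() is a hypothetical
# placeholder for the real detection call.
from typing import Callable, Iterable


def false_positive_rate(
    human_texts: Iterable[str],
    detect_ai: Callable[[str], bool],
) -> float:
    """Fraction of known-human texts the detector wrongly flags as AI-written."""
    flagged = total = 0
    for text in human_texts:
        total += 1
        flagged += detect_ai(text)  # True counts as 1, False as 0
    return flagged / total if total else 0.0
```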
And, of course, generative AI is a rapidly changing field. Just because these were our findings in 2025 doesn’t mean they will hold true throughout 2026. Wiki Education remains committed to monitoring, evaluating, iterating, and adapting as needed. Fundamentally, we are committed to ensuring we add high quality content to Wikipedia through our programs. And when we miss the mark, we are committed to cleaning up any damage.
While I’ve focused this post on what Wiki Education has learned from working with our program participants, the lessons extend to others who edit Wikipedia. Already, 10% of adults worldwide are using ChatGPT, and drafting text is one of the top use cases. As generative AI usage proliferates, so will its use by well-meaning people to draft content for Wikipedia. It’s unlikely that longtime, daily Wikipedia editors would add content copied and pasted from a GenAI chatbot without verifying that all the information is in the sources it cites. But many casual Wikipedia contributors or new editors may unknowingly add bad content to Wikipedia when using a chatbot. After all, it provides what looks like accurate facts, cited to what are often real, relevant, reliable sources. Most edits we ended up reverting seemed acceptable on a cursory review; it was only after we attempted to verify the information that we understood the problems.
Because this unverifiable content often seems okay at first pass, it’s critical for Wikipedia editors to be equipped with tools like Pangram that more accurately flag when they should take a closer look at an edit. Automating review of text for generative AI usage — as Wikipedians have done for copyright violation text for years — would help protect the integrity of Wikipedia content. In Wiki Education’s experience, Pangram is a tool that could provide accurate assessments of text for editors, and we would love to see a larger-scale version of the tool we built to evaluate edits from our programs deployed across all edits on Wikipedia. Currently, editors can add a warning banner noting that text might be LLM generated, but this is based solely on the assessment of the person adding the banner. Our experience suggests that judging by tone alone isn’t enough; tools like Pangram can flag highly problematic information that might sound okay but should be reverted immediately.
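A wiki-wide version of such a tool might look something like a bot that watches the recent changes feed and queues suspicious additions for human review. Here is a minimal sketch using the real MediaWiki recentchanges API; the detection call is again a hypothetical placeholder, and nothing here is Wikipedia's or Pangram's actual tooling.

```python
# A minimal sketch of a wiki-wide screening bot: poll the MediaWiki
# recent-changes feed and flag large additions for human review.
# check_ai_likelihood() is a hypothetical placeholder for a detection service.
import requests

API = "https://en.wikipedia.org/w/api.php"


def recent_changes(limit: int = 50) -> list[dict]:
    """Fetch the most recent edits and page creations from the live feed."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|sizes|user",
        "rctype": "edit|new",
        "rclimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    return data["query"]["recentchanges"]


def check_ai_likelihood(revid: int) -> float:
    """Placeholder: fetch the text added in this revision and score it."""
    raise NotImplementedError("replace with a real detection call")


def screen(threshold: float = 0.9) -> None:
    for change in recent_changes():
        # Only score substantial additions; tiny edits rarely matter here.
        if change["newlen"] - change["oldlen"] < 500:
            continue
        if check_ai_likelihood(change["revid"]) >= threshold:
            # Flag for human review; never auto-revert on a detector score alone.
            print(f"Review needed: {change['title']} (rev {change['revid']})")
```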
We’ve also found success in the training modules and support we’ve created for our program participants. Providing clear guidance — and the reason why that guidance exists — has been key in helping us head off poor usage of generative AI text. We encourage Wikipedians to consider revising the guidance new contributors receive in welcome messages to emphasize the pitfalls of adding GenAI-drafted text. Software the Wikimedia Foundation creates for new contributors should center on starting with a list of sources and drawing information from them, using human intellect rather than generative AI to summarize that information. Providing guidance upfront can help well-meaning contributors steer clear of bad GenAI-created text.
Wikipedia recently celebrated its 25th birthday. For it to survive into the future, it will need to adapt as technology around it changes. Wikipedia would be nothing without its corps of volunteer editors. The consensus-based decision-making model of Wikipedia means change doesn’t come quickly, but we hope this deep-dive will help spark a conversation about changes that are needed to protect Wikipedia into the future.
...
Read the original on wikiedu.org »
US law enforcement has been investigating allegations by former Meta Platforms Inc. contractors that Meta personnel can access WhatsApp messages, despite the company’s statements that the chat service is private and encrypted, according to interviews and an agent’s report seen by Bloomberg News.
The former contractors’ claims — that they and some Meta staff had “unfettered” access to WhatsApp messages — were being examined by special agents with the US Department of Commerce, according to the law enforcement records, as well as a person familiar with the matter and one of the contractors. Similar claims were also the subject of a 2024 whistleblower complaint to the US Securities and Exchange Commission, according to the records and the person, who spoke on the condition that they not be identified out of concern for potential retaliation. The investigation and whistleblower complaint haven’t been previously reported.
...
Read the original on www.bloomberg.com »