10 interesting stories served every morning and every evening.
Toulouse, France, 28 November 2025 — Analysis of a recent event involving an A320 Family aircraft has revealed that intense solar radiation may corrupt data critical to the functioning of flight controls.
Airbus has consequently identified a significant number of A320 Family aircraft currently in-service which may be impacted.
Airbus has worked proactively with the aviation authorities to request immediate precautionary action from operators via an Alert Operators Transmission (AOT) in order to implement the available software and/or hardware protection, and ensure the fleet is safe to fly. This AOT will be reflected in an Emergency Airworthiness Directive from the European Union Aviation Safety Agency (EASA).
Airbus acknowledges these recommendations will lead to operational disruptions to passengers and customers. We apologise for the inconvenience caused and will work closely with operators, while keeping safety as our number one and overriding priority.
...
Read the original on www.airbus.com »
OpenAI is now internally testing ‘ads’ inside ChatGPT that could redefine the web economy.
Up until now, the ChatGPT experience has been completely ad-free.
While there are premium plans and models, ChatGPT doesn’t sell you products or show you ads. Google Search, on the other hand, is full of ads that influence your buying behaviour.
As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”
This move could disrupt the web economy because, as most people don’t realize, ChatGPT likely knows more about its users than Google does.
For example, OpenAI could create personalised ads on ChatGPT that promote products you actually want to buy. It might also slip sponsored results into search answers, similar to Google Search ads.
The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.
ChatGPT has roughly 800 million people using it every week, up from 100 million weekly users in November 2023 and about 300 million weekly users in late 2024.
An OpenAI-backed study estimated 700 million users sending 18 billion messages per week by July 2025, which lines up with this growth, and other analysts now peg traffic at around 5–6 billion visits per month.
GPT handles about 2.5 billion prompts a day, and India has become the single biggest user base, ahead of the US.
ChatGPT has everything it needs for ads to succeed. What do you think?
...
Read the original on www.bleepingcomputer.com »
Every couple of years somebody notices that large tech companies sometimes produce surprisingly sloppy code. If you haven’t worked at a big company, it might be hard to understand how this happens. Big tech companies pay well enough to attract many competent engineers. They move slowly enough that it looks like they’re able to take their time and do solid work. How does bad code happen?
I think the main reason is that big companies are full of engineers working outside their area of expertise. The average big tech employee stays for only a year or two. In fact, big tech compensation packages are typically designed to put a four-year cap on engineer tenure: after four years, the initial share grant is fully vested, causing engineers to take what can be a 50% pay cut. Companies do extend yearly refresher grants, but this obviously incentivizes engineers to go find another job where they don’t have to wonder each year whether they’re going to get the other half of their compensation.
If you count internal mobility, it’s even worse. The longest I have ever stayed on a single team or codebase was three years, near the start of my career. I expect to be re-orged at least every year, and often much more frequently.
However, the average tenure of a codebase in a big tech company is a lot longer than that. Many of the services I work on are a decade old or more, and have had many, many different owners over the years. That means many big tech engineers are constantly “figuring it out”. A pretty high percentage of code changes are made by “beginners”: people who have onboarded to the company, the codebase, or even the programming language in the past six months.
To some extent, this problem is mitigated by “old hands”: engineers who happen to have been in the orbit of a particular system for long enough to develop real expertise. These engineers can give deep code reviews and reliably catch obvious problems. But relying on “old hands” has two problems.
First, this process is entirely informal. Big tech companies make surprisingly little effort to develop long-term expertise in individual systems, and once they’ve got it they seem to barely care at all about retaining it. Often the engineers in question are moved to different services, and have to either keep up their “old hand” duties on an effectively volunteer basis, or abandon them and become a relative beginner on a brand new system.
Second, experienced engineers are always overloaded. It is a busy job being one of the few engineers who has deep expertise on a particular service. You don’t have enough time to personally review every software change, or to be actively involved in every decision-making process. Remember that you also have your own work to do: if you spend all your time reviewing changes and being involved in discussions, you’ll likely be punished by the company for not having enough individual output.
Putting all this together, what does the median productive engineer at a big tech company look like? They are usually:
* competent enough to pass the hiring bar and be able to do the work, but either
* working on a codebase or language that is largely new to them, or
* trying to stay on top of a flood of code changes while also juggling their own work.
They are almost certainly working to a deadline, or to a series of overlapping deadlines for different projects. In other words, they are trying to do their best in an environment that is not set up to produce quality code.
That’s how “obviously” bad code happens. For instance, a junior engineer picks up a ticket for an annoying bug in a codebase they’re barely familiar with. They spend a few days figuring it out and come up with a hacky solution. One of the more senior “old hands” (if they’re lucky) glances over it in a spare half-hour, vetoes it, and suggests something slightly better that would at least work. The junior engineer implements that as best they can, tests that it works, it gets briefly reviewed and shipped, and everyone involved immediately moves on to higher-priority work. Five years later somebody notices this and thinks “wow, that’s hacky - how did such bad code get written at such a big software company?”
I have written a lot about the internal tech company dynamics that contribute to this. Most directly, in Seeing like a software company I argue that big tech companies consistently prioritize internal legibility - the ability to see at a glance who’s working on what and to change it at will - over productivity. Big companies know that treating engineers as fungible and moving them around destroys their ability to develop long-term expertise in a single codebase. That’s a deliberate tradeoff. They’re giving up some amount of expertise and software quality in order to gain the ability to rapidly deploy skilled engineers onto whatever the problem-of-the-month is.
I don’t know if this is a good idea or a bad idea. It certainly seems to be working for the big tech companies, particularly now that “how fast can you pivot to something AI-related” is so important. But if you’re doing this, then of course you’re going to produce some genuinely bad code. That’s what happens when you ask engineers to rush out work on systems they’re unfamiliar with.
Individual engineers are entirely powerless to alter this dynamic. This is particularly true in 2025, when the balance of power has tilted away from engineers and towards tech company leadership. The most you can do as an individual engineer is to try and become an “old hand”: to develop expertise in at least one area, and to use it to block the worst changes and steer people towards at least minimally-sensible technical decisions. But even that is often swimming against the current of the organization, and if inexpertly done can cause you to get PIP-ed or worse.
I think a lot of this comes down to the distinction between pure and impure software engineering. To pure engineers - engineers working on self-contained technical projects, like a programming language - the only explanation for bad code is incompetence. But impure engineers operate more like plumbers or electricians. They’re working to deadlines on projects that are relatively new to them, and even if their technical fundamentals are impeccable, there’s always something about the particular setup of this situation that’s awkward or surprising. To impure engineers, bad code is inevitable. As long as the overall system works well enough, the project is a success.
At big tech companies, engineers don’t get to decide if they’re working on pure or impure engineering work. It’s not their codebase! If the company wants to move you from working on database infrastructure to building the new payments system, they’re fully entitled to do that. The fact that you might make some mistakes in an unfamiliar system - or that your old colleagues on the database infra team might suffer without your expertise - is a deliberate tradeoff being made by the company, not the engineer.
It’s fine to point out examples of bad code at big companies. If nothing else, it can be an effective way to get those specific examples fixed, since execs usually jump at the chance to turn bad PR into good PR. But I think it’s a mistake to attribute primary responsibility to the engineers at those companies. If you could wave a magic wand and make every engineer twice as strong, you would still have bad code, because almost nobody can come into a brand new codebase and quickly make changes with zero mistakes. The root cause is that most big company engineers are forced to do most of their work in unfamiliar codebases.
edit: this post got lots of comments on both Hacker News and lobste.rs.
It was surprising to me that many commenters find this point of view unpleasantly nihilistic. I consider myself fairly optimistic about my work. In fact, I meant this post as a rousing defence of big tech software engineers from their critics! Still, I found this response blog post to be an excellent articulation of the “this is too cynical” position, and will likely write a followup post about it soon. If you can’t wait, I wrote a bit on this topic at the start of 2025 in Is it cynical to do what your manager wants?.
Some Hacker News commenters had alternate theories for why bad code happens: lack of motivation, deliberately demoralizing engineers so they won’t unionize, or just purely optimizing for speed. I don’t find these compelling, based on my own experience. Many of my colleagues are highly motivated, and I just don’t believe any tech company is deliberately trying to make its engineers demoralized and unhappy.
A few readers disagreed with me about RSUs providing an incentive to leave, because their companies give stock refreshers. I’m not so sure about this. I get refreshers too, but if they’re not in the contract, I don’t think it matters: the company can take away 50% of your comp at will simply by pausing the refreshers, which is itself an incentive to move to a new job where the grant is locked in for four more years.
...
Read the original on www.seangoedecke.com »
Above all, thank you to everyone who made this possible. I want to give special thanks to Rairii for engineering all these ROMs, Mac84 for archiving and sharing all the CHRP discs, ELN for engineering all the Mac mini G4 ROM compatibility scripts and creating all the ROM and other Mac OS tooling, and the Mac community at large that helped make this a reality. There are honestly many, many people we owe thanks to for this, in ways both big and small.
...
Read the original on macos9lives.com »
Run Windows applications (including Microsoft 365 and Adobe Creative Cloud) on GNU/Linux with KDE Plasma, GNOME or XFCE, integrated seamlessly as if they were native to the OS.
* Creating shortcuts to selected Windows applications on the host GNU/Linux OS.
* Using FreeRDP as a backend to seamlessly render Windows applications alongside GNU/Linux applications.
* The GNU/Linux /home directory is accessible within Windows via the \\tsclient\home mount.
* Integration with Nautilus, allowing you to right-click files to open them with specific Windows applications based on the file MIME type.
* The official taskbar widget enables seamless administration of the Windows subsystem and offers an easy way to launch Windows applications.
* Microsoft Office links (e.g. ms-word://) from the host system are automatically opened in the Windows subsystem. (Note: You may need to use a User Agent Switcher browser extension and set the User-Agent to Windows, as the Office webapps typically hide the “Open in Desktop App” option for Linux users.)
WinApps supports ALL Windows applications. Support does not, however, extend to kernel-level anti-cheat systems (e.g. Riot Vanguard).
* Scanning Windows for any community tested applications (list below).
* Scanning Windows for any other .exe files listed within the Windows Registry.
Community tested applications benefit from high-resolution icons and pre-populated MIME types. This enables file managers to determine which Windows applications should open files based on file extensions. Icons for other detected applications are pulled from .exe files.
Contributing to the list of supported applications is encouraged through submission of pull requests! Please help us grow the WinApps community.
Please note that the provided list of community tested applications is community-driven. As such, some applications may not be tested and verified by the WinApps team.
Both Docker and Podman are recommended backends for running the Windows virtual machine, as they facilitate an automated Windows installation process. WinApps is also compatible with libvirt. While this method requires considerably more manual configuration, it also provides greater virtual machine customisation options. All three methods leverage the KVM hypervisor, ensuring excellent virtual machine performance. Ultimately, the choice of backend depends on your specific use case.
The following guides are available:
If you already have a Windows VM or server you wish to use with WinApps, you will still have to follow the final steps described in the libvirt documentation.
WinApps requires FreeRDP version 3 or later. If not available for your distribution through your package manager, you can install the Flatpak:
flatpak install flathub com.freerdp.FreeRDP
sudo flatpak override --filesystem=home com.freerdp.FreeRDP # To use `+home-drive`
However, if you have weird issues like #233 when running Flatpak, please compile FreeRDP from source according to this guide.
Create a configuration file at ~/.config/winapps/winapps.conf containing the following:
# WINAPPS CONFIGURATION FILE #
# INSTRUCTIONS
# - Leading and trailing whitespace are ignored.
# - Empty lines are ignored.
# - Lines starting with '#' are ignored.
# - All characters following a '#' are ignored.
# [WINDOWS USERNAME]
RDP_USER="MyWindowsUser"
# [WINDOWS PASSWORD]
# NOTES:
# - If using FreeRDP v3.9.0 or greater, you *have* to set a password
RDP_PASS="MyWindowsPassword"
# [WINDOWS DOMAIN]
# DEFAULT VALUE: '' (BLANK)
RDP_DOMAIN=""
# [WINDOWS IPV4 ADDRESS]
# NOTES:
# - If using 'libvirt', 'RDP_IP' will be determined by WinApps at runtime if left unspecified.
# DEFAULT VALUE:
# - 'docker': '127.0.0.1'
# - 'podman': '127.0.0.1'
# - 'libvirt': '' (BLANK)
RDP_IP="127.0.0.1"
# [VM NAME]
# NOTES:
# - Only applicable when using 'libvirt'
# - The libvirt VM name must match so that WinApps can determine the VM IP, start the VM, etc.
# DEFAULT VALUE: 'RDPWindows'
VM_NAME="RDPWindows"
# [WINAPPS BACKEND]
# DEFAULT VALUE: 'docker'
# VALID VALUES:
# - 'docker'
# - 'podman'
# - 'libvirt'
# - 'manual'
WAFLAVOR="docker"
# [DISPLAY SCALING FACTOR]
# NOTES:
# - If an unsupported value is specified, a warning will be displayed.
# - If an unsupported value is specified, WinApps will use the closest supported value.
# DEFAULT VALUE: '100'
# VALID VALUES:
# - '100'
# - '140'
# - '180'
RDP_SCALE="100"
# [MOUNTING REMOVABLE PATHS FOR FILES]
# NOTES:
# - By default, `udisks` (which you most likely have installed) uses /run/media for mounting removable devices.
#   This improves compatibility with most desktop environments (DEs).
# ATTENTION: The Filesystem Hierarchy Standard (FHS) recommends /media instead. Verify your system's configuration.
# - To manually mount devices, you may optionally use /mnt.
# REFERENCE: https://wiki.archlinux.org/title/Udisks#Mount_to_/media
REMOVABLE_MEDIA="/run/media"
# [ADDITIONAL FREERDP FLAGS & ARGUMENTS]
# NOTES:
# - You can try adding /network:lan to these flags to increase performance; however, some users have faced issues with this.
#   If this does not work, or if it does not work without the flag, you can try adding /nsc and /gfx.
# DEFAULT VALUE: '/cert:tofu /sound /microphone +home-drive'
# VALID VALUES: See https://github.com/awakecoding/FreeRDP-Manuals/blob/master/User/FreeRDP-User-Manual.markdown
RDP_FLAGS="/cert:tofu /sound /microphone +home-drive"
# [DEBUG WINAPPS]
# NOTES:
# - Creates and appends to ~/.local/share/winapps/winapps.log when running WinApps.
# DEFAULT VALUE: 'true'
# VALID VALUES:
# - 'true'
# - 'false'
DEBUG="true"
# [AUTOMATICALLY PAUSE WINDOWS]
# NOTES:
# - This is currently INCOMPATIBLE with 'manual'.
# DEFAULT VALUE: 'off'
# VALID VALUES:
# - 'on'
# - 'off'
AUTOPAUSE="off"
# [AUTOMATICALLY PAUSE WINDOWS TIMEOUT]
...
Read the original on github.com »
Iceland has taken the rare step of treating a climate-linked ocean threat as a matter of national survival, launching a coordinated government response to one of the most feared potential tipping points in the climate system.
Officials say the shift reflects mounting evidence that a key Atlantic current system could be heading toward dangerous instability.
According to CNN, Iceland’s National Security Council formally labelled the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) a national security risk in September — the first time the country has applied such a designation to a climate impact.
The move followed a government briefing on new research that raised “grave concerns” about the system’s future stability.
Jóhann Páll Jóhannsson, Iceland’s minister for environment, energy and climate, said the risks extend far beyond weather.
“Our climate, economy and security are deeply tied to the stability of the ocean currents around us,” he told CNN.
He later described the threat as “an existential threat,” warning that a breakdown could disrupt transport, damage infrastructure and hit the country’s fishing industry.
The AMOC — often compared to a giant conveyor belt — carries warm water northward before it cools and sinks, helping regulate weather across the Atlantic basin.
CNN reported that scientists increasingly worry that warming temperatures and disrupted salinity levels are slowing the system.
Some studies suggest a tipping point could be reached this century, though the exact timeline remains uncertain.
Stefan Rahmstorf, an oceanographer at Potsdam University, told CNN that a collapse “cannot be considered a low likelihood risk anymore.”
The consequences, he said, would be dramatic: surging sea levels along US and European coasts, major monsoon disruptions across Africa and Asia, and a deep freeze across parts of Europe.
For Iceland, he said, the country “would be close to the center of a serious regional cooling,” with sea ice potentially surrounding the island.
The security designation means Iceland will now pursue a high-level, cross-government effort to analyse the threat and consider how to manage or reduce the consequences. Jóhannsson said the decision “reflects the seriousness of the issue and ensures that the matter gets the attention it deserves.”
Rahmstorf praised Iceland’s stance, telling CNN that other nations should treat the risk with similar urgency.
Jóhannsson said the country is confronting a stark possibility: “What we do know is that the current climate might change so drastically that it could become impossible for us to adapt… this is not just a scientific concern — it’s a matter of national survival and security.”
...
Read the original on www.dagens.com »
Let’s rip the Band-Aid off immediately: if your underlying business process is a mess, sprinkling “AI dust” on it won’t turn it into gold. It will just speed up the rate at which you generate garbage.

In the world of business IT, we get seduced by the shiny new toy. Right now, that toy is artificial intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning, and executives are frantically asking, “What is our AI strategy?”

Like every major technological shift before it, from the steam engine to the spreadsheet, AI does not inherently make an organization smarter. AI, like any other tool, only makes it faster. If you automate a stupid decision, you just make stupid decisions at light speed. If you apply an agentic AI workflow to a bureaucratic nightmare of an approval chain, you haven’t fixed the bureaucracy; you’ve just built a robot that hates its job as much as your employees do.

For decades, traditional software demanded structure: rows, columns, booleans, and fixed fields. If data didn’t fit the box, the computer couldn’t read it. Because computers couldn’t handle the mess, humans handled it. And humans don’t always follow a flow chart. These processes, like “handling a complex customer complaint” or “brainstorming a marketing campaign”, are often ad hoc, intuitive, and completely undocumented. They live in the heads of your senior staff, not in your SOPs.

If you want to use AI to process unstructured data, you must first bring structure to the workflow itself. You need to improve your process design to account for the ambiguity that AI handles. What is the transformation? What exactly is the human, or now the AI, supposed to extract or deduce from that mess?

The Old Way: an analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.

The AI Way: an AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.

Go back to the whiteboard. Map out your value chain, especially the messy, human-centric parts involving unstructured data that you previously ignored. Find the bottlenecks. Identify the waste.

Technology changes.
The rules of business efficiency do not.
It’s always the process, stupid!
And that’s where current AI tools miss the point: they weren’t built for that.
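The contract-review transformation above can be made concrete with a minimal, hypothetical sketch in Python. The risk categories and regex patterns below are invented for illustration, and the keyword matching stands in for what would in practice be an LLM call; the point is that the definition of “risk” has been made explicit and machine-checkable instead of living in an analyst’s gut.

```python
import re

# Hypothetical risk parameters: the explicit definition of "risk" that the
# old, unstructured workflow never wrote down. These terms are illustrative.
RISK_PATTERNS = {
    "liability": r"unlimited liability|indemnif\w+",
    "termination": r"terminat\w+ (?:for convenience|without cause)",
    "auto_renewal": r"automatic\w* renew\w*",
}

def extract_risk_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return the sentences of a contract that match each defined risk parameter."""
    sentences = re.split(r"(?<=[.!?])\s+", contract_text)
    findings: dict[str, list[str]] = {name: [] for name in RISK_PATTERNS}
    for sentence in sentences:
        for name, pattern in RISK_PATTERNS.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                findings[name].append(sentence.strip())
    return findings

contract = ("This agreement shall automatically renew each year. "
            "The supplier accepts unlimited liability for data loss.")
report = extract_risk_clauses(contract)
```

Once the transformation is pinned down like this, swapping the matching engine (a regex today, an LLM tomorrow) changes the velocity, not the process.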
From idea to app without writing a single line of code
From data graveyard to goldmine: using AI with fast ROI (Promptcast)
How to deploy AI profitably as quickly as possible (this time only as a Promptcast)
Live long and prosper 😉🖖
Silicon Valley’s AI moat has a leak: it’s called open source
The myth of the impregnable fortress
In Silicon Valley’s strategy suites, people love to tell the story of the impregnable moats. The AI race, so the legend goes, is a game for giants with budgets as large as small nations. Only a handful of US tech corporations can play along, while the rest of the world merely watches
What if the loudest participant in the room is not necessarily the leading one?
...
Read the original on its.promp.td »
From a physicist’s point of view, I want to mention this trick and its generalization to operators:
“Two commuting matrices are simultaneously diagonalizable”
(for physicists, all matrices are diagonalizable). Of course, the idea is that if you know the eigenvectors of one matrix/operator, then diagonalizing the other one is much easier. Here are some applications.
1) The system is translation invariant: because the eigenvectors of the translation operator are $e^{ik\cdot x}$, one should use the Fourier transform. It solves the wave equations for light, acoustics and free quantum electrons, as well as the heat equation, in homogeneous media.
2) The system has a discrete translation symmetry: the typical system is the atoms of a solid that form a crystal. We have a discrete translation operator $T_a\phi(x)=\phi(x+a)$, with $a$ the lattice spacing, and we should try $\phi_k(x+a)=e^{ik\cdot a}\phi_k(x)$, as it is an eigenvector of $T_a$. This gives Bloch-Floquet theory, where the spectrum splits into bands (the band structure). It is one of the most famous models of condensed matter, as it explains the difference between conductors and insulators.
3) The system is rotationally invariant: one should then diagonalize the rotation operators first. This allows us to find the eigenvalues/eigenvectors of the hydrogen atom. By the way, we notice that the eigenspaces of the hydrogen atom are stable under rotation and are therefore finite-dimensional representations of $SO(3)$. The irreducible representations of $SO(3)$ have dimensions 1, 3, 5, …, and they appear, also taking the spin of the electron into account, as the columns of the periodic table of the elements (2, 6, 10, 14, …).
4) $SU(3)$ symmetry: particle physics is extremely complicated. However, physicists have discovered that there is an underlying $SU(3)$ symmetry. Then, considering the representations of $SU(3)$, the zoology of particles looks much more organized (A, B).
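The underlying linear-algebra fact is easy to check numerically. Here is a small sketch (NumPy; the matrices are invented for illustration): a symmetric matrix $A$ with generically distinct eigenvalues commutes with $B = A^2 + 3A$, and the eigenvector basis of $A$ then diagonalizes $B$ as well.

```python
import numpy as np

# Two commuting symmetric matrices: a random symmetric A and a polynomial in A.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                      # symmetric, generically distinct eigenvalues
B = A @ A + 3 * A                # commutes with A by construction

assert np.allclose(A @ B, B @ A)

# Diagonalize A; because its eigenvalues are distinct, its eigenvectors
# are forced to be eigenvectors of B as well.
eigvals, V = np.linalg.eigh(A)
B_diag = V.T @ B @ V             # numerically diagonal

assert np.allclose(B_diag, np.diag(np.diag(B_diag)))
```

With degenerate eigenvalues the statement still holds, but one then has to diagonalize $B$ restricted to each eigenspace of $A$, which is exactly what physicists do when a symmetry leaves degenerate multiplets.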
...
Read the original on mathoverflow.net »
Former Qantas captain Dr Ian Getley, who holds a PhD in cosmic and solar radiation in aviation, says flights can be affected by coronal mass ejections (CMEs), which occur when plasma is ejected from the sun into space.
The higher the severity of the CME, the more likely it is that issues could arise with satellites and aircraft electronics above 28,000 ft (8.5 km), he tells us.
A CME releases heavily charged particles that shoot into the Earth’s atmosphere.
These create more charged particles in the upper atmosphere, which in turn can interfere with aircraft electronics.
The former pilot says his research began after a 2003 flight between LA and New York, where he experienced this phenomenon first hand.
...
Read the original on www.bbc.com »
...
Read the original on neberej.github.io »