10 interesting stories served every morning and every evening.
Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.
The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.
Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.
Fairlinked e. V. is an association of commercial LinkedIn users. We represent the professionals who use LinkedIn, the businesses that invest in and depend on the platform, and the toolmakers who build products for it.
BrowserGate is our investigation and campaign to document one of the largest corporate espionage and data breach scandals in digital history, to inform the public and regulators, to collect evidence, and to raise funds for the legal proceedings required to stop it.
LinkedIn’s scan reveals the religious beliefs, political opinions, disabilities, and job search activity of identified individuals. LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile.
Under EU law, this category of data is not regulated. It is prohibited. LinkedIn has no consent, no disclosure, and no legal basis. Its privacy policy does not mention any of this.
LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge.
Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.
In 2023, the EU designated LinkedIn as a regulated gatekeeper under the Digital Markets Act and ordered it to open its platform to third-party tools. LinkedIn’s response:
It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second. In Microsoft’s 249-page compliance report to the EU, the word “API” appears 533 times. “Voyager” appears zero times.
At the same time, LinkedIn expanded its surveillance of the exact tools the regulation was designed to protect. The scan list grew from roughly 461 products in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a surveillance system to find and punish every user of those tools.
LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.
Microsoft has 33,000 employees and a $15 billion legal budget. We have the evidence. What we need is people and funding to hold them accountable.
...
Read the original on browsergate.eu »
Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter
* Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling.
* Develop applications with strong audio and visual understanding, for rich multimodal support.
* Create multilingual experiences that go beyond translation and understand cultural context.
* Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques.
* Run models on your own hardware for efficient development and deployment.
A new level of intelligence for mobile and IoT devices: audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.
Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.
Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.
...
Read the original on deepmind.google »
In 2023, the Swedish government announced that the country’s schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.
Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers’ guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.
These moves represent a dramatic pivot from previous decades, during which Sweden — and many other nations — moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country’s efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country’s borders. U. S. parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.
So why did Sweden pivot? In an email to Undark, Linda Fälth, a researcher in teacher education at Linnaeus University, wrote that the “decision to reinvest in physical textbooks and reduce the emphasis on digital devices” was prompted by several factors, including questions around whether the digitalization of classrooms had been evidence-based. “There was also a broader cultural reassessment,” Fälth wrote. “Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting.”
Fälth noted that proponents of reform believe that “basic skills — especially reading, writing, and numeracy — must be firmly established first, and that physical textbooks are often better suited for that purpose.”
Between 2000 and 2012, Swedish students’ scores on standardized tests steadily declined in reading, math, and science. Though they recovered ground between 2012 and 2018, those scores had dropped again by 2022.
Though it’s unclear precisely how much of the decline is due to digitization, there is some evidence that analog teaching materials for reading may be superior to screen learning. However, this applies to expository as opposed to narrative texts. Narrative texts tell a story, whether fiction or non-fiction, while expository texts are designed to inform, describe, or explain a topic in a logical, factual manner.
Swedish officials emphasize that digital technology isn’t being removed from schools altogether. Rather, digital aids “should only be introduced in teaching at an age when they encourage, rather than hinder, pupils’ learning.” Achieving digital competence remains an important objective, particularly in higher grades.
Historically, the technology industry has pushed for more use of digital learning, seeing itself as a transformer of education. In the 1980s, Apple helped bring about the use of computers in schools. Then, starting with the use of the internet, and later integrating mobile devices, technology reshaped the educational landscape. Education experts suggest it can foster a learning experience that is more interactive, accessible, and tailored to the needs of individual students.
In the U. S., the trend nationally in recent years has been toward the use of increasingly sophisticated methods of digital learning, such as providing children with laptops or devices like the iPad. According to a survey conducted by the EdWeek Research Center, part of the trade publication Education Week, 90 percent of school district leaders were providing devices for every middle and high school student as of March 2021. More than 80 percent of school district leaders said the same was true for elementary school students.
And now, technology giants such as Google, Microsoft, and OpenAI are urging schools to teach literacy in artificial intelligence. Some working in education believe that schools ought to prepare pupils for employers who expect digital fluency. This may indeed be pertinent in the age of AI. More than 50 percent of teens in America have used AI chatbots for schoolwork, according to a survey conducted by the Pew Research Center.
According to a 2023 survey, 30 percent of educators said their students spend at least half of their classroom reading time doing so digitally. But this may have drawbacks. Researchers suggest that reading on digital displays instead of paper may be more demanding mentally, especially for younger students. Studies have linked heavy digital use to reduced comprehension and memory retention as well as eye strain.
The limitations of educational technology became apparent during the Covid-19 pandemic. When online learning became the norm, experts began questioning whether technology’s promises had materialized. In a post on LinkedIn, Pam Kastner, a literacy consultant and adjunct professor at Mount Saint Joseph University, suggests: “Technology is a tool, not a teacher.” She views the cognitive architecture for reading as being built for print.
A well-known critic of the use of smartphones and social media by children, Jonathan Haidt, posted in February: “Putting computers and tablets on students desks in K-12 may turn out to be among the costliest mistakes in the history of education”.
The U. S. spent $30 billion in 2024 on laptops, tablets, and other educational technology, 10 times more than on textbooks. Neuroscientist and educator Jared Cooney Horvath has lamented the heavy use of digital devices in education. He has said that Gen Z, persons born roughly between 1997 and 2012 and known for growing up with digital technology as an integral part of their lives, is the first generation in modern history to score lower on cognitive measures than the previous one. In January of this year, he told a Senate committee that this has resulted in a generation of children who are less cognitively capable than their parents.
Whether the U. S. will follow Sweden’s path remains to be seen. Naomi Baron, a professor emerita of linguistics at American University, told Undark she doesn’t see the U.S. turning to Sweden for advice. This is in part because of financial incentives: “First, commercial textbook publishers have been pushing digital materials — heavily for financial reasons generally ignoring the research comparing comprehension, etc. with print vs. digital reading.” Baron also wrote that “American educators themselves are generally unaware of the now substantial research literature here, and instead focus on saving their students (or school districts) money.” Still, some American educators appear to be aware that digital technology might be making education worse. Teachers seem especially concerned about the possible detrimental effects of overuse of AI.
At the same time, some American parents have recently started forming networks, teaching one another how to opt out of school-issued laptops and devices and back into physical textbooks, along with a return to pen or pencil and paper. Parents point to evidence showing better information retention when pupils read on paper. This reaction may reflect a growing backlash against digital technology in education, driven by concerns about excessive screen time and potential harms to youth, including possibly addictive distractions.
If U. S. educational leaders were to consult their Swedish colleagues, the advice they’d likely get is not to remove digital technology wholesale. “The goal is recalibration rather than reversal,” wrote Fälth. This was echoed in a statement sent to Undark by the Swedish Ministry of Education and Research: The “Swedish government believes that digitalization is fundamentally important and beneficial, but the use of digital tools in schools must be carried out carefully and thoughtfully.”
In other words, the objective is not to reject digitalization; it’s more nuanced than that. The goal is to set boundaries judiciously so that technology is used selectively and sequentially across the stages of a pupil’s education, introducing digital tools at later ages, once basic reading and other skills have been established.
...
Read the original on undark.org »
This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government.
I joined Azure Core on the dull Monday morning of May 1st, 2023, as a senior member of the Overlake R&D team, the folks behind the Azure Boost offload card and network accelerator.
I wasn’t new to Azure, having run what is likely the longest-running production subscription of this cloud service, which launched in February 2010 as Windows Azure.
I wasn’t new to Microsoft either, having been part of the Windows team since 1/1/2013 and having later helped migrate SharePoint Online to Azure, before joining the Core OS team as a kernel engineer, where I notably helped improve the kernel and helped invent and deliver the Container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.
Furthermore, I contributed to brainstorming the early Overlake cards in 2020-2021, drafting a proposal for a Host OS Accelerator Card communication protocol and network stack, when all we had was a debugger’s serial connection. I also served as a Core OS specialist, helping Azure Core engineers diagnose deep OS issues.
I rejoined in 2023 as an Azure expert on day one, having contributed to the development of some of the technologies on which Azure relies and having used the platform for more than a decade, both outside and inside Microsoft at a global scale.
As a returning employee, I skipped the New Employee Orientation and had my Global Security invite for 12 noon to pick up my badge, but my future manager asked if I could come in earlier, as the team had their monthly planning meeting that morning.
I, of course, agreed and arrived a few minutes before 10 am at the entrance of the Studio X building, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I followed him to a meeting room through a labyrinth of corridors.
The room was chock-full, with more people on a live conference call. The dev manager, the leads, the architects, the principal and senior engineers shared the space with what appeared to be new hires and junior personnel.
The screen projected a slide where I recognized a number of familiar acronyms, like COM, WMI, perf counters, VHDX, NTFS, ETW, and a dozen others, mixed with new Azure-related ones, in an imbroglio of boxes linked by arrows.
I sat quietly at the back while a man was walking the room through a big porting plan of their current stack to the Overlake accelerator. As I listened, it was not immediately clear what that series of boxes with Windows user-mode and kernel components had to do with that plan.
After a few minutes, I risked a question: Are you planning to port those Windows features to Overlake? The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least “ask a couple of junior devs to look into it.”
The room remained silent for an instant. I had seen the hardware specs for the SoC on the Overlake card in my previous tenure: the RAM capacity and the power budget, which was just a tiny fraction of the TDP you can expect from a regular server CPU.
The hardware folks I had spoken with told me they could only spare 4KB of dual-ported memory on the FPGA for my doorbell shared-memory communication protocol.
Everything was nimble, efficient, and power-savvy, and the team I had joined 10 minutes earlier was seriously considering porting half of Windows to that tiny, fanless, Linux-running chip the size of a fingernail.
That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?
That entire 122-strong org was knee-deep in impossible ruminations involving porting Windows to Linux to support their existing VM management agents.
The man was a Principal Group Engineering Manager overseeing a chunk of the software running on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they really contemplated porting Windows to Linux to support their current software.
At first, I questioned my understanding. Was that serious? The rest of the talk left no doubt: the plan was outlined, and the dev leads were tasked with contributing people to the effort. It was immediately clear to me that this plan would never succeed and that the org needed a lot of help.
That first hour in the new role left me with a mix of strange feelings, stupefaction, and incredulity.
The stack was hitting its scaling limits on a 400 Watt Xeon at just a few dozen VMs per node, I later learned, a far cry from the 1,024 VMs limit I knew the hypervisor was capable of, and was a noisy neighbor consuming so many resources that it was causing jitter observable from the customer VMs.
There is no dimension in the universe where this stack would fit on a tiny ARM SoC and scale up by many factors. It was not going to happen.
I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality. My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.
Deep down, I knew it was going to be a fierce uphill battle. As you can imagine, it didn’t go well, as you will see.
I spent the next few days reading more about the plans, studying the current systems, and visiting old friends in Core OS, my alma mater. I was lost away from home in a bizarre territory where people made plans that didn’t make sense with the aplomb of a drunk LLM.
I notably spent more than 90 minutes chatting in person with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the kernel team years earlier.
His org is responsible for delivering Mariner Linux (now Azure Linux) and the trimmed-down distro running on the Overlake / Azure Boost card. He kindly answered all my questions, and I learned that they had identified 173 agents (one hundred seventy-three) as candidates for porting to Overlake.
I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.
Azure sells VMs, networking, and storage at the core. Add observability and servicing, and you should be good. Everything else, SQL, K8s, AI workloads, and whatnot all build on VMs with xPU, networking, and storage, and the heavy lifting to make the magic happen is done by the good Core OS folks and the hypervisor.
How the Azure folks came up with 173 agents will probably remain a mystery, but it takes a serious amount of misunderstanding to get there, and this is also how disasters are built.
Now, fathom for a second that this pile of uncontrolled “stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.
We are still far from the vaporized trillion in market cap, my letters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their total silence, the quasi-loss of OpenAI, the breach of trust with the US government as publicly stated by the Secretary of Defense, the wasted engineering efforts, the Rust mandate, my stint on the OpenAI bare-metal team in Azure Core, the escort sessions from China and elsewhere, and the delayed features publicly implied as shipping since 2023, before the work even began.
If you’re running production workloads on Azure or relying on it for mission-critical systems, this story matters more than you think.
...
Read the original on isolveproblems.substack.com »
Chat
What can I do with 128 GB of unified RAM?
Load up models like gpt-oss-120b or Qwen-Coder-Next for advanced tool use.
What should I tune first?
You can use --no-mmap to speed up load times and increase context size to 64 or more.
Image Generation
A pitcher of lemonade in the style of a renaissance painting
Speech
Hello, I am your AI assistant. What can I do for you today?
Open source. Private. Ready in minutes on any PC.
Lemonade exists because local AI should be free, open, fast, and private.
Lemonade is integrated in many apps and works out-of-box with hundreds more thanks to the OpenAI API standard.
Everything from install to runtime is optimized for fast setup, broad compatibility, and local-first execution.
Lightweight service that is only 2MB.
Simple installer that sets up the stack automatically.
Works with hundreds of apps out-of-box and integrates in minutes.
Configures dependencies for your GPU and NPU.
Works with llama.cpp, Ryzen AI SW, FastFlowLM, and more.
Run more than one model at the same time.
A GUI that lets you download, try, and switch models quickly.
...
Read the original on lemonade-server.ai »
In memory of the 72,000+ Palestinians killed in the Israeli genocide in Gaza.
...
Read the original on bkhmsi.github.io »
Tailscale should feel nearly invisible when it’s connecting you and all your devices together. But on some MacBooks, for a time, it could be a little too invisible. We have two fixes for it: one small and slightly quirky, and another really useful one, available now on macOS.
The small, quirky fix might soon become a thing of the past for the vast majority of Tailscale users on Macs. I wanted to document it here: to help other developers, to mark this moment in time, and to quietly crow about our windowed macOS interface now being generally available.
So here’s the issue we had with Tailscale’s icon slipping into darkness, its little work-around, and then our greater solution.
At its debut on macOS, Tailscale was a command-line tool and a menu bar utility. Some MacBooks, starting with 2021 MacBook Pro models, have a notch in the top-middle of their display. And depending on how many other apps with menu bar icons are running, the Tailscale app’s icon can be hidden inside that notch.
Apple, a company that traditionally favors simple functionality over dense settings, does not offer users, or developers, a path out of the darkness. If there are more menu bar icons than there is space to the right of the notch, the menu bar items simply disappear into the notch-y ether. If you don’t see it, you can’t click it. There is no notification to the user, no overflow section, no options to rearrange the menu bar items.
As of this writing, Apple has some indirect work-arounds, like pushing more of its own system icons into a revamped Control Center, and offering a somewhat inelegant “Scale to fit below camera” option. Third-party menu-bar-managing apps like ICE and Bartender can help, but they add complications and overhead.
“We don’t have any control over where things get rendered in the menu bar,” said one Tailscale engineer, who asked to go nameless so as to share their honest opinion. “You just say, ‘I want to be a menu bar app.’ They shove it up there, and that’s it, you end up where you end up.”
Given this there-or-not-there behavior, Tailscale developers received a number of bug reports from users when, after the notched MacBooks’ debut, their Tailscale icons fell into the middle-screen distance. “They were like, ‘Actually, I can’t find my Tailscale. It’s gone. It didn’t start,’” the engineer said. “We’re like, ‘No, it’s there, it’s just hiding behind the notch.’ But we kind of got sick of that.”
Mac menu bar icons may not know they are trapped inside the no-pixel phantom zone, but they can report that something is blocking them. Using data from occlusionState, the Tailscale app can see that its icon is in mid-bar limbo.
And while it cannot move, it can speak. Specifically, a pop-up message can say:
This affable warning is not perfect, by any means. The notch warning can be inadvertently triggered by other display quirks, like opening and closing the MacBook lid, moving between monitors, or some combination of the two. But it helped triage the “Where are my Tailscale settings?” issue for a while.
Apple could certainly make some changes to prevent this being an issue at all. The system could prevent menu bar icons from rendering in the notch area at all. An overflow mechanism could stack the icons that would otherwise drop into a negative notch zone. Or developers could be given more information and tools about icons’ notch-itive states.
In the meantime, here’s a look at the Swift code that let our app know it should chirp a bit when hidden. It should be unnecessary with the new windowed app—unless you enable the “Hide Dock icon” option in the windowed client options, in which case it might still call out its hidden nature.
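The post’s actual snippet isn’t reproduced in this excerpt, so here is a minimal illustrative sketch of the technique it describes, not Tailscale’s shipped code. It assumes a menu bar app holding an `NSStatusItem`, and listens for `NSWindow.didChangeOcclusionStateNotification` on the window that hosts the status item’s button; when the window’s `occlusionState` no longer contains `.visible`, the icon is drawn somewhere the user cannot see it, which on notched MacBooks usually means under the notch.

```swift
import AppKit

// Illustrative sketch only: watches the status item's host window and
// reacts when macOS reports it as occluded (e.g. hidden behind the notch).
final class NotchOcclusionWatcher {
    private var observer: NSObjectProtocol?

    func watch(_ statusItem: NSStatusItem) {
        // Every status item button lives in a small system-owned window.
        guard let window = statusItem.button?.window else { return }
        observer = NotificationCenter.default.addObserver(
            forName: NSWindow.didChangeOcclusionStateNotification,
            object: window,
            queue: .main
        ) { notification in
            guard let window = notification.object as? NSWindow else { return }
            if !window.occlusionState.contains(.visible) {
                // The icon exists but isn't visible on screen. A real app
                // would debounce this (lid open/close and monitor changes
                // also fire it) and then show its pop-up warning.
                NSLog("Status item occluded; it may be hidden behind the notch.")
            }
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }
}
```

As the article notes, this signal is noisy: the same notification fires when the lid closes or displays change, which is why the real warning had to tolerate false positives.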
As we noted at its September beta release, a windowed version of Tailscale’s macOS app doesn’t replace the menu bar app, but runs alongside it. It can be pulled up from the Dock or a Spotlight search, and makes a lot of Tailscale data and features more accessible.
The windowed interface, enabled by default starting with version 1.96.2 of our macOS client, offers:
* A searchable list of tailnet devices and their connection status
* Quick actions to ping devices, copy IP addresses, and send files through Taildrop
* Easy access to exit nodes, searchable and with one recommended based on latency, performance, and location
* A red dot on the Dock icon to note critical errors
* A “mini player” that shrinks Tailscale down to the bare minimum
* A product tour of all these things upon installing/updating
Let us know what you think of the new interface so we can make it better. We’re working on a comparable UI for Windows devices. And we’re always looking for ways to bring a little bit of functional whimsy to our software.
...
Read the original on tailscale.com »
The race is on to test new vehicles in the underground Large Hadron Collider tunnel, ahead of major works starting this summer
The race is on to test new vehicles in the underground Large Hadron Collider tunnel, ahead of major works starting this summer
Update: did you enjoy our April Fool’s day story? While we won’t be racing karts through the tunnel, we are gearing up for major works to prepare for HiLumi LHC and its new technologies. The image is based on a real 1991 CERN image of the monorail used to transport people and equipment in the tunnel during the lifetime of the Large Electron-Positron Collider (LEP), which preceded the LHC.
Following on from the robotic mice, CERN engineers have now developed a super-charged kart to enable workers to race through the Large Hadron Collider (LHC) underground tunnel during the upcoming major works, starting this summer.
The karts promise a power boost to activities during this period, known as Long Shutdown 3 (LS3), which will see the LHC transformed into the High-Luminosity LHC. These vehicles will replace the bicycles that were used until now to travel through the 27-km underground tunnel, enabling engineers and technicians to speed to areas where improvements to the accelerator are required.
“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they’re super!”
Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”
These karts, although developed to support CERN’s fundamental research programme, show clear applications for society. CERN’s Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.
Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children — one example of CERN’s commitment to inspiring future generations. “We’re thrilled that the children’s kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.
“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director, Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we’ve taken to calling them Luma!”
Find out more about the High-Luminosity LHC project.
...
Read the original on home.web.cern.ch »
right now the astronauts are calling houston because the computer on the spaceship is running two instances of microsoft outlook and they can’t figure out why. nasa is about to remote into the computer
...
Read the original on bsky.app »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.