10 interesting stories served every morning and every evening.
New EU rules to stop the destruction of unsold clothes and shoes
The Delegated and Implementing Acts will support businesses in complying with new requirements.
The European Commission today (Feb 9) adopted new measures under the Ecodesign for Sustainable Products Regulation (ESPR) to prevent the destruction of unsold apparel, clothing accessories and footwear. The rules will help cut waste, reduce environmental damage and create a level playing field for companies embracing sustainable business models, allowing them to reap the benefits of a more circular economy.

Every year in Europe, an estimated 4-9% of unsold textiles are destroyed before ever being worn. This waste generates around 5.6 million tons of CO2 emissions, almost equal to Sweden's total net emissions in 2021.

To help reduce this wasteful practice, the ESPR requires companies to disclose information on the unsold consumer products they discard as waste. It also introduces a ban on the destruction of unsold apparel, clothing accessories and footwear.

The Delegated and Implementing Acts adopted today will support businesses in complying with these requirements by:

* Clarifying derogations: The Delegated Act outlines specific and justified circumstances under which destruction will be permitted, for instance due to safety reasons or product damage. National authorities will oversee compliance.
* Facilitating disclosure: The Implementing Act introduces a standardised format for businesses to disclose the volumes of unsold consumer goods they discard. This applies from February 2027, giving businesses sufficient time to adapt.

Instead of discarding stock, companies are encouraged to manage their stock more effectively, handle returns, and explore alternatives such as resale, remanufacturing, donations, or reuse.

The ban on the destruction of unsold apparel, clothing accessories and footwear and the derogations will apply to large companies from 19 July 2026. Medium-sized companies are expected to follow in 2030. The rules on disclosure under the ESPR already apply to large companies and will also apply to medium-sized companies in 2030.

“The textile sector is leading the way in the transition to sustainability, but there are still challenges. The numbers on waste show the need to act. With these new measures, the textile sector will be empowered to move towards sustainable and circular practices, and we can boost our competitiveness and reduce our dependencies.”

The destruction of unsold goods is a wasteful practice. In France alone, around €630 million worth of unsold products are destroyed each year. Online shopping also fuels the issue: in Germany, nearly 20 million returned items are discarded annually. Textiles are a major part of the problem, and a key focus for action. To cut waste and reduce the sector’s environmental footprint, the European Commission is promoting more sustainable production while helping European companies stay competitive. The ESPR is central to this effort. It will make products on the EU market more durable, reusable and recyclable, while boosting efficiency and circularity.

Delegated Regulation setting out derogations from the prohibition of destruction of unsold consumer products | European Commission
Implementing Regulation on the details and format for the disclosure of information on discarded unsold consumer products | European Commission
The destruction of returned and unsold textiles in Europe’s circular economy | European Environment Agency (EEA)
...
Read the original on environment.ec.europa.eu »
tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
The last month was a whirlwind; never would I have expected that my playground project would create such waves. The internet got weird again, and it’s been incredibly fun to see how my work inspired so many people around the world.
There’s an endless array of possibilities that opened up for me, countless people trying to push me into various directions, giving me advice, asking how they can invest or what I will do. Saying it’s overwhelming is an understatement.
When I started exploring AI, my goal was to have fun and inspire people. And here we are, the lobster is taking over the world. My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.
Yes, I could totally see how OpenClaw could become a huge company. And no, it’s not really exciting for me. I’m a builder at heart. I did the whole creating-a-company game already, poured 13 years of my life into it and learned a lot. What I want is to change the world, not build a large company; teaming up with OpenAI is the fastest way to bring this to everyone.
I spent last week in San Francisco talking with the major labs, getting access to people and unreleased research, and it’s been inspiring on all fronts. I want to thank all the folks I talked to this week and am thankful for the opportunities.
It’s always been important to me that OpenClaw stays open source and given the freedom to flourish. Ultimately, I felt OpenAI was the best place to continue pushing on my vision and expand its reach. The more I talked with the people there, the clearer it became that we both share the same vision.
The community around OpenClaw is something magical and OpenAI has made strong commitments to enable me to dedicate my time to it and already sponsors the project. To get this into a proper structure I’m working on making it a foundation. It will stay a place for thinkers, hackers and people that want a way to own their data, with the goal of supporting even more models and companies.
Personally I’m super excited to join OpenAI, be part of the frontier of AI research and development, and continue building with all of you.
The claw is the law.
...
Read the original on steipete.me »
That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.
The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.
The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.
But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between public understanding of Ring and what Amazon was now boasting it could do.
Many people were not just surprised but quite shocked and alarmed to learn that what they thought was merely their own personal security system now has the ability to link with countless other Ring cameras to form a neighborhood-wide (or city-wide, or state-wide) surveillance dragnet. That Amazon emphasized that this feature is available (for now) only to those who “opt-in” did not assuage concerns.
Numerous media outlets sounded the alarm. The online privacy group Electronic Frontier Foundation (EFF) condemned Ring’s program as previewing “a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise.”
Many private citizens who previously used Ring also reacted negatively. “Viral videos online show people removing or destroying their cameras over privacy concerns,” reported USA Today. The backlash became so severe that, just days later, Amazon — seeking to assuage public anger — announced the termination of a partnership between Ring and Flock Safety, a police surveillance tech company (while Flock is unrelated to Search Party, public backlash made it impossible, at least for now, for Amazon to send Ring’s user data to a police surveillance firm).
The Amazon ad seems to have triggered a long-overdue spotlight on how the combination of ubiquitous cameras, AI, and rapidly advancing facial recognition software will render the term “privacy” little more than a quaint concept from the past. As EFF put it, Ring’s program “could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone.”
Those concerns escalated just a few days later in the context of the Tucson disappearance of Nancy Guthrie, mother of long-time TODAY Show host Savannah Guthrie. At the home where she lives, Nancy Guthrie used Google’s Nest camera for security, a product similar to Amazon’s Ring.
Guthrie, however, did not pay Google for a subscription for those cameras, instead solely using the cameras for real-time monitoring. As CBS News explained, “with a free Google Nest plan, the video should have been deleted within 3 to 6 hours — long after Guthrie was reported missing.” Even professional privacy advocates have understood that customers who use Nest without a subscription will not have their cameras connected to Google’s data servers, meaning that no recordings will be stored or available for any period beyond a few hours.
For that reason, Pima County Sheriff Chris Nanos announced early on “that there was no video available in part because Guthrie didn’t have an active subscription to the company.” Many people, for obvious reasons, prefer to avoid permanently storing comprehensive daily video reports with Google of when they leave and return to their own home, or who visits them at their home, when, and for how long.
Despite all this, FBI investigators on the case were somehow magically able to “recover” this video from Guthrie’s camera many days later. FBI Director Kash Patel was essentially forced to admit this when he released still images of what appears to be the masked perpetrator who broke into Guthrie’s home. (The Google user agreement, which few users read, does protect the company by stating that images may be stored even in the absence of a subscription.)
While the “discovery” of footage from this home camera by Google engineers is obviously of great value to the Guthrie family and law enforcement agents searching for Guthrie, it raises obvious yet serious questions about why Google, contrary to common understanding, was storing the video footage of unsubscribed users. A former NSA data researcher and CEO of a cybersecurity firm, Patrick Johnson, told CBS: “There’s kind of this old saying that data is never deleted, it’s just renamed.”
It is rather remarkable that Americans are being led, more or less willingly, into a state-corporate, Panopticon-like domestic surveillance state with relatively little resistance, though the widespread reaction to Amazon’s Ring ad is encouraging. Much of that muted resistance may be due to a lack of realization about the severity of the evolving privacy threat. Beyond that, privacy and other core rights can seem abstract and less of a priority than more material concerns, at least until they are gone.
It is always the case that there are benefits available from relinquishing core civil liberties: allowing infringements on free speech may reduce false claims and hateful ideas; allowing searches and seizures without warrants will likely help the police catch more criminals, and do so more quickly; giving up privacy may, in fact, enhance security.
But the core premise of the West generally, and the U.S. in particular, is that those trade-offs are never worthwhile. Americans are still taught to admire the iconic (if not apocryphal) 1775 words of Patrick Henry, which came to define the core ethos of the Revolutionary War and American Founding: “Give me liberty or give me death.” It is hard to express more definitively which side of that liberty-versus-security trade-off the U.S. was intended to fall on.
These recent events emerge in a broader context of this new Silicon Valley-driven destruction of individual privacy. Palantir’s federal contracts for domestic surveillance and domestic data management continue to expand rapidly, with more and more intrusive data about Americans consolidated under the control of this one sinister corporation.
Facial recognition technology — now fully in use for an array of purposes from Customs and Border Protection at airports to ICE’s patrolling of American streets — means that fully tracking one’s movements in public spaces is easier than ever, and is becoming easier by the day. It was only three years ago that we interviewed New York Times reporter Kashmir Hill about her new book, “Your Face Belongs to Us.” The warnings she issued about the dangers of this proliferating technology have not only come true with startling speed but also appear already beyond what even she envisioned.
On top of all this are advances in AI. Its effects on privacy cannot yet be quantified, but they will not be good. I have tried most AI programs simply to remain abreast of how they function.
After just a few weeks, I had to stop my use of Google’s Gemini because it was compiling not just segregated data about me, but also a wide array of information to form what could reasonably be described as a dossier on my life, including information I had not wittingly provided it. It would answer questions I asked it with creepy, unrelated references to the far-too-complete picture it had managed to create of many aspects of my life (at one point, it commented, somewhat judgmentally or out of feigned “concern,” about the late hours I was keeping while working, a topic I never raised).
Many of these unnerving developments have happened without much public notice because we are often distracted by what appear to be more immediate and proximate events in the news cycle. The lack of sufficient attention to these privacy dangers over the last couple of years, including at times from me, should not obscure how consequential they are.
All of this is particularly remarkable, and particularly disconcerting, since we are barely more than a decade removed from the disclosures about mass domestic surveillance enabled by the courageous whistleblower Edward Snowden. Although most of our reporting focused on state surveillance, one of the first stories featured the joint state-corporate spying framework built by the U.S. security state in conjunction with Silicon Valley giants.
The Snowden stories sparked years of anger, attempts at reform, changes in diplomatic relations, and even genuine (albeit forced) improvements in Big Tech’s user privacy. But the calculation of the U.S. security state and Big Tech was that at some point, attention to privacy concerns would disperse and then virtually evaporate, enabling the state-corporate surveillance state to march on without much notice or resistance. At least as of now, the calculation seems to have been vindicated.
...
Read the original on greenwald.substack.com »
Imagine you’re maintaining a native project. You use Visual Studio for building on Windows, so you do the responsible thing and list it as a dependency.
If you’re lucky enough not to know this yet, I envy you. Unfortunately, at this point even Boromir knows…
What you may not realize is, you’ve actually signed up to be unpaid tech support for Microsoft’s “Visual Studio Installer”. You might notice GitHub Issues becoming less about your code and more about broken builds, specifically on Windows. You find yourself explaining to a contributor that they needed not just the “Desktop development with C++” workload, but specifically the v143 build tools and the 10.0.22621.0 SDK. No, not that one, the other one. You spend less time on your project because you’re too busy being a human-powered dependency resolver for a 50GB IDE.
Saying “Install Visual Studio” is like handing contributors a choose-your-own-adventure book riddled with bad endings, some of which don’t let you go back. I’ve had to re-image my entire OS more than once over the years.
Why is this tragedy unique to Windows?
On Linux, the toolchain is usually just a package manager command away. On the other hand, “Visual Studio” is thousands of components. It’s so vast that Microsoft distributes it with a sophisticated GUI installer where you navigate a maze of checkboxes, hunting for which “Workloads” or “Individual Components” contain the actual compiler. Select the wrong one and you might lose hours installing something you don’t need. Miss one, like “Windows 10 SDK (10.0.17763.0)” or “Spectre-mitigated libs,” and your build fails three hours later with a cryptic error like MSB8101. And heaven help you if you need to downgrade to an older version of the build tools for a legacy project.
The Visual Studio ecosystem is built on a legacy of ‘all-in-one’ monoliths. It conflates the editor, the compiler, and the SDK into a single, tangled web. When we list ‘Visual Studio’ as a dependency, we’re failing to distinguish between the tool we use to write code and the environment required to compile it.
* Hours-long waits: You spend an afternoon watching a progress bar download 15GB just to get a 50MB compiler.
* Zero transparency: You have no idea which files were installed or where they went. Your registry is littered with cruft and background update services are permanent residents of your Task Manager.
* No version control: You can’t check your compiler into Git. If a teammate has a slightly different Build Tools version, your builds can silently diverge.
* The “ghost” environment: Uninstalling is never truly clean. Moving to a new machine means repeating the entire GUI dance, praying you checked the same boxes.
Even after installation, compiling a single C file from the command line requires finding the Developer Command Prompt. Under the hood, this shortcut invokes vcvarsall.bat, a fragile batch script that globally mutates your environment variables just to locate where the compiler is hiding this week.
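For a sense of what that mutation amounts to: conceptually, vcvarsall.bat just prepends a few path lists. The sketch below is illustrative only, not the real script; actual paths depend on the installed edition, toolset version, and SDK version:

:: Illustrative sketch, not the real vcvarsall.bat, whose paths are
:: discovered at runtime and change with every toolchain update.
set "PATH=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.44.xxxxx\bin\Hostx64\x64;%PATH%"
set "INCLUDE=C:\...\MSVC\14.44.xxxxx\include;C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt"
set "LIB=C:\...\MSVC\14.44.xxxxx\lib\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64"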
Ultimately, you end up with build instructions that look like a legal disclaimer:
“Works on my machine with VS 17.4.2 (Build 33027.167) and SDK 10.0.22621.0. If you have 17.5, please see Issue #412. If you are on ARM64, godspeed.”
On Windows, this has become the “cost of doing business”. We tell users to wait three hours for a 20GB install just so they can compile a 5MB executable. It’s become an active deterrent to native development.
I’m not interested in being a human debugger for someone else’s installer. I want the MSVC toolchain to behave like a modern dependency: versioned, isolated, declarative.
I spent a few weeks building an open source tool to make things better. It’s called msvcup. It’s a small CLI program. On good network/hardware, it can install the toolchain/SDK in a few minutes, including everything to cross-compile to/from ARM. Each version of the toolchain/SDK gets its own isolated directory. It’s idempotent and fast enough to invoke every time you build. Let’s try it out.
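First, a hello.c for the script to compile. Any C source works; this stand-in is mine:

/* hello.c - minimal program to prove the toolchain works */
#include <stdio.h>

int main(void) {
    printf("hello, msvc without Visual Studio\n");
    return 0;
}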
@setlocal
@if not exist msvcup.exe (
  echo msvcup.exe: installing...
  curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
  tar xf msvcup.zip
  del msvcup.zip
) else (
  echo msvcup.exe: already installed
)
@if not exist msvcup.exe exit /b 1
set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7
msvcup install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
msvcup autoenv --target-cpu x64 --out-dir autoenv %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
.\autoenv\cl hello.c
Believe it or not, this build.bat script replaces the need to “Install Visual Studio”. This script should run on any Windows system since Windows 10 (assuming it has curl/tar which have been shipped since 2018). It installs the MSVC toolchain, the Windows SDK and then compiles our program.
For my fellow Windows developers, go ahead and take a moment. Visual Studio can’t hurt you anymore. The build.bat above isn’t just a helper script; it’s a declaration of independence from the Visual Studio Installer. Our dependencies are fully specified, making builds reproducible across machines. And when those dependencies are installed, they won’t pollute your registry or lock you into a single global version.
Also note that after the first run, the msvcup commands take milliseconds, meaning we can just leave these commands in our build script and now we have a fully self-contained script that can build our project on virtually any modern Windows machine.
msvcup is inspired by a small Python script written by Mārtiņš Možeiko. The key insight is that Microsoft publishes JSON manifests describing every component in Visual Studio, the same manifests the official installer uses. msvcup parses these manifests, identifies just the packages needed for compilation (the compiler, linker, headers, and libraries), and downloads them directly from Microsoft’s CDN. Everything lands in versioned directories under C:\msvcup\. For details on lock files, cross-compilation, and other features, see the msvcup README.md.
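If you want to poke at the raw material yourself, the entry point is public. A sketch of step one (the URL is the widely documented Visual Studio 2022 release-channel alias; treat it as my assumption, not something from the msvcup docs):

curl -L -o channel.json https://aka.ms/vs/17/release/channel
:: channel.json refers to a much larger .vsman manifest that maps package
:: ids (compiler, linker, headers, libs) to payload URLs on Microsoft's CDN.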
The astute will also notice that our build.bat script never sources any batch files to set up the “Developer Environment”. The script contains two msvcup commands. The first installs the toolchain/SDK, and like a normal installation, it includes “vcvars” scripts to set up a developer environment. Instead of sourcing those, our build.bat leverages the msvcup autoenv command to create an “Automatic Environment”: a directory of wrapper executables that set the environment variables on your behalf before forwarding to the underlying tools. It even includes a toolchain.cmake file which points your CMake projects at these tools, allowing you to build CMake projects outside a special environment.
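A hedged sketch of what driving a CMake project through that file looks like (the autoenv path matches the --out-dir used in build.bat above; the flags are standard CMake):

cmake -B build -DCMAKE_TOOLCHAIN_FILE=autoenv\toolchain.cmake
cmake --build build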
At Tuple (a pair-programming app), I integrated msvcup into our build system and CI, which allowed us to remove the requirement for the user/CI to pre-install Visual Studio. Tuple compiles hundreds of C/C++ projects including WebRTC. This enabled both x86_64 and ARM builds on CI, as well as keeping CI and every developer on the same toolchain/SDK.
* Everything installs into a versioned directory. No problem installing versions side-by-side. Easy to remove or reinstall if something goes wrong.
* Cross-compilation enabled out of the box. msvcup currently always downloads the tools for all supported cross-targets, so you don’t have to do any work looking for all the components you need to cross-compile.
* Lock file support. A self-contained list of all the payloads/URLs. Everyone uses the same packages, and if Microsoft changes something upstream, you’ll know.
* Blazing fast. The install and autoenv commands are idempotent and complete in milliseconds when there’s no work to do.
No more “it works on my machine because I have the 2019 Build Tools installed.” No more registry-diving to find where cl.exe is hiding this week. With msvcup, your environment is defined by your code, portable across machines, and ready to compile in milliseconds.
msvcup focuses on the core compilation toolchain. If you need the full Visual Studio IDE you’ll still need the official installer. For most native development workflows, though, it covers what you actually need.
Let’s try this on a real project. Here’s a script that builds raylib from scratch on a clean Windows system. In this case, we’ll just use the SDK without the autoenv:
@setlocal
set TARGET_CPU=x64
@if not exist msvcup.exe (
  echo msvcup.exe: installing...
  curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026_02_07/msvcup-x86_64-windows.zip
  tar xf msvcup.zip
  del msvcup.zip
)
set MSVC=msvc-14.44.17.14
set SDK=sdk-10.0.22621.7
msvcup.exe install --lock-file msvcup.lock --manifest-update-off %MSVC% %SDK%
@if %errorlevel% neq 0 (exit /b %errorlevel%)
@if not exist raylib (
  git clone https://github.com/raysan5/raylib -b 5.5
)
call C:\msvcup\%MSVC%\vcvars-%TARGET_CPU%.bat
call C:\msvcup\%SDK%\vcvars-%TARGET_CPU%.bat
cmd /c "cd raylib\projects\scripts && build-windows"
@if %errorlevel% neq 0 (exit /b %errorlevel%)
@echo build success: game exe at:
@echo .\raylib\projects\scripts\builds\windows-msvc\game.exe
No Visual Studio installation. No GUI. No prayer. Just a script that does exactly what it says.
P.S. Here is a page that shows how to use msvcup to build LLVM and Zig from scratch on Windows.
...
Read the original on marler8997.github.io »
Oat is an ultra-lightweight HTML + CSS, semantic UI component library with zero dependencies. No framework, build, or dev complexity. Just include the tiny CSS and JS files and you are good to go building decent looking web applications with most commonly needed components and elements.
Semantic tags and attributes are styled contextually out of the box without classes, forcing best practices, and reducing markup class pollution. A few dynamic components are WebComponents and use minimal JavaScript.
Fully-standalone with no dependencies on any JS or CSS frameworks or libraries. No Node.js ecosystem garbage or bloat.
Native HTML elements and semantic attributes like role=“button” are styled directly. No classes.
Semantic HTML and ARIA roles are used (and forced in many places) throughout. Proper keyboard navigation support for all components and elements.
Easily customize the overall theme by overriding a handful of CSS variables. data-theme=“dark” on body automatically uses the bundled dark theme.
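As a sketch of the intended usage (the file names here are my placeholders, not Oat's documented paths; the classless markup, role="button" styling, and data-theme attribute come from the description above):

<!doctype html>
<html lang="en">
<head>
  <!-- Placeholder file names; use the CSS/JS files Oat actually ships. -->
  <link rel="stylesheet" href="oat.css">
  <script src="oat.js" defer></script>
</head>
<body data-theme="dark">
  <!-- Semantic elements are styled contextually: no utility classes. -->
  <button>Save</button>
  <a role="button" href="/docs">Read the docs</a>
</body>
</html>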
This was made after the unending frustration with the over-engineered bloat, complexity, and dependency-hell of pretty much every JavaScript UI library and framework out there. Done with the continuous PTSD of rug-pulls and lock-ins of the Node.js ecosystem trash. [1]
I’ve published this, in case other Node.js ecosystem trauma victims find it useful.
My goal is a simple, minimal, vanilla, standards-based UI library that I can use in my own projects for the long term without having to worry about JavaScript ecosystem trash. Long term because it’s just simple vanilla CSS and JS. The look and feel are influenced by the shadcn aesthetic.
...
Read the original on oat.ink »
Modern CSS code snippets, side by side with the old hacks they replace. Every technique you still Google has a clean, native replacement now.
(The page’s examples are interactive code snippets; among the pairings shown: OKLCH adjustments where only L changes while the perceived hue stays the same, native click-outside-to-close behavior, animating any one transform property without touching the rest, and focus indicators that don’t fire on mouse click, so people aren’t tempted to remove them entirely, an a11y fail.)
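To give a flavor of the format, here is the kind of pairing the site presents; this particular example is my own illustration, not lifted from the page:

/* Old hack: absolute positioning plus a transform offset to center */
.dialog-old {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}

/* Modern: one grid declaration on the container */
.dialog-wrapper {
  display: grid;
  place-content: center;
}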
...
Read the original on modern-css.com »
Hideki Sato, the designer behind virtually every Sega console, and the company’s former president, has died aged 77.
Japanese games outlet Beep21 reports that Sato passed away this weekend.
Sato and his R&D team were responsible for the creation of Sega’s arcade and home console hardware, including the Master System, Genesis / Mega Drive, Saturn, and Dreamcast.
The engineer joined Sega in 1971 and was the company’s acting president between 2001 and 2003. He left the company in 2008.
“From the beginning, Sega’s home console development has always been influenced by our arcade development,” Sato previously told Famitsu in an interview covering Sega’s history.
“Our first 8-bit machine was the SC-3000. This was a PC for beginner-level users. At that time, Sega only did arcade games, so this was our first challenge. We had no idea how many units we’d sell.”
Sato said of Mega Drive, Sega’s most successful console: “At that point, we decided to start developing a new home console. By then, arcade games were using 16-bit CPUs.
“Arcade development was something we were very invested in, so we were always using the most cutting-edge technology there. Naturally, it started us thinking: what if we used that technology in a home console?
“Two years after we started development, it was done: a 16-bit CPU home console, the Megadrive. The 68000 chip had also recently come down in price, so the timing was right.”
On Dreamcast, the release that ultimately ended Sega’s run in hardware, Sato said the keyword for the development was “play and communication.”
“The ultimate form of communication is a direct connection with another, and we included the modem and the linkable VMUs for that purpose,” he said.
“We had also planned to have some sort of linking function with cell phones, but we weren’t able to realize it. Consumers were now used to the raging ‘bit wars’, so even though we knew it was a lot of nonsense, we needed to appeal to them in those terms with the Dreamcast.
“And so we marketed it as having a ‘128 bit graphics engine RISC CPU’, even though the SH-4 was only 64-bit. (laughs) On the other hand, we extensively customized the original SH-4 for the Dreamcast, to the point where I think you could almost call it something new.”
...
Read the original on www.videogameschronicle.com »
Yes, I know I’m crazy, but I figured why not. I’m enjoying working on the PC6502 project, but having a little tower of PCBs on the sofa isn’t the best. It’s very simple; these are the specs:
* 65C22 VIA (for timers and some IO)
Lower parts (main board, battery, keyboard) in its case
Screen with BASIC code
First bring up
* 2026-01-01 - Initial power up of PCBs gives all the correct voltages
* 2026-01-03 - Bring up of board with simple ROM/RAM/Console working.
* 2026-01-04 - VIA working, ACIA working, comms to/from the keyboard in basic working. Begun integrating keyboard into firmware
* 2026-01-05 - Keyboard now integrated into firmware, so you can type on the keyboard and don’t need the console for input
* 2026-01-09 - Compact flash working, Beeper also now working. Also runs from battery just fine.
* 2026-01-16 - Connected a 4.3″ 800x480 RA8875 based display and got that working. I failed to get the LT7683 based display working.
* 2026-01-17 - Worked on a number of case-related things that did not quite work in real life.
* 2026-01-18 - Tweaked CPLD to slow down FTDI read/writes. Also began work on the BIOS: added start beep and began work on load/save functions
* 2026-02-08 - Added more commands, notably SAVE, LOAD and DIR for compact flash
* add in larger display (going to try a 10.1″ RA8889 based 1024x600, fall back is a 9″ RA8875 based 800x480)
The memory map is fairly stable at the moment, everything seems to be working fine.
I’ve added some extra commands to EhBASIC; they are as follows:
* CIRCLE X,Y,R,C,F - Draws a Circle, X is 0-799, Y is 0-479, R(radius) is 1 - 65535, C is 8bit RGB Value (RRRGGGBB), F is fill (0 = no fill, 1 = fill)
* COLOUR <0-255> - Sets the colour (text) to 8bit RGB value, in the form RRRGGGBB
* DIR - Scans the Compact Flash card and shows slot number and name for any files present
* ELIPSE X,Y,RX,RY,C,F - Draws an ellipse, X is 0-799, Y is 0-479, RX is X radius, RY is Y radius, C is colour and F is fill
* LINE X,Y,EX,EY,C - Draws a line, X is 0-799, Y is 0-479, EX is X end point (0-799), EY is Y end point (0-479), C is colour
* MODE <0,1> - Sets the display mode, MODE 0 is text, MODE 1 is graphics
* OUTK - Outputs text to the 8-character display on the keybed; can be a string or value. Anything more than 8 characters will result in text shifting. A string will clear the display and then output the characters
* PLOT X,Y,C - Plots a dot, X is 0-799, Y is 0-479 and C is 8bit RGB Value (RRRGGGBB)
* SAVE <0-2047>,“” - Saves the current BASIC program into a slot and gives it a name, up to 16 characters
* SQUARE X,Y,EX,EY,C,F - Draws a square, X is 0-799, Y is 0-479, EX is X end point (0-799), EY is Y end point (0-479), C is colour and F is fill
* WOZMON - Jumps to WOZMON; Q will exit WOZMON and return to BASIC (handy for checking chunks of memory)
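A short program exercising these commands might look like this (my own illustration, following the syntax above; colour values use the RRRGGGBB encoding):

10 MODE 1 : REM switch to graphics
20 COLOUR 224 : REM text colour 11100000 = red
30 CIRCLE 400,240,100,28,1 : REM filled circle at the centre of the 800x480 screen
40 LINE 0,0,799,479,3 : REM corner-to-corner line
50 PLOT 400,240,255 : REM single white dot
60 OUTK "PC6502" : REM 8-character display on the keybed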
...
Read the original on github.com »
New York City’s public hospital system is paying millions to Palantir, the controversial ICE and military contractor, according to documents obtained by The Intercept.
Since 2023, the New York City Health and Hospitals Corporation has paid Palantir nearly $4 million to improve its ability to track down payment for the services provided at its hospitals and medical clinics. Palantir, a data analysis firm that’s now a Wall Street giant thanks to its lucrative work with the Pentagon and U.S. intelligence community, deploys its software to make the billing of Medicaid and other public benefits more efficient. That includes automated scanning of patient health notes to “Increase charges captured from missed opportunities,” contract materials reviewed by The Intercept show.
Palantir’s administrative involvement in the business of healing people stands in contrast to its longtime role helping facilitate warfare, mass deportations, and dragnet surveillance.
In 2016, The Intercept revealed Palantir’s role behind XKEYSCORE, a secret NSA bulk surveillance program revealed by the whistleblower Edward Snowden that allowed the U.S. and its allies to search the unfathomably large volumes of data they collect. The company has also attracted global scrutiny and criticism for its “strategic partnership” with the Israeli military while it was leveling Gaza.
But it’s Palantir’s work with U.S. Immigration and Customs Enforcement that is drawing the most protest today. The company provides a variety of services to help the federal government find and deport immigrants. ICE’s Palantir-furnished case management software, for example, “plays a critical role in supporting the daily operations of ICE, ensuring critical mission success,” according to federal contracting documents.
“It’s unacceptable that the same company that is targeting our neighbors for deportation and providing tools to the Israeli military is also providing software for our hospitals,” said Kenny Morris, an organizer with the American Friends Service Committee, which shared the contract documents with The Intercept.
Established by the state legislature, New York City Health and Hospitals is the nation’s biggest municipal health care system, administering over 70 facilities throughout New York City, including Bellevue Hospital, and providing care for over 1 million New Yorkers annually.
New York City Health and Hospitals spokesperson Adam Shrier did not respond to multiple requests to discuss the contract’s details. Palantir spokesperson Drew Messing said the company does not use or share hospital data outside the bounds of its contract.
Palantir’s contract with New York’s public health care system allows the company to work with patients’ protected health information, or PHI. With permission from New York City Health and Hospitals, Palantir can “de-identify PHI and utilize de-identified PHI for purposes other than research,” the contract states. De-identification generally involves the stripping of certain revealing information, such as names, Social Security numbers, and birthdays. Such provisions are common in contracts involving health data.
Activists who oppose Palantir’s involvement in New York point to a large body of research indicating that re-identifying personal data, including in medical contexts, is often trivial.
“Any contract that shares any of New Yorkers’ highly personal data from NYC Health & Hospital’s with Palantir, a key player in the Trump administration’s mass deportation effort, is reckless and puts countless lives at risk,” said Beth Haroules of the New York Civil Liberties Union. “Every New Yorker, without exception, has a right to quality healthcare and city services. New Yorkers must be able to seek healthcare without fear that their intimate medical information, or immigration status, will be delivered to the federal government on a silver platter.”
Palantir has long provided similar services to the U.K. National Health Service, a business relationship that today has an increasing number of detractors. Palantir “has absolutely no place in the NHS, looking after patients’ personal data,” Green Party leader Zack Polanski recently stated in a letter to the U.K. health secretary.
Some New York-based groups feel similarly out of distrust for what the firm could do with troves of sensitive personal data.
“Palantir is targeting the exact patients that NYCHH is looking to serve,” said Jonathan Westin of the Brooklyn-based organization Climate Organizing Hub. “They should immediately sever their contract with Palantir and stand with the millions of immigrant New Yorkers that are being targeted by ICE in this moment.”
“The chaos Palantir is inflicting through its technology is not just limited to the kidnapping of our immigrant neighbors and the murder of heroes like our fellow nurse, Alex Pretti,” said Hannah Drummond, an Asheville, North Carolina-based nurse and organizer with National Nurses United, a nursing union. “As a nurse and patient advocate, I don’t want anything having to do with Palantir in my hospital — and neither should any elected leader who claims to represent nurses.”
Palantir’s vocally right-wing CEO Alex Karp has been a frequent critic of New York City’s newly inaugurated democratic socialist Mayor Zohran Mamdani. Health and Hospitals operates as a public benefit corporation, but the mayor can exert considerable influence over the network, for instance through the appointment of its board of directors. Its president, Dr. Mitchell Katz, was renominated by Mamdani, then the mayor-elect, late last year.
The mayor’s office did not respond in time for publication when asked about its stance on the contract.
...
Read the original on theintercept.com »
Magnus Carlsen (Norway) is the 2026 FIDE Freestyle Chess World Champion. A draw in the fourth and final game against Fabiano Caruana (USA) was enough to seal a 2.5–1.5 match victory in Weissenhaus, Germany.
The decisive moment came in game three. Carlsen won from a dead lost position, turning the match in his favor. Entering the final game, he needed only a draw and achieved it in an equal endgame after Caruana missed late chances to mount a comeback. Both finalists qualified for the 2027 FIDE Freestyle Chess World Championship.
...
Read the original on www.fide.com »