10 interesting stories served every morning and every evening.
Capture a web page as it appears now for use as a trusted citation in the future.
...
Read the original on archive.org »
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolution — this time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonics chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information. The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Making circuitry for light as powerful and ubiquitous as circuitry for electrons is one of today’s technological frontiers, says Scott Papp, a NIST physicist whose group led the research, published this week in Nature. “We’re learning to make complex circuits with many functions, cutting across many application areas.”
When it comes to information transfer and processing, light can do things that electricity can’t. Photons — particles of light — are far zippier than electrons at working their way through circuits.
Laser light is also essential for controlling powerful, emerging quantum technologies such as optical atomic clocks and quantum computers.
But several hurdles remain before integrated photonics can truly hit its stride. One involves lasers. High-quality, compact and efficient lasers exist in only a few wavelengths, or colors, of light. For example, semiconductor lasers are very good at generating infrared light with a wavelength of 980 nanometers, or billionths of a meter — a color just outside the range of human vision.
Emerging technologies such as optical atomic clocks and quantum computers need laser light in many other colors as well. The lasers that produce those colors are big, costly and power-hungry, effectively confining these quantum technologies to a handful of special-purpose labs.
By integrating lasers into circuits on chips, scientists hope to help quantum technologies become cheaper and more portable, so they can start to fulfill their vast promise.
The new NIST photonics chip is a bit like a layer cake. NIST physicists Papp and Grant Brodnik, along with colleagues, started with a standard wafer of silicon coated with silicon dioxide (glass) and lithium niobate, a so-called nonlinear material that can change the color of light coming into it.
The researchers then added pieces of metal to electrically control how the circuits convert one color of light to others. The scientists also created other metal-lithium niobate interfaces that allowed them to rapidly turn light on and off within the circuits — a crucial ability for data processing and high-speed routing.
The icing on the cake, so to speak, was a second nonlinear material called tantalum pentoxide, or tantala. Tantala can transform light in ways that feel like magic, taking in a single laser color and putting out the full rainbow of visible light colors plus a wide range of infrared wavelengths. Papp and colleagues have spent years developing techniques to fabricate circuits out of tantala without heating it up, allowing the material to be deposited onto other materials without damaging them.
By patterning the different materials on top of each other in a three-dimensional stack, the researchers produced a single chip that efficiently routes light between layers. That allowed them to merge the light-manipulating wizardry of tantala with the controllability of lithium niobate. The new technique “allows seamless integration,” says Brodnik. “The real power is that tantala can be added to existing circuitry.”
Ultimately, the researchers were able to fit roughly 50 fingernail-sized chips containing 10,000 photonic circuits, each outputting a unique color, onto a wafer roughly the size of a beer coaster. “We can create all these different colors, just by designing circuits,” says Papp.
Quantum technologies such as clocks and computers could be among the biggest beneficiaries of integrated photonics. These devices often use arrays of atoms to store and process information. For each type of atom, physicists need lasers tailored to the atom’s internal quantum energy levels. For example, rubidium atoms, commonly used in quantum computers and clocks, respond to red light with a wavelength of 780 nanometers. Strontium atoms, another popular choice, “see” blue light at 461 nanometers. Shine other colors on the atoms and nothing happens.
The bulky, costly and complicated lasers needed to produce these bespoke colors have been a major hindrance to getting quantum computers and optical clocks out of the lab and into the field, where they could have big impacts. Cheap, low-power, portable optical clocks, for example, could help predict volcanic eruptions and earthquakes, offer an alternative to GPS for positioning and navigation, and help scientists investigate scientific mysteries such as the nature of dark matter. Quantum computers could offer new ways to study the physics and chemistry of drugs and materials.
Integrated photonic circuits aren’t just for quantum. Papp believes NIST’s photonics chips could help efficiently shuttle signals between the specialized chips used by tech firms, potentially making AI-based tools more powerful and efficient. Tech companies are also interested in using photonics to improve virtual reality displays.
While NIST’s chips aren’t yet ready for mass production, the technique used to create them provides a path forward, Papp and Brodnik say. The NIST scientists collaborated with experts at Octave Photonics, a Louisville, Colorado-based startup company founded by former NIST researchers that’s now working to scale up the technology.
“When you see the chip glowing in the lab, taking in invisible light and making all this visible light in one integrated chip — it’s obvious how many potential applications there could be,” says Papp.
...
Read the original on www.nist.gov »
UPDATE–Vercel, a widely used cloud platform for developing and deploying apps, has disclosed a breach of its internal systems, and says a “limited subset of customers” is affected.
The incident came to light on Sunday, and the company says it has brought in an incident response provider to investigate the intrusion. The company recommends that customers check activity logs for suspicious activity and rotate environment variables as a precaution. Vercel also suggests that customers use its sensitive environment variables feature to mark values such as API keys as sensitive, which causes Vercel to store them in an unreadable format.
Vercel said the intrusion was related to the compromise of a third-party app.
“Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations,” the company said.
Vercel did not identify the app but published an IOC identifying it. Given that the intrusion originated with a third-party app, there may well be other related incidents emerging in the coming hours or days.
“We’ve identified a security incident that involved unauthorized access to certain internal Vercel systems. We are actively investigating, and we have engaged incident response experts to help investigate and remediate. We have notified law enforcement and will update this page as the investigation progresses,” the company said in a statement.
“At this time, we have identified a limited subset of customers that were impacted and are engaging with them directly.”
Vercel provides a wide range of services for developers and enterprises, and has a number of offerings that are focused on agentic AI workloads.
Vercel did not specify which of its systems were compromised or how many of its customers are affected.
This story was updated on April 19 to add information about the source of the intrusion.
...
Read the original on decipher.sc »
Pausing a game is so common that I doubt many of us ever really think about it. Maybe a pause menu has a cool song, or maybe you’re playing an always-online game that features a pause menu that doesn’t actually pause anything. In those cases, you might momentarily contemplate the act of pausing a video game. Those are the rare exceptions. Normally, we all just pause and unpause without a second thought. It’s just expected that most games will let you pause the action.
But how does that actually work? How do developers actually let you pause a game?
I asked developers on social media to tell me how they make a game pause, and the answers I got were all over the place. Many devs said that most modern game engines support pausing, and it shouldn’t cause too many issues as long as you don’t screw anything up while making the game. But, as you might expect, game development is weird and complicated and messy, and that means sometimes pausing a game involves manipulating time.
“In Waves of Steel, pausing slows the game speed down to 0.000000001 times normal speed,” explained game developer Chris Weisiger on BlueSky. “In other words, it’d take about three years of real-time for one second of game time to pass. I did this because I heard that Unity has special behavior for when gamespeed is 0, which I wanted to avoid.”
“As a hobbyist in Unreal, I do something a little stupid,” said dev Tommy Hanusa on social media. “I set the timescale to .000001 so that I can let the player/tester eject from the pause and fly around (with an appropriately ridiculous speed of like 5000000) in case they want to show me something.”
Many other devs told me that they just set the game’s timescale to 0 when you hit pause and make sure that certain functions, like the menu UI, ignore that command and still work as expected.
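The timescale approach those devs described can be sketched in a few lines of Python. This is an illustrative toy, not any particular engine's API: gameplay systems consume a scaled delta time while UI systems read the unscaled one, so setting the scale to 0 freezes the simulation but leaves menus responsive.

```python
class GameLoop:
    """Toy loop illustrating timescale-based pausing (illustrative names,
    not any engine's real API)."""

    def __init__(self):
        self.timescale = 1.0   # 0.0 means paused
        self.game_clock = 0.0  # advances only while unpaused
        self.ui_clock = 0.0    # advances every frame, so menus keep animating

    def toggle_pause(self):
        self.timescale = 0.0 if self.timescale else 1.0

    def tick(self, real_dt):
        # Gameplay reads the scaled delta: at timescale 0 it sees zero
        # elapsed time and effectively freezes.
        self.game_clock += real_dt * self.timescale
        # UI reads the raw delta and keeps working while paused.
        self.ui_clock += real_dt

loop = GameLoop()
loop.tick(0.016)     # one normal frame
loop.toggle_pause()
loop.tick(0.016)     # one paused frame: game_clock stays put, ui_clock moves
```

The near-zero-timescale tricks quoted above are a variation on the same idea, chosen to sidestep any special-case behavior an engine might have at exactly zero.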
Another aspect of pausing a game that I hadn’t considered was that there are different kinds of pauses. For example, hitting start might pause a game and bring up the pause menu. But what if you disconnect a controller? What if you open the game’s inventory? What if you hit the guide button on an Xbox and pop out to the guide? These are different kinds of pauses, and some games have a whole bunch of them.
“I worked on various games at Frontier, including Kinectimals on the Xbox 360,” explained game dev Andrew Gillett via email. “I wasn’t directly involved with this part of the game, but I recall there were something like seven different levels of ‘pause.’ For example, the game should pause if the Kinect camera is disconnected, and this is a different kind of pause than when the user has brought up the Xbox system menu.”
Dreamless on BlueSky explained that these different kinds of pauses could sometimes cause headaches for devs.
“I remember in the Xbox/PS2 era we’d do a pause for normal gameplay,” said Dreamless. “With exceptions like can’t pause during QTEs & etc. Then, when it was time to ship, we’d read the [Technical Requirements Checklists] and have to go back and add a special pause for when you unplug the controller. The two pauses would conflict and cause bugs.”
Perhaps my favorite pause method involves devs freezing time and then taking a screenshot of the game, which the game then uses as the background behind the pause menu UI, letting them get up to all sorts of nasty business behind that image, like not rendering enemies or even moving the player to an empty room.
“Usually, I will…take a screenshot of the gameplay at the point the game is paused and then draw that under whatever pause screen menu while also no longer drawing the actual objects,” said game dev DW O’Boyle. “This is mostly just to free up some memory, but it isn’t really necessary for the style of games I make.”
“In most of the Vlambeer games and Minit / Disc Room,” said developer Jan Willem Nijman, “I take a screenshot (with the UI disabled), then either jump to a completely different empty room or deactivate everything…with that screenshot as the background, [and] on unpause jump back [to the game]. Sometimes there’s a 1-frame delay because that screenshot needs the UI disabled.”
When someone replied that this trick always felt “hacky” to them, Nijman said that in every game they’ve worked on, you’ll find “a healthy dose of hackyness.”
My big takeaway from all of these responses is that, generally speaking, pausing a game isn’t the most complicated feature to get working in a project. However, you still need to be mindful of how you implement it, and do proper amounts of testing if your game has quirks that might cause issues when you start pausing game time.
Developer Caliban Darklock told me on BlueSky that a lot of game makers screw up adding a pause function early on in their development career, which can lead to problems, but can also be a very important learning moment.
“The first time I implemented ‘pause’ in a game, I had every single game object checking whether the game was paused in every single frame, which degraded performance across the whole game,” said Darklock. “Now all my objects are arranged in a hierarchy, and only one object at the top checks if the game is paused.”
“Most developers do a horrible, sloppy nightmare job the first time they implement this, and then they know better for the rest of their lives.”
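Darklock's fix can be sketched as a toy scene hierarchy in Python (the class names here are made up for illustration): only the root object checks the pause flag, once per frame, and a paused frame simply never propagates to the children.

```python
class Node:
    """A scene object that updates itself and then its children."""

    def __init__(self, children=None):
        self.children = children or []
        self.updates = 0

    def update(self, dt):
        self.updates += 1
        for child in self.children:
            child.update(dt)

class Root(Node):
    """Only the root consults the pause flag; no other object ever checks it."""

    def __init__(self, children=None):
        super().__init__(children)
        self.paused = False

    def update(self, dt):
        if self.paused:
            return  # one check per frame instead of one per object
        super().update(dt)

leaf = Node()
root = Root([Node([leaf]), Node()])
root.update(0.016)   # normal frame: every object ticks once
root.paused = True
root.update(0.016)   # paused frame: no object ticks at all
```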
...
Read the original on kotaku.com »
We’ve identified a security incident that involved unauthorized access to certain internal Vercel systems. We are actively investigating, and we have engaged incident response experts to help investigate and remediate. We have notified law enforcement and will update this page as the investigation progresses.
At this time, we have identified a limited subset of customers that were impacted and are engaging with them directly.
Our services remain operational, and we will continue to update this page with new information.
We are taking actions to protect Vercel systems and customers.
Our investigation is ongoing. In the meantime, here are best practices you can follow for peace of mind:
* Review the activity log for your account and environments for suspicious activity.
* Review and rotate environment variables. Environment variables marked as “sensitive” in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.
* Take advantage of the sensitive environment variables feature going forward, so that secret values are protected from being read in the future.
For support rotating your secrets or other technical support, contact us through vercel.com/help.
Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
...
Read the original on vercel.com »
What are skiplists good for?
A while back, I joined Phil Eaton’s book club on The Art of Multiprocessor Programming, and the topic of skiplists came up.
For most of my career, skiplists had always seemed like a niche data structure, with a rabid cult following but not a whole ton of applicability to my life. Then six or so years ago, we encountered a problem at Antithesis that seemed intractable until it turned out that a generalization of skiplists was exactly what we needed.
Before I tell you about that, though, let me explain what skiplists are (feel free to skip ahead if you already know them well).
A skiplist is a randomized data structure that’s basically a drop-in replacement for a binary search tree with the same interface and the same asymptotic complexity on each of its operations. Some people like them because you can produce relatively simple and understandable lock-free concurrent implementations, and others like them as a matter of taste, or because they enjoy listening to bands that you’ve totally never heard of.
In implementation terms, you can think of them roughly as linked lists plus “express lanes”:
You start with a basic linked list, and then add a hierarchy of linked lists with progressively fewer nodes in them. In the example above, the nodes in the higher-level lists are chosen probabilistically, with each node having a 50% chance of being promoted to the next level.1
This helps with search, because you can use the higher-level lists to skip more quickly to the node you want:
Here we’ve found the node with an ID of 38 by starting at the top level and working downwards. At each level we advance until the next node would have an ID that’s too high, then jump down a level.
For (much) more on skiplists, see The Ubiquitous Skiplist.
In a regular linked list of n nodes, finding a node would take O(n) time, because you’re walking through the nodes one by one. Skiplists let you jump levels, with each level halving the number of nodes you need to check, so you end up finding the node in O(log n) time.
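As a rough illustration (my own sketch, not drawn from the book or the article's diagrams), here is a minimal Python skiplist with the coin-flip promotion and top-down search just described:

```python
import random

class SkiplistNode:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * (level + 1)  # one forward pointer per level

class Skiplist:
    MAX_LEVEL = 16

    def __init__(self):
        self.head = SkiplistNode(float("-inf"), self.MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        # Each node has a 50% chance of being promoted to the next level up.
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        # Record, at every level, the last node before the insertion point.
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for lvl in range(self.level, -1, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
            update[lvl] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkiplistNode(key, lvl)
        for i in range(lvl + 1):
            new.next[i] = update[i].next[i]
            update[i].next[i] = new

    def contains(self, key):
        # Start at the top level; advance while the next key is still too
        # small, then drop down a level -- the "express lanes" search.
        node = self.head
        for lvl in range(self.level, -1, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
        node = node.next[0]
        return node is not None and node.key == key

sl = Skiplist()
for k in [12, 17, 20, 25, 31, 38, 44, 50]:
    sl.insert(k)
print(sl.contains(38), sl.contains(39))  # True False
```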
This is all very nice, but after reading about this data structure I literally never thought about it again, until one day we encountered the following problem at Antithesis…
Antithesis runs customers’ software many times to look for bugs. Each time, our fuzzer injects different faults and tells your testing code to make different random decisions. Over many runs, these choices create a branching tree of timelines: each path from root to leaf represents one sequence of choices the fuzzer made and what happened as a result.
There were a lot of queries that we wanted to do which basically amounted to fold operations up or down this tree. For example, given a particular log message, what’s the unique history of events that led to it? (Walk up the parent pointers from that node to the root.)
The trouble was that the amount of data output by the software we were testing was so huge, we had to throw it all into an analytic database, and at the time we were using Google BigQuery. Analytic databases are optimized for scanning massive amounts of data in parallel to compute aggregate results. The tradeoff is that they’re slow at point lookups, where you fetch a specific row by its ID.
This matters, because the natural way to represent a tree in a database is with parent pointers — each node is a row in the table, with a parent_id column pointing to its parent. To answer a question like “show me the history leading to this log message”, you’d need to walk up the tree one node at a time: look up the node, get its parent ID, look up the parent node, and so on. Each step is a point lookup. In an OLTP database designed for point lookups, that’s fine.2 But in BigQuery, basically every operation results in a full table scan, which means even the most basic queries would end up doing O(depth) reads over your entire data set. Yikes!
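To make the cost concrete, here is the naive parent-pointer walk sketched in Python, with a dict standing in for a `nodes(id, parent_id)` table (the node names are made up): each loop step is one point lookup, so the whole walk costs O(depth) of them.

```python
# Parent-pointer "table": node -> parent, None for the root.
parents = {"A": None, "B": "A", "C": "A", "D": "B", "E": "C", "F": "E"}

def lookup(table, node_id):
    # Cheap in an OLTP database; in an analytic store each call would
    # amount to scanning the whole table.
    return table[node_id]

def history(node_id):
    """Walk up the parent pointers, one lookup per ancestor."""
    path = []
    while node_id is not None:
        path.append(node_id)
        node_id = lookup(parents, node_id)
    return path

print(history("F"))  # ['F', 'E', 'C', 'A']
```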
I mean, not actually fine, but it’s less bad.
One alternative would have been to split the data: store just the tree structure (the parent pointers) in a database that’s good at point lookups, and keep the bulk data in BigQuery. But this approach would have created other problems. Every insert would need to write to both systems, and since we want to analyze the data online (while new writes are streaming in) keeping the two databases consistent would require something like two-phase commit (2PC). I prefer not to invent new 2PC problems where I don’t need them. And anyway, at the time BigQuery had very loose consistency semantics, so it’s not even clear that keeping the two systems in sync would have been possible.
Skiplists to the rescue! Or rather, a weird thing we invented called a “skiptree”…
Well, it’s like a skiplist, but it’s a tree.
More helpfully, here’s an example:
You have a level-0 tree, and then a hierarchy of trees above it. Each tree has roughly 50% of the nodes of the level below (the removed nodes are shown with grey dotted lines on the diagram).
If you pick any path from the root to a leaf, the nodes along that path — together with their appearances in the higher-level trees — form a skiplist. So a skiptree is really just a bunch of skiplists sharing structure, one for every root-to-leaf path in the tree.
To store the skiptree, you create a SQL table for each level: tree0, tree1, and so on. Each table has a row for every node in that tree. Instead of having a single parent_id column, it has a column for the closest ancestor node in the tree above (we’ll call that next_level_ancestor) and another column (call it ancestors_between) with a list of all nodes between the current node and the next-level ancestor.
For the diagram above, tree0 would look like this:
As an example, take the row for node H. Node H’s parent is D, which is not in tree1. D’s parent B is also not in tree1, but B’s parent A is, so next_level_ancestor is A. Then ancestors_between stores B and D.
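Those two columns fall out of the ordinary parent pointers once you know which nodes were promoted to the level above. A Python sketch using the node names from the example (the helper function is mine, not the article's):

```python
# Parent pointers for the example tree, plus the nodes promoted into tree1.
parents = {"A": None, "B": "A", "C": "A", "D": "B", "G": "C", "H": "D", "I": "G"}
in_tree1 = {"A", "C"}

def level_row(node):
    """Compute (next_level_ancestor, ancestors_between) for one tree0 row."""
    between = []
    anc = parents[node]
    while anc is not None and anc not in in_tree1:
        between.append(anc)  # an ancestor the next level skips over
        anc = parents[anc]
    return anc, between

print(level_row("H"))  # ('A', ['D', 'B']): ancestor A, with D and B between
```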
The higher-level tables work the same way:
You can use these tables to find the ancestors of a node by chaining together JOINs, working your way up the tables.
For example, to find all ancestors of node I, start at tree0. The next_level_ancestor column tells you to JOIN on node C in tree1, collecting node G from the ancestors_between column on the way. Then in tree1 you find that the next_level_ancestor is node A, with no other nodes to collect on the way. Node A is the root of the tree so you’re now done: the total list of ancestors is [G, C, A]. In a deeper tree you’d keep going by looking in tree2, tree3 and so on.
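That walk can be simulated in Python with dicts standing in for the level tables (the rows follow the example tree), where each loop iteration plays the role of one JOIN against the next level's table:

```python
# Per-level tables: node -> (next_level_ancestor, ancestors_between).
tree0 = {"I": ("C", ["G"]), "H": ("A", ["B", "D"])}
tree1 = {"C": ("A", [])}
levels = [tree0, tree1]

def ancestors(node):
    """Collect all ancestors by chaining one lookup (JOIN) per level."""
    result = []
    for table in levels:
        if node not in table:   # the current node is the root
            break
        anc, between = table[node]
        result.extend(between)  # ancestors skipped over at this level
        result.append(anc)
        node = anc              # continue the walk one level up
    return result

print(ancestors("I"))  # ['G', 'C', 'A']
```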
Hey! Now we can find ancestors with a single non-recursive SQL query using a fixed number of JOINs. We just had to do… 40 or so JOINs.3
Best of all, at the time BigQuery’s pricing charged you for the amount of data scanned, rather than for compute, and the geometric distribution of table sizes meant that each of these queries only cost twice a normal table scan.4
The number of skip levels was precisely chosen to generate a number of joins just under the BigQuery planner’s hard-coded limit.
Of course, there were disadvantages, like the SQL itself. The textual size of these queries was often measured in the kilobytes. But what do I look like, a caveman? We didn’t write the SQL. We wrote a compiler in JavaScript that generated it. And that is how most test properties in Antithesis were evaluated for the first six years of the company, until we finally wrote our own analytic database that could do efficient tree-shaped queries.5
I’m sure it cost Google a whole lot more.
Later I discovered that a skiptree is closely related to a real data structure called a skip graph, a distributed data structure based on skiplists. Which just goes to show that there is nothing new under the sun. Whatever crazy idea you have, there’s a good chance some other crazy person has already done it. Moral of the story: you never know when an exotic data structure will save you a lot of time and money.
Migrating from BigQuery to Pangolin (our in-house tree database) was what enabled us to launch our new pre-observability feature last year.
Also, while Andy Pavlo is correct that a well-written tree will always trounce a skiplist, the great thing about skiplists is that a totally naive implementation has adequate performance. That comes in handy when you’re writing them in, say, SQL.
Thank you to Phil Eaton for suggesting that we write this up.
...
Read the original on antithesis.com »
On April 17, 2026, engineers at NASA’s Jet Propulsion Laboratory (JPL) in Southern California sent commands to shut down an instrument aboard Voyager 1 called the Low-energy Charged Particles experiment, or LECP. The nuclear-powered spacecraft is running low on power, and turning off the LECP is considered the best way to keep humanity’s first interstellar explorer going.
The LECP has been operating almost without interruption since Voyager 1 launched in 1977 — almost 49 years. It measures low-energy charged particles, including ions, electrons, and cosmic rays originating from our solar system and galaxy. The instrument has provided critical data about the structure of the interstellar medium, detecting pressure fronts and regions of varying particle density in the space beyond our heliosphere. The twin Voyagers are the only spacecraft that are far enough from Earth to provide this information.
Like Voyager 2, Voyager 1 relies on a radioisotope thermoelectric generator, a device that converts heat from decaying plutonium into electricity. Both probes lose about 4 watts of power each year. After almost a half-century in space, power margins have grown razor thin, requiring the team to conserve energy by shutting off heaters and instruments while making sure the spacecraft don’t get so cold that their fuel lines freeze.
During a routine, planned roll maneuver on Feb. 27, Voyager 1’s power levels fell unexpectedly. Mission engineers knew any additional drop in power could trigger the spacecraft’s undervoltage fault protection system, which would shut down components on its own to safeguard the probe, requiring recovery by the flight team — a lengthy process that carries its own risks.
The Voyager team needed to act first.
“While shutting down a science instrument is not anybody’s preference, it is the best option available,” said Kareem Badaruddin, Voyager mission manager at JPL. “Voyager 1 still has two remaining operating science instruments — one that listens to plasma waves and one that measures magnetic fields. They are still working great, sending back data from a region of space no other human-made craft has ever explored. The team remains focused on keeping both Voyagers going for as long as possible.”
The choice of which instrument to turn off next wasn’t made in the heat of the moment. Years ago, the Voyager science and engineering teams sat down together and agreed on the order in which they would shut off parts of the spacecraft while ensuring the mission can continue to conduct its unique science. Of the 10 identical sets of instruments that each spacecraft carries, seven have been shut off so far. For Voyager 1, the LECP was next on that list. The team shut off the LECP on Voyager 2 in March 2025.
Because Voyager 1 is more than 15 billion miles (25 billion kilometers) from Earth, the sequence of commands to shut down the instrument will take 23 or so hours to reach the spacecraft, and the shutdown process itself will take about three hours and 15 minutes to complete. One part of the LECP — a small motor that spins the sensor in a circle to scan in all directions — will remain on. It uses little power (0.5 watts), and keeping it running gives the team the best chance of being able to turn the instrument back on someday if they find extra power.
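A quick back-of-the-envelope check of that one-way light time, from the distance given above:

```python
# One-way light time for the shutdown commands to reach Voyager 1.
distance_km = 25e9        # "25 billion kilometers" from Earth
c_km_per_s = 299_792.458  # speed of light
hours = distance_km / c_km_per_s / 3600
print(round(hours, 1))    # about 23.2, matching the "23 or so hours"
```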
Engineers are confident that shutting down the LECP will give Voyager 1 about a year of breathing room. They are using the time to finalize a more ambitious energy-saving fix for both Voyagers they call “the Big Bang,” which is designed to further extend Voyager operations. The idea is to swap out a group of powered devices all at once — hence the nickname — turning some things off and replacing them with lower-power alternatives to keep the spacecraft warm enough to continue gathering science data.
The team will implement the Big Bang on Voyager 2 first, which has a little more power to spare and is closer to Earth, making it the safer test subject. Tests are planned for May and June 2026. If they go well, the team will attempt the same fix on Voyager 1 no sooner than July. If it works, there is even a chance that Voyager 1’s LECP could be switched back on.
...
Read the original on science.nasa.gov »
Police lured the man to a meeting and arrested him after accessing a private WhatsApp group with colleagues
Police accessed the closed WhatsApp group chat, saved the evidence and told the man to come to a meeting before arresting him. The offending image showed smoke rising above a building after the March 2026 strikes and had only been shared in the private group chat. He remains in detention on charges including publishing information deemed harmful to state interests, which carries a maximum sentence of two years.
Radha Stirling, chief executive of London-based advocacy group Detained in Dubai, said Dubai police had “explicitly confirmed they are conducting electronic surveillance operations capable of detecting private WhatsApp messages.” She said people were being tracked, identified, and arrested not for public statements, but for private exchanges between colleagues. “Companies like WhatsApp must answer urgent questions about user privacy,” she added.
Ms Stirling continued: “If private communications can be detected and used as the basis for arrest by overreaching or hypersensitive states, users worldwide need clarity on how their data is being accessed.” The police report said authorities learned of the material’s existence “through electronic monitoring operations”. A special team from the Electronic and Cybercrime Department was told to find the account holder who shared the video. The airline worker was tracked down, lured to a meeting and arrested by police. The case was then escalated to State Security Prosecution. He remains in detention.
The UAE government owns majority holdings in telecom companies Etisalat and Du, giving security services the power to observe all communications on their networks. The Arab state has also used the Israeli-developed software Pegasus, which allows agents to listen in on private calls and read messages even if they are shared on encrypted apps like WhatsApp. The spyware can infect a device without the user even activating a link - such as via a WhatsApp call, even if it isn’t answered. Once inside, it can access all WhatsApp messages, logs and contacts. Ms Stirling said other tourists, airline crew and residents have reported being detained for sending, receiving or keeping content even when they did not share it.
...
Read the original on www.lbc.co.uk »
This document is published under the Creative Commons
CC BY-NC-SA 4.0 license. This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.
This document contains no LLM-generated text — zero, zip, nada. (Yes, I do use em-dashes and semicolons; yes, I have written every one myself.)
For errors noticed or other suggestions, please file issues on
Common-Lisp.net’s GitLab instance or
GitHub.
...
Read the original on fset.common-lisp.dev »
In a dramatic middle-of-the-night stand off, a bipartisan set of lawmakers pushing for true reform and privacy protections for Americans bought us some more time to fight! They are holding out for, at a minimum, the requirement of an actual probable cause warrant for FBI access to information collected under the mass spying program known as 702.
A reauthorization with virtually no changes was defeated because a core group of lawmakers held strong; they know that people are hungry for real reform that protects the privacy of our communications. We now have a 10-day extension to continue to push Congress to pass a real reform bill.
Lawmakers rallied late Thursday night to reject a proposed amendment that made gestures at privacy protections but would not have improved on the status quo, and would have reauthorized Section 702 for five more years to boot.
Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency collects full conversations being conducted by and with targets overseas — including by and with Americans in the U.S. — and stores them in massive databases. The NSA then allows other agencies, including the Federal Bureau of Investigation, to access untold amounts of that information. In turn, the FBI takes a “finders keepers” approach to this data: they reason that since it’s already collected under one law, it’s OK for them to see it.
Under current practice, the FBI can query and even read the U.S. side of those communications without a warrant. What’s more, victims of this surveillance won’t even know, and have very few ways of finding out, that their communications have been surveilled. EFF and other civil liberties advocates have been trying for years to learn when data collected through Section 702 is used as evidence against defendants.
Reforming Section 702 is even more urgent because of revelations hinted at by Senator Ron Wyden’s public statements concerning a “secret interpretation” of the law that enables surveillance of Americans, and a public “Dear Colleague” letter he sent to fellow Senators about FBI abuse of Section 702.
That’s right—the way the government conducts mass surveillance is so secret and unaccountable even the way they interpret the law is classified.
“In many cases these will be law-abiding Americans having perfectly legitimate, often sensitive, conversations,” Wyden wrote. “These Americans could include journalists, foreign aid workers, people with family members overseas - even women trying to get abortion medication from an overseas provider. Congress has an obligation to protect our country from foreign threats and protect the rights of these and other Americans.”
We have 10 days to make it clear to Congress: 702 needs real reforms. Not a blanket reauthorization. Not lip service to change. Real reform.
...
Read the original on www.eff.org »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.