10 interesting stories served every morning and every evening.
Last year, for the first time, wind and solar supplied more power than fossil fuels to the E.U., according to a new analysis.
The shift is largely due to the rapid expansion of solar energy, which is growing faster than any other source of electricity. Together, wind and solar generated 30 percent of E.U. power last year, while fossil fuels provided 29 percent, according to the analysis from Ember, a think tank based in London. Including hydro, renewables provided nearly half of all E.U. power in 2025.
The analysis finds that solar is making gains in every E.U. country, while coal is broadly in retreat. Last year, solar alone supplied more than 20 percent of power in Hungary, Cyprus, Greece, Spain, and the Netherlands. Meanwhile, in 19 European countries, coal accounted for less than 5 percent of power. In 2025, both Ireland and Finland joined the ranks of European countries that have shuttered their last remaining coal plants.
Warming, however, continues to challenge the shift to clean energy as drought saps hydropower. Last year, hydro output dropped slightly in the E.U., and natural gas power rose to compensate.
“The next priority for the E.U. should be to put a serious dent in reliance on expensive, imported gas,” said Ember analyst Beatrice Petrovich. “Gas not only makes the E.U. more vulnerable to energy blackmail, it’s also driving up prices.”
In parts of Europe, there are signs that increasingly cheap batteries are beginning to displace natural gas in the early evening, when power demand is high, but solar output is waning. Said Petrovich, “As this trend accelerates it could limit how much gas is needed in evening hours, therefore stabilizing prices.”
...
Read the original on e360.yale.edu »
On January 21, 2026, @fredwilson challenged @seth: AI can write code, but it can’t affect the physical world. This is our response. Real corn, grown from seed to harvest, with every decision made by Claude Code.
AI doesn’t need to drive a tractor. It needs to orchestrate the systems and people who do. A farm manager doesn’t personally plant every seed. They aggregate data, make decisions, coordinate contractors. Claude Code becomes that farm manager — 24/7, data-driven, fully documented.
The Process — How this was built
Fred processes emails as part of his daily check routine. Priority given to farmers and operators.
A project by @seth, inspired by @fredwilson, orchestrated by Claude Code (Opus 4.5). Want to help? Iowa land leads, ag expertise, vibe coders welcome: [email protected]
...
Read the original on proofofcorn.com »
Tesla was told that if it couldn’t resolve the deceptive marketing within those 60 days, the sales suspension would take effect. That would be bad for the automaker, as California is far and away its largest market in the US, albeit one that is shrinking each quarter. Having to suspend sales entirely in the state would be disastrous. Some had speculated that Tesla could change Autopilot’s name to something less misleading, but the company chose a more drastic approach.
Now, if you buy a new Tesla and want it to steer itself—while you pay attention to the road—you will have to pay for FSD. Until the middle of February, that can be done for a one-time fee of $8,000. But starting on February 14, that option goes away, too, and the sole choice will be a $99/month FSD subscription.
But probably not for very long. Last night, Musk revealed on his social media platform that “the $99/month for supervised FSD will rise as FSD’s capabilities improve. The massive value jump is when you can be on your phone or sleeping for the entire ride (unsupervised FSD).”
The quest for recurring revenue streams is becoming something of a holy grail in the automotive industry as OEMs that previously treated each customer as a one-time sale now hope to make themselves more attractive to investors by encouraging customers to give them regular payouts.
This may have contributed to General Motors’ decision to drop Apple CarPlay and Android Auto. BMW has also experimented with subscription services. Tesla’s stock price remains so high that such games are probably unnecessary here, but with falling profit margins, declining sales, and the loss of emissions credits to bolster the bottom line, one can see why regular cash infusions from Tesla drivers would be desirable.
...
Read the original on arstechnica.com »
Disclaimer: None of this is written by AI, I’m still a real person writing my own blog like it’s 1999
I finally figured out how to do Gigabit Ethernet over my existing phone wires.
I’ve mostly lived with powerline adapters over recent years. Some worked well, some did not (try a few and return what doesn’t work in your home). One I had for a while gave me a stable 30 Mbps, which wasn’t much but was good enough for internet at the time. I care very much about having stable low latency for gaming, more than bandwidth.
Fast forward to my current situation: that powerline adapter regularly lost connection, which was a major problem. I got some new ones with the latest and greatest G.hn 2400 standard. The final contender served around 180 Mbps to my office (with high variance, 120 to 280 Mbps), or around 80 Mbps to the top floor. It’s good enough to watch YouTube/TV, yet it’s far from impressive.
One peculiar thing from the UK: Internet providers don’t truly offer gigabit internet. They have a range of deals like 30 Mbps — 75 Mbps — 150 Mbps — 300 Mbps — 500 Mbps — 900 Mbps, each one costing a few more pounds per month than the last. This makes the UK simultaneously one of the cheapest and one of the most expensive countries to get Internet.
Long story short, new place, new hardware, new deals, the internet has been running at 500 Mbps for some time now.
Every 50 GB Helldivers 2 update (because these idiots shipped the same content in duplicate 5 times) is a painful reminder that the setup is not operating at capacity.
Problem: How to get 500 Mbps to my room?
I’ve been looking for a way to reuse phone wires for a while, because British houses are full of phone sockets. There are 2 sockets in my office room.
I can’t stress enough how much we love our phone sockets. It’s not uncommon to have a one bed flat with 2 phone sockets in the living room and 2 phone sockets in the bedroom and a master socket in the technical room. It’s ridiculous.
A new house bought today could have 10 phone sockets and 0 Ethernet sockets. There is still no regulation that requires new builds to get Ethernet wiring (as far as I know).
There’s got to be a way to use the existing phone infrastructure.
I know the technology exists. It’s one of the rare cases where the technology exists and is mature, but nobody can be bothered to make products for it.
The standards that run powerline adapters (HomePlug AV200, AV500, G.hn 2400) can work with any pair of wires. They should work ten times better on dedicated phone wires than on noisy power wires, if only manufacturers could be bothered to pull their fingers out of their arse and make the products that are needed.
After countless years of research, I finally found one German manufacturer that’s making what needs to be made: https://www.gigacopper.net/wp/en/home-networking/
I was lazy so I ordered online in self-service (which is definitely the wrong way to go about it). It’s available on eBay DE and Amazon DE; it’s possible to order from either with a UK account, just make sure to enter a UK address for delivery (some items don’t allow it).
The better approach is almost certainly to speak to the seller to get a quote, with international shipping and the import invoice excluding VAT (to avoid paying VAT on VAT).
The package got the usual Royal Mail treatment:
* The package was shipped by DHL Germany
* The package was transferred to Royal Mail when entering the UK
* After some days, the DHL website said they tried to deliver but nobody was home; this is bullshit
* The Royal Mail website said the package reached the depot and was awaiting delivery; this is bullshit
* In reality, the package was stuck at the border, as usual
* Googled “website to pay import fee on parcel”
* Entered the DHL tracking number into the Royal Mail form for a Royal Mail tracking number
* The website said that the parcel had import fees to pay; this is correct
* Paid the fee online, 20% VAT + a few pounds of handling fees
* The package was then scheduled for delivery a few days later
* Royal Mail and DHL updated their status another two or three times with false information
* Royal Mail delivered a letter saying there was a package waiting on fees, even though they had already been paid
Basically, you need to follow the tracking regularly until the package is tagged as lost or failed delivery, which is the cue to pay import fees.
This is the normal procedure for buying things from Europe since Brexit in 2020. It’s actually quite shocking that Royal Mail still hasn’t updated their tracking system to be able to give a status of “waiting on import fees to be paid online”. They’ve had 6 years!
This is the gigacopper G4201TM: 1 RJ11 phone port, 1 RJ45 gigabit Ethernet port, 1 power input
* It came with a German to UK power adapter (unexpected and useful)
* It came with a standard RJ11 cable (expected and useless)
* A 3M removable hanging strip to stick it to the wall; the device is very light
There is a gigacopper G4202TM, with an RJ45 jack to connect to the phone line instead of an RJ11 (not sure if it’s a newer model or just a variant, as that one has two gigabit Ethernet ports). Don’t be confused by an RJ45 jack that is not actually an Ethernet port.
There are also the gigacopper G4201C (1 port) and G4204C (4 ports) for Ethernet over coaxial. Some countries have coax in every room for TV/satellite. This may be of interest to some readers.
Plugged it in and it works!
I discovered soon afterwards that I bought the wrong item. There is an InHome and a Client/Server variant of the product. Make sure to buy the InHome variant.
* The InHome variant can have up to 16 devices, communicating with any peer on the medium, with sub-millisecond latency.
* The client-server variant is preconfigured as a pair, splitting the bandwidth 70% download / 30% upload, with a few milliseconds of latency. I think it’s a use case for ISPs and long-range connections.
Thankfully the difference is only the firmware. I spoke to the vendor who was very helpful and responsive. They sent me the firmware and the tools to patch.
I have a fetish for low latency. This screenshot is oddly satisfying.
The web interface says 1713 Mbps on the physical layer, the debugging tool says PHONE 200MHz — Connected 1385 Mbps.
I wanted to verify whether the device can do a full Gigabit. Unfortunately I realized I don’t have any device that can test that.
Phones are wireless, which is too slow to test anything; out of curiosity I checked, and my phone did 100 Mbps to 400 Mbps right next to the router. I grabbed two laptops only to realize they didn’t have any Ethernet port. I dug up an old laptop from storage with an Ethernet port, but it couldn’t boot: the CPU fan didn’t start, and the laptop refused to boot with a dead fan.
There is a hard lesson here: 1 Gbps ought to be enough for any home. Using the phone line is as good as having Ethernet wiring through the house if it can deliver a (shared) 1.7 Gbps link to multiple rooms.
Still, I really wanted to verify that the device can do a full Gbps, so I procured a USB-C to Ethernet adapter.
Full speed achieved, testing from a phone to a computer with iperf3.
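For anyone who wants to reproduce the test, here’s a rough sketch of scripting it from the computer side (a sketch only: the server address is a placeholder, iperf3 must be installed on both machines, the other end is assumed to already be running iperf3 -s, and the JSON field path is from memory and may vary between iperf3 versions):

import json
import subprocess

def measure_throughput(server="192.168.1.50", seconds=10):
    # Run an iperf3 client against the server and ask for a JSON report (-J).
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    # Received throughput over the whole run, converted to Mbps.
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    print(f"~{measure_throughput():.0f} Mbps")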
Some readers might wonder about the wiring.
I didn’t check the wiring before buying anything because it’s pointless. British sockets are always daisy chained in an incomprehensible maze.
Phone sockets need 2 wires and can be daisy chained. Ethernet sockets need 8 wires. They often use the same Cat5 cable because it’s the most widely available (an 8-wire cable; the 6 extra wires can remain unconnected).
It’s possible to swap the phone socket for an RJ45 socket, if you only have 2 sockets connected with the right cable. It’s not possible when sockets are daisy chained. (You could put a double or triple RJ45 socket with a switch to break a daisy chain, but it quickly becomes impractical in a British house with 5 to 10 sockets in an arbitrary layout.)
I opened one socket in the office room. There are two Cat5 cables daisy chained. There are 3 wires connected.
It’s probably daisy chained with the other socket in the room, or it’s daisy chained with the socket in the other room that’s closer. Who knows.
I opened the BT master socket in the technical room. It should have the cables coming from the other rooms. It should connect the internal phone wires with the external phone line.
There is one single Cat5 cable. There are 4 wires connected. It’s definitely not a master socket. WTF?!
It’s interesting that this socket has 4 wires connected but the socket in the office has 3 wires connected. The idiot who did the wiring was inconsistent. The gigacopper device can operate over 2 wires (200 MHz Phone SISO) or over 4 wires (100 MHz Phone MIMO). I can try the other modes if I finish the job.
The search for the master socket continues. The cables from the other floors should all be coming down somewhere around here. There is a blank plate next to it (right).
This might be the external phone line? A bunch of wires are crimped together, and the colours do not match. It’s a hell of a mess.
The only sure thing is that they are different cables, because they are different colours. They might be going to a junction box somewhere else. Probably behind a wall that’s impossible to access!
Conclusion: There is zero chance to get proper Ethernet wiring out of this mess.
The gigacopper device to do gigabit Ethernet over phone line is a miracle!
There is an enormous untapped market for gigabit Ethernet over phone sockets in the UK.
...
Read the original on thehftguy.com »
I’m David Rosenthal, and this is a place to discuss the work I’m doing in Digital Preservation.
...
Read the original on blog.dshr.org »
There’s a kind of polite fiction at the heart of the software industry. It goes something like this:
Estimating how long software projects will take is very hard, but not impossible. A skilled engineering team can, with time and effort, learn how long it will take for them to deliver work, which will in turn allow their organization to make good business plans.
This is, of course, false. As every experienced software engineer knows, it is not possible to accurately estimate software projects. The tension between this polite fiction and its well-understood falseness causes a lot of strange activity in tech companies.
For instance, many engineering teams estimate work in t-shirt sizes instead of time, because it just feels too obviously silly to the engineers in question to give direct time estimates. Naturally, these t-shirt sizes are immediately translated into hours and days when the estimates make their way up the management chain.
Alternatively, software engineers who are genuinely trying to give good time estimates have ridiculous heuristics like “double your initial estimate and add 20%”. This is basically the same as giving up and saying “just estimate everything at a month”.
Should tech companies just stop estimating? One of my guiding principles is that when a tech company is doing something silly, they’re probably doing it for a good reason. In other words, practices that appear to not make sense are often serving some more basic, illegible role in the organization. So what is the actual purpose of estimation, and how can you do it well as a software engineer?
Before I get into that, I should justify my core assumption a little more. People have written a lot about this already, so I’ll keep it brief.
I’m also going to concede that sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service, and I’m being asked to update the text in a link, I can accurately estimate the work at something like 45 minutes: five minutes to push the change up, ten minutes to wait for CI, thirty minutes to deploy.
For most of us, the majority of software work is not like this. We work on poorly-understood systems and cannot predict exactly what must be done in advance. Most programming in large systems is research: identifying prior art, mapping out enough of the system to understand the effects of changes, and so on. Even for fairly small changes, we simply do not know what’s involved in making the change until we go and look.
The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions. Nobody does that now, because it doesn’t work: programmers must be empowered to make architectural decisions, because they’re the ones who are actually in contact with the code. Even if it did work, that would simply shift the impossible-to-estimate part of the process backwards, into the planning meeting (where of course you can’t write or run code, which makes it near-impossible to accurately answer the kind of questions involved).
In short: software engineering projects are not dominated by the known work, but by the unknown work, which always takes 90% of the time. However, only the known work can be accurately estimated. It’s therefore impossible to accurately estimate software projects in advance.
Estimates do not help engineering teams deliver work more efficiently. Many of the most productive years of my career were spent on teams that did no estimation at all: we were either working on projects that had to be done no matter what, and so didn’t really need an estimate, or on projects that would deliver a constant drip of value as we went, so we could just keep going indefinitely.
In a very real sense, estimates aren’t even made by engineers at all. If an engineering team comes up with a long estimate for a project that some VP really wants, they will be pressured into lowering it (or some other, more compliant engineering team will be handed the work). If the estimate on an undesirable project - or a project that’s intended to “hold space” for future unplanned work - is too short, the team will often be encouraged to increase it, or their manager will just add a 30% buffer.
One exception to this is projects that are technically impossible, or just genuinely prohibitively difficult. If a manager consistently fails to pressure their teams into giving the “right” estimates, that can send a signal up that maybe the work can’t be done after all. Smart VPs and directors will try to avoid taking on technically impossible projects.
Another exception to this is areas of the organization that senior leadership doesn’t really care about. In a sleepy backwater, often the formal estimation process does actually get followed to the letter, because there’s no director or VP who wants to jump in and shape the estimates to their ends. This is one way that some parts of a tech company can have drastically different engineering cultures to other parts. I’ll let you imagine the consequences when the company is re-orged and these teams are pulled into the spotlight.
Estimates are political tools for non-engineers in the organization. They help managers, VPs, directors, and C-staff decide on which projects get funded and which projects get cancelled.
The standard way of thinking about estimates is that you start with a proposed piece of software work, and you then go and figure out how long it will take. This is entirely backwards. Instead, teams will often start with the estimate, and then go and figure out what kind of software work they can do to meet it.
Suppose you’re working on a LLM chatbot, and your director wants to implement “talk with a PDF”. If you have six months to do the work, you might implement a robust file upload system, some pipeline to chunk and embed the PDF content for semantic search, a way to extract PDF pages as image content to capture formatting and diagrams, and so on. If you have one day to do the work, you will naturally search for simpler approaches: for instance, converting the PDF to text client-side and sticking the entire thing in the LLM context, or offering a plain-text “grep the PDF” tool.
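For a concrete sense of the one-day end of that spectrum, here’s a minimal sketch of the stuff-the-whole-PDF-into-the-context approach (an illustration only, assuming the pypdf library; the actual model call is left out because it depends on the provider’s API):

from pypdf import PdfReader  # pip install pypdf

def build_pdf_prompt(pdf_path, question):
    # Extract every page's text and pack the whole document into one prompt,
    # leaning on a large context window instead of chunking and embeddings.
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return (
        "Answer the question using only the document below.\n\n"
        f"{text}\n\nQuestion: {question}"
    )

# The returned prompt would then be sent to whatever chat model the product
# uses; that call is omitted here.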
This is true even at the level of individual lines of code. When you have weeks or months until your deadline, you might spend a lot of time thinking airily about how you could refactor the codebase to make your new feature fit in as elegantly as possible. When you have hours, you will typically be laser-focused on finding an approach that will actually work. There are always many different ways to solve software problems. Engineers thus have quite a lot of discretion about how to get it done.
So how do I estimate, given all that?
I gather as much political context as possible before I even look at the code. How much pressure is on this project? Is it a casual ask, or do we have to find a way to do this? What kind of estimate is my management chain looking for? There’s a huge difference between “the CTO really wants this in one week” and “we were looking for work for your team and this seemed like it could fit”.
Ideally, I go to the code with an estimate already in hand. Instead of asking myself “how long would it take to do this”, where “this” could be any one of a hundred different software designs, I ask myself “which approaches could be done in one week?”.
I spend more time worrying about unknowns than knowns. As I said above, unknown work always dominates software projects. The more “dark forests” in the codebase this feature has to touch, the higher my estimate will be - or, more concretely, the tighter I need to constrain the set of approaches to the known work.
Finally, I go back to my manager with a risk assessment, not with a concrete estimate. I don’t ever say “this is a four-week project”. I say something like “I don’t think we’ll get this done in one week, because X Y Z would need to all go right, and at least one of those things is bound to take a lot more work than we expect.” Ideally, I go back to my manager with a series of plans, not just one:
* We tackle X Y Z directly, which might all go smoothly but if it blows out we’ll be here for a month
* We bypass Y and Z entirely, which would introduce these other risks but possibly allow us to hit the deadline
* We bring in help from another team who’s more familiar with X and Y, so we just have to focus on Z
In other words, I don’t “break down the work to determine how long it will take”. My management chain already knows how long they want it to take. My job is to figure out the set of software approaches that match that estimate.
Sometimes that set is empty: the project is just impossible, no matter how you slice it. In that case, my management chain needs to get together and figure out some way to alter the requirements. But if I always said “this is impossible”, my managers would find someone else to do their estimates. When I do that, I’m drawing on a well of trust that I build up by making pragmatic estimates the rest of the time.
Many engineers find this approach distasteful. One reason is that they don’t like estimating in conditions of uncertainty, so they insist on having all the unknown questions answered in advance. I have written a lot about this in Engineers who won’t commit and How I provide technical clarity to non-technical leaders, but suffice to say that I think it’s cowardly. If you refuse to estimate, you’re forcing someone less technical to estimate for you.
Some engineers think that their job is to constantly push back against engineering management, and that helping their manager find technical compromises is betraying some kind of sacred engineering trust. I wrote about this in Software engineers should be a little bit cynical. If you want to spend your career doing that, that’s fine, but I personally find it more rewarding to find ways to work with my managers (who have almost exclusively been nice people).
Other engineers might say that they rarely feel this kind of pressure from their directors or VPs to alter estimates, and that this is really just the sign of a dysfunctional engineering organization. Maybe! I can only speak for the engineering organizations I’ve worked in. But my suspicion is that these engineers are really just saying that they work “out of the spotlight”, where there’s not much pressure in general and teams can adopt whatever processes they want. There’s nothing wrong with that. But I don’t think it qualifies you to give helpful advice to engineers who do feel this kind of pressure.
The common view is that a manager proposes some technical project, the team gets together to figure out how long it would take to build, and then the manager makes staffing and planning decisions with that information. In fact, it’s the reverse: a manager comes to the team with an estimate already in hand (though they might not come out and admit it), and then the team must figure out what kind of technical project might be possible within that estimate.
This is because estimates are not by or for engineering teams. They are tools used for managers to negotiate with each other about planned work. Very occasionally, when a project is literally impossible, the estimate can serve as a way for the team to communicate that fact upwards. But that requires trust. A team that is always pushing back on estimates will not be believed when they do encounter a genuinely impossible proposal.
When I estimate, I extract the range my manager is looking for, and only then do I go through the code and figure out what can be done in that time. I never come back with a flat “two weeks” figure. Instead, I come back with a range of possibilities, each with their own risks, and let my manager make that tradeoff.
It is not possible to accurately estimate software work. Software projects spend most of their time grappling with unknown problems, which by definition can’t be estimated in advance. To estimate well, you must therefore basically ignore all the known aspects of the work, and instead try and make educated guesses about how many unknowns there are, and how scary each unknown is.
edit: I should thank one of my readers, Karthik, who emailed me to ask about estimates, thus revealing to me that I had many more opinions than I thought.
...
Read the original on www.seangoedecke.com »
We’re excited to announce our latest major release, Zotero 8. Zotero 8 builds on the new design and features of Zotero 7 and includes a huge number of improvements and refinements.
Zotero 8 introduces a new unified citation dialog, replacing the previous citation dialog (the “red bar”), the “classic” citation dialog, and the Add Note dialog (the “yellow bar”).
The new dialog has two modes: List mode and Library mode. List mode lets you quickly search for citations from across your Zotero libraries by title, creator, and year. Library mode includes a library browser, letting you find items in specific libraries or collections. You can switch between the two modes with a single click, preserving any added items or entered search terms. By default, it will open in the last mode you used, but you can choose a different default mode in the settings.
In Zotero 7, we added the ability to quickly add citations for selected items and open documents. In the new dialog, these options are available in both List mode and Library mode, so you can make these quick selections even if you otherwise prefer to add items via the library browser.
As before, once you’ve selected an item, you can click on its bubble to customize the citation with a page number, prefix, etc. It’s also now possible to add any locator — not just a page number — right from the search bar by typing the full or short name (e.g., “line 10” or “l. 10”) after the citation and pressing Enter/Return.
You can switch between adding citations and adding notes using buttons in the bottom left, corresponding to the Add/Edit Citation and Add Note buttons in your word processor.
Annotations you make on PDFs, EPUBs, and webpage snapshots now show up under their parent attachments in the items list.
Showing annotations in the items list makes it easier to view annotations across a library or collection, and it also makes it possible to search for annotations directly. For example, you can search for all annotations in a collection with a given tag and then create a note from those annotations or copy them to an external text editor with Quick Copy.
In Advanced Search, you can use “Item Type” “is” “Annotation” to match annotations or use the Annotation Text and Annotation Comment search conditions to search for specific parts of the annotation.
You can assign tags to selected annotations by dragging them to the tag selector, just like other items.
Selected annotations show up in the item pane, grouped by top-level item.
We’ve added a new Appearance panel in the reader that provides quick access to view settings and introduces support for reader themes.
The view settings are per-document settings. Themes are applied globally for all documents, including in the attachment preview in the item pane, and apply to PDFs, EPUBs, and webpage snapshots.
We offer a number of built-in themes (“Dark”, “Snow”, “Sepia”), and you can create custom themes just by specifying a foreground and background color. (Some other theme engines require additional accent colors, but we’ve tried to make this as simple as possible for users by automatically adjusting other colors based on the foreground and background colors.) You can set a different theme that applies to light mode and dark mode.
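For readers curious what “automatically adjusting other colors” might look like, here’s a minimal sketch of deriving intermediate shades from just a background and foreground color (an illustration of the general idea, not Zotero’s actual implementation; the example colors are made up):

def blend(bg, fg, t):
    # Mix two "#rrggbb" colors; t=0 returns the background, t=1 the foreground.
    b = [int(bg[i:i + 2], 16) for i in (1, 3, 5)]
    f = [int(fg[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round(bc + (fc - bc) * t):02x}" for bc, fc in zip(b, f))

# e.g. for a dark theme: borders stay close to the background,
# secondary text sits closer to the foreground.
border = blend("#1e1e1e", "#e0e0e0", 0.2)
secondary_text = blend("#1e1e1e", "#e0e0e0", 0.7)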
The themes replace the previous on-by-default “Use Dark Mode for Content” option, which inverted images in dark mode. We’re now simply darkening images a bit when using a dark theme. Images and ink annotations in the reader sidebar and note editor are now likewise only darkened (and only when Zotero itself is in dark mode).
When possible, we also try to apply themes to PDF pages containing full-page images, such as scanned papers, by replacing whitish/dark colors with theme colors. (Otherwise we simply darken the page slightly.)
It’s now possible to open notes in tabs in addition to separate windows. Note tabs fill the whole window, with wide margins for better readability and a clean, distraction-free space for note-taking.
By default, double-clicking a note in the items list will open it in a tab. You can choose to open the note in a separate window instead from the context menu, and you can change the default behavior using the “Open notes in new windows instead of tabs” setting in the General pane of the settings.
Notes in tabs have a separate font size setting in the View menu.
Reading Mode reformats webpage snapshots for easier reading, with unnecessary page elements removed. You can adjust line height and other view options from the Appearance panel.
We’ve reworked the tabs menu to make it faster to interact with via the keyboard.
You can now press Ctrl/Cmd-; to bring up the menu at any time.
Once the menu is open, it simultaneously accepts search input, up/down navigation, and row selection, without the need to move between different parts of the menu. You can simply start typing the name of an open tab and then press Enter/Return to switch to it once you’ve narrowed down the list.
It’s also possible to quickly close multiple tabs by moving between the row close buttons with up/down and pressing space bar to close a tab.
Zotero now automatically keeps attachment filenames in sync with parent item metadata as you make changes (e.g., changing the title). In previous versions, while Zotero would automatically rename files when you first added them to your library, if you later edited the item’s metadata, you would need to right-click on the attachment and select “Rename File from Parent Metadata”.
You can configure which file types renaming applies to from the General tab of the Zotero settings.
After upgrading to this version, existing eligible files that don’t match the current filename format won’t be automatically renamed, but you can choose to rename them en masse from the Zotero settings. Zotero will also prompt you to rename all files if you change the filename format.
“Rename File from Parent Metadata” has been removed from the item context menu. If a filename doesn’t match the configured filename format (e.g., because automatic renaming is disabled or you changed the format but didn’t choose to rename all files), you can click the “Rename File to Match Parent Item” button next to the filename in the attachment’s item pane to rename it.
Zotero 7 introduced more consistent handling of attachment titles, preserving simpler, less-redundant titles (e.g., “Full Text PDF” or “Preprint PDF”) in cases where the title was previously changed to match the filename. Zotero 8 further refines its renaming and titling logic when adding multiple and/or non-primary attachments, to bring the functionality better in line with the intended behavior.
We’ve also added a “Normalize Attachment Titles” option under Tools → Manage Attachments to update old primary attachments with titles matching the filename to use simpler titles such as “PDF”.
While we recommend the default behavior, allowing Zotero to rename primary files and keep them renamed while using simpler titles in the items list, if you really prefer to view filenames instead of titles, you can now enable the “Show attachment filenames in the items list” option in the General pane of the settings.
Zotero 8 adds a version for Linux running on ARM64 devices. This includes ARM-based Chromebooks, Apple Silicon Macs running Linux (Linux VMs, Asahi Linux), and Raspberry Pis.
If you’ve been unable to run Zotero on your ARM-based device, or you’ve been running the x86_64 version under emulation, give it a try.
We’ve made a number of changes across the interface to address common requests:
* A new button in the library tab allows you to quickly close the item pane without dragging its edge or using the menus.
* You can reorder item pane sections by dragging their icons in the side navigation bar.
* You can drag items, collections, and searches into the trash.
* You can drag attachments, notes, and related items from the item pane (e.g., to copy files to the filesystem or use Quick Copy).
* Collections automatically expand when you drag over them, making it easier to drop collections or items into subcollections.
* You can delete attachments from the item pane.
* Tabs maintain their size as you close them for faster closing of multiple tabs.
With Zotero 8, the Zotero Connector save popup can autocomplete tags in your Zotero library and allows you to add a note to items as you save them.
Zotero 8 includes much more than we can list here. See the changelog for additional details.
Zotero 8 requires macOS 10.15 or later, Windows 10 or later, or a Firefox 140–compatible Linux system.
Going forward, we’re changing how we put out Zotero releases. Read about Zotero’s new release schedule.
If you’re already running Zotero, you can upgrade from within Zotero by going to Help → “Check for Updates…”.
Don’t yet have Zotero? Download Zotero 8 now.
...
Read the original on www.zotero.org »
Windows PCs by default will back up their encryption keys to the cloud, and Microsoft isn’t afraid to share those with the FBI if requested.
Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys when presented with a valid legal order. These keys can be used to decrypt and access the data on a computer running Windows, giving law enforcement the means to break into a device and access its data.
The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.
This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to your online account so that users can easily recover their data in scenarios where they might get locked out. This can be disabled, letting you choose where to save them locally, but the default behavior is to store the key in Microsoft’s cloud when setting up a PC with a Microsoft Account.
“While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide… how to manage their keys,” Microsoft spokesperson Charles Chamberlayne said in a statement to Forbes.
Microsoft told Forbes that it receives around 20 requests for BitLocker encryption keys from the FBI a year, but the majority of requests cannot be met because the encryption key was never uploaded to the company’s cloud.
This is notable as other tech companies, such as Apple, have famously refused to provide law enforcement with access to encrypted data stored on their products. Apple has openly fought against the FBI in the past when it was asked to provide a backdoor into an iPhone. Other tech giants, such as Meta, will store encryption keys in the cloud, but use zero-knowledge architectures that keep the stored keys encrypted so that only the user can access them.
It’s frankly shocking that the encryption keys that do get uploaded to Microsoft aren’t encrypted on the cloud side, too. That would prevent Microsoft from seeing the keys, but it seems that, as things currently stand, those keys are accessible to the company, and that is a privacy nightmare for customers.
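As a sketch of what that kind of protection could look like (an illustration under assumed details, not how Microsoft or any other vendor actually implements it), a recovery key can be wrapped with a key derived from a secret only the user knows before it ever leaves the machine, so the server stores nothing it can read on its own:

import base64, hashlib, os
from cryptography.fernet import Fernet  # pip install cryptography

def wrap_recovery_key(recovery_key: bytes, password: str) -> dict:
    # Derive a wrapping key from the user's password, then encrypt the
    # recovery key with it. The server would store only the salt and the
    # wrapped blob, neither of which is usable without the password.
    salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    wrapped = Fernet(base64.urlsafe_b64encode(derived)).encrypt(recovery_key)
    return {"salt": salt, "wrapped_key": wrapped}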
To see Microsoft so willingly hand over the keys to encrypted Windows PCs is concerning, and should make everybody using a modern Windows computer think twice before backing up their keys to the cloud. You can see which PCs have their BitLocker keys stored on Microsoft’s servers on the Microsoft Account website, which will let you delete them if present.
...
Read the original on www.windowscentral.com »