10 interesting stories served every morning and every evening.


GitHub Copilot is moving to usage-based billing

github.blog

TL;DR: Today, we are announcing that all GitHub Copilot plans will transition to usage-based billing on June 1, 2026.

Instead of counting premium requests, every Copilot plan will include a monthly allotment of GitHub AI Credits, with the option for paid plans to purchase additional usage. Usage will be calculated based on token consumption, including input, output, and cached tokens, using the listed API rates for each model.

This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business and experience for all users.

To help customers prepare, we are also launching a preview bill experience in early May, giving users and admins visibility into projected costs before the June 1 transition. This will be available to users via their Billing Overview page when they log in to github.com.

Why we’re making this change

Copilot is not the same product it was a year ago.

It has evolved from an in-editor assistant into an agentic platform capable of running long, multi-step coding sessions, using the latest models, and iterating across entire repositories. Agentic usage is becoming the default, and it brings significantly higher compute and inference demands.

Today, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount. GitHub has absorbed much of the escalating inference cost behind that usage, but the current premium request model is no longer sustainable.

Usage-based billing fixes that. It better aligns pricing with actual usage, helps us maintain long-term service reliability, and reduces the need to gate heavy users.

What’s changing

Starting June 1, premium request units (PRUs) will be replaced by GitHub AI Credits.

Credits will be consumed based on token usage, including input, output, and cached tokens, according to the published API rates for each model.
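As a rough illustration of how token-based billing works, the dollar cost of a request can be computed from the three token counts and per-token rates. The rates below are hypothetical placeholders, not GitHub's published prices:

```python
# Hypothetical per-million-token rates in dollars (illustrative only,
# not GitHub's actual published API rates).
RATES_PER_MTOK = {"input": 3.00, "output": 15.00, "cached": 0.30}

def credit_cost(input_tokens: int, output_tokens: int, cached_tokens: int) -> float:
    """Dollar cost of one request at the illustrative rates above."""
    usage = {"input": input_tokens, "output": output_tokens, "cached": cached_tokens}
    return sum(RATES_PER_MTOK[kind] * count / 1_000_000 for kind, count in usage.items())

# A quick chat question versus a long agentic session:
chat = credit_cost(2_000, 500, 0)                  # about a cent
session = credit_cost(400_000, 60_000, 1_500_000)  # a few dollars
```

Under the old premium-request model these two interactions could cost the user the same; under token-based billing the long session consumes roughly 200× more credits than the quick question.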

A few important details:

Base plan pricing is not changing. Copilot Pro remains $10/month, Pro+ remains $39/month, Business remains $19/user/month, and Enterprise remains $39/user/month.

Code completions and Next Edit suggestions remain included in all plans and do not consume AI Credits.

Fallback experiences will no longer be available. Today, users who exhaust PRUs may fall back to a lower-cost model and continue working. Under the new model, usage will instead be governed by available credits and admin budget controls.

Copilot code review will also consume GitHub Actions minutes, in addition to GitHub AI Credits. These minutes are billed at the same per-minute rates as other GitHub Actions workflows.

Last week, we also rolled out temporary changes to Copilot Individual plans, including Free, Pro, Pro+, and Student, and paused self-serve Copilot Business plan purchases. These were reliability and performance measures as we prepare for the broader transition to usage-based billing. We will loosen usage limits once usage-based billing is in effect.

What this means for individuals

Copilot Pro and Pro+ monthly subscriptions will include monthly AI Credits aligned to their current subscription prices:

Copilot Pro: $10/month, including $10 in monthly AI Credits

Copilot Pro+: $39/month, including $39 in monthly AI Credits

Users on a monthly Pro or Pro+ plan will automatically migrate to usage-based billing on June 1, 2026.

Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires. Model multipliers will increase on June 1 (see table) for annual plan subscribers only. At expiration, they will transition to Copilot Free with the option to upgrade to a paid monthly plan. Alternatively, they may convert to a monthly paid plan before their annual plan expires, and we will provide prorated credits for the remaining value of their annual plan.

What this means for businesses and enterprises

Copilot Business and Copilot Enterprise monthly seat pricing remains unchanged:

Copilot Business: $19/user/month, including $19 in monthly AI Credits

Copilot Enterprise: $39/user/month, including $39 in monthly AI Credits

To support the transition, existing Copilot Business and Copilot Enterprise customers will automatically receive promotional included usage for June, July, and August:

Copilot Business: $30 in monthly AI Credits

Copilot Enterprise: $70 in monthly AI Credits

We are also introducing pooled included usage across a business, which helps eliminate stranded capacity. Instead of each user’s unused included usage being isolated, credits can be pooled across the organization.
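A small sketch of why pooling eliminates stranded capacity, using made-up per-user usage numbers and the $19 Business allotment (the function names are illustrative, not a GitHub API):

```python
def per_user_overage(usage_dollars, allotment):
    """Isolated model: each user's unused allotment is stranded."""
    return sum(max(0.0, u - allotment) for u in usage_dollars)

def pooled_overage(usage_dollars, allotment):
    """Pooled model: overage starts only once the whole org exceeds the shared pool."""
    return max(0.0, sum(usage_dollars) - allotment * len(usage_dollars))

usage = [5.0, 12.0, 40.0]      # three users' monthly usage in dollars
per_user_overage(usage, 19.0)  # 21.0: the heavy user pays despite spare capacity
pooled_overage(usage, 19.0)    # 0.0: teammates' unused credits absorb the spike
```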

Admins will also have new budget controls. They will be able to set budgets at the enterprise, cost center, and user levels. When the included pool is exhausted, organizations can choose whether to allow additional usage at published rates or cap spend.

The bottom line

Plan prices aren’t changing. You’ll have full control over what you spend, tools to track your usage, and the option to purchase more AI Credits if and when you need them.

If you have questions, visit our documentation for individuals and for businesses and enterprises, and our FAQ and related discussion.

Written by

Mario Rodriguez leads the GitHub Product team as Chief Product Officer. His core identity is being a learner and his passion is creating developer tools—so much so that he has spent the last 20 years living that mission in leadership roles across Microsoft and GitHub. Mario most recently oversaw GitHub’s AI strategy and the GitHub Copilot product line, launching and growing Copilot across thousands of organizations and millions of users. Mario spends time outside of GitHub with his wife and two daughters. He also co-founded and co-chairs a charter school in an effort to advance education in rural regions of the United States.


Staring at walls to improve focus and productivity

www.alexselimov.com

I came across a video by Simple Lucas describing a routine to improve focus and productivity.

The routine was basically:

Don’t use any screens/entertainment when trying to focus on work.

When you start to feel mentally drained, sit and stare at a wall for x minutes to recover focus.

I’ve been trying it, and it’s a very effective (but hard) routine.

The problem

The core problem is that most people are in a state of information overload by default.

A paper published in 2012 showed that in 2008 the average person was receiving 34 GB of information daily, with a daily information exposure growth rate of about 5.4% per year.[1]

Extrapolating that trend, we would be at about 87 GB of data today.
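The extrapolation is simple compound growth; applying the paper's 5.4% annual rate to the 2008 baseline:

```python
def projected_gb_per_day(year, base_gb=34.0, rate=0.054, base_year=2008):
    """Compound the 2008 daily-information baseline at ~5.4% per year."""
    return base_gb * (1 + rate) ** (year - base_year)

projected_gb_per_day(2026)  # ~87.6 GB/day
```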

This calculation includes audio, visual, and text data and incorporates quality into the measurement, i.e. 10 minutes of HD video has more information than 10 minutes of 480p video.

It’s unclear to me exactly how the quality impacts things, but regardless it is obvious that we are all being drowned in a sea of information.

I certainly go through periods of “brain fog” and lack of focus/motivation.

These periods usually go something like:

1. Get a bad night of sleep (up late for an event, kids keep waking me up).

2. Wake up very tired, so consume large amounts of caffeine.

3. Have trouble focusing after 2-3 cups, so use media while working to dull the pain (music/podcasts) or take more “breaks” (reading Hacker News).

4. Stay up late because I’m wired on caffeine and dopamine from scrolling.

5. Go back to 2.

I find these cycles very hard to break out of when I’m in them.

Each bit of media consumption constitutes a small dopamine hit.

Large numbers of small hits put you in a hole, where you need even more/stronger hits to feel good.

Disconnecting

The obvious solution is to disconnect from scrolling, but that doesn’t overcome the biggest issue.

When I’m in this “brain fog” cycle (and sometimes outside of it), I find that around 1-2 pm I hit a wall.

My head starts hurting, my motivation is trash, and my productivity significantly degrades.

My first instinct is to go for more coffee.

That usually lets me keep working, but at a slow/painful pace.

While looking for focusing strategies I came across the life-changing solution…

Stare at a Wall!

After watching Simple Lucas’ video, I decided to try it when I hit my focus wall.

It worked.

In my attempts, I combined wall staring with a few other concepts I had heard about.

First was activating the parasympathetic nervous system by staring at the wall “out of focus” and using peripheral vision.

Second was incorporating “mind blanking”, which means trying to think of nothing.

I tried intervals of 5-10 minutes, and when I was done, my focus was back!

What I didn’t expect was how difficult it would be.

Sitting for 5-10 minutes staring at a wall without thinking of anything is hard!

I relate it somewhat to the feeling I have about working out.

Oftentimes I want to avoid it because it’s hard, but I’m always happy when I push through and complete it.

It was the exact same experience with wall staring.

So far I’ve been feeling significant focus/productivity improvements.

I’ve also been using some other strategies to improve focus, which I’ll talk about in a future post.

I plan to continue this routine and will post an update on how much it has impacted my productivity and focus.

Thanks for reading!

[1] https://ijoc.org/index.php/ijoc/article/view/1566

4TB of voice samples were just stolen from 40,000 AI contractors

app.oravys.com

Forensic intelligence // Breach analysis

4TB of voice samples were just stolen from 40,000 AI contractors. Here is how to verify whether yours is being weaponized.

By the ORAVYS forensic desk

Published April 24, 2026

~7 min read

On April 4, 2026, the extortion group Lapsus$ posted Mercor on its leak site. The dump is reported at roughly four terabytes and bundles a payload that breach analysts have been warning about for two years: voice biometrics paired with the same person’s government-issued identity document. According to the leaked sample index, the archive covers more than 40,000 contractors who signed up to label data, record reading passages, and run through verification calls for AI training.

Five contractor lawsuits were filed within ten days of the post. The plaintiffs argue that the company collected voice prints under a “training data” framing without making clear they were also a permanent biometric identifier. The lawsuits matter, but the people whose voices were already exfiltrated have a more immediate question. What does an attacker actually do with thirty seconds of someone’s clean read voice plus a scan of their driver’s license?

Why this breach is different

Most voice leaks in the last decade fell into one of two buckets. Either a call center got popped and recordings were stolen with no easy way to map them back to identity, or an ID-document broker leaked driver’s licenses and selfies without any audio attached. Mercor merged both columns. The contractor onboarding pipeline asked for a passport or driver’s license scan, then a webcam selfie, then a sit-down voice recording reading scripted prompts in a quiet room. That sequence, in one row of one database, is exactly what a synthetic voice cloning service needs as input.

The Wall Street Journal reported in February 2026 that high-quality voice cloning now requires roughly fifteen seconds of clean reference audio for tools available off the shelf. The Mercor recordings are reported to average two to five minutes of studio-clean speech per contractor. That is far past the threshold. Pair it with a verified ID document and the attacker has both the clone and the credential needed to put the clone to work.

What attackers can now do with stolen voice data

The threat models below are not speculative. Each is a documented technique already used in the wild before this breach.

Bank verification bypass. Several US and UK banks still treat voiceprint matching as one of two factors. A clone of the account holder reading a challenge phrase clears the audio gate, leaving only a knowledge question that often comes from the same leaked dataset.

Vishing the victim’s employer. Calling HR or finance pretending to be the employee to redirect payroll, request a wire, or unlock a workstation. The Krebs on Security archive lists more than two dozen confirmed cases since 2023.

Deepfake video calls in the Hong Kong Arup template. In 2024 a finance worker at Arup wired roughly 25 million dollars after a multi-person deepfake video call. The voices and faces had been built from public footage. Mercor leaked something better than public footage: studio audio plus a verified ID.

Insurance claim fraud. Pindrop reported a 475 percent year-over-year increase in synthetic voice attacks against insurance call centers across 2025. Auto, life, and disability claims are the prime targets because they are settled by phone.

Romance and grandparent scams targeting family members. The FBI Internet Crime Complaint Center logged 2.3 billion dollars in losses for victims aged 60 and over in calendar year 2026. The single fastest-growing category was emergency impersonation calls, where the synthetic voice claims to be a relative in trouble.

How to check if your voice is being misused

If you ever uploaded a voice sample to Mercor, or to any of the other AI training brokers that operated through 2025, treat your voice the way you would treat a leaked password. You cannot rotate it, but you can change what it unlocks. Here is the short list.

Self-audit your public audio footprint. Search YouTube, podcast directories, and old Zoom recordings for samples of your voice that are publicly indexable. Take down what you can. The less reference audio is in the open, the less robust an attacker’s clone.

Set up a verbal codeword with family and finance contacts. Pick a phrase that has never been spoken on a recording and never typed in chat. Brief the people who handle money on your behalf. If a call ever asks for a transfer, the codeword is mandatory.

Rotate where voiceprints are still in use. Google Voice Match, Amazon Alexa Voice ID, Apple Personal Voice, and any banking voiceprint enrollment can be deleted and re-enrolled. Do that now, ideally from a new recording in a different acoustic environment than the leaked sample.

Tell your bank to disable voiceprint as a verification factor. Ask in writing for multi-factor authentication that combines an app token or hardware key with a knowledge factor. Many banks let you opt out of voice as a primary factor; few of them advertise it.

Run suspicious recordings through a forensic scanner. If you receive an audio file or voicemail that claims to be from someone you know and asks for money, access, or urgency, run it through a deepfake detector before acting. ORAVYS offers a free check for the first three samples submitted by breach victims (see the offer below).

The forensic checklist that experts use

When a sample lands on a forensic analyst’s desk, the following artifacts are the first pass. Each is something a synthetic voice tends to get slightly wrong, even when the perceptual quality is high.

Codec mismatch. The audio claims to come from a phone call but the spectral signature does not match any known telephony codec.

Breath patterns. Real speakers inhale at predictable points dictated by phrase length and lung capacity. Synthetic voices often skip breaths or insert them at the wrong syllabic boundary.

Micro-jitter. Natural vocal folds vibrate with small irregularities. Generated audio is often too clean at the millisecond level.

Formant trajectory. Vowel transitions follow physical articulator paths in a real mouth. Cloned voices sometimes take impossible shortcuts between formants.

Room acoustics inconsistency. The reverb signature should be identical from the start of the file to the end. Generated audio is often dry while the splice context is reverberant.

Prosody flatness. Synthetic speech often has narrower pitch and energy variance than the same speaker would have in real conditions.

Speech rate stability. Real humans speed up and slow down with content. Generated speech tends to hold a metronomic rate across long passages.
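As a toy illustration of one checklist item, prosody flatness can be proxied by the spread of short-time energy: a signal with near-constant loudness scores flatter than one whose loudness rises and falls naturally. The signals below are simulated stand-ins, not real recordings, and this is a sketch of the general idea, not ORAVYS's pipeline:

```python
import numpy as np

def frame_energy(signal, frame=400):
    """Mean energy per non-overlapping frame (400 samples = 25 ms at 16 kHz)."""
    usable = len(signal) // frame * frame
    return (signal[:usable].reshape(-1, frame) ** 2).mean(axis=1)

def prosody_flatness(signal):
    """Coefficient of variation of frame energy; lower means flatter delivery."""
    energy = frame_energy(signal)
    return energy.std() / energy.mean()

t = np.linspace(0, 1, 16000)                           # one second at 16 kHz
tone = np.sin(2 * np.pi * 120 * t)                     # carrier standing in for voice
lively = tone * (1 + 0.8 * np.sin(2 * np.pi * 3 * t))  # loudness varies like speech
flat = tone                                            # metronomic, constant loudness

# prosody_flatness(lively) is far larger than prosody_flatness(flat)
```

A real detector fuses many such weak signals; any single artifact can be faked, but faking all of them at once is much harder.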

What ORAVYS does specifically

More than 3,000 forensic engines run in parallel on every submitted sample, covering signal, prosody, articulation, codec, and provenance domains.

AudioSeal watermark detection flags files generated by major commercial voice models when the watermark is preserved, giving a deterministic positive when present.

An anti-spoofing module trained against the ASVspoof public benchmarks scores the likelihood that a sample was synthesized rather than recorded.

Biometric processing is GDPR-compliant. Audio is never used to train commercial models without explicit consent and is purged on a defined retention schedule.

Free verification for Mercor breach victims

If you were a Mercor contractor and you believe your voice may already be in circulation, ORAVYS will analyze the first three suspect samples free of charge. You will receive a forensic report covering watermark detection, anti-spoofing score, and the artifact checklist above. No card required, no quota gate.

Run a forensic check →

Sources cited in this article: Lapsus$ leak site index (April 2026), Wall Street Journal voice cloning report (February 2026), Pindrop Voice Intelligence Report 2025, FBI IC3 Elder Fraud Report 2026, Krebs on Security archives. Lawsuit references are matters of public record. ORAVYS does not host or redistribute the leaked dataset and does not accept it as input.

GitHub - pgbackrest/pgbackrest: Reliable PostgreSQL Backup & Restore

github.com

NOTICE OF OBSOLESCENCE

TL;DR: pgBackRest is no longer being maintained. If you fork pgBackRest, please select a new name for your project.

After a lot of thought, I have decided to stop working on pgBackRest. I did not come to this decision lightly. pgBackRest has been my passion project for the last thirteen years, and I was fortunate to have corporate sponsorship for much of this time, but there were also many late nights and weekends as I worked to make pgBackRest the project it is today, aided by numerous contributors. Every open-source developer knows exactly what I mean and how much of your life gets devoted to a special project.

Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.

Like everyone else, I need to make a living, and the range of pgBackRest-related roles is very limited. I can now consider a wider variety of opportunities, but those will not leave me time to work on pgBackRest, which requires a fair amount of time for maintenance, bug fixes, PR reviews, answering issues, etc. That does not even include time to write new features, which is what I really love to do. Rather than do the work poorly and/or sporadically, I think it makes more sense to have a hard stop.

I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.

Again, many thanks to all the pgBackRest contributors over the years. It was a pleasure working with you!

Introduction

pgBackRest is a reliable backup and restore solution for PostgreSQL that seamlessly scales up to the largest databases and workloads.

pgBackRest v2.58.0 is the current stable release. Release notes are on the Releases page.

Features

Parallel Backup & Restore

Compression is usually the bottleneck during backup operations, so pgBackRest solves this problem with parallel processing and more efficient compression algorithms such as lz4 and zstd.

Local or Remote Operation

A custom protocol allows pgBackRest to backup, restore, and archive locally or remotely via TLS/SSH with minimal configuration. An interface to query PostgreSQL is also provided via the protocol layer so that remote access to PostgreSQL is never required, which enhances security.

Multiple Repositories

Multiple repositories allow, for example, a local repository with minimal retention for fast restores and a remote repository with longer retention for redundancy and access across the enterprise.

Full, Differential, & Incremental Backups (at File or Block Level)

Full, differential, and incremental backups are supported. pgBackRest is not susceptible to the time resolution issues of rsync, making differential and incremental backups safe without the requirement to checksum each file. Block-level backups save space by only copying the parts of files that have changed.

Backup Rotation & Archive Expiration

Retention policies can be set for full and differential backups to create coverage for any time frame. The WAL archive can be maintained for all backups or strictly for the most recent backups. In the latter case, WAL required to make older backups consistent will be maintained in the archive.

Backup Integrity

Checksums are calculated for every file in the backup and rechecked during a restore or verify. After a backup finishes copying files, it waits until every WAL segment required to make the backup consistent reaches the repository.

Backups in the repository may be stored in the same format as a standard PostgreSQL cluster (including tablespaces). If compression is disabled and hard links are enabled, it is possible to snapshot a backup in the repository and bring up a PostgreSQL cluster directly on the snapshot. This is advantageous for terabyte-scale databases that are time consuming to restore in the traditional way.

All operations utilize file and directory level fsync to ensure durability.

Page Checksums

If page checksums are enabled, pgBackRest will validate the checksums for every file that is copied during a backup. All page checksums are validated during a full backup, and checksums in files that have changed are validated during differential and incremental backups.

Validation failures do not stop the backup process, but warnings with details of exactly which pages have failed validation are output to the console and file log.

This feature allows page-level corruption to be detected early, before backups that contain valid copies of the data have expired.

Backup Resume

An interrupted backup can be resumed from the point where it was stopped. Files that were already copied are compared with the checksums in the manifest to ensure integrity. Since this operation can take place entirely on the repository host, it reduces load on the PostgreSQL host and saves time, since checksum calculation is faster than compressing and retransmitting data.
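The resume check described above can be sketched as a manifest walk: keep a file only if its checksum matches, otherwise mark it for recopy. pgBackRest records SHA-1 checksums in its manifest; the manifest representation below (a plain dict) is simplified for illustration:

```python
import hashlib
from pathlib import Path

def files_to_recopy(repo_dir, manifest):
    """Return the relative paths whose repository copy is missing or does not
    match the SHA-1 recorded in the manifest (dict: path -> hex digest)."""
    recopy = []
    for rel_path, expected_sha1 in manifest.items():
        target = Path(repo_dir) / rel_path
        if not target.is_file():
            recopy.append(rel_path)   # never copied: must be sent
        elif hashlib.sha1(target.read_bytes()).hexdigest() != expected_sha1:
            recopy.append(rel_path)   # partial or corrupt copy: redo it
    return recopy
```

Because only digests are compared, a check like this can run entirely on the repository host without touching the PostgreSQL host.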

Streaming Compression & Checksums

Compression and checksum calculations are performed in stream while files are being copied to the repository, whether the repository is located locally or remotely.

If the repository is on a repository host, compression is performed on the PostgreSQL host and files are transmitted in a compressed format and simply stored on the repository host. When compression is disabled, a lower level of compression is utilized to make efficient use of available bandwidth while keeping CPU cost to a minimum.

Delta Restore

The manifest contains checksums for every file in the backup, so during a restore it is possible to use these checksums to speed processing enormously. On a delta restore, any files not present in the backup are first removed and then checksums are generated for the remaining files. Files that match the backup are left in place, and the rest of the files are restored as usual. Parallel processing can lead to a dramatic reduction in restore times.

Parallel, Asynchronous WAL Push & Get

Dedicated commands are included for pushing WAL to the archive and getting WAL from the archive. Both commands support parallelism to accelerate processing and run asynchronously to provide the fastest possible response time to PostgreSQL.

WAL push automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical; otherwise, an error is raised. Asynchronous WAL push allows transfer to be offloaded to another process which compresses WAL segments in parallel for maximum throughput. This can be a critical feature for databases with extremely high write volume.
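The de-duplication rule can be sketched as follows, with an in-memory dict standing in for the real archive (this illustrates the behavior, not pgBackRest's implementation):

```python
import hashlib

archive = {}  # WAL segment name -> SHA-1 of its contents

def push_wal(name, contents):
    """Archive a WAL segment; an identical re-push is a safe no-op,
    but a different file under the same name is an error."""
    digest = hashlib.sha1(contents).hexdigest()
    if name in archive:
        if archive[name] == digest:
            return "deduplicated"   # same segment pushed twice: ignore
        raise ValueError(f"{name} already archived with different contents")
    archive[name] = digest
    return "archived"

push_wal("000000010000000000000001", b"wal-data")  # -> "archived"
push_wal("000000010000000000000001", b"wal-data")  # -> "deduplicated"
```

Raising on a content mismatch rather than overwriting is what catches a misconfigured archive location before it silently corrupts the WAL stream.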

Asynchronous WAL get maintains a local queue of WAL segments that are decompressed and ready for replay. This reduces the time needed to provide WAL to PostgreSQL, which maximizes replay speed. Higher-latency connections and storage (such as S3) benefit the most.

The push and get commands both ensure that the database and repository match by comparing PostgreSQL versions and system identifiers. This virtually eliminates the possibility of misconfiguring the WAL archive location.

Tablespace & Link Support

Tablespaces are fully supported, and on restore tablespaces can be remapped to any location. It is also possible to remap all tablespaces to one location with a single command, which is useful for development restores.

File and directory links are supported for any file or directory in the PostgreSQL cluster. When restoring, it is possible to restore all links to their original locations, remap some or all links, or restore some or all links as normal files or directories within the cluster directory.

S3, Azure, and GCS Compatible Object Store Support

pgBackRest repositories can be located in S3, Azure, and GCS compatible object stores to allow for virtually unlimited capacity and retention.

Encryption

pgBackRest can encrypt the repository to secure backups wherever they are stored.

Compatibility with ten versions of PostgreSQL

pgBackRest includes support for ten versions of PostgreSQL: the five supported versions and the last five EOL versions. This allows ample time to upgrade to a supported version.

Getting Started

pgBackRest strives to be easy to configure and operate:

User guides for various operating systems and PostgreSQL versions.

Command reference for command-line operations.

Configuration reference for creating pgBackRest configurations.

Sponsorship

pgBackRest would not exist without sponsors. Writing new features, fixing bugs, reviewing contributions, answering questions from the community, and maintenance all take a considerable amount of time.

Current sponsors: Supabase.

Past sponsors: Crunchy Data, Resonate.

Recognition

Armchair graphic by Alexander Skowalsky.

China blocks Meta's $2 billion takeover of AI startup Manus

www.cnbc.com

China’s state planner on Monday called for Meta to unwind its $2 billion acquisition of Manus, a Singaporean artificial intelligence startup with Chinese roots.

The decision to prohibit foreign investment in Manus was made in accordance with laws and regulations, the National Development and Reform Commission said in a brief statement. It added that it has asked the parties involved to withdraw the acquisition transaction.

Shares of Meta closed 0.53% higher on Monday.

The deal had attracted scrutiny from both China and Washington, as lawmakers in the U.S. have prohibited American investors from backing Chinese AI companies directly. Meanwhile, Beijing has increased efforts to discourage Chinese AI founders from moving business offshore.


The Chinese gov­ern­men­t’s in­ter­ven­tion in the trans­ac­tion drew alarm among tech founders and ven­ture cap­i­tal­ists in the coun­try who were hop­ing to take ad­van­tage of the so-called Singapore-washing model, where com­pa­nies re­lo­cate from China to the city-state to avoid scrutiny from Beijing and Washington.

Manus was founded in China be­fore re­lo­cat­ing to Singapore. The com­pany de­vel­ops gen­eral-pur­pose AI agents and launched its first gen­eral AI agent in March last year, which can ex­e­cute com­plex tasks such as mar­ket re­search, cod­ing and data analy­sis. The re­lease saw the startup lauded as the next DeepSeek.

Manus said it had passed $100 mil­lion in an­nual re­cur­ring rev­enue, or ARR, in December, eight months on from launch­ing a prod­uct, which it claimed made it the fastest startup in the world at the time to hit the mile­stone from $0.

The com­pany raised $75 mil­lion in a round led by U.S. VC Benchmark in April last year.

When Meta an­nounced the deal late last year, the tech gi­ant said it would look to ac­cel­er­ate ar­ti­fi­cial in­tel­li­gence in­no­va­tion for busi­nesses and in­te­grate ad­vanced au­toma­tion into its con­sumer and en­ter­prise prod­ucts, in­clud­ing its Meta AI as­sis­tant.

But in January, China’s Ministry of Commerce said it would con­duct an as­sess­ment and in­ves­ti­ga­tion into how the ac­qui­si­tion com­plied with laws and reg­u­la­tions con­cern­ing ex­port con­trols, tech­nol­ogy im­port and ex­port, and over­seas in­vest­ment.

A Meta spokesper­son told CNBC that the trans­ac­tion complied fully with ap­plic­a­ble law,” and that it an­tic­i­pated an ap­pro­pri­ate res­o­lu­tion to the in­quiry.”

When asked about China's move to block Meta's acquisition of Manus, APEC Senior Officials Meeting Chairman Chen Xu told reporters that "it is important that all parties act in a spirit of mutual benefit."

While Chen said he did not know the specifics of the issue, he said that if such an issue "can be handled properly, it can help facilitate more substantive discussions in APEC." That's according to an official English translation.

CNBC's Anniek Bao and Dylan Butts contributed to this story.

GitHub - dirac-run/dirac: Coding agent singularly focused on efficiency and context curation. Reduces API costs by 50-80% vs. other agents AND improves code quality at the same time. Uses hash-anchored edits, massively parallel operations, AST manipulation, and many other optimizations. https://dirac.run/

github.com

Dirac - Accurate & Highly Token Efficient Open Source AI Agent

Dirac topped the Terminal-Bench-2 leader­board for gem­ini-3-flash-pre­view with a 65.2% score!

It is a well stud­ied phe­nom­e­non that any given mod­el’s rea­son­ing abil­ity de­grades with the con­text length. If we can keep con­text tightly cu­rated, we im­prove both ac­cu­racy and cost while mak­ing larger changes tractable in a sin­gle task.

Dirac is an open-source coding agent built with this in mind. It reduces API costs by 64.8% on average while producing better, faster work, using hash-anchored parallel edits, AST manipulation, and a suite of other advanced optimizations. Oh, and no MCP.

Our goal: Optimize for bang-for-the-buck on tool­ing with bare min­i­mum prompt­ing in­stead of go­ing blindly min­i­mal­is­tic.

📊 Evals

Dirac is benchmarked against other leading open-source agents on complex, real-world refactoring tasks. Dirac consistently achieves 100% accuracy at a fraction of the cost. These evals are run on public GitHub repos and should be reproducible by anyone.

🏆 TerminalBench 2.0 Leaderboard: Dirac re­cently topped the Terminal-Bench-2 leader­board with a 65.2% score us­ing gem­ini-3-flash-pre­view. This out­per­forms both Google’s of­fi­cial base­line (47.6%) and the top closed-source agent Junie CLI (64.3%). This was achieved with­out any bench­mark-spe­cific info or any AGENTS.md files be­ing in­serted.

Note on the cost table be­low: A bug was dis­cov­ered in Cline, the par­ent repo, af­ter run­ning these evals (issue #10314). We have sub­mit­ted a PR #10315 to fix this. This bug caused the evals for Dirac and Cline to slightly un­der­re­port the num­bers ($0.03 vs $0.05 per mil­lion to­ken cache read). Although there won’t be a large dif­fer­ence, we will up­date the evals soon.

All tasks for all agents used gemini-3-flash-preview with thinking set to high.

🟢 Success | 🟡 Incomplete | 🔴 Failure

Cost Comparison: Dirac is 64.8% cheaper than the com­pe­ti­tion (a 2.8x cost re­duc­tion).

* Expected num­ber of files to be mod­i­fied/​cre­ated to com­plete the task.

See evals/​README.md for de­tailed task de­scrip­tions and method­ol­ogy.

🚀 Key Features

Hash-Anchored Edits: Dirac uses stable line hashes to target edits with extreme precision, avoiding the "lost in translation" issues of traditional line-number-based editing.

AST-Native Precision: Built-in un­der­stand­ing of lan­guage syn­tax (TypeScript, Python, C++, etc.) al­lows Dirac to per­form struc­tural ma­nip­u­la­tions like func­tion ex­trac­tion or class refac­tor­ing with 100% ac­cu­racy.

Multi-File Batching: Dirac can process and edit mul­ti­ple files in a sin­gle LLM roundtrip, sig­nif­i­cantly re­duc­ing la­tency and API costs.

High-Bandwidth Context: Optimized con­text cu­ra­tion keeps the agent lean and fast, en­sur­ing the LLM al­ways has the most rel­e­vant in­for­ma­tion with­out wast­ing to­kens.

Autonomous Tool Use: Dirac can read/​write files, ex­e­cute ter­mi­nal com­mands, use a head­less browser, and more - all while keep­ing you in con­trol with an ap­proval-based work­flow.

Skills & AGENTS.md: Customize Dirac’s be­hav­ior with pro­ject-spe­cific in­struc­tions us­ing AGENTS.md files. It also seam­lessly picks up Claude’s skills by au­to­mat­i­cally read­ing from .ai, .claude, and .agents di­rec­to­ries.

Native Tool Calling Only: To en­sure max­i­mum re­li­a­bil­ity and per­for­mance, Dirac ex­clu­sively sup­ports mod­els with na­tive tool call­ing en­abled. (Note: MCP is not sup­ported).
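
The hash-anchored idea in the feature list above can be illustrated with a minimal sketch. This is a hypothetical scheme, not Dirac's actual implementation: each line is addressed by a short content hash rather than a line number, so an edit still lands on the intended line after unrelated lines shift.

```python
import hashlib

def line_hash(line: str) -> str:
    # Short, stable hash of a line's content (hypothetical scheme,
    # not Dirac's actual hashing).
    return hashlib.sha256(line.encode()).hexdigest()[:8]

def apply_hash_edit(lines: list[str], anchor: str, new_text: str) -> list[str]:
    # Replace the unique line whose content hash matches the anchor.
    matches = [i for i, line in enumerate(lines) if line_hash(line) == anchor]
    if len(matches) != 1:
        raise ValueError("anchor missing or ambiguous")
    out = list(lines)
    out[matches[0]] = new_text
    return out

src = ["def greet():", "    print('hi')"]
anchor = line_hash("    print('hi')")
# An unrelated insertion shifts every line number...
shifted = ["# header comment"] + src
# ...but the hash anchor still resolves to the intended line.
edited = apply_hash_edit(shifted, anchor, "    print('hello')")
```

A line-number edit targeting line 2 of the original file would have hit the wrong line after the insertion; the content hash is immune to that shift, at the cost of requiring the anchor line to be unique.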

📦 Installation

VS Code Extension

Install Dirac from the VS Code Marketplace.

CLI (Terminal)

Install the Dirac CLI glob­ally us­ing npm:

npm in­stall -g dirac-cli

🚀 CLI Quick Start

Authenticate:

dirac auth

Run your first task:

dirac "Analyze the architecture of this project"

Configuration (Environment Variables)

You can pro­vide API keys via en­vi­ron­ment vari­ables to skip the dirac auth step. This is ideal for CI/CD or non-per­sis­tent en­vi­ron­ments:

ANTHROPIC_API_KEY

OPENAI_API_KEY

OPENROUTER_API_KEY

GEMINI_API_KEY

GROQ_API_KEY

MISTRAL_API_KEY

XAI_API_KEY (x.ai)

HF_TOKEN (HuggingFace)

… and oth­ers (see src/​shared/​stor­age/​env-con­fig.ts for the full list).

Common Commands

dirac "prompt": Start an interactive task.

dirac -p "prompt": Run in Plan Mode to see the strategy before executing.

dirac -y "prompt": Yolo Mode (auto-approve all actions, great for simple fixes).

git diff | dirac "Review these changes": Pipe context directly into Dirac.

dirac his­tory: View and re­sume pre­vi­ous tasks.

🛠️ Getting Started

Open the Dirac side­bar in VS Code.

Configure your pre­ferred AI provider (Anthropic, OpenAI, OpenRouter, etc.).

Start a new task by de­scrib­ing what you want to build or fix.

Watch Dirac go!

📈 Star History

📄 License

Dirac is open source and li­censed un­der the Apache License 2.0.

🤝 Acknowledgments

Dirac is a fork of the ex­cel­lent Cline pro­ject. We are grate­ful to the Cline team and con­trib­u­tors for their foun­da­tional work.

Built with ❤️ by Max Trivedi at Dirac Delta Labs

Dutch central bank chooses Lidl for European Cloud

www.techzine.eu

De Nederlandsche Bank will sign a ma­jor con­tract to­mor­row with Schwarz Digits, the IT arm of Lidl owner Schwarz Group. DNB aims to re­duce its de­pen­dence on American cloud com­pa­nies. As a ma­jor Dutch or­ga­ni­za­tion, it is opt­ing for a European part­ner, but how will this play out?

Sales Director Bernd Wagner announced the news on Monday at the Hannover Messe, according to De Telegraaf. The move itself comes as no surprise. DNB Director Steven Maijoor announced last October that he intended to "set a good example" and switch to a European cloud, though he acknowledged that "it is not yet as robust or high-quality as the one from the U.S."

That is pre­cisely the con­sid­er­a­tion every com­pany must make. Can a European al­ter­na­tive func­tion well enough to meet the or­ga­ni­za­tion’s re­quire­ments and needs? Lidl’s plat­form has been in de­vel­op­ment for years, but those of Amazon, Google, and Microsoft have some­times had as much as 20 years of de­vel­op­ment work be­hind them.

The fact that the tran­si­tion to European al­ter­na­tives does not al­ways go smoothly is ev­i­dent in Schleswig-Holstein, where the lo­cal gov­ern­ment is al­ready strug­gling with the mi­gra­tion from Microsoft to an open-source en­vi­ron­ment.

Large or­ga­ni­za­tions are al­ready con­nected to Lidl’s cloud. For ex­am­ple, Lidl and the German su­per­mar­ket chain Kaufland use it. Deutsche Bahn also col­lab­o­rates with the Schwarz Group. But now a Dutch or­ga­ni­za­tion from a highly reg­u­lated sec­tor is opt­ing for this cloud.

Concerns about cloud de­pen­dency

Last year, the Dutch Central Bank (DNB) and the Netherlands Authority for the Financial Markets (AFM) warned that the Dutch fi­nan­cial sec­tor had be­come too de­pen­dent on for­eign IT ser­vice providers, par­tic­u­larly American ones. These con­cerns were fu­eled by geopo­lit­i­cal ten­sions. For ex­am­ple, a pros­e­cu­tor at the International Criminal Court in The Hague was cut off from his Microsoft email ac­count by President Donald Trump. The ICC is now also switch­ing to non-Amer­i­can sys­tems.

Incidentally, when is­su­ing that warn­ing, DNB also had to ad­mit that it it­self is largely de­pen­dent on American ser­vice providers for its dig­i­tal in­fra­struc­ture.

Schwarz Digits and Stackit

Schwarz Digits, via the Stackit cloud plat­form, has long po­si­tioned it­self as a European al­ter­na­tive to American hy­per­scalers. The Lidl-owned com­pany is build­ing a sov­er­eign cloud where all data falls un­der European law. This sets it apart from American providers, who, un­der the Cloud Act, are re­quired to hand over data to U.S. au­thor­i­ties. Schwarz Digits re­cently an­nounced an in­vest­ment of 11 bil­lion eu­ros in a large data cen­ter in Lübbenau.

The pro­ject orig­i­nally be­gan as an in­ter­nal IT sys­tem for Lidl and Kaufland, but is now also at­tract­ing ex­ter­nal clients, in­clud­ing SAP and Bayern Munich. Together with Deutsche Telekom, it is work­ing on broader European IT al­ter­na­tives.

A spokesperson for DNB confirmed concerns about cloud dependency on Monday but declined to comment on individual contracts. "That is why, with every new step toward the cloud, we explicitly assess geopolitical risks and explore how we can reduce our dependency," the spokesperson said.

Quarkdown | Markdown with superpowers

quarkdown.com

Spend your time writ­ing

and don’t worry about the rest.

.docauthor {Jennifer Chu}

.pagemargin {topright}

 .docauthor | MIT News

# X-ray flashes from a su­per­mas­sive black hole

!(70%)[Black hole](img/​black­hole.jpg)

.abstract
One supermassive black hole has kept astronomers glued to their scopes for the last several years.

The black hole in question is `1ES 1927+654`, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away.

In 2018, astronomers at MIT and elsewhere observed that the black hole's corona — a cloud of whirling, white-hot plasma — suddenly **disappeared**, before reassembling months later.

The brief though dramatic shut-off was a first in black hole astronomy.

> This would be the closest thing we know of around any black hole.
> - Megan Masterson, a graduate student in physics at MIT

Complete au­thor­ing ex­pe­ri­ence

Write Markdown to reach flow state faster. Use Quarkdown's extensions to achieve more.

One tool to rule them all

Whether you are writ­ing a re­search pa­per, a quick re­port, a com­pany-wide wiki, class notes, or prepar­ing in­ter­ac­tive slides for your next talk, there’s only one line you need.

Replaces

.doctype {paged}

For ar­ti­cles, books and re­ports.

Replaces

.doctype {plain}

For notes, knowl­edge bases and sim­ple sta­tic web­sites.

Replaces

.doctype {docs}

For wikis, tech­ni­cal doc­u­men­ta­tion and large knowl­edge bases.

Replaces

.doctype {slides}

For lec­tures, talks and in­ter­ac­tive pre­sen­ta­tions.

Typesetting for the im­pa­tient

With blaz­ing fast com­pi­la­tion and live pre­view, see re­sults in­stantly as you type.

Don’t re­peat your­self

Reuse your work­flow thanks to pow­er­ful script­ing ca­pa­bil­i­ties.

.function {animal}
name ecosystem picture:
    .row
    .clip {circle}
    .picture
- **Name**: .name
- **Ecosystem**: .ecosystem

.animal {Red panda} ecosystem:{Temperate forests}
    ![Red panda](img/red-panda.jpg)

.animal {Sea otter} ecosystem:{Kelp forests}
    ![Sea otter](img/sea-otter.jpg)

.animal {Clownfish} ecosystem:{Coral reefs}
    ![Clownfish](img/clownfish.jpg)

ozark.hendrix.edu
