10 interesting stories served every morning and every evening.




1 641 shares, 34 trendiness

A$AP Rocky Releases Helicopter Music Video featuring Gaussian Splatting

Believe it or not, A$AP Rocky is a huge fan of radiance fields.

Yesterday, when A$AP Rocky released the music video for Helicopter, many viewers focused on the chaos, the motion, and the unmistakable early MTV energy of the piece. What's easier to miss, unless you know what you're looking at, is that nearly every human performance in the video was captured volumetrically and rendered as dynamic splats.

I spoke with Evercoast, the team responsible for capturing the performances, as well as Chris Rutledge, the project's CG Supervisor at Grin Machine, and Wilfred Driscoll of WildCapture and Fitsū.ai, to understand how Helicopter came together and why this project represents one of the most ambitious real-world deployments of dynamic gaussian splatting in a major music release to date.

The decision to shoot Helicopter volumetrically wasn't driven by technology for technology's sake. According to the team, the director Dan Strait approached the project in July with a clear creative goal: to capture human performance in a way that would allow radical freedom in post-production. This would have been either impractical or prohibitively expensive using conventional filming and VFX pipelines.

Chris told me he'd been tracking volumetric performance capture for years, fascinated by emerging techniques that could enable visuals that simply weren't possible before. Two years ago, he began pitching the idea to directors in his circle, including Dan, as a "someday" workflow. When Dan came back this summer and said he wanted to use volumetric capture for the entire video, the proliferation of gaussian splatting enabled them to take it on.

The aesthetic leans heavily into kinetic motion. Dancers colliding, bodies suspended in midair, chaotic fight scenes, and performers interacting with props that later dissolve into something else entirely. Every punch, slam, pull-up, and fall you see was physically performed and captured in 3D.

Almost every human figure in the video, including Rocky himself, was recorded volumetrically using Evercoast's system. It's all real performance, preserved spatially.

This is not the first time that A$AP Rocky has featured a radiance field in one of his music videos. The 2023 music video for Shittin' Me featured several NeRFs and even the GUI for Instant-NGP, which you can spot throughout the piece.

The primary shoot for Helicopter took place in August in Los Angeles. Evercoast deployed a 56-camera RGB-D array, synchronized across two Dell workstations. Performers were suspended from wires, hanging upside down, doing pull-ups on ceiling-mounted bars, swinging props, and performing stunts, all inside the capture volume.

Scenes that appear surreal in the final video were, in reality, grounded in very physical setups, such as wooden planks standing in for helicopter blades, real wire rigs, and real props. The volumetric data allowed those elements to be removed, recomposed, or entirely recontextualized later without losing the authenticity of the human motion.

Over the course of the shoot, Evercoast recorded more than 10 terabytes of raw data, ultimately rendering roughly 30 minutes of final splatted footage, exported as PLY sequences totaling around one terabyte.

That data was then brought into Houdini, where the post-production team used CG Nomads' GSOPs for manipulation and sequencing, and OTOY's OctaneRender for final rendering. Thanks to this combination, the production team was also able to relight the splats.

One of the more powerful aspects of the workflow was Evercoast's ability to preview volumetric captures at multiple stages. The director could see live spatial feedback on set, generate quick mesh-based previews seconds after a take, and later review fully rendered splats through Evercoast's web player before downloading massive PLY sequences for Houdini.

In practice, this meant creative decisions could be made rapidly and cheaply, without committing to heavy downstream processing until the team knew exactly what they wanted. It's a workflow that more closely resembles simulation than traditional filming.

Chris also discovered that Octane's Houdini integration had matured, and that Octane's early splat support was far enough along to enable relighting. According to the team, the ability to relight splats, introduce shadowing, and achieve a more dimensional "3D video" look was a major reason the final aesthetic lands the way it does.

The team also used Blender heavily for layout and previs, converting splat sequences into lightweight proxy caches for scene planning. Wilfred described how WildCapture's internal tooling was used selectively to introduce temporal consistency. In his words, the team derived primitive pose-estimation skeletons that could be used to transfer motion, support collision setups, and allow Houdini's simulation toolset to handle rigid-body, soft-body, and more physically grounded interactions.

One recurring reaction to the video has been confusion. Viewers assume the imagery is AI-generated. According to Evercoast, that couldn't be further from the truth. Every stunt, every swing, every fall was physically performed and captured in real space. What makes it feel synthetic is the freedom volumetric capture affords. You aren't limited by the camera's composition. You have free rein to explore, reposition cameras after the fact, break spatial continuity, and recombine performances in ways that 2D simply can't.

In other words, radiance field technology isn't replacing reality. It's preserving everything.

...

Read the original on radiancefields.com »

2 549 shares, 19 trendiness

The A in AGI stands for Ads

Here we go again: the tech press is having another AI doom cycle.

I've primarily written this as a response to an NYT analyst painting a completely unsubstantiated, baseless, speculative, outrageous, EGREGIOUS, preposterous "grim picture" of OpenAI going bust.

Mate, come on. OpenAI is not dying, and they're not running out of money. Yes, they're creating possibly the craziest circular economy and defying every economics law since Adam Smith published 'The Wealth of Nations'. $1T in commitments is genuinely insane. But I doubt they're looking to be acquired; honestly, by whom? You don't raise $40 BILLION at a $260 BILLION VALUATION to get acquired. It's all for the $1T IPO.

But it seems that the pinnacle of human intelligence, the greatest, smartest, brightest minds, have all come together to… build us another ad engine. What happened to superintelligence and AGI?

See, if OpenAI were not a direct threat to the current ad giants, would Google be advertising Gemini every chance they get? Don't forget they're also capitalising on their brand new high-intent ad funnel by launching ads on Gemini and AI Overviews.

March: Closed $40B funding round at a $260B valuation, the largest raise by a private tech company on record.

July: First $1B revenue month, doubled from $500M monthly in January.

January 2026: "Both our Weekly Active User (WAU) and Daily Active User (DAU) figures continue to produce all-time-highs (Jan 14 was the highest, Jan 13 was the second highest, etc.)"

January 16, 2026: Announced ads in ChatGPT free and Go tiers.

Yes, OpenAI is burning $8-12B in 2025. Compute infrastructure is obviously not cheap when serving 190M people daily.

So let's try to model their expected ARPU (average revenue per user, on an annual basis) by understanding what OpenAI is actually building and how it compares to existing ad platforms.

The ad products they've confirmed thus far:

* Ads at the bottom of answers when there's a relevant sponsored product or service based on your current conversation

Testing starts "in the coming weeks" for logged-in adults in the U.S. on free and Go tiers. Ads will be "clearly labeled and separated from the organic answer." Users can learn why they're seeing an ad or dismiss it.

* Choice and control: Users can turn off personalization and clear ad data

* Plus, Pro, Business, and Enterprise tiers won't have ads

They also mentioned the possibility of conversational ads where you can ask follow-up questions about products directly.

Revenue targets: Reports suggest OpenAI is targeting $1B in ad revenue for 2026, scaling to $25B by 2029, though OpenAI hasn't confirmed these numbers publicly. We can use these as the conservative benchmark, but knowing the sheer product talent at OpenAI, the funding, and the hunger, I think they'll blow past this.

* Self-serve platform: Advertisers bid for placements. Super super super likely, exactly what Google does, and probably their biggest revenue stream.

* Affiliate commissions: Built-in checkouts so users can buy products inside ChatGPT, and OpenAI takes a commission, similar to their Shopify collab.

* Sidebar sponsored content: When users ask about topics with market potential, sponsored info appears in a sidebar marked "Sponsored"

Now let's compare this to existing ad platforms:

* How it works: Auction-based system where advertisers bid on keywords. Ads appear in search results based on bid + quality score.

* Why it works: High intent (search queries) + owns the entire vertical stack (ad tech, auction system, targeting, decades of optimization)

* Ad revenue: [$212.4B in ad revenue in the first 3 quarters of 2025](https://www.demandsage.com/google-ads-statistics/) (8.4% growth from 2024's $273.4B)

* Google doesn't report ARPU so we need to calculate it: ARPU = $296.2B (projected) ÷ 5.01B = $59.12 per user annually.

* How it works: Auction-based promoted tweets in the timeline. Advertisers only pay when users complete actions (click, follow, engage).

* Why it works: Timeline engagement, CPC ~$0.18, but doesn't own the vertical stack and does it on a smaller scale

* Intent level: High. 2.5B prompts daily include product research, recommendations, comparisons. More intent than Meta's passive scrolling, comparable to Google search.

* Scale: 1B WAU by Feb 2026, but free users only (~950M at 95% free tier).

So where should ChatGPT's ARPU sit?

It sits with Search, not Social.

Which puts it between X ($5.54) and Meta ($49.63). OpenAI has better intent than Meta but worse infrastructure. They have more scale than X but no vertical integration. When a user asks ChatGPT "Help me plan a 5-day trip to Kyoto" or "Best CRM for small business," that is High Intent. That is a Google-level query, not a Facebook-level scroll.

We already have a benchmark for this: Perplexity.

In late 2024/2025, reports confirmed Perplexity was charging CPMs exceeding $50. This is comparable to premium video or high-end search, and miles above the ~$2-6 CPMs seen on social feeds.

If Perplexity can command $50+ CPMs with a smaller user base, OpenAI's "High Agency" product team will likely floor their pricing there.

* 2026: $5.50 ("The Perplexity Floor") - Even with a clumsy beta and low fill rate, high-intent queries command premium pricing. If they serve just one ad every 20 queries at a Perplexity-level CPM, they hit this number effortlessly.

* 2027: $18.00 - The launch of a self-serve ad manager (like Meta/Google) allows millions of SMBs to bid. Competition drives the price up.

* 2028: $30.00 - This is where "Ads" become "Actions." OpenAI won't just show an ad for a flight; they will book it. Taking a cut of the transaction (CPA model) yields 10x the revenue of showing a banner.

* 2029: $50.00 (Suuuuuuuper bullish case) - Approaching Google's ~$60 ARPU. By now, the infrastructure is mature, and "Conversational Commerce" is the standard. This is what SoftBank is praying will happen. (A rough sketch of the revenue this trajectory implies follows below.)
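As a back-of-the-envelope sketch (not a forecast), here is that trajectory multiplied out in TypeScript. The ARPU figures are the ones above, and the ~950M free-user base is simply the earlier estimate held flat; both are assumptions for illustration.

// Back-of-the-envelope only: ARPU values come from the trajectory above,
// and the free-user base is the post's ~950M estimate held flat for simplicity.
const freeUsers = 950_000_000;
const arpuByYear: Record<string, number> = {
  "2026": 5.5,
  "2027": 18,
  "2028": 30,
  "2029": 50,
};
for (const [year, arpu] of Object.entries(arpuByYear)) {
  const revenueBillions = (arpu * freeUsers) / 1e9;
  console.log(`${year}: ~$${revenueBillions.toFixed(1)}B implied ad revenue`);
}

Even with the user base held flat, the implied numbers clear the reported $1B (2026) and $25B (2029) targets, which is the point about those targets being conservative.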

And we're forgetting that OpenAI has a serious, serious product team. I don't doubt for a second that they'll be fully capable of building out the stack and integrating ads till they occupy your entire subconscious.

In fact they hired Fidji Simo as their "CEO of Applications", a newly created role that puts her in charge of their entire revenue engine. Fidji is a Meta powerhouse who spent a decade at Facebook working on the Facebook App and… ads:

Leading Monetization of the Facebook App, with a focus on mobile advertising that represents the vast majority of Facebook's revenue. Launched new ad products such as Video Ads, Lead Ads, Instant Experiences, Carousel ads, etc.

Launched and grew video advertising to be a large portion of Facebook's revenue.

But 1.5-1.8B free users by 2028? That assumes zero competition impact from anyone, least of all the looming giant Gemini. Unrealistic.

The main revenue growth comes from ARPU scaling, not just user growth.

Crunching all the numbers from the "High Intent" model, 2026 looks different.

* 35M paying subscribers: $8.4B minimum (conservatively assuming all at the $20/mo Plus tier)

* Definitely higher with Pro ($200/mo) and Enterprise (custom pricing)

* ChatGPT does 2.5B prompts daily; this is what advertisers would class as both higher engagement and higher intent than passive scrolling (although you can fit more ads in a scroll than a chat)

* Reality Check: This assumes they monetise typical search queries at rates Perplexity has already proven possible.

These projections use futuresearch.ai's base forecast ($39B median for mid-2027, no ads) + an advertising overlay from internal OpenAI docs + conservative user growth.

Ads were the key to unlocking profitability. You must've seen it coming. Thanks to you not skipping that 3-minute health insurance ad - you, yes you, helped us achieve AGI!

Mission alignment: Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible.

The A in AGI stands for Ads! It's all ads!! Ads that you can't even block, because they are BAKED into the streamed probabilistic word selector, purposefully skewed to output the highest bidder's marketing copy.

Look on the bright side: if they're turning to ads, it likely means AGI is not on the horizon. Your job is safe!

It's 4:41AM in London and I'm knackered. Idek if I'm gonna post this because I love AI and do agree that some things are a necessary evil to achieve a greater goal (AGI).

Nevertheless, if you have any questions or comments, shout me -> ossamachaib.cs@gmail.com.

...

Read the original on ossa-ma.github.io »

3 494 shares, 18 trendiness

Statement by Denmark, Finland, France, Germany, the Netherlands, Norway, Sweden and the United Kingdom (in English)

...

Read the original on www.presidentti.fi »

4 420 shares, 23 trendiness

A Social Filesystem — overreacted


You write a document, hit save, and the file is on your computer. It's yours. You can inspect it, you can send it to a friend, and you can open it with other apps.

Files come from the paradigm of personal computing.

This post, however, isn't about personal computing. What I want to talk about is social computing—apps like Instagram, Reddit, Tumblr, GitHub, and TikTok.

What do files have to do with social computing?

But first, a shoutout to files.

Files, as originally invented, were not meant to live inside the apps.

Since files represent your creations, they should live somewhere that you control. Apps create and read your files on your behalf, but files don't belong to the apps.

Files belong to you—the person using those apps.

Apps (and their developers) may not own your files, but they do need to be able to read and write them. To do that reliably, apps need your files to be structured. This is why app developers, as part of creating apps, may invent and evolve file formats.

A file format is like a language. An app might "speak" several formats. A single format can be understood by many apps. Apps and formats are many-to-many. File formats let different apps work together without knowing about each other.

SVG is an open specification. This means that different developers agree on how to read and write SVG. I created this SVG file in Excalidraw, but I could have used Adobe Illustrator or Inkscape instead. Your browser already knew how to display this SVG. It didn't need to hit any Excalidraw APIs or to ask permission from Excalidraw to display this SVG. It doesn't matter which app has created this SVG.

The file format is the API.

Of course, not all file formats are open or documented.

Some file formats are application-specific or even proprietary, like .doc. And yet, although .doc was undocumented, it didn't stop motivated developers from reverse-engineering it and creating more software that reads and writes .doc:

Another win for the files paradigm.

The files paradigm captures a real-world intuition about tools: what we make with a tool does not belong to the tool. A manuscript doesn't stay inside the typewriter, a photo doesn't stay inside the camera, and a song doesn't stay in the microphone.

Our memories, our thoughts, our designs should outlive the software we used to create them. An app-agnostic storage (the filesystem) enforces this separation.

You may create a file in one app, but someone else can read it using another app. You may switch the apps you use, or use them together. You may convert a file from one format to another. As long as two apps correctly "speak" the same file format, they can work in tandem even if their developers hate each other's guts.

And if the app sucks?

Someone could always create "the next app" for the files you already have:

Apps may come and go, but files stay—at least, as long as our apps think in files.

See also: File over app

When you think of social apps—Instagram, Reddit, Tumblr, GitHub, TikTok—you probably don't think about files. Files are for personal computing only, right?

But what if they behaved as files—at least, in all the important ways? Suppose you had a folder that contained all of the things ever POSTed by your online persona:

It would include everything you've created across different social apps—your posts, likes, scrobbles, recipes, etc. Maybe we can call it your "everything folder".

Of course, closed apps like Instagram aren't built this way. But imagine they were. In that world, a "Tumblr post" or an "Instagram follow" are social file formats:

* You posting on Tumblr would create a "Tumblr post" file in your folder.

* You following on Instagram would put an "Instagram follow" file into your folder.

* You upvoting on Hacker News would add an "HN upvote" file to your folder.

Note this folder is not some kind of an archive. It's where your data actually lives:

Files are the source of truth—the apps would reflect whatever's in your folder.

Any writes to your folder would be synced to the interested apps. For example, deleting an "Instagram follow" file would work just as well as unfollowing through the app. Crossposting to three Tumblr communities could be done by creating three "Tumblr post" files. Under the hood, each app manages files in your folder.

In this paradigm, apps are reactive to files. Every app's database mostly becomes derived data—an app-specific cached materialized view of everybody's folders.

This might sound very hypothetical, but it's not. What I've described so far is the premise behind the AT protocol. It works in production at scale. Bluesky, Leaflet, Tangled, Semble, and Wisp are some of the new open social apps built this way.

It doesn't feel different to use those apps. But by lifting user data out of the apps, we force the same separation as we've had in personal computing: apps don't trap what you make with them. Someone can always make a new app for old data:

Like before, app developers evolve their file formats. However, they can't gatekeep who reads and writes files in those formats. Which apps to use is up to you.

Together, everyone's folders form something like a distributed social filesystem:

I've previously written about the AT protocol in Open Social, looking at its model from a web-centric perspective. But I think that looking at it from the filesystem perspective is just as intriguing, so I invite you to take a tour of how it works.

What does a social filesystem start with?

How would you represent it as a file?

It's natural to consider JSON as a format. After all, that's what you'd return if you were building an API. So let's fully describe this post as a piece of JSON:
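Roughly something like this, as a minimal sketch; the field names and values are illustrative, with the author and the engagement counts embedded directly in the object:

// A sketch only: field names and values here are illustrative assumptions.
const post = {
  author: {
    handle: "dril",
    displayName: "wint",
    avatar: "https://example.com/dril.jpg",
  },
  text: "a post worth preserving",
  createdAt: "2025-01-01T00:00:00Z",
  replyCount: 128,
  repostCount: 256,
  likeCount: 1024,
};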

However, if we want to store this post as a file, it doesn't make sense to embed the author information there. After all, if the author later changes their display name or avatar, we wouldn't want to go through their every post and change them there.

So let's assume their avatar and name live somewhere else—perhaps, in another file. We could leave author: 'dril' in the JSON but this is unnecessary too. Since this file lives inside the creator's folder—it's their post, after all—we can always figure out the author based on whose folder we're currently looking at.

This seems like a good way to describe this post:
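That is, roughly this (still the same illustrative sketch, minus the author):

// The embedded author object is gone; it lives in the author's own folder.
// The engagement counts are still here, though.
const post = {
  text: "a post worth preserving",
  createdAt: "2025-01-01T00:00:00Z",
  replyCount: 128,
  repostCount: 256,
  likeCount: 1024,
};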

But wait, no, this is still wrong.

You see, replyCount, repostCount, and likeCount are not really something that the post's author has created. These values are derived from the data created by other people—their replies, their reposts, their likes. The app that displays this post will have to keep track of those somehow, but they aren't this user's data.

So really, we're left with just this:
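Just the fields the author actually created (again, illustrative):

// Only what the author sent when they hit "post".
const post = {
  text: "a post worth preserving",
  createdAt: "2025-01-01T00:00:00Z",
};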

Notice how it took some trimming to identify which parts of the data actually belong in this file. This is something that you have to be intentional about when creating apps with the AT protocol. My mental model for this is to think about the POST request. When the user created this thing, what data did they send? That's likely close to what we'll want to store. That's the stuff the user has just created.

Our social filesystem will be structured more rigidly than a traditional filesystem. For example, it will only consist of JSON files. To make this more explicit, we'll start introducing our new terminology. We'll call this kind of file a record.

Now we need to give our record a name. There are no natural names for posts. Could we use sequential numbers? Our names need only be unique within a folder:

One downside is that we'd have to keep track of the latest one, so there's a risk of collisions when creating many files from different devices at the same time.

Instead, let's use timestamps with some per-clock randomness mixed in:

This is nicer because these can be generated locally and will almost never collide.

We'll use these names in URLs so let's encode them more compactly. We'll pick our encoding carefully so that sorting alphabetically goes in chronological order:
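Here is a minimal sketch of that idea in TypeScript. It is not the protocol's exact encoding, just the shape of it: a microsecond timestamp packed together with a few bits of per-clock randomness, then written out in a base-32 alphabet whose ASCII order matches numeric order.

// Sketch only, not the real encoding. Digits sort before lowercase letters in
// ASCII, so lexicographic order of the output matches numeric (chronological) order.
const SORTABLE_ALPHABET = "234567abcdefghijklmnopqrstuvwxyz";

function recordKey(nowMicros: bigint, clockId: bigint): string {
  let value = (nowMicros << 10n) | (clockId & 0x3ffn); // timestamp + 10 random bits
  let out = "";
  for (let i = 0; i < 13; i++) { // 13 base-32 digits = 65 bits, enough here
    out = SORTABLE_ALPHABET[Number(value & 31n)] + out;
    value >>= 5n;
  }
  return out;
}

// e.g. recordKey(BigInt(Date.now()) * 1000n, BigInt(Math.floor(Math.random() * 1024)))

Because the alphabet sorts the same way the numbers do, newer keys always sort after older ones.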

Now ls -r gives us a reverse chronological timeline of posts! That's neat. Also, since we're sticking with JSON as our lingua franca, we don't need file extensions.

Not all records accumulate over time. For example, you can write many posts, but you only have one copy of profile information—your avatar and display name. For "singleton" records, it makes sense to use a predefined name, like me or self:

By the way, let's save this profile record to profiles/self:

Note how, taken together, posts/34qye3wows2c5 and profiles/self let us reconstruct more of the UI we started with, although some parts are still missing:

Before we fill them in, though, we need to make our system sturdier.

This was the shape of our post record:
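In TypeScript terms, roughly this (a sketch following the record above):

type Post = {
  text: string;      // what the author wrote
  createdAt: string; // an ISO 8601 datetime string
};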

And this was the shape of our profile record:
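And something like this for the profile; the avatar is some reference to an image blob, left vague here:

type Profile = {
  displayName: string;
  avatar?: unknown; // a reference to an image blob; exact type omitted
};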

Since these are stored as files, it's important for the format not to drift.

TypeScript seems convenient for this but it isn't sufficient. For example, we can't express constraints like "the text string should have at most 300 Unicode graphemes", or "the createdAt string should be formatted as a datetime".

We need a richer way to define social file formats.

We might shop around for existing options (RDF? JSON Schema?) but if nothing quite fits, we might as well design our own schema language explicitly geared towards the needs of our social filesystem. This is what our Post looks like:
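A sketch of what such a schema can look like, following the atproto lexicon format; the id and the exact constraints here are illustrative:

// Illustrative lexicon document; the id is a placeholder.
const postLexicon = {
  lexicon: 1,
  id: "com.example.post",
  defs: {
    main: {
      type: "record",
      key: "tid",
      record: {
        type: "object",
        required: ["text", "createdAt"],
        properties: {
          text: { type: "string", maxGraphemes: 300 },
          createdAt: { type: "string", format: "datetime" },
        },
      },
    },
  },
};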

We'll call this the Post lexicon because it's like a language our app wants to speak.

My first reaction was also "ouch" but it helped to think that conceptually it's this:

I used to yearn for a better syntax but I've actually come around to hesitantly appreciating the JSON. It being trivial to parse makes it super easy to build tooling around it (more on that in the end). And of course, we can make bindings turning these into type definitions and validation code for any programming language.

Our social filesystem looks like this so far:

The posts/ folder has records that satisfy the Post lexicon, and the profiles/ folder contains records (a single record, really) that satisfy the Profile lexicon.

This can be made to work well for a single app. But here's a problem. What if there's another app with its own notion of "posts" and "profiles"?

Recall, each user has an "everything folder" with data from every app:

Different apps will likely disagree on what the format of a "post" is! For example, a microblog post might have a 300-character limit, but a proper blog post might not.

Can we get the apps to agree with each other?

We could try to put every app developer in the same room until they all agree on a perfect lexicon for a post. That would be an interesting use of everyone's time.

For some use cases, like cross-site syndication, a standard-ish jointly governed lexicon makes sense. For other cases, you really want the app to be in charge. It's actually good that different products can disagree about what a post is! Different products, different vibes. We'd want to support that, not to fight it.

Really, we've been asking the wrong question. We don't need every app developer to agree on what a post is; we just need to let anyone "define" their own post.

We could try namespacing types of records by the app name:

But then, app names can also clash. Luckily, we already have a way to avoid conflicts—domain names. A domain name is unique and implies ownership.

Why don't we take some inspiration from Java?

This gives us collections.

A collection is a folder with records of a certain lexicon type. Twitter's lexicon for posts might differ from Tumblr's, and that's fine—they're in separate collections. The collection is always named after its lexicon type.

For example, you could imagine these collection names:

* com.twitter.post

* com.twitter.profile

* com.tumblr.post

You could also imagine these slightly whackier collection names:

* fm.last.scrobble_v2 (breaking changes = new lexicon, just like file formats)

It's like having a dedicated folder for every file extension.

To see some real lexicon names, check out UFOs and Lexicon Garden.

If you're an application author, you might be thinking:

Who enforces that the records match their lexicons? If any app can (with the user's explicit consent) write into any other app's collection, how do we not end up with a lot of invalid data? What if some other app puts junk into "my" collection?

The answer is that records could be junk, but it still works out anyway.

It helps to draw a parallel to file extensions. Nothing stops someone from renaming cat.jpg to cat.pdf. A PDF reader would just refuse to open it.

Lexicon validation works the same way. The com.tumblr in com.tumblr.post signals who designed the lexicon, but the records themselves could have been created by any app at all. This is why apps always treat records as untrusted input, similar to POST request bodies. When you generate type definitions from a lexicon, you also get a function that will do the validation for you. If some record passes the check, great—you get a typed object. If not, fine, ignore that record.
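As a sketch of what treating records as untrusted input can look like in practice (the type and checks mirror the earlier Post sketch; real projects would use validators generated from the lexicon):

// Validate on read: junk records are simply ignored.
type Post = { text: string; createdAt: string };

function asPost(record: unknown): Post | null {
  if (typeof record !== "object" || record === null) return null;
  const r = record as Record<string, unknown>;
  // Approximate the 300-grapheme limit by counting code points.
  if (typeof r.text !== "string" || [...r.text].length > 300) return null;
  if (typeof r.createdAt !== "string" || isNaN(Date.parse(r.createdAt))) return null;
  return { text: r.text, createdAt: r.createdAt }; // a typed, valid Post
}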

So, validate on read, just like files.

Some care is required when evolving lexicons. From the moment some lexicon is used in the wild, you should never change which records it would consider valid. For example, you can add new optional fields, but you can't change whether some field is optional. This ensures that the new code can still read old records and that the old code will be able to read any new records. There's a linter to check for this. (For breaking changes, make a new lexicon, as you would do with a file format.)

Although this is not required, you can publish your lexicons for documentation and distribution. It's like publishing type definitions. There's no separate registry for those; you just put them into a com.atproto.lexicon.schema collection of some account, and then prove the lexicon's domain is owned by you. For example, if I wanted to publish an io.overreacted.comment lexicon, I could place it here:

Then I'd need to do some DNS setup to prove overreacted.io is mine. This would make my lexicon show up in pdsls, Lexicon Garden, and other tools.

We've already decided that the profile should live in the com.twitter.profile collection, and the post itself should live in the com.twitter.post collection:

But what about the likes?

Actually, what is a like?

...

Read the original on overreacted.io »

5 372 shares, 22 trendiness

Wine 11.0 · wine / wine · GitLab

...

Read the original on gitlab.winehq.org »

6 346 shares, 25 trendiness

Dead Internet Theory

The other day I was browsing my one-and-only social network — which is not a social network, but I'm tired of arguing with people online about it — HackerNews. It's like this dark corner of the internet, where anonymous tech enthusiasts, scientists, entrepreneurs, and internet trolls like to lurk. I like HackerNews. It helps me stay up-to-date about recent tech news (like Cloudflare acquiring Astro, which makes me happy for the Astro team, but also sad and worried since I really like Astro, and big tech has a tendency to ruin things); it mostly avoids politics; and it's not a social network.

And, in the fashion of HackerNews, I stumbled upon someone sharing their open-source project. It's great to see people work on their projects and decide to show them to the world. I think people underestimate the fear of actually shipping stuff, which involves sharing it with the world.

Upon glancing at the comment section, I started to see other anonymous participants questioning the validity of said open-source project in terms of how much of it was AI-generated. I grabbed my popcorn and started to follow the thread. More accusations started to appear: the commit timeline does not make sense; the code has AI-generated comments; etc. And at the same time, the author tried to reply to every comment claiming that they wrote this 100% without using AI.

I don't mind people using AI to write code, even though I tried to resist it myself, until eventually succumbing to it. But I think it's fair to disclose the use of AI, especially in open-source software. People on the internet are, mostly, anonymous, and it's not always possible to verify the claims or expertise of particular individuals. But as the amount of code is growing, considering that everyone is using AI to generate whatever app they want, it's impossible to verify every piece of code we are going to use. So it's fair to know, I think, if some project is AI-generated and to what extent. In the end, LLMs are just probabilistic next-token generators. And while they are getting extremely good at most simple tasks, they have the potential to wreak havoc with harder problems or edge cases (especially if there are no experienced engineers, with domain knowledge, to review the generated code).

As I was following this thread, I started to see a pattern: the comments of the author looked AI-generated too:

The use of em-dashes, which on most keyboards require a special key combination that most people don't know; and while in Markdown two dashes will render as an em-dash, this is not true of HackerNews (hence, you often see "--" in HackerNews comments, where the author is probably used to a Markdown renderer turning it into an em-dash)

The notorious "you are absolutely right", which no living human ever used before, at least not that I know of

The other notorious "let me know if you want to [do that thing] or [explore this other thing]" at the end of the sentence

I was sitting there, refreshing the page, seeing the author being confronted with the use of AI in both their code and their comments, while the author claimed to have not used AI at all. Honestly, I was thinking I was going insane. Am I wrong to suspect them? What if people DO USE em-dashes in real life? What if English is not their native language and in their native language it's fine to use phrases like "you are absolutely right"? Is this even a real person? Are the people who are commenting real?

And then it hit me. We have reached the Dead Internet. The Dead Internet Theory claims that since around 2016 (a whopping 10 years already), the internet is mainly dead, i.e. most interactions are between bots, and most content is machine-generated to either sell you stuff, or game the SEO game (in order to sell you stuff).

I'm proud to say that I spent a good portion of my teenage years on the internet, chatting and learning from real people who knew more than me. Back in the early 2000s, there were barely any bots on the internet. The average non-tech human didn't know anything about phpBB forums, and the weird people with pseudonyms who hung out in there. I spent countless hours inside IRC channels, and on phpBB forums, learning things like network programming, OS development, game development, and of course web development (which has been my profession for almost two decades now). I'm basically a graduate of the Internet University. Back then, nobody had doubts that they were talking to a human being. Sure, you could think that you spoke to a hot girl, who in reality was a fat guy, but hey, at least they were real!

But today, I no longer know what is real. I saw a picture on LinkedIn, from a real tech company, posting about their "office vibes" and their happy employees. And then I went to the comment section, and sure enough the picture was AI-generated (mangled text that does not make sense, weird hand artifacts). It was posted by an employee of the company, it showed other employees of said company, and it was altered with AI to showcase a different reality. Hell, maybe the people in the picture do not even exist!

And these are mild examples. I don't use social networks (and no, HackerNews is not a social network), but I hear horror stories about AI-generated content on Facebook, Xitter, and TikTok, ranging from photos of giants that built the pyramids in Egypt, all the way to short videos of pretty girls saying that the EU is bad for Poland.

I honestly got sad that day. Hopeless, if I could say. AI is easily available to the masses, which allows them to generate a shitload of AI slop. People no longer need to write comments or code; they can just feed this to AI agents who will generate the next "you are absolutely right" masterpiece.

I like technology. I like software engineering, and the concept of the internet where people could share knowledge and create communities. Were there malicious actors back then on the internet? For sure. But what I am seeing today makes me question whether the future we are headed to is a future where technology is useful anymore. Or, rather, it's a future where bots talk with bots, and human knowledge just gets recycled and repackaged into "10 steps to fix that [daily problem] you are having" for the sake of selling you more stuff.

Unless otherwise noted, all content is generated by a human.

...

Read the original on kudmitry.com »

7 345 shares, 20 trendiness

antirez/flux2.c: Flux 2 image generation model pure C inference

This program generates images from text prompts (and optionally from other images) using the FLUX.2-klein-4B model from Black Forest Labs. It can be used as a library as well, and is implemented entirely in C, with zero external dependencies beyond the C standard library. MPS and BLAS acceleration are optional but recommended.

I (the human here, Salvatore) wanted to test code generation with a more ambitious task over the weekend. This is the result. It is my first open source project where I wrote zero lines of code. I believe that inference systems not using the Python stack (which I do not appreciate) are a way to free open-model usage and make AI more accessible. There is already a project doing the inference of diffusion models in C/C++ that supports multiple models, and is based on GGML. I wanted to see if, with the assistance of modern AI, I could reproduce this work in a more concise way, from scratch, in a weekend. Looks like it is possible.

This code base was written with Claude Code, using the Claude Max plan, the small one of ~80 euros per month. I almost reached the limits but this plan was definitely sufficient for such a large task, which was surprising. In order to simplify the usage of this software, no quantization is used, nor do you need to convert the model. It runs directly with the safetensors model as input, using floats.

Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development. I learned quite a few things about working with non-trivial projects and AI.

# Build (choose your backend)

make mps # Apple Silicon (fastest)

# or: make blas # Intel Mac / Linux with OpenBLAS

# or: make generic # Pure C, no dependencies

# Download the model (~16GB)

pip install huggingface_hub

python download_model.py

# Generate an image

./flux -d flux-klein-model -p "A woman wearing sunglasses" -o output.png

That's it. No Python runtime, no PyTorch, no CUDA toolkit required at inference time.

Generated with: ./flux -d flux-klein-model -p "A picture of a woman in 1960 America. Sunglasses. ASA 400 film. Black and White." -W 250 -H 250 -o /tmp/woman.png, and later processed with image-to-image generation via ./flux -d flux-klein-model -i /tmp/woman.png -o /tmp/woman2.png -p "oil painting of woman with sunglasses" -v -H 256 -W 256

* Zero dependencies: Pure C implementation, works standalone. BLAS optional for ~30x speedup (Apple Accelerate on macOS, OpenBLAS on Linux)

./flux -d flux-klein-model -p "A fluffy orange cat sitting on a windowsill" -o cat.png

./flux -d flux-klein-model -p "oil painting style" -i photo.png -o painting.png -t 0.7

The -t (strength) parameter controls how much the image changes:

The seed is always printed to stderr, even when random:

To reproduce the same image, use the printed seed:

make # Show available backends

make generic # Pure C, no dependencies (slow)

make blas # BLAS acceleration (~30x faster)

make mps # Apple Silicon Metal GPU (fastest, macOS only)

For make blas on Linux, install OpenBLAS first:

# Ubuntu/Debian

sudo apt install libopenblas-dev

# Fedora

sudo dnf install openblas-devel

make clean # Clean build artifacts

make info # Show available backends for this platform

make test # Run reference image test

The model weights are downloaded from HuggingFace:

pip install huggingface_hub

python download_model.py

Inference steps: This is a distilled model that produces good results with exactly 4 sampling steps.

The text encoder is automatically released after encoding, reducing peak memory during diffusion. If you generate multiple images with different prompts, the encoder reloads automatically.

* The C implementation uses float32 throughout, while PyTorch uses bfloat16 with highly optimized MPS kernels. The next step of this project is likely to implement such an optimization, in order to reach similar speed, or at least try to approach it.

* The generic (pure C) backend is extremely slow and only practical for testing at small sizes.

Dimensions should be multiples of 16 (the VAE downsampling factor).

The library can be integrated into your own C/C++ projects. Link against libflux.a and include flux.h.

Here's a complete program that generates an image from a text prompt:

#include "flux.h"

#include <stdio.h>

gcc -o myapp myapp.c -L. -lflux -lm -framework Accelerate # macOS

gcc -o myapp myapp.c -L. -lflux -lm -lopenblas # Linux

Transform an existing image guided by a text prompt. The strength parameter controls how much the image changes:

#include "flux.h"

#include <stdio.h>

* 0.9 - Almost complete regeneration, keeps only composition

When generating multiple images with different seeds but the same prompt, you can avoid reloading the text encoder:

flux_ctx *ctx = flux_load_dir("flux-klein-model");
flux_params params = FLUX_PARAMS_DEFAULT;
params.width = 256;
params.height = 256;

/* Generate 5 variations with different seeds */
for (int i = 0; i < 5; i++) {
    flux_set_seed(1000 + i);
    flux_image *img = flux_generate(ctx, "A mountain landscape at sunset", &params);
    char filename[64];
    snprintf(filename, sizeof(filename), "landscape_%d.png", i);
    flux_image_save(img, filename);
    flux_image_free(img);
}

flux_free(ctx);

Note: The text encoder (~8GB) is automatically released after the first generation to save memory. It reloads automatically if you use a different prompt.

All functions that can fail return NULL on error. Use flux_get_error() to get a description:

flux_ctx *ctx = flux_load_dir("nonexistent-model");
if (!ctx) {
    fprintf(stderr, "Error: %s\n", flux_get_error());
    /* Prints something like: "Failed to load VAE - cannot generate images" */
    return 1;
}

flux_ctx *flux_load_dir(const char *model_dir); /* Load model, returns NULL on error */
void flux_free(flux_ctx *ctx); /* Free all resources */
flux_image *flux_generate(flux_ctx *ctx, const char *prompt, const flux_params *params);
flux_image *flux_img2img(flux_ctx *ctx, const char *prompt, const flux_image *input,
                         const flux_params *params);
flux_image *flux_image_load(const char *path); /* Load PNG or PPM */
int flux_image_save(const flux_image *img, const char *path); /* 0=success, -1=error */
flux_image *flux_image_resize(const flux_image *img, int new_w, int new_h);
void flux_image_free(flux_image *img);
void flux_set_seed(int64_t seed); /* Set RNG seed for reproducibility */
const char *flux_get_error(void); /* Get last error message */
void flux_release_text_encoder(flux_ctx *ctx); /* Manually free ~8GB (optional) */

typedef struct {
    int width;            /* Output width in pixels (default: 256) */
    int height;           /* Output height in pixels (default: 256) */
    int num_steps;        /* Denoising steps, use 4 for klein (default: 4) */
    float guidance_scale; /* CFG scale, use 1.0 for klein (default: 1.0) */
    int64_t seed;         /* Random seed, -1 for random (default: -1) */
    float strength;       /* img2img only: 0.0-1.0 (default: 0.75) */
} flux_params;

/* Initialize with sensible defaults */
#define FLUX_PARAMS_DEFAULT { 256, 256, 4, 1.0f, -1, 0.75f }

...

Read the original on github.com »

8 289 shares, 8 trendiness

The Nobel Prize and the Laureate Are Inseparable

A Nobel Peace Prize laureate receives two central symbols of the prize: a gold medal and a diploma. In addition, the prize money is awarded separately. Regardless of what may happen to the medal, the diploma, or the prize money, it is and remains the original laureate who is recorded in history as the recipient of the prize. Even if the medal or diploma later comes into someone else's possession, this does not alter who was awarded the Nobel Peace Prize.

A laureate cannot share the prize with others, nor transfer it once it has been announced. A Nobel Peace Prize can also never be revoked. The decision is final and applies for all time.

The Norwegian Nobel Committee does not see it as their role to engage in day-to-day commentary on Peace Prize laureates or the political processes that they are engaged in. The prize is awarded on the basis of the laureate's contributions by the time that the committee's decision is taken.

The Committee does not comment on laureates' subsequent statements, decisions, or actions. Any ongoing assessments or choices made by laureates must be understood as their own responsibility.

There are no restrictions in the statutes of the Nobel Foundation on what a laureate may do with the medal, the diploma, or the prize money. This means that a laureate is free to keep, give away, sell, or donate these items.

A number of Nobel medals are displayed in museums around the world. Several Nobel laureates have also chosen to give away or sell their medals:

* Kofi Annan (Peace Prize 2001): In February 2024, his widow, Nane Annan, donated both the medal and the diploma to the United Nations Office in Geneva, where they are now permanently on display. She stated that she wished his legacy to continue inspiring future generations.

* Christian Lous Lange (Peace Prize 1921): The medal of Norway's first Nobel Peace Prize laureate has been on long-term loan from the Lange family to the Nobel Peace Center in Oslo since 2005. It is now displayed in the Medal Chamber and is the only original Peace Prize medal permanently exhibited to the public in Norway.

* Dmitry Muratov (Peace Prize 2021): The Russian journalist sold his medal for USD 103.5 million in June 2022. The entire sum was donated to UNICEF's fund for Ukrainian refugee children. This is the highest price ever paid for a Nobel Prize medal.

* David Thouless (Physics Prize 2016): His family donated the medal to Trinity Hall, University of Cambridge, where it is displayed to inspire students.

* James Watson (Medicine Prize 1962): In 2014, his medal was sold for USD 4.76 million. The controversial DNA researcher stated that parts of the proceeds would be used for research purposes. The medal was purchased by Russian billionaire Alisher Usmanov, who later returned it to Watson.

* Leon Lederman (Physics Prize 1988): He sold his medal in 2015 for USD 765,002 to cover medical expenses related to dementia.

* Knut Hamsun (Literature Prize 1920): In 1943, the Norwegian author Knut Hamsun travelled to Germany and met with Propaganda Minister Joseph Goebbels. After returning to Norway, he sent his Nobel medal to Goebbels as a gesture of thanks for the meeting. Goebbels was honoured by the gift. The present whereabouts of the medal are unknown.

...

Read the original on www.nobelpeaceprize.org »

9 213 shares, 4 trendiness

Statement by Denmark, Finland, France, Germany, the Netherlands, Norway, Sweden and the United Kingdom


...

Read the original on www.bundesregierung.de »

10 202 shares, 60 trendiness

bitchat

bitchat is a decentralized peer-to-peer messaging application that operates over bluetooth mesh networks. no internet required, no servers, no phone numbers.

traditional messaging apps depend on centralized infrastructure that can be monitored, censored, or disabled. bitchat creates ad-hoc communication networks using only the devices present in physical proximity. each device acts as both client and server, automatically discovering peers and relaying messages across multiple hops to extend the network's reach.

this approach provides censorship resistance, surveillance resistance, and infrastructure independence. the network remains functional during internet outages, natural disasters, protests, or in regions with limited connectivity.

ios/macos version:

appstore: bitchat mesh

source code: https://github.com/permissionlesstech/bitchat

supports ios 16.0+ and macos 13.0+. build using xcode with xcodegen or swift package manager.

the software is released into the public domain.

...

Read the original on bitchat.free »
