10 interesting stories served every morning and every evening.




1 940 shares, 61 trendiness

Reddit’s plan to kill third-party apps sparks widespread protests

Reddit is get­ting ready to slap third-party apps with mil­lions of dol­lars in API fees, and many Reddit users are un­happy about it. A wide­spread protest is planned for June 12, with hun­dreds of sub­red­dits plan­ning to go dark for 48 hours.

Reddit started life as a geeky site, but as it has aged, it has been try­ing to work more like a tra­di­tional so­cial net­work. Part of that push in­cluded the de­vel­op­ment of a first-party app for mo­bile de­vices, but the 17-year-old site only launched an of­fi­cial app in 2016. Before then, it was up to third-party apps to pick up the slack, and even now, the rev­enue-fo­cused of­fi­cial app is gen­er­ally con­sid­ered in­fe­rior to third-party op­tions.

Reasonable API pricing would not necessarily mean the death of third-party apps, but the pricing Reddit communicated to some of its biggest developers is far above what other sites charge. The popular iOS client Apollo announced it was facing a $20 million-a-year bill. Apollo’s developer, Christian Selig, hasn’t announced a shutdown but admitted, “I don’t have that kind of money or would even know how to charge it to a credit card.”

Other third-party apps are in the same boat. The developer of Reddit is Fun has said the API costs will “likely kill” the app. Narwhal, another third-party app, will be “dead in 30 days” when the pricing kicks in on July 1, according to its developer.

Selig broke the news of the new pricing scheme, saying, “I don’t see how this pricing is anything based in reality or remotely reasonable.” Selig said Reddit wants to charge $12,000 for 50 million requests, while Imgur, an image-focused site that’s similar to Reddit, charges $166 for 50 million API calls. A post pinned to the top of the new /r/Save3rdPartyApps subreddit calls for a pricing decrease “by a factor of 15 to 20,” saying that would put API calls “in territory more closely comparable to other sites, like Imgur.”
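
For scale, the back-of-the-envelope arithmetic implied by those figures (using only the numbers quoted above) works out as:

$$\frac{\$12{,}000 \text{ per 50M calls (Reddit)}}{\$166 \text{ per 50M calls (Imgur)}} \approx 72, \qquad \frac{\$12{,}000}{15} \approx \$800, \qquad \frac{\$12{,}000}{20} = \$600.$$

In other words, Reddit’s quoted rate is roughly 72 times Imgur’s, and the requested 15-to-20-fold reduction would land at roughly $600 to $800 per 50 million calls, a few times Imgur’s price rather than around seventy times it.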

Reddit is Fun (RIF) developer /u/talklittle said Reddit’s API terms also require “blocking ads in third-party apps, which make up the majority of RIF’s revenue.” Talklittle says the pricing and ad restriction will force “a paid subscription model” onto any surviving apps. Reddit’s APIs also exclude adult content, a major draw for the site.

While Reddit is a company that makes hundreds of millions of dollars a year, the content moderation and community building is all done by volunteer moderators. This means that you get fun civil wars, where the users and mods can take up arms against the site administrators. The list of subreddits participating in the June 12 shutdown is currently over a thousand strong. Many of the site’s most popular subreddits, like r/gaming, r/Music, and r/Pics, are participating, and each has over 30 million subscribers. The Reddit administrators have yet to respond.

Advance Publications, which owns Ars Technica par­ent Condé Nast, is the largest share­holder in Reddit.

...

Read the original on arstechnica.com »

2 578 shares, 30 trendiness

Non-Ordinary States of Consciousness Contest

Check out the orig­i­nal con­test an­nounce­ment and rules here: https://​qri.org/​blog/​con­test

We strongly recommend viewing the content at its highest resolution on a large screen to perceive the effects in their entirety.

Judges: A panel made up of members of QRI’s international phenomenologist network rated each piece from 0 to 10 on these three criteria:

Effectiveness: Distinguishes be­tween sober and trip­ping peo­ple - is it just a lit­tle eas­ier to see trip­ping but you can kinda see it any­way? Or is it im­pos­si­ble to see sober and ef­fort­lessly avail­able above a cer­tain dose?

Specificity: How specific and concrete the information encoded is - think “how many bits per second can be transmitted with this piece”.

Aesthetic Value: Does this look like an art piece? Can it pass as a stan­dard work of art at a fes­ti­val that peo­ple would en­joy whether trip­ping or not? Note: smaller con­tri­bu­tion to over­all score.

The scores were weighted by the level of experience of each participant (based on a combination of self-report and group consensus). And to get the final score, a weighted average of the three features was taken, where “Effectiveness” was multiplied by 3, “Specificity” by 2, and “Aesthetic Value” by 1. As with the Replications contest submissions, the weighted average excluded the ratings of one of the participants for pieces that they themselves submitted (so that nobody would be evaluating their own submissions).
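
One way to write the scheme described above as a single formula (the normalization by the weight totals is an assumption; the post only gives the relative weights):

$$\text{score} = \frac{\sum_j w_j \,(3E_j + 2S_j + 1A_j)}{6 \sum_j w_j}$$

where $E_j$, $S_j$, and $A_j$ are judge $j$’s 0-10 ratings for Effectiveness, Specificity, and Aesthetic Value, $w_j$ is that judge’s experience weight, and $w_j$ is set to zero for any piece the judge submitted themselves.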

The main result of this exercise was that only three submissions seemed to have any promising psychedelic cryptography effects. The three pieces that won stood out head and shoulders (and trunk and even knees and ankles) above the rest. It turns out that in order to decode these pieces you do require a substantial level of tracers, so only members of the committee who had a high enough level of visual effects were able to see the encoded messages. Some of the members of the panel reported that once you saw the messages during the state you could then also see them sober by using the right attentional tricks. But at least two members of the panel who reported seeing the messages while on mushrooms or ayahuasca were unable to then see them sober after the fact, no matter how much they tried.

The three winners indeed are using the first classic PsyCrypto “encoding method” described in How to secretly communicate with people on LSD. Namely, a method that takes advantage of tracer effects to “write out” images or text over time (see also the fictional Rainbow God Burning Man theme camp where this idea is explored in the context of festivals). That is, the fact that bright colors last longer in your visual field while on psychedelics can be used to slowly construct images in the visual field; sober individuals see lines and squiggles since the features of the hidden message don’t linger long enough for them to combine into a coherent message. All of the judges were stunned by the fact that the pieces actually worked. It works! PsyCrypto works!

At a the­o­ret­i­cal level, this con­fir­ma­tion is sig­nif­i­cant be­cause it is the first clear demon­stra­tion of a real per­cep­tual com­pu­ta­tional ad­van­tage of psy­che­delic states of con­scious­ness. We an­tic­i­pate a rather in­cred­i­ble wave of PsyCrypto emerg­ing within a year or two at fes­ti­vals, and then in movies (even main­stream ones) within five years. It will seep into the cul­ture at large in time. Just re­mem­ber… you saw it first here! :-)

It is worth pointing out that there are possible alternative PsyCrypto encoding methods, and that there are two ways of identifying them. First, a strategy of casting a very wide net of possible stimuli to experience on psychedelics and in that way arriving at patterns only tripping people can see “from the bottom up” is promising. If this does work, it then opens up new avenues for scientific research. Meaning that as we find PsyCrypto encoding schemes, we demonstrate undeniable computational advantages of psychedelic states of consciousness, which in turn is significant for neuroscience and consciousness research. And second, new advancements in neuroscience can be used “from the top down” to create PsyCrypto encoding methods *from first principles*. Here, too, this will be synergistic with consciousness research: as artists figure out how to refine the techniques to make them work better, they will also be, inadvertently, giving neuroscientists pointers for further promising work.

Title: Can You see us?

Description: “Just a video loop of a bunch of weird wavy nooodles, nothing to see here, right?”

Encryption method: “I can’t linguistically describe it because it’s a lot of trial and error, but so far, the message has been decoded by a person who didn’t even know that there was supposed to be a message on 150ug 1plsd. I believe that any psychedelic/dissociative substance that causes heavy tracers could be helpful in decoding the message. Also, a person needs to be trained to change their mode of focus to see it. Once they see it, they can’t unsee it.”

One of the judges estimated that the “LSD-equivalent” threshold of tracers needed for being able to easily decode this piece was approximately 150μg, whereas another one estimated it at roughly 100μg. What made this image stand out, and receive the first prize relative to the other two, was how relatively easy it was to decode in the right state of mind. In other words, this piece easily distinguishes people who are sufficiently affected by psychedelics from those who simply aren’t high enough. What’s more, it doesn’t require a lot of time, dedication, or effort. The encoded information simply, allegedly, “pops out” in the right state of consciousness.

Title: We Are Here. Lets talk

Description: “Short video loop containing a secret message from outer space. Can you see it?”

Encryption method: “The message text is illuminated in scanner fashion. The speed of sweep is dependent on the video frame rate, so whenever a person is in an altered state and experiencing heavy tracers they would see a clear message instead of one that’s broken apart. Entire message can be seen clearly by using video editing software and applying a tracer/echo effect and having 60 images in a trail that each are 0.033 seconds after the previous. This process can also be repeated with code.

The mes­sage can be seen in any al­tered state that in­duces heavy vi­sual trac­ers, like medium-high doses of the most pop­u­lar psy­che­delics, it also de­pends on a per­son at which doses they would start see­ing heavy trac­ers. If ex­pe­ri­enc­ing heavy trac­ers and still un­able to see the mes­sage, try look­ing at the cen­ter of a video and re­lax­ing your eyes and de­fo­cus­ing them.”
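
Since the artist notes the tracer/echo compositing can also be reproduced in code, here is a minimal sketch of that idea in C (not the contest’s actual tooling): each output frame is the per-pixel maximum over the previous 60 input frames, a crude stand-in for a roughly two-second trail at 30 fps. Frame decoding is assumed to happen elsewhere; frames here are plain grayscale buffers.

#include <stdint.h>
#include <string.h>

#define TRAIL 60  /* frames kept in the simulated tracer trail */

/* Composite one output frame as the per-pixel maximum over the last TRAIL
 * input frames, approximating a tracer/echo effect in software.
 * history: buffer holding the last TRAIL frames, width*height bytes each. */
void tracer_composite(const uint8_t *history, int width, int height,
                      uint8_t *out)
{
    size_t npix = (size_t)width * height;
    memset(out, 0, npix);
    for (int f = 0; f < TRAIL; f++) {
        const uint8_t *frame = history + (size_t)f * npix;
        for (size_t i = 0; i < npix; i++) {
            if (frame[i] > out[i])
                out[i] = frame[i];  /* keep the brightest value seen in the trail */
        }
    }
}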

As with the sub­mis­sion that got the 1st prize, the same judges es­ti­mated 150μg and 100μg of LSD, re­spec­tively, as the thresh­old needed to eas­ily de­code the se­cret mes­sages in this piece. That said, de­cod­ing this piece turned out to be more dif­fi­cult for the ma­jor­ity of the judges, and it was­n’t as im­me­di­ately read­able as the first one. It takes more time, ef­fort, and ded­i­ca­tion to put the mes­sage to­gether in one’s vi­sual field than the first one.

People also com­mented on the aes­thetic rich­ness of this piece, which gave it an ex­tra boost.

Description: “Artwork depicts the connection between the subconscious and the universal energy. The key of everything is defined by the observer of their own mind.”

Encryption method: “Images edited in a way where only one going through a psychedelic experience and seeing large amounts of tracers would see the encrypted message fully. Based on ‘How To Secretly Communicate With People On LSD’ first example of tracer-based encrypted message. I believe that DMT or 150-200ug of LSD or any substance delivering the tracer visual effect could be used to decode the artwork.”

The judges who were able to see the message in this piece had very different opinions on how intense the effects of psychedelics needed to be in order to easily decode the information hidden in it. One of the judges said that in order to read this easily with ayahuasca you would need the dose equivalent to approximately 40mg of vaporized DMT (i.e. a really strong, breakthrough-level trip). This seems to be in stark contrast with the opinion of another judge, who estimated that the average person would need as little as 75ug of LSD to decode it.

The judges spec­u­lated that see­ing the hid­den in­for­ma­tion in this piece was eas­ier to do on DMT than other psy­che­delics like mush­rooms (for in­ten­sity-ad­justed lev­els of al­ter­ation). When asked why they thought this was the case, it was spec­u­lated that this dif­fer­ence was likely due to the crisp­ness and char­ac­ter­is­tic spa­tiotem­po­ral fre­quen­cies of DMT rel­a­tive to mush­rooms. DMT sim­ply pro­duces more de­tailed and high-res­o­lu­tion trac­ers, which seem to be use­ful prop­er­ties for de­cod­ing this piece in par­tic­u­lar.

Alternatively, one of the judges proposed that, on the one hand, the effects of mushrooms on the visual field seem to be less dependent on the color palette of the stimuli. Therefore, whether the PsyCrypto uses colors or not doesn’t matter very much if one is using mushrooms. DMT, on the other hand, makes subtle differences in colors look larger, as if the effects were to “expand the color gamut” and amplify the perception of subtle gradients of hues (cf. color control), which in this case is beneficial to decode the “psycrypted” information.

Additionally, all of the judges agreed that this piece had very sig­nif­i­cant aes­thetic value. It looks ex­tremely HD and har­mo­nious in such states of con­scious­ness, which is a sig­nif­i­cant boost and per­haps even a Psychedelic Cryptography of its own (meaning that the in­crease in aes­thetic value in such states is suf­fi­ciently sur­pris­ing that it’s a packet of in­for­ma­tion all by it­self).

Despite the very high aesthetic value of this piece and the fact that it did work as a PsyCrypto tool, the reason it got third place was that (a) it is still difficult to decode on psychedelics, and (b) it is not impossible to decode sober. In other words, it is less secure and discriminating than the other two, and therefore not as good as the others in terms of its PsyCrypto properties. It is, however, still very impressive and effective in absolute terms.

Congratulations to the win­ners and to all of the par­tic­i­pants! We look for­ward to see­ing se­cret mes­sages at PsyTrance fes­ti­vals and Psychedelic Conferences in­spired by this work from now on ;-)

For at­tri­bu­tion, please cite this work as

...

Read the original on qri.org »

3 563 shares, 61 trendiness

ggml.ai

ggml is a ten­sor li­brary for ma­chine learn­ing to en­able large mod­els and high per­for­mance on com­mod­ity hard­ware. It is used by llama.cpp and whis­per.cpp
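
To give a flavour of what a tensor library means in practice: ggml programs build a compute graph in C and then evaluate it. The sketch below is modeled from memory on the early examples in the ggml repository; graph-building and compute entry points have changed between versions, so treat the exact function names and signatures as assumptions about that era of the API rather than a reference.

#include <stdio.h>
#include "ggml.h"

int main(void) {
    /* all tensors live inside one pre-allocated context */
    struct ggml_init_params params = {
        .mem_size   = 16 * 1024 * 1024,
        .mem_buffer = NULL,
    };
    struct ggml_context * ctx = ggml_init(params);

    /* scalars x, a, b and the expression f = a*x*x + b */
    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor * f = ggml_add(ctx, ggml_mul(ctx, a, ggml_mul(ctx, x, x)), b);

    /* build the graph, set the inputs, evaluate */
    struct ggml_cgraph gf = ggml_build_forward(f);
    ggml_set_f32(x, 2.0f);
    ggml_set_f32(a, 3.0f);
    ggml_set_f32(b, 4.0f);
    ggml_graph_compute(ctx, &gf);  /* older API; newer releases differ */

    printf("f = %f\n", ggml_get_f32_1d(f, 0));  /* 3*2*2 + 4 = 16 */

    ggml_free(ctx);
    return 0;
}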

Here are some sam­ple per­for­mance stats on Apple Silicon June 2023:

Minimal

We like sim­plic­ity and aim to keep the code­base as small and as sim­ple as pos­si­ble

Open Core

The li­brary and re­lated pro­jects are freely avail­able un­der the MIT li­cense. The de­vel­op­ment process is open and every­one is wel­come to join. In the fu­ture we may choose to de­velop ex­ten­sions that are li­censed for com­mer­cial use

Explore and have fun!

We built ggml in the spirit of play. Contributors are en­cour­aged to try crazy ideas, build wild demos, and push the edge of what’s pos­si­ble

whis­per.cpp

The pro­ject pro­vides a high-qual­ity speech-to-text so­lu­tion that runs on Mac, Windows, Linux, iOS, Android, Raspberry Pi, and Web. Used by rewind.ai

llama.cpp

The pro­ject demon­strates ef­fi­cient in­fer­ence on Apple Silicon hard­ware and ex­plores a va­ri­ety of op­ti­miza­tion tech­niques and ap­pli­ca­tions of LLMs

The best way to sup­port the pro­ject is by con­tribut­ing to the code­base

If you wish to fi­nan­cially sup­port the pro­ject, please con­sider be­com­ing a spon­sor to any of the con­trib­u­tors that are al­ready in­volved:

ggml.ai is a com­pany founded by Georgi Gerganov to sup­port the de­vel­op­ment of ggml. Nat Friedman

and Daniel Gross pro­vided the pre-seed fund­ing.

We are cur­rently seek­ing to hire full-time de­vel­op­ers that share our vi­sion and would like to help ad­vance the idea of on-de­vice in­fer­ence. If you are in­ter­ested and if you have al­ready been a con­trib­u­tor to any of the re­lated pro­jects, please con­tact us at jobs@ggml.ai

For any busi­ness-re­lated top­ics, in­clud­ing sup­port or en­ter­prise de­ploy­ment, please con­tact us at sales@ggml.ai

...

Read the original on ggml.ai »

4 539 shares, 41 trendiness

US tightens crackdown on crypto with lawsuits against Coinbase, Binance

NEW YORK, June 6 (Reuters) - The top U.S. securities regulator sued cryptocurrency platform Coinbase on Tuesday, the second lawsuit in two days against a major crypto exchange, in a dramatic escalation of a crackdown on the industry and one that could dramatically transform a market that has largely operated outside regulation.

The U.S. Securities and Exchange Commission (SEC) on Monday took aim at Binance, the world’s largest cryptocurrency exchange. The SEC accuses Binance and its CEO Changpeng Zhao of operating “a web of deception”.

If successful, the lawsuits could transform the crypto market by successfully asserting the SEC’s jurisdiction over the industry, which for years has argued that tokens do not constitute securities and should not be regulated by the SEC.

“The two cases are different, but overlap and point in the same direction: the SEC’s increasingly aggressive campaign to bring cryptocurrencies under the jurisdiction of the federal securities laws,” said Kevin O’Brien, a partner at Ford O’Brien Landy and a former federal prosecutor, adding, however, that the SEC has not previously taken on such major crypto players.

“If the SEC prevails in either case, the cryptocurrency industry will be transformed.”

In its com­plaint filed in Manhattan fed­eral court, the SEC said Coinbase has since at least 2019 made bil­lions of dol­lars by op­er­at­ing as a mid­dle­man on crypto trans­ac­tions, while evad­ing dis­clo­sure re­quire­ments meant to pro­tect in­vestors.

The SEC said Coinbase traded at least 13 crypto as­sets that are se­cu­ri­ties that should have been reg­is­tered, in­clud­ing to­kens such as Solana, Cardano and Polygon.

Coinbase suf­fered about $1.28 bil­lion of net cus­tomer out­flows fol­low­ing the law­suit, ac­cord­ing to ini­tial es­ti­mates from data firm Nansen. Shares of Coinbase’s par­ent Coinbase Global Inc (COIN. O) closed down $7.10, or 12.1%, at $51.61 af­ter ear­lier falling as much as 20.9%. They are up 46% this year.

Paul Grewal, Coinbase’s general counsel, in a statement said the company will continue operating as usual and has “demonstrated commitment to compliance.”

Oanda senior market analyst Ed Moya said the SEC “looks like it’s playing Whac-A-Mole with crypto exchanges,” and because most exchanges offer a range of tokens that operate on blockchain protocols targeted by regulators, “it seems like this is just the beginning.”

Leading cryp­tocur­rency bit­coin has been a para­dox­i­cal ben­e­fi­ciary of the crack­down.

After an ini­tial plunge to a nearly three-month low of $25,350 fol­low­ing the Binance suit, bit­coin re­bounded by more than $2,000, ex­ceed­ing the pre­vi­ous day’s high.

“The SEC is making life nearly impossible for several altcoins and that is actually driving some crypto traders back into bitcoin,” explained Oanda’s Moya.

Securities, as opposed to other assets such as commodities, are strictly regulated and require detailed disclosures to inform investors of potential risks. The Securities Act of 1933 outlined a definition of the term “security,” yet many experts rely on two U.S. Supreme Court cases to determine if an investment product constitutes a security.

SEC Chair Gary Gensler has long said tokens constitute securities, and the agency has steadily asserted its authority over the crypto market, focusing initially on the sale of tokens and interest-bearing crypto products. More recently, it has taken aim at unregistered crypto broker-dealer, exchange trading and clearing activity.

While a few crypto companies are licensed as alternative trading systems, a type of trading platform used by brokers to trade listed securities, no crypto platform operates as a full-blown stock exchange. The SEC also this year sued Beaxy Digital and Bittrex Global for failing to register as an exchange, clearing house and broker.

“The whole business model is built on a noncompliance with the U.S. securities laws and we’re asking them to come into compliance,” Gensler told CNBC.

Crypto companies dispute that tokens meet the definition of a security, say the SEC’s rules are ambiguous, and argue that the SEC is overstepping its authority in trying to regulate them. Still, many companies have boosted compliance, shelved products and expanded outside the country in response to the crackdown.

Kristin Smith, CEO of the Blockchain Association trade group, re­jected Gensler’s ef­forts to over­see the in­dus­try.

“We’re confident the courts will prove Chair Gensler wrong in due time,” she said.

Founded in 2012, Coinbase re­cently served more than 108 mil­lion cus­tomers and ended March with $130 bil­lion of cus­tomer crypto as­sets and funds on its bal­ance sheet. Transactions gen­er­ated 75% of its $3.15 bil­lion of net rev­enue last year.

Tuesday’s SEC law­suit seeks civil fines, the re­coup­ing of ill-got­ten gains and in­junc­tive re­lief.

On Monday, the SEC accused Binance of inflating trading volumes, diverting customer funds, improperly commingling assets, failing to restrict U.S. customers from its platform, and misleading customers about its controls.

Binance pledged to vigorously defend itself against the lawsuit, which it said reflected the SEC’s “misguided and conscious refusal” to provide clarity to the crypto industry.

Customers pulled around $790 million from Binance and its U.S. affiliate following the lawsuit, Nansen said.

On Tuesday, the SEC filed a motion to freeze assets belonging to Binance.US, Binance’s U.S. affiliate. The holding company of Binance is based in the Cayman Islands.

...

Read the original on www.reuters.com »

5 427 shares, 32 trendiness

OpenGL 3.1 on Asahi Linux

Upgrade your Asahi Linux sys­tems, be­cause your graph­ics dri­vers are get­ting a big boost: leapfrog­ging from OpenGL 2.1 over OpenGL 3.0 up to OpenGL 3.1! Similarly, the OpenGL ES 2.0 sup­port is bump­ing up to OpenGL ES 3.0. That means more playable games and more func­tion­ing ap­pli­ca­tions.

Back in December, I teased an early screen­shot of SuperTuxKart’s de­ferred ren­derer work­ing on Asahi, us­ing OpenGL ES 3.0 fea­tures like mul­ti­ple ren­der tar­gets and in­stanc­ing. Now you too can en­joy SuperTuxKart with ad­vanced light­ing the way it’s meant to be:

As before, these drivers are experimental and not yet conformant to the OpenGL or OpenGL ES specifications. For now, you’ll need to run our -edge packages to opt in to the work-in-progress drivers, understanding that there may be bugs. Please refer to our previous post explaining how to install the drivers and how to report bugs to help us improve.

With that dis­claimer out of the way, there’s a LOT of new func­tion­al­ity packed into OpenGL 3.0, 3.1, and OpenGL ES 3.0 to make this re­lease. Highlights in­clude:

Vulkan and OpenGL sup­port mul­ti­sam­pling, short for mul­ti­sam­pled anti-alias­ing. In graph­ics, alias­ing causes jagged di­ag­o­nal edges due to ren­der­ing at in­suf­fi­cient res­o­lu­tion. One so­lu­tion to alias­ing is ren­der­ing at higher res­o­lu­tions and scal­ing down. Edges will be blurred, not jagged, which looks bet­ter. Multisampling is an ef­fi­cient im­ple­men­ta­tion of that idea.

A mul­ti­sam­pled im­age con­tains mul­ti­ple sam­ples for every pixel. After ren­der­ing, a mul­ti­sam­pled im­age is re­solved to a reg­u­lar im­age with one sam­ple per pixel, typ­i­cally by av­er­ag­ing the sam­ples within a pixel.
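
At the API level the application side of this is straightforward; roughly (desktop OpenGL 3.0-style calls shown as a sketch, with context creation and error checking omitted, and width/height assumed to be defined), a multisampled framebuffer is created up front and resolved with a blit at the end of the frame:

/* Create a 4x multisampled color attachment and bind it to an FBO. */
GLuint msaa_rb, msaa_fbo;
glGenRenderbuffers(1, &msaa_rb);
glBindRenderbuffer(GL_RENDERBUFFER, msaa_rb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);

glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaa_rb);

/* ... draw the scene into msaa_fbo ... */

/* Resolve: average each pixel's samples into a single-sampled target
 * (here the default framebuffer) with a framebuffer blit. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);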

Apple GPUs sup­port mul­ti­sam­pled im­ages and frame­buffers. There’s quite a bit of typ­ing to plumb the pro­gram­mer’s view of mul­ti­sam­pling into the form un­der­stood by the hard­ware, but there’s no fun­da­men­tal in­com­pat­i­bil­ity.

The trouble comes with sample shading. Recall that in modern graphics, the colour of each fragment is determined by running a fragment shader given by the programmer. If the fragments are pixels, then each sample within that pixel gets the same colour. Running the fragment shader once per pixel still benefits from multisampling thanks to higher quality rasterization, but it’s not as good as actually rendering at a higher resolution. If instead the fragments are samples, each sample gets a unique colour, equivalent to rendering at a higher resolution (supersampling). In Vulkan and OpenGL, fragment shaders generally run per-pixel, but with “sample shading”, the application can force the fragment shader to run per-sample.
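
For reference, the opt-in on the application side is tiny. In desktop OpenGL it comes from ARB_sample_shading / OpenGL 4.0 (OpenGL ES picks it up in 3.1, one step past the ES 3.0 release discussed here), and a fragment shader that reads gl_SampleID is forced to run per sample as well. A sketch:

/* Ask that the fragment shader run for every covered sample rather than
 * once per pixel; 1.0 means "all samples". */
glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(1.0f);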

How does sam­ple shad­ing work from the dri­vers’ per­spec­tive? On a typ­i­cal GPU, it is sim­ple: the dri­ver com­piles a frag­ment shader that cal­cu­lates the colour of a sin­gle sam­ple, and sets a hard­ware bit to ex­e­cute it per-sam­ple in­stead of per-pixel. There is only one bit of state as­so­ci­ated with sam­ple shad­ing. The hard­ware will ex­e­cute the frag­ment shader mul­ti­ple times per pixel, writ­ing out pixel colours in­de­pen­dently.

Like older GPUs that did not support sample shading, AGX always executes the shader once per pixel, never once per sample. AGX does support sample shading, though.

How? The AGX in­struc­tion set al­lows pixel shaders to out­put dif­fer­ent colours to each sam­ple. The in­struc­tion used to out­put a colour takes a set of sam­ples to mod­ify, en­coded as a bit mask. The de­fault all-1’s mask writes the same value to all sam­ples in a pixel, but a mask set­ting a sin­gle bit will write only the sin­gle cor­re­spond­ing sam­ple.

This design is unusual, and it requires driver backflips to translate “fragment shaders” into hardware pixel shaders. How do we do it?

Physically, the hard­ware ex­e­cutes our shader once per pixel. Logically, we’re sup­posed to ex­e­cute the ap­pli­ca­tion’s frag­ment shader once per sam­ple. If we know the num­ber of sam­ples per pixel, then we can wrap the ap­pli­ca­tion’s shader in a loop over each sam­ple. So, if the orig­i­nal frag­ment shader is:

interpolated colour = interpolate at current sample(input colour);
output current sample(interpolated colour);

then we will trans­form the pro­gram to the pixel shader:

for (sample = 0; sample < number of samples; ++sample) {
    sample mask = (1 << sample);
    interpolated colour = interpolate at sample(input colour, sample);
    output samples(sample mask, interpolated colour);
}

The orig­i­nal frag­ment shader runs in­side the loop, once per sam­ple. Whenever it in­ter­po­lates in­puts at the cur­rent sam­ple po­si­tion, we change it to in­stead in­ter­po­late at a spe­cific sam­ple given by the loop counter sam­ple. Likewise, when it out­puts a colour for a sam­ple, we change it to out­put the colour to the sin­gle sam­ple given by the loop counter.

If the story ended here, this mech­a­nism would be silly. Adding sam­ple masks to the in­struc­tion set is more com­pli­cated than a sin­gle bit to in­voke the shader mul­ti­ple times, as other GPUs do. Even Apple’s own Metal dri­ver has to im­ple­ment this dance, be­cause Metal has a sim­i­lar ap­proach to sam­ple shad­ing as OpenGL and Vulkan. With all this ex­tra com­plex­ity, is there a ben­e­fit?

If we gen­er­ated that loop at the end, maybe not. But if we know at com­pile-time that sam­ple shad­ing is used, we can run our full op­ti­mizer on this sam­ple loop. If there is an ex­pres­sion that is the same for all sam­ples in a pixel, it can be hoisted out of the loop. Instead of cal­cu­lat­ing the same value mul­ti­ple times, as other GPUs do, the value can be cal­cu­lated just once and reused for each sam­ple. Although it com­pli­cates the dri­ver, this ap­proach to sam­ple shad­ing is­n’t Apple cut­ting cor­ners. If we slapped on the loop at the end and did no op­ti­miza­tions, the re­sult­ing code would be com­pa­ra­ble to what other GPUs ex­e­cute in hard­ware. There might be slight dif­fer­ences from spawn­ing fewer threads but ex­e­cut­ing more con­trol flow in­struc­tions, but that’s mi­nor. Generating the loop early and run­ning the op­ti­mizer en­ables bet­ter per­for­mance than pos­si­ble on other GPUs.

So is the mech­a­nism only an op­ti­miza­tion? Did Apple stum­ble on a bet­ter ap­proach to sam­ple shad­ing that other GPUs should adopt? I would­n’t be so sure.

Let’s pull the cur­tain back. AGX has its roots as a mo­bile GPU in­tended for iPhones, with sig­nif­i­cant PowerVR her­itage. Even if it pow­ers Mac Pros to­day, the mo­bile legacy means AGX prefers soft­ware im­ple­men­ta­tions of many fea­tures that desk­top GPUs im­ple­ment with ded­i­cated hard­ware.

Blending is an op­er­a­tion in graph­ics APIs to com­bine the frag­ment shader out­put colour with the ex­ist­ing colour in the frame­buffer. It is usu­ally used to im­ple­ment al­pha blend­ing, to let the back­ground poke through translu­cent ob­jects.
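
In OpenGL the application asks for the usual “source over” alpha blend with a couple of state calls; on AGX it is this state, rather than a fixed-function hardware configuration, that the driver ends up compiling into the shader, as the transformed examples below show. The application side, as a sketch:

/* result = src.alpha * src + (1 - src.alpha) * dst */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);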

When mul­ti­sam­pling is used with­out sam­ple shad­ing, al­though the frag­ment shader only runs once per pixel, blend­ing hap­pens per-sam­ple. Even if the frag­ment shader out­puts the same colour to each sam­ple, if the frame­buffer al­ready had dif­fer­ent colours in dif­fer­ent sam­ples, blend­ing needs to hap­pen per-sam­ple to avoid los­ing that in­for­ma­tion al­ready in the frame­buffer.

A tra­di­tional desk­top GPU blends with ded­i­cated hard­ware. In the mo­bile space, there’s a mix of ded­i­cated hard­ware and soft­ware. On AGX, blend­ing is purely soft­ware. Rather than con­fig­ure blend­ing hard­ware, the dri­ver must pro­duce vari­ants of the frag­ment shader that in­clude in­struc­tions to im­ple­ment the de­sired blend mode. With al­pha blend­ing, a frag­ment shader like:

colour = calculate lighting();
output(colour);

becomes:

colour = calculate lighting();
dest = load destination colour;
alpha = colour.alpha;
blended = (alpha * colour) + ((1 - alpha) * dest);
output(blended);

Blending hap­pens per sam­ple. Even if the ap­pli­ca­tion in­tends to run the frag­ment shader per pixel, the shader must run per sam­ple for cor­rect blend­ing. Compared to other GPUs, this ap­proach to blend­ing would regress per­for­mance when blend­ing and mul­ti­sam­pling are en­abled but sam­ple shad­ing is not.

On the other hand, ex­pos­ing mul­ti­sam­ple pixel shaders to the dri­ver solves the prob­lem neatly. If both the blend­ing and the mul­ti­sam­ple state are known, we can first in­sert in­struc­tions for blend­ing, and then wrap with the sam­ple loop. The above pro­gram would then be­come:

for (sample = 0; sample < number of samples; ++sample) {
    colour = calculate lighting();
    dest = load destination colour at sample (sample);
    alpha = colour.alpha;
    blended = (alpha * colour) + ((1 - alpha) * dest);
    sample mask = (1 << sample);
    output samples(sample mask, blended);
}

In this form, the frag­ment shader is as­ymp­tot­i­cally worse than the ap­pli­ca­tion wanted: the frag­ment shader is ex­e­cuted in­side the loop, run­ning per-sam­ple un­nec­es­sar­ily.

Have no fear, the op­ti­mizer is here. Since colour is the same for each sam­ple in the pixel, it does not de­pend on the sam­ple ID. The com­piler can move the en­tire orig­i­nal frag­ment shader (and re­lated ex­pres­sions) out of the per-sam­ple loop:

colour = calculate lighting();
alpha = colour.alpha;
inv_alpha = 1 - alpha;
colour_alpha = alpha * colour;

for (sample = 0; sample < number of samples; ++sample) {
    dest = load destination colour at sample (sample);
    blended = colour_alpha + (inv_alpha * dest);
    sample mask = (1 << sample);
    output samples(sample mask, blended);
}

Now blend­ing hap­pens per sam­ple but the ap­pli­ca­tion’s frag­ment shader runs just once, match­ing the per­for­mance char­ac­ter­is­tics of tra­di­tional GPUs. Even bet­ter, all of this hap­pens with­out any spe­cial work from the com­piler. There’s no magic mul­ti­sam­pling op­ti­miza­tion hap­pen­ing here: it’s just a loop.

By the way, what do we do if we don’t know the blend­ing and mul­ti­sam­ple state at com­pile-time? Hope is not lost…

While OpenGL ES 3.0 is an im­prove­ment over ES 2.0, we’re not done. In my work-in-progress branch, OpenGL ES 3.1 sup­port is nearly fin­ished, which will un­lock com­pute shaders.

The fi­nal goal is a Vulkan dri­ver run­ning mod­ern games. We’re a while away, but the base­line Vulkan 1.0 re­quire­ments par­al­lel OpenGL ES 3.1, so our work trans­lates to Vulkan. For ex­am­ple, the mul­ti­sam­pling com­piler passes de­scribed above are com­mon code be­tween the dri­vers. We’ve tested them against OpenGL, and now they’re ready to go for Vulkan.

And yes, the team is al­ready work­ing on Vulkan.

Until then, you’re one pac­man -Syu away from en­joy­ing OpenGL 3.1!

...

Read the original on asahilinux.org »

6 362 shares, 19 trendiness

Apple Vision

On the busi­ness, strat­egy, and im­pact of tech­nol­ogy.

It re­ally is one of the best prod­uct names in Apple his­tory: Vision is a de­scrip­tion of a prod­uct, it is an as­pi­ra­tion for a use case, and it is a cri­tique on the sort of so­ci­ety we are build­ing, be­hind Apple’s lead­er­ship more than any­one else.

I am speak­ing, of course, about Apple’s new mixed re­al­ity head­set that was an­nounced at yes­ter­day’s WWDC, with a planned ship date of early 2024, and a price of $3,499. I had the good for­tune of us­ing an Apple Vision in the con­text of a con­trolled demo — which is an im­por­tant grain of salt, to be sure — and I found the ex­pe­ri­ence ex­tra­or­di­nary.

It’s far bet­ter than I ex­pected, and I had high ex­pec­ta­tions.

The high ex­pec­ta­tions came from the fact that not only was this prod­uct be­ing built by Apple, the undis­puted best hard­ware maker in the world, but also be­cause I am, un­like many, rel­a­tively op­ti­mistic about VR. What sur­prised me is that Apple ex­ceeded my ex­pec­ta­tions on both counts: the hard­ware and ex­pe­ri­ence were bet­ter than I thought pos­si­ble, and the po­ten­tial for Vision is larger than I an­tic­i­pated. The so­ci­etal im­pacts, though, are much more com­pli­cated.

I have, for as long as I have writ­ten about the space, high­lighted the dif­fer­ences be­tween VR (virtual re­al­ity) and AR (augmented re­al­ity). From a 2016 Update:

I think it’s useful to make a distinction between virtual and augmented reality. Just look at the names: “virtual” reality is about an immersive experience completely disconnected from one’s current reality, while “augmented” reality is about, well, augmenting the reality in which one is already present. This is more than a semantic distinction about different types of headsets: you can divide nearly all of consumer technology along this axis. Movies and videogames are about different realities; productivity software and devices like smartphones are about augmenting the present. Small wonder, then, that all of the big virtual reality announcements are expected to be video game and movie related.

Augmentation is more in­ter­est­ing: for the most part it seems that aug­men­ta­tion prod­ucts are best suited as spokes around a hub; a car’s in­fo­tain­ment sys­tem, for ex­am­ple, is very much a de­vice that is fo­cused on the cur­rent re­al­ity of the car’s oc­cu­pants, and as evinced by Ford’s an­nounce­ment, the fu­ture here is to ac­com­mo­date the smart­phone. It’s the same story with watches and wear­ables gen­er­ally, at least for now.

I highlight that timing reference because it’s worth remembering that smartphones were originally conceived of as a spoke around the PC hub; it turned out, though, that by virtue of their mobility — by being useful in more places, and thus capable of augmenting more experiences — smartphones displaced the PC as the hub. Thus, when thinking about the question of what might displace the smartphone, I suspect what we today think of as a “spoke” will be a good place to start. And, I’d add, it’s why platform companies like Microsoft and Google have focused on augmented, not virtual, reality, and why the mysterious Magic Leap has raised well over a billion dollars to-date; always in your vision is even more compelling than always in your pocket (as is always on your wrist).

I’ll come back to that last para­graph later on; I don’t think it’s quite right, in part be­cause Apple Vision shows that the first part of the ex­cerpt was­n’t right ei­ther. Apple Vision is tech­ni­cally a VR de­vice that ex­pe­ri­en­tially is an AR de­vice, and it’s one of those so­lu­tions that, once you have ex­pe­ri­enced it, is so ob­vi­ously the cor­rect im­ple­men­ta­tion that it’s hard to be­lieve there was ever any other pos­si­ble ap­proach to the gen­eral con­cept of com­put­er­ized glasses.

This re­al­ity — pun in­tended — hits you the mo­ment you fin­ish set­ting up the de­vice, which in­cludes not only fit­ting the head­set to your head and adding a pre­scrip­tion set of lenses, if nec­es­sary, but also set­ting up eye track­ing (which I will get to in a mo­ment). Once you have jumped through those hoops you are sud­denly back where you started: look­ing at the room you are in with shock­ingly full fi­delity.

What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world and displaying that feed on the postage-stamp sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something that isn’t exactly total clarity, but is of sufficiently high resolution and speed that there is no reason to think it’s not real.

The speed is es­sen­tial: Apple claims that the thresh­old for your brain to no­tice any sort of de­lay in what you see and what your body ex­pects you to see (which is what causes known VR is­sues like mo­tion sick­ness) is 12 mil­lisec­onds, and that the Vision vi­sual pipeline dis­plays what it sees to your eyes in 12 mil­lisec­onds or less. This is par­tic­u­larly re­mark­able given that the time for the im­age sen­sor to cap­ture and process what it is see­ing is along the lines of 7~8 mil­lisec­onds, which is to say that the Vision is tak­ing that cap­tured im­age, pro­cess­ing it, and dis­play­ing it in front of your eyes in around 4 mil­lisec­onds.

This is, truly, something that only Apple could do, because this speed is a function of two things: first, the Apple-designed R1 processor (Apple also designed part of the image sensor), and second, the integration with Apple’s software. Here is Mike Rockwell, who led the creation of the headset, explaining “visionOS”:

None of this ad­vanced tech­nol­ogy could come to life with­out a pow­er­ful op­er­at­ing sys­tem called visionOS”. It’s built on the foun­da­tion of the decades of en­gi­neer­ing in­no­va­tion in ma­cOS, iOS, and iPad OS. To that foun­da­tion we added a host of new ca­pa­bil­i­ties to sup­port the low la­tency re­quire­ments of spa­tial com­put­ing, such as a new real-time ex­e­cu­tion en­gine that guar­an­tees per­for­mance-crit­i­cal work­loads, a dy­nam­i­cally foveated ren­der­ing pipeline that de­liv­ers max­i­mum im­age qual­ity to ex­actly where your eyes are look­ing for every sin­gle frame, a first-of-its-kind multi-app 3D en­gine that al­lows dif­fer­ent apps to run si­mul­ta­ne­ously in the same sim­u­la­tion, and im­por­tantly, the ex­ist­ing ap­pli­ca­tion frame­works we’ve ex­tended to na­tively sup­port spa­tial ex­pe­ri­ences. vi­sionOS is the first op­er­at­ing sys­tem de­signed from the ground up for spa­tial com­put­ing.

The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:

A real-time op­er­at­ing sys­tem (RTOS) is an op­er­at­ing sys­tem (OS) for real-time com­put­ing ap­pli­ca­tions that processes data and events that have crit­i­cally de­fined time con­straints. An RTOS is dis­tinct from a time-shar­ing op­er­at­ing sys­tem, such as Unix, which man­ages the shar­ing of sys­tem re­sources with a sched­uler, data buffers, or fixed task pri­or­i­ti­za­tion in a mul­ti­task­ing or mul­ti­pro­gram­ming en­vi­ron­ment. Processing time re­quire­ments need to be fully un­der­stood and bound rather than just kept as a min­i­mum. All pro­cess­ing must oc­cur within the de­fined con­straints. Real-time op­er­at­ing sys­tems are event-dri­ven and pre­emp­tive, mean­ing the OS can mon­i­tor the rel­e­vant pri­or­ity of com­pet­ing tasks, and make changes to the task pri­or­ity. Event-driven sys­tems switch be­tween tasks based on their pri­or­i­ties, while time-shar­ing sys­tems switch the task based on clock in­ter­rupts.

Real-time op­er­at­ing sys­tems are used in em­bed­ded sys­tems for ap­pli­ca­tions with crit­i­cal func­tion­al­ity, like a car, for ex­am­ple: it’s ok to have an in­fo­tain­ment sys­tem that some­times hangs or even crashes, in ex­change for more flex­i­bil­ity and ca­pa­bil­ity, but the soft­ware that ac­tu­ally op­er­ates the ve­hi­cle has to be re­li­able and un­fail­ingly fast. This is, in broad strokes, one way to think about how vi­sionOS works: while the user ex­pe­ri­ence is a time-shar­ing op­er­at­ing sys­tem that is in­deed a vari­a­tion of iOS, and runs on the M2 chip, there is a sub­sys­tem that pri­mar­ily op­er­ates the R1 chip that is real-time; this means that even if vi­sionOS hangs or crashes, the out­side world is still ren­dered un­der that magic 12 mil­lisec­onds.

This is, need­less to say, the most mean­ing­ful man­i­fes­ta­tion yet of Apple’s abil­ity to in­te­grate hard­ware and soft­ware: while pre­vi­ously that in­te­gra­tion man­i­fested it­self in a bet­ter user ex­pe­ri­ence in the case of a smart­phone, or a seem­ingly im­pos­si­ble com­bi­na­tion of power and ef­fi­ciency in the case of Apple Silicon lap­tops, in this case that in­te­gra­tion makes pos­si­ble the meld­ing of VR and AR into a sin­gle Vision.

In the early years of dig­i­tal cam­eras there was bi­fur­ca­tion be­tween con­sumer cam­eras that were fully dig­i­tal, and high-end cam­eras that had a dig­i­tal sen­sor be­hind a tra­di­tional re­flex mir­ror that pushed ac­tual light to an op­ti­cal viewfinder. Then, in 2008, Panasonic re­leased the G1, the first-ever mir­ror­less cam­era with an in­ter­change­able lens sys­tem. The G1 had a viewfinder, but the viewfinder was in fact a screen.

This sys­tem was, at the be­gin­ning, dis­missed by most high-end cam­era users: sure, a mir­ror­less sys­tem al­lowed for a sim­pler and smaller de­sign, but there was no way a screen could ever com­pare to ac­tu­ally look­ing through the lens of the cam­era like you could with a re­flex mir­ror. Fast for­ward to to­day, though, and nearly every cam­era on the mar­ket, in­clud­ing pro­fes­sional ones, are mir­ror­less: not only did those tiny screens get a lot bet­ter, brighter, and faster, but they also brought many ad­van­tages of their own, in­clud­ing the abil­ity to see ex­actly what a photo would look like be­fore you took it.

Mirrorless cam­eras were ex­actly what popped into my mind when the Vision Pro launched into that de­fault screen I noted above, where I could ef­fort­lessly see my sur­round­ings. The field of view was a bit lim­ited on the edges, but when I ac­tu­ally brought up the ap­pli­ca­tion launcher, or was us­ing an app or watch­ing a video, the field of vi­sion rel­a­tive to an AR ex­pe­ri­ence like a Hololens was pos­i­tively as­tro­nom­i­cal. In other words, by mak­ing the ex­pe­ri­ence all dig­i­tal, the Vision Pro de­liv­ers an ac­tu­ally use­ful AR ex­pe­ri­ence that makes the still mas­sive tech­ni­cal chal­lenges fac­ing true AR seem ir­rel­e­vant.

The pay­off is the abil­ity to then layer in dig­i­tal ex­pe­ri­ences into your real-life en­vi­ron­ment: this can in­clude pro­duc­tiv­ity ap­pli­ca­tions, pho­tos and movies, con­fer­ence calls, and what­ever else de­vel­op­ers might come up with, all of which can be used with­out los­ing your sense of place in the real world. To just take one small ex­am­ple, while us­ing the Vision Pro, my phone kept buzzing with no­ti­fi­ca­tions; I sim­ply took the phone out of my pocket, opened con­trol cen­ter, and turned on do-not-dis­turb. What was re­mark­able only in ret­ro­spect is that I did all of that while tech­ni­cally be­ing closed off to the world in vir­tual re­al­ity, but my ex­pe­ri­ence was of sim­ply glanc­ing at the phone in my hand with­out even think­ing about it.

Making every­thing dig­i­tal pays off in other ways, as well; the demo in­cluded this di­nosaur ex­pe­ri­ence, where the di­nosaur seems to en­ter the room:

The whole rea­son this works is be­cause while the room feels real, it is in fact ren­dered dig­i­tally.

It remains to be seen how well this experience works in reverse: the Vision Pro includes “EyeSight”, which is Apple’s name for the front-facing display that shows your eyes to those around you. EyeSight wasn’t a part of the demo, so it remains to be seen if it is as creepy as it seems it might be; the goal, though, is the same: maintain a sense of place in the real world not by solving seemingly-impossible physics problems, but by simply making everything digital.

That the user’s eyes can be dis­played on the out­side of the Vision Pro is ar­guably a by-prod­uct of the tech­nol­ogy that un­der­girds the Vision Pro’s user in­ter­face: what you are look­ing at is tracked by the Vision Pro, and when you want to take ac­tion on what­ever you are look­ing at you sim­ply touch your fin­gers to­gether. Notably, your fin­gers don’t need to be ex­tended into space: the en­tire time I used the Vision Pro my hands were sim­ply rest­ing in my lap, their move­ment tracked by the Vision Pro’s cam­eras.

It’s as­tound­ing how well this works, and how nat­ural it feels. What is par­tic­u­larly sur­pris­ing is how high-res­o­lu­tion this UI is; look at this crop of a still from Apple’s pre­sen­ta­tion:

The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well.

Of course you can also use a key­board and track­pad, con­nected via Bluetooth, and you can also pro­ject a Mac into the Vision Pro; the full ver­sion of the above screen­shot has a Mac run­ning Final Cut Pro to the left of Photos:

I did­n’t get the chance to try the Mac pro­jec­tion, but truth­fully, while I went into this keynote the most ex­cited about this ca­pa­bil­ity, the na­tive in­ter­face worked so well that I sus­pect I am go­ing to pre­fer us­ing na­tive apps, even if those apps are also avail­able for the Mac.

An in­cred­i­ble prod­uct is one thing; the ques­tion on every­one’s mind, though, is what ex­actly is this use­ful for? Who has room for an­other de­vice in their life, par­tic­u­larly one that costs $3,499?

This ques­tion is, more of­ten than not, more im­por­tant to the suc­cess of a prod­uct than the qual­ity of the prod­uct it­self. Apple’s own his­tory of new prod­ucts is an ex­cel­lent ex­am­ple:

The PC (including the Mac) brought com­put­ing to the masses for the first time; there was a mas­sive amount of green­field in peo­ple’s lives, and the prod­uct cat­e­gory was a mas­sive suc­cess.

The iPhone ex­panded com­put­ing from the desk­top to every other part of a per­son’s life. It turns out that was an even larger op­por­tu­nity than the desk­top, and the prod­uct cat­e­gory was an even larger suc­cess.

The iPad, in con­trast to the Mac and iPhone, sort of sat in the mid­dle, a fact that Steve Jobs noted when he in­tro­duced the prod­uct in 2010:

All of us use lap­tops and smart­phones now. Everybody uses a lap­top and/​or a smart­phone. And the ques­tion has arisen lately, is there room for a third cat­e­gory of de­vice in the mid­dle? Something that’s be­tween a lap­top and a smart­phone. And of course we’ve pon­dered this ques­tion for years as well. The bar is pretty high. In or­der to cre­ate a new cat­e­gory of de­vices those de­vices are go­ing to have to be far bet­ter at do­ing some key tasks. They’re go­ing to have to be far bet­ter at do­ing some re­ally im­por­tant things, bet­ter than lap­top, bet­ter than the smart­phone.

Jobs went on to list a num­ber of things he thought the iPad might be bet­ter at, in­clud­ing web brows­ing, email, view­ing pho­tos, watch­ing videos, lis­ten­ing to mu­sic, play­ing games, and read­ing eBooks.

In truth, the only one of those categories that has truly taken off is watching video, particularly streaming services. That’s a pretty significant use case, to be sure, and the iPad is a successful product (and one whose potential use cases have been dramatically expanded by the Apple Pencil) that makes nearly as much revenue as the Mac, even though it dominates the tablet market to a much greater extent than the Mac does the PC market. At the same time, it’s not close to the iPhone, which makes sense: the iPad is a nice addition to one’s device collection, whereas an iPhone is essential.

The critics are right that this will be Apple Vision’s challenge at the beginning: a lot of early buyers will probably be interested in the novelty value, or will be Apple super fans, and it’s reasonable to wonder if the Vision Pro might become the world’s most expensive paperweight. To use an updated version of Jobs’ slide:

Small won­der that Apple has re­port­edly pared its sales es­ti­mates to less than a mil­lion de­vices.

As I noted above, I have been rel­a­tively op­ti­mistic about VR, in part be­cause I be­lieve the most com­pelling use case is for work. First, if a de­vice ac­tu­ally makes some­one more pro­duc­tive, it is far eas­ier to jus­tify the cost. Second, while it is a bar­rier to ac­tu­ally put on a head­set — to go back to my VR/AR fram­ing above, a head­set is a des­ti­na­tion de­vice — work is a des­ti­na­tion. I wrote in an­other Update in the con­text of Meta’s Horizon Workrooms:

The point of invoking the changes wrought by COVID, though, was to note that work is a destination, and it’s a destination that occupies a huge amount of our time. Of course when I wrote that skeptical article in 2018 a work destination was, for the vast majority of people, a physical space; suddenly, though, for millions of white collar workers in particular, it’s a virtual space. And, if work is already a virtual space, then suddenly virtual reality seems far more compelling. In other words, virtual reality may be much more important than previously thought because the vector by which it will become pervasive is not the consumer space (and gaming), but rather the enterprise space, particularly meetings.

Apple did dis­cuss meet­ings in the Vision Pro, in­clud­ing a frame­work for per­sonas — their word for avatars — that is used for Facetime and will be in­cor­po­rated into up­com­ing Zoom, Teams, and Webex apps. What is much more com­pelling to me, though, is sim­ply us­ing a Vision Pro in­stead of a Mac (or in con­junc­tion with one, by pro­ject­ing the screen).

At the risk of over-in­dex­ing on my own ex­pe­ri­ence, I am a huge fan of mul­ti­ple mon­i­tors: I have four at my desk, and it is frus­trat­ing to be on the road right now typ­ing this on a lap­top screen. I would ab­solutely pay for a de­vice to have a huge work­space with me any­where I go, and while I will re­serve judg­ment un­til I ac­tu­ally use a Vision Pro, I could see it be­ing bet­ter at my desk as well.

I have tried this with the Quest, but the screen is too low of res­o­lu­tion to work com­fort­ably, the user in­ter­face is a bit clunky, and the im­mer­sion is too com­plete: it’s hard to even drink cof­fee with it on. Oh, and the bat­tery life is­n’t nearly good enough. Vision Pro, though, solves all of these prob­lems: the res­o­lu­tion is ex­cel­lent, I al­ready raved about the user in­ter­face, and crit­i­cally, you can still see around you and in­ter­act with ob­jects and peo­ple. Moreover, this is where the ex­ter­nal bat­tery so­lu­tion is an ad­van­tage, given that you can eas­ily plug the bat­tery pack into a charger and use the head­set all day (and, as­sum­ing Apple’s real-time ren­der­ing holds up, you won’t get mo­tion sick­ness).1

Again, I’m already biased on this point, given both my prediction and personal workflow, but if the Vision Pro is a success, I think that an important part of its market will at first be to use it alongside a Mac and, as the native app ecosystem develops, in place of one.

To put it even more strongly, the Vision Pro is, I sus­pect, the fu­ture of the Mac.

The larger Vision Pro op­por­tu­nity is to move in on the iPad and to be­come the ul­ti­mate con­sump­tion de­vice:

The keynote high­lighted the movie watch­ing ex­pe­ri­ence of the Vision Pro, and it is ex­cel­lent and im­mer­sive. Of course it is­n’t, in the end, that much dif­fer­ent than hav­ing an ex­cel­lent TV in a dark room.

What was much more com­pelling were a se­ries of im­mer­sive video ex­pe­ri­ences that Apple did not show in the keynote. The most strik­ing to me were, un­sur­pris­ingly, sports. There was one clip of an NBA bas­ket­ball game that was in­cred­i­bly re­al­is­tic: the game clip was shot from the base­line, and as some­one who has had the good for­tune to sit court­side, it felt ex­actly the same, and, it must be said, much more im­mer­sive than sim­i­lar ex­pe­ri­ences on the Quest.

It turns out that one rea­son for the im­mer­sion is that Apple ac­tu­ally cre­ated its own cam­eras to cap­ture the game us­ing its new Apple Immersive Video Format. The com­pany was fairly mum about how it planned to make those cam­eras and its for­mat more widely avail­able, but I am com­pletely se­ri­ous when I say that I would pay the NBA thou­sands of dol­lars to get a sea­son pass to watch games cap­tured in this way. Yes, that’s a crazy state­ment to make, but court­side seats cost that much or more, and that 10-second clip was shock­ingly close to the real thing.

What is fas­ci­nat­ing is that such a sea­son pass should, in my es­ti­ma­tion, look very dif­fer­ent from a tra­di­tional TV broad­cast, what with its mul­ti­ple cam­era an­gles, an­nounc­ers, score­board slug, etc. I would­n’t want any of that: if I want to see the score, I can sim­ply look up at the score­board as if I’m in the sta­dium; the sounds are pro­vided by the crowd and PA an­nouncer. To put it an­other way, the Apple Immersive Video Format, to a far greater ex­tent than I thought pos­si­ble, truly makes you feel like you are in a dif­fer­ent place.

Again, though, this was a 10 sec­ond clip (there was an­other one for a base­ball game, shot from the home team’s dugout, that was equally com­pelling). There is a ma­jor chicken-and-egg is­sue in terms of pro­duc­ing con­tent that ac­tu­ally de­liv­ers this ex­pe­ri­ence, which is prob­a­bly why the keynote most fo­cused on 2D video. That, by ex­ten­sion, means it is harder to jus­tify buy­ing a Vision Pro for con­sump­tion pur­poses. The ex­pe­ri­ence is so com­pelling though, that I sus­pect this prob­lem will be solved even­tu­ally, at which point the ad­dress­able mar­ket is­n’t just the Mac, but also the iPad.

What is left in place in this vi­sion is the iPhone: I think that smart­phones are the pin­na­cle in terms of com­put­ing, which is to say that the Vision Pro makes sense every­where the iPhone does­n’t.

I rec­og­nize how ab­surdly pos­i­tive and op­ti­mistic this Article is about the Vision Pro, but it re­ally does feel like the fu­ture. That fu­ture, though, is go­ing to take time: I sus­pect there will be a slow burn, par­tic­u­larly when it comes to re­plac­ing prod­uct cat­e­gories like the Mac or es­pe­cially the iPad.

Moreover, I didn't even get into one of the features Apple is touting most highly, which is the ability of the Vision Pro to take “pictures” — memories, really — of moments in time and render them in a way that feels incredibly intimate and vivid.

One of the is­sues is the fact that record­ing those mem­o­ries does, for now, en­tail wear­ing the Vision Pro in the first place, which is go­ing to be re­ally awk­ward! Consider this video of a girl’s birth­day party:

It’s go­ing to seem pretty weird when dad is wear­ing a head­set as his daugh­ter blows out birth­day can­dles; per­haps this prob­lem will be fixed by a sep­a­rate line of stand­alone cam­eras that cap­ture pho­tos in the Apple Immersive Video Format, which is an­other way to say that this is a bit of a chicken-and-egg prob­lem.

What was far more strik­ing, though, was how the con­sump­tion of this video was pre­sented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line for “reliving memories” struck me as incredibly sad:

I'll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple's proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

Indeed, this, even more than the iPhone, is the true per­sonal com­puter. Yes, there are af­for­dances like mixed re­al­ity and EyeSight to in­ter­act with those around you, but at the end of the day the Vision Pro is a soli­tary ex­pe­ri­ence.

That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal, but now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is for ever more personal experiences, and I'm not sure it's an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards an increase in loneliness.

This, I would note, is where the most interesting comparisons to Meta's Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but that bit about the deep integration between Apple's own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.

What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta's goals that much more difficult to achieve. Apple, meanwhile, isn't even bothering with presence: even its FaceTime integration was with an avatar in a window, leaning into the fact that you are apart, whereas Meta wants you to feel like you are together.

In other words, there is ac­tu­ally a rea­son to hope that Meta might win: it seems like we could all do with more con­nect­ed­ness, and less iso­la­tion with in­cred­i­ble im­mer­sive ex­pe­ri­ences to dull the pain of lone­li­ness. One won­ders, though, if Meta is in fact fight­ing Apple not just on hard­ware, but on the over­all trend of so­ci­ety; to put it an­other way, bull­ish­ness about the Vision Pro may in fact be a func­tion of be­ing bear­ish about our ca­pa­bil­ity to mean­ing­fully con­nect.

...

Read the original on stratechery.com »

7 336 shares, 21 trendiness

NVIDIA Grace Hopper Superchip

The NVIDIA GH200 Grace™ Hopper™ Superchip is a break­through ac­cel­er­ated CPU de­signed from the ground up for gi­ant-scale AI and high-per­for­mance com­put­ing (HPC) ap­pli­ca­tions. The su­per­chip de­liv­ers up to 10X higher per­for­mance for ap­pli­ca­tions run­ning ter­abytes of data, en­abling sci­en­tists and re­searchers to reach un­prece­dented so­lu­tions for the world’s most com­plex prob­lems.

Take a Closer Look at the Superchip

The NVIDIA GH200 Grace Hopper Superchip combines the Grace and Hopper ar­chi­tec­tures us­ing NVIDIA® NVLink®-C2C to de­liver a CPU+GPU co­her­ent mem­ory model for ac­cel­er­ated AI and HPC ap­pli­ca­tions.
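To make the coherent CPU+GPU memory model concrete, here is a minimal CUDA sketch (my own illustration, not NVIDIA sample code). The premise, which is an assumption about coherent systems like GH200 rather than something stated on this page, is that a kernel can operate directly on an ordinary host allocation over NVLink-C2C; on non-coherent systems you would instead use cudaMallocManaged or explicit cudaMemcpy staging. The buffer size and the kernel itself are arbitrary.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Doubles every element; on a coherent system the pointer can refer to
    // memory allocated by the CPU with plain malloc.
    __global__ void scale(double *x, double a, size_t n) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const size_t n = 1 << 20;
        // Plain host allocation: with system-wide coherence the GPU can
        // dereference this pointer directly, no staging copy required.
        double *x = static_cast<double *>(std::malloc(n * sizeof(double)));
        for (size_t i = 0; i < n; ++i) x[i] = 1.0;

        scale<<<static_cast<unsigned>((n + 255) / 256), 256>>>(x, 2.0, n);
        cudaDeviceSynchronize();  // wait for the GPU before reading on the CPU

        std::printf("x[0] = %.1f\n", x[0]);  // expect 2.0
        std::free(x);
        return 0;
    }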

New 900 gi­ga­bytes per sec­ond (GB/s) co­her­ent in­ter­face, 7X faster than PCIe Gen5
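(For rough context, and this is my arithmetic rather than NVIDIA's: a PCIe Gen5 x16 link offers roughly 128 GB/s of bidirectional bandwidth, and 900 ÷ 128 ≈ 7, which appears to be the basis of the 7X figure.)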

Runs all NVIDIA soft­ware stacks and plat­forms, in­clud­ing the NVIDIA HPC SDK, NVIDIA AI, and NVIDIA Omniverse™

GH200-powered sys­tems join 400+ con­fig­u­ra­tions—based on the lat­est NVIDIA ar­chi­tec­tures—that are be­ing rolled out to meet the surg­ing de­mand for gen­er­a­tive AI.

...

Read the original on www.nvidia.com »

8 278 shares, 41 trendiness

US urged to reveal UFO evidence after claim that it has intact alien vehicles

The US has been urged to disclose evidence of UFOs after a whistleblower former intelligence official said the government has possession of “intact and partially intact” alien vehicles.

The for­mer in­tel­li­gence of­fi­cial David Grusch, who led analy­sis of un­ex­plained anom­alous phe­nom­ena (UAP) within a US Department of Defense agency, has al­leged that the US has craft of non-hu­man ori­gin.

Information on these ve­hi­cles is be­ing il­le­gally with­held from Congress, Grusch told the Debrief. Grusch said when he turned over clas­si­fied in­for­ma­tion about the ve­hi­cles to Congress he suf­fered re­tal­i­a­tion from gov­ern­ment of­fi­cials. He left the gov­ern­ment in April af­ter a 14-year ca­reer in US in­tel­li­gence.

Jonathan Grey, a current US intelligence official at the National Air and Space Intelligence Center (Nasic), confirmed the existence of “exotic materials” to the Debrief, adding: “We are not alone.”

The dis­clo­sures come af­ter a swell of cred­i­ble sight­ings and re­ports have re­vived at­ten­tion in alien ships, and po­ten­tially vis­its, in re­cent years.

In 2021, the Pentagon re­leased a re­port on UAP — the term is pre­ferred to UFO by much of the ex­trater­res­trial com­mu­nity — which found more than 140 in­stances of UAP en­coun­ters that could not be ex­plained.

The re­port fol­lowed a leak of mil­i­tary footage that showed ap­par­ently in­ex­plic­a­ble hap­pen­ings in the sky, while navy pi­lots tes­ti­fied that they had fre­quently had en­coun­ters with strange craft off the US coast.

In an in­ter­view with the Debrief jour­nal­ists Leslie Kean and Ralph Blumenthal, who pre­vi­ously ex­posed the ex­is­tence of a se­cret Pentagon pro­gram that in­ves­ti­gated UFOs, Grusch said the US gov­ern­ment and de­fense con­trac­tors had been re­cov­er­ing frag­ments of non-hu­man craft, and in some cases en­tire craft, for decades.

“We are not talking about prosaic origins or identities,” Grusch said. “The material includes intact and partially intact vehicles.”

Grusch told the Debrief that analysis determined that this material is “of exotic origin” — meaning “non-human intelligence, whether extraterrestrial or unknown origin”.

“[This assessment is] based on the vehicle morphologies and material science testing and the possession of unique atomic arrangements and radiological signatures,” Grusch said.

Grey, who, ac­cord­ing to the Debrief, an­a­lyzes un­ex­plained anom­alous phe­nom­ena within the Nasic, con­firmed Grusch’s ac­count.

“The non-human intelligence phenomenon is real. We are not alone,” Grey said. “Retrievals of this kind are not limited to the United States. This is a global phenomenon, and yet a global solution continues to elude us.”

The Debrief spoke to several of Grusch's former colleagues, each of whom vouched for his character. Karl E Nell, a retired army colonel, said Grusch was “beyond reproach”. In a 2022 performance review seen by the Debrief, Grusch was described as “an officer with the strongest possible moral compass”.

Nick Pope, who spent the early 1990s investigating UFOs for the British Ministry of Defence (MoD), said Grusch and Grey's account of alien materials was “very significant”.

“It's one thing to have stories on the conspiracy blogs, but this takes it to the next level, with genuine insiders coming forward,” Pope said.

“When these people make these formal complaints, they do so on the understanding that if they've knowingly made a false statement, they are liable to a fairly hefty fine, and/or prison.

“People say: ‘Oh, people make up stories all the time.’ But I think it's very different to go before Congress and go to the intelligence community inspector general and do that. Because there will be consequences if it emerges that this is not true.”

The Debrief reported that Grusch's knowledge of non-human materials and vehicles was based on “extensive interviews with high-level intelligence officials”. He said he had reported the existence of a “UFO material recovery program” to Congress.

Grusch said that “the craft recovery operations are ongoing at various levels of activity and that he knows the specific individuals, current and former, who are involved,” the Debrief reported.

In the Debrief ar­ti­cle, Grusch does not say he has per­son­ally seen alien ve­hi­cles, nor does he say where they may be be­ing stored. He asked the Debrief to with­hold de­tails of re­tal­i­a­tion by gov­ern­ment of­fi­cials due to an on­go­ing in­ves­ti­ga­tion.

He also does not spec­ify how he be­lieves the gov­ern­ment re­tal­i­ated against him.

In June 2021, a report from the Office of the Director of National Intelligence said that from 2004 to 2021 there were 144 encounters between military pilots and UAP, 80 of which were captured on multiple sensors. Only one of the 144 encounters could be explained “with high confidence” — it was a large, deflating balloon.

Following in­creased in­ter­est from the pub­lic and some US sen­a­tors, the Pentagon es­tab­lished the All-domain Anomaly Resolution Office, charged with track­ing UAP, in July 2022.

In December last year, the office said it had received “several hundred” new reports, but no evidence so far of alien life.

The pub­li­ca­tion of Grusch and Grey’s claims comes af­ter a panel that the US space agency Nasa charged with in­ves­ti­gat­ing un­ex­plained anom­alous phe­nom­ena said stigma around re­port­ing en­coun­ters — and ha­rass­ment of those who do re­port en­coun­ters — was hin­der­ing its work.

The navy pi­lots who in 2021 shared their ex­pe­ri­ences of en­coun­ter­ing un­ex­plained ob­jects while con­duct­ing mil­i­tary flights said they, and oth­ers, had de­cided against re­port­ing the en­coun­ters in­ter­nally, be­cause of fears it could hin­der their ca­reers.

“Harassment only leads to further stigmatization of the UAP field, significantly hindering the scientific progress and discouraging others to study this important subject matter,” Nasa's science chief, Nicola Fox, said in a public meeting on 31 May.

Dr David Spergel, the in­de­pen­dent chair of Nasa’s UAP in­de­pen­dent study team, told the Guardian he did not know Grusch and had no knowl­edge of his claims.

The Department of Defense did not im­me­di­ately re­spond to a re­quest for com­ment.

In a statement, a Nasa spokesperson said: “One of Nasa's key priorities is the search for life elsewhere in the universe, but so far, NASA has not found any credible evidence of extraterrestrial life and there is no evidence that UAPs are extraterrestrial. However, Nasa is exploring the solar system and beyond to help us answer fundamental questions, including whether we are alone in the universe.”

Pope said in his work in­ves­ti­gat­ing UFOs for the MoD he had seen no hard ev­i­dence of non-hu­man craft or ma­te­ri­als.

“Some of our cases were intriguing,” Pope said. “But we didn't have a spaceship in a hangar anywhere. And if we did, they didn't tell me.”

Still, Pope said, Grusch’s claims should be seen as part of an in­creas­ing flow of in­for­ma­tion — and hope­fully dis­clo­sures — about UFOs.

He said: “It's part of a wider puzzle. And I think, assuming this is all true, it takes us closer than we've ever been before to the very heart of all this.”

...

Read the original on www.theguardian.com »

9 272 shares, 12 trendiness

Yes, Apple Vision Pro works and yes, it’s good

After a roughly 30-minute demo that ran through the major features that are so far ready to test, I came away convinced that Apple has delivered nothing less than a genuine leapfrog in capability and execution of XR — or mixed reality — with its new Apple Vision Pro.

To be su­per clear, I’m not say­ing it de­liv­ers on all promises, is a gen­uinely new par­a­digm in com­put­ing or any other high-pow­ered claim that Apple hopes to de­liver on once it ships. I will need a lot more time with the de­vice than a guided demo.

But, I've used essentially every major VR headset and AR device since 2013's Oculus DK1 right up through the latest generations of Quest and Vive headsets. I've tried all of the experiences and stabs at making fetch happen when it comes to XR. I've been awed and re-awed as developers of the hardware and software of those devices and their marquee apps have continued to chew away at the conundrum of “the killer app” — trying to find something that would get real purchase with the broader public.

There are some gen­uine so­cial, nar­ra­tive or gam­ing suc­cesses like Gorilla Tag, VRChat or Cosmonius. I’ve also been moved by first-per­son ex­pe­ri­ences by Sundance film­mak­ers high­light­ing the hu­man (or an­i­mal) con­di­tion.

But none of them had the advantages that Apple brings to the table with Apple Vision Pro. Namely, 5,000 patents filed over the past few years and an enormous base of talent and capital to work with. Every bit of this thing shows Apple-level ambition. I don't know whether it will be “the next computing mode,” but you can see the conviction behind each of the choices made here. No corners cut. Full-tilt engineering on display.

The hard­ware is good — very good — with 24 mil­lion pix­els across the two pan­els, or­ders of mag­ni­tude more than any head­sets most con­sumers have come into con­tact with. The op­tics are bet­ter, the head­band is com­fort­able and quickly ad­justable and there is a top strap for weight re­lief. Apple says it is still work­ing on which light seal (the cloth shroud) op­tions to ship with it when it re­leases of­fi­cially but the de­fault one was com­fort­able for me. They aim to ship them with vary­ing sizes and shapes to fit dif­fer­ent faces. The power con­nec­tor has a great lit­tle de­sign, as well, that in­ter­con­nects us­ing in­ter­nal pin-type power link­ages with an ex­ter­nal twist lock.

There is also a mag­netic so­lu­tion for some (but not all) op­ti­cal ad­just­ments peo­ple with dif­fer­ences in vi­sion may need. The on­board­ing ex­pe­ri­ence fea­tures an au­to­matic eye-re­lief cal­i­bra­tion match­ing the lenses to the cen­ter of your eyes. No man­ual wheels ad­just­ing that here.

The main frame and glass piece look fine, though it’s worth men­tion­ing that they are very sub­stan­tial in size. Not heavy, per se, but def­i­nitely pre­sent.

If you have ex­pe­ri­ence with VR at all then you know that the two big bar­ri­ers most peo­ple hit are ei­ther la­tency-dri­ven nau­sea or the iso­la­tion that long ses­sions wear­ing some­thing over your eyes can de­liver.

Apple has mit­i­gated both of those head on. The R1 chip that sits along­side the M2 chip has a sys­tem-wide polling rate of 12ms, and I no­ticed no jud­der or frame­drops. There was a slight mo­tion blur ef­fect used in the passthrough mode but it was­n’t dis­tract­ing. The win­dows them­selves ren­dered crisply and moved around snap­pily.

Of course, Apple was able to mit­i­gate those is­sues due to a lot of com­pletely new and orig­i­nal hard­ware. Everywhere you look here there’s a new idea, a new tech­nol­ogy or a new im­ple­men­ta­tion. All of that new comes at a price: $3,500 is on the high end of ex­pec­ta­tions and firmly places the de­vice in the power user cat­e­gory for early adopters.

Here’s what Apple got right that other head­sets just could­n’t nail down:

The eye track­ing and ges­ture con­trol is near per­fect. The hand ges­tures are picked up any­where around the head­set. That in­cludes on your lap or low and away rest­ing on a chair or couch. Many other hand-track­ing in­ter­faces force you to keep your hands up in front of you, which is tir­ing. Apple has high-res­o­lu­tion cam­eras ded­i­cated to the bot­tom of the de­vice just to keep track of your hands. Similarly, an eye-track­ing ar­ray in­side means that, af­ter cal­i­bra­tion, nearly every­thing you look at is pre­cisely high­lighted. A sim­ple low-ef­fort tap of your fin­gers and boom, it works.

Passthrough is a major key. Having a real-time 4K view of the world around you that includes any humans in your personal space is so important for long-session VR or AR wear. There is a deep animal brain thing in most humans that makes us really, really uncomfortable if we can't see our surroundings for a length of time. Eliminating that worry by passing through an image should improve the chance of long use times. There's also a clever “breakthrough” mechanism that automatically passes a person who comes near you through your content, alerting you to the fact that they're approaching. The eyes on the outside, which change appearance depending on what you're doing, also provide a nice context cue for those outside.

The resolution means that text is actually readable. Apple's positioning of this as a full-on computing device only makes sense if you can actually read text in it. All of the previous iterations of “virtual desktop” setups have relied on panels and lenses that present too blurry a view to reliably read fine text at length. In many cases it literally hurt to do so. Not with the Apple Vision Pro — text is super crisp and legible at all sizes and at far “distances” within your space.

There were a hand­ful of re­ally sur­pris­ing mo­ments from my short time with the head­set, as well. Aside from the sharp­ness of the dis­play and the snappy re­spon­sive­ness of the in­ter­face, the en­tire suite of sam­ples oozed at­ten­tion to de­tail.

The Personas Play. I was HIGHLY doubt­ful that Apple could pull off a work­able dig­i­tal avatar based off of just a scan of your face us­ing the Vision Pro head­set it­self. Doubt crushed. I’d say that if you’re mea­sur­ing the dig­i­tal ver­sion of you that it cre­ates to be your avatar in FaceTime calls and other ar­eas, it has a solid set of toes on the other side of the un­canny val­ley. It’s not to­tally per­fect, but they got skin ten­sion and mus­cle work right, the ex­pres­sions they have you make are used to in­ter­po­late out a full range of fa­cial con­tor­tions us­ing ma­chine learn­ing mod­els, and the brief in­ter­ac­tions I had with a live per­son on a call (and it was live, I checked by ask­ing off-script stuff) did not feel creepy or odd. It worked.

It’s crisp. I’m sort of stat­ing this again but, re­ally, it’s crisp as hell. Running right up to demos like the 3D di­nosaur you got right down to the tex­ture level and be­yond.

3D Movies are actually good in it. Jim Cameron probably had a moment when he saw Avatar: The Way of Water on the Apple Vision Pro. This thing was absolutely born to make the 3D format sing — and it can display 3D movies pretty much right away, so there's going to be a decent library of shot-on-3D titles that will bring new life to them all. The 3D photos and videos you can take with Apple Vision Pro directly also look super great, but I wasn't able to test capturing any myself, so I don't know how that will feel yet. Awkward? Hard to say.

The setup is smooth and sim­ple. A cou­ple of min­utes and you’re good to go. Very Apple.

Yes, it does look that good. The out­put of the in­ter­face and the var­i­ous apps are so good that Apple just used them di­rectly off of the de­vice in its keynote. The in­ter­face is bright and bold and feels pre­sent be­cause of the way it in­ter­acts with other win­dows, casts shad­ows on the ground and re­acts to light­ing con­di­tions.

Overall, I’m hes­i­tant to make any broad claims about whether Apple Vision Pro is go­ing to ful­fill Apple’s claims about the on­set of spa­tial com­put­ing. I’ve had far too lit­tle time with it and it’s not even com­pleted — Apple is still work­ing on things like the light shroud and def­i­nitely on many soft­ware as­pects.

It is, how­ever, re­ally, re­ally well done. The pla­tonic ideal of an XR head­set. Now, we wait to see what de­vel­op­ers and Apple ac­com­plish over the next few months and how the pub­lic re­acts.

...

Read the original on techcrunch.com »

10 263 shares, 15 trendiness

chromium - An open-source project to help move the web forward.

...

Read the original on bugs.chromium.org »
