Reddit is getting ready to slap third-party apps with millions of dollars in API fees, and many Reddit users are unhappy about it. A widespread protest is planned for June 12, with hundreds of subreddits planning to go dark for 48 hours.
Reddit started life as a geeky site, but as it has aged, it has been trying to work more like a traditional social network. Part of that push included the development of a first-party mobile app, but the 17-year-old site didn't launch an official app until 2016. Before then, it was up to third-party apps to pick up the slack, and even now, the revenue-focused official app is generally considered inferior to third-party options.
Reasonable API pricing would not necessarily mean the death of third-party apps, but the pricing Reddit communicated to some of its biggest developers is far above what other sites charge. The popular iOS client Apollo announced it was facing a $20 million-a-year bill. Apollo’s developer, Christian Selig, hasn’t announced a shutdown but admitted, “I don’t have that kind of money or would even know how to charge it to a credit card.”
Other third-party apps are in the same boat. The developer of Reddit is Fun has said the API costs will “likely kill” the app. Narwhal, another third-party app, will be “dead in 30 days” when the pricing kicks in on July 1, according to its developer.
Selig broke the news of the new pricing scheme, saying, “I don’t see how this pricing is anything based in reality or remotely reasonable.” Selig said Reddit wants to charge $12,000 for 50 million requests, while Imgur, an image-focused site that’s similar to Reddit, charges $166 for 50 million API calls. A post pinned to the top of the new /r/Save3rdPartyApps subreddit calls for a pricing decrease “by a factor of 15 to 20,” saying that would “put API calls in territory more closely comparable to other sites, like Imgur.”
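For a rough sense of scale, here is a back-of-the-envelope comparison in Python using only the figures quoted in this article; the per-request rates and the request volume implied by Apollo's $20 million bill are derived for illustration, not official numbers.

# Rough comparison of the quoted API prices (illustrative only).
reddit_per_50m = 12_000    # USD per 50 million requests, per Selig
imgur_per_50m = 166        # USD per 50 million requests

print(f"Reddit charges ~{reddit_per_50m / imgur_per_50m:.0f}x Imgur's rate")   # ~72x

# The /r/Save3rdPartyApps post asks for a cut "by a factor of 15 to 20":
print(f"After that cut: ${reddit_per_50m / 20:.0f}-${reddit_per_50m / 15:.0f} per 50M calls")

# Annual request volume implied by Apollo's quoted $20M/year bill:
implied_requests = 20_000_000 / (reddit_per_50m / 50_000_000)
print(f"Implied Apollo volume: ~{implied_requests / 1e9:.0f} billion requests per year")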
Reddit is Fun (RIF) developer /u/talklittle said Reddit’s API terms also require “blocking ads in third-party apps, which make up the majority of RIF’s revenue.” Talklittle says the pricing and ad restriction will “force a paid subscription model” onto any surviving apps. Reddit’s APIs also exclude adult content, a major draw for the site.
While Reddit is a company that makes hundreds of millions of dollars a year, its content moderation and community building are done entirely by volunteer moderators. This means you get fun civil wars, where users and mods can take up arms against the site administrators. The list of subreddits participating in the June 12 shutdown is currently over a thousand strong. Many of the site's most popular communities, like r/gaming, r/Music, and r/Pics, are participating, and each has over 30 million subscribers. The Reddit administrators have yet to respond.
Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.
...
Read the original on arstechnica.com »
Check out the original contest announcement and rules here: https://qri.org/blog/contest
We strongly recommend viewing the content at its highest resolution on a large screen to perceive the effects in their entirety.
Judges: A panel made up of members of QRI's international phenomenologist network rated each piece from 0 to 10 on these three criteria:
Effectiveness: Distinguishes between sober and tripping people - is it just a little easier to see tripping but you can kinda see it anyway? Or is it impossible to see sober and effortlessly available above a certain dose?
Specificity: How specific and concrete the information encoded is - think “how many bits per second can be transmitted with this piece”.
Aesthetic Value: Does this look like an art piece? Can it pass as a standard work of art at a festival that people would enjoy whether tripping or not? Note: smaller contribution to overall score.
The scores were weighted by the level of experience of each participant (based on a combination of self-report and group consensus). And to get the final score, a weighted average of the three features was taken, where “Effectiveness” was multiplied by 3, “Specificity” by 2, and “Aesthetic Value” by 1. As with the Replications contest submissions, the weighted average excluded the ratings of one of the participants for pieces that they themselves submitted (so that nobody would be evaluating their own submissions).
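To make the aggregation concrete, here is a minimal Python sketch of the scoring rule as described; the judge names, experience weights, and scores below are invented placeholders, while the 3/2/1 criterion weights and the rule excluding a judge's rating of their own submission come from the text.

# Illustrative sketch of the contest scoring (placeholder data).
CRITERION_WEIGHTS = {"effectiveness": 3, "specificity": 2, "aesthetic": 1}

ratings = [
    {"judge": "A", "experience": 1.0, "effectiveness": 8, "specificity": 6, "aesthetic": 5},
    {"judge": "B", "experience": 0.5, "effectiveness": 7, "specificity": 7, "aesthetic": 6},
    {"judge": "C", "experience": 0.8, "effectiveness": 9, "specificity": 5, "aesthetic": 7},
]

def final_score(ratings, submitted_by=None):
    # Experience-weighted average of each judge's 3/2/1-weighted score,
    # skipping the judge who submitted the piece.
    total_weight = sum(CRITERION_WEIGHTS.values())
    num = den = 0.0
    for r in ratings:
        if r["judge"] == submitted_by:
            continue
        score = sum(w * r[c] for c, w in CRITERION_WEIGHTS.items()) / total_weight
        num += r["experience"] * score
        den += r["experience"]
    return num / den

print(round(final_score(ratings, submitted_by="B"), 2))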
The main result of this exercise was that only three submissions seemed to have any promising psychedelic cryptography effects. The three winning pieces stood out head and shoulders (and trunk and even knees and ankles) above the rest. It turns out that in order to decode these pieces you do require a substantial level of tracers, so only members of the committee who had a high enough level of visual effects were able to see the encoded messages. Some of the members of the panel reported that once you saw the messages during the state, you could then also see them sober by using the right attentional tricks. But at least two members of the panel who reported seeing the messages while on mushrooms or ayahuasca were unable to see them sober after the fact, no matter how much they tried.
The three winners indeed are using the first classic PsyCrypto "encoding method" described in "How to secretly communicate with people on LSD". Namely, a method that takes advantage of tracer effects to "write out" images or text over time (see also the fictional Rainbow God Burning Man theme camp where this idea is explored in the context of festivals). That is, the fact that bright colors last longer in your visual field while on psychedelics can be used to slowly construct images in the visual field; sober individuals see lines and squiggles since the features of the hidden message don't linger long enough for them to combine into a coherent message. All of the judges were stunned by the fact that the pieces actually worked. It works! PsyCrypto works!
At a theoretical level, this confirmation is significant because it is the first clear demonstration of a real perceptual computational advantage of psychedelic states of consciousness. We anticipate a rather incredible wave of PsyCrypto emerging within a year or two at festivals, and then in movies (even mainstream ones) within five years. It will seep into the culture at large in time. Just remember… you saw it first here! :-)
It is worth pointing out that there are possible alternative PsyCrypto encoding methods, and that there are two ways of identifying them. First, casting a very wide net of possible stimuli to experience on psychedelics, and in that way arriving "from the bottom up" at patterns that only people who are tripping can decode, is a promising strategy. If this works, it opens up new avenues for scientific research: as we find PsyCrypto encoding schemes, we demonstrate undeniable computational advantages for psychedelic states of consciousness, which in turn is significant for neuroscience and consciousness research. And second, new advancements in neuroscience can be used "from the top down" to create PsyCrypto encoding methods *from first principles*. Here, too, this will be synergistic with consciousness research: as artists figure out how to refine the techniques to make them work better, they will also be, inadvertently, giving neuroscientists pointers for further promising work.
Title: Can You see us?
Description: “Just a video loop of a bunch of weird wavy nooodles, nothing to see here, right?”
Encryption method: “I can’t linguistically describe it because it’s a lot of trial and error, but so far, the message has been decoded by a person who didn’t even know that there was supposed to be a message on 150ug 1plsd. I believe that any psychedelic/dissociative substance that causes heavy tracers could be helpful in decoding the message. Also, a person needs to be trained to change their mode of focus to see it. Once they see it, they can’t unsee it.”
One of the judges estimated that the "LSD-equivalent" threshold of tracers needed to easily decode this piece was approximately 150μg, whereas another one estimated it at roughly 100μg. What made this image stand out, and receive the first prize relative to the other two, was how relatively easy it was to decode in the right state of mind. In other words, this piece easily distinguishes between people who are sufficiently affected by psychedelics and those who simply aren't high enough. What's more, it doesn't require a lot of time, dedication, or effort. The encoded information simply, allegedly, "pops out" in the right state of consciousness.
Title: We Are Here. Lets talk
Description: “Short video loop containing a secret message from outer space. Can you see it?”
Encryption method: “The message text is illuminated in scanner fashion. The speed of sweep is dependent on the video frame rate, so whenever a person is in an altered state and experiencing heavy tracers they would see a clear message instead of one that’s broken apart. Entire message can be seen clearly by using video editing software and applying a tracer/echo effect and having 60 images in a trail that each are 0.033 seconds after the previous. This process can also be repeated with code.
The message can be seen in any altered state that induces heavy visual tracers, like medium-high doses of the most popular psychedelics, it also depends on a person at which doses they would start seeing heavy tracers. If experiencing heavy tracers and still unable to see the message, try looking at the center of a video and relaxing your eyes and defocusing them.”
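The decoding trick described in that quote can be approximated offline. The sketch below is a rough Python illustration using OpenCV and NumPy (tools assumed here, not named by the artist): it keeps a 60-frame trail, mimicking how tracers superimpose the last couple of seconds of a 30 fps video.

# Approximate "tracer" decode: superimpose a 60-frame trail (~2 s at 30 fps).
import cv2
import numpy as np

TRAIL = 60
cap = cv2.VideoCapture("submission.mp4")   # hypothetical input file
writer = None
frames = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame.astype(np.float32))
    frames = frames[-TRAIL:]                     # keep only the most recent frames
    echo = np.max(frames, axis=0)                # brightest pixel over the trail ~ tracer persistence
    echo = np.clip(echo, 0, 255).astype(np.uint8)
    if writer is None:
        h, w = echo.shape[:2]
        writer = cv2.VideoWriter("decoded.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
    writer.write(echo)

cap.release()
if writer is not None:
    writer.release()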
As with the submission that got the 1st prize, the same judges estimated 150μg and 100μg of LSD, respectively, as the threshold needed to easily decode the secret messages in this piece. That said, decoding this piece turned out to be more difficult for the majority of the judges, and it wasn’t as immediately readable as the first one. It takes more time, effort, and dedication to put the message together in one’s visual field than the first one.
People also commented on the aesthetic richness of this piece, which gave it an extra boost.
Description: “Artwork depicts the connection between the subconscious and the universal energy. The key of everything is defined by the observer of their own mind.”
Encryption method: “Images edited in a way where only one going through a psychedelic experience and seeing large amounts of tracers would see the encrypted message fully. Based on the first example of a tracer-based encrypted message in ‘How To Secretly Communicate With People On LSD’. I believe that DMT or 150-200ug of LSD or any substance delivering the tracer visual effect could be used to decode the artwork.”
The judges who were able to see the message in this piece had very different opinions on how intense the effects of psychedelics needed to be in order to easily decode the information hidden in it. One of the judges said that in order to read this easily with ayahuasca you would need the dose equivalent to approximately 40mg of vaporized DMT (i.e. a really strong, breakthrough-level trip). This seems to be in stark contrast with the opinion of another judge, who estimated that the average person would need as little as 75ug of LSD to decode it.
The judges speculated that seeing the hidden information in this piece was easier to do on DMT than other psychedelics like mushrooms (for intensity-adjusted levels of alteration). When asked why they thought this was the case, it was speculated that this difference was likely due to the crispness and characteristic spatiotemporal frequencies of DMT relative to mushrooms. DMT simply produces more detailed and high-resolution tracers, which seem to be useful properties for decoding this piece in particular.
Alternatively, one of the judges proposed that, on the one hand, the effects of mushrooms on the visual field seem to be less dependent on the color palette of the stimuli. Therefore, whether the PsyCrypto uses colors or not doesn't matter very much if one is using mushrooms. DMT, on the other hand, makes subtle differences in colors look larger, as if the effects were to "expand the color gamut" and amplify the perception of subtle gradients of hues (cf. color control), which in this case is beneficial to decode the "psycrypted" information.
Additionally, all of the judges agreed that this piece had very significant aesthetic value. It looks extremely HD and harmonious in such states of consciousness, which is a significant boost and perhaps even a Psychedelic Cryptography of its own (meaning that the increase in aesthetic value in such states is sufficiently surprising that it’s a packet of information all by itself).
Despite the very high aesthetic value of this piece and the fact that it did work as a PsyCrypto tool, the reason it got third place was that (a) it is still difficult to decode on psychedelics, and (b) it is not impossible to decode sober. In other words, it is less secure and discriminating than the other two, and therefore not as good in terms of its PsyCrypto properties. It is, however, still very impressive and effective in absolute terms.
Congratulations to the winners and to all of the participants! We look forward to seeing secret messages at PsyTrance festivals and Psychedelic Conferences inspired by this work from now on ;-)
...
Read the original on qri.org »
ggml is a tensor library for machine learning to enable large models and high performance on commodity hardware. It is used by llama.cpp and whisper.cpp
Here are some sample performance stats on Apple Silicon June 2023:
Minimal
We like simplicity and aim to keep the codebase as small and as simple as possible
Open Core
The library and related projects are freely available under the MIT license. The development process is open and everyone is welcome to join. In the future we may choose to develop extensions that are licensed for commercial use
Explore and have fun!
We built ggml in the spirit of play. Contributors are encouraged to try crazy ideas, build wild demos, and push the edge of what’s possible
whisper.cpp
The project provides a high-quality speech-to-text solution that runs on Mac, Windows, Linux, iOS, Android, Raspberry Pi, and Web. Used by rewind.ai
llama.cpp
The project demonstrates efficient inference on Apple Silicon hardware and explores a variety of optimization techniques and applications of LLMs
The best way to support the project is by contributing to the codebase
If you wish to financially support the project, please consider becoming a sponsor to any of the contributors that are already involved:
ggml.ai is a company founded by Georgi Gerganov to support the development of ggml. Nat Friedman and Daniel Gross provided the pre-seed funding.
We are currently seeking to hire full-time developers that share our vision and would like to help advance the idea of on-device inference. If you are interested and if you have already been a contributor to any of the related projects, please contact us at jobs@ggml.ai
For any business-related topics, including support or enterprise deployment, please contact us at sales@ggml.ai
...
Read the original on ggml.ai »
NEW YORK, June 6 (Reuters) - The top U.S. securities regulator sued cryptocurrency platform Coinbase on Tuesday, the second lawsuit in two days against a major crypto exchange, in a dramatic escalation of a crackdown on the industry and one that could dramatically transform a market that has largely operated outside regulation.
The U.S. Securities and Exchange Commission (SEC) on Monday took aim at Binance, the world’s largest cryptocurrency exchange. The SEC accuses Binance and its CEO Changpeng Zhao of operating a “web of deception”.
If successful, the lawsuits could transform the crypto market by asserting the SEC’s jurisdiction over an industry that for years has argued that tokens do not constitute securities and should not be regulated by the SEC.
“The two cases are different, but overlap and point in the same direction: the SEC’s increasingly aggressive campaign to bring cryptocurrencies under the jurisdiction of the federal securities laws,” said Kevin O’Brien, a partner at Ford O’Brien Landy and a former federal prosecutor, adding, however, that the SEC has not previously taken on such major crypto players.
“If the SEC prevails in either case, the cryptocurrency industry will be transformed.”
In its complaint filed in Manhattan federal court, the SEC said Coinbase has since at least 2019 made billions of dollars by operating as a middleman on crypto transactions, while evading disclosure requirements meant to protect investors.
The SEC said Coinbase traded at least 13 crypto assets that are securities that should have been registered, including tokens such as Solana, Cardano and Polygon.
Coinbase suffered about $1.28 billion of net customer outflows following the lawsuit, according to initial estimates from data firm Nansen. Shares of Coinbase’s parent Coinbase Global Inc (COIN.O) closed down $7.10, or 12.1%, at $51.61 after earlier falling as much as 20.9%. They are up 46% this year.
Paul Grewal, Coinbase’s general counsel, in a statement said the company will continue operating as usual and has “demonstrated commitment to compliance.”
Oanda senior market analyst Ed Moya said the SEC “looks like it’s playing Whac-A-Mole with crypto exchanges,” and because most exchanges offer a range of tokens that operate on blockchain protocols targeted by regulators, “it seems like this is just the beginning.”
Leading cryptocurrency bitcoin has been a paradoxical beneficiary of the crackdown.
After an initial plunge to a nearly three-month low of $25,350 following the Binance suit, bitcoin rebounded by more than $2,000, exceeding the previous day’s high.
“The SEC is making life nearly impossible for several altcoins and that is actually driving some crypto traders back into bitcoin,” explained Oanda’s Moya.
Securities, as opposed to other assets such as commodities, are strictly regulated and require detailed disclosures to inform investors of potential risks. The Securities Act of 1933 outlined a definition of the term “security,” yet many experts rely on two U.S. Supreme Court cases to determine if an investment product constitutes a security.
SEC Chair Gary Gensler has long said tokens constitute securities, and the agency has steadily asserted its authority over the crypto market, focusing initially on the sale of tokens and interest-bearing crypto products. More recently, it has taken aim at unregistered crypto broker-dealer, exchange trading and clearing activity.
While a few crypto companies are licensed as alternative trading systems, a type of trading platform used by brokers to trade listed securities, no crypto platform operates as a full-blown stock exchange. The SEC also this year sued Beaxy Digital and Bittrex Global for failing to register as an exchange, clearing house and broker.
“The whole business model is built on a noncompliance with the U.S. securities laws and we’re asking them to come into compliance,” Gensler told CNBC.
Crypto companies dispute that tokens meet the definition of a security, say the SEC’s rules are ambiguous, and argue that the SEC is overstepping its authority in trying to regulate them. Still, many companies have boosted compliance, shelved products and expanded outside the country in response to the crackdown.
Kristin Smith, CEO of the Blockchain Association trade group, rejected Gensler’s efforts to oversee the industry.
“We’re confident the courts will prove Chair Gensler wrong in due time,” she said.
Founded in 2012, Coinbase recently served more than 108 million customers and ended March with $130 billion of customer crypto assets and funds on its balance sheet. Transactions generated 75% of its $3.15 billion of net revenue last year.
Tuesday’s SEC lawsuit seeks civil fines, the recouping of ill-gotten gains and injunctive relief.
On Monday, the SEC accused Binance of inflating trading volumes, diverting customer funds, improperly commingling assets, failing to restrict U.S. customers from its platform, and misleading customers about its controls.
Binance pledged to vigorously defend itself against the lawsuit, which it said reflected the SEC’s “misguided and conscious refusal” to provide clarity to the crypto industry.
Customers pulled around $790 million from Binance and its U.S. affiliate following the lawsuit, Nansen said.
On Tuesday, the SEC filed a motion to freeze assets belonging to Binance.US, Binance’s U.S. affiliate. The holding company of Binance is based in the Cayman Islands.
...
Read the original on www.reuters.com »
Upgrade your Asahi Linux systems, because your graphics drivers are getting a big boost: leapfrogging from OpenGL 2.1 over OpenGL 3.0 up to OpenGL 3.1! Similarly, the OpenGL ES 2.0 support is bumping up to OpenGL ES 3.0. That means more playable games and more functioning applications.
Back in December, I teased an early screenshot of SuperTuxKart’s deferred renderer working on Asahi, using OpenGL ES 3.0 features like multiple render targets and instancing. Now you too can enjoy SuperTuxKart with advanced lighting the way it’s meant to be:
As before, these drivers are experimental and not yet conformant to the OpenGL or OpenGL ES specifications. For now, you’ll need to run our -edge packages to opt in to the work-in-progress drivers, understanding that there may be bugs. Please refer to our previous post explaining how to install the drivers and how to report bugs to help us improve.
With that disclaimer out of the way, there’s a LOT of new functionality packed into OpenGL 3.0, 3.1, and OpenGL ES 3.0 to make this release. Highlights include:
Vulkan and OpenGL support multisampling, short for multisampled anti-aliasing. In graphics, aliasing causes jagged diagonal edges due to rendering at insufficient resolution. One solution to aliasing is rendering at higher resolutions and scaling down. Edges will be blurred, not jagged, which looks better. Multisampling is an efficient implementation of that idea.
A multisampled image contains multiple samples for every pixel. After rendering, a multisampled image is resolved to a regular image with one sample per pixel, typically by averaging the samples within a pixel.
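As an illustration of what a resolve amounts to (not Apple's or the driver's actual code), averaging the samples of a 4x-multisampled image in NumPy looks like this:

# Toy MSAA resolve: average each pixel's samples down to one colour.
import numpy as np

# Hypothetical 4x-multisampled image: height x width x samples x RGBA.
msaa = np.random.rand(1080, 1920, 4, 4).astype(np.float32)

resolved = msaa.mean(axis=2)    # height x width x RGBA, one sample per pixel
print(resolved.shape)           # (1080, 1920, 4)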
Apple GPUs support multisampled images and framebuffers. There’s quite a bit of typing to plumb the programmer’s view of multisampling into the form understood by the hardware, but there’s no fundamental incompatibility.
The trouble comes with sample shading. Recall that in modern graphics, the colour of each fragment is determined by running a fragment shader given by the programmer. If the fragments are pixels, then each sample within that pixel gets the same colour. Running the fragment shader once per pixel still benefits from multisampling thanks to higher quality rasterization, but it’s not as good as actually rendering at a higher resolution. If instead the fragments are samples, each sample gets a unique colour, equivalent to rendering at a higher resolution (supersampling). In Vulkan and OpenGL, fragment shaders generally run per-pixel, but with “sample shading”, the application can force the fragment shader to run per-sample.
How does sample shading work from the drivers’ perspective? On a typical GPU, it is simple: the driver compiles a fragment shader that calculates the colour of a single sample, and sets a hardware bit to execute it per-sample instead of per-pixel. There is only one bit of state associated with sample shading. The hardware will execute the fragment shader multiple times per pixel, writing out pixel colours independently.
Like older GPUs that did not support sample shading, AGX always executes the shader once per pixel, not once per sample. AGX does support sample shading, though.
How? The AGX instruction set allows pixel shaders to output different colours to each sample. The instruction used to output a colour takes a set of samples to modify, encoded as a bit mask. The default all-1’s mask writes the same value to all samples in a pixel, but a mask setting a single bit will write only the single corresponding sample.
This design is unusual, and it requires driver backflips to translate “fragment shaders” into hardware pixel shaders. How do we do it?
Physically, the hardware executes our shader once per pixel. Logically, we’re supposed to execute the application’s fragment shader once per sample. If we know the number of samples per pixel, then we can wrap the application’s shader in a loop over each sample. So, if the original fragment shader is:
interpolated colour = interpolate at current sample(input colour);
output current sample(interpolated colour);
then we will transform the program to the pixel shader:
for (sample = 0; sample < number of samples; ++sample) {
    sample mask = (1 << sample);
    interpolated colour = interpolate at sample(input colour, sample);
    output samples(sample mask, interpolated colour);
}
The original fragment shader runs inside the loop, once per sample. Whenever it interpolates inputs at the current sample position, we change it to instead interpolate at a specific sample given by the loop counter sample. Likewise, when it outputs a colour for a sample, we change it to output the colour to the single sample given by the loop counter.
If the story ended here, this mechanism would be silly. Adding sample masks to the instruction set is more complicated than a single bit to invoke the shader multiple times, as other GPUs do. Even Apple’s own Metal driver has to implement this dance, because Metal has a similar approach to sample shading as OpenGL and Vulkan. With all this extra complexity, is there a benefit?
If we generated that loop at the end, maybe not. But if we know at compile-time that sample shading is used, we can run our full optimizer on this sample loop. If there is an expression that is the same for all samples in a pixel, it can be hoisted out of the loop. Instead of calculating the same value multiple times, as other GPUs do, the value can be calculated just once and reused for each sample. Although it complicates the driver, this approach to sample shading isn’t Apple cutting corners. If we slapped on the loop at the end and did no optimizations, the resulting code would be comparable to what other GPUs execute in hardware. There might be slight differences from spawning fewer threads but executing more control flow instructions, but that’s minor. Generating the loop early and running the optimizer enables better performance than possible on other GPUs.
So is the mechanism only an optimization? Did Apple stumble on a better approach to sample shading that other GPUs should adopt? I wouldn’t be so sure.
Let’s pull the curtain back. AGX has its roots as a mobile GPU intended for iPhones, with significant PowerVR heritage. Even if it powers Mac Pros today, the mobile legacy means AGX prefers software implementations of many features that desktop GPUs implement with dedicated hardware.
Blending is an operation in graphics APIs to combine the fragment shader output colour with the existing colour in the framebuffer. It is usually used to implement alpha blending, to let the background poke through translucent objects.
When multisampling is used without sample shading, although the fragment shader only runs once per pixel, blending happens per-sample. Even if the fragment shader outputs the same colour to each sample, if the framebuffer already had different colours in different samples, blending needs to happen per-sample to avoid losing that information already in the framebuffer.
A traditional desktop GPU blends with dedicated hardware. In the mobile space, there’s a mix of dedicated hardware and software. On AGX, blending is purely software. Rather than configure blending hardware, the driver must produce variants of the fragment shader that include instructions to implement the desired blend mode. With alpha blending, a fragment shader like:
colour = calculate lighting();
output(colour);

becomes:

colour = calculate lighting();
dest = load destination colour;
alpha = colour.alpha;
blended = (alpha * colour) + ((1 - alpha) * dest);
output(blended);
Blending happens per sample. Even if the application intends to run the fragment shader per pixel, the shader must run per sample for correct blending. Compared to other GPUs, this approach to blending would regress performance when blending and multisampling are enabled but sample shading is not.
On the other hand, exposing multisample pixel shaders to the driver solves the problem neatly. If both the blending and the multisample state are known, we can first insert instructions for blending, and then wrap with the sample loop. The above program would then become:
for (sample = 0; sample < number of samples; ++sample) {
    colour = calculate lighting();
    dest = load destination colour at sample(sample);
    alpha = colour.alpha;
    blended = (alpha * colour) + ((1 - alpha) * dest);
    sample mask = (1 << sample);
    output samples(sample mask, blended);
}
In this form, the fragment shader is asymptotically worse than the application wanted: the fragment shader is executed inside the loop, running per-sample unnecessarily.
Have no fear, the optimizer is here. Since colour is the same for each sample in the pixel, it does not depend on the sample ID. The compiler can move the entire original fragment shader (and related expressions) out of the per-sample loop:
colour = calculate lighting();
alpha = colour.alpha;
inv_alpha = 1 - alpha;
colour_alpha = alpha * colour;
for (sample = 0; sample < number of samples; ++sample) {
    dest = load destination colour at sample(sample);
    blended = colour_alpha + (inv_alpha * dest);
    sample mask = (1 << sample);
    output samples(sample mask, blended);
}
Now blending happens per sample but the application’s fragment shader runs just once, matching the performance characteristics of traditional GPUs. Even better, all of this happens without any special work from the compiler. There’s no magic multisampling optimization happening here: it’s just a loop.
By the way, what do we do if we don’t know the blending and multisample state at compile-time? Hope is not lost…
While OpenGL ES 3.0 is an improvement over ES 2.0, we’re not done. In my work-in-progress branch, OpenGL ES 3.1 support is nearly finished, which will unlock compute shaders.
The final goal is a Vulkan driver running modern games. We’re a while away, but the baseline Vulkan 1.0 requirements parallel OpenGL ES 3.1, so our work translates to Vulkan. For example, the multisampling compiler passes described above are common code between the drivers. We’ve tested them against OpenGL, and now they’re ready to go for Vulkan.
And yes, the team is already working on Vulkan.
Until then, you’re one pacman -Syu away from enjoying OpenGL 3.1!
...
Read the original on asahilinux.org »
It really is one of the best product names in Apple history: Vision is a description of a product, it is an aspiration for a use case, and it is a critique of the sort of society we are building, behind Apple’s leadership more than anyone else.
I am speaking, of course, about Apple’s new mixed reality headset that was announced at yesterday’s WWDC, with a planned ship date of early 2024, and a price of $3,499. I had the good fortune of using an Apple Vision in the context of a controlled demo — which is an important grain of salt, to be sure — and I found the experience extraordinary.
It’s far better than I expected, and I had high expectations.
The high expectations came from the fact that not only was this product being built by Apple, the undisputed best hardware maker in the world, but also because I am, unlike many, relatively optimistic about VR. What surprised me is that Apple exceeded my expectations on both counts: the hardware and experience were better than I thought possible, and the potential for Vision is larger than I anticipated. The societal impacts, though, are much more complicated.
I have, for as long as I have written about the space, highlighted the differences between VR (virtual reality) and AR (augmented reality). From a 2016 Update:
I think it’s useful to make a distinction between virtual and augmented reality. Just look at the names: “virtual” reality is about an immersive experience completely disconnected from one’s current reality, while “augmented” reality is about, well, augmenting the reality in which one is already present. This is more than a semantic distinction about different types of headsets: you can divide nearly all of consumer technology along this axis. Movies and videogames are about different realities; productivity software and devices like smartphones are about augmenting the present. Small wonder, then, that all of the big virtual reality announcements are expected to be video game and movie related.
Augmentation is more interesting: for the most part it seems that augmentation products are best suited as spokes around a hub; a car’s infotainment system, for example, is very much a device that is focused on the current reality of the car’s occupants, and as evinced by Ford’s announcement, the future here is to accommodate the smartphone. It’s the same story with watches and wearables generally, at least for now.
I highlight that timing reference because it’s worth remembering that smartphones were originally conceived of as a spoke around the PC hub; it turned out, though, that by virtue of their mobility — by being useful in more places, and thus capable of augmenting more experiences — smartphones displaced the PC as the hub. Thus, when thinking about the question of what might displace the smartphone, I suspect what we today think of a “spoke” will be a good place to start. And, I’d add, it’s why platform companies like Microsoft and Google have focused on augmented, not virtual, reality, and why the mysterious Magic Leap has raised well over a billion dollars to-date; always in your vision is even more compelling than always in your pocket (as is always on your wrist).
I’ll come back to that last paragraph later on; I don’t think it’s quite right, in part because Apple Vision shows that the first part of the excerpt wasn’t right either. Apple Vision is technically a VR device that experientially is an AR device, and it’s one of those solutions that, once you have experienced it, is so obviously the correct implementation that it’s hard to believe there was ever any other possible approach to the general concept of computerized glasses.
This reality — pun intended — hits you the moment you finish setting up the device, which includes not only fitting the headset to your head and adding a prescription set of lenses, if necessary, but also setting up eye tracking (which I will get to in a moment). Once you have jumped through those hoops you are suddenly back where you started: looking at the room you are in with shockingly full fidelity.
What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world, and displaying them to the postage-stamp sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something, that isn’t exactly like total clarity but is of sufficiently high resolution and speed that there is no reason to think it’s not real.
The speed is essential: Apple claims that the threshold for your brain to notice any sort of delay in what you see and what your body expects you to see (which is what causes known VR issues like motion sickness) is 12 milliseconds, and that the Vision visual pipeline displays what it sees to your eyes in 12 milliseconds or less. This is particularly remarkable given that the time for the image sensor to capture and process what it is seeing is along the lines of 7~8 milliseconds, which is to say that the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds.
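A quick tally of those figures shows how little of the budget is left once the sensor has done its work (numbers taken from the paragraph above; the split is approximate):

# Rough photon-to-photon latency budget implied by Apple's figures.
threshold_ms = 12.0    # delay the brain starts to notice, per Apple
capture_ms = 8.0       # sensor capture and readout, quoted as 7-8 ms

print(f"~{threshold_ms - capture_ms:.0f} ms left for processing and display")   # ~4 ms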
This is, truly, something that only Apple could do, because this speed is a function of two things: first, the Apple-designed R1 processor (Apple also designed part of the image sensor), and second, the integration with Apple’s software. Here is Mike Rockwell, who led the creation of the headset, explaining “visionOS”:
None of this advanced technology could come to life without a powerful operating system called “visionOS”. It’s built on the foundation of the decades of engineering innovation in macOS, iOS, and iPad OS. To that foundation we added a host of new capabilities to support the low latency requirements of spatial computing, such as a new real-time execution engine that guarantees performance-critical workloads, a dynamically foveated rendering pipeline that delivers maximum image quality to exactly where your eyes are looking for every single frame, a first-of-its-kind multi-app 3D engine that allows different apps to run simultaneously in the same simulation, and importantly, the existing application frameworks we’ve extended to natively support spatial experiences. visionOS is the first operating system designed from the ground up for spatial computing.
The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:
A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bound rather than just kept as a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.
Real-time operating systems are used in embedded systems for applications with critical functionality, like a car, for example: it’s ok to have an infotainment system that sometimes hangs or even crashes, in exchange for more flexibility and capability, but the software that actually operates the vehicle has to be reliable and unfailingly fast. This is, in broad strokes, one way to think about how visionOS works: while the user experience is a time-sharing operating system that is indeed a variation of iOS, and runs on the M2 chip, there is a subsystem that primarily operates the R1 chip that is real-time; this means that even if visionOS hangs or crashes, the outside world is still rendered under that magic 12 milliseconds.
This is, needless to say, the most meaningful manifestation yet of Apple’s ability to integrate hardware and software: while previously that integration manifested itself in a better user experience in the case of a smartphone, or a seemingly impossible combination of power and efficiency in the case of Apple Silicon laptops, in this case that integration makes possible the melding of VR and AR into a single Vision.
In the early years of digital cameras there was bifurcation between consumer cameras that were fully digital, and high-end cameras that had a digital sensor behind a traditional reflex mirror that pushed actual light to an optical viewfinder. Then, in 2008, Panasonic released the G1, the first-ever mirrorless camera with an interchangeable lens system. The G1 had a viewfinder, but the viewfinder was in fact a screen.
This system was, at the beginning, dismissed by most high-end camera users: sure, a mirrorless system allowed for a simpler and smaller design, but there was no way a screen could ever compare to actually looking through the lens of the camera like you could with a reflex mirror. Fast forward to today, though, and nearly every camera on the market, including professional ones, are mirrorless: not only did those tiny screens get a lot better, brighter, and faster, but they also brought many advantages of their own, including the ability to see exactly what a photo would look like before you took it.
Mirrorless cameras were exactly what popped into my mind when the Vision Pro launched into that default screen I noted above, where I could effortlessly see my surroundings. The field of view was a bit limited on the edges, but when I actually brought up the application launcher, or was using an app or watching a video, the field of vision relative to an AR experience like a Hololens was positively astronomical. In other words, by making the experience all digital, the Vision Pro delivers an actually useful AR experience that makes the still massive technical challenges facing true AR seem irrelevant.
The payoff is the ability to then layer in digital experiences into your real-life environment: this can include productivity applications, photos and movies, conference calls, and whatever else developers might come up with, all of which can be used without losing your sense of place in the real world. To just take one small example, while using the Vision Pro, my phone kept buzzing with notifications; I simply took the phone out of my pocket, opened control center, and turned on do-not-disturb. What was remarkable only in retrospect is that I did all of that while technically being closed off to the world in virtual reality, but my experience was of simply glancing at the phone in my hand without even thinking about it.
Making everything digital pays off in other ways, as well; the demo included this dinosaur experience, where the dinosaur seems to enter the room:
The whole reason this works is because while the room feels real, it is in fact rendered digitally.
It remains to be seen how well this experience works in reverse: the Vision Pro includes “EyeSight”, which is Apple’s name for the front-facing display that shows your eyes to those around you. EyeSight wasn’t a part of the demo, so it remains to be seen if it is as creepy as it seems it might be: the goal, though, is the same: maintain a sense of place in the real world not by solving seemingly-impossible physics problems, but by simply making everything digital.
That the user’s eyes can be displayed on the outside of the Vision Pro is arguably a by-product of the technology that undergirds the Vision Pro’s user interface: what you are looking at is tracked by the Vision Pro, and when you want to take action on whatever you are looking at you simply touch your fingers together. Notably, your fingers don’t need to be extended into space: the entire time I used the Vision Pro my hands were simply resting in my lap, their movement tracked by the Vision Pro’s cameras.
It’s astounding how well this works, and how natural it feels. What is particularly surprising is how high-resolution this UI is; look at this crop of a still from Apple’s presentation:
The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well.
Of course you can also use a keyboard and trackpad, connected via Bluetooth, and you can also project a Mac into the Vision Pro; the full version of the above screenshot has a Mac running Final Cut Pro to the left of Photos:
I didn’t get the chance to try the Mac projection, but truthfully, while I went into this keynote the most excited about this capability, the native interface worked so well that I suspect I am going to prefer using native apps, even if those apps are also available for the Mac.
An incredible product is one thing; the question on everyone’s mind, though, is what exactly is this useful for? Who has room for another device in their life, particularly one that costs $3,499?
This question is, more often than not, more important to the success of a product than the quality of the product itself. Apple’s own history of new products is an excellent example:
The PC (including the Mac) brought computing to the masses for the first time; there was a massive amount of greenfield in people’s lives, and the product category was a massive success.
The iPhone expanded computing from the desktop to every other part of a person’s life. It turns out that was an even larger opportunity than the desktop, and the product category was an even larger success.
The iPad, in contrast to the Mac and iPhone, sort of sat in the middle, a fact that Steve Jobs noted when he introduced the product in 2010:
All of us use laptops and smartphones now. Everybody uses a laptop and/or a smartphone. And the question has arisen lately, is there room for a third category of device in the middle? Something that’s between a laptop and a smartphone. And of course we’ve pondered this question for years as well. The bar is pretty high. In order to create a new category of devices those devices are going to have to be far better at doing some key tasks. They’re going to have to be far better at doing some really important things, better than laptop, better than the smartphone.
Jobs went on to list a number of things he thought the iPad might be better at, including web browsing, email, viewing photos, watching videos, listening to music, playing games, and reading eBooks.
In truth, the only one of those categories that has truly taken off is watching video, particularly streaming services. That’s a pretty significant use case, to be sure, and the iPad is a successful product (and one whose potential use cases have been dramatically expanded by the Apple Pencil) that makes nearly as much revenue as the Mac, even though it dominates the tablet market to a much greater extent than the Mac does the PC market. At the same time, it’s not close to the iPhone, which makes sense: the iPad is a nice addition to one’s device collection, whereas an iPhone is essential.
The critics are right that this will be Apple Vision’s challenge at the beginning: a lot of early buyers will probably be interested in the novelty value, or will be Apple super fans, and it’s reasonable to wonder if the Vision Pro might become the world’s most expensive paperweight. To use an updated version of Jobs’ slide:
Small wonder that Apple has reportedly pared its sales estimates to less than a million devices.
As I noted above, I have been relatively optimistic about VR, in part because I believe the most compelling use case is for work. First, if a device actually makes someone more productive, it is far easier to justify the cost. Second, while it is a barrier to actually put on a headset — to go back to my VR/AR framing above, a headset is a destination device — work is a destination. I wrote in another Update in the context of Meta’s Horizon Workrooms:
The point of invoking the changes wrought by COVID, though, was to note that work is a destination, and it’s a destination that occupies a huge amount of our time. Of course when I wrote that skeptical article in 2018 a work destination was, for the vast majority of people, a physical space; suddenly, though, for millions of white collar workers in particular, it’s a virtual space. And, if work is already a virtual space, then suddenly virtual reality seems far more compelling. In other words, virtual reality may be much more important than previously thought because the vector by which it will become pervasive is not the consumer space (and gaming), but rather the enterprise space, particularly meetings.
Apple did discuss meetings in the Vision Pro, including a framework for personas — their word for avatars — that is used for Facetime and will be incorporated into upcoming Zoom, Teams, and Webex apps. What is much more compelling to me, though, is simply using a Vision Pro instead of a Mac (or in conjunction with one, by projecting the screen).
At the risk of over-indexing on my own experience, I am a huge fan of multiple monitors: I have four at my desk, and it is frustrating to be on the road right now typing this on a laptop screen. I would absolutely pay for a device to have a huge workspace with me anywhere I go, and while I will reserve judgment until I actually use a Vision Pro, I could see it being better at my desk as well.
I have tried this with the Quest, but the screen is too low of resolution to work comfortably, the user interface is a bit clunky, and the immersion is too complete: it’s hard to even drink coffee with it on. Oh, and the battery life isn’t nearly good enough. Vision Pro, though, solves all of these problems: the resolution is excellent, I already raved about the user interface, and critically, you can still see around you and interact with objects and people. Moreover, this is where the external battery solution is an advantage, given that you can easily plug the battery pack into a charger and use the headset all day (and, assuming Apple’s real-time rendering holds up, you won’t get motion sickness).1
Again, I’m already biased on this point, given both my prediction and personal workflow, but if the Vision Pro is a success, I think that an important part of its market will be, at first, to be used alongside a Mac, and, as the native app ecosystem develops, to be used in place of one.
To put it even more strongly, the Vision Pro is, I suspect, the future of the Mac.
The larger Vision Pro opportunity is to move in on the iPad and to become the ultimate consumption device:
The keynote highlighted the movie watching experience of the Vision Pro, and it is excellent and immersive. Of course it isn’t, in the end, that much different than having an excellent TV in a dark room.
What was much more compelling were a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest.
It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing.
What is fascinating is that such a season pass should, in my estimation, look very different from a traditional TV broadcast, what with its multiple camera angles, announcers, scoreboard slug, etc. I wouldn’t want any of that: if I want to see the score, I can simply look up at the scoreboard as if I’m in the stadium; the sounds are provided by the crowd and PA announcer. To put it another way, the Apple Immersive Video Format, to a far greater extent than I thought possible, truly makes you feel like you are in a different place.
Again, though, this was a 10 second clip (there was another one for a baseball game, shot from the home team’s dugout, that was equally compelling). There is a major chicken-and-egg issue in terms of producing content that actually delivers this experience, which is probably why the keynote most focused on 2D video. That, by extension, means it is harder to justify buying a Vision Pro for consumption purposes. The experience is so compelling though, that I suspect this problem will be solved eventually, at which point the addressable market isn’t just the Mac, but also the iPad.
What is left in place in this vision is the iPhone: I think that smartphones are the pinnacle in terms of computing, which is to say that the Vision Pro makes sense everywhere the iPhone doesn’t.
I recognize how absurdly positive and optimistic this Article is about the Vision Pro, but it really does feel like the future. That future, though, is going to take time: I suspect there will be a slow burn, particularly when it comes to replacing product categories like the Mac or especially the iPad.
Moreover, I didn’t even get into one of the features Apple is touting most highly, which is the ability of the Vision Pro to take “pictures” — memories, really — of moments in time and render them in a way that feels incredibly intimate and vivid.
One of the issues is the fact that recording those memories does, for now, entail wearing the Vision Pro in the first place, which is going to be really awkward! Consider this video of a girl’s birthday party:
It’s going to seem pretty weird when dad is wearing a headset as his daughter blows out birthday candles; perhaps this problem will be fixed by a separate line of standalone cameras that capture photos in the Apple Immersive Video Format, which is another way to say that this is a bit of a chicken-and-egg problem.
What was far more striking, though, was how the consumption of this video was presented in the keynote:
Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:
I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.
Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.
That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal, but now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is for ever more personal experiences, and I’m not sure it’s an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards an increase in loneliness.
This, I would note, is where the most interesting comparisons to Meta’s Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but that bit about the deep integration between Apple’s own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.
What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta’s goals that much more difficult to achieve. Apple, meanwhile, isn’t even bothering with presence: even its Facetime integration was with an avatar in a window, leaning into the fact you are apart, whereas Meta wants you to feel like you are together.
In other words, there is actually a reason to hope that Meta might win: it seems like we could all do with more connectedness, and less isolation with incredible immersive experiences to dull the pain of loneliness. One wonders, though, if Meta is in fact fighting Apple not just on hardware, but on the overall trend of society; to put it another way, bullishness about the Vision Pro may in fact be a function of being bearish about our capability to meaningfully connect.
...
Read the original on stratechery.com »
The NVIDIA GH200 Grace™ Hopper™ Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing (HPC) applications. The superchip delivers up to 10X higher performance for applications running terabytes of data, enabling scientists and researchers to reach unprecedented solutions for the world’s most complex problems.
Take a Closer Look at the Superchip
The NVIDIA GH200 Grace Hopper Superchip combines the Grace and Hopper architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications.
New 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5
Runs all NVIDIA software stacks and platforms, including the NVIDIA HPC SDK, NVIDIA AI, and NVIDIA Omniverse™
GH200-powered systems join 400+ configurations—based on the latest NVIDIA architectures—that are being rolled out to meet the surging demand for generative AI.
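The “CPU+GPU coherent memory model” here means the Grace CPU and Hopper GPU can work on the same allocation without staging copies over PCIe; NVLink-C2C provides the bandwidth and cache coherence behind that shared view. As a rough illustration only — a minimal CUDA unified-memory sketch, not NVIDIA sample code, with a made-up kernel and sizes — the programming pattern looks like this:

```cuda
// Minimal sketch (illustrative, not NVIDIA sample code): one allocation
// shared by CPU and GPU, with no explicit cudaMemcpy staging.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // GPU writes directly into the shared allocation
}

int main() {
    const size_t n = 1 << 20;
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // one pointer, visible to CPU and GPU

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // CPU initializes in place

    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("data[0] = %.1f\n", data[0]);            // CPU reads the GPU's result: 2.0
    cudaFree(data);
    return 0;
}
```

The same program runs on an ordinary discrete GPU, where the driver migrates pages back and forth over PCIe; the superchip’s pitch is that the shared pointer is instead backed by a much faster, hardware-coherent link, which matters most when working sets run into the terabytes.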
...
Read the original on www.nvidia.com »
The US has been urged to disclose evidence of UFOs after a whistleblower former intelligence official said the government has possession of “intact and partially intact” alien vehicles.
The former intelligence official David Grusch, who led analysis of unexplained anomalous phenomena (UAP) within a US Department of Defense agency, has alleged that the US has craft of non-human origin.
Information on these vehicles is being illegally withheld from Congress, Grusch told the Debrief. Grusch said when he turned over classified information about the vehicles to Congress he suffered retaliation from government officials. He left the government in April after a 14-year career in US intelligence.
Jonathan Grey, a current US intelligence official at the National Air and Space Intelligence Center (Nasic), confirmed the existence of “exotic materials” to the Debrief, adding: “We are not alone.”
The disclosures come after a swell of credible sightings and reports has revived interest in alien ships, and potential visits, in recent years.
In 2021, the Pentagon released a report on UAP — the term is preferred to UFO by much of the extraterrestrial community — which found more than 140 instances of UAP encounters that could not be explained.
The report followed a leak of military footage that showed apparently inexplicable happenings in the sky, while navy pilots testified that they had frequently had encounters with strange craft off the US coast.
In an interview with the Debrief journalists Leslie Kean and Ralph Blumenthal, who previously exposed the existence of a secret Pentagon program that investigated UFOs, Grusch said the US government and defense contractors had been recovering fragments of non-human craft, and in some cases entire craft, for decades.
“We are not talking about prosaic origins or identities,” Grusch said. “The material includes intact and partially intact vehicles.”
Grusch told the Debrief that analysis determined that this material is “of exotic origin” — meaning “non-human intelligence, whether extraterrestrial or unknown origin”.
“[This assessment is] based on the vehicle morphologies and material science testing and the possession of unique atomic arrangements and radiological signatures,” Grusch said.
Grey, who, according to the Debrief, analyzes unexplained anomalous phenomena within the Nasic, confirmed Grusch’s account.
“The non-human intelligence phenomenon is real. We are not alone,” Grey said. “Retrievals of this kind are not limited to the United States. This is a global phenomenon, and yet a global solution continues to elude us.”
The Debrief spoke to several of Grusch’s former colleagues, each of whom vouched for his character. Karl E Nell, a retired army colonel, said Grusch was “beyond reproach”. In a 2022 performance review seen by the Debrief, Grusch was described as “an officer with the strongest possible moral compass”.
Nick Pope, who spent the early 1990s investigating UFOs for the British Ministry of Defence (MoD), said Grusch and Grey’s account of alien materials was “very significant”.
“It’s one thing to have stories on the conspiracy blogs, but this takes it to the next level, with genuine insiders coming forward,” Pope said.
“When these people make these formal complaints, they do so on the understanding that if they’ve knowingly made a false statement, they are liable to a fairly hefty fine, and/or prison.
“People say: ‘Oh, people make up stories all the time.’ But I think it’s very different to go before Congress and go to the intelligence community inspector general and do that. Because there will be consequences if it emerges that this is not true.”
The Debrief reported that Grusch’s knowledge of non-human materials and vehicles was based on “extensive interviews with high-level intelligence officials”. He said he had reported the existence of a UFO material “recovery program” to Congress.
“Grusch said that the craft recovery operations are ongoing at various levels of activity and that he knows the specific individuals, current and former, who are involved,” the Debrief reported.
In the Debrief article, Grusch does not say he has personally seen alien vehicles, nor does he say where they may be being stored. He asked the Debrief to withhold details of retaliation by government officials due to an ongoing investigation.
He also does not specify how he believes the government retaliated against him.
In June 2021, a report from the Office of the Director of National Intelligence said that from 2004 to 2021 there were 144 encounters between military pilots and UAP, 80 of which were captured on multiple sensors. Only one of the 144 encounters could be explained with “high confidence” — it was a large, deflating balloon.
Following increased interest from the public and some US senators, the Pentagon established the All-domain Anomaly Resolution Office, charged with tracking UAP, in July 2022.
In December last year, the office said it had received “several hundred” new reports, but no evidence so far of alien life.
The publication of Grusch and Grey’s claims comes after a panel that the US space agency Nasa charged with investigating unexplained anomalous phenomena said stigma around reporting encounters — and harassment of those who do report encounters — was hindering its work.
The navy pilots who in 2021 shared their experiences of encountering unexplained objects while conducting military flights said they, and others, had decided against reporting the encounters internally, because of fears it could hinder their careers.
“Harassment only leads to further stigmatization of the UAP field, significantly hindering the scientific progress and discouraging others to study this important subject matter,” Nasa’s science chief, Nicola Fox, said in a public meeting on 31 May.
Dr David Spergel, the independent chair of Nasa’s UAP independent study team, told the Guardian he did not know Grusch and had no knowledge of his claims.
The Department of Defense did not immediately respond to a request for comment.
In a statement, a Nasa spokesperson said: “One of Nasa’s key priorities is the search for life elsewhere in the universe, but so far, NASA has not found any credible evidence of extraterrestrial life and there is no evidence that UAPs are extraterrestrial. However, Nasa is exploring the solar system and beyond to help us answer fundamental questions, including whether we are alone in the universe.”
Pope said in his work investigating UFOs for the MoD he had seen no hard evidence of non-human craft or materials.
“Some of our cases were intriguing,” Pope said. “But we didn’t have a spaceship in a hangar anywhere. And if we did, they didn’t tell me.”
Still, Pope said, Grusch’s claims should be seen as part of an increasing flow of information — and hopefully disclosures — about UFOs.
He said: “It’s part of a wider puzzle. And I think, assuming this is all true, it takes us closer than we’ve ever been before to the very heart of all this.”
...
Read the original on www.theguardian.com »
After a roughly 30-minute demo that ran through the major features currently ready to test, I came away convinced that Apple has delivered nothing less than a genuine leapfrog in the capability and execution of XR, or mixed reality, with its new Apple Vision Pro.
To be super clear, I’m not saying it delivers on all its promises, that it’s a genuinely new paradigm in computing, or any other high-powered claim that Apple hopes to deliver on once it ships. I will need a lot more time with the device than a guided demo.
But, I’ve used essentially every major VR headset and AR device since 2013’s Oculus DK1 right up through the latest generations of Quest and Vive headsets. I’ve tried all of the experiences and stabs at making fetch happen when it comes to XR. I’ve been awed and re-awed as developers of the hardware and software of those devices and their marquee apps have continued to chew away at the “conundrum of the killer app” — trying to find something that would get real purchase with the broader public.
There are some genuine social, narrative or gaming successes like Gorilla Tag, VRChat or Cosmonius. I’ve also been moved by first-person experiences by Sundance filmmakers highlighting the human (or animal) condition.
But none of them had the advantages that Apple brings to the table with Apple Vision Pro. Namely, 5,000 patents filed over the past few years and an enormous base of talent and capital to work with. Every bit of this thing shows Apple-level ambition. I don’t know whether it will be the “next computing mode,” but you can see the conviction behind each of the choices made here. No corners cut. Full-tilt engineering on display.
The hardware is good — very good — with 24 million pixels across the two panels, far more than any headset most consumers have come into contact with. The optics are better, the headband is comfortable and quickly adjustable, and there is a top strap for weight relief. Apple says it is still working out which light seal (the cloth shroud) options to ship when it releases officially, but the default one was comfortable for me. The company aims to offer varying sizes and shapes to fit different faces. The power connector has a great little design as well: it attaches using internal pin-type power linkages and an external twist lock.
There is also a magnetic solution for some (but not all) optical adjustments people with differences in vision may need. The onboarding experience features an automatic eye-relief calibration matching the lenses to the center of your eyes. No manual wheels adjusting that here.
The main frame and glass piece look fine, though it’s worth mentioning that they are very substantial in size. Not heavy, per se, but definitely present.
If you have experience with VR at all then you know that the two big barriers most people hit are either latency-driven nausea or the isolation that long sessions wearing something over your eyes can deliver.
Apple has mitigated both of those head on. The R1 chip that sits alongside the M2 chip streams fresh images to the displays within 12 ms, and I noticed no judder or framedrops. There was a slight motion-blur effect in the passthrough mode, but it wasn’t distracting. The windows themselves rendered crisply and moved around snappily.
Of course, Apple was able to mitigate those issues due to a lot of completely new and original hardware. Everywhere you look here there’s a new idea, a new technology or a new implementation. All of that new comes at a price: $3,500 is on the high end of expectations and firmly places the device in the power user category for early adopters.
Here’s what Apple got right that other headsets just couldn’t nail down:
The eye tracking and gesture control is near perfect. The hand gestures are picked up anywhere around the headset. That includes on your lap or low and away resting on a chair or couch. Many other hand-tracking interfaces force you to keep your hands up in front of you, which is tiring. Apple has high-resolution cameras dedicated to the bottom of the device just to keep track of your hands. Similarly, an eye-tracking array inside means that, after calibration, nearly everything you look at is precisely highlighted. A simple low-effort tap of your fingers and boom, it works.
Passthrough is a major key. Having a real-time 4k view of the world around you that includes any humans in your personal space is so important for long-session VR or AR wear. There is a deep animal brain thing in most humans that makes us really, really uncomfortable if we can’t see our surroundings for a length of time. Eliminating that worry by passing through an image should improve the chance of long use times. There’s also a clever “breakthrough” mechanism that automatically passes a person who comes near you through your content, alerting you to the fact that they’re approaching. The eyes on the outside, which change appearance depending on what you’re doing, also provide a nice context cue for those outside.
The resolution means that text is actually readable. Apple’s positioning of this as a full on computing device only makes sense if you can actually read text in it. All of the previous iterations of “virtual desktop” setups have relied on panels and lenses that present too blurry a view to reliably read fine text at length. In many cases it literally hurt to do so. Not with the Apple Vision Pro — text is super crisp and legible at all sizes and at far “distances” within your space.
There were a handful of really surprising moments from my short time with the headset, as well. Aside from the sharpness of the display and the snappy responsiveness of the interface, the entire suite of samples oozed attention to detail.
The Personas Play. I was HIGHLY doubtful that Apple could pull off a workable digital avatar based off of just a scan of your face using the Vision Pro headset itself. Doubt crushed. I’d say that if you’re measuring the digital version of you that it creates to be your avatar in FaceTime calls and other areas, it has a solid set of toes on the other side of the uncanny valley. It’s not totally perfect, but they got skin tension and muscle work right, the expressions they have you make are used to interpolate out a full range of facial contortions using machine learning models, and the brief interactions I had with a live person on a call (and it was live, I checked by asking off-script stuff) did not feel creepy or odd. It worked.
It’s crisp. I’m sort of stating this again but, really, it’s crisp as hell. Running right up to demos like the 3D dinosaur you got right down to the texture level and beyond.
3D Movies are actually good in it. Jim Cameron probably had a moment when he saw Avatar: The Way of Water on the Apple Vision Pro. This thing was absolutely born to make the 3D format sing, and it can display 3D movies pretty much right away, so there’s going to be a decent library of shot-in-3D titles that will bring new life to them all. The 3D photos and videos you can take with Apple Vision Pro directly also look super great, but I wasn’t able to test capturing any myself, so I don’t know how that will feel yet. Awkward? Hard to say.
The setup is smooth and simple. A couple of minutes and you’re good to go. Very Apple.
Yes, it does look that good. The output of the interface and the various apps are so good that Apple just used them directly off of the device in its keynote. The interface is bright and bold and feels present because of the way it interacts with other windows, casts shadows on the ground and reacts to lighting conditions.
Overall, I’m hesitant to make any broad claims about whether Apple Vision Pro is going to fulfill Apple’s claims about the onset of spatial computing. I’ve had far too little time with it and it’s not even completed — Apple is still working on things like the light shroud and definitely on many software aspects.
It is, however, really, really well done. The platonic ideal of an XR headset. Now, we wait to see what developers and Apple accomplish over the next few months and how the public reacts.
...
Read the original on techcrunch.com »
...
Read the original on bugs.chromium.org »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.