The advent of the personal computer wasn’t just about making these powerful machines available to everyone; it was also about making them accessible and usable, even for those without a computer science degree. Larry Tesler, who passed away on Monday, might not be a household name like Steve Jobs or Bill Gates, but his contributions to making computers and mobile devices easier to use are the highlight of a long career spent influencing modern computing.
Born in 1945 in New York, Tesler went on to study computer science at Stanford University. After graduation he dabbled in artificial intelligence research (long before it became a deeply concerning tool) and became involved in the anti-war and anti-corporate-monopoly movements, with companies like IBM among his deserving targets. In 1973 Tesler took a job at the Xerox Palo Alto Research Center (PARC), where he worked until 1980. Xerox PARC is famously known for developing the mouse-driven graphical user interface we now all take for granted, and during his time at the lab Tesler worked with Tim Mott to create a word processor called Gypsy, best known for introducing the terms “cut,” “copy,” and “paste” as commands for removing, duplicating, and repositioning chunks of text.
Xerox PARC is also well known for failing to capitalize on the groundbreaking personal-computing research it did, so in 1980 Tesler moved to Apple Computer, where he worked until 1997. Over the years he held numerous positions at the company, including Vice President of AppleNet (Apple’s in-house local area networking system, eventually canceled), and even served as Apple’s Chief Scientist, a title once held by Steve Wozniak.
In addition to his contributions to some of Apple’s most famous hardware, Tesler was also known for his efforts to make software and user interfaces more accessible. Beyond the now ubiquitous “cut,” “copy,” and “paste” terminology, Tesler was an advocate for an approach to UI design known as modeless computing, which is reflected in his personal website. In essence, it ensures that user actions remain consistent throughout an operating system’s various functions and apps. When users open a word processor, for instance, they now simply assume that hitting any of the alphanumeric keys on their keyboard will make that character appear on-screen at the cursor’s insertion point. But there was a time when word processors could be switched between multiple modes, in which typing on the keyboard would either add characters to a document or instead enter functional commands.
There are still plenty of software applications where tools and functionality change depending on the mode they’re in (complex apps like Photoshop, for example, where various tools behave differently and perform very distinct functions) but for the most part modern operating systems like Apple’s macOS and Microsoft’s Windows have embraced user-friendliness through a less complicated modeless approach.
After leaving Apple in 1997, Tesler co-founded a company called Stagecast Software which developed applications that made it easier and more accessible for children to learn programming concepts. In 2001 he joined Amazon and eventually became the VP of Shopping Experience there, in 2005 he switched to Yahoo where he headed up that company’s user experience and design group, and then in 2008 he became a product fellow at 23andMe. According to his CV, Tesler left 23andMe in 2009 and from then on mostly focused on consulting work.
While there are undoubtedly countless other contributions Tesler made to modern computing as part of his work on teams at Xerox and Apple that may never come to light, his known contributions are immense. Tesler is one of the major reasons computers moved out of research centers and into homes.
Kickstarter employees voted to form a union with the Ofﬁce and Professional Employees International Union, which represents more than 100,000 white collar workers. The ﬁnal vote was 46 for the union, 37 against, a historic win for unionization efforts at tech companies.
Kickstarter workers are now the ﬁrst white collar workers at a major tech company to successfully unionize in the United States, sending a message to other tech workers.
“Everyone was crying [when the results were announced],” Clarissa Redwine, a Kickstarter United organizer who was fired in September, told Motherboard. “I thought it would be close, but I also knew we were going to win. I hope other tech workers feel emboldened and know that it’s possible to fight for your workplace and your values. I know my former coworkers will use a seat at the table really well.”
“Today we learned that in a 46 to 37 vote, our staff has decided to unionize,” Kickstarter’s CEO Aziz Hasan said in a statement. “We support and respect this decision, and we are proud of the fair and democratic process that got us here. We’ve worked hard over the last decade to build a different kind of company, one that measures its success by how well it achieves its mission: helping to bring creative projects to life. Our mission has been common ground for everyone here during this process, and it will continue to guide us as we enter this new phase together.”
The union at the Brooklyn-based crowd-funding platform arrives during a period of unprecedented labor organizing among engineers and other white collar tech workers at Google, Amazon, Microsoft and other prominent tech companies—around issues like sexual harassment, ICE contracts, and carbon emissions. Between 2017 and 2019, the number of protest actions led by tech workers nearly tripled. In 2019 alone, tech workers led more than 100 actions, according to the online database “Collective Actions in Tech.”
“I feel like the most important issues [for us] are around creating clearer policies and support for reporting workplace issues and creating clearer mechanisms for hiring and ﬁring employees,” said RV Dougherty, a former trust and safety analyst and core organizer for Kickstarter United who quit in early February. “Right now so much depends on what team you’re on and if you have a good relationship with your manager… We also have a lot of pay disparity and folks who are doing incredible jobs but have been kept from getting promoted because they spoke their mind, which is not how Kickstarter should work.”
In the days leading up to Kickstarter’s vote count, Motherboard revealed that Kickstarter had hired Duane Morris, a Philadelphia law firm that specializes in labor-management relations and “maintaining a union-free workplace.” Kickstarter confirmed to Motherboard that it first retained the services of Duane Morris in 2018, before it knew about union organizing at the company, but would not go into detail about whether the firm had advised the company on how to defeat the union, and it denied any union-busting activity.
Dating back to its 2009 founding, Kickstarter has tried to distinguish itself as a progressive exception to Silicon Valley tech companies. In 2015, the company’s leadership announced it had become a “public beneﬁt corporation.” “Beneﬁt Corporations are for-proﬁt companies that are obligated to consider the impact of their decisions on society, not only shareholders,” the senior leadership wrote at the time. The company has been hailed as one of the most ethical places to work in tech.
Indeed, rather than dedicate its resources to maximizing proﬁt, Kickstarter has fought for progressive causes, like net neutrality, and against the anti-trans bathroom law in North Carolina.
But in 2018, a heated disagreement broke out between employees and management about whether to leave a project called “Always Punch Nazis” on the platform, according to reporting in Slate. When Breitbart said the project violated Kickstarter’s terms of service by inciting violence, management initially planned to remove the project, but then reversed its decision after protest from employees.
Following the controversy, employees announced their intentions to unionize with OPEIU Local 153 in March 2019. And the company made it clear that it did not believe a union was right for Kickstarter.
In a letter to creators in September, Kickstarter’s CEO Aziz Hasan wrote that “The union framework is inherently adversarial.”
“That dynamic doesn’t reflect who we are as a company, how we interact, how we make decisions, or where we need to go,” he wrote. “We believe that in many ways it would set us back.”
In September, Kickstarter ﬁred two employees on its union organizing committee within 8 days, informing a third that his role was no longer needed at the company. Following outcry from prominent creators, the company insisted that the two ﬁrings were related to job performance, not union activity.
The two ﬁred workers ﬁled federal unfair labor practice charges with the National Labor Relations Board (NLRB), claiming the company retaliated against them for union organizing in violation of the National Labor Relations Act. (Those charges have yet to be resolved.) Days later, the company denied a request from the union, Kickstarter United, for voluntary recognition.
The decision to unionize at Kickstarter follows a series of victories for union campaigns led by blue collar tech workers. Last year, 80 Google contractors in Pittsburgh, 2,300 cafeteria workers at Google in Silicon Valley, and roughly 40 Spin e-scooter workers in San Francisco voted to form the ﬁrst unions in the tech industry. In early February, 15 employees at the delivery app Instacart in Chicago successfully unionized, following a ﬁerce anti-union campaign run by management.
By some accounts, the current wave of white collar tech organizing began in early 2018 when the San Francisco tech company Lanetix ﬁred its entire 14-software engineer staff after they ﬁled to unionize with Communications Workers of America (CWA). Later, the company was forced to cough up $775,000 to settle unfair labor practice charges.
Update: This story has been updated with comment from Kickstarter.
Companies around the world are embracing what might seem like a radical idea: a four-day workweek.
The concept is gaining ground in places as varied as New Zealand and Russia, and it’s making inroads among some American companies. Employers are seeing surprising beneﬁts, including higher sales and profits.
The idea of a four-day workweek might sound crazy, especially in America, where the number of hours worked has been climbing and where cellphones and email remind us of our jobs 24/7.
But in some places, the four-day concept is taking off like a viral meme. Many employers aren’t just moving to 10-hour shifts, four days a week, as companies like Shake Shack are doing; they’re going to a 32-hour week — without cutting pay. In exchange, employers are asking their workers to get their jobs done in a compressed amount of time.
Last month, a Washington state senator introduced a bill to reduce the standard workweek to 32 hours. Russian Prime Minister Dmitry Medvedev is backing a parliamentary proposal to shift to a four-day week. Politicians in Britain and Finland are considering something similar.
In the U.S., Shake Shack started testing the idea a year and a half ago. The burger chain shortened managers’ workweeks to four days at some stores and found that recruitment spiked, especially among women.
Shake Shack’s president, Tara Comonte, says the staff loved the perk: “Being able to take their kids to school a day a week, or one day less of having to pay for day care, for example.”
So the company recently expanded its trial to a third of its 164 U.S. stores. Offering that benefit required Shake Shack to find time savings elsewhere, so it switched to computer software to track supplies of ground beef, for example.
“It was a way to increase ﬂexibility,” Comonte says of the shorter week. “Corporate environments have had ﬂexible work policies for a while now. That’s not so easy to do in the restaurant business.”
Hundreds — if not thousands — of other companies are also adopting or testing the four-day week. Last summer, Microsoft’s trial in Japan led to a 40% improvement in productivity, measured as sales per employee.
Much of this is thanks to Andrew Barnes, an archaeologist by training, who never intended to become a global evangelist. “This was not a journey I expected to be on,” he says.
Barnes is CEO of Perpetual Guardian, New Zealand’s largest estate planning company. He spent much of his career believing long hours were better for business. But he was also disturbed by the toll those hours took on employees and their families, particularly when it came to mental health.
So two years ago, he used Perpetual Guardian and its 240 workers as guinea pigs, partnering with academic researchers in Auckland to monitor and track the effects of working only four days a week.
“Core to this is that people are not productive for every hour, every minute of the day that they’re in the ofﬁce,” Barnes says, which means there was lots of distraction and wasted time that could be cut.
Simply slashing the number and duration of meetings saved huge amounts of time. Also, he did away with open-ﬂoor ofﬁce plans and saw workers spending far less time on social media. All this, he says, made it easier to focus more deeply on the work.
Remarkably, workers got more work done while working fewer hours. Sales and profits grew. Employees spent less time commuting, and they were happier.
Barnes says there were other, unexpected beneﬁts: It narrowed workplace gender gaps. Women — who typically took more time off for caregiving — suddenly had greater ﬂexibility built into their schedule. Men also had more time to help with their families, Barnes says.
The company didn’t police how workers spent their time. But if performance slipped, the ﬁrm could revert back to the full-week schedule. Barnes says that alone motivated workers.
The Perpetual Guardian study went viral, and things went haywire for Barnes.
Employers — including big multinationals — started calling, seeking advice. “Frankly, I couldn’t drink enough coffee to deal with the number of companies that approached us,” Barnes says.
Demand was so great that he set up a foundation to promote the four-day workweek. Ironically, in the process, he’s working a lot of overtime.
“You only get one chance to change the world. And, it’s my responsibility at least, on this one, to see if I can inﬂuence the world for the better,” he says.
To date, most of that interest has not come from American employers.
Peter Cappelli, a professor of management at the Wharton School of the University of Pennsylvania, says that’s because the concept runs counter to American notions of work and capitalism. Unions are less powerful, and workers have less political sway than in other countries, he says.
So American companies answer to shareholders, who tend to prioritize proﬁt over worker beneﬁts.
“I just don’t see contemporary U.S. employers saying, ‘You know what, if we create more value here, we’re gonna give it to the employees.’ I just don’t see that happening,” Cappelli says.
Natalie Nagele, co-founder and CEO of Wildbit, has heard from other leaders who say it didn’t work for them. She says it fails when employees aren’t motivated and where managers don’t trust employees.
But Nagele says moving her Philadelphia software company to a four-day week three years ago has been a success.
“We had shipped more features than we had in recent years, we felt more productive, the quality of our work increased. So then we just kept going with it,” Nagele says. Personally, she says, it gives her time to rest her brain, which helps solve complex problems: “You can ask my team, there’s multiple times where somebody is like, ‘On Sunday morning, I woke up and … I ﬁgured it out.’ ”
Mikeal Parlow started working a four-day week about a month ago. It was a perk of his new job as a budget analyst in Westminster, Colo.
He works 10 hours a day, Monday through Thursday. Or, as he puts it, until the job is done. Parlow says he much prefers the new way “because it is about getting your work done, more so than feeding the clock.”
That frees Fridays up for life’s many delightful chores — like visits to the DMV. “For instance, today we’re going to go and get our license plates,” Parlow says.
But that also leaves time on the weekends … for the weekend.
A new email-based extortion scheme apparently is making the rounds, targeting Web site owners serving banner ads through Google’s AdSense program. In this scam, the fraudsters demand bitcoin in exchange for a promise not to ﬂood the publisher’s ads with so much bot and junk trafﬁc that Google’s automated anti-fraud systems suspend the user’s AdSense account for suspicious trafﬁc.
Earlier this month, KrebsOnSecurity heard from a reader who maintains several sites that receive a fair amount of trafﬁc. The message this reader shared began by quoting from an automated email Google’s systems might send if they detect your site is seeking to beneﬁt from automated clicks. The message continues:
“Very soon the warning notice from above will appear at the dashboard of your AdSense account undoubtedly! This will happen due to the fact that we’re about to ﬂood your site with huge amount of direct bot generated web trafﬁc with 100% bounce ratio and thousands of IP’s in rotation — a nightmare for every AdSense publisher. More also we’ll adjust our sophisticated bots to open, in endless cycle with different time duration, every AdSense banner which runs on your site.”
The message goes on to warn that while the targeted site’s ad revenue will be brieﬂy increased, “AdSense trafﬁc assessment algorithms will detect very fast such a web trafﬁc pattern as fraudulent.”
“Next an ad serving limit will be placed on your publisher account and all the revenue will be refunded to advertisers. This means that the main source of proﬁt for your site will be temporarily suspended. It will take some time, usually a month, for the AdSense to lift your ad ban, but if this happens we will have all the resources needed to ﬂood your site again with bad quality web trafﬁc which will lead to second AdSense ban that could be permanent!”
The message demands $5,000 worth of bitcoin to forestall the attack. In this scam, the extortionists are likely betting that some publishers may see paying up as a cheaper alternative to having their main source of advertising revenue evaporate.
The reader who shared this email said while he considered the message likely to be a baseless threat, a review of his recent AdSense trafﬁc statistics showed that detections in his “AdSense invalid trafﬁc report” from the past month had increased substantially.
The reader, who asked not to be identiﬁed in this story, also pointed to articles about a recent AdSense crackdown in which Google announced it was enhancing its defenses by improving the systems that identify potentially invalid trafﬁc or high risk activities before ads are served.
Google deﬁnes invalid trafﬁc as “clicks or impressions generated by publishers clicking their own live ads,” as well as “automated clicking tools or trafﬁc sources.”
“Pretty concerning, thought it seems this group is only saying they’re planning their attack,” the reader wrote.
Google declined to discuss this reader’s account, saying its contracts prevent the company from commenting publicly on a speciﬁc partner’s status or enforcement actions. But in a statement shared with KrebsOnSecurity, the company said the message appears to be a classic threat of sabotage, wherein an actor attempts to trigger an enforcement action against a publisher by sending invalid trafﬁc to their inventory.
“We hear a lot about the potential for sabotage, it’s extremely rare in practice, and we have built some safeguards in place to prevent sabotage from succeeding,” the statement explained. “For example, we have detection mechanisms in place to proactively detect potential sabotage and take it into account in our enforcement systems.”
Google said it has extensive tools and processes to protect against invalid trafﬁc across its products, and that most invalid trafﬁc is ﬁltered from its systems before advertisers and publishers are ever impacted.
“We have a help center on our website with tips for AdSense publishers on sabotage,” the statement continues. “There’s also a form we provide for publishers to contact us if they believe they are the victims of sabotage. We encourage publishers to disengage from any communication or further action with parties that signal that they will drive invalid trafﬁc to their web properties. If there are concerns about invalid trafﬁc, they should communicate that to us, and our Ad Trafﬁc Quality team will monitor and evaluate their accounts as needed.”
The Mandalorian: This Is the Way
Cinematographers Greig Fraser, ASC, ACS and Barry “Baz” Idoine and showrunner Jon Favreau employ new technologies to frame the Disney Plus Star Wars series.
Unit photography by François Duhamel, SMPSP, and Melinda Sue Gordon, SMPSP, courtesy of Lucasﬁlm, Ltd.
At top, the Mandalorian Bounty Hunter (played by Pedro Pascal) rescues the Child — popularly described as “baby Yoda.”
This article is an expanded version of the story that appears in our February 2020 print magazine.
A live-action Star Wars television series was George Lucas’ dream for many years, but the logistics of television production made achieving the necessary scope and scale seem inconceivable. Star Wars fans would expect exotic, picturesque locations, but it simply wasn’t plausible to take a crew to the deserts of Tunisia or the salt ﬂats of Bolivia on a short schedule and limited budget. The creative team behind The Mandalorian has solved that problem.
For decades, green- and bluescreen compositing was the go-to solution for bringing fantastic environments and actors together on the screen. (Industrial Light & Magic did pioneering work with the technology for the original Star Wars movie.) However, when characters are wearing highly reﬂective costumes, as is the case with Mando (Pedro Pascal), the title character of The Mandalorian, the reﬂection of green- and bluescreen in the wardrobe causes costly problems in post-production. In addition, it’s challenging for actors to perform in a “sea of blue,” and for key creatives to have input on shot designs and composition.
In order for The Mandalorian to work, technology had to advance enough that the epic worlds of Star Wars could be rendered on an affordable scale by a team whose actual production footprint would comprise a few soundstages and a small backlot. An additional consideration was that the typical visual-effects workﬂow runs concurrent with production, and then extends for a lengthy post period. Even with all the power of contemporary digital visual-effects techniques and billions of computations per second, the process can take up to 12 hours or more per frame. With thousands of shots and multiple iterations, this becomes a time-consuming endeavor. The Holy Grail of visual effects — and a necessity for The Mandalorian, according to co-cinematographer and co-producer Greig Fraser, ASC, ACS — was the ability to do real-time, in-camera compositing on set.
“That was our goal,” says Fraser, who had previously explored the Star Wars galaxy while shooting Rogue One: A Star Wars Story (AC Feb. ’17). “We wanted to create an environment that was conducive not just to giving a composition line-up to the effects, but to actually capturing them in real time, photo-real and in-camera, so that the actors were in that environment in the right lighting — all at the moment of photography.”
The solution was what might be described as the heir to rear projection — a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling, which not only provided the pixel-accurate representation of exotic background content, but was also rendered with correct camera positional data.
Mando with the Child on his ship.
If the content was created in advance of the shoot, then photographing actors, props and set pieces in front of this wall could create ﬁnal in-camera visual effects — or “near” ﬁnals, with only technical ﬁxes required, and with complete creative conﬁdence in the composition and look of the shots. On The Mandalorian, this space was dubbed “the Volume.” (Technically, a “volume” is any space deﬁned by motion-capture technology.)
This concept was initially proposed by Kim Libreri of Epic Games while he was at Lucasfilm, and it became the basis of the technology, the “Holy Grail” that makes a live-action Star Wars television series possible.
In 2014, as Rogue One was ramping up, the concept of real-time compositing was once again discussed; technology had matured to a new level. Visual-effects supervisor John Knoll had an early discussion with Fraser about the concept, and the cinematographer raised the idea of using a large LED screen as a lighting instrument: playing back rough previsualized effects on the screens to cast interactive, animated light onto the actors and sets during composite photography. The final animated VFX would be added later; the screens were merely to provide interactive lighting that matched the animations.
“One of the big problems of shooting blue- and greenscreen composite photography is the interactive lighting,” offers Fraser. “Often, you’re shooting real photography elements before the backgrounds are created and you’re imagining what the interactive lighting will do — and then you have to hope that what you’ve done on set will match what happens in post much later on. If the director changes the backgrounds in post, then the lighting isn’t going to match and the ﬁnal shot will feel false.”
Director and executive producer Dave Filoni and cinematographers Greig Fraser, ASC, ACS (center) and Barry “Baz” Idoine (operating camera) on the set.
For Rogue One, they built a large cylindrical LED screen and created all of the backgrounds in advance for the space-battle landings on Scarif, Jedha and Eadu; all the cockpit sequences in X-Wing and U-Wing spacecraft were shot in front of that LED wall, which served as the primary source of illumination on the characters and sets. Those LED panels had a pixel pitch of 9mm (the distance between the centers of the RGB pixel clusters on the screen). Unfortunately, at that pitch the screen could rarely be placed far enough from the camera to avoid moiré and make the image appear photo-real, so it was used purely for lighting purposes. However, because the replacement backgrounds were already built and utilized on set, the comps were extremely successful and perfectly matched the dynamic lighting.
A ﬁsheye view looking through the gap between the two back walls of the show’s LED-wall system, known as “the Volume.” The dark spot on the Volume ceiling is due to a different model of LED screens used there. The ceiling is mostly used for lighting purposes, and if seen on camera is replaced in post.
“I went to see Jon and ask him if we would like to do something for Disney’s new streaming service,” says Lucasfilm president Kathleen Kennedy. “I’ve known that Jon has wanted to do a Star Wars project for a long time, so we started talking right away about what he could do that would push technology, and that led to a whole conversation around what could change the production path; what could actually create a way in which we could make things differently?”
Favreau had just completed The Jungle Book and was embarking on The Lion King for Disney — both visual-effects heavy ﬁlms.
Visual-effects supervisor Richard Bluff and executive creative director and head of ILM Rob Bredow showed Favreau a number of tests that ILM had conducted, including the LED-wall technology from Rogue One. Fraser suggested that, with the advancements in LED technology since Rogue One, the project could leverage new panels and push the envelope on real-time, in-camera visual effects. Favreau loved the concept and decided that was the production path to take.
In the background, appearing to ﬂoat in space, are the motion-tracking cameras peeking between the Volume’s wall and ceiling.
The production was looking to minimize the amount of green- and bluescreen photography and requirements of post compositing to improve the quality of the environment for the actors. The LED screen provides a convincing facsimile of a real set/location and avoids the green void that can be challenging for performers.
“I was very encouraged by my experiences using similar technology on Jungle Book [AC, May ’16], and using virtual cameras on The Lion King [AC, Aug. ’19],” explains Favreau, series creator and executive producer. “I had also experimented with a partial video wall for the pilot episode of The Orville. With the team we had assembled between our crew, ILM, Magnopus, Epic Games, Proﬁle Studios and Lux Machina, I felt that we had a very good chance at a positive outcome.”
“The Volume is a difﬁcult technology to understand until you stand there in front of the ‘projection’ on the LED screen, put an actor in front of it, and move the camera around,” Fraser says. “It’s hard to grasp. It’s not really rear projection; it’s not a TransLite because [it is a real-time, interactive image with 3D objects] and has the proper parallax; and it’s photo-real, not animated, but it is generated through a gaming engine.”
Idoine (left) shooting on the Volume’s display of the ice-planet Maldo Kreis — one of many of the production’s environment “loads” — with director Filoni watching and Karina Silva operating B camera. The ﬁxtures with white, half-dome, ping-pong-style balls on each camera are the “Sputniks” — infrared-marker conﬁgurations that are seen by the motion-tracking cameras to record the production camera’s position in 3D space, and to render proper 3D parallax on the Volume wall.
“The technology that we were able to innovate on The Mandalorian would not have been possible had we not developed technologies around the challenges of Jungle Book and Lion King,” offers Favreau. “We had used game-engine and motion-capture [technology] and real-time set extension that had to be rendered after the fact, so real-time render was a natural extension of this approach.”
Barry “Baz” Idoine, who worked with Fraser for several years as a camera operator and second-unit cinematographer on features including Rogue One and Vice (AC Jan. ’19), assumed cinematography duties on The Mandalorian when Fraser stepped away to shoot Denis Villeneuve’s Dune. Idoine observes, “The strong initial value is that you’re not shooting in a green-screen world and trying to emulate the light that will be comped in later — you’re actually shooting ﬁnished product shots. It gives the control of cinematography back to the cinematographer.”
The Volume was a curved, 20′-high-by-180′-circumference LED video wall, comprising 1,326 individual LED screens of a 2.84mm pixel pitch that created a 270-degree semicircular background with a 75′-diameter performance space topped with an LED video ceiling, which was set directly onto the main curve of the LED wall.
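Those figures are self-consistent; as a quick check (our own arithmetic, not the article’s), a 180-foot span covering 270 degrees of a circle implies a diameter of roughly

\[
d = \frac{180\ \text{ft}}{\pi \times \tfrac{270}{360}} \approx 76\ \text{ft},
\]

which lines up with the stated 75-foot performance space.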
At the rear of the Volume, in the 90 remaining degrees of open area, essentially “behind camera,” were two 18′-high-by-20′-wide ﬂat panels of 132 more LED screens. These two panels were rigged to traveler track and chain motors in the stage’s perms, so the walls could be moved into place or ﬂown out of the way to allow better access to the Volume area.
“The Volume allows us to bring many different environments under one roof,” says visual-effects supervisor Richard Bluff of ILM. “We could be shooting on the lava ﬂats of Nevarro in the morning and in the deserts of Tatooine in the afternoon. Of course, there are practical considerations to switching over environments, but we [typically did] two environments in one day.”
The crew surrounds the Mandalorian’s spacecraft Razor Crest. Only the fuselage and cockpit are practical set pieces. From this still-camera position, the composition appears “broken,” but from the production camera’s perspective, the engines appear in perfect relationship to the fuselage, and track in parallax with the camera’s movement.
“A majority of the shots were done completely in camera,” Favreau adds. “And in cases where we didn’t get to ﬁnal pixel, the postproduction process was shortened significantly because we had already made creative choices based on what we had seen in front of us. Postproduction was mostly reﬁning creative choices that we were not able to ﬁnalize on the set in a way that we deemed photo-real.”
With traditional rear projection (and front projection), in order for the result to look believable, the camera must either remain stationary or move along a preprogrammed path to match the perspective of the projected image. In either case, the camera’s center of perspective (the entrance pupil of the lens, sometimes referred to — though incorrectly — as the nodal point) must be precisely aligned with the projection system to achieve proper perspective and the effects of parallax. The Mandalorian is hardly the ﬁrst production to incorporate an image-projection system for in-camera compositing, but what sets its technique apart is its ability to facilitate a moving camera.
In the pilot episode, the Mandalorian (Pedro Pascal) brings his prey (Horatio Sanz) into custody.
Indeed, using a stationary camera or one locked into a pre-set move for all of the work in the Volume was simply not acceptable for the needs of this particular production. The team therefore had to ﬁnd a way to track the camera’s position and movement in real-world space, and extrapolate proper perspective and parallax on the screen as the camera moved. This required incorporating motion-capture technology and a videogame engine — Epic Games’ Unreal Engine — that would generate proper 3D parallax perspective in real time.
The locations depicted on the LED wall were initially modeled in rough form by visual-effects artists creating 3D models in Maya, to the specs determined by production designer Andrew Jones and visual consultant Doug Chiang. Then, wherever possible, a photogrammetry team would head to an actual location and create a 3D photographic scan.
“We realized pretty early on that the best way to get photo-real content on the screen was to photograph something,” attests Bluff.
As amazing and advanced as the Unreal Engine’s capabilities were, rendering fully virtual polygons on-the-ﬂy didn’t produce the photo-real result that the ﬁlmmakers demanded. In short, 3-D computer-rendered sets and environments were not photo-realistic enough to be utilized as in-camera ﬁnal images. The best technique was to create the sets virtually, but then incorporate photographs of real-world objects, textures and locations and map those images onto the 3-D virtual objects. This technique is commonly known as tiling or photogrammetry. This is not necessarily a unique or new technique, but the incorporation of photogrammetry elements achieved the goal of creating in-camera ﬁnals.
The Mandalorian makes repairs with a rich landscape displayed behind him.
Additionally, photographic “scanning” of a location, which incorporates taking thousands of photographs from many different viewpoints to generate a 3-D photographic model, is a key component in creating the virtual environments.
Enrico Damm became the Environment Supervisor for the production and led the scanning and photogrammetry team that would travel to locations such as Iceland and Utah to shoot elements for the Star Wars planets.
The perfect weather condition for these photographic captures is a heavily overcast day, since there are little to no shadows on the landscape. A capture made in harsh sunlight with hard shadows cannot easily be re-lit in the virtual world. In those cases, software such as Agisoft De-Lighter was used to analyze the photographs and remove shadows, leaving a more neutral canvas for virtual lighting.
Scanning is a faster, looser process than photogrammetry and it is done from multiple positions and viewpoints. For scanning, the more parallax introduced, the better the software can resolve the 3-D geometry. Damm created a custom rig where the scanner straps six cameras to their body which all ﬁre simultaneously as the scanner moves about the location. This allows them to gather six times the images in the same amount of time — about 1,800 on average.
Photogrammetry is used to create virtual backdrops, and the images must be shot on a nodal rig to eliminate parallax between the photos. For The Mandalorian, about 30-40 percent of the Volume’s backdrops were created from such photogrammetry images.
Each phase of photography — photogrammetry and scanning — needs to be done at various times during the day to capture different looks to the landscape.
Lidar scanning systems are sometimes also employed.
The cameras used for scanning were Canon EOS 5D MKIV and EOS 5DS with prime lenses. Zooms are sometimes incorporated as modern stitching software has gotten better about solving multiple images from different focal lengths.
The Mandalorian (aka “Mando,” played by Pedro Pascal) treks through the desert alone.
This information was mapped onto 3D virtual sets and then modiﬁed or embellished as necessary to adhere to the Star Wars design aesthetic. If there wasn’t a real-world location to photograph, the environments were created entirely by ILM’s “environments” visual-effects team. The elements of the locations were loaded into the Unreal Engine video game platform, which provided a live, real-time, 3D environment that could react to the camera’s position.
The third shot of Season 1’s ﬁrst episode demonstrates this technology with extreme effectiveness. The shot starts with a low angle of Mando reading a sensor on the icy planet of Maldo Kreis; he stands on a long walkway that stretches out to a series of structures on the horizon. The skies are full of dark clouds, and a light snow swirls around. Mando walks along the trail toward the structures, and the camera booms up.
All of this was captured in the Volume, in-camera and in real time. Part of the walkway was a real, practical set, but the rest of the world was the virtual image on the LED screen, and the parallax as the camera boomed up matched perfectly with the real set. The effect of this system is seamless.
Because of the enormous amount of processing power needed to create this kind of imagery, the full 180′ screen and ceiling could not be rendered high-resolution, photo-real in real time. The compromise was to enter the speciﬁc lens used on the camera into the system, so that it rendered a photo-real, high-resolution image based on the camera’s speciﬁc ﬁeld of view at that given moment, while the rest of the screen displayed a lower-resolution image that was still effective for interactive lighting and reﬂections on the talent, props and physical sets. (The simpler polygon count facilitated faster rendering times.)
Idoine (far left) discusses a shot of “the Child” (aka “Baby Yoda”) with director Rick Famuyiwa (third from left) and series creator/executive producer Jon Favreau (third from right), while assistant director Kim Richards (second from right, standing) and crewmembers listen. Practical set design was often used in front of the LED screen, and was designed to visually bridge the gap between the real and virtual space. The practical sets were frequently placed on risers to lift the ﬂoor and better hide the seam of the LED wall and stage ﬂoor.
Each Volume load was put into the Unreal Engine video game platform, which provided the live, real-time, 3D environment that reacted to the production camera’s position — which was tracked by Proﬁle Studios’ motion-capture system via infrared (IR) cameras surrounding the top of the LED walls that monitored the IR markers mounted to the production camera. When the system recognized the X, Y, Z position of the camera, it then rendered proper 3D parallax for the camera’s position in real time. That was fed from Proﬁle into ILM’s proprietary StageCraft software, which managed and recorded the information and full production workﬂow as it, in turn, fed the images into the Unreal Engine. The images were then output to the screens with the assistance of the Lux Machina team.
It took 11 interlinked computers to serve the images to the wall: three processors were dedicated to real-time rendering, and four servers provided three 4K images seamlessly side-by-side on the wall plus one 4K image on the ceiling, for an image 12,288 pixels wide by 2,160 high on the wall and 4,096 x 2,160 on the ceiling. Even so, as noted above, the full 270 degrees (plus the movable back LED walls) and ceiling could not be rendered high-resolution photo-real in real time; only the camera’s tracked field of view was rendered at full quality, with the rest of the wall held at a lower resolution that still served interactive lighting and reflections.
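As a rough illustration of that data flow (a minimal sketch in Python with hypothetical names and numbers, not ILM’s StageCraft code), the per-frame loop amounts to: read the tracked camera pose, size an oversized window around the lens’s field of view, and render that window at high quality while the rest of the wall stays at a lower level of detail.

from dataclasses import dataclass

@dataclass
class CameraPose:
    # Position (meters) and orientation (degrees) as reported by a motion-capture system.
    x: float
    y: float
    z: float
    pan: float
    tilt: float

def frustum(pose: CameraPose, lens_fov_deg: float, margin: float = 0.4):
    # Oversize the high-quality render window (about 40 percent here) so system
    # latency doesn't expose the seam between the hi-res and lo-res regions.
    half = lens_fov_deg * (1.0 + margin) / 2.0
    return (pose.pan - half, pose.pan + half)

def render_frame(pose: CameraPose, lens_fov_deg: float) -> dict:
    # One wall update: hi-res imagery inside the frustum, lo-res everywhere else
    # (still useful for interactive lighting and reflections on the set).
    lo, hi = frustum(pose, lens_fov_deg)
    return {
        "hi_res_window_deg": (lo, hi),
        "lo_res_background": "remaining 270-degree wall and ceiling",
    }

if __name__ == "__main__":
    pose = CameraPose(x=0.0, y=1.8, z=5.0, pan=30.0, tilt=2.0)  # hypothetical tracked pose
    print(render_frame(pose, lens_fov_deg=40.0))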
Mando stands in a canyon on the planet Arvala. The rocks behind him are on the LED wall, while some practical rocks are placed in the mid- and foreground to blend the transition. The ﬂoor of the stage is covered in mud and rocks for this location. On the jib is an Arri Alexa LF with a Panavision Ultra Vista anamorphic lens.
Due to the 10-12 frames (roughly half a second) of latency from the time Proﬁle’s system received camera-position information to Unreal’s rendering of the new position on the LED wall, if the camera moved ahead of the rendered frustum (a term deﬁning the virtual ﬁeld of view of the camera) on the screen, the transition line between the high-quality perspective render window and the lower-quality main render would be visible. To avoid this, the frustum was projected an average of 40-percent larger than the actual ﬁeld of view of the camera/lens combination, to allow some safety margin for camera moves. In some cases, if the lens’ ﬁeld of view — and therefore the frustum — was too wide, the system could not render an image high-res enough in real time; the production would then use the image on the LED screen simply as lighting, and composite the image in post [with a greenscreen added behind the actors]. In those instances, the backgrounds were already created, and the match was seamless because those actual backgrounds had been used at the time of photography [to light the scene].
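To see why an oversize of roughly 40 percent is a sensible safety margin, consider a back-of-the-envelope estimate (our illustration; the pan rate and field of view are assumed, not production figures). At about 24 fps, 10 to 12 frames is roughly half a second of latency, so a pan of 15 degrees per second on a lens with a 40-degree horizontal field of view shifts the view by about 7.5 degrees before the wall catches up:

\[
\Delta\theta \approx \omega \, t_{\text{latency}} = 15^{\circ}/\text{s} \times 0.5\ \text{s} = 7.5^{\circ},
\qquad
\frac{40^{\circ} + 2 \times 7.5^{\circ}}{40^{\circ}} \approx 1.4,
\]

so rendering the frustum about 40 percent wider than the lens’s actual field of view keeps the high-quality window ahead of a moderate camera move.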
Fortunately, says Fraser, Favreau wanted The Mandalorian to have a visual aesthetic that would match that of the original Star Wars. This meant a more “grounded” camera, with slow pans and tilts, and non-aggressive camera moves — an aesthetic that helped to hide the system latency. “In addition to using some of the original camera language in Star Wars, Jon is deeply inspired by old Westerns and samurai ﬁlms, so he also wanted to borrow a bit from those, especially Westerns,” Fraser notes. “The Mandalorian is, in essence, a gunslinger, and he’s very methodical. This gave us a set of parameters that helped deﬁne the look of the show. At no point will you see an 8mm ﬁsheye lens in someone’s face. That just doesn’t work within this language.
“It was also of paramount importance to me that the result of this technology not just be ‘suitable for TV,’ but match that of major, high-end motion pictures,” Fraser continues. “We had to push the bar to the point where no one would really know we were using new technology; they would just accept it as is. Amazingly, we were able to do just that.”
Steadicam operator Simon Jayes tracks Mando, Mayfeld (Bill Burr) and Ran Malk (Mark Boone Jr.) in front of the LED wall. While the 10- to 12-frame latency of rendering the high-resolution “frustum” on the wall can be problematic, Steadicam was employed liberally in Episode 6 to great success.
Shot on Arri’s Alexa LF, The Mandalorian was the maiden voyage for Panavision’s full-frame Ultra Vista 1.65x anamorphic lenses. The 1.65x anamorphic squeeze allowed for full utilization of the 1.44:1 aspect ratio of the LF to create a 2.37:1 native aspect ratio, which was only slightly cropped to 2.39:1 for exhibition.
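The aspect-ratio arithmetic behind that pairing, restating the figures above:

\[
1.44 \times 1.65 = 2.376 \approx 2.37{:}1,
\]

which is then cropped only slightly to reach the 2.39:1 delivery format.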
“We chose the LF for a couple reasons,” explains Fraser. “Star Wars has a long history of anamorphic photography, and that aspect ratio is really key. We tested spherical lenses and cropping to 2.40, but it didn’t feel right. It felt very contemporary, not like the Star Wars we grew up with. Additionally, the LF’s larger sensor changes the focal length of the lens that we use for any given shot to a longer lens and reduces the overall depth of ﬁeld. The T2.3 of the Ultra Vistas is more like a T0.8 in Super 35, so with less depth of ﬁeld, it was easier to put the LED screen out of focus faster, which avoided a lot of issues with moiré. It allows the inherent problems in a 2D screen displaying 3D images to fall off in focus a lot faster, so the eye can’t tell that those buildings that appear to be 1,000 feet away are actually being projected on a 2D screen only 20 feet from the actor.
Fraser operates an Alexa LF, shooting a close-up of the Ugnaught Kuiil (Misty Rosas in the suit, voiced by Nick Nolte). The transition between the bottom of the LED wall and the stage ﬂoor is clearly seen here. That area was often obscured by physical production design or replaced in post.
“The Ultra Vistas were a great choice for us because they have a good amount of character and softness,” Fraser continues. “Photographing the chrome helmet on Mando is a challenge — its super-sharp edges can quickly look video-like if the lens is too sharp. Having a softer acutance in the lens, which [Panavision senior vice president of optical engineering and ASC associate] Dan Sasaki [modiﬁed] for us, really helped. The lens we used for Mando tended to be a little too soft for human faces, so we usually shot Mando wide open, compensating for that with ND ﬁlters, and shot people 2⁄3 stop or 1 stop closed.”
According to Idoine, the production used 50mm, 65mm, 75mm, 100mm, 135mm, 150mm and 180mm Ultra Vistas that range from T2 to T2.8, and he and Fraser tended to expose at T2.5-T3.5. “Dan Sasaki gave us two prototype Ultra Vistas to test in June 2018,” he says, “and from that we worked out what focal-length range to build.”
Director Bryce Dallas Howard confers with actress Gina Carano — as mercenary Cara Dune — while shooting the episode “Chapter 4: Sanctuary.”
“Our desire for cinematic imagery drove every choice,” Idoine adds. And that included the incorporation of a LUT emulating Kodak’s short-lived 500T 5230 color negative, a favorite of Fraser’s. “I used that stock on Killing Them Softly [AC Oct. ’12] and Foxcatcher [AC Dec. ’14], and I just loved its creamy shadows and the slight magenta cast in the highlights,” says Fraser. “For Rogue One, ILM was able to develop a LUT that emulated it, and I’ve been using that LUT ever since.”
“Foxcatcher was the last ﬁlm I shot on the stock, and then Kodak discontinued it,” continues Fraser. “At the time, we had some stock left over and I asked the production if we could donate it to an Australian ﬁlm student and they said ‘yes,’ so we sent several boxes to Australia. When I was prepping Rogue One, I decided that was the look I wanted — this 5230 stock — but it was gone. On a long shot, I wrote an email to the ﬁlm student to see if he had any stock left and, unbelievably, he had 50 feet in the bottom of his fridge. I had him send that directly to ILM and they created a LUT from it that I used on Rogue and now Mandalorian.”
Actor Giancarlo Esposito as Moff Gideon, an Imperial searching for the Child.
A significant key to the Volume’s success in creating in-camera final VFX was color-matching the wall’s LED output to the color science of the Arri Alexa LF camera. ILM’s Matthias Scharfenberg, J. Schulte and their team did thorough testing of the ROE Black Pearl LED panels’ capabilities and matched them to the color sensitivity and reproduction of the LF to make the two seamless partners. LEDs are narrow-band emitters: their red, green and blue diodes each output a very narrow spectrum of color, which makes some colors very difficult to reach, and making them compatible with the color filter array on the ALEV-III sensor was a bit of a challenge. Using a carefully designed series of color patches, a calibration sequence was run on the LED wall to sync it with the camera’s sensitivity. This means any other model of camera shooting on the Volume will not receive proper color, but the Alexa LF will; while the color reproduction of the LEDs may not have looked right to the eye, through the camera it appeared seamless. It also means that off-the-shelf LED panels won’t quite work with the accuracy necessary for a high-end production, but with custom tweaking they were successful. There were limitations, however: with low-light backgrounds, the screens would block up and alias in the shadows, making them unsuitable for in-camera finals — although with further development of the color science this has been solved for season two.
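As a generic illustration of how patch-based calibration of this sort can work (a minimal sketch with made-up numbers, not ILM’s actual color pipeline), one common approach is to display known test patches on the wall, record how the camera sees them, fit a 3x3 correction matrix by least squares, and pre-correct content with its inverse.

import numpy as np

# Hypothetical data: each row is one test patch in linear RGB (0-1 range).
patches_sent = np.array([        # values sent to the LED wall
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.2, 0.6, 0.9],
])
patches_measured = np.array([    # values the camera actually recorded (made-up numbers)
    [0.92, 0.05, 0.02],
    [0.08, 0.88, 0.06],
    [0.03, 0.07, 0.95],
    [1.00, 0.98, 1.01],
    [0.49, 0.47, 0.05],
    [0.21, 0.55, 0.88],
])

# Least-squares fit of a 3x3 matrix M so that patches_sent @ M ~= patches_measured.
M, *_ = np.linalg.lstsq(patches_sent, patches_measured, rcond=None)

# Pre-correcting content with the inverse of M makes the camera see the intended values.
correction = np.linalg.inv(M)

def precorrect(rgb):
    """Apply the correction to a linear-RGB value before sending it to the wall."""
    return np.clip(np.asarray(rgb) @ correction, 0.0, 1.0)

print(precorrect([0.5, 0.5, 0.5]))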
A significant asset of the LED Volume wall and the images projected on it is the interactive lighting it provides on the actors, sets and props within the Volume. The light projected from the imagery on the LED wall gives a realistic sense that the actor (or set, or prop) is within that environment in a way that is rarely achievable with green- or bluescreen composite photography. If the sun is low on the horizon on the LED wall, the sun’s position on the wall will be significantly brighter than the surrounding sky, and that brighter spot will create a highlight on the actors and objects in the Volume just as a real sun would from that position. Reflections of the environment from the walls and ceiling show up in Mando’s costume as if he were actually in that real-world location.
“When you’re dealing with a reﬂective subject like Mando, the world outside the camera frame is often more important than the world you see in the camera’s ﬁeld of view,” Fraser says. “What’s behind the camera is reﬂected in the actor’s helmet and costume, and that’s crucial to selling the illusion that he’s in that environment. Even if we were only shooting in one direction on a particular location, the virtual art-department would have to build a 360-degree set so we could get the interactive lighting and reﬂections right. This was also true for practical sets that were built onstage and on the backlot — we had to build the areas that we would never see on camera because they would be reﬂected in the suit. In the Volume, it’s this world outside the camera that deﬁnes the lighting.
“When you think about it, unless it’s a practical light in shot, all of our lighting is outside the frame — that’s how we make movies,” Fraser continues. “But when most of your lighting comes from the environment, you have to shape that environment carefully. We sometimes have to add a practical or a window into the design, which provides our key light even though we never see that [element] on camera.”
The ﬁght with the mudhorn likely negated any worry about helmet reﬂections for this scene.
The interactive lighting of the Volume also significantly reduces the requirement for traditional ﬁlm production lighting equipment and crew. The light emitted from the LED screens becomes the primary lighting on the actors, sets and props within the Volume. Since this light comes from a virtual image of the set or location, the organic nature of the quality of the light on the elements within the Volume ﬁrmly ground those elements into the reality presented.
There were, of course, limitations. Although LEDs are bright and capable of emitting a good deal of light, they cannot re-create the intensity and quality of direct, natural daylight. “The sun on the LED screen looks perfect because it’s been photographed, but it doesn’t look good on the subjects — they look like they’re in a studio,” Fraser attests. “It’s workable for close-ups, but not really for wide shots. For moments with real, direct sunlight, we headed out to the backlot as much as possible.” That “backlot” was an open ﬁeld near the Manhattan Beach Studios stages, where the art department built various sets. (Several stages were used for creating traditional sets as well.)
Overcast skies, however, proved a great source in the Volume. The skies for each “load” — the term given for each new environment loaded onto the LED walls — were based on real, photographed skies. While shooting a location, the photogrammetry team shot multiple stills at different times of day to create “sky domes.” This enabled the director and cinematographer to choose the sun position and sky quality for each set. “We can create a perfect environment where you have two minutes to sunset frozen in time for an entire 10-hour day,” Idoine notes. “If we need to do a turnaround, we merely rotate the sky and background, and we’re ready to shoot!”
Idoine (seated at camera) in discussion with Favreau and Filoni on a practical set.
During prep, Fraser and Idoine spent a lot of time in the virtual art department, whose crew created the virtual backgrounds for the LED loads. They spent many hours going through each load to set sky-dome choices and pick the perfect time of day and sun position for each moment. They could select the sky condition they wanted, adjust the scale and the orientation, and ﬁnesse all of these attributes to ﬁnd the best lighting for the scene. Basic, real-time ray tracing helped them see the effects of their choices on the virtual actors in the previs scene. These choices would then be saved and sent off to ILM, whose artists would use these rougher assets for reference and build the high-resolution digital assets.
The virtual art department starts its job by creating 3D virtual sets of each location to production designer Andrew Jones’ specifications; the director and cinematographer can then go into the virtual location with VR headsets and do a virtual scout. Digital actors, props and sets are added and can be moved about, and coverage is chosen during the virtual scout. The cinematographer then follows the process as the virtual set is further textured with photogrammetry elements and the sky domes are added.
The virtual world on the LED screen is fantastic for many uses, but an actor obviously cannot walk through the screen, so an open doorway doesn’t work when it’s virtual. Doors are one aspect of production design that must be physical: if a character walks through a door, that door has to be real.
Favreau gets his western-style saloon entrance from the ﬁrst episode of The Mandalorian.
If an actor is close to a set piece, it is usually preferable for that piece to be physical rather than virtual. If they’re close to a wall, it should be a physical wall, so that they are actually near something real.
Many objects that are physical are also virtual. Even if a prop or set piece is physically constructed, it is scanned and incorporated into the virtual world so that it becomes not only a practical asset, but a digital one as well. Once it’s in the virtual world, it can be turned on or off on a particular set or duplicated.
“We take objects that the art department have created and we employ photogrammetry on each item to get them into the game engine,” explains virtual-production supervisor Clint Spillers. “We also keep the thing that we scanned and we put it in front of the screen, and we’ve had remarkable success getting the foreground asset and the digital object to live together very comfortably.”
Another production-design challenge is the requirement that every set be executed in full 360 degrees. In traditional filmmaking, a production designer may be tempted to shortcut a design knowing the camera will only see a small portion of a particular set; in this world, the set that is off camera is just as important as the set that is seen on camera.
“This was a big revelation for us early on,” attests production designer Andrew Jones. “We were, initially, thinking of this technology as a backdrop — like an advanced translight or painted backdrop — that we would shoot against and hope to get in-camera ﬁnal effects. We imagined that we would design our sets as you would on a normal ﬁlm: IE, the camera sees over here, so this is what we need to build. In early conversations with DP Greig Fraser he explained that the off-camera portion of the set — that might never be seen on camera — was just as vital to the effect. The whole Volume is a light box and what is behind the camera is reﬂected on the actor’s faces, costumes, props. What’s behind the camera is actually the key lighting on the talent.
“This concept radically changed how we approach the sets,” Jones continues. “Anything you put in The Volume is lit by the environment, so we have to make sure that we conceptualize and construct the virtual set in its entirety of every location in full 360. Since the actor is, in essence, a chrome ball, he’s reﬂecting what is all around him so every detail needs to be realized.”
They sometimes used photogrammetry as the basis, but always relied upon the same visual-effects artists who create environments for the Star Wars ﬁlms to realize these real-time worlds — “baking in” lighting choices established earlier in the pipeline with high-end, ray-traced rendering.
“I chose the sky domes that worked best for all the shots we needed for each sequence on the Volume,” Fraser notes. “After they were chosen and ILM had done their work, I couldn’t raise or lower the sun because the lighting and shadows would be baked in, but I could turn the whole world to adjust where the hot spot was.”
Fraser noted a limitation on the adjustments that can be made to the sky domes once ILM has finalized them and they’re live on the Volume. The world can be rotated and the center position changed, and the intensity and color can be adjusted, but the position of the sun in the sky dome can’t be altered, because ILM has already ray-traced the scene and “baked in” the shadows the terrain casts for that sun position. This is done to minimize the computation needed for advanced ray tracing in real time. If the sun position were changed, those baked-in shadows wouldn’t change; only the elements reserved for real-time rendering and simple ray tracing would be affected, and the backgrounds would look false because the lighting direction wouldn’t match the baked-in shadows.
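To make that constraint concrete, here is a minimal, hypothetical sketch; the class and field names are assumptions for illustration, not anything from the actual pipeline. Only rotation, centering, intensity and color stay live once the shadows are baked:

```python
# Minimal, hypothetical sketch of a sky-dome "load" and which of its attributes
# remain adjustable on the Volume once the ray-traced shadows have been baked.
# Names and fields are illustrative assumptions, not the production pipeline.

from dataclasses import dataclass


@dataclass
class SkyDomeLoad:
    sun_azimuth_deg: float     # baked: terrain shadows were ray traced for this sun
    world_rotation_deg: float  # adjustable: the whole world can be rotated
    center_offset_m: tuple     # adjustable: the center position can be shifted
    intensity: float           # adjustable: overall brightness of the load
    color_temp_k: float        # adjustable: overall color of the load

    def rotate_world(self, degrees: float) -> None:
        # Rotating the entire world is safe: the baked shadows turn with the
        # terrain, so lighting direction and shadows stay consistent.
        self.world_rotation_deg = (self.world_rotation_deg + degrees) % 360.0

    def move_sun(self, new_azimuth_deg: float) -> None:
        # Moving the sun after baking would leave the pre-rendered shadows
        # pointing the wrong way, so the constraint described above forbids it.
        raise ValueError("Sun position is baked in; the load must be re-rendered.")
```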
From time to time, traditional lighting ﬁxtures were added to augment the output of the Volume.
In the fourth episode, the Mandalorian, looking to lay low, travels to the remote farming planet of Sorgan and visits the common house, a thatched, basket-weave structure. The actual common house was a miniature built by the art department and then photographed for inclusion in the virtual world. The miniature was lit with a single, hard light source that emulated natural daylight breaking through the thatched walls. “You could clearly see that one side of the common house was in hard light and the other side was in shadow,” recalls Idoine. “There were hot spots in the model that really looked great, so we incorporated LED ‘movers’ with slash gobos and Charlie Bars [long flags] to break up the light in a similar basket-weave pattern. Because of this very open basket-weave construction and the fact that the load had a lot of shafts of light, I added random slashes of hard light into the practical set, and it mixed really well.”
The Volume could incorporate virtual lighting, too, via the “Brain Bar,” a NASA Mission Control-like section of the soundstage where as many as a dozen artists from ILM, Unreal and Proﬁle sat at workstations and made the technology of the Volume function. Their work was able to incorporate on-the-ﬂy color-correction adjustments and virtual-lighting tools, among other tweaks.
Matt Madden, president of Proﬁle and a member of the Brain Bar team, worked closely with Fraser, Idoine and gaffer Jeff Webster to incorporate virtual-lighting tools via an iPad that communicated back to the Bar. He could create shapes of light on the wall of any size, color and intensity. If the cinematographer wanted a large, soft source off-camera, Madden was able to create a “light card” of white just outside the frustum. The entire wall outside the camera’s angle of view could be a large light source of any intensity or color that the LEDs could reproduce.
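As a rough illustration of the idea (assuming nothing about the real Brain Bar software), a light card can be thought of as a small bundle of parameters describing an emissive rectangle that is kept just outside the camera’s frustum:

```python
# Illustrative only: a virtual "light card" reduced to the parameters the article
# describes (size, position on the wall, color, intensity). The class and function
# names are assumptions for explanation, not the actual Brain Bar tools.

from dataclasses import dataclass


@dataclass
class LightCard:
    width_m: float         # size of the emissive rectangle on the LED wall
    height_m: float
    azimuth_deg: float     # where around the curved wall the card sits
    elevation_deg: float
    color_rgb: tuple       # any color the LEDs can reproduce
    intensity_nits: float  # up to the panels' peak output


def keep_outside_frustum(card: LightCard, frustum_az: tuple) -> LightCard:
    """Nudge the card just past the camera's field of view, so it lights the
    actors as an off-camera soft source without appearing in the frame."""
    lo, hi = frustum_az
    if lo <= card.azimuth_deg <= hi:
        card.azimuth_deg = hi + 5.0  # arbitrary 5-degree margin for illustration
    return card
```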
In this case, the LED wall was made up of Roe Black Pearl BP2 panels with a maximum brightness of 1,800 nits. Since 10.674 nits equal 1 foot-candle, at peak brightness the wall could produce an intensity of about 168 foot-candles, the equivalent of f/8 3/4 at ISO 800 (24 fps, 180-degree shutter). While the Volume was never shot at peak full white, any lighting “cards” that were added were capable of outputting this brightness.
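The brightness arithmetic is easy to sanity-check; the few lines below only restate the article’s own figures:

```python
# Quick check of the brightness arithmetic quoted above. The conversion factor
# (10.674 nits per foot-candle) and the 1,800-nit panel spec come from the
# article; everything else here is just the division.

NITS_PER_FOOTCANDLE = 10.674
peak_nits = 1800

peak_footcandles = peak_nits / NITS_PER_FOOTCANDLE
print(f"{peak_footcandles:.1f} fc")  # 168.6 fc, i.e. the "about 168 foot-candles" figure
```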
Idoine discovered that a great additional source for Mando was a long, narrow band of white near the top of the LED wall. “This wraparound source created a great backlight look on Mando’s helmet,” Idoine says. Alternatively, he and Fraser could request a tall, narrow band of light on the wall that would reﬂect on Mando’s full suit, similar to the way a commercial photographer might light a wine bottle or a car — using specular reﬂections to deﬁne shape.
Additionally, virtual black flags — meaning areas where the LED wall was set to black — could be added wherever needed, and at whatever size. The transparency of the black could also be adjusted to any percentage to create virtual nets.
I am an amateur photographer, and I’ve sold cameras non-professionally on Amazon for over eight years as I’ve upgraded. That trend comes to an end with my most recent transaction. In December, I sold a mint-in-box Sony a7R 4, and the buyer used a combination of social engineering and ambiguity to not only end up with the camera, but also the money he paid me.
Amazon’s A-to-Z Guarantee did not protect me as a seller. Based on my experience with this transaction, I cannot in good faith recommend selling cameras on Amazon anymore.
Author’s Note: This is a summary and my personal opinion, and not that of my employer or anyone else.
I ordered a second Sony a7R 4 as a backup for a photoshoot. My plan was to resell it afterward, since the seller fees were slightly less than the rental fees at the time. I listed it on Amazon, and it was almost instantly purchased by a buyer from Florida. I took photos of the camera as I prepared it for shipment, and shipped it via insured two-day FedEx with delivery confirmation. The package reached the buyer on December 17th.
The buyer listed an initial for his last name—that should have been a red ﬂag. It gave him a layer of anonymity that will be relevant later.
On December 24th, I apparently ruined Christmas, as the buyer now claims that accessories were missing. Throughout this whole ordeal, I’ve never heard directly from the buyer, in spite of numerous email communications. He never told me which “product purchased/packaging” was missing.
I started a claim with Amazon, showing the photographic evidence of a mint-in-box a7R 4 with all accessories. I denied the return. The buyer offered no photographic proof or other evidence that he received anything but a mint camera.
To this day, I have no idea what he claimed was “missing” from the package. I even included all the original plastic wrap!
After about a week of back-and-forth emails, Amazon initially agreed with me.
Somehow, a second support ticket got opened for the same item, so the issue was not yet resolved; the buyer kept pressing his claim. The next day, I got an email about a “refund request initiated.” On this second ticket, Amazon turned against me.
Now, we’re in 2020. The buyer apparently shipped the camera back to me; however, he entered the wrong address (leaving off my last name, among other things). The package was returned to sender, and I never got to inspect what was inside. Whether that box contained the camera in the like-new condition I sent it, a used camera, or a box of stones is an eternal mystery.
Truly, had he shipped it to the right address, I would have had multiple witnesses and video footage of the unboxing.
Here’s where it gets interesting: when I appealed the claim, Amazon noted that the buyer is not responsible for shipping the item back to the correct address, and that once an A-to-Z Guarantee claim has been initiated, the buyer can indeed keep the item if they want to.
Indeed, I have a paper trail of the emails I did in fact send to Amazon. Somehow, they got their tickets confused, and when I followed up about it, they cut off communication.
So, as a buyer, you can keep an item with “no obligation to return,” even if you can’t substantiate your claim of “missing items or box.” Now the buyer has the camera, and the cash.
The whole experience has been frustrating and humiliating, and it has soured me on my photography hobby. If my experience is any indication, let this serve as a cautionary tale about selling such goods on Amazon.
As of now, I’ve emailed the buyer again asking him to ship the camera back to me, and I opened a case with Amazon in which I provided the 23 emails they claim I never sent them. That case was closed with no response from Amazon. I had an initially sympathetic ear through their Twitter support, until I mentioned the specifics of my case.
* If you’re going to sell on Amazon or elsewhere, take an actual video of you packing the camera. You need all the defense you can get against items mysteriously disappearing.
* Investigate more even-handed selling services, like eBay, Fred Miranda, or other online retailers.
* If you need a backup camera, go ahead and rent one. I’m a frequent customer of BorrowLenses, and I inﬁnitely regret not using them this time.
* Update your personal articles insurance policy for any moment that the camera is in your possession, and use something like MyGearVault to keep track of all the serial numbers. I only had the camera for a couple of days altogether, but that was enough.
I hope that this was a worst-case, everything-goes-wrong scenario, and I hope that it doesn’t happen to anyone else. There ought to be more even-handed failsafes for these transactions.
About the author: Cliff is an amateur landscape and travel photographer. You can ﬁnd more of his work on his website and Instagram account.
“We are sidestepping all of the scientiﬁc challenges that have held fusion energy back for more than half a century,” says the director of an Australian company that claims its hydrogen-boron fusion technology is already working a billion times better than expected.
HB11 Energy is a spin-out company that originated at the University of New South Wales, and it announced today a swag of patents through Japan, China and the USA protecting its unique approach to fusion energy generation.
Fusion, of course, is the long-awaited clean, safe theoretical solution to humanity’s energy needs. It’s how the Sun itself makes the vast amounts of energy that have powered life on our planet up until now. Where nuclear ﬁssion — the splitting of atoms to release energy — has proven incredibly powerful but insanely destructive when things go wrong, fusion promises reliable, safe, low cost, green energy generation with no chance of radioactive meltdown.
It’s just always been 20 years away from being 20 years away. A number of multi-billion-dollar projects are pushing slowly forward, from the Max Planck Institute’s insanely complex Wendelstein 7-X stellarator to the 35-nation ITER tokamak project, and most rely on a deuterium-tritium thermonuclear fusion approach that requires the creation of ludicrously hot temperatures, much hotter than the surface of the Sun, at upwards of 100 million degrees Celsius (180 million degrees Fahrenheit). This is where HB11’s tech takes a sharp left turn.
The result of decades of research by Emeritus Professor Heinrich Hora, HB11’s approach to fusion does away with rare, radioactive and difficult fuels like tritium altogether — as well as those incredibly high temperatures. Instead, it uses plentiful hydrogen and boron-11, employing the precise application of some very special lasers to start the fusion reaction.
Here’s how HB11 describes its “deceptively simple” approach: the design is “a largely empty metal sphere, where a modestly sized HB11 fuel pellet is held in the center, with apertures on different sides for the two lasers. One laser establishes the magnetic containment ﬁeld for the plasma and the second laser triggers the ‘avalanche’ fusion chain reaction. The alpha particles generated by the reaction would create an electrical ﬂow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator.”
HB11’s Managing Director Dr. Warren McKenzie clarifies over the phone: “A lot of fusion experiments are using the lasers to heat things up to crazy temperatures — we’re not. We’re using the laser to massively accelerate the hydrogen through the boron sample using non-linear forces. You could say we’re using the hydrogen as a dart, and hoping to hit a boron atom; if we hit one, we can start a fusion reaction. That’s the essence of it. If you’ve got a scientific appreciation of temperature, it’s essentially the speed of atoms moving around. Creating fusion using temperature is essentially randomly moving atoms around and hoping they’ll hit one another; our approach is much more precise.”
“The hydrogen/boron fusion creates a couple of helium atoms,” he continues. “They’re naked heliums, they don’t have electrons, so they have a positive charge. We just have to collect that charge. Essentially, the lack of electrons is a product of the reaction and it directly creates the current.”
The lasers themselves rely upon cutting-edge “Chirped Pulse Amplification” technology, whose development won its inventors the 2018 Nobel Prize in Physics. HB11 says its generators would be much smaller and simpler than any of the high-temperature fusion machines: compact, clean and safe enough to build in urban environments. There’s no nuclear waste involved, no superheated steam, and no chance of a meltdown.
“This is brand new,” Professor Hora tells us. “10-petawatt power laser pulses. It’s been shown that you can create fusion conditions without hundreds of millions of degrees. This is completely new knowledge. I’ve been working on how to accomplish this for more than 40 years. It’s a unique result. Now we have to convince the fusion people — it works better than the present day hundred million degree thermal equilibrium generators. We have something new at hand to make a drastic change in the whole situation. A substitute for carbon as our energy source. A radical new situation and a new hope for energy and the climate.”
Indeed, says Hora, experiments and simulations on the laser-triggered chain reaction are returning reaction rates a billion times higher than predicted. This cascading avalanche of reactions is an essential step toward the ultimate goal: reaping far more energy from the reaction than you put in. The extraordinary early results lead HB11 to believe the company “stands a high chance of reaching the goal of net energy gain well ahead of other groups.”
“As we aren’t trying to heat fuels to impossibly high temperatures, we are sidestepping all of the scientiﬁc challenges that have held fusion energy back for more than half a century,” says Dr McKenzie. “This means our development roadmap will be much faster and cheaper than any other fusion approach. You know what’s amazing? Heinrich is in his eighties. He called this in the 1970s, he said this would be possible. It’s only possible now because these brand new lasers are capable of doing it. That, in my mind, is awesome.”
Dr McKenzie won’t, however, be drawn on how long it’ll be before the hydrogen-boron reactor is a commercial reality. “The timeline question is a tricky one,” he says. “I don’t want to be a laughing stock by promising we can deliver something in 10 years, and then not getting there. First step is setting up camp as a company and getting started. First milestone is demonstrating the reactions, which should be easy. Second milestone is getting enough reactions to demonstrate an energy gain by counting the amount of helium that comes out of a fuel pellet when we have those two lasers working together. That’ll give us all the science we need to engineer a reactor. So the third milestone is bringing that all together and demonstrating a reactor concept that works.”
This is big-time stuff. Should cheap, clean, safe fusion energy really be achieved, it would be an extraordinary leap forward for humanity and a huge part of the answer for our future energy needs. And should it be achieved without insanely hot temperatures being involved, people would be even more comfortable having it close to their homes. We’ll be keeping an eye on these guys.
Lion cubs play-ﬁght to learn social skills. Rats play to learn emotional skills. Monkeys play to learn cognitive skills. And yet, in the last century, we humans have convinced ourselves that play is useless, and learning is supposed to be boring.
Gosh, no wonder we’re all so miserable.
Welcome to Explorable Explanations, a hub for learning through play! We’re a disorganized “movement” of artists, coders & educators who want to reunite play and learning.
Let’s get started! Check out these 3 random Explorables:
What should an essay be? Many people would say persuasive. That’s what a lot of us were taught essays should be. But I think we can aim for something more ambitious: that an essay should be useful.
To start with, that means it should be correct. But it’s not enough merely to be correct. It’s easy to make a statement correct by making it vague. That’s a common flaw in academic writing, for example. If you know nothing at all about an issue, you can’t go wrong by saying that the issue is a complex one, that there are many factors to be considered, that it’s a mistake to take too simplistic a view of it, and so on.
Though no doubt correct, such statements tell the reader nothing. Useful writing makes claims that are as strong as they can be made without becoming false.
For example, it’s more useful to say that Pike’s Peak is near the middle of Colorado than merely somewhere in Colorado. But if I say it’s in the exact middle of Colorado, I’ve now gone too far, because it’s a bit east of the middle.
Precision and correctness are like opposing forces. It’s easy to satisfy one if you ignore the other. The converse of vaporous academic writing is the bold, but false, rhetoric of demagogues. Useful writing is bold, but true.
It’s also two other things: it tells people something important, and that at least some of them didn’t already know.
Telling people something they didn’t know doesn’t always mean surprising them. Sometimes it means telling them something they knew unconsciously but had never put into words. In fact those may be the more valuable insights, because they tend to be more fundamental.
Let’s put them all together. Useful writing tells people something true and important that they didn’t already know, and tells them as unequivocally as possible.
Notice these are all a matter of degree. For example, you can’t expect an idea to be novel to everyone. Any insight that you have will probably have already been had by at least one of the world’s 7 billion people. But it’s sufficient if an idea is novel to a lot of readers.
Ditto for correctness, importance, and strength. In effect the four components are like numbers you can multiply together to get a score for usefulness. Which I realize is almost awkwardly reductive, but nonetheless true.
How can you ensure that the things you say are true and novel and important? Believe it or not, there is a trick for doing this. I learned it from my friend Robert Morris, who has a horror of saying anything dumb. His trick is not to say anything unless he’s sure it’s worth hearing. This makes it hard to get opinions out of him, but when you do, they’re usually right.
Translated into essay writing, what this means is that if you write a bad sentence, you don’t publish it. You delete it and try again. Often you abandon whole branches of four or five paragraphs. Sometimes a whole essay.
You can’t ensure that every idea you have is good, but you can ensure that every one you publish is, by simply not publishing the ones that aren’t.
In the sciences, this is called publication bias, and is considered bad. When some hypothesis you’re exploring gets inconclusive results, you’re supposed to tell people about that too. But with essay writing, publication bias is the way to go.
My strategy is loose, then tight. I write the first draft of an essay fast, trying out all kinds of ideas. Then I spend days rewriting it very carefully.
I’ve never tried to count how many times I proofread essays, but I’m sure there are sentences I’ve read 100 times before publishing them. When I proofread an essay, there are usually passages that stick out in an annoying way, sometimes because they’re clumsily written, and sometimes because I’m not sure they’re true. The annoyance starts out unconscious, but after the tenth reading or so I’m saying “Ugh, that part” each time I hit it. They become like briars that catch your sleeve as you walk past. Usually I won’t publish an essay till they’re all gone — till I can read through the whole thing without the feeling of anything catching.
I’ll sometimes let through a sentence that seems clumsy, if I can’t think of a way to rephrase it, but I will never knowingly let through one that doesn’t seem correct. You never have to. If a sentence doesn’t seem right, all you have to do is ask why it doesn’t, and you’ve usually got the replacement right there in your head.
This is where essayists have an advantage over journalists. You don’t have a deadline. You can work for as long on an essay as you need to get it right. You don’t have to publish the essay at all, if you can’t get it right. Mistakes seem to lose courage in the face of an enemy with unlimited resources. Or that’s what it feels like. What’s really going on is that you have different expectations for yourself. You’re like a parent saying to a child “we can sit here all night till you eat your vegetables.” Except you’re the child too.
I’m not saying no mistake gets through. For example, I added condition (c) in “A Way to Detect Bias” after readers pointed out that I’d omitted it. But in practice you can catch nearly all of them.
There’s a trick for getting importance too. It’s like the trick I suggest to young founders for getting startup ideas: to make something you yourself want. You can use yourself as a proxy for the reader. The reader is not completely unlike you, so if you write about topics that seem important to you, they’ll probably seem important to a significant number of readers as well.
Importance has two factors. It’s the number of people something matters to, times how much it matters to them. Which means of course that it’s not a rectangle, but a sort of ragged comb, like a Riemann sum.
The way to get novelty is to write about topics you’ve thought about a lot.