10 interesting stories served every morning and every evening.
Adapted from our book and video series, Refactoring UI.
Ever used one of those fancy color palette generators? You know, the ones where you pick a starting color, tweak some options that probably include some musical jargon like “triad” or “major fourth”, and are then bestowed the five perfect color swatches you should use to build your website?
This calculated and scientific approach to picking the perfect color scheme is extremely seductive, but not very useful.
Well, unless you want your site to look like this:
You can’t build anything with five hex codes. To build something real, you need a much more comprehensive set of colors to choose from.
You can break a good color palette down into three categories.
Text, backgrounds, panels, form controls — almost everything in an interface is grey.
You’ll need more greys than you think, too — three or four shades might sound like plenty but it won’t be long before you wish you had something a little darker than shade #2 but a little lighter than shade #3.
In practice, you want 8-10 shades to choose from (more on this later). Not so many that you waste time deciding between shade #77 and shade #78, but enough to make sure you don’t have to compromise too much.
True black tends to look pretty unnatural, so start with a really dark grey and work your way up to white in steady increments.
Most sites need one, maybe two colors that are used for primary actions, emphasizing navigation elements, etc. These are the colors that determine the overall look of a site — the ones that make you think of Facebook as “blue”, even though it’s really mostly grey.
Just like with greys, you need a variety (5-10) of lighter and darker shades to choose from.
Ultra-light shades can be useful as a tinted background for things like alerts, while darker shades work great for text.
On top of primary colors, every site needs a few accent colors for communicating different things to the user.
For example, you might want to use an eye-grabbing color like yellow, pink, or teal to highlight a new feature:
You might also need colors to emphasize different semantic states, like red for confirming a destructive action:
You’ll want multiple shades for these colors too, even though they should be used pretty sparingly throughout the UI.
If you’re building something where you need to use color to distinguish or categorize similar elements (like lines on graphs, events in a calendar, or tags on a project), you might need even more accent colors.
All in, it’s not uncommon to need as many as ten different colors with 5-10 shades each for a complex UI.
When you need to create a lighter or darker variation of a color in your palette, don’t get clever with CSS preprocessor functions like “lighten” or “darken” to create shades on the fly. That’s how you end up with 35 slightly different blues that all look the same.
Instead, define a fixed set of shades up front that you can choose from as you work.
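For illustration, a “fixed set defined up front” can be as simple as a lookup table. This is a sketch in Python rather than a stylesheet, and the hex values are invented, not a recommended palette — the point is that code asks for a named shade and can never invent a 36th blue on the fly:

```python
# A fixed scale defined up front -- hex values invented for illustration.
BLUE = {
    100: '#ebf8ff', 200: '#bee3f8', 300: '#90cdf4',
    400: '#63b3ed', 500: '#4299e1', 600: '#3182ce',
    700: '#2b6cb0', 800: '#2c5282', 900: '#2a4365',
}

def blue(step):
    """Look up a shade; raises KeyError instead of computing a new blue."""
    return BLUE[step]
```

Anything outside the nine defined steps fails loudly, which is exactly the constraint you want.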
So how do you put together a palette like this anyways?
Start by picking a base color for the scale you want to create — the color in the middle that your lighter and darker shades are based on.
There’s no real scientific way to do this, but for primary and accent colors, a good rule of thumb is to pick a shade that would work well as a button background.
It’s important to note that there are no real rules here like “start at 50% lightness” or anything — every color behaves a bit differently, so you’ll have to rely on your eyes for this one.
Next, pick your darkest shade and your lightest shade. There’s no real science to this either, but it helps to think about where they will be used and choose them using that context.
The darkest shade of a color is usually reserved for text, while the lightest shade might be used to tint the background of an element.
A simple alert component is a good example that combines both of these use cases, so it can be a great place to pick these colors.
Start with a color that matches the hue of your base color, and adjust the saturation and lightness until you’re satisfied.
Once you’ve got your base, darkest, and lightest shades, you just need to fill in the gaps in between them.
For most projects, you’ll need at least 5 shades per color, and probably closer to 10 if you don’t want to feel too constrained.
Nine is a great number because it’s easy to divide and makes filling in the gaps a little more straightforward. Let’s call our darkest shade 900, our base shade 500, and our lightest shade 100.
Start by picking shades 700 and 300, the ones right in the middle of the gaps. You want these shades to feel like the perfect compromise between the shades on either side.
This creates four more holes in the scale (800, 600, 400, and 200), which you can fill using the same approach.
You should end up with a pretty balanced set of colors that provide just enough options to accommodate your design ideas without feeling limiting.
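The gap-filling procedure above can be sketched mechanically — pick a shade halfway between its neighbors by averaging hue, lightness, and saturation. This is only a starting point (naive hue averaging misbehaves when hues wrap around 0°, and the anchor colors below are invented), so treat the output as something to adjust by eye, as the text says:

```python
import colorsys

def hex_to_hls(hex_color):
    """Parse '#rrggbb' into (hue, lightness, saturation), each in 0..1."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hls(r, g, b)

def hls_to_hex(h, l, s):
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return '#%02x%02x%02x' % tuple(round(c * 255) for c in (r, g, b))

def midpoint(a, b):
    """A shade roughly halfway between two shades (naive H/L/S average)."""
    (h1, l1, s1), (h2, l2, s2) = hex_to_hls(a), hex_to_hls(b)
    return hls_to_hex((h1 + h2) / 2, (l1 + l2) / 2, (s1 + s2) / 2)

# Hypothetical anchors: darkest (900), base (500), lightest (100).
scale = {900: '#1a365d', 500: '#3182ce', 100: '#ebf8ff'}
scale[700] = midpoint(scale[900], scale[500])
scale[300] = midpoint(scale[500], scale[100])
for k in (800, 600, 400, 200):          # fill the remaining holes
    scale[k] = midpoint(scale[k + 100], scale[k - 100])

print(sorted(scale))                    # nine shades, 100 through 900
```

Once the nine slots are filled, eyeball each pair of neighbors and nudge whatever looks off — the math only gets you into the right neighborhood.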
With greys the base color isn’t as important, but otherwise the process is the same. Start at the edges and fill in the gaps until you have what you need.
Pick your darkest grey by choosing a color for the darkest text in your project, and your lightest grey by choosing something that works well for a subtle off-white background.
As tempting as it is, you can’t rely purely on math to craft the perfect color palette.
A systematic approach like the one described above is great to get you started, but don’t be afraid to make little tweaks if you need to.
Once you actually start using your colors in your designs, it’s almost inevitable that you’ll want to tweak the saturation on a shade, or make a couple of shades lighter or darker. Trust your eyes, not the numbers.
Just try not to add new shades too often. If you’re not diligent about limiting your palette, you might as well have no color system at all.
Send an email to with whatever you want to torch. Use plain text or an image attachment. PG-13 rules apply.
Watch on the live feed as your message is created, conveyed, and then dropped into the rolling flames.
What’s this experiment all about?
Well, 2020’s been a rough year. An absolute dumpster fire of a year for a lot of people.
That’s when it came to us. Can email be a conduit for catharsis? If you could type out an email, press send, and see it being consumed in an actual dumpster ﬁre, would it help reclaim a little bit of what we’ve lost?
P.S. We’ll only use your email address to notify you about your burn. That’s it, the end.
P.P.S. We’re offsetting by 3x every bit of CO2 this creates via Cool Effect.
Performance must suck when trying to emulate x86 on ARM, right?
Overwhelmingly positive user reviews
Silence is Golden or Cool as a Cucumber?
Is 8 GB RAM on x86 Intel/AMD the same as 8 GB on Apple Silicon M1?
Server applications will see an uptake on ARM as well
A lot of old laptops that are “still working” are about to get replaced
Hackintoshers are ready to say “Yes”
Which brings us to: What have others been doing all this time?
These are the words used by the user holdagold on reddit to describe their experience with the new Apple Silicon M1 MacBook Air. Rarely does a product leave people effusing to the extent Apple Silicon M1 has done this week. At best, you get the people who really care about a system’s internals very excited, like we saw with Zen 3’s launch recently. For everyday users who just want to browse the web, stream some Netflix, maybe edit some documents, computers have been “perfectly fine” for the last decade. We’ve seen incremental year-over-year improvements with slightly more performance, slightly more battery life, a marginally faster SSD, a somewhat thinner design, etc. But something genuinely new, something revolutionary, something once in a generation has been missing. I believe the Apple M1 represents something we can truly call “revolutionary”.
Before we proceed, it’s essential to set the context that I’ve only used two Apple devices in my entire life - a personal 2013 MacBook Air and a 2019 MacBook Pro that I got through work. Everything else has been either a custom-built PC, a Windows laptop, or an Android/Windows Mobile smartphone. Even as a “PC/Android Guy”, I have to admit what I saw this week is something special. I believe it’ll go down as a significant milestone in computing history on par with industry-defining chips like Intel’s 8086, 386, 486, Pentium, and Conroe or AMD’s K8, Zen, etc. I hope for the return of Moore’s law and the awakening of the x86 manufacturers from their slumber, as this will be the “slowest” CPU Apple will ever make. As Henry Clay once said,
Of all human powers operating on the affairs of mankind, none is greater than that of competition.
This blog is then my observation of the excitement around this significant launch and captures some of the user and reviewer commentary.
Apple launched its own M1 SoC that integrates an 8-core CPU, 8-core GPU, 16-core Neural Engine, media encode and decode engines, and RAM - all in a single package. By including the RAM in the package, Apple is marketing this as a Unified Memory Architecture (UMA), central to the performance improvements M1 brings.
The first products and price points the M1 will be going into are:
Apple promises its new chip is much more energy-efficient than its Intel counterparts, so the battery life claims have gone up across the board:
On the MacBook Air - up to 18 hours of video on a single charge (up from 12 hours on this year’s Intel-powered MacBook Air) and offers up to 15 hours of wireless web browsing per charge (up from 11 hours previously)
On the MacBook Pro - up to 17 hours of wireless web browsing (up from 10 hours with this year’s Intel-powered MacBook Pro), and 20 hours of video playback (up from 10 hours before).
To showcase that energy efficiency, Apple is shipping the MacBook Air without any fan! It will be passively cooled like all iPhones and iPads.
Performance must suck when trying to emulate x86 on ARM, right?
Surprisingly, no! Apple included its Rosetta 2 ahead-of-time binary translation technology, which translates code designed to run on Intel/x86 CPUs for the Apple Silicon CPUs. The performance is much better than expected and ranges between 70 and 80% of native code, which is surprising compared to Microsoft’s struggles in emulating x86 Windows apps on ARM CPUs. Apple’s answer might lie in something called TSO, a.k.a. total store ordering, as explained by u/Veedrac and u/ShaidarHaran2 on reddit:
TSO, a.k.a. total store ordering, is a type of memory ordering, and affects how cores see the operations performed in other cores. Total store ordering is a strong guarantee provided by x86, that very roughly means that all stores from other processors are ordered the same way for every processor, and in a reasonably consistent order, with exceptions for local memory.
In contrast, Arm architectures favour weaker memory models that allow a lot of reordering of loads and stores. This has the advantage that in general there is less overhead where these guarantees are not needed, but it means that when ordering is required for correctness, you need to explicitly run instructions to ensure it. Emulating x86 would require this on practically every store instruction, which would slow emulation down a lot. That’s what the hardware toggle is for.
In other words, Apple has, of course, been playing the very long game. TSO is quite a large benefit to emulating x86, hence why Rosetta 2 appears to put out a very decent 70% of native chip performance, that and install-time translation for everything but JIT features. That’s on a chip not even meant to be a mac chip, so with further expanded caches, a wider, faster engine, perhaps applying the little cores to emulation which they’re not currently, and so on, x86_64 performance should be very very decent. I’m going to dare upset some folks and say perhaps even be faster in emulation than most contemporary x86 chips of the time; if you only lose 20% of native performance when it’s all said and done, it doesn’t take much working backwards to figure where they’d need to be, and Gurman said they were aiming for over 50% faster than Intel.
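The effect those quotes describe can be illustrated with a toy model in plain Python — nothing hardware-specific, just the classic “message passing” litmus test, where one thread publishes data and then sets a flag. When every thread’s stores and loads stay in program order (the guarantee strong x86-style ordering gives for this pattern), the reader can never observe the flag without the data; once per-thread reordering is allowed, as on a weakly ordered CPU with no barriers, it can:

```python
def interleavings(a, b):
    """All merges of two op sequences that preserve each sequence's order."""
    if not a:
        yield tuple(b); return
    if not b:
        yield tuple(a); return
    for rest in interleavings(a[1:], b):
        yield (a[0],) + rest
    for rest in interleavings(a, b[1:]):
        yield (b[0],) + rest

def run(order):
    """Execute one interleaving; return what the reader saw (flag, data)."""
    mem = {'data': 0, 'flag': 0}
    regs = {}
    for kind, var, reg in order:
        if kind == 'st':
            mem[var] = 1
        else:
            regs[reg] = mem[var]
    return regs['r_flag'], regs['r_data']

writer = [('st', 'data', None), ('st', 'flag', None)]    # publish, then signal
reader = [('ld', 'flag', 'r_flag'), ('ld', 'data', 'r_data')]

ordered = {run(o) for o in interleavings(writer, reader)}
# Weak model: each thread's two ops may also retire in reverse order.
weak = set()
for w in (writer, writer[::-1]):
    for r in (reader, reader[::-1]):
        weak |= {run(o) for o in interleavings(w, r)}

print((1, 0) in ordered)  # False: seeing the flag implies seeing the data
print((1, 0) in weak)     # True: stale data despite the flag
```

On a weakly ordered machine, ruling out that `(1, 0)` outcome takes an explicit barrier — which, per the quote, a translator would have to emit on nearly every store, the per-instruction cost the M1’s hardware TSO mode avoids.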
There have been numerous professional reviews and YouTube videos enumerating how Apple’s new products are better than their previous Intel counterparts. In the end, though, it comes down to how these products fit into the core workflows of the consumer who’s spending their money on them. There have been plenty of real-world experiences that I’ve seen in my filter bubble, mostly Reddit and Twitter. I will share some of these throughout this blog.
I pray that Intel, AMD, and Qualcomm is letting the M1 give them ideas, take them in new directions. Because this level of sorcery is too damn powerful to be held by a single company. Especially a monopolizing conglomerate like Apple. But fucking kudos to those chip wizards 👏
— DHH (@dhh) November 23, 2020
Purchased a new MacBook Air w/ Apple’s M1 chip.
Everything is WICKED fast.
Windows and prompts pop up instantly. Slowdown NEVER happens — even w/ numerous apps going.
Evernote, always a resources hog for me, is now a non-issue.
Huge props, Apple. 👍
— JP Mangalindan (@JPManga) November 19, 2020
Have had my M1 MacBook for about a week now… and have been blown away by the performance. Battery just last and lasts, and either the fan never runs or is inaudible. Everything seems faster, even the stuff not yet compiled for Apple Silicon.
— Blake Scholl 🛫 (@bscholl) November 24, 2020
Definitely don’t get near one! I have the 12.9” iPad Pro, new Max iPhone, older 13” MBP, and a beastly gaming PC. Our IT guy got the new MacBook Pro today and after playing with it for 10 minutes I was already rearranging my finances in my head.
People keep saying this but it’s eerily fast and silent, like alien technology. I exported a 5 minute clip in unoptimized Premiere Pro and I swear it did it faster than my PC with a 2070 ever has. The MBP wasn’t even warm to the touch afterwards either.
> It’s honestly the best purchase I’ve made in the last 10 years.
This is exactly how I feel. Feels like I’m holding a magical device that shouldn’t exist. Haven’t felt that in a long long time
I have a 2018 15” MacBook Pro which is used almost exclusively in clamshell mode these days and attached to an ultrawide monitor. I use it mainly for photoshop and Lightroom for my photography work, and it’s been painful to say the least. It’s quick for all of two minutes until the fan kicks in with the thermal throttling, at which point the machine chugs to a crawl. I’ve been wanting to get a desktop in replacement, eyeing the previous gen Mac Minis but unable to make the move due to the lack of discrete GPU and an inability to push my monitor’s resolution.
In comes the M1 Mac Mini - I ordered right away and received it Tuesday, and my god has it been a breath of fresh air. First impressions were insanely positive, even hooked to my 5120x1440 display it was lightning fast. But yesterday I put it through the paces with edits from a recent shoot, and it was beyond stellar. More photoshop tabs open than ever before, Lightroom CC and classic open together, nothing could slow it down.
To say I’m impressed with this first gen is a massive understatement; this is shaping up to be one of the most enjoyable devices I’ve ever owned. First computer that hasn’t had some feeling of compromise in a long time.
I feel so fucking stupid for ordering a Macbook Air in April this year.
Same. I’m mad at myself. I ordered a MacBook Pro around the same time and of course this comes out. Trade in value is a joke too.
I was stupid to by [sic] the early 2020 model. Sent it back today in exchange for this one. The performance on the M1 is far more than what I expected
As someone who got an entry level 2020 MBP in June… fuck.
Ha my dad is 5 months into his MBP gutted
Sucks cause i just bought a MacBook 3 years ago. And that battery is super super appealing.
I haven’t plugged in this M1 Mac in almost 2 days. It’s only half dead. lol. What is this sorcery? 🔋
Apple Silicon Macs are the future, man. Competing laptops are gonna have a hard time catching up. pic.twitter.com/FmX5uVKkFd
— Computer Clan (📌M1) (@thecomputerclan) November 20, 2020
The battery life on the new MacBook Pro with M1 chip is INSANE
I’ve been doing work on this for several hours, and it’s still at 87% 🤯🤯🤯
I guess it was a good thing I got my 3 week old laptop stolen? Lol#AppleM1 pic.twitter.com/fENYDS235O
— William Lex Ham ✊🏽🧢 #TheyCantBurnUsAll (@WillLexHam) November 20, 2020
I unplugged mine yesterday morning at 5:30am. Worked heavily on it throughout the day (lots of tabs open in chrome, video editing in FCPX, watching videos, photo editing in LrC, and testing lots of apps). When I finally shut it down at 10pm last night, I was at 60%. Yesterday was my heaviest use day in a LONG time, and I just couldn’t kill it. My 2015 13″ MBP would have died around 10am.
My 2015 still works fine, so I thought the switch would be lackluster. But the M1 is everything people are saying it is. It is just so damn fast and smooth. I’ve had a few very minor things happen like Preview locking up once, and Chrome freezing once, but other than that, this thing just flies. I fired up my 2015 this morning to transfer a few files, and it felt painfully slow. It’s honestly the best purchase I’ve made in the last 10 years. I’ve been on it non-stop this morning for 4 hours and my battery is at 94%. It’s insane. And in my two days of trying to kill this thing, the fan hasn’t turned on once, and it’s never gotten warmer than “cool to the touch.”
I’ve said it before and I’ll say it again. Owned soooo many laptops over the years, both Mac and Windows. Never have I ever had something like this. I would say the closest would be an iPad but as we all know, certain tasks can be very limited on an iPad.
This thing handles everything like a freaking beast and the battery is quite literally an infinity stone at this point. It just blows everything else out of the water. I’m on day 3/4 right now. Countless hours of browsing, videos, videos in the background, light gaming for about an hour. The dang thing is still at 40%.
Everything from here on might as well be posted under r/BlackMagicFuckery because it just doesn’t make sense.
I got my M1 MacBook Pro yesterday. I spent the afternoon setting it up, downloading tons of apps, installing Xcode, doing a couple of test builds, syncing all my photo library and letting Photos do its indexing. At no point did the laptop get warm, and was silent throughout. I probably should have got the Air, because it’s clear I’m not going to stress this enough to get the fans to even kick in.
At no point did I plug the laptop in. I did all this on the charge it had from the factory - about 75% when I received it. By the time I was done for the day, it still had about 40% left.
It’s absolutely magical. It’s not iPad level battery, it’s way better than even that!
I’m on day 3 with 6+ hours of use. Code compilations, npm installing benchmarks. Still have 63%
I bought a base pro and the battery is just bananas. I was working on subordinate performance reports in Adobe reader and listening to music with my AirPods today. From 8am to noon I used 14% battery.
It’s outta the park. For what I use it for, web browsing and videos, it literally will go a week without a charge. I look at the percentage sometimes just to be like, meh, of course it lost 25%, only to see it’s down 5% after an hour! LMAO it’s stupid how amazing it is. I thought my iPad Air battery was great before this.
Silence is Golden or Cool as a Cucumber?
My base MacBook Air M1 basically destroys it at everything except gaming. But I don’t really game on mac anyways. Everything in the ui just feels immediate. Photo editing has worked great. I had it playing 4 4K videos at once and they were all just fine. It got a bit warm but never hot. And it’s silent unlike my hackintosh that sounds like a jet engine and keeps my office 15° warmer than the rest of the house.
I had a new intel MacBook Air in my hands just a month ago that was burning my lap just trying to watch 4K Netflix. I was getting antsy waiting for apple silicon and needed a new laptop. I decided to send it back and just wait and I’m glad I did. This is a completely different experience.
Contrast this to a 2018 i7 Mac mini onto which I copied 60GB of files from an external hard drive last night. It sounded like a jet engine and was just as warm.
I’ve been using an M1 MacBook Air and it refuses to get warm…you don’t realize what a jump this is until you’ve used an M1 in person.
Transferring data from my 2020 Intel MacBook Pro, to the 2020 M1 MacBook Pro.
The Intel is burning hot and the fans are maxed out.
M1 is cool and fans don’t even seem to be running.
— Daniel (@ZONEofTECH) November 20, 2020
Is 8 GB RAM on x86 Intel/AMD the same as 8 GB on Apple Silicon M1?
I can’t believe I’m asking this. All my education and experience with technology has taught me that memory is memory. If you run a lot of programs, you need more of it. 16 GB minimum seems to be the default advice these days, with more if you’re doing specialist tasks like video editing or compiling code. However, the experience of early M1 users testing out 8 GB machines, like those below, seems to indicate otherwise.
How can this be so? A PC usually dies with just 8 GB of RAM when trying to use so many apps. There hasn’t been much explanation of this, but a couple of posts might offer hints.
First, David Smith, an engineer at Apple, might have some insight into this.
…and ~14 nanoseconds on an M1 emulating an Intel 😇
— David Smith (@Catfish_Man) November 10, 2020
Second, John Gruber on Daring Fireball explains how this helps explain the iPhone-like RAM management that now seems possible on Macs.
Retain and release are tiny actions that almost all software, on all Apple platforms, does all the time. … The Apple Silicon system architecture is designed to make these operations as fast as possible. It’s not so much that Intel’s x86 architecture is a bad fit for Apple’s software frameworks, as that Apple Silicon is designed to be a bespoke fit for it. … Retaining and releasing NSObjects is so common on MacOS (and iOS) that making it 5 times faster on Apple Silicon than on Intel has profound implications on everything from performance to battery life.
Broadly speaking, this is a significant reason why M1 Macs are more efficient with less RAM than Intel Macs. This, in a nutshell, helps explain why iPhones run rings around even flagship Android phones, even though iPhones have significantly less RAM. iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
Third, Marcel Weiher explains Apple’s obsession with keeping memory consumption under control from his time at Apple, as well as the benefits of reference counting:
where Apple might have been “focused” on performance for the last 15 years or so, they have been completely anal about memory consumption. When I was there, we were fixing 32 byte memory leaks. Leaks that happened once. So not an ongoing consumption of 32 bytes again and again, but a one-time leak of 32 bytes.
The benefit of sticking to RC is much-reduced memory consumption. It turns out that for a tracing GC to achieve performance comparable with manual allocation, it needs several times the memory (different studies find different overheads, but at least 4x is a conservative lower bound). While I haven’t seen a study comparing RC, my personal experience is that the overhead is much lower, much more predictable, and can usually be driven down with little additional effort if needed.
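CPython happens to use reference counting itself, so it can demonstrate the property these quotes are getting at: memory is reclaimed the instant the last reference goes away, deterministically, with no collector pause and no headroom multiple. A small sketch (about CPython only, not a claim about Apple’s frameworks):

```python
import sys

class Tracked:
    """Counts live instances so we can watch deterministic reclamation."""
    live = 0
    def __init__(self):
        Tracked.live += 1
    def __del__(self):
        Tracked.live -= 1

obj = Tracked()
ref = obj                          # a second reference: a "retain"
assert sys.getrefcount(obj) >= 3   # obj, ref, plus getrefcount's own argument
del ref                            # one "release" -- object still alive
assert Tracked.live == 1
del obj                            # last release: freed immediately, no GC pass
assert Tracked.live == 0
```

A tracing collector would reclaim the same object eventually, at a time of its choosing — which is why, per the study figures quoted above, it needs several times the memory to match the same performance.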
Fourth, memory bandwidth might have a role to play in enabling that multitasking:
The memory bandwidth on the new Macs is impressive. Benchmarks peg it at around 60GB/sec — about 3x faster than a 16” MBP. Since the M1 CPU only has 16GB of RAM, it can replace the entire contents of RAM almost 4 times every second. Think about that…
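Taking the quoted figures at face value, the arithmetic behind that claim is simply:

```python
bandwidth_gb_per_s = 60   # benchmark figure quoted above
ram_gb = 16               # maximum M1 configuration

full_rewrites_per_second = bandwidth_gb_per_s / ram_gb
print(full_rewrites_per_second)  # 3.75 -- roughly "4 times every second"
```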
Perhaps the most striking feature of these M1 Macs is the value they bring at the sub-$1000 price point (base models). A task like editing an 8K RED RAW video file that might have required a $5000 machine before can now be done on a $699 Mac Mini M1 or a fanless MacBook Air that costs $999 🤯
To those who are still doubting the M1 Macs, imagine if Apple launched a new MacBook Air at the same $999 starting price that came with a Core i7 10750H and RX 560 graphics, but they did it without a fan and added 6 hours more battery life. That’s basically what they did 🤯
— Luke Miani (@LukeMiani) November 21, 2020
Apple M1 perf pr0n:
I compiled all the @libretro cores for comparison:
My 2019 12-core Mac Pro with 32GB RAM took 6095.13 seconds.
My 13” M1 MacBook Pro with 16GB ram took…. Wait for it…. 4570.09.
If you build code, there is nothing to think about. Get one of these Now!
— Lemont (@cocoalabs) November 21, 2020
I tried to build a fresh React Native on the new Apple MBA with M1 / 16GB ram. Cache cleaned.
It took 25s.
To compare, the same task, executed in exactly the same conditions, on my MBP (13″, 2018, Core i5 2.3GHz, 16GB ram) took… 1min21s.
These things are a cancerous growth on the web.
But we can make a difference - no matter how small it may seem. 1MB Club is a growing collection of performance-focused web pages found across the internet. The one and only rule for a web page to qualify as a member:
Due to the huge number of submissions on initial launch, requesting new sites to be added is temporarily paused. Once the backlog has been worked through, submissions will open up again. Thanks!
The department has not disclosed the exact location of the monolith, fearing explorers may try to seek it out and “become stranded”. The bighorn sheep that wildlife officials were counting are native to many parts of southern Utah, where the terrain is rugged.
The Arecibo telescope’s long and productive life has come to an end. The National Science Foundation (NSF) announced today it will decommission the iconic radio telescope in Puerto Rico following two cable breaks in recent months that have brought the structure to near collapse. The 57-year-old observatory, a survivor of numerous hurricanes and earthquakes, is now in such a fragile state that attempting repairs would put staff and workers in danger. “This decision was not an easy one to make,” Sean Jones, NSF’s assistant director for mathematical and physical sciences, said at a news briefing today. “We understand how much Arecibo means to [the research] community and to Puerto Rico.”
Ralph Gaume, director of NSF’s astronomy division, said at the briefing the agency wants to preserve other instruments at the site, as well as the visitor and outreach center. But they are under threat if the telescope structure collapses. That would bring the 900-ton instrument platform, suspended 137 meters above the 305-meter-wide dish, crashing down. Flailing cables could damage other buildings on the site, as could the three support towers if they fell, too. “There is a serious risk of an unexpected and uncontrolled collapse,” Gaume said. “A controlled decommissioning gives us the opportunity to preserve valuable assets that the observatory has.”
Over the next few weeks, engineering firms will develop a plan for a controlled dismantling. It may involve releasing the platform from its cables explosively and letting it fall.
The Arecibo telescope has been widely used by astrophysicists as well as atmospheric and planetary scientists since the early 1960s. For many years it was the main instrument involved in listening for messages from extraterrestrial civilizations, and its striking looks won it a supporting role in feature films.
The observatory has been battered by the elements over the years, most recently by Hurricane Maria in 2017 and an earthquake and aftershocks in December 2019. It’s unknown whether those stresses contributed to the cable failures, the first of which occurred on 10 August. An auxiliary cable, installed in the 1990s when 300 tons of new instruments were added to the suspended platform, broke away from its socket at one end, damaging some instruments and gashing the surface of the dish below.
Engineers investigating the break ordered a replacement cable and others to lend support. During their studies, they noticed that one of the 12 main suspension cables—one connected to the same tower as the failed auxiliary cable—had a dozen broken wires around its exterior. Because these 9-centimeter-thick cables are made up of 160 wires, they thought it had enough capacity to shoulder the extra load.
But on 7 November, that cable broke. The University of Central Florida (UCF), which leads the consortium managing the facility for NSF, already had three engineering firms on-site assessing the first break. They quickly set about analyzing the safety of the whole structure. NSF sent another firm and the Army Corps of Engineers. Of the five, three said the only way forward was a controlled decommissioning. If one main cable was operating below its design capacity, “now all the cables are suspect,” said Ashley Zauderer, NSF’s program director for the Arecibo Observatory. If one of the three remaining main cables connected to the impaired tower also failed, the engineers concluded, the platform would collapse.
NSF has, in recent years, been seeking to reduce its commitment to the Arecibo Observatory and, in taking over its management, UCF has shouldered more of the financial burden. But Gaume stated: “This decision has nothing to do with the scientific merits of Arecibo Observatory. It is all about safety.” The facility still has powerful and unique capabilities that researchers rely on, he said. “I’m confident of the resilience of the astrophysics community,” he added, and NSF is working with some of its other facilities to take up some of the studies that have been halted.
“I don’t believe we shall ever have a good money again before we take the thing out of the hands of government, that is, we can’t take it violently out of the hands of government, all we can do is by some sly roundabout way introduce something that they can’t stop.” — F. A. Hayek 1984
Discussions — More info on WTF Happened in 1971
I was stuck on scening the chaos/warped space section. Every other section had clear rules, e.g. I could only use certain elements, whereas this section is supposed to have no rules. So I had to employ a different mindset to move forward: “Don’t think too much about it”. I didn’t think too much and doodled away and a beautiful mess came out.
Programmers like me frequently have this dilemma: should I manually do this tedious thing, or create automation to do it for me? In my case, I’m building tools that could be useful for everyone, so I do have an incentive to automate as much as I can. But I had a quickly approaching deadline. I determined that it would be faster to manually draw ribbons than to figure out how to extend the curve tool to create them for me.
Then, after I finished drawing the ribbons, I found out it took less time than expected to extend the curve tool to make ribbons. I’m not sure what lesson I learned here. Maybe I tend to err on the side of pessimism as a reaction to being too optimistic in the past? Predicting the required amount of work is a generally hard problem in software engineering.
There are times we get too attached to what we made and are unwilling to iterate upon it. There are times we keep redoing something without making progress. And there are times we accidentally lose progress but, after redoing the work, realize we did it better the second time around. If that happens to you, reassure yourself you’ll do it better the second time.
When you work outside your comfort zone, you become a lot more aware of your creative process. I’m not an illustrator or storyteller, but I forced myself to work in those mediums and became hyperaware of the nature of those mediums and my own processes. This is how I’m bringing you all these lessons I’ve learned. This experience will help me with my future ambitions.
When we get engrossed in a project, it’s very easy to zoom in on details and lose sight of the bigger picture, and we tend to get desensitized to other details. If you’re looking to achieve a specific effect with your project, or just want to understand how others perceive it in general, the best thing you can do is to ask for critical feedback from other people who work in the same or a similar medium. Their perspective is uncolored by how much you’ve already stared at your project, and their different experiences, backgrounds, and tastes can bring you really valuable insight that you may not have been able to see otherwise.
I already planned out most of Omniverse II, but right when I was ﬁnishing up the project, I knew I should verify if what I planned actually had the effect I wanted. So I solicited feedback from Rabid Squirrel, and they gave me really helpful suggestions like tweaking the camera work and adding the “danger spikes”.
Note: As I wrote this out, I realized these may be more suitable as standalone pieces with potential for way more depth. Consider these rough drafts.
The biggest lesson is how to tell a story. From that follows world building, lore, set creation (spatial structure), pacing, and generally being critical of everything with respect to fitting into the narrative. You may have a lot of cool ideas, but if you want to tell a coherent story, you need to make it cohesive, and you’ll probably have to throw away the irrelevant parts, even if they are cool.
Unique to Line Rider is structural cohesion, how the track is spatially arranged. Consider the structure of the world you build and how Bosh’s traversal drives the narrative. Is Bosh entering a new area? Is he returning to a previous area? Did he fall and need to get back up?
I wanted to demonstrate how we can use movement techniques as a means towards something greater rather than for their own sake. And the clearest way to do that is to reclaim such a feat of Olympic puppetry as a compelling story, retrofitting a narrative in its place where the movement seems to emerge from how Bosh interacts with the environment he is in.
Recycling was another one of those movement techniques done for the sake of overcoming the challenge. But it can be used for narrative purposes, like being stuck in a loop or traveling through a past part of the track in the opposite direction to “turn back”. I think there’s more narrative depth that could be done with revisiting, much more than in Omniverse II, perhaps in a track featuring a more intricate story.
In a composition, negative space is the absence of content, contrasting with the content that’s there. While negative space is already commonly used in Line Rider tracks, I think it’s still worth discussing. Negative space in Line Rider can be in the form of the white void (absence of lines) or as airtime (absence of movement). There are obvious uses like dramatic moments in the music, but we should also consider more subtle “less is more” cases, like bringing attention to an object by removing details around the object.