A friend and I were recently lamenting the strange death of OKCupid. Seven years ago, when I first tried online dating, the way it worked was that you wrote a long essay about yourself and what you were looking for. You answered hundreds of questions about your personality, your dreams, your desires for your partner, your hard nos. Then you saw who in your area was most compatible, with a “match score” between 0 and 100%. The match scores were eerily good. Pretty much every time I read the profile of someone with a 95% match score or higher, I fell a little bit in love. Every date I went on was fun; the chemistry wasn’t always there, but I felt like we could at least be great friends.
I’m now quite skeptical of quantiﬁcation of romance and the idea that similarity makes for good relationships. I was somewhat skeptical then, too. What I did not expect, what would have absolutely boggled young naive techno-optimist Ivan, was that 2016-era OKCupid was the best that online dating would ever get. That the tools that people use to ﬁnd the most important relationship in their lives would get worse, and worse, and worse. OKCupid, like the other acquisitions of Match.com, is now just another Tinder clone - see face, swipe left, see face, swipe right. A digital nightclub. And I just don’t expect to meet my wife in a nightclub.
This isn’t just dating apps. Nearly all popular consumer software has been trending towards minimal user agency, inﬁnitely scrolling feeds, and garbage content. Even that crown jewel of the Internet, Google Search itself, has decayed to the point of being unusable for complicated queries. Reddit and Craigslist remain incredibly useful and valuable precisely because their software remains frozen in time. Like old Victorian mansions in San Francisco they stand, shielded by a quirk of fate from the winds of capital, reminders of a more humane age.
How is it possible that software gets worse, not better, over time, despite billions of dollars of R&D and rapid progress in tooling and AI? What evil force, more powerful than Innovation and Progress, is at work here?
In my six years at Google, I got to observe this force up close, relentlessly killing features users loved and eroding the last vestiges of creativity and agency from our products. I know this force well, and I hate it, but I do not yet know how to ﬁght it. I call this force the Tyranny of the Marginal User.
Simply put, companies building apps have strong incentives to gain more users, even users that derive very little value from the app. Sometimes this is because you can monetize low value users by selling them ads. Often, it’s because your business relies on network effects and even low value users can help you build a moat. So the north star metric for designers and engineers is typically something like Daily Active Users, or DAUs for short: the number of users who log into your app in a 24 hour period.
What’s wrong with such a metric? A product that many users want to use is a good product, right? Sort of. Since most software products charge a ﬂat per-user fee (often zero, because ads), and economic incentives operate on the margin, a company with a billion-user product doesn’t actually care about its billion existing users. It cares about the marginal user - the billion-plus-ﬁrst user - and it focuses all its energy on making sure that marginal user doesn’t stop using the app. Yes, if you neglect the existing users’ experience for long enough they will leave, but in practice apps are sticky and by the time your loyal users leave everyone on the team will have long been promoted.
So in practice, the design of popular apps caters almost entirely to the marginal user. But who is this marginal user, anyway? Why does he have such bad taste in apps?
Here’s what I’ve been able to piece together about the marginal user. Let’s call him Marl. The ﬁrst thing you need to know about Marl is that he has the attention span of a goldﬁsh on acid. Once Marl opens your app, you have about 1.3 seconds to catch his attention with a shiny image or triggering headline, otherwise he’ll swipe back to TikTok and never open your app again.
Marl’s tolerance for user interface complexity is zero. As far as you can tell he only has one working thumb, and the only thing that thumb can do is ﬂick upwards in a repetitive, zombielike scrolling motion. As a product designer concerned about the wellbeing of your users, you might wonder - does Marl really want to be hate-reading Trump articles for 6 hours every night? Is Marl okay? You might think to add a setting where Marl can enter his preferences about the content he sees: less politics, more sports, simple stuff like that. But Marl will never click through any of your hamburger menus, never change any setting to a non-default. You might think Marl just doesn’t know about the settings. You might think to make things more convenient for Marl, perhaps add a little “see less like this” button below a piece of content. Oh boy, are you ever wrong. This absolutely infuriates Marl. On the margin, the handful of pixels occupied by your well-intentioned little button replaced pixels that contained a triggering headline or a cute image of a puppy. Insufﬁciently stimulated, Marl throws a ﬁt and swipes over to TikTok, never to return to your app. Your feature decreases DAUs in the A/B test. In the launch committee meeting, you mumble something about “user agency” as your VP looks at you with pity and scorn. Your button doesn’t get deployed. You don’t get your promotion. Your wife leaves you. Probably for Marl.
Of course, “Marl” isn’t always a person. Marl can also be a state of mind. We’ve all been Marl at one time or another - half consciously scrolling in bed, in line at the airport with the announcements blaring, reﬂexively opening our phones to distract ourselves from a painful memory. We don’t usually think about Marl, or identify with him. But the structure of the digital economy means most of our digital lives are designed to take advantage of this state. A substantial fraction of the world’s most brilliant, competent, and empathetic people, armed with near-unlimited capital and increasingly god-like computers, spend their lives serving Marl.
By contrast, consumer software tools that enhance human agency, that serve us when we are most creative and intentional, are often built by hobbyists and used by a handful of nerds. If such a tool ever gets too successful one of the Marl-serving companies, ﬂush with cash from advertising or growth-hungry venture capital, will acquire it and kill it. So it goes.
Thanks to Ernie French (fuseki.net) for many related conversations and comments on this essay.
When was the Dollar highest against the Euro?
Here is a small program that calculates it:
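A sketch of such a pipeline, reconstructed from the description below (the ECB URL and the exact query are assumptions); the runnable part uses a tiny inline stand-in for the real csv so it works offline:

```shell
# The real thing downloads the ECB zipfile (URL assumed; the zip holds a
# single csv, which gunzip can unpack):
#   curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
#     | gunzip \
#     | sqlite3 -csv ':memory:' '.import /dev/stdin stdin' \
#         'select Date from stdin order by cast(USD as real) asc limit 1;'
#
# Offline stand-in: two rows of the same shape. Rates are quoted as
# currency-per-Euro, so the Dollar is "highest" when the USD value is
# lowest - hence the ascending sort.
printf 'Date,USD\n2000-10-26,0.8252\n2008-07-15,1.5990\n' \
  | sqlite3 -csv ':memory:' \
      '.import /dev/stdin stdin' \
      'select Date from stdin order by cast(USD as real) asc limit 1;'
```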
The output: 2000-10-26. (Try running it yourself.)
The curl bit downloads the official historical data that the European Central Bank (ECB) publishes on the position of the Euro against other currencies. (The -s flag just removes some noise from standard error.)
That data comes as a zipﬁle, which gunzip will decompress.
sqlite3 queries the csv inside. ':memory:' tells sqlite to use an in-memory database. After that, .import /dev/stdin stdin tells sqlite to load standard input into a table called stdin. The string that follows that is a SQL query.
Although pulling out a simple max is easy, the data shape is not ideal. It’s in “wide” format - a Date column, and then an extra column for every currency (USD, JPY, and so on).
When doing filters and aggregations, life is easier if the data is in “long” format: one row per (date, currency, rate) combination.
Switching from wide to long is a simple operation, commonly called a “melt”. Unfortunately, it’s not available in SQL.
No matter, you can melt with pandas:
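A sketch of the melt on a miniature wide-format frame (the column names are assumptions based on the header described above):

```python
import pandas as pd

# A miniature "wide" frame, shaped like the ECB csv: a Date column plus
# one column per currency.
wide = pd.DataFrame({
    "Date": ["2000-10-26", "2008-07-15"],
    "USD": [0.8252, 1.5990],
    "JPY": [89.30, 168.68],
})

# melt produces "long" format: one row per (date, currency, rate).
long_form = wide.melt(id_vars=["Date"], var_name="currency", value_name="rate")
print(long_form)
```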
There is one more problem. The file mungers at the ECB have wrongly put a trailing comma at the end of every line. This makes csv parsers pick up an extra, blank column at the end. Our sqlite query didn’t notice, but these commas interfere with the melt, creating a whole set of junk rows.
The effects of that extra comma can be removed via pandas by adding one more thing to our method chain: .iloc[:, :-1], which effectively says “give me all rows (:) and all but the last column (:-1)”.
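Sketched on a two-line csv with the offending trailing commas:

```python
import io
import pandas as pd

# A trailing comma on every line -> pandas picks up a final unnamed column.
csv_text = "Date,USD,JPY,\n2000-10-26,0.8252,89.30,\n"
dirty = pd.read_csv(io.StringIO(csv_text))   # 4 columns, the last one junk

clean = dirty.iloc[:, :-1]                   # all rows, all but the last column
```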
Does everyone who uses this ﬁle have to repeat this data shitwork?
Tragically, the answer is yes. As they say: “data janitor: nobody’s dream, everyone’s job”.
In full fairness, though, the ECB foreign exchange data is probably in the top 10% of all open data releases. Usually, getting viable tabular data out of someone is a much more tortuous and involved process.
Some things we didn’t have to do in this case: negotiate access (for example by paying money or talking to a salesman); deposit our email address/company name/job title into someone’s database of qualified leads; observe any quota; authenticate (often a substantial side-quest of its own); read any API docs at all; or deal with any issues more serious than basic formatting and shape.
So eurofxref-hist.zip is, relatively speaking, pretty nice actually.
But anyway - I’ll put my cleaned up copy into a csvbase
table so you, dear reader, can skip the tedium and just have fun.
Here’s how I do that:
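A sketch of the shape of it (the table url and cleaning step are assumptions); the runnable line at the end PUTs to a local file:// url so the upload mechanics can be tried without a csvbase account:

```shell
# Assumed shape of the real pipeline: download, clean, then HTTP PUT.
#   curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
#     | gunzip \
#     | python3 melt_and_trim.py \
#     | curl -n --upload-file - https://csvbase.com/yourname/eurofxref
#
# --upload-file - reads the request body from standard input. Locally:
printf 'Date,currency,rate\n2000-10-26,USD,0.8252\n' \
  | curl -s --upload-file - file:///tmp/eurofxref_demo.csv
```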
All I’ve done is add another curl, to HTTP PUT the csv ﬁle into csvbase.
--upload-file - uploads from standard input to the given url (via HTTP PUT). If the table doesn’t already exist in csvbase, it is created. -n adds my credentials from my ~/.netrc. That’s it. Simples.
Alright, now the data cleaning phase is over, let’s do some more interesting stuff.
That’s somewhat legible for over 6000 datapoints in an 80x25 character terminal. You can make out the broad trend. A reasonable data-ink ratio.
gnuplot is like a little mini-programming language of its own. Here’s what the above snippet does:
* using 1:2 with lines - draw lines from columns 1 and 2 (the date and the rate)
You can, of course, also draw graphs to proper images:
Outputting to SVG is only a bit more complicated than ascii art. In order for it to look decent you need to help gnuplot understand that it’s “timeseries” data - i.e. that the x axis is time - give a format for that time, and then tell it to rotate the markings on the x axis so that they are readable. It’s a bit wordy though: let’s bind it to a bash function so we can reuse it:
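A sketch of such a function (the exact gnuplot options are assumptions; pipe it two comma-separated columns of date,rate):

```shell
# Wrap the wordy timeseries boilerplate so it can be reused:
timeseries_svg() {
    gnuplot -e "
        set terminal svg size 800,400;
        set xdata time;
        set timefmt '%Y-%m-%d';
        set format x '%Y';
        set xtics rotate;
        set datafile separator ',';
        plot '-' using 1:2 with lines title 'rate'
    "
}
# usage: some_query_producing_csv | timeseries_svg > rate.svg
```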
So far, so good. But it would be nice to try out more sophisticated analyses: let’s try putting a nice rolling average in so that we can see a trend line:
Smooth. If you don’t have duckdb installed, it’s not hard to adapt the above for sqlite3 (the query is the same). DuckDB is a tool I wanted to show because it’s a lot like sqlite, except columnar rather than row-oriented. For me, though, the main value is its easy ergonomics.
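That adaptation can be sketched via Python’s stdlib sqlite3 (table name, window size, and values are made up; the window-function SQL is the point):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rates (Date TEXT, currency TEXT, rate REAL)")
con.executemany(
    "INSERT INTO rates VALUES (?, 'USD', ?)",
    [("2000-10-23", 0.84), ("2000-10-24", 0.83),
     ("2000-10-25", 0.83), ("2000-10-26", 0.8252)],
)

# A 3-row trailing average; the real query presumably uses a wider window.
rows = con.execute("""
    SELECT Date,
           AVG(rate) OVER (ORDER BY Date
                           ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
    FROM rates
    WHERE currency = 'USD'
""").fetchall()
```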
Here is one of those niceties: you can load csvs into tables straight from HTTP:
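Something like this (the table url is a stand-in; `read_csv_auto` is DuckDB’s csv-sniffing table function):

```sql
-- DuckDB infers column names and types from the remote csv.
CREATE TABLE eurofxref AS
SELECT * FROM read_csv_auto('https://csvbase.com/yourname/eurofxref');
```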
That’s pretty easy, and DuckDB does a reasonable job of inferring types. There are a lot of other usability niceties too: for example, it helpfully detects your terminal size and abridges tables by default rather than ﬂooding your terminal with an enormous resultset. It has a progress bar for big queries! It can output markdown tables! Etc!
A lot is possible with a zipﬁle of data and just the programs that are either already installed or a quick brew install/apt install away. I remember how impressed I was when I was ﬁrst shown this eurofxref-hist.zip by an old hand from foreign exchange when I worked in a bank. It was so simple: the simplest cross-organisation data interchange protocol I had then seen (and probably since).
A mere zipﬁle with a csv in it seems so diminutive, but in fact an enormous mass of ﬁnancial applications use this particular zipﬁle every day. I’m pretty sure that’s why they’ve left those commas in - if they removed them now they’d break a lot of code.
When open data is made really easily available, it also does double duty as an open API. After all, for the largeish fraction of APIs which are less about calling remote functions than about exchanging data, what is the functional difference?
So I think the ECB’s zipﬁle is a pretty good starting point for a data interchange format. I love the simplicity - and I’ve tried to keep that with csvbase.
In csvbase, every table has a single url, following the form: https://csvbase.com/{username}/{table_name}
And on each url, there are four main verbs:
When you GET: you get a csv (or a web page, if you’re in a browser).
When you PUT a new csv: you create a new table, or overwrite the existing one.
When you POST a new csv: you bulk add more rows to an existing table.
When you DELETE: that table is no more.
To authenticate, just use HTTP Basic Auth.
Could it be any simpler? If you can think of a way: write me an email.
I said above that most SQL databases don’t have a “melt” operation. One that I know of that does is Microsoft SQL Server (via UNPIVOT). One question that SQL-knowers frequently ask is: why does anyone use R or Pandas at all when SQL already exists? A key reason is that R and Pandas are very strong on data cleanup.
One under-appreciated feature of bash pipelines is that they are multi-process. Each program runs independently, in its own process. While curl is downloading data from the web, grep is filtering it, sqlite is querying it and perhaps another curl is uploading it again, etc. All in parallel, which can, surprisingly, make it very competitive with fancy cloud alternatives.
Why was the Euro so weak back in 2000? It launched in January 1999 as, initially, a sort of in-game currency for the European Union: it existed only inside banks, so there were no notes or coins for it. That all came later. So did belief - early on it didn’t look like the little Euro was going to make it, and the rate against the Dollar fell as low as 0.8252. That means that in October 2000, a Dollar would buy you 1.21 Euros (to reverse an exchange rate, take 1/rate). Nowadays the Euro is much stronger: a Dollar buys you less than 1 Euro.
* Microsoft’s AI research team, while publishing a bucket of open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data — including a disk backup of two employees’ workstations.
* The backup includes secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.
* The researchers shared their ﬁles using an Azure feature called SAS tokens, which allows you to share data from Azure Storage accounts.
* The access level can be limited to speciﬁc ﬁles only; however, in this case, the link was conﬁgured to share the entire storage account — including another 38TB of private ﬁles.
* This case is an example of the new risks organizations face when starting to leverage the power of AI more broadly, as more of their engineers now work with massive amounts of training data. As data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards.
As part of the Wiz Research Team’s ongoing work on accidental exposure of cloud-hosted data, the team scanned the internet for misconﬁgured storage containers. In this process, we found a GitHub repository under the Microsoft organization named robust-models-transfer. The repository belongs to Microsoft’s AI research division, and its purpose is to provide open-source code and AI models for image recognition. Readers of the repository were instructed to download the models from an Azure Storage URL:
However, this URL allowed access to more than just open-source models. It was conﬁgured to grant permissions on the entire storage account, exposing additional private data by mistake.
Our scan shows that this account contained 38TB of additional data — including Microsoft employees’ personal computer backups. The backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from 359 Microsoft employees.
In addition to the overly permissive access scope, the token was also misconﬁgured to allow “full control” permissions instead of read-only. Meaning, not only could an attacker view all the ﬁles in the storage account, but they could delete and overwrite existing ﬁles as well.
This is particularly interesting considering the repository’s original purpose: providing AI models for use in training code. The repository instructs users to download a model data file from the SAS link and feed it into a script. The file’s format is ckpt, a checkpoint format produced here by the PyTorch library. It’s serialized with Python’s pickle module, which is prone to arbitrary code execution by design. Meaning, an attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it.
However, it’s important to note this storage account wasn’t directly exposed to the public; in fact, it was a private storage account. The Microsoft developers used an Azure mechanism called “SAS tokens”, which allows you to create a shareable link granting access to an Azure Storage account’s data — while upon inspection, the storage account would still seem completely private.
In Azure, a Shared Access Signature (SAS) token is a signed URL that grants access to Azure Storage data. The access level can be customized by the user; the permissions range between read-only and full control, while the scope can be either a single ﬁle, a container, or an entire storage account. The expiry time is also completely customizable, allowing the user to create never-expiring access tokens. This granularity provides great agility for users, but it also creates the risk of granting too much access; in the most permissive case (as we’ve seen in Microsoft’s token above), the token can allow full control permissions, on the entire account, forever — essentially providing the same access level as the account key itself.
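For concreteness, the anatomy of an Account SAS URL looks roughly like this (the values are invented; sv, ss, srt, sp, se, and sig are the real query-parameter names):

```
https://<account>.blob.core.windows.net/<container>/<blob>
    ?sv=2021-08-06            sv  - signed service version
    &ss=b                     ss  - services: (b)lob
    &srt=sco                  srt - resource types: (s)ervice, (c)ontainer, (o)bject
    &sp=rwdl                  sp  - permissions: read, write, delete, list
    &se=2051-01-01T00:00:00Z  se  - expiry; no upper limit, decades away is legal
    &sig=...                  sig - HMAC-SHA256 over the above, keyed by the account key
```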
There are 3 types of SAS tokens: Account SAS, Service SAS, and User Delegation SAS. In this blog we will focus on the most popular type — Account SAS tokens, which were also used in Microsoft’s repository.
Generating an Account SAS is a simple process: the user configures the token’s scope, permissions, and expiry date, and generates the token. Behind the scenes, the browser downloads the account key from Azure and signs the generated token with it. This entire process happens on the client side; it’s not an Azure event, and the resulting token is not an Azure object.
Because of this, when a user creates a highly-permissive, non-expiring token, there is no way for an administrator to know this token exists and where it circulates. Revoking a token is no easy task either - it requires rotating the account key that signed the token, rendering all other tokens signed by the same key ineffective as well. These unique pitfalls make this service an easy target for attackers looking for exposed data.
Besides the risk of accidental exposure, the service’s pitfalls make it an effective tool for attackers seeking to maintain persistency on compromised storage accounts. A recent Microsoft report indicates that attackers are taking advantage of the service’s lack of monitoring capabilities in order to issue privileged SAS tokens as a backdoor. Since the issuance of the token is not documented anywhere, there is no way to know that it was issued and act against it.
SAS tokens pose a security risk, as they allow sharing information with external unidentiﬁed identities. The risk can be examined from several angles: permissions, hygiene, management and monitoring.
A SAS token can grant a very high access level to a storage account, whether through excessive permissions (like read, list, write or delete), or through wide access scopes that allow users to access adjacent storage containers.
SAS tokens have an expiry problem — our scans and monitoring show organizations often use tokens with a very long (sometimes inﬁnite) lifetime, as there is no upper limit on a token’s expiry. This was the case with Microsoft’s token, which was valid until 2051.
Account SAS tokens are extremely hard to manage and revoke. There isn’t any official way to keep track of these tokens within Azure, nor to monitor their issuance, which makes it difficult to know how many tokens have been issued and are in active use. The reason even issuance cannot be tracked is that SAS tokens are created on the client side; it is not an Azure-tracked activity, and the generated token is not an Azure object. Because of this, even what appears to be a private storage account may potentially be widely exposed.
As for revocation, there isn’t a way to revoke a singular Account SAS; the only solution is revoking the entire account key, which invalidates all the other tokens issued with the same key as well.
Monitoring the usage of SAS tokens is another challenge, as it requires enabling logging on each storage account separately. It can also be costly, as the pricing depends on the request volume of each storage account.
SAS security can be significantly improved with the following recommendations.
Due to the lack of security and governance over Account SAS tokens, they should be considered as sensitive as the account key itself. Therefore, it is highly recommended to avoid using Account SAS for external sharing. Token creation mistakes can easily go unnoticed and expose sensitive data.
For external sharing, consider using a Service SAS with a Stored Access Policy. This feature connects the SAS token to a server-side policy, providing the ability to manage policies and revoke them in a centralized manner.
If you need to share content in a time-limited manner, consider using a User Delegation SAS, since their expiry time is capped at 7 days. This feature connects the SAS token to Azure Active Directory’s identity management, providing control and visibility over the identity of the token’s creator and its users.
Additionally, we recommend creating dedicated storage accounts for external sharing, to ensure that the potential impact of an over-privileged token is limited to external data only.
To avoid SAS tokens completely, organizations will have to disable SAS access for each of their storage accounts separately. We recommend using a CSPM to track and enforce this as a policy.
Another solution to disable SAS token creation is by blocking access to the “list storage account keys” operation in Azure (since new SAS tokens cannot be created without the key), then rotating the current account keys, to invalidate pre-existing SAS tokens. This approach would still allow creation of User Delegation SAS, since it relies on the user’s key instead of the account key.
To track active SAS token usage, you need to enable Storage Analytics logs for each of your storage accounts. The resulting logs will contain details of SAS token access, including the signing key and the permissions assigned. However, it should be noted that only actively used tokens will appear in the logs, and that enabling logging comes with extra charges — which might be costly for accounts with extensive activity.
Azure Metrics can be used to monitor SAS tokens usage in storage accounts. By default, Azure records and aggregates storage account events up to 93 days. Utilizing Azure Metrics, users can look up SAS-authenticated requests, highlighting storage accounts with SAS tokens usage.
In addition, we recommend using secret scanning tools to detect leaked or over-privileged SAS tokens in artifacts and publicly exposed assets, such as mobile apps, websites, and GitHub repositories — as can be seen in the Microsoft case.
For more information on cloud secret scanning, please check out our recent talk from the fwd:cloudsec 2023 conference, “Scanning the internet for external cloud exposures”.
Wiz customers can leverage the Wiz secret scanning capabilities to identify SAS tokens in internal and external assets and explore their permissions. In addition, customers can use the Wiz CSPM to track storage accounts with SAS support.
* Detect SAS tokens: use this query to surface all SAS tokens in all your monitored cloud environments.
* Detect high-privilege SAS tokens: use the following control to detect highly-privileged SAS tokens located on publicly exposed workloads.
* CSPM rule for blocking SAS tokens: use the following Cloud Conﬁguration Rule to track storage accounts allowing SAS token usage.
As companies embrace AI more widely, it is important for security teams to understand the inherent security risks at each stage of the AI development process.
The incident detailed in this blog is an example of two of these risks.
The ﬁrst is oversharing of data. Researchers collect and share massive amounts of external and internal data to construct the required training information for their AI models. This poses inherent security risks tied to high-scale data sharing. It is crucial for security teams to deﬁne clear guidelines for external sharing of AI datasets. As we’ve seen in this case, separating the public AI data set to a dedicated storage account could’ve limited the exposure.
The second is the risk of supply chain attacks. Due to improper permissions, the public token granted write access to the storage account containing the AI models. As noted above, injecting malicious code into the model ﬁles could’ve led to a supply chain attack on other researchers who use the repository’s models. Security teams should review and sanitize AI models from external sources, since they can be used as a remote code execution vector.
The simple step of sharing an AI dataset led to a major data leak, containing over 38TB of private data. The root cause was the usage of Account SAS tokens as the sharing mechanism. Due to a lack of monitoring and governance, SAS tokens pose a security risk, and their usage should be as limited as possible. These tokens are very hard to track, as Microsoft does not provide a centralized way to manage them within the Azure portal. In addition, these tokens can be conﬁgured to last effectively forever, with no upper limit on their expiry time. Therefore, using Account SAS tokens for external sharing is unsafe and should be avoided.
In the wider scope, similar incidents can be prevented by granting security teams more visibility into the processes of AI research and development teams. As we see wider adoption of AI models within companies, it’s important to raise awareness of relevant security risks at every step of the AI development process, and make sure the security team works closely with the data science and research teams to ensure proper guardrails are deﬁned.
Microsoft’s account of this issue is available on the MSRC blog.
* Jul. 20, 2020 — SAS token ﬁrst committed to GitHub; expiry set to Oct. 5, 2021
Hi there! We are Hillai Ben-Sasson (@hillai), Shir Tamari (@shirtamari), Nir Ohfeld (@nirohfeld), Sagi Tzadik (@sagitz_) and Ronen Shustin (@ronenshh) from the Wiz Research Team. We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on ﬁnding new attack vectors in the cloud and uncovering isolation issues in cloud vendors.
We would love to hear from you! Feel free to contact us on Twitter or via email: email@example.com.
Recently Unity announced a pricing update which changes the pricing model from a simple pay-per-seat to a model whereby developers have to pay per install. This isn’t restricted to paid installs or plays; it applies to every install.
This price change has particularly grave consequences on mobile, where revenue per install is highly variable. According to Ironsource (a Unity company), for example, the average revenue per ad impression is $0.02. Unity would like to charge $0.20 per install after your app has made $200,000 over the past year. What this means is that every one of your users has to see at least 10 ads after installing your app just for the install not to cost you money. And these numbers are averages: if your app is more popular in emerging markets, the revenue per ad will be significantly lower, making the problem far worse.
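The arithmetic, spelled out (figures from the paragraph above):

```python
fee_per_install = 0.20   # Unity's fee once past the revenue threshold
revenue_per_ad = 0.02    # Ironsource's average revenue per ad impression

# Ad impressions each install must generate just to break even:
ads_to_break_even = fee_per_install / revenue_per_ad
```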
To make matters (even) worse, this change will be done retroactively on existing applications as well.
It’s like if Microsoft decided that you had to pay per person who read your document made in Word. If they did you’d just stop using Word and instead switch over to Google Docs. An inconvenience perhaps, but not an existential threat to your survival as a document editor.
Video games, however, are not like this. Developing a game is a long, intricate process, often spanning months or even years. After this work is done, simply jumping ship to a different engine is not easy, requiring a considerable amount of time and money - if it is possible at all; perhaps the development team of the game has since disbanded.
Open source game engines, such as Godot engine, protect your work by giving you rights to the engine that cannot be taken away, or altered. Under Godot’s MIT license every user gets the rights to use the engine as they see ﬁt, modify and distribute it, the only requirement being that you must acknowledge the original creators and cannot claim the engine as your own creation. Note that this applies only to the engine! Your game is your own!
While Unreal engine currently does not have terms like Unity’s, there’s nothing stopping them from doing something similar. In fact if Unity manages to get away with this it seems likely they will follow suit.
The Ramatak mobile studio enhances the open source Godot engine with the things you need for a mobile game: ads, in-app purchases, and analytics. And while Ramatak does charge for these services, if we were to try to alter the deal in a way you are uncomfortable with, you can simply take your game and use the open source version instead. You’d lose access to the Ramatak-specific enhancements, but your game is yours to do with as you please.
To underscore this point further: The ﬁrst publication on our site talks about how we consider our relationship with our users. We must simply offer something our users want since switching away from our offering to the open source version is so easy.
The (mobile) gaming landscape changes all the time, and we can’t predict what the next “big thing” will be. By using an open source engine you can be sure that whatever that “next thing” is, the engine won’t keep you from taking advantage of it. Nor would the engine be able to dictate your monetization strategy for you.
With Ramatak mobile studio, developers get the best of both worlds: the freedom and security of an open-source engine and the advanced, tailored features that modern mobile games require.
And most importantly: We can’t alter the deal.
Following the update to its pricing plan that charges developers for each game install, Unity has seemingly silently removed its GitHub repository that tracks any terms of service (ToS) changes the company made.
As discovered by a Reddit user, Unity has removed the GitHub repository that allowed the public to track changes made to its license agreements, and has updated the ToS to remove a clause that let developers use the terms from the older version of the game engine that their product shipped with.
With the repository deleted, the webpage now returns an Error 404 unless visited through a web archive. The last archived snapshot of the page dates to 16 July 2022, so Unity appears to have quietly deleted the repo sometime after that date.
The GitHub repository was first established in 2019, when, in an official blog post, Unity said it was committed to being an open platform and that hosting the terms on the cloud-based software development service would “give developers full transparency about what changes are happening, and when.”
In the same blog post, Unity also revealed that they have updated the license agreement, saying “When you obtain a version of Unity, and don’t upgrade your project, we think you should be able to stick to that version of the ToS.”
In the term update from 10 March 2022, Unity added a clause to the Modiﬁcation section of the ToS, stating the following:
“If the Updated Terms adversely impact your rights, you may elect to continue to use any current-year versions of the Unity Software (e.g., 2018.x and 2018.y and any Long Term Supported (LTS) versions for that current-year release) according to the terms that applied just prior to the Updated Terms.”
“The Updated Terms will then not apply to your use of those current-year versions unless and until you update to a subsequent year version of the Unity Software (e.g. from 2019.4 to 2020.1).”
However, on 3 April 2023, a few months before the supposed repository deletion date, Unity updated its ToS once again, removing the clause added on 10 March 2022 and preventing developers from relying on the agreement from the version their game shipped with.
The clause is now completely absent from the current ToS, which means that users are bound by any changes Unity makes to its services regardless of version number, including pricing updates such as the recent fee that will charge developers per game install.
When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.
DALL·E 3 will be available to ChatGPT Plus and Enterprise customers in early October. As with DALL·E 2, the images you create with DALL·E 3 are yours to use and you don’t need our permission to reprint, sell or merchandise them.
As of now, 15 September 2023, the comic book property called Fables, including all related Fables spin-offs and characters, is now in the public domain. What was once wholly owned by Bill Willingham is now owned by everyone, for all time. It’s done, and as most experts will tell you, once done it cannot be undone. Take-backs are neither contemplated nor possible.
Q: Why Did You Do This?
A number of reasons. I’ve thought this over for some time. In no particular order they are:
1) Practicality: When I ﬁrst signed my creator-owned publishing contract with DC Comics, the company was run by honest men and women of integrity, who (for the most part) interpreted the details of that agreement fairly and above-board. When problems inevitably came up we worked it out, like reasonable men and women. Since then, over the span of twenty years or so, those people have left or been ﬁred, to be replaced by a revolving door of strangers, of no measurable integrity, who now choose to interpret every facet of our contract in ways that only beneﬁt DC Comics and its owner companies. At one time the Fables properties were in good hands, and now, by virtue of attrition and employee replacement, the Fables properties have fallen into bad hands.
Since I can’t afford to sue DC, to force them to live up to the letter and the spirit of our long-time agreements; since even winning such a suit would take ridiculous amounts of money out of my pocket and years out of my life (I’m 67 years old, and don’t have the years to spare), I’ve decided to take a different approach, and ﬁght them in a different arena, inspired by the principles of asymmetric warfare. The one thing in our contract the DC lawyers can’t contest, or reinterpret to their own beneﬁt, is that I am the sole owner of the intellectual property. I can sell it or give it away to whomever I want.
I chose to give it away to everyone. If I couldn’t prevent Fables from falling into bad hands, at least this is a way I can arrange that it also falls into many good hands. Since I truly believe there are still more good people in the world than bad ones, I count it as a form of victory.
2) Philosophy: In the past decade or so, my thoughts on how to reform the trademark and copyright laws in this country (and others, I suppose) have undergone something of a radical transformation. The current laws are a mishmash of unethical backroom deals to keep trademarks and copyrights in the hands of large corporations, who can largely afford to buy the outcomes they want.
In my template for radical reform of those laws I would like it if any IP is owned by its original creator for up to twenty years from the point of ﬁrst publication, and then goes into the public domain for any and all to use. However, at any time before that twenty year span bleeds out, you the IP owner can sell it to another person or corporate entity, who can have exclusive use of it for up to a maximum of ten years. That’s it. Then it cannot be resold. It goes into the public domain. So then, at the most, any intellectual property can be kept for exclusive use for up to about thirty years, and no longer, without exception.
Of course, if I’m going to believe such radical ideas, what kind of hypocrite would I be if I didn’t practice them? Fables has been my baby for about twenty years now. It’s time to let it go. This is my ﬁrst test of this process. If it works, and I see no legal reason why it won’t, look for other properties to follow in the future. Since DC, or any other corporate entity, doesn’t actually own the property, they don’t get a say in this decision.
Q: What Exactly Has DC Comics Done to Provoke This?
Too many things to list exhaustively, but here are some highlights: Throughout the years of my business relationship with DC, with Fables and with other intellectual properties, DC has always been in violation of their agreements with me. Usually it’s in smaller matters, like forgetting to seek my opinion on artists for new stories, or for covers, or formats of new collections and such. In those times, when called on it, they automatically said, “Sorry, we overlooked you again. It just fell through the cracks.” They use the “fell through the cracks” line so often, and so reﬂexively, that I eventually had to bar them from using it ever again. They are often late reporting royalties, and often under-report said royalties, forcing me to go after them to pay the rest of what’s owed.
Lately though their practices have grown beyond these mere annoyances, prompting some sort of showdown. First they tried to strong-arm the ownership of Fables from me. When Mark Doyle and Dan Didio first approached me with the idea of bringing Fables back for its 20th anniversary (both gentlemen since fired from DC), during the contract negotiations for the new issues, their legal negotiators tried to make it a condition of the deal that the work be done as work for hire, effectively throwing the property irrevocably into the hands of DC. When that didn’t work, their excuse was, “Sorry, we didn’t read your contract going into these negotiations. We thought we owned it.”
More recently, during talks to try to work out our many differences, DC ofﬁcers admitted that their interpretation of our publishing agreement, and the following media rights agreement, is that they could do whatever they wanted with the property. They could change stories or characters in any way they wanted. They had no obligation whatsoever to protect the integrity and value of the IP, either from themselves, or from third parties (Telltale Games, for instance) who want to radically alter the characters, settings, history and premises of the story (I’ve seen the script they tried to hide from me for a couple of years). Nor did they owe me any money for licensing the Fables rights to third parties, since such a license wasn’t anticipated in our original publishing agreement.
When they capitulated on some of the points in a later conference call, promising on the phone to pay me back monies owed for licensing Fables to Telltale Games, for example, in the execution of the new agreement, they reneged on their word and offered the promised amount instead as a “consulting fee,” which avoided the precedent of admitting this was money owed, and included a non-disclosure agreement that would prevent me from saying anything but nice things about Telltale or the license.
And so on. There’s so much more, but these, as I said, are some of the highlights. At that point, since I disagreed on all of their new interpretations of our longstanding agreements, we were in conﬂict. They practically dared me to sue them to enforce my rights, knowing it would be a long and debilitating process. Instead I began to consider other ways to go.
Q: Are You Concerned at What DC Will Do Now?
No. I gave them years to do the right thing. I tried to reason with them, but you can’t reason with the unreasonable. They used these years to make soothing promises, tell lies about how dedicated they were towards working this out, and keep dragging things out as long as possible. I gave them an opportunity to renegotiate the contracts from the ground up, putting everything in unambiguous language, and they ignored that offer. I gave them the opportunity, twice, to simply tear up our contracts, and we each go our separate ways, and they ignored those offers. I tried to go over their heads, to deal directly with their new corporate masters, and maybe ﬁnd someone willing to deal in good faith, and they blocked all attempts to do so. (Try getting any ofﬁcer of DC Comics to identify who they report to up the company ladder. I dare you.) In any case, without giving them details, I warned them months in advance that this moment was coming. I told them what I was about to do would be “both legal and ethical.” Now it’s happened.
Note that my contracts with DC Comics are still in force. I did nothing to break them, and cannot unilaterally end them. I still can’t publish Fables comics through anyone but them. I still can’t authorize a Fables movie through anyone but them. Nor can I license Fables toys nor lunchboxes, nor anything else. And they still have to pay me for the books they publish. And I’m not giving up on the other money they owe. One way or another, I intend to get my 50% of the money they’ve owed me for years for the Telltale Game and other things.
However, you, the new 100% owner of Fables, never signed such agreements. For better or worse, DC and I are still locked together in this unhappy marriage, perhaps for all time.
If I understand the law correctly (and be advised that copyright law is a mess; purposely vague and murky, and no two lawyers — not even those specializing in copyright and trademark law — agree on anything), you have the rights to make your Fables movies, and cartoons, and publish your Fables books, and manufacture your Fables toys, and do anything you want with your property, because it’s your property.
Mark Buckingham is free to do his version of Fables (and I dearly hope he does). Steve Leialoha is free to do his version of Fables (which I’d love to see). And so on. You don’t have to get my permission (but you might get my blessing, depending on your plans). You don’t have to get DC’s permission, or the permission of anyone else. You never signed the same agreements I did with DC Comics.
It was my absolute joy and pleasure to bring you Fables stories for the past twenty years. I look forward to seeing what you do with it.
For questions and further information you can contact Bill Willingham at:
firstname.lastname@example.org Please include “Fables Public Domain” in the subject line, so I don’t assume you’re another Netﬂix promotion.
When Chromebooks debuted in 2012, their affordable price tags helped make personal computing more accessible. That also made them a great fit for the education world, providing schools with secure, simple and manageable devices while helping them save on their budgets. In fact, Chromebooks are the number one device used in K-12 education globally, according to Futuresource. Plus, they’re a sustainable choice, with recycled materials that reduce their environmental impact and repair programs that help them last longer. Today, we’re announcing new ways to keep your Chromebooks up and running even longer. All Chromebook platforms will get regular automatic updates for 10 years — more than any other operating system commits to today. We’re also working with partners to build Chromebooks with more post-consumer recycled materials (PCR), and rolling out new, power-efficient features and quicker processes to repair them. And at the end of their usefulness, we continue to help schools, businesses and everyday users find the right recycling option.

Let’s take a closer look at what’s coming, and how we consider the entire lifecycle of a Chromebook — from manufacturing all the way to recycling.
Security is our number one priority. Chromebooks get automatic updates every four weeks that make your laptop more secure and help it last longer. And starting next year, we’re extending those automatic updates so your Chromebook gets enhanced security, stability and features for 10 years after the platform was released.

A platform is a series of components that are designed to work together — something a manufacturer selects for any given Chromebook. To ensure compatibility with our updates, we work with all the component manufacturers within a platform (for things like the processor and Wi-Fi) to develop and test the software on every single Chromebook.

Starting in 2024, if you have Chromebooks that were released from 2021 onwards, you’ll automatically get 10 years of updates. For Chromebooks released before 2021 and already in use, users and IT admins will have the option to extend automatic updates to 10 years from the platform’s release (after they receive their last automatic update).

Even if a Chromebook no longer receives automatic updates, it still comes with strong, built-in security features. With Verified Boot, for example, your Chromebook does a self-check every time it starts up. If it detects that the system has been tampered with or corrupted in any way, it will typically repair itself, reverting back to its original state.

You can find more information about the extended updates in our Help Center, Admin console or in Settings.
Many schools extend their laptops’ lifespans by building in-school repair programs. In fact, more than 80% of U.S. schools that participated in a recent Google survey are repairing at least some of their Chromebooks in-house. The Chromebook Repair Program helps schools like Jenks Public Schools find parts and provides guides for repairing specific Chromebooks, either onsite or through partner programs. Many organizations even offer repair certifications for Chromebooks.

We’re rolling out updates that help make repairs even faster. Our new repair flows allow authorized repair centers and school technicians to repair Chromebooks without a physical USB key. This reduces the time required for software repairs by over 50% and limits time away from the classroom.
We’re making sure Chromebooks are more sustainable when it comes to both hardware and software. In the coming months, we’ll roll out new, energy-efﬁcient features to a majority of compatible platforms. Adaptive charging will help preserve battery health, while battery saver will reduce or turn off energy-intensive processes.
And last year, we partnered with Acer, ASUS, Dell, HP and Lenovo to prioritize building more sustainable Chromebooks, including using ocean-bound plastics, PCR materials, recyclable packaging and low carbon emission manufacturing processes. This year alone, Chromebook manufacturers announced 12 new Chromebooks made with PCR and repairable parts.
All devices reach a time when they stop being useful, especially as hardware evolves. Schools can either sell or recycle Chromebooks via their reseller, who will often collect them onsite. (Before you turn them over to a remarketer or recycler, make sure all devices are removed from management first.) The reseller or refurbisher can then provide the school with monetary or service credits, and resell, use for parts or recycle the Chromebook completely.

You can also search for drop-off recycling locations near you with our global recycling drop-off points feature in Google Maps.
In addition to reducing environmental impact, Chromebooks reduce expenses for school districts — allowing them to focus more of their limited budget on other benefits for teachers and students. Chromebooks have lower upfront costs than other devices: a 55% lower device cost and a 57% lower cost of operations. Over three years, Chromebooks save more than $800 in operating costs per device compared to others. And as a preventative cost-savings measure, automatic updates combined with existing layers of security have kept Chromebooks free of any reported ransomware attack.

With all these updates, we’re committed to keeping Chromebooks universally accessible, helpful and secure — and helping you safely learn and work on them for years to come.
This past month I’ve been working on a project that I’m eager to write way too many words about. But for now, it’s not ready to talk about in public. Meanwhile, I either have way too many words about topics that I’m conﬁdent nobody wants to hear about or too few words about topics that folks tend to ﬁnd interesting.
In lieu of publishing something too unappealing or too trite, I’ve decided to tell a (true!) story that’s been rattling around in the back of my head for more than a few years.
In 2016 I joined Uber. I’d followed a director from Box who had been offered a role leading Business Intelligence at Uber. She told me about a team that she thought I’d be perfect for—it was called “Crystal Ball” and it was doing some of the most incredible work she’d come across. I’d put in two good years at Box and seen it through an IPO and was ready for something new, so I jumped.
Uber was weird from the get-go. The ﬁrst week (dubbed “Engucation”) was a mix of learning how to do things that I’d never need to do in the role that I held and setting up beneﬁts and taking compliance classes. Travis Kalanick joined us for a Q&A where he showed off pictures of the self-driving car that the ATG arm of the company was building (it just looked like an SUV with cameras) and the more visually impressive mapping car that was gathering data for the self-driving project (it looked like a weird Dalek on wheels).
When I met the members of the Crystal Ball team, it was about four people (not including myself). Everyone was heavily biased towards back-end work. I was given a brief tour of the fourth floor of 1455 Market St to understand the problem that the team was solving.
“You see all these desks?”
“This is where the data scientists sit. They build data science models in R. They run those models on data that they download from Vertica.”
“The problem is that the models are slow and take up a lot of resources. So the data scientists have multiple laptops that they download the data to, then run the models overnight. When the data scientists arrive in the morning, the laptops whose models didn’t crash have data that’s maybe usable that day.”
“What about the other laptops?”
“We don’t have the data we need and we lose money.”
This was a big problem for the business: they needed a way to take two kinds of inputs (data and code) and run the code to produce useful outputs. Or, hopefully useful. Testing a model meant running it, so the iteration cycle was very close to one iteration per day per laptop.
The team, when I joined, had the beginnings of a tool to automate this. It was called “R-Crusher” and it was essentially a system for scheduling work. They were able to make some API calls and code would be downloaded and executed, and an output ﬁle would appear in a directory eventually. As the ﬁrst (self-professed) front-end engineer on the team, it was my job to build the tool to expose this to the rest of the company.
I was grateful that I didn’t need to write any “real” code. I lived in the world of React building UIs with Uber’s in-house front-end framework (“Bedrock”). Any time I needed something, I could ask the back-end folks to update the R-Crusher API and I’d get some notes a few hours later to unblock me.
The ﬁrst version of the front-end for R-Crusher (“Wesley”) was ready in very little time—maybe a few weeks from the point I joined? It was a joy.
The next 6-7 months were a hectic rush. I was tasked with hiring more front-end engineers. I built a front-end team of seven people. We added user-facing features to Wesley and R-Crusher (“Can we have a text box that only takes capital letters?” “Can we make this text box allow a number whose maximum value is this other text box?”) and debugging tools for the team (“Can we see the output log of the running jobs?”).
There were effectively only two things that people were working on at Uber in 2016:
The app rewrite/redesign (which launched in November 2016)
Uber China
All of my work and the team’s work, ultimately, was to support Uber China. R-Crusher was a tool to help get the data we needed to compete with Didi. Nobody really cared very much about the processes for the US and any other country—we had less to lose and Lyft wasn’t seen as remarkable competition except in the handful of cities they were operating at scale. China was a make-or-break opportunity for Uber, China was only going to succeed if we had the data for it, and the data was going to come (at least in part) from R-Crusher.
Over the summer of 2016, we came up against a new twist on the project. We had a model that ran overnight to generate data for anticipated ridership in China. That data wasn’t useful on its own, but if you fed it into a tab on a special Excel spreadsheet, you’d get a little interactive Excel tool for choosing driver incentives. Our job was to take that spreadsheet and make it available as the interface for this model’s data.
Now, this was no small feat on the back-end or front-end. First, the data needed to be run and moved to an appropriate location. Then, we had a challenging problem: we needed to take all the logic from this spreadsheet (with hundreds if not thousands of formulas across multiple sheets) and turn it into a UI that Uber China city teams could log into to use. The direction we got from the head of ﬁnance at Uber (who, for whatever reason, was seemingly responsible for the project) was “take this [the spreadsheet] and put it in on the website [Wesley].”
We asked how we could simplify the UI to meet the resource constraints (engineer time) we had. “The city teams only know how to use Excel, just make it like Excel.” We tried explaining why that was hard and what we could conﬁdently deliver in the time allotted. “Every day that we don’t have this tool as specced, we’re losing millions of dollars.” There was no budging on the spec.
In 2015, I had built a prototype of a tool at Box. Box had a collaborative note-taking product called Box Notes (based on Hackpad). I had the idea to make a similar project for working with numbers: sometimes you didn’t need a full spreadsheet, you just needed a place to put together a handful of formulas, format it with some headings and text, and share it with other people. Sort of like an ipython notebook for spreadsheets. I called it Box Sums.
When I built this, I created a simple React-based spreadsheet UI and a super basic spreadsheet formula engine. A few hundred lines of code. And if you dropped an XLS/XLSX ﬁle onto the page, I used a Node library to parse it and extract the contents.
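For a sense of scale, a “super basic spreadsheet formula engine” really can fit in a few hundred lines. Here’s a minimal sketch of the idea in JavaScript — purely illustrative, not the actual Box Sums code: cells hold either literals or `=`-prefixed formulas, and evaluation resolves cell references recursively.

```javascript
// Minimal spreadsheet formula engine sketch (illustrative only).
// Cells map refs like "A1" to either a literal ("3") or a formula ("=A1+1").
function evaluateCell(cells, ref, seen = new Set()) {
  if (seen.has(ref)) throw new Error(`Circular reference at ${ref}`);
  seen.add(ref);
  const raw = cells[ref];
  if (typeof raw !== "string" || !raw.startsWith("=")) return Number(raw);
  // Substitute each referenced cell (e.g. A1, B12) with its evaluated value.
  const expr = raw.slice(1).replace(/[A-Z]+[0-9]+/g, (r) =>
    evaluateCell(cells, r, new Set(seen))
  );
  // A real engine would parse the expression; evaluating the substituted
  // arithmetic directly keeps the sketch short.
  return Function(`"use strict"; return (${expr});`)();
}

const sheet = { A1: "3", A2: "4", A3: "=A1*A1 + A2*A2", A4: "=A3 * 2" };
console.log(evaluateCell(sheet, "A3")); // 25
console.log(evaluateCell(sheet, "A4")); // 50
```

Note that this naive version throws on circular references — which is exactly the assumption that comes back to bite later in the story.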
I demoed Box Sums to the Box Notes team at some point, and they nitpicked the UI and implementation details (“What if two people type in the same cell at the same time? They’ll just overwrite each other.” 🙄). Nothing came of it, but I took the code and shoved it into my back pocket for a rainy day.
My idea was to take this code and spruce it up for Uber’s use case. Fill in all the missing features in the spreadsheet engine so that everything the spreadsheet needed to run was supported. The back-end could serve up a 2D array of data representing the ridership data input, and we’d feed that in. And the UI would simply make all but the cells that were meant to be interactive read-only instead.
I got to work polishing up the code. I parsed the XLS ﬁle and extracted all the formulas. I found all of the functions those formulas used, and implemented them in my spreadsheet engine. I then went through and implemented all the fun syntax that I hadn’t implemented for my demo at Box (like absolute cell references, where inserting $ characters into cell references makes them keep their column/row when you drag the corner of the cell, or referencing cells in other sheets).
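The `$`-anchoring behavior works roughly like this: when a formula is copied by some column/row offset, un-anchored parts of each reference shift by that offset while `$`-anchored parts stay fixed. A small sketch (function names are illustrative, not the actual code):

```javascript
// Convert a column label like "A" or "AA" to a 1-based number and back.
function colToNum(col) {
  return [...col].reduce((n, c) => n * 26 + (c.charCodeAt(0) - 64), 0);
}
function numToCol(n) {
  let s = "";
  while (n > 0) {
    s = String.fromCharCode(((n - 1) % 26) + 65) + s;
    n = Math.floor((n - 1) / 26);
  }
  return s;
}

// Shift a cell reference by (dCols, dRows), honoring $-anchors:
// a "$" before the column or row pins that part in place when copied.
function shiftRef(ref, dCols, dRows) {
  const [, dollarCol, col, dollarRow, row] =
    ref.match(/^(\$?)([A-Z]+)(\$?)([0-9]+)$/);
  const newCol = dollarCol ? col : numToCol(colToNum(col) + dCols);
  const newRow = dollarRow ? row : String(Number(row) + dRows);
  return dollarCol + newCol + dollarRow + newRow;
}

console.log(shiftRef("A1", 1, 1));   // "B2"   (fully relative: both parts shift)
console.log(shiftRef("$A1", 1, 1));  // "$A2"  (column anchored, row shifts)
console.log(shiftRef("A$1", 1, 1));  // "B$1"  (row anchored, column shifts)
console.log(shiftRef("$A$1", 1, 1)); // "$A$1" (fully anchored)
```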
I sat in the black mirrored wall “spaceship hallway” of 1455’s ﬁfth ﬂoor with my headphones playing the same handful of songs on repeat. I spent the early days ﬁxing crashes and debugging errant NaNs. Then I dug into big performance issues. And ﬁnally, I spent time polishing the UI.
When everything was working, I started checking my work. I entered some values into the Excel version and my version, and compared the numbers.
The answers were all almost correct. After a week of work, I was very pleased to see the sheet working to the extent that it was, but having answers that are very, very close is objectively worse than having numbers that are wildly wrong: wildly wrong numbers usually mean a simple logic problem. Almost-correct numbers mean something more insidious.
I started stepping through the debugger as the calculation engine crawled the spreadsheet’s formula graph. I compared computed values to what they were in the Excel version. The sheer size of the spreadsheet made it almost impossible to trace through all of the formulas (there were simply too many), and I didn’t have another spreadsheet which exhibited this problem.
I googled for esoteric knowledge about Excel, rounding, or anything related to non-integer numbers that I could ﬁnd. It all led nowhere.
Just as I was about to resign myself to stepping through the thousands of formulas and recomputations, I decided to head down to the fourth ﬂoor to just ask one of the data scientists.
I approached their desks. They looked up and had a look of recognition.
“Hey guys. I’m working on the driver incentive spreadsheet. I’m trying to mimic the calculations that you have in Excel, but my numbers are all just a little bit off. I was hoping you might have some ideas about what’s going on.”
“Can I take a look?” I showed him my laptop and he played with a few numbers in the inputs. “Oh, that’s the circ.”
“We use a circular reference in Excel to do linear regression.”
My mind was blown. I had thought, naively perhaps, that circular references in Excel simply created an error. But this data scientist showed me that Excel doesn’t error on circular references—if the computed value of the cell converges.
You see, when formulas create a circular reference, Excel will run that computation up to a number of times. If, in those computations, the magnitude of the difference between the most recent and previous computed values for the cell falls below some pre-deﬁned epsilon value (usually a very small number, like 0.00001), Excel will stop recomputing the cell and pretend like it ﬁnished successfully.
I thanked the data scientists and returned to the spaceship hallway to think about what the fuck I was going to do next.
The changes I needed to make were pretty straightforward. First, it required knowing whether a downstream cell was already computed upstream (for whatever definitions of “downstream” and “upstream” you want to use; there’s not really a good notion of “up” and “down” in a spreadsheet or this graph). If you went to recompute a cell with a formula that referenced an already-recomputed cell, you’d simply keep track of the number of times you computed that cell. If the recomputed value was close enough to the previous value that it fell below the epsilon, you simply pretended like you didn’t recompute the cell and moved on. If it didn’t, you’d continue the process until the number of iterations that you’re keeping track of hit some arbitrary limit (for me, 1000), at which point you’d bail.
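That recomputation loop amounts to a fixed-point iteration. Here’s a minimal sketch of the idea (illustrative; `solveCircular` and its defaults are mine, though the epsilon of 1e-5 and the cap of 1000 match the numbers in the story, and Excel’s own documented defaults are 100 iterations with a maximum change of 0.001):

```javascript
// Excel-style iterative calculation for a circular reference: recompute
// the cell until successive values differ by less than epsilon, or bail
// after a fixed iteration cap.
function solveCircular(recompute, { epsilon = 1e-5, maxIterations = 1000 } = {}) {
  let value = 0; // Excel starts circular cells from 0
  for (let i = 0; i < maxIterations; i++) {
    const next = recompute(value);
    if (Math.abs(next - value) < epsilon) return next; // converged
    value = next;
  }
  throw new Error("Circular reference did not converge");
}

// Example: the self-referencing formula A1 = 0.5*A1 + 1 converges to 2.
const result = solveCircular((a1) => 0.5 * a1 + 1);
console.log(result); // ≈ 2
```

The same trick is what lets a spreadsheet express something like an iteratively-fitted regression without any explicit loop construct.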
The changes took a day and a half to make. And would you believe, it worked. The outputs were exactly what they should have been. I wrote tests, I integrated the damn thing into Wesley, and I brought it to the team. We delivered the project in the second week of July.
Two things happened. The ﬁrst was of little consequence but I enjoy telling the story. Rakesh, the team lead working on the back-end, asked me where I got the Excel component.
“But where did you get the Excel engine?”
“But how are you running Excel in the browser?”
“Everything you see is built by me, from scratch.”
He simply couldn’t believe that I’d written a full spreadsheet engine that ran in the browser. All things considered, it was maybe only five thousand lines of code total. A gnarly five thousand lines, but (obviously) not intractable. He had assumed the sheer complexity of that option made it an unreasonable project to take on.
I do think that if I had challenged Rakesh—under no time pressure—to build a spreadsheet engine, he’d get to a working solution as well. My recollection is that he was a very competent engineer. Despite that, I think his intuition about the complexity and scope was based on bad assumptions about what we were ultimately accomplishing, and it’s a good case study in estimating reasonable project outcomes. It goes to show that the sheer imagined complexity of a possible solution is enough to disqualify it in some folks’ minds, even if it’s the best possible outcome.
The second thing that happened was we shipped. We got the Uber China city team members logging in and using the tool. They plugged away at it, and to my knowledge, the numbers it produced drove driver incentives.
That was the third week of July.
The last week of July, the head of ﬁnance rushed over to our desks.
“Why can you see the formulas?”
“When you click in the cells of the spreadsheet you can see the formulas. You shouldn’t be able to do that.”
“You said to make it just like Excel.”
“People working for Didi apply for intern jobs at Uber China and then exﬁltrate our data. We can’t let them see the formulas or they’ll just copy what we do!”
Apparently that was a thing. I remember being only half-surprised at the time. I hadn’t considered that our threat model might include employees leaking the computations used to produce the numbers in question. Of course, short of moving the computations up to the server, we couldn’t *really* protect the formulas, but that was beyond the scope of what we were being asked to do.
The fix was straightforward: I updated the UI to simply not show formulas when you clicked in cells. Easy enough, I guess.
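A minimal sketch of what that kind of fix can look like, assuming a hypothetical `displayValue` helper that the cell editor consults (the names here are illustrative, not the tool’s real API):

```typescript
// Sketch of the fix, not the actual Uber code: Cell and displayValue
// are illustrative assumptions, not the tool's real API.

interface Cell {
  raw: string;      // what the user typed, e.g. "=A1*0.2" or "500"
  computed: number; // the evaluated result of `raw`
}

// Decide what text a cell shows in the grid or the editor.
function displayValue(cell: Cell, focused: boolean): string {
  // Unfocused cells always show the computed result, just like Excel.
  if (!focused) return String(cell.computed);
  // Before the fix, a focused formula cell returned cell.raw here,
  // exposing the formula. After the fix, formulas still render as
  // their computed value; only plain literals remain editable as typed.
  return cell.raw.startsWith("=") ? String(cell.computed) : cell.raw;
}

console.log(displayValue({ raw: "=A1*0.2", computed: 12.4 }, true)); // "12.4"
console.log(displayValue({ raw: "500", computed: 500 }, true));      // "500"
```

Of course, a change like this only hides formulas cosmetically; they still ship to the client, so real protection would require moving evaluation to the server.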
The first week of August 2016, Uber China was sold to Didi. Most of us found out because our phones started dinging with news stories about it. We all stopped working and waited until an email arrived a couple hours later announcing the deal internally. If I remember correctly, I just left the office and headed home around lunch time because our team didn’t have anything to do that wasn’t Uber China-related (yet).
After Uber China evaporated, the tool was unceremoniously ripped out of Wesley. It was a bespoke UI for a data job that would never run again. We were never asked to build Excel in the browser again.
I feel no sense of loss or disappointment. I wasn’t disappointed at the time, either.
My first reaction was to publish the code on GitHub.
My second reaction was to move on. There was maybe a part of me—my younger self—that was disappointed that this major piece of code that I’d labored over had been so gently used before being retired. I wasn’t recognized for it in any material way. My manager didn’t even know what I’d built.
On the other hand, we as engineers need to be real with ourselves. Every piece of code you write as an engineer is legacy code. Maybe not right now, but it will be. Someone will take joy in ripping it out someday. Every masterpiece will be gleefully replaced; it’s just a matter of time. So why get precious about how long that period of time is?
I often hear fairly junior folks saying things to the effect of “I’m here to grow as an engineer.” Growing as an engineer has little to do with the longevity of your output as an engineer. “Growing as an engineer” means becoming a better engineer, and becoming a better engineer (directly or indirectly) means getting better at using your skills to create business value. Early in your career, the work you do will likely have far less longevity than the work you do later on, simply because you gain maturity over time and learn to build tools that tend to be useful for longer.
Sometimes the business value your work generates comes in the form of technical output. Sometimes it’s how you work with the people around you (collaborating, mentoring, etc.). Sometimes it’s about how you support the rest of the team. There are many ways that business value is created.
The end (demise?) of Uber China implicitly meant that there was no business value left to create with this project. Continuing to push on it wouldn’t have gotten me or the business anywhere, even if what I’d done was the best possible solution to the problem.
Sometimes that’s just how it is. The devops saying “Cattle, not pets” is apt here: code (and by proxy, the products built with that code) is cattle. It does a job for you, and when that job is no longer useful, the code is ready to be retired. If you treat the code like a pet for sentimental reasons, you’re working in direct opposition to the interests of the business.
As much as I’d love to work on Uber Excel (I’m ashamed to admit that I thought of “Uber Sheets” far too long after I left the company), I was hired to solve problems. Having Excel in the browser was a useful solution, but the problem wasn’t showing spreadsheets in the browser: the problem was getting a specific UI delivered to the right users quickly.
It’s easy to treat a particularly clever or elegant piece of code as a masterpiece. It might very well be a beautiful trinket! But we engineers are not in the business of beautiful trinkets, we’re in the business of outcomes. In the same way that a chef shouldn’t be disappointed that a beautiful plate of food is “destroyed” by a hungry customer eating it, we shouldn’t be disappointed that our beautiful git repos are marked as “Archived” and shuffled off the production kube cluster.
The attitudes that we have towards the things that we make are good indicators of maturity. It’s natural for us to want our work to have staying power and longevity. It’s extremely human to want the validation of our beautiful things being seen and used and recognized; it means we’ve done well. On the other hand, our work being discarded gives us an opportunity to understand what (if anything) we could have done better:
- Did we build something that didn’t meet the project constraints?
- Did we build what was requested, but what was requested wasn’t the right thing to ask for?
- Did the requested solution actually address the needs of the end user?
- What questions didn’t we ask the stakeholders that could have better aligned our output with the business need that triggered the request to engineering?
- Were the expectations that we set around the project inaccurate or vague?
- Did the project need to be as robust as what was delivered? Could a simpler or less clever solution have solved the need equally well?
- Did we focus on the wrong success criteria?
- Did we even have success criteria beyond “build what was requested?”
- Who could have been consulted before or after delivery of the project to validate whether all of the actual project requirements were satisfied?
You won’t have the opportunity to take lessons away from the project if you see the sunsetting of the project as a failure: there’s often much to learn about what non-technical aspects of the project broke down. Perhaps there aren’t any, and maybe management is just a group of fools! But often that’s not the case; your delicately milled cog wasn’t ripped out of the machine because it was misunderstood, it was ripped out because it didn’t operate smoothly as part of the larger system it was installed in.