10 interesting stories served every morning and every evening.




1 1,289 shares, 54 trendiness

The Tyranny of the Marginal User

A friend and I were recently lamenting the strange death of OKCupid. Seven years ago when I first tried online dating, the way it worked is that you wrote a long essay about yourself and what you were looking for. You answered hundreds of questions about your personality, your dreams, your desires for your partner, your hard nos. Then you saw who in your area was most compatible, with a “match score” between 0 and 100%. The match scores were eerily good. Pretty much every time I read the profile of someone with a 95% match score or higher, I fell a little bit in love. Every date I went on was fun; the chemistry wasn’t always there but I felt like we could at least be great friends.

I’m now quite skeptical of quantification of romance and the idea that similarity makes for good relationships. I was somewhat skeptical then, too. What I did not expect, what would have absolutely boggled young naive techno-optimist Ivan, was that 2016-era OKCupid was the best that online dating would ever get. That the tools that people use to find the most important relationship in their lives would get worse, and worse, and worse. OKCupid, like the other acquisitions of Match.com, is now just another Tinder clone - see face, swipe left, see face, swipe right. A digital nightclub. And I just don’t expect to meet my wife in a nightclub.

This isn’t just dating apps. Nearly all popular consumer software has been trending towards minimal user agency, infinitely scrolling feeds, and garbage content. Even that crown jewel of the Internet, Google Search itself, has decayed to the point of being unusable for complicated queries. Reddit and Craigslist remain incredibly useful and valuable precisely because their software remains frozen in time. Like old Victorian mansions in San Francisco they stand, shielded by a quirk of fate from the winds of capital, reminders of a more humane age.

How is it possible that software gets worse, not better, over time, despite billions of dollars of R&D and rapid progress in tooling and AI? What evil force, more powerful than Innovation and Progress, is at work here?

In my six years at Google, I got to observe this force up close, relentlessly killing features users loved and eroding the last vestiges of creativity and agency from our products. I know this force well, and I hate it, but I do not yet know how to fight it. I call this force the Tyranny of the Marginal User.

Simply put, companies building apps have strong incentives to gain more users, even users that derive very little value from the app. Sometimes this is because you can monetize low value users by selling them ads. Often, it’s because your business relies on network effects and even low value users can help you build a moat. So the north star metric for designers and engineers is typically something like Daily Active Users, or DAUs for short: the number of users who log into your app in a 24-hour period.

What’s wrong with such a metric? A product that many users want to use is a good product, right? Sort of. Since most software products charge a flat per-user fee (often zero, because ads), and economic incentives operate on the margin, a company with a billion-user product doesn’t actually care about its billion existing users. It cares about the marginal user - the billion-plus-first user - and it focuses all its energy on making sure that marginal user doesn’t stop using the app. Yes, if you neglect the existing users’ experience for long enough they will leave, but in practice apps are sticky and by the time your loyal users leave everyone on the team will have long been promoted.

So in practice, the design of popular apps caters almost entirely to the marginal user. But who is this marginal user, anyway? Why does he have such bad taste in apps?

Here’s what I’ve been able to piece together about the marginal user. Let’s call him Marl. The first thing you need to know about Marl is that he has the attention span of a goldfish on acid. Once Marl opens your app, you have about 1.3 seconds to catch his attention with a shiny image or triggering headline, otherwise he’ll swipe back to TikTok and never open your app again.

Marl’s tolerance for user interface complexity is zero. As far as you can tell he only has one working thumb, and the only thing that thumb can do is flick upwards in a repetitive, zombielike scrolling motion. As a product designer concerned about the wellbeing of your users, you might wonder - does Marl really want to be hate-reading Trump articles for 6 hours every night? Is Marl okay? You might think to add a setting where Marl can enter his preferences about the content he sees: less politics, more sports, simple stuff like that. But Marl will never click through any of your hamburger menus, never change any setting to a non-default. You might think Marl just doesn’t know about the settings. You might think to make things more convenient for Marl, perhaps add a little “see less like this” button below a piece of content. Oh boy, are you ever wrong. This absolutely infuriates Marl. On the margin, the handful of pixels occupied by your well-intentioned little button replaced pixels that contained a triggering headline or a cute image of a puppy. Insufficiently stimulated, Marl throws a fit and swipes over to TikTok, never to return to your app. Your feature decreases DAUs in the A/B test. In the launch committee meeting, you mumble something about “user agency” as your VP looks at you with pity and scorn. Your button doesn’t get deployed. You don’t get your promotion. Your wife leaves you. Probably for Marl.

Of course, “Marl” isn’t always a person. Marl can also be a state of mind. We’ve all been Marl at one time or another - half consciously scrolling in bed, in line at the airport with the announcements blaring, reflexively opening our phones to distract ourselves from a painful memory. We don’t usually think about Marl, or identify with him. But the structure of the digital economy means most of our digital lives are designed to take advantage of this state. A substantial fraction of the world’s most brilliant, competent, and empathetic people, armed with near-unlimited capital and increasingly god-like computers, spend their lives serving Marl.

By contrast, consumer software tools that enhance human agency, that serve us when we are most creative and intentional, are often built by hobbyists and used by a handful of nerds. If such a tool ever gets too successful one of the Marl-serving companies, flush with cash from advertising or growth-hungry venture capital, will acquire it and kill it. So it goes.

Thanks to Ernie French (fuseki.net) for many related conversations and comments on this essay.

...

Read the original on nothinghuman.substack.com »

2 951 shares, 35 trendiness

sqlite, pandas, gnuplot and friends

When was the Dollar highest against the Euro?

Here is a small program that calculates it:
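A sketch of that program, reconstructed from the walkthrough below (the exact SQL and flags are assumptions, but the shape - curl piped through gunzip into sqlite3 - is as described):

    # download -> decompress -> query, all in one pipeline
    curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
      | gunzip \
      | sqlite3 -csv :memory: \
          '.import /dev/stdin stdin' \
          'SELECT Date FROM stdin ORDER BY CAST(USD AS REAL) ASC LIMIT 1;'
    # the Dollar is "highest" when one Euro buys the fewest Dollars,
    # hence the ascending sort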

The output: 2000-10-26. (Try running it yourself.)

The curl bit downloads the official historical data that the European Central Bank publishes on the position of the Euro against other currencies. (The -s flag just removes some noise from standard error.)

That data comes as a zipfile, which gunzip will decompress.

sqlite3 queries the csv inside. :memory: tells sqlite to use an in-memory database. After that, .import /dev/stdin stdin tells sqlite to load standard input into a table called stdin. The string that follows that is a SQL query.

Although pulling out a simple max is easy, the data shape is not ideal. It’s in “wide” format - a Date column, and then an extra column for every currency. Here’s the csv header for that file:
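Abbreviated (the real header has a column for each of several dozen currencies, plus the stray trailing comma discussed below):

    Date,USD,JPY,BGN,CZK,DKK,GBP,...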

When doing filters and aggregations, life is easier if the data is in “long” format, like this:
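That is, one row per date per currency (rows here are illustrative):

    Date,currency,rate
    1999-01-04,USD,1.1789
    1999-01-04,JPY,133.73
    1999-01-05,USD,1.1790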

Switching from wide to long is a simple operation, commonly called a “melt”. Unfortunately, it’s not available in SQL.

No matter, you can melt with pandas:
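A sketch of that step - pandas’ read_csv/melt/to_csv chained inside the same kind of pipeline (the column and argument names follow pandas; the URL is the same ECB file):

    curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
      | gunzip \
      | python3 -c 'import sys; import pandas as pd; pd.read_csv(sys.stdin).melt(id_vars=["Date"], var_name="currency", value_name="rate").to_csv(sys.stdout, index=False)'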

There is one more problem. The file mungers at ECB have wrongly put a trailing comma at the end of every line. This makes csv parsers pick up an extra, blank column at the end. Our sqlite query didn’t notice, but these commas interfere with the melt, creating a whole set of junk rows at the end:
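Schematically, the tail of the melted output looks like this (pandas names the phantom column something like “Unnamed: 41”, depending on its position; rows are illustrative):

    Date,currency,rate
    2023-09-15,Unnamed: 41,
    2023-09-14,Unnamed: 41,
    ...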

The effects of that extra comma can be removed via pandas by adding one more thing to our method chain: .iloc[:, :-1], which effectively says “give me all rows (:) and all but the last column (:-1)”. So:
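Putting it all together (same assumptions as above):

    curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
      | gunzip \
      | python3 -c 'import sys; import pandas as pd; pd.read_csv(sys.stdin).iloc[:, :-1].melt(id_vars=["Date"], var_name="currency", value_name="rate").to_csv(sys.stdout, index=False)'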

Does everyone who uses this file have to repeat this data shitwork?

Tragically, the answer is yes. As they say: “data janitor: nobody’s dream, everyone’s job”.

In full fairness, though, the ECB foreign exchange data is probably in the top 10% of all open data releases. Usually, getting viable tabular data out of someone is a much more tortuous and involved process.

Some things we didn’t have to do in this case: negotiate access (for example by paying money or talking to a salesman); deposit our email address/company name/job title into someone’s database of qualified leads; observe any quota; authenticate (often a substantial side-quest of its own); read any API docs at all; or deal with any issues more serious than basic formatting and shape.

So eurofxref-hist.zip is, relatively speaking, pretty nice actually.

But anyway - I’ll put my cleaned up copy into a csvbase table so you, dear reader, can skip the tedium and just have fun.

Here’s how I do that:
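Sketched (the username and table name in the csvbase URL are hypothetical - substitute your own):

    curl -s https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip \
      | gunzip \
      | python3 -c 'import sys; import pandas as pd; pd.read_csv(sys.stdin).iloc[:, :-1].melt(id_vars=["Date"], var_name="currency", value_name="rate").to_csv(sys.stdout, index=False)' \
      | curl -n --upload-file - https://csvbase.com/yourname/eurofxref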

All I’ve done is add another curl, to HTTP PUT the csv file into csvbase.

--upload-file - uploads from standard input to the given url (via HTTP PUT). If the table doesn’t already exist in csvbase, it is created. -n adds my credentials from my ~/.netrc. That’s it. Simples.

Alright, now the data cleaning phase is over, let’s do some more interesting stuff.
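For a first look, gnuplot’s “dumb” terminal renders the whole history as ASCII art (a sketch: the table URL is the hypothetical one from above, and the USD rows are pulled out with sqlite first so that gnuplot sees two columns, date and rate; the using 1:2 clause is explained below):

    curl -s https://csvbase.com/yourname/eurofxref.csv \
      | sqlite3 -csv :memory: '.import /dev/stdin fx' \
          "SELECT Date, rate FROM fx WHERE currency = 'USD';" \
      | gnuplot -e "set datafile separator comma; \
                    set xdata time; set timefmt '%Y-%m-%d'; \
                    set term dumb; \
                    plot '/dev/stdin' using 1:2 with lines notitle"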

That’s somewhat legible for over 6000 datapoints in an 80x25 character terminal. You can make out the broad trend. A reasonable data-ink ratio.

gnuplot is like a little mini-programming language of its own. Here’s what the above snippet does:

* using 1:2 with lines draws lines from columns 1 and 2 (the date and the rate respectively)

You can, of course, also draw graphs to proper images:

Outputting to SVG is only a bit more complicated than ascii art. In order for it to look decent you need to help gnuplot understand that it’s “timeseries” data - i.e. that the x axis is time; give a format for that time and then tell it to rotate the markings on the x axis so that they are readable. It’s a bit wordy though: let’s bind it to a bash function so we can reuse it:
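A sketch of such a function (the name and the size/rotation choices are mine, not from the article):

    plot_svg () {
        # expects two-column csv (date,value) on stdin; writes SVG to stdout
        gnuplot -e "set datafile separator comma; \
                    set term svg size 800,480; \
                    set xdata time; set timefmt '%Y-%m-%d'; \
                    set format x '%Y-%m-%d'; set xtics rotate by 45 right; \
                    plot '/dev/stdin' using 1:2 with lines notitle"
    }

    # usage:
    curl -s https://csvbase.com/yourname/eurofxref.csv \
      | sqlite3 -csv :memory: '.import /dev/stdin fx' \
          "SELECT Date, rate FROM fx WHERE currency = 'USD';" \
      | plot_svg > usd.svg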

So far, so good. But it would be nice to try out more sophisticated analyses: let’s try putting a nice rolling average in so that we can see a trend line:
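Something like this window function does the job (a sketch, reusing the plot_svg function from above; the 30-row window is an arbitrary choice, and the -noheader/-csv flags follow the duckdb CLI’s sqlite3-style options):

    duckdb -csv -noheader :memory: "
        SELECT Date,
               AVG(rate) OVER (ORDER BY Date ROWS BETWEEN 29 PRECEDING AND CURRENT ROW)
        FROM read_csv_auto('https://csvbase.com/yourname/eurofxref.csv')
        WHERE currency = 'USD'
        ORDER BY Date;" \
      | plot_svg > usd-trend.svg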

Smooth. If you don’t have duckdb installed, it’s not hard to adapt the above for sqlite3 (the query is the same). DuckDB is a tool I wanted to show because it’s a lot like sqlite, except columnar (rather than row-oriented). For me, though, the main value is its easy ergonomics.

Here is one of them: you can load csvs into table files straight from HTTP:
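For example (a sketch; recent DuckDB builds fetch the httpfs extension automatically for http(s) URLs - on older ones, run INSTALL httpfs; LOAD httpfs; first):

    duckdb fx.duckdb \
      "CREATE TABLE eurofxref AS
       SELECT * FROM read_csv_auto('https://csvbase.com/yourname/eurofxref.csv');"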

That’s pretty easy, and DuckDB does a reasonable job of inferring types. There are a lot of other usability niceties too: for example, it helpfully detects your terminal size and abridges tables by default rather than flooding your terminal with an enormous resultset. It has a progress bar for big queries! It can output markdown tables! Etc!

A lot is possible with a zipfile of data and just the programs that are either already installed or a quick brew install/apt install away. I remember how impressed I was when I was first shown this eurofxref-hist.zip by an old hand from foreign exchange when I worked in a bank. It was so simple: the simplest cross-organisation data interchange protocol I had then seen (and probably since).

A mere zipfile with a csv in it seems so diminutive, but in fact an enormous mass of financial applications use this particular zipfile every day. I’m pretty sure that’s why they’ve left those commas in - if they removed them now they’d break a lot of code.

When open data is made really easily available, it also does double duty as an open API. After all, for the largeish fraction of APIs which are less about calling remote functions than about exchanging data, what is the functional difference?

So I think the ECB’s zipfile is a pretty good starting point for a data interchange format. I love the simplicity - and I’ve tried to keep that with csvbase.

In csvbase, every table has a single url, following the form:
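    https://csvbase.com/{username}/{table_name}

(For example, the hypothetical table used above lives at https://csvbase.com/yourname/eurofxref.)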

And on each url, there are four main verbs:

When you GET: you get a csv (or a web page, if you’re in a browser).

When you PUT a new csv: you create a new table, or overwrite the existing one.

When you POST a new csv: you bulk add more rows to an existing table.

When you DELETE: that table is no more.

To authenticate, just use HTTP Basic Auth.
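In curl terms, a sketch of all four (same hypothetical table; the Content-Type header on POST is an assumption):

    # GET: read the table as csv
    curl https://csvbase.com/yourname/eurofxref.csv
    # PUT: create or overwrite it
    curl -n --upload-file eurofxref.csv https://csvbase.com/yourname/eurofxref
    # POST: append rows
    curl -n -X POST -H 'Content-Type: text/csv' --data-binary @more-rows.csv \
        https://csvbase.com/yourname/eurofxref
    # DELETE: drop the table
    curl -n -X DELETE https://csvbase.com/yourname/eurofxref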

Could it be any simpler? If you can think of a way: write me an email.

I said above that most SQL databases don’t have a “melt” operation. The ones that I know of that do are Snowflake and MS SQL Server. One question that SQL-knowers frequently ask is: why does anyone use R or Pandas at all when SQL already exists? A key reason is that R and Pandas are very strong on data cleanup.

One under-appreciated feature of bash pipelines is that they are multi-process. Each program runs independently, in its own process. While curl is downloading data from the web, grep is filtering it, sqlite is querying it and perhaps curl is uploading it again, etc. All in parallel, which can, surprisingly, make it very competitive with fancy cloud alternatives.

Why was the Euro so weak back in 2000? It was launched, without coins or notes, in January 1999. The Euro was, initially, a sort of in-game currency for the European Union. It existed only inside banks - so there were no notes or coins for it. That all came later. So did belief - early on it didn’t look like the little Euro was going to make it: so the rate against the Dollar was 0.8252. That means that in October 2000, a Dollar would buy you 1.21 Euros (to reverse exchange rates, do 1/rate). Nowadays the Euro is much stronger: a Dollar would buy you less than 1 Euro.

...

Read the original on csvbase.com »

3 696 shares, 26 trendiness

38TB of data accidentally exposed by Microsoft AI researchers

* Microsoft’s AI research team, while publishing a bucket of open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data — including a disk backup of two employees’ workstations.

* The backup includes secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.

* The researchers shared their files using an Azure feature called SAS tokens, which allows you to share data from Azure Storage accounts.

* The access level can be limited to specific files only; however, in this case, the link was configured to share the entire storage account — including another 38TB of private files.

* This case is an example of the new risks organizations face when starting to leverage the power of AI more broadly, as more of their engineers now work with massive amounts of training data. As data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards.

As part of the Wiz Research Team’s ongoing work on accidental exposure of cloud-hosted data, the team scanned the internet for misconfigured storage containers. In this process, we found a GitHub repository under the Microsoft organization named robust-models-transfer. The repository belongs to Microsoft’s AI research division, and its purpose is to provide open-source code and AI models for image recognition. Readers of the repository were instructed to download the models from an Azure Storage URL:

However, this URL allowed access to more than just open-source models. It was configured to grant permissions on the entire storage account, exposing additional private data by mistake.

Our scan shows that this account contained 38TB of additional data — including Microsoft employees’ personal computer backups. The backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from 359 Microsoft employees.

In addition to the overly permissive access scope, the token was also misconfigured to allow “full control” permissions instead of read-only. This means that not only could an attacker view all the files in the storage account, but they could delete and overwrite existing files as well.

This is particularly interesting considering the repository’s original purpose: providing AI models for use in training code. The repository instructs users to download a model data file from the SAS link and feed it into a script. The file’s format is ckpt, a format produced by the TensorFlow library. It’s formatted using Python’s pickle formatter, which is prone to arbitrary code execution by design. This means an attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it.

However, it’s important to note this storage account wasn’t directly exposed to the public; in fact, it was a private storage account. The Microsoft developers used an Azure mechanism called “SAS tokens”, which allows you to create a shareable link granting access to an Azure Storage account’s data — while upon inspection, the storage account would still seem completely private.

In Azure, a Shared Access Signature (SAS) token is a signed URL that grants access to Azure Storage data. The access level can be customized by the user; the permissions range between read-only and full control, while the scope can be either a single file, a container, or an entire storage account. The expiry time is also completely customizable, allowing the user to create never-expiring access tokens. This granularity provides great agility for users, but it also creates the risk of granting too much access; in the most permissive case (as we’ve seen in Microsoft’s token above), the token can allow full control permissions, on the entire account, forever — essentially providing the same access level as the account key itself.
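Schematically, an Account SAS URL is just the resource URL plus a handful of signed query parameters (the values below are illustrative, not a working token):

    https://<account>.blob.core.windows.net/<container>/<blob>
        ?sv=2021-08-06             # service version
        &ss=b                      # services: blob
        &srt=sco                   # resource types: service, container, object
        &sp=racwdl                 # permissions: read, add, create, write, delete, list
        &se=2051-10-05T00:00:00Z   # expiry
        &sig=...                   # HMAC-SHA256 over the fields above, keyed by the account key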

There are three types of SAS tokens: Account SAS, Service SAS, and User Delegation SAS. In this blog we will focus on the most popular type — Account SAS tokens, which were also used in Microsoft’s repository.

Generating an Account SAS is a simple process. The user configures the token’s scope, permissions, and expiry date, and generates the token. Behind the scenes, the browser downloads the account key from Azure and signs the generated token with the key. This entire process is done on the client side; it’s not an Azure event, and the resulting token is not an Azure object.
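The same client-side nature shows up in the Azure CLI: generating a token is a local signing operation, not an API call that Azure records (a sketch; the account name and key are placeholders):

    az storage account generate-sas \
        --account-name mystorageaccount \
        --account-key "$ACCOUNT_KEY" \
        --services b \
        --resource-types sco \
        --permissions racwdl \
        --expiry 2051-10-05T00:00Z \
        --https-only
    # prints the token's query string; nothing about it is stored server-side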

Because of this, when a user creates a highly-permissive non-expiring token, there is no way for an administrator to know this token exists and where it circulates. Revoking a token is no easy task either — it requires rotating the account key that signed the token, rendering all other tokens signed by the same key ineffective as well. These unique pitfalls make this service an easy target for attackers looking for exposed data.

Besides the risk of accidental exposure, the service’s pitfalls make it an effective tool for attackers seeking to maintain persistence on compromised storage accounts. A recent Microsoft report indicates that attackers are taking advantage of the service’s lack of monitoring capabilities in order to issue privileged SAS tokens as a backdoor. Since the issuance of the token is not documented anywhere, there is no way to know that it was issued and act against it.

SAS tokens pose a security risk, as they allow sharing information with external unidentified identities. The risk can be examined from several angles: permissions, hygiene, management and monitoring.

A SAS token can grant a very high access level to a storage account, whether through excessive permissions (like read, list, write or delete), or through wide access scopes that allow users to access adjacent storage containers.

SAS tokens have an expiry problem — our scans and monitoring show organizations often use tokens with a very long (sometimes infinite) lifetime, as there is no upper limit on a token’s expiry. This was the case with Microsoft’s token, which was valid until 2051.

Account SAS tokens are extremely hard to manage and revoke. There isn’t any official way to keep track of these tokens within Azure, nor to monitor their issuance, which makes it difficult to know how many tokens have been issued and are in active use. The reason even issuance cannot be tracked is that SAS tokens are created on the client side; it is therefore not an Azure-tracked activity, and the generated token is not an Azure object. Because of this, even what appears to be a private storage account may potentially be widely exposed.

As for revocation, there isn’t a way to revoke a singular Account SAS; the only solution is revoking the entire account key, which invalidates all the other tokens issued with the same key as well.

Monitoring the usage of SAS tokens is another challenge, as it requires enabling logging on each storage account separately. It can also be costly, as the pricing depends on the request volume of each storage account.

SAS security can be significantly improved with the following recommendations.

Due to the lack of security and governance over Account SAS tokens, they should be considered as sensitive as the account key itself. Therefore, it is highly recommended to avoid using Account SAS for external sharing. Token creation mistakes can easily go unnoticed and expose sensitive data.

For external sharing, consider using a Service SAS with a Stored Access Policy. This feature connects the SAS token to a server-side policy, providing the ability to manage policies and revoke them in a centralized manner.

If you need to share content in a time-limited manner, consider using a User Delegation SAS, since their expiry time is capped at 7 days. This feature connects the SAS token to Azure Active Directory’s identity management, providing control and visibility over the identity of the token’s creator and its users.

Additionally, we recommend creating dedicated storage accounts for external sharing, to ensure that the potential impact of an over-privileged token is limited to external data only.

To avoid SAS tokens completely, organizations will have to disable SAS access for each of their storage accounts separately. We recommend using a CSPM to track and enforce this as a policy.

Another solution to disable SAS token creation is by blocking access to the “list storage account keys” operation in Azure (since new SAS tokens cannot be created without the key), then rotating the current account keys, to invalidate pre-existing SAS tokens. This approach would still allow creation of User Delegation SAS, since it relies on the user’s key instead of the account key.

To track active SAS token usage, you need to enable Storage Analytics logs for each of your storage accounts. The resulting logs will contain details of SAS token access, including the signing key and the permissions assigned. However, it should be noted that only actively used tokens will appear in the logs, and that enabling logging comes with extra charges — which might be costly for accounts with extensive activity.

Azure Metrics can be used to monitor SAS token usage in storage accounts. By default, Azure records and aggregates storage account events for up to 93 days. Utilizing Azure Metrics, users can look up SAS-authenticated requests, highlighting storage accounts with SAS token usage.

In addition, we recommend using secret scanning tools to detect leaked or over-privileged SAS tokens in artifacts and publicly exposed assets, such as mobile apps, websites, and GitHub repositories — as can be seen in the Microsoft case.

For more information on cloud secret scanning, please check out our recent talk from the fwd:cloudsec 2023 conference, “Scanning the internet for external cloud exposures”.

Wiz customers can leverage the Wiz secret scanning capabilities to identify SAS tokens in internal and external assets and explore their permissions. In addition, customers can use the Wiz CSPM to track storage accounts with SAS support.

* Detect SAS tokens: use this query to surface all SAS tokens in all your monitored cloud environments.

* Detect high-privilege SAS tokens: use the following control to detect highly-privileged SAS tokens located on publicly exposed workloads.

* CSPM rule for blocking SAS tokens: use the following Cloud Configuration Rule to track storage accounts allowing SAS token usage.

As companies embrace AI more widely, it is important for security teams to understand the inherent security risks at each stage of the AI development process.

The incident detailed in this blog is an example of two of these risks.

The first is oversharing of data. Researchers collect and share massive amounts of external and internal data to construct the required training information for their AI models. This poses inherent security risks tied to high-scale data sharing. It is crucial for security teams to define clear guidelines for external sharing of AI datasets. As we’ve seen in this case, separating the public AI data set into a dedicated storage account could’ve limited the exposure.

The second is the risk of supply chain attacks. Due to improper permissions, the public token granted write access to the storage account containing the AI models. As noted above, injecting malicious code into the model files could’ve led to a supply chain attack on other researchers who use the repository’s models. Security teams should review and sanitize AI models from external sources, since they can be used as a remote code execution vector.

The simple step of sharing an AI dataset led to a major data leak, containing over 38TB of private data. The root cause was the usage of Account SAS tokens as the sharing mechanism. Due to a lack of monitoring and governance, SAS tokens pose a security risk, and their usage should be as limited as possible. These tokens are very hard to track, as Microsoft does not provide a centralized way to manage them within the Azure portal. In addition, these tokens can be configured to last effectively forever, with no upper limit on their expiry time. Therefore, using Account SAS tokens for external sharing is unsafe and should be avoided.

In the wider scope, similar incidents can be prevented by granting security teams more visibility into the processes of AI research and development teams. As we see wider adoption of AI models within companies, it’s important to raise awareness of relevant security risks at every step of the AI development process, and to make sure the security team works closely with the data science and research teams to ensure proper guardrails are defined.

Microsoft’s account of this issue is available on the MSRC blog.

* Jul. 20, 2020 — SAS token first committed to GitHub; expiry set to Oct. 5, 2021

Hi there! We are Hillai Ben-Sasson (@hillai), Shir Tamari (@shirtamari), Nir Ohfeld (@nirohfeld), Sagi Tzadik (@sagitz_) and Ronen Shustin (@ronenshh) from the Wiz Research Team. We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors.

We would love to hear from you! Feel free to contact us on Twitter or via email: research@wiz.io.

...

Read the original on www.wiz.io »

4 681 shares, 29 trendiness

A Wake-up Call on the Importance of Open Source in Gaming

Recently Unity announced a pricing update which changes the pricing model from a simple pay-per-seat to a model whereby developers have to pay per install. This isn’t restricted to paid installs or plays; it applies to every plain install.

This price change has some particularly grave consequences on mobile, where revenue per install is highly variable. According to Ironsource (a Unity company), for example, the average revenue per ad impression is $0.02. Unity would like to charge $0.20 per install after your app has made $200,000 over the past year. What this means is that every one of your users has to see at least 10 ads after installing your app for it not to cost you money. These numbers are averages. If your app is more popular in emerging markets then the revenue per ad would be significantly lower, making the problem far worse.

To make matters (even) worse, this change will be applied retroactively to existing applications as well.

It’s like if Microsoft decided that you had to pay per person who read your document made in Word. If they did, you’d just stop using Word and instead switch over to Google Docs. An inconvenience perhaps, but not an existential threat to your survival as a document editor.

Video games, however, are not like this. Developing a game is a long, intricate process, often spanning months or even years. After this work is done, simply jumping ship to a different engine is not easy, requiring a considerable amount of time and money - if it is possible at all; perhaps the development team of the game has since disbanded.

Open source game engines, such as the Godot engine, protect your work by giving you rights to the engine that cannot be taken away or altered. Under Godot’s MIT license every user gets the rights to use the engine as they see fit, and to modify and distribute it; the only requirement is that you must acknowledge the original creators and cannot claim the engine as your own creation. Note that this applies only to the engine! Your game is your own!

While Unreal Engine currently does not have terms like Unity’s, there’s nothing stopping them from doing something similar. In fact, if Unity manages to get away with this it seems likely they will follow suit.

The Ramatak mobile studio enhances the open source Godot engine with the things you need for a mobile game: ads, in-app purchases, and analytics. And while Ramatak does charge for these services, if we were to try to alter the deal in a way you are uncomfortable with, you can simply take your game and use the open source version instead. You’d lose access to the Ramatak-specific enhancements, but your game is yours to do with as you please.

To underscore this point further: the first publication on our site talks about how we consider our relationship with our users. We must simply offer something our users want, since switching away from our offering to the open source version is so easy.

The (mobile) gaming landscape changes all the time, and we can’t predict what the next “big thing” will be. By using an open source engine you can be sure that whatever that “next thing” is, the engine won’t keep you from taking advantage of it. Nor would the engine be able to dictate your monetization strategy for you.

With Ramatak mobile studio, developers get the best of both worlds: the freedom and security of an open-source engine and the advanced, tailored features that modern mobile games require.

And most importantly: We can’t alter the deal.

...

Read the original on ramatak.com »

5 673 shares, 27 trendiness

hyperdxio/hyperdx: Resolve production issues, fast. An open source observability platform unifying session replays, logs, metrics, traces and errors.


...

Read the original on github.com »

6 654 shares, 22 trendiness

Unity Silently Deletes GitHub Repo that Tracks Terms of Service Changes and Updated Its License

Following the update to its pricing plan that charges developers for each game install, Unity has seemingly silently removed its GitHub repository that tracks any terms of service (ToS) changes the company made.

As discovered by a Reddit user, Unity has removed its GitHub repository that allows the public to track any changes made to the license agreements, and has updated the ToS to remove a clause that lets developers use the terms from older versions of the game engine that their product shipped with.

As a result of the repository deletion, the webpage is no longer accessible, resulting in an Error 404 unless users visit through a web archive.

Visiting the page through a web archive shows that the page was last available on 16 July 2022, revealing that Unity might have silently deleted the repo sometime after that day.

The GitHub repository was first established in 2019; in an official blog post, Unity revealed that they are committed to being an open platform and that hosting on the software development cloud-based service will give developers “full transparency about what changes are happening, and when.”

In the same blog post, Unity also revealed that they had updated the license agreement, saying “When you obtain a version of Unity, and don’t upgrade your project, we think you should be able to stick to that version of the ToS.”

In the term update from 10 March 2022, Unity added a clause to the Modification section of the ToS, stating the following:

“If the Updated Terms adversely impact your rights, you may elect to continue to use any current-year versions of the Unity Software (e.g., 2018.x and 2018.y and any Long Term Supported (LTS) versions for that current-year release) according to the terms that applied just prior to the Updated Terms.”

“The Updated Terms will then not apply to your use of those current-year versions unless and until you update to a subsequent year version of the Unity Software (e.g. from 2019.4 to 2020.1).”

However, on 3 April 2023, a few months before the supposed repository deletion date, Unity updated their ToS once again, removing the clause that was added on 10 March 2022 and preventing developers from using the agreement from the version with which their game shipped.

Now the clause is completely absent from any of the new ToS, which means that users are bound by any changes Unity makes to their services regardless of version numbers, including pricing updates such as the recent fee that will charge developers per game install.

...

Read the original on www.gamerbraves.com »

7 652 shares, 27 trendiness

DALL·E 3

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.

DALL·E 3 will be available to ChatGPT Plus and Enterprise customers in early October. As with DALL·E 2, the images you create with DALL·E 3 are yours to use and you don’t need our permission to reprint, sell or merchandise them.

...

Read the original on openai.com »

8 651 shares, 25 trendiness

Willingham Sends Fables Into the Public Domain

As of now, 15 September 2023, the comic book property called Fables, including all related Fables spin-offs and characters, is now in the public domain. What was once wholly owned by Bill Willingham is now owned by everyone, for all time. It’s done, and as most experts will tell you, once done it cannot be undone. Take-backs are neither contemplated nor possible.

Q: Why Did You Do This?

A number of reasons. I’ve thought this over for some time. In no particular order they are:

1) Practicality: When I first signed my creator-owned publishing contract with DC Comics, the company was run by honest men and women of integrity, who (for the most part) interpreted the details of that agreement fairly and above-board. When problems inevitably came up we worked it out, like reasonable men and women. Since then, over the span of twenty years or so, those people have left or been fired, to be replaced by a revolving door of strangers, of no measurable integrity, who now choose to interpret every facet of our contract in ways that only benefit DC Comics and its owner companies. At one time the Fables properties were in good hands, and now, by virtue of attrition and employee replacement, the Fables properties have fallen into bad hands.

Since I can’t afford to sue DC, to force them to live up to the letter and the spirit of our long-time agreements; since even winning such a suit would take ridiculous amounts of money out of my pocket and years out of my life (I’m 67 years old, and don’t have the years to spare), I’ve decided to take a different approach, and fight them in a different arena, inspired by the principles of asymmetric warfare. The one thing in our contract the DC lawyers can’t contest, or reinterpret to their own benefit, is that I am the sole owner of the intellectual property. I can sell it or give it away to whomever I want.

I chose to give it away to everyone. If I couldn’t prevent Fables from falling into bad hands, at least this is a way I can arrange that it also falls into many good hands. Since I truly believe there are still more good people in the world than bad ones, I count it as a form of victory.

2) Philosophy: In the past decade or so, my thoughts on how to reform the trademark and copyright laws in this country (and others, I suppose) have undergone something of a radical transformation. The current laws are a mishmash of unethical backroom deals to keep trademarks and copyrights in the hands of large corporations, who can largely afford to buy the outcomes they want.

In my template for radical reform of those laws I would like it if any IP is owned by its original creator for up to twenty years from the point of first publication, and then goes into the public domain for any and all to use. However, at any time before that twenty year span bleeds out, you the IP owner can sell it to another person or corporate entity, who can have exclusive use of it for up to a maximum of ten years. That’s it. Then it cannot be resold. It goes into the public domain. So then, at the most, any intellectual property can be kept for exclusive use for up to about thirty years, and no longer, without exception.

Of course, if I’m going to believe such radical ideas, what kind of hypocrite would I be if I didn’t practice them? Fables has been my baby for about twenty years now. It’s time to let it go. This is my first test of this process. If it works, and I see no legal reason why it won’t, look for other properties to follow in the future. Since DC, or any other corporate entity, doesn’t actually own the property, they don’t get a say in this decision.

Q: What Exactly Has DC Comics Done to Provoke This?

Too many things to list exhaustively, but here are some highlights: Throughout the years of my business relationship with DC, with Fables and with other intellectual properties, DC has always been in violation of their agreements with me. Usually it’s in smaller matters, like forgetting to seek my opinion on artists for new stories, or for covers, or formats of new collections and such. In those times, when called on it, they automatically said, “Sorry, we overlooked you again. It just fell through the cracks.” They use the “fell through the cracks” line so often, and so reflexively, that I eventually had to bar them from using it ever again. They are often late reporting royalties, and often under-report said royalties, forcing me to go after them to pay the rest of what’s owed.

Lately though their practices have grown beyond these mere annoyances, prompting some sort of showdown. First they tried to strong-arm the ownership of Fables from me. When Mark Doyle and Dan Didio first approached me with the idea of bringing Fables back for its 20th anniversary (both gentlemen since fired from DC), during the contract negotiations for the new issues, their legal negotiators tried to make it a condition of the deal that the work be done as work for hire, effectively throwing the property irrevocably into the hands of DC. When that didn’t work their excuse was, “Sorry, we didn’t read your contract going into these negotiations. We thought we owned it.”

More recently, during talks to try to work out our many differences, DC officers admitted that their interpretation of our publishing agreement, and the following media rights agreement, is that they could do whatever they wanted with the property. They could change stories or characters in any way they wanted. They had no obligation whatsoever to protect the integrity and value of the IP, either from themselves, or from third parties (Telltale Games, for instance) who want to radically alter the characters, settings, history and premises of the story (I’ve seen the script they tried to hide from me for a couple of years). Nor did they owe me any money for licensing the Fables rights to third parties, since such a license wasn’t anticipated in our original publishing agreement.

When they capitulated on some of the points in a later conference call, promising on the phone to pay me back monies owed for licensing Fables to Telltale Games, for example, in the execution of the new agreement they reneged on their word and offered the promised amount instead as a “consulting fee,” which avoided the precedent of admitting this was money owed, and included a non-disclosure agreement that would prevent me from saying anything but nice things about Telltale or the license.

And so on. There’s so much more, but these, as I said, are some of the highlights. At that point, since I disagreed on all of their new interpretations of our longstanding agreements, we were in conflict. They practically dared me to sue them to enforce my rights, knowing it would be a long and debilitating process. Instead I began to consider other ways to go.

Q: Are You Concerned at What DC Will Do Now?

No. I gave them years to do the right thing. I tried to reason with them, but you can’t reason with the unreasonable. They used these years to make soothing promises, tell lies about how dedicated they were towards working this out, and keep dragging things out as long as possible. I gave them an opportunity to renegotiate the contracts from the ground up, putting everything in unambiguous language, and they ignored that offer. I gave them the opportunity, twice, to simply tear up our contracts, and we each go our separate ways, and they ignored those offers. I tried to go over their heads, to deal directly with their new corporate masters, and maybe find someone willing to deal in good faith, and they blocked all attempts to do so. (Try getting any officer of DC Comics to identify who they report to up the company ladder. I dare you.) In any case, without giving them details, I warned them months in advance that this moment was coming. I told them what I was about to do would be “both legal and ethical.” Now it’s happened.

Note that my contracts with DC Comics are still in force. I did nothing to break them, and cannot unilaterally end them. I still can’t publish Fables comics through anyone but them. I still can’t authorize a Fables movie through anyone but them. Nor can I license Fables toys nor lunchboxes, nor anything else. And they still have to pay me for the books they publish. And I’m not giving up on the other money they owe. One way or another, I intend to get my 50% of the money they’ve owed me for years for the Telltale Game and other things.

However, you, the new 100% owner of Fables, never signed such agreements. For better or worse, DC and I are still locked together in this unhappy marriage, perhaps for all time.

If I understand the law correctly (and be advised that copyright law is a mess; purposely vague and murky, and no two lawyers — not even those specializing in copyright and trademark law — agree on anything), you have the rights to make your Fables movies, and cartoons, and publish your Fables books, and manufacture your Fables toys, and do anything you want with your property, because it’s your property.

Mark Buckingham is free to do his version of Fables (and I dearly hope he does). Steve Leialoha is free to do his version of Fables (which I’d love to see). And so on. You don’t have to get my permission (but you might get my blessing, depending on your plans). You don’t have to get DC’s permission, or the permission of anyone else. You never signed the same agreements I did with DC Comics.

It was my absolute joy and pleasure to bring you Fables stories for the past twenty years. I look forward to seeing what you do with it.

For questions and further information you can contact Bill Willingham at:

william.thomas.willingham@gmail.com  Please include “Fables Public Domain” in the subject line, so I don’t assume you’re another Netflix promotion.

...

Read the original on billwillingham.substack.com »

9 642 shares, 24 trendiness

Chromebooks will get 10 years of automatic updates

When Chromebooks debuted in 2012, their affordable price tags helped make personal computing more accessible. That also made them a great fit for the education world, providing schools with secure, simple and manageable devices while helping them save on their budgets. In fact, Chromebooks are the number one device used in K-12 education globally, according to Futuresource. Plus, they’re a sustainable choice, with recycled materials that reduce their environmental impact and repair programs that help them last longer. Today, we’re announcing new ways to keep your Chromebooks up and running even longer. All Chromebook platforms will get regular automatic updates for 10 years — more than any other operating system commits to today. We’re also working with partners to build Chromebooks with more post-consumer recycled materials (PCR), and rolling out new, power-efficient features and quicker processes to repair them. And at the end of their usefulness, we continue to help schools, businesses and everyday users find the right recycling option.

Let’s take a closer look at what’s coming, and how we consider the entire lifecycle of a Chromebook — from manufacturing all the way to recycling.

Security is our number one priority. Chromebooks get automatic updates every four weeks that make your laptop more secure and help it last longer. And starting next year, we’re extending those automatic updates so your Chromebook gets enhanced security, stability and features for 10 years after the platform was released.

A platform is a series of components that are designed to work together — something a manufacturer selects for any given Chromebook. To ensure compatibility with our updates, we work with all the component manufacturers within a platform (for things like the processor and Wi-Fi) to develop and test the software on every single Chromebook.

Starting in 2024, if you have Chromebooks that were released from 2021 onwards, you’ll automatically get 10 years of updates. For Chromebooks released before 2021 and already in use, users and IT admins will have the option to extend automatic updates to 10 years from the platform’s release (after they receive their last automatic update).

Even if a Chromebook no longer receives automatic updates, it still comes with strong, built-in security features. With Verified Boot, for example, your Chromebook does a self-check every time it starts up. If it detects that the system has been tampered with or corrupted in any way, it will typically repair itself, reverting back to its original state.

You can find more information about the extended updates in our Help Center, Admin console or in Settings.

Many schools extend their laptops’ lifespans by building in-school repair programs. In fact, more than 80% of U.S. schools that participated in a recent Google survey are repairing at least some of their Chromebooks in-house. The Chromebook Repair Program helps schools like Jenks Public Schools find parts and provides guides for repairing specific Chromebooks, either onsite or through partner programs. Many organizations even offer repair certifications for Chromebooks.

We’re rolling out updates that help make repairs even faster. Our new repair flows allow authorized repair centers and school technicians to repair Chromebooks without a physical USB key. This reduces the time required for software repairs by over 50% and limits time away from the classroom.

Find more information about repairs with our Chromebook Repair Program

We’re making sure Chromebooks are more sustainable when it comes to both hardware and software. In the coming months, we’ll roll out new, energy-efficient features to a majority of compatible platforms. Adaptive charging will help preserve battery health, while battery saver will reduce or turn off energy-intensive processes.

Adaptive charging on Chromebook will help preserve battery health

Battery saver will reduce or turn off energy-intensive processes.

You will be able to manage adaptive charging in Settings on your Chromebook.

And last year, we partnered with Acer, ASUS, Dell, HP and Lenovo to prioritize building more sustainable Chromebooks, including using ocean-bound plastics, PCR materials, recyclable packaging and low carbon emission manufacturing processes. This year alone, Chromebook manufacturers announced 12 new Chromebooks made with PCR and repairable parts.

All devices reach a time when they stop being useful, especially as hardware evolves. Schools can either sell or recycle Chromebooks via their reseller, who will often collect them onsite. (Before you turn them over to a remarketer or recycler, make sure all devices are removed from management first.) The reseller or refurbisher can then provide the school with monetary or service credits, and resell, use for parts or recycle the Chromebook completely.

You can also search for drop-off recycling locations near you with our global recycling drop-off points feature in Google Maps.

In addition to reducing environmental impact, Chromebooks reduce expenses for school districts — allowing them to focus more of their limited budget on other benefits for teachers and students. Chromebooks include lower upfront costs than other devices: a 55% lower device cost and a 57% lower cost of operations. Over three years, Chromebooks save more than $800 in operating costs per device compared to others. And as a preventative cost-savings measure, automatic updates combined with existing layers of security have protected Chrome from having any reported ransomware attack.

With all these updates, we’re committed to keeping Chromebooks universally accessible, helpful and secure — and helping you safely learn and work on them for years to come.

...

Read the original on blog.google »

10 637 shares, 24 trendiness

No sacred masterpieces

This past month I’ve been work­ing on a pro­ject that I’m ea­ger to write way too many words about. But for now, it’s not ready to talk about in pub­lic. Meanwhile, I ei­ther have way too many words about top­ics that I’m con­fi­dent no­body wants to hear about or too few words about top­ics that folks tend to find in­ter­est­ing.

In lieu of pub­lish­ing some­thing too un­ap­peal­ing or too trite, I’ve de­cided to tell a (true!) story that’s been rat­tling around in the back of my head for more than a few years.

In 2016 I joined Uber. I’d fol­lowed a di­rec­tor from Box who had been of­fered to lead Business Intelligence at Uber. She told me about a team that she thought I’d be per­fect for—it was called Crystal Ball” and it was do­ing some of the most in­cred­i­ble work she’d come across. I’d put in two good years at Box and seen it through an IPO and was ready for some­thing new, so I jumped.

Uber was weird from the get-go. The first week (dubbed Engucation”) was a mix of learn­ing how to do things that I’d never need to do in the role that I held and set­ting up ben­e­fits and tak­ing com­pli­ance classes. Travis Kalanick joined us for a Q&A where he showed off pic­tures of the self-dri­ving car that the ATG arm of the com­pany was build­ing (it just looked like an SUV with cam­eras) and the more vi­su­ally im­pres­sive map­ping car that was gath­er­ing data for the self-dri­ving pro­ject (it looked like a weird Dalek on wheels).

When I met the mem­bers of the Crystal Ball team, it was about four peo­ple (not in­clud­ing my­self). Everyone was heav­ily bi­ased to­wards back-end. I was given a brief tour to the fourth floor of 1455 Market St to un­der­stand the prob­lem that the team was solv­ing.

“You see all these desks?”

“This is where the data scientists sit. They build data science models in R. They run those models on data that they download from Vertica.”

“The problem is that the models are slow and take up a lot of resources. So the data scientists have multiple laptops that they download the data to, then run the models overnight. When the data scientists arrive in the morning, the laptops whose models didn’t crash have data that’s maybe usable that day.”

“What about the other laptops?”

“We don’t have the data we need and we lose money.”

This was a big prob­lem for the busi­ness: they needed a way to take two kinds of in­puts (data and code) and run the code to pro­duce use­ful out­puts. Or, hope­fully use­ful. Testing a model meant run­ning it, so the it­er­a­tion cy­cle was very close to one it­er­a­tion per day per lap­top.

The team, when I joined, had the beginnings of a tool to automate this. It was called “R-Crusher” and it was essentially a system for scheduling work. They were able to make some API calls, and code would be downloaded and executed, and an output file would appear in a directory eventually. As the first (self-professed) front-end engineer on the team, it was my job to build the tool to expose this to the rest of the company.

I was grateful that I didn’t need to write any “real” code. I lived in the world of React, building UIs with Uber’s in-house front-end framework (“Bedrock”). Any time I needed something, I could ask the back-end folks to update the R-Crusher API and I’d get some notes a few hours later to unblock me.

The first ver­sion of the front-end for R-Crusher (“Wesley”) was ready in very lit­tle time—maybe a few weeks from the point I joined? It was a joy.

The next 6-7 months were a hectic rush. I was tasked with hiring more front-end engineers. I built a front-end team of seven people. We added user-facing features to Wesley and R-Crusher (“Can we have a text box that only takes capital letters?” “Can we make this text box allow a number whose maximum value is this other text box?”) and debugging tools for the team (“Can we see the output log of the running jobs?”).

There were effectively only two things that people were working on at Uber in 2016:

The app rewrite/redesign (which launched in November 2016)

Uber China

All of my work and the team’s work, ultimately, was to support Uber China. R-Crusher was a tool to help get the data we needed to compete with Didi. Nobody really cared very much about the processes for the US or any other country—we had less to lose, and Lyft wasn’t seen as remarkable competition except in the handful of cities where they were operating at scale. China was a make-or-break opportunity for Uber, China was only going to succeed if we had the data for it, and the data was going to come (at least in part) from R-Crusher.

Over the sum­mer of 2016, we came up against a new twist on the pro­ject. We had a model that ran overnight to gen­er­ate data for an­tic­i­pated rid­er­ship in China. That data was­n’t use­ful on its own, but if you fed it into a tab on a spe­cial Excel spread­sheet, you’d get a lit­tle in­ter­ac­tive Excel tool for choos­ing dri­ver in­cen­tives. Our job was to take that spread­sheet and make it avail­able as the in­ter­face for this mod­el’s data.

Now, this was no small feat on the back-end or front-end. First, the data needed to be run and moved to an appropriate location. Then, we had a challenging problem: we needed to take all the logic from this spreadsheet (with hundreds if not thousands of formulas across multiple sheets) and turn it into a UI that Uber China city teams could log into to use. The direction we got from the head of finance at Uber (who, for whatever reason, was seemingly responsible for the project) was “take this [the spreadsheet] and put it on the website [Wesley].”

We asked how we could simplify the UI to meet the resource constraints (engineer time) we had. “The city teams only know how to use Excel, just make it like Excel.” We tried explaining why that was hard and what we could confidently deliver in the time allotted. “Every day that we don’t have this tool as specced, we’re losing millions of dollars.” There was no budging on the spec.

The back-end folks tried pushing the project over to the front-end side—there was just no time to convert the spreadsheet to Python or R code. On the front-end, the naive answer was “well, I guess we’ve got a lot of JavaScript to write.” But I had something up my sleeve.

In 2015, I had built a prototype of a tool at Box. Box had a collaborative note-taking product called Box Notes (based on Hackpad). I had the idea to make a similar product for working with numbers: sometimes you didn’t need a full spreadsheet, you just needed a place to put together a handful of formulas, format it with some headings and text, and share it with other people. Sort of like an IPython notebook for spreadsheets. I called it Box Sums.

When I built this, I cre­ated a sim­ple React-based spread­sheet UI and a su­per ba­sic spread­sheet for­mula en­gine. A few hun­dred lines of code. And if you dropped an XLS/XLSX file onto the page, I used a Node li­brary to parse it and ex­tract the con­tents.
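For concreteness, here is roughly what that drop-and-parse flow can look like. The post doesn’t name the Node library, so this sketch assumes SheetJS’s xlsx package, one library that fits the description:

```ts
// A sketch of the drop-and-parse flow, assuming SheetJS's "xlsx" package,
// one library capable of this; the post doesn't say which was actually used.
import * as XLSX from "xlsx";

interface ParsedCell {
  sheet: string;
  address: string;  // e.g. "B7"
  value: unknown;   // the last value Excel computed and saved in the file
  formula?: string; // the cell's formula text (stored without the "=")
}

function parseWorkbook(buffer: ArrayBuffer): ParsedCell[] {
  const workbook = XLSX.read(new Uint8Array(buffer), { type: "array" });
  const cells: ParsedCell[] = [];
  for (const sheetName of workbook.SheetNames) {
    const sheet = workbook.Sheets[sheetName];
    for (const address of Object.keys(sheet)) {
      if (address.startsWith("!")) continue; // metadata keys like "!ref"
      const cell = sheet[address] as { v?: unknown; f?: string };
      cells.push({ sheet: sheetName, address, value: cell.v, formula: cell.f });
    }
  }
  return cells;
}

// Wire it to the page so dropping an XLS/XLSX file anywhere parses it.
document.addEventListener("dragover", (event) => event.preventDefault());
document.addEventListener("drop", async (event) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (file) console.log(parseWorkbook(await file.arrayBuffer()));
});
```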

I demoed Box Sums to the Box Notes team at some point, and they nitpicked the UI and implementation details (“What if two people type in the same cell at the same time? They’ll just overwrite each other.” 🙄). Nothing came of it, but I took the code and shoved it into my back pocket for a rainy day.

My idea was to take this code and spruce it up for Uber’s use case. Fill in all the miss­ing fea­tures in the spread­sheet en­gine so that every­thing the spread­sheet needed to run was sup­ported. The back-end could serve up a 2D ar­ray of data rep­re­sent­ing the rid­er­ship data in­put, and we’d feed that in. And the UI would sim­ply make all but the cells that were meant to be in­ter­ac­tive read-only in­stead.
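That plan is simple to picture in code. Here is a minimal sketch of the wiring, where SheetModel, setReadOnly, and the data layout are all invented for illustration rather than taken from the real Wesley code:

```ts
// Hypothetical shapes: the back-end serves a 2D array of ridership numbers,
// we write it into the sheet's input region, and every cell outside the
// interactive set is flagged read-only so the UI rejects edits.

type Grid = (number | string)[][];

interface SheetModel {
  rows: number;
  cols: number;
  setCell(row: number, col: number, value: number | string): void;
  setReadOnly(row: number, col: number, readOnly: boolean): void;
}

function loadRidershipData(
  sheet: SheetModel,
  data: Grid,
  editable: Set<string>, // "row,col" keys of the cells users may touch
  startRow = 0,
  startCol = 0
): void {
  // Write the model's output into the sheet's input region.
  data.forEach((rowValues, r) =>
    rowValues.forEach((value, c) =>
      sheet.setCell(startRow + r, startCol + c, value)
    )
  );
  // Lock everything except the designated interactive cells.
  for (let r = 0; r < sheet.rows; r++) {
    for (let c = 0; c < sheet.cols; c++) {
      sheet.setReadOnly(r, c, !editable.has(`${r},${c}`));
    }
  }
}
```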

It wasn’t going to be Excel, but it would behave sort of like Excel, it would read an Excel file as input, and it would run Excel formulas on some data. That was about as close to “just make it like Excel” as we were going to get. And it also meant that we could skip the process of translating thousands of dense formulas to JavaScript.

I got to work pol­ish­ing up the code. I parsed the XLS file and ex­tracted all the for­mu­las. I found all of the func­tions those for­mu­las used, and im­ple­mented them in my spread­sheet en­gine. I then went through and im­ple­mented all the fun syn­tax that I had­n’t im­ple­mented for my demo at Box (like ab­solute cell ref­er­ences, where in­sert­ing $ char­ac­ters into cell ref­er­ences makes them keep their col­umn/​row when you drag the cor­ner of the cell, or ref­er­enc­ing cells in other sheets).
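The $-anchoring rule is mechanical enough to sketch. Assuming a toy engine (none of these names come from the actual code), drag-filling shifts un-anchored references by the drag offset and leaves anchored ones alone:

```ts
// A sketch of the "$" anchoring rule: when a formula is drag-filled to
// another cell, un-anchored references shift by the offset while $-anchored
// columns/rows stay fixed. A real engine would tokenize formulas; this
// regex ignores edge cases (sheet-qualified refs, names like LOG10).

const REF = /(\$?)([A-Z]+)(\$?)(\d+)/g;

// "A" -> 1, "Z" -> 26, "AA" -> 27, ...
function colToIndex(col: string): number {
  return [...col].reduce((n, ch) => n * 26 + (ch.charCodeAt(0) - 64), 0);
}

function indexToCol(n: number): string {
  let s = "";
  while (n > 0) {
    s = String.fromCharCode(65 + ((n - 1) % 26)) + s;
    n = Math.floor((n - 1) / 26);
  }
  return s;
}

// Shift every un-anchored reference in a formula by (dCols, dRows).
function shiftFormula(formula: string, dCols: number, dRows: number): string {
  return formula.replace(REF, (_, colAnchor, col, rowAnchor, row) => {
    const newCol = colAnchor ? col : indexToCol(colToIndex(col) + dCols);
    const newRow = rowAnchor ? row : String(Number(row) + dRows);
    return `${colAnchor}${newCol}${rowAnchor}${newRow}`;
  });
}

// Dragging "=A1+$B$2" down one row yields "=A2+$B$2".
console.log(shiftFormula("=A1+$B$2", 0, 1));
```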

I sat in the black mirrored wall “spaceship hallway” of 1455’s fifth floor with my headphones playing the same handful of songs on repeat. I spent the early days fixing crashes and debugging errant NaNs. Then I dug into big performance issues. And finally, I spent time polishing the UI.

When every­thing was work­ing, I started check­ing my work. I en­tered some val­ues into the Excel ver­sion and my ver­sion, and com­pared the num­bers.

The answers were all almost correct. After a week of work, I was very pleased to see the sheet working to the extent that it was, but having answers that were very, very close is objectively worse than having numbers that are wildly wrong: wildly wrong numbers usually mean a simple logic problem. Almost-correct numbers mean something more insidious.

I started step­ping through the de­bug­ger as the cal­cu­la­tion en­gine crawled the spread­sheet’s for­mula graph. I com­pared com­puted val­ues to what they were in the Excel ver­sion. The sheer size of the spread­sheet made it al­most im­pos­si­ble to trace through all of the for­mu­las (there were sim­ply too many), and I did­n’t have an­other spread­sheet which ex­hib­ited this prob­lem.

I started read­ing about how Excel rep­re­sents float­ing point num­bers—maybe JavaScript’s dou­bles were some­how dif­fer­ent than Excel’s no­tion of a dou­ble? This led nowhere.

I googled for es­o­teric knowl­edge about Excel, round­ing, or any­thing re­lated to non-in­te­ger num­bers that I could find. It all led nowhere.

Just as I was about to re­sign my­self to step­ping through the thou­sands of for­mu­las and re­com­pu­ta­tions, I de­cided to head down to the fourth floor to just ask one of the data sci­en­tists.

I approached their desks. They looked up with a flicker of recognition.

“Hey guys. I’m working on the driver incentive spreadsheet. I’m trying to mimic the calculations that you have in Excel, but my numbers are all just a little bit off. I was hoping you might have some ideas about what’s going on.”

“Can I take a look?” I showed him my laptop and he played with a few numbers in the inputs. “Oh, that’s the circ.”

“We use a circular reference in Excel to do linear regression.”

My mind was blown. I had thought, naively per­haps, that cir­cu­lar ref­er­ences in Excel sim­ply cre­ated an er­ror. But this data sci­en­tist showed me that Excel does­n’t er­ror on cir­cu­lar ref­er­ences—if the com­puted value of the cell con­verges.

You see, when for­mu­las cre­ate a cir­cu­lar ref­er­ence, Excel will run that com­pu­ta­tion up to a num­ber of times. If, in those com­pu­ta­tions, the mag­ni­tude of the dif­fer­ence be­tween the most re­cent and pre­vi­ous com­puted val­ues for the cell falls be­low some pre-de­fined ep­silon value (usually a very small num­ber, like 0.00001), Excel will stop re­com­put­ing the cell and pre­tend like it fin­ished suc­cess­fully.

I thanked the data scientists and returned to the “spaceship hallway” to think about what the fuck I was going to do next.

The changes I needed to make were pretty straightforward. First, it required knowing whether a downstream cell was already computed upstream (for whatever definitions of “downstream” and “upstream” you want to use; there’s not really a good notion of “up” and “down” in a spreadsheet or this graph). If you went to recompute a cell with a formula that referenced an already-recomputed cell, you’d simply keep track of the number of times you computed that cell. If the recomputed value was close enough to the previous value that it fell below the epsilon, you simply pretended like you didn’t recompute the cell and moved on. If it didn’t, you’d continue the process until the number of iterations that you’re keeping track of hit some arbitrary limit (for me, 1000), at which point you’d bail.
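As a sketch, with Cell and evaluate() as hypothetical stand-ins for the real engine’s recomputation machinery, the convergence rule looks something like this:

```ts
// A sketch of the convergence rule described above. EPSILON and the
// iteration cap mirror the numbers in the text; Cell and evaluate() are
// hypothetical stand-ins, not the actual engine's types.

const EPSILON = 0.00001;
const MAX_ITERATIONS = 1000;

interface Cell {
  value: number;
  evaluate(): number; // recompute this cell's formula from current inputs
}

// Iterate every cell in a detected cycle until no cell moves by more than
// EPSILON, or give up after MAX_ITERATIONS and bail, as Excel does.
function recomputeCycle(cycle: Cell[]): boolean {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    let maxDelta = 0;
    for (const cell of cycle) {
      const next = cell.evaluate();
      maxDelta = Math.max(maxDelta, Math.abs(next - cell.value));
      cell.value = next;
    }
    // Pretend the recomputation finished successfully once it converges.
    if (maxDelta < EPSILON) return true;
  }
  return false;
}

// Toy cycle: a cell whose formula references itself via x = cos(x).
// The loop settles at the fixed point (~0.739) well under the cap.
const x: Cell = { value: 0, evaluate() { return Math.cos(this.value); } };
console.log(recomputeCycle([x]), x.value);
```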

The changes took a day and a half to make. And would you be­lieve, it worked. The out­puts were ex­actly what they should have been. I wrote tests, I in­te­grated the damn thing into Wesley, and I brought it to the team. We de­liv­ered the pro­ject in the sec­ond week of July.

Two things hap­pened. The first was of lit­tle con­se­quence but I en­joy telling the story. Rakesh, the team lead work­ing on the back-end, asked me where I got the Excel com­po­nent.

“But where did you get the Excel engine?”

“But how are you running Excel in the browser?”

“Everything you see is built by me, from scratch.”

He simply couldn’t believe that I’d written a full spreadsheet engine that ran in the browser. All things considered, it was maybe only five thousand lines of code total. A gnarly five thousand lines, but (obviously) not intractable. He had assumed that option was so complex that it wasn’t a reasonable project to take on.

I do think that if I had challenged Rakesh—under no time pressure—to build a spreadsheet engine, he’d get to a working solution as well. My recollection is that he was a very competent engineer. Despite that, I think his intuition about the complexity and scope was based on bad assumptions about what we were ultimately accomplishing, and it’s a good case study in estimating reasonable project outcomes. It goes to show that the sheer imagined complexity of a possible solution is enough to disqualify it in some folks’ minds, even if it would produce the best possible outcome.

The sec­ond thing that hap­pened was we shipped. We got the Uber China city team mem­bers log­ging in and us­ing the tool. They plugged away at it, and to my knowl­edge, the num­bers it pro­duced drove dri­ver in­cen­tives.

That was the third week of July.

The last week of July, the head of fi­nance rushed over to our desks.

“Why can you see the formulas?”

“When you click in the cells of the spreadsheet you can see the formulas. You shouldn’t be able to do that.”

“You said to make it just like Excel.”

“People working for Didi apply for intern jobs at Uber China and then exfiltrate our data. We can’t let them see the formulas or they’ll just copy what we do!”

Apparently that was a thing. I re­mem­ber be­ing only half-sur­prised at the time. I had­n’t con­sid­ered that our threat model might in­clude em­ploy­ees leak­ing the com­pu­ta­tions used to pro­duce the num­bers in ques­tion. Of course, short of mov­ing the com­pu­ta­tions up to the server, we could­n’t *really* pro­tect the for­mu­las, but that was be­yond the scope of what we were be­ing asked to do.

The fix was straight­for­ward: I up­dated the UI to sim­ply not show for­mu­las when you clicked in cells. Easy enough, I guess.

The first week of August 2016, Uber China was sold to Didi. Most of us found out be­cause our phones started ding­ing with news sto­ries about it. We all stopped work­ing and waited un­til an email ar­rived a cou­ple hours later an­nounc­ing the deal in­ter­nally. If I re­mem­ber cor­rectly, I just left the of­fice and headed home around lunch time be­cause our team did­n’t have any­thing to do that was­n’t Uber China-related (yet).

After Uber China evap­o­rated, the tool was un­cer­e­mo­ni­ously ripped out of Wesley. It was a be­spoke UI for a data job that would never run again. We were never asked to build Excel in the browser again.

I feel no sense of loss or dis­ap­point­ment. I was­n’t dis­ap­pointed at the time, ei­ther.

My first reaction was to publish the code on GitHub.

My sec­ond re­ac­tion was to move on. There was maybe a part of me—my younger self—that was dis­ap­pointed that this ma­jor piece of code that I’d la­bored over had been so gen­tly used be­fore be­ing re­tired. I was­n’t rec­og­nized for it in any ma­te­r­ial way. My man­ager did­n’t even know what I’d built.

On the other hand, we as en­gi­neers need to be real with our­selves. Every piece of code you write as an en­gi­neer is legacy code. Maybe not right now, but it will be. Someone will take joy in rip­ping it out some­day. Every mas­ter­piece will be glee­fully re­placed, it’s just a mat­ter of time. So why get pre­cious about how long that pe­riod of time is?

I often hear fairly junior folks saying things to the effect of “I’m here to grow as an engineer.” Growing as an engineer has little to do with the longevity of your output as an engineer. “Growing as an engineer” means becoming a better engineer, and becoming a better engineer (directly or indirectly) means getting better at using your skills to create business value. Early in your career, the work you do will likely have far less longevity than the work you do later on, simply because you gain maturity over time and learn to build tools that tend to be useful for longer.

Sometimes the business value your work generates comes in the form of technical output. Sometimes it’s how you work with the people around you (collaborating, mentoring, etc.). Sometimes it’s about how you support the rest of the team. There are many ways that business value is created.

The end (demise?) of Uber China im­plic­itly meant that there was no busi­ness value left to cre­ate with this pro­ject. Continuing to push on it would­n’t have got­ten me or the busi­ness any­where, even if what I’d done was the best pos­si­ble so­lu­tion to the prob­lem.

Sometimes that’s just how it is. The devops saying “cattle, not pets” is apt here: code (and by proxy, the products built with that code) is cattle. It does a job for you, and when that job is no longer useful, the code is ready to be retired. If you treat the code like a pet for sentimental reasons, you’re working in direct opposition to the interests of the business.

As much as I’d love to work on Uber Excel (I’m ashamed to admit that I thought of “Uber Sheets” far too long after I left the company), I was hired to solve problems. Having Excel in the browser was a useful solution, but the problem wasn’t showing spreadsheets in the browser: the problem was getting a specific UI delivered to the right users quickly.

It’s easy to treat a particularly clever or elegant piece of code as a masterpiece. It might very well be a beautiful trinket! But we engineers are not in the business of beautiful trinkets, we’re in the business of outcomes. In the same way that a chef shouldn’t be disappointed that a beautiful plate of food is “destroyed” by a hungry customer eating it, we shouldn’t be disappointed that our beautiful git repos are marked as “Archived” and shuffled off the production kube cluster.

The at­ti­tudes that we have to­wards the things that we make are good in­di­ca­tors of ma­tu­rity. It’s nat­ural for us to want our work to have stay­ing power and longevity. It’s ex­tremely hu­man to want the val­i­da­tion of our beau­ti­ful things be­ing seen and used and rec­og­nized; it means we’ve done well. On the other hand, our work be­ing dis­carded gives us an op­por­tu­nity to un­der­stand what (if any­thing) we could have done bet­ter:

Did we build something that didn’t meet the project constraints?

Did we build what was requested, but what was requested wasn’t the right thing to ask for?

Did the requested solution actually address the needs of the end user?

What questions didn’t we ask the stakeholders that could have better-aligned our output with the business need that triggered the request to engineering?

Were the expectations that we set around the project inaccurate or vague?

Did the project need to be as robust as what was delivered? Could a simpler or less clever solution have solved the need equally well?

Did we focus on the wrong success criteria?

Did we even have success criteria beyond “build what was requested”?

Who could have been consulted before or after delivery of the project to validate whether all of the actual project requirements were satisfied?

You won’t have the opportunity to take lessons away from the project if you see the sunsetting of the project as a failure: there’s often much to learn about which non-technical aspects of the project broke down. Perhaps there aren’t any, and maybe management is just a group of fools! But often that’s not the case; your delicately milled cog wasn’t ripped out of the machine because it was misunderstood, it was ripped out because it didn’t operate smoothly as a part of the larger system it was installed in.

...

Read the original on basta.substack.com »
