10 interesting stories served every morning and every evening.




1 651 shares, 44 trendiness

Useful built-in macOS command-line utilities

Sometimes when I’m bored, I like to look at the list of macOS Bash commands. Here are some commands that I found interesting:

If you store your secrets in the Keychain (and you should!), you can access them programmatically using security.

security find-internet-password -s https://example.com

I found this useful for writing automated scripts that used locally-stored credentials.
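
For example, in a script you can capture just the password into a variable. This is a hedged sketch: the server and account values are placeholders, -a selects the keychain item’s account and -w prints only the password.

# Placeholders below; adjust the server and account to match your keychain item.
API_PASSWORD=$(security find-internet-password -s example.com -a "me@example.com" -w)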

Bonus tip: If you are using 1Password, there is a 1Password CLI that you can use to access your 1Password items from the command line.

If you want to open a file from the terminal, you can use the open command.

open file.txt

This will open the file in the default application for that file type, as if you had double-clicked it in the Finder.
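
open is handy beyond single files. A few more invocations I reach for (these particular examples are mine, not the article’s):

open .                    # open the current directory in Finder
open -a Preview photo.png # open a file with a specific application
open https://example.com  # URLs open in your default browser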

pbcopy and pbpaste are command-line utilities that allow you to copy and paste text to the pasteboard (what other operating systems might call the “clipboard”).

pbcopy takes whatever was given in the standard input, and places it in the pasteboard.

echo "Hello, world!" | pbcopy

pbpaste takes whatever is in the pasteboard and prints it to the standard output.

pbpaste
>> Hello, world!

This is very useful for getting data from files into the browser, or other GUI applications.
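
Because both commands speak plain standard input and output, they compose with ordinary shell redirection. A couple of hedged examples (the file paths are placeholders):

pbcopy < ~/.ssh/id_ed25519.pub   # copy a file's contents to the pasteboard
pbpaste > notes.txt              # dump the pasteboard into a file
pbpaste | grep TODO              # or pipe it into other tools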

If you work with servers a lot, it can be useful to know the current time in UTC, when e.g. looking at server logs.

This is a one-liner in the terminal:

date -u

Alternatively, you can use

TZ=UTC date
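
date also accepts a format string, which is handy when you want to match the timestamp format used in your logs. The ISO-8601-style format below is my own choice, not the article’s:

date -u +"%Y-%m-%dT%H:%M:%SZ"   # e.g. 2024-11-06T14:03:27Z
TZ=America/New_York date        # the TZ trick works for any time zone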

If you want to run an Internet speedtest, you can run one directly from the terminal with

networkQuality # Note the capital “Q”!
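
networkQuality also takes a few flags; as far as I remember, -v prints more detailed results and -s runs the upload and download tests sequentially rather than in parallel. Treat these as assumptions and check the built-in help on your machine:

networkQuality -v   # more detailed output (flag assumed from memory)
networkQuality -s   # run tests sequentially (flag assumed from memory)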

If you want to keep your Mac from sleeping, you can run caffeinate in the terminal.

caffeinate

caffeinate will keep your Mac awake until you stop it, e.g. by pressing Ctrl+C. caffeinate used to be a third-party tool, but it is now built into macOS.

I use this mostly to prevent my Mac from sleeping when I am running a server.
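
caffeinate also takes flags and can wrap another command, staying awake only while that command runs. The flags below are from its man page; the wrapped command is just an example:

caffeinate -d             # also keep the display awake
caffeinate -t 3600        # stay awake for an hour, then allow sleep again
caffeinate -i make build  # stay awake only while the wrapped command runs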

If you need to generate a UUID, you can use the uuidgen command.

uuidgen

By default, uuidgen outputs a UUID in uppercase. You can combine this with tr and pbcopy to copy the UUID to the clipboard in lowercase.

uuidgen | tr '[:upper:]' '[:lower:]' | pbcopy

I use this a lot when writing unit tests that require IDs.
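
If you need several IDs at once for test fixtures, a small loop works too (this variation is mine, not from the article):

for _ in 1 2 3; do uuidgen | tr '[:upper:]' '[:lower:]'; done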

* mdfind: Spotlight search, but in the terminal. I generally use Spotlight itself (or rather the excellent Raycast). Link

* say: This command makes your Mac speak the text you give it. Link

* screencapture: This command allows you to take screenshots and save them to a file. I prefer using cmd-shift-5 for this. Link

* networksetup: This command allows you to configure your network settings programmatically. I found its API very intimidating, and so I haven’t really used it much. Link (A few example invocations of these four commands are sketched below.)
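
For reference, here is roughly what basic invocations of these four commands look like. The specific queries, phrases and paths are my own examples and only scratch the surface of each tool:

mdfind -onlyin ~/Documents invoice    # Spotlight query limited to a folder
say "The build is finished"           # speak a phrase aloud
screencapture -x ~/Desktop/shot.png   # take a silent screenshot to a file
networksetup -listallnetworkservices  # list the configured network services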

...

Read the original on weiyen.net »

2 463 shares, 26 trendiness

An analysis of title drops in movies

A title drop is when a character in a movie says the title of the movie they’re in. Here’s a large-scale analysis of 73,921 movies from the last 80 years on how often, when and maybe even why that happens.

I’m sure you all know the part of the movie where one of the characters says the actual title of the movie

and you’re like

The overall meta-ness of this is - of course - nothing new. And filmmakers and scriptwriters have been doing it since the dawn of the medium itself*. It’s known in film speak as a title drop.

Consequently, there’s tons of examples throughout movie history that range from the iconic (see Back to the Future’s above)

via the eccentric,

to the very much self-aware

But how common are these title drops really? Has this phenomenon gained momentum over time with our postmodern culture becoming ever more meta? Can we predict anything about the quality of a film based on how many times its title is mentioned? And what does a movie title mean, anyway?

There have been analyses

and oh so so many listicles

of the title drop phenomenon before, but they are small and anecdotal. Here’s the first extensive analysis of title drops for a dataset of 73,921 movies that amount to roughly 61% of movies on IMDb with at least 100 user votes*. I’m looking at movies released between 1940 and 2023. Special thanks go to my friends at OpenSubtitles.com for providing this data!

I started out with two datasets: 89,242 (English) movie subtitles from OpenSubtitles.com

and metadata for 121,797 movies from IMDb. After joining them and filtering them for broken subtitle files I was left with a total of 73,921 subtitled movies. With that out of the way, I realized that the tougher task was still ahead of me: answering the question: what even was a title drop?

The naïve approach is - of course - to simply look for the movie’s name anywhere in the subtitles. Which is a fantastic approach for movies like Back to the Future with a nice unique title:

But this quickly breaks down if we look at movies like E or I *, which lead to way too many matches.

We also run into problems with every movie that is a sequel (Rocky III, Hot Tub Time Machine 2) since none of the characters will add the sequel number to character names/oversized bathing equipment. Similarly, the rise of the colon

in movie titles would make for some very awkward dialogue (“LUKE: Gosh Mr. Kenobi, it’s almost like we’re in the middle of some Star Wars Episode Four: A New Hope!”).

(See also the “He Didn’t Say That”

meme.)

So I applied a few rules to my title matching in the dialogue. Leading ‘The’, ‘An’ and ‘A’s and special characters like dashes are ignored, sequel numbers both Arabic and Roman are dropped (along with ‘Episode…’, ‘Part…’ etc.) and titles containing a colon are split and either side counts as a title drop. So for The Lord of the Rings: The Fellowship of the Ring

either “Lord of the Rings” or “Fellowship of the Ring” would count as title drops (feel free to hover over the visualizations to explore the matches)!
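
To make those rules concrete, here is a minimal bash sketch of the matching logic. It is my own illustration, not the author’s actual pipeline, and it assumes a plain-text English subtitle file called subs.txt:

#!/bin/bash
# Rough sketch of the matching rules described above (illustrative only).

normalize() {
  # Drop a leading "The/An/A", turn dashes into spaces, and strip trailing
  # sequel markers such as "2", "III", "Part 2" or "Episode IV".
  echo "$1" \
    | sed -E 's/^(The|An|A) //' \
    | sed -E 's/-/ /g' \
    | sed -E 's/ (Part|Episode)? ?([0-9]+|[IVXLC]+)$//'
}

count_drops() {
  local title="$1" total=0
  # Titles containing a colon are split, and either side counts as a drop.
  IFS=':' read -ra parts <<< "$title"
  for part in "${parts[@]}"; do
    pattern=$(normalize "$(echo "$part" | sed -E 's/^ +| +$//g')")
    hits=$(grep -oi -- "$pattern" subs.txt | wc -l)
    total=$((total + hits))
  done
  echo "$total"
}

count_drops "The Lord of the Rings: The Fellowship of the Ring"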

With the data cleaning out of the way, let’s get down to business!

Alright, so here’s the number you’ve all been waiting for (drumroll):

36.5% - so about a third - of movies have at least one title drop during their runtime.

Also, there’s a total of 277,668 title drops for all 26,965 title-dropping movies which means that there’s an average of 10.3 title drops per movie that title drops. If they do it, they really go for it.

So who are the most excessive offenders in mentioning their titles over the course of the film? The overall star when it comes to fiction only came out last year: it’s Barbie by Greta Gerwig with an impressive 267 title drops within its 1 hour and 54 minutes runtime, clocking in at a whopping 2.34 BPM (Barbies Per Minute).

On the non-fiction side of documentaries the winner is Mickey: The Story of a Mouse

with 309 title drops in only 90 minutes, so 3.43 Mickeys Per Minute!

What’s interesting about the (Fiction) list here is that it’s pretty international: only two of the top ten movies come from Hollywood, six are from India, one from Indonesia and one from Turkey. So it’s definitely an international phenomenon.

Looking at the top ten list you might have noticed this little icon

signifying a movie where the data says it’s named after one of its characters*.

Unsurprisingly, movies named after one of their characters have an average of 24.7 title drops, more than twice as much as the usual 10.3. Protagonists have a tendency to pop up repeatedly in a film, so their names usually do the same.

Similarly, movies named after a protagonist have a title drop rate of 88.5%

while only 34.2% of other movies drop their titles.

A note on the data here

This is the more experimental part of the analysis. To figure out if a movie was named after its protagonist I’ve used IMDb’s Principals Dataset that lists character names for the first couple of actors and compared that to the movie’s title. This approach yields reliable results, but of course misses movies when the character the movie is named after does not appear on that list. So you might find movies that miss the ‘Named’ icon even though they’re clearly named after a character.

Special characters in the title and character name are also challenging: for example, Tosun Pasa which actually has a ş character in its title - wrong on IMDb (Pasa) as well as the subtitles (Pasha) - or WALL·E with the challenging · in the middle: Even though there are mentions of “Wall-E” in the subtitles, the script - looking for WALL·E - wouldn’t detect it. (I’ve fixed both of these films manually - but there might be more!)

Titles or surnames also usually prevent being counted as title drops according to our definitions. Michael The Brave, King Lear or Barry Lyndon might mention a character’s name (‘Michael’, ‘Lear’, ‘Barry’) but leave out the title or surname - so zero drops.

Nevertheless, there do exist named films where you would expect a title drop which doesn’t come!

Examples are:

Anyway - back to the analysis!

An interesting category is movies named after a character that have only a single title drop - making it all the more meaningful?

Title-drop connoisseurs might sneer at this point and well-actually us that a “real” title drop should only happen once in a film. That there’s this one memorable (or cringe-y) scene where the protagonist looks directly at the camera and declares the title of the film with as much pathos as they can muster. Or as a nice send-off in the last spoken line.

Such single drops happen surprisingly often:

11.3% of all movies do EXACTLY ONE title drop during their runtime.

Which means that there are about twice as many movies with multiple title drops as with single ones.

In the single drop case it is more likely that the filmmakers were adding a title drop very consciously.

Single drops often happen in a key scene and explain the movie’s title: what mysterious fellowship the first Lord of the Rings is named after. Or that the audience waiting for some dark knight to show up must simply accept that it’s been the Batman all along.

One suspicion I had was that the very meta act of having a character speak the name of the movie they’re in would be something gaining more and more traction over the last two or three decades.

And indeed, if we look at the average number of movies with title drops over the decades we can see that there’s a certain upwards trend. The 1960s and 1970s seemed to be most averse to mentioning their title in the film, while it’s become more commonplace over the last few years.

If we dig deeper, this growth over the decades comes with a clearer explanation: splitting up movies by single- and multi-title drops shows that while the tendency of movies to drop their title exactly once stays more or less steady, the number of multi-drop films is on the rise.

Your explanation for this (More movies are being named after their protagonists? Movies are more productified so brand recognition becomes an important concern?) is probably as good as mine 🤷

Another question I wanted to answer was if a high number of title drops was a sign of a bad movie. Think of all the trashy slasher and horror movies about Meth Marmots and Killer Ballerinas - wouldn’t their characters in the sparse dialogues constantly mention the title for brand recognition and all that?

Interestingly though, there’s no strong connection between film quality (expressed as IMDb rating (YMMV)) and the probability of title-dropping.

An aspect that certainly does have an impact on the probability of a title drop though is the genre of a film.

If you think back to the discussion about names in titles from earlier, genres like Biography and other non-fiction genres like Sport and History - almost by definition - mention their subject in both the title and throughout the film.

Accordingly, the probability of a title drop varies wildly by genre. Non-fiction films have a strong tendency towards title-dropping, while more fiction-oriented genres like Crime, Romance and War don’t.

Finally, we can ask the question: what even is a movie title?

I couldn’t find a complete classification in the scientific literature (“What’s in a name? The art of movie titling”

by Ingrid Haidegger comes the closest). Movie titles are an interesting case, since they have to work as a description of a product, a marketing instrument, but also as the title of a piece of art.

Consequently, it’s a field ripe with opinions, science and experimentation

and listicles.

The most extensive classification of media titles in general I could find is TVTropes’ Title Tropes list

which lists over 180 (!) different types of tropes alone. Some of those tropes are:

While naming a movie is a very creative task and pretty successfully defies classification, we can still look at the overall shape of movie titles and see if that has any impact on the number of title drops.

One such simple aspect is the length of the title itself. As you would expect there’s a negative correlation (if only a slight one*) between the length of a title and the number of title drops it does.

Still, there are some fun examples of reaaaaally

long movie titles that nevertheless do at least one title drop:

And while these previous examples only drop parts from before or after the colon, this next specimen actually does an impressive full title drop:

And with that, we’re done with the overarching analysis! Feel free to drop us an e-mail

or follow up on X, Bluesky

or Mastodon

if you have comments, questions, praise ❤️

Oh, and one more thing:

If you’re curious, here’s the full dataset for you to explore!

...

Read the original on www.titledrops.net »

3 386 shares, 26 trendiness

JunoCam : Processing

We invite you to download raw JunoCam images posted here and do your own image processing on them. Be creative! Anything from cropping to color enhancing to collaging is fair game. Then upload your creations here.

Please refrain from direct use of any official NASA or Juno mission logos in your work, as this confuses what is officially sanctioned by NASA and by the Juno Project.

We ask that you refrain from posting any patently offensive, political, or inappropriate images. Let’s keep it clean and fun for everyone of any age! Remember, this section is moderated so inappropriate content will be rejected. But creativity and curiosity in the scientific spirit and the adventure of space exploration is highly encouraged and we look forward to seeing Jupiter through not only JunoCam’s eyes, but your own. Have at it!

...

Read the original on www.missionjuno.swri.edu »

4 263 shares, 48 trendiness

Learning Not to Trust the All-In Podcast in Ten Minutes

My coworker and I enjoy having debates about whether the American economy is in the express lane to collapse or cruising in the good times (I’m really fun at parties). For the two-and-a-half years I’ve worked at my current company, one of us has been a bull while the other has been a bear. I’ll let you guess which one is me.

On Monday, November 4, when I walked into the office, he brought up a podcast he had been listening to on the drive to work: All-In. I had never heard of it, but I guess it’s a group of four venture capitalists who talk about politics, current events, and the economy.

My coworker surfaced a point that the podcasters had made in the opening segment of last week’s episode: 85% of the past quarter’s economic growth came from government spending. I was stunned. I had in my mind that government spending composed something like 30%-40% of GDP thanks to Matt Yglesias’ recent tirades about how imports don’t subtract from GDP, resurfacing the macro-101 equation:

GDP = Consumption + Investment + Government Spending + (Exports - Imports)

My coworker showed me the first few minutes of the podcast, where they flash this chart after noting the economy grew by 2.8% in Q3, and one of the hosts, Chamath Palihapitiya, describes what he sees as going on:

“This is where you can get a little confused by data. Jason, this is net outlays. And that’s different from total gross government spending, which also includes QE… So just to be clear about what’s happening, 85% of this quarter’s GDP was induced by the government. If you sub it out, so take 2.8% and multiply it by 0.15, that is the true growth X the United States government that exists in the United States economy today. Sacks, your thoughts here on the GDP, obviously looks pretty good for Biden-Harris to have all these stats going in their favor, but there is the caveat obviously about the government spending in there.”

“This is where you can get a little confused about data,” yeah, okay big guy. Let’s see who is confused here.

Putting aside the comment about quantitative easing, which feels irrelevant, I left his office and went straight to the Department of Commerce’s website, where the Bureau of Economic Analysis publishes GDP estimates. The third-quarter advance estimate’s table 2 provides the information we’re looking for and, in fact, is the source of Chamath’s graph.

You can see the Macro-101 equation recreated here. All of these subcategories (personal consumption + investment + net exports + government consumption) add up to the quarter’s 2.8% (2.82% to be precise) GDP growth.

Of that total 2.82 percentage points of GDP growth, 0.85 percentage points came from government spending. That means 0.85 / 2.82 = 30.1% of Q3 GDP growth came from government spending, not 85%.

If you look closely at Chamath’s chart, you can tell that he’s using this exact data source to develop his gross misinterpretation of the data.

So, Chamath’s thesis that “if you back out the percentage of government consumption that is included in GDP, you start to see a very different picture, which is that over the last two and a half years, all of the economic gains under the Biden administration have largely been through government consumption” is total hogwash. The claim that makes up the entire talking point of this initial segment of the show is a misreading of the data that I, a random nonexpert guy, noticed and disproved in ten minutes of research and writing this up.

Looking at government expenditures as a proportion of GDP over time, you can see that the current period is nothing new—in fact, it’s typical for the post-Great Recession era, roughly in line with government spending from the late Obama years through Trump’s presidency, pre-COVID.

Was this gross incompetence or purposeful deception? I’m not sure. But I know that I won’t be tuning in for the next episode of All-In to find out. I will not fall prey to Gell-Mann Amnesia. In my first and only 15 minutes of watching, Chamath’s confidence in making this false claim, coupled with his co-hosts’ complete lack of critical pushback, suggests to me that these kinds of mistakes happen often enough that these guys’ content isn’t worth consuming.

...

Read the original on passingtime.substack.com »

5 259 shares, 58 trendiness

SpaceX

On its flight to the International Space Station, Dragon executes a series of burns that position the vehicle progressively closer to the station before it performs final docking maneuvers, followed by pressurization of the vestibule, hatch opening, and crew ingress.

Falcon 9’s first stage lofts Dragon to orbit. Falcon 9’s first and second stages separate. The second stage accelerates Dragon to orbital velocity.

Dragon separates from Falcon 9’s second stage and performs initial orbit activation and checkouts of propulsion, life support, and thermal control systems.

Dragon performs delta-velocity orbit raising maneuvers to catch up with the International Space Station.

Dragon establishes a communication link with the International Space Station and performs its final orbit raising delta-velocity burn.

Dragon establishes relative navigation to the International Space Station and arrives along the docking axis, initiating an autonomous approach.

Dragon performs final approach and docks with the International Space Station, followed by pressurization, hatch open, and crew ingress.

...

Read the original on www.spacex.com »

6 242 shares, 28 trendiness

This page requires JavaScript.


...

Read the original on security.apple.com »

7 238 shares, 10 trendiness

Only 5.3% of welders in the US are women. After years as a writing professor, I became one − here’s what I learned

Although I have a good gig as a full professor at Iowa State University, I’ve daydreamed about learning a trade — something that required both my mind and my hands.

So in 2018, I started night courses in welding at Des Moines Area Community College. For three years, I studied different types of welding and during the day worked on a book about the communication between welding teachers and students. I wasn’t the only woman who became interested in trades work during this time. Recognizing the good pay and job security, U.S. women have moved in greater numbers into skilled trades such as welding and fabrication within the past 10 years.

From 2017 to 2022, the number of women in trades rose from about 241,000 to nearly 354,000. That’s an increase of about 47%. Even so, women still constitute just 5.3% of welders in the United States.

When I received my diploma in welding in May 2022, I’d already found the place I wanted to work: Howe’s Welding and Metal Fabrication. I’d met the owner, Jim Howe, when I visited his three-man shop in Ames, Iowa, in January 2022 for research on a second book about communication in skilled trades.

Howe’s shop focuses on repairs and one-off fabrication, not large-scale production of single items. Under Howe’s tutelage, I’ve fabricated skis for the machines that make the rumble strips in the road, shepherd’s hooks for bird feeders, fence poles and stainless-steel lampshade frames. I’ve repaired trailers, wheelchair ramps, office chairs and lawn mowers.

Both my experience at Howe’s and my research at nine other fabrication facilities in Iowa have shown me that — at least for the time being — tradeswomen must find workarounds for commonly encountered challenges. Some of these challenges are physical. These could include being unable to easily reach or move necessary material and tools. Or they could be emotional, such as encountering sexism. As I explore in my forthcoming book, “Learning Skilled Trades in the Workplace,” this is true even in a welcoming environment like Howe’s shop, where I work with a supportive and helpful boss and co-workers.

Being a tradeswoman means being scrutinized for competence. One of the tradeswomen I interviewed for the book told me this story about being tested by more experienced tradesmen:

“I remember them tacking together a couple of pieces of metal for me and saying, ‘Okay, I want you to weld a six millimeter weld here and an eight millimeter weld here,’ and I was so nervous because these are the guys that I’m going to work with, and I just was so nervous and I laid down the welds and put my hood up and the guy goes, ‘Well, goddamn, bitch can weld,’ and I was like, ‘Oh my god, thank god.’”

I’ve felt this same scrutiny from Howe’s customers. Once, two customers watched me as I used the ironworker to punch ovals in rectangular tubing. I had to step on the pedal to lower the punch, find the indentation of the spot to punch, hold a combination square against the metal to ensure the oblong shape was parallel to the tubing’s edge, step on the pedal and pull the stripper toward me.

I could feel my legs turn to jelly as I performed the steps and — as I perceived it — represented the trade competence of all womankind. I’m resentful of these silent evaluations, particularly when I’m learning something new and trying to keep all my fingers.

The standards established by the Occupational Safety and Health Administration, or OSHA, don’t necessarily account for all the physicality of trades work. On the day Jim told me to bend 20 pieces of ½-inch round stock, I had to use all my weight to pull the Hossfeld bender’s arm to make the S shapes.

The 20 S hooks would hang on a bar and hold the 18 come-alongs that Jim had accumulated. Tired after I’d finished all the bending, I sighed as Jim told me to hang all the come-alongs on a mobile rack he had bought at auction for just this purpose.

I had to squat to pick each one up and use my legs and then arms to lift each to a newly made hook. But I didn’t complain. Stoicism is a workaround to credibility.

My interactions with Howe’s customers have been peppered with low-grade sexism. Trying to determine the reason for my presence, one customer asked me, “Are you the new secretary?”

Another man commented on my appearance, comparing me to my co-worker: “You’re better looking than the guy I talked to before.” Such harassment remains common for tradeswomen and ranges from mild, to violent, to just plain creepy, as when one man, paying his bill at the front desk, whispered, “Your hands are dirty.”

Women in trades have reported encounters with customers who doubted their competence and who refused to deal with them, seeking a man instead.

Some customers at Howe’s fit this pattern. I’ve noticed that if I’m at the front desk with a male co-worker, men will often look past me and address him, even though I’m older and, as far as they know, more experienced. Other customers like to tell me how to do my job.

One man, watching me while I cut 8-foot lengths of tubing for him, told me that I could simply hook my tape measure over the saw blade and subtract ⅛-inch to find the correct length. Piqued after I explained why his method wouldn’t work for a precise measurement, he responded by quizzing me on something I wasn’t likely to know: the purpose of the black diamonds on my tape measure.

The man in the audience at the academic conference who wants to lecture rather than ask a question of the woman who is the speaker has become a trope. The pontificating metal-shop customer should be, too. Like other tradeswomen, I’ve learned to work around unwanted comments, including uninvited conversations with men bent on signaling their expertise.

My soon-to-be-published book doesn’t focus solely or even mostly on my experiences as a woman in a welding and fabrication shop. Rather, it looks at the nonlinear process of learning skilled trades — a process that is, for tradeswomen, sometimes frustrated by scrutiny, physical challenges and sexism, which require workarounds.

Nevertheless, along this journey, I’ve leaned on the strength of the tradeswomen before me. Although these women have been “alone in a crowd,” they’ve consistently worked around challenges toward broader and deeper expertise.

...

Read the original on theconversation.com »

8 232 shares, 29 trendiness

Switch 2 will be backwards compatible with Switch, Nintendo confirms

Nintendo has confirmed that the successor to the Nintendo Switch will be backward compatible with the Nintendo Switch.

In a post on X, a message from Nintendo president Shuntaro Furukawa also announced that further information about the successor to the Nintendo Switch would come “at a later date.”

“This is Furukawa,” the message reads. “At today’s Corporate Management Policy Briefing, we announced that Nintendo Switch software will also be playable on the successor to Nintendo Switch.

“Nintendo Switch Online will be available on the successor to Nintendo Switch as well. Further information about the successor to Nintendo Switch, including its compatibility with Nintendo Switch, will be announced at a later date.”

The post also confirmed that Nintendo Switch Online would be available on the successor console. No further details on its implementation were announced.

Earlier today, Nintendo reiterated that it still intends to announce its next console hardware before the end of its current fiscal year, which concludes on March 31, 2025.

President Shuntaro Furukawa made the comments during an online press conference on Tuesday, following the publication of Nintendo’s latest earnings results, but the executive did not add any additional details.

According to a report, developers have been briefed not to expect Nintendo’s next console to launch before April 2025.

“No developer I’ve spoken to expects it to be launching this financial year,” said GI.biz journalist Chris Dring. “In fact, they’ve been told not to expect it in the [current] financial year. A bunch of people I spoke to hope it’s out in April or May time, still early next year, not late.

“I don’t think any of us wants a late launch for Switch 2 because we all want a new Nintendo console, everyone gets very excited for it, and we don’t want that crunch of Grand Theft Auto 6 and Switch and all that kind of stuff on top of each other.”

Having launched in March 2017, Switch is in its eighth year on the market. In July, it surpassed the Famicom as the Nintendo console with the longest lifespan before being replaced.

...

Read the original on www.videogameschronicle.com »

9 218 shares, 8 trendiness

Mozilla is eliminating its advocacy division, which fought for a free and open web


...

Read the original on www.theverge.com »

10 201 shares, 12 trendiness

Why the deep learning boom caught almost everyone by surprise

During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.

Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.

I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.

Rather, they were creating a new image dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories.

Li tells the story of ImageNet in her recent memoir, The Worlds I See. As she worked on the project, she faced a lot of skepticism from friends and colleagues.

“I think you’ve taken this idea way too far,” a mentor told her a few months into the project in 2007. “The trick is to grow with your field. Not to leap so far ahead of it.”

It wasn’t just that building such a large dataset was a massive logistical challenge. People doubted the machine learning algorithms of the day would benefit from such a vast collection of images.

“Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”

Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students—and the ImageNet project—with her to California.

ImageNet received little attention for the first couple of years after its release in 2009. But in 2012, a team from the University of Toronto trained a neural network on the ImageNet dataset, achieving unprecedented performance in image recognition. That groundbreaking AI model, dubbed AlexNet after lead author Alex Krizhevsky, kicked off the deep learning boom that has continued until the present day.

AlexNet would not have succeeded without the ImageNet dataset. AlexNet also would not have been possible without a platform called CUDA that allowed Nvidia’s graphics processing units (GPUs) to be used in non-graphics applications. Many people were skeptical when Nvidia announced CUDA in 2006.

So the AI boom of the last 12 years was made possible by three visionaries who pursued unorthodox ideas in the face of widespread criticism. One was Geoffrey Hinton, a University of Toronto computer scientist who spent decades promoting neural networks despite near-universal skepticism. The second was Jensen Huang, the CEO of Nvidia, who recognized early that GPUs could be useful for more than just graphics.

The third was Fei-Fei Li. She created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.

A neural network is a network of thousands, millions, or even billions of neurons. Each neuron is a mathematical function that produces an output based on a weighted average of its inputs.

Suppose you want to create a network that can identify handwritten decimal digits like the number two in the red square above. Such a network would take in an intensity value for each pixel in an image and output a probability distribution over the ten possible digits—0, 1, 2, and so forth.
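
In my own notation (not the article’s), a single neuron computes a weighted sum of its inputs plus a bias and passes it through a nonlinearity; one common way, assumed here rather than stated in the article, to turn the final layer’s raw scores z_0 … z_9 into that probability distribution is a softmax:

y = \sigma\Big(\sum_i w_i x_i + b\Big),
\qquad
p(\text{digit}=k) = \frac{e^{z_k}}{\sum_{j=0}^{9} e^{z_j}}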

To train such a network, you first initialize it with random weights. Then you run it on a sequence of example images. For each image, you train the network by strengthening the connections that push the network toward the right answer (in this case, a high probability value for the “2” output) and weakening connections that push toward a wrong answer (a low probability for “2” and high probabilities for other digits). If trained on enough example images, the model should start to predict a high probability for “2” when shown a two—and not otherwise.

In the late 1950s, scientists started to experiment with basic networks that had a single layer of neurons. However, their initial enthusiasm cooled as they realized that such simple networks lacked the expressive power required for complex computations.

Deeper networks—those with multiple layers—had the potential to be more versatile. But in the 1960s, no one knew how to train them efficiently. This was because changing a parameter somewhere in the middle of a multi-layer network could have complex and unpredictable effects on the output.

So by the time Hinton began his career in the 1970s, neural networks had fallen out of favor. Hinton wanted to study them, but he struggled to find an academic home to do so. Between 1976 and 1986, Hinton spent time at four different research institutions: Sussex University, the University of California San Diego (UCSD), a branch of the UK Medical Research Council, and finally Carnegie Mellon, where he became a professor in 1982.

In a landmark 1986 paper, Hinton teamed up with two of his former colleagues at UCSD, David Rumelhart and Ronald Williams, to describe a technique called backpropagation for efficiently training deep neural networks.

Their idea was to start with the final layer of the network and work backwards. For each connection in the final layer, the algorithm computes a gradient—a mathematical estimate of whether increasing the strength of that connection would push the network toward the right answer. Based on these gradients, the algorithm adjusts each parameter in the model’s final layer.

The algorithm then propagates these gradients backwards to the second-to-last layer. A key innovation here is a formula—based on the chain rule from high school calculus—for computing the gradients in one layer based on gradients in the following layer. Using these new gradients, the algorithm updates each parameter in the second-to-last layer of the model. Then the gradients get propagated backwards to the third-to-last layer and the whole process repeats once again.
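
In standard textbook notation (mine, not the article’s), write z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)} for layer l’s pre-activations and \delta^{(l)} for the gradient of the loss L with respect to z^{(l)}. Then the chain-rule step that carries gradients from one layer back to the previous one, the gradient for that layer’s weights, and the small per-round update look roughly like this:

\delta^{(l-1)} = \big(W^{(l)}\big)^{\top} \delta^{(l)} \odot \sigma'\big(z^{(l-1)}\big),
\qquad
\frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \big(a^{(l-1)}\big)^{\top},
\qquad
W^{(l)} \leftarrow W^{(l)} - \eta \frac{\partial L}{\partial W^{(l)}}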

The algorithm only makes small changes to the model in each round of training. But as the process is repeated over thousands, millions, billions, or even trillions of training examples, the model gradually becomes more accurate.

Hinton and his colleagues weren’t the first to discover the basic idea of backpropagation. But their paper popularized the method. As people realized it was now possible to train deeper networks, it triggered a new wave of enthusiasm for neural networks.

Hinton moved to the University of Toronto in 1987 and began attracting young researchers who wanted to study neural networks. One of the first was the French computer scientist Yann LeCun, who did a year-long postdoc with Hinton before moving to Bell Labs in 1988.

Hinton’s backpropagation algorithm allowed LeCun to train models deep enough to perform well on real-world tasks like handwriting recognition. By the mid-1990s, LeCun’s technology was working so well that banks started to use it for processing checks.

“At one point, LeCun’s creation read more than 10 percent of all checks deposited in the United States,” wrote Cade Metz in his 2022 book Genius Makers.

But when LeCun and other researchers tried to apply neural networks to larger and more complex images, it didn’t go well. Neural networks once again fell out of fashion, and some researchers who had focused on neural networks moved on to other projects.

Hinton never stopped believing that neural networks could outperform other machine learning methods. But it would be many years before he’d have access to enough data and computing power to prove his case.

The brain of every personal computer is a central processing unit (CPU). These chips are designed to perform calculations in order, one step at a time. This works fine for conventional software like Windows and Office. But some video games require so many calculations that they strain the capabilities of CPUs. This is especially true of games like Quake, Call of Duty, and Grand Theft Auto that render three-dimensional worlds many times per second.

So gamers rely on GPUs to accelerate performance. Inside a GPU are many execution units—essentially tiny CPUs—packaged together on a single chip. During gameplay, different execution units draw different areas of the screen. This parallelism enables better image quality and higher frame rates than would be possible with a CPU alone.

Nvidia invented the GPU in 1999 and has dominated the market ever since. By the mid-2000s, Nvidia CEO Jensen Huang suspected that the massive computing power inside a GPU would be useful for applications beyond gaming. He hoped scientists could use it for compute-intensive tasks like weather simulation or oil exploration.

So in 2006, Nvidia announced the CUDA platform. CUDA allows programmers to write “kernels,” short programs designed to run on a single execution unit. Kernels allow a big computing task to be split up into bite-sized chunks that can be processed in parallel. This allows certain kinds of calculations to be completed far faster than with a CPU alone.

But there was little interest in CUDA when it was first introduced, wrote Steven Witt in the New Yorker last year:

When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing. “They were spending a fortune on this new chip architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said. “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.” Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008 Nvidia’s stock price had declined by seventy per cent… Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia’s depressed stock price would make it a target for corporate raiders.

Huang wasn’t specifically thinking about AI or neural networks when he created the CUDA platform. But it turned out that Hinton’s backpropagation algorithm could easily be split up into bite-sized chunks. And so training neural networks turned out to be a killer app for CUDA.

According to Witt, Hinton was quick to recognize the potential of CUDA:

In 2009, Hinton’s research group used Nvidia’s CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?’” Hinton told me. “They said no.”

Despite the snub, Hinton and his graduate students, Alex Krizhevsky and Ilya Sutskever, obtained a pair of Nvidia GTX 580 GPUs for the AlexNet project. Each GPU had 512 execution units, allowing Krizhevsky and Sutskever to train a neural network hundreds of times faster than would be possible with a CPU. This speed allowed them to train a larger model—and to train it on many more training images. And they would need all that extra computing power to tackle the massive ImageNet dataset.

Fei-Fei Li wasn’t thinking about either neural networks or GPUs as she began a new job as a computer science professor at Princeton in January of 2007. While earning her PhD at Caltech, she had built a dataset called Caltech 101 that had 9,000 images across 101 categories.

That experience had taught her that computer vision algorithms tended to perform better with larger and more diverse training datasets. Not only had Li found her own algorithms performed better when trained on Caltech 101, other researchers started training their models using Li’s dataset and comparing their performance to one another. This turned Caltech 101 into a benchmark for the field of computer vision.

So when she got to Princeton, Li decided to go much bigger. She became obsessed with an estimate by vision scientist Irving Biederman that the average person recognizes roughly 30,000 different kinds of objects. Li started to wonder if it would be possible to build a truly comprehensive image dataset—one that included every kind of object people commonly encounter in the physical world.

A Princeton colleague told Li about WordNet, a massive database that attempted to catalog and organize 140,000 words. Li called her new dataset ImageNet, and she used WordNet as a starting point for choosing categories. She eliminated verbs and adjectives as well as intangible nouns like “truth.” That left a list of 22,000 countable objects, ranging from ambulance to zucchini.

She planned to take the same approach she’d taken with the Caltech 101 dataset: use Google’s image search to find candidate images, then have a human being verify them. For the Caltech 101 dataset, Li had done this herself over the course of a few months. This time she would need more help. She planned to hire dozens of Princeton undergraduates to help her choose and label images.

But even after heavily optimizing the labeling process—for example, pre-downloading candidate images so they’re instantly available for students to review—Li and her graduate student, Jia Deng, calculated it would take more than 18 years to select and label millions of images.

The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. Not only was AMT’s international workforce more affordable than Princeton undergraduates, the platform was far more flexible and scalable. Li’s team could hire as many people as they needed, on demand, and pay them only as long as they had work available.

AMT cut the time needed to complete ImageNet down from 18 to two years. Li writes that her lab spent two years “on the knife-edge of our finances” as they struggled to complete the ImageNet project. But they had enough funds to pay three people to look at each of the 14 million images in the final data set.

ImageNet was ready for publication in 2009, and Li submitted it to the Conference on Computer Vision and Pattern Recognition, which was held in Miami that year. Their paper was accepted, but it didn’t get the kind of recognition Li hoped for.

“ImageNet was relegated to a poster session,” Li writes. “This meant that we wouldn’t be presenting our work in a lecture hall to an audience at a predetermined time, but would instead be given space on the conference floor to prop up a large-format print summarizing the project in hopes that passersby might stop and ask questions… After so many years of effort, this just felt anticlimactic.”

To generate public interest, Li turned ImageNet into a competition. Realizing that the full dataset might be too unwieldy to distribute to dozens of contestants, she created a much smaller (but still massive) dataset with 1,000 categories and 1.4 million images.

The first year’s competition in 2010 generated a healthy amount of interest, with 11 teams participating. The winning entry was based on support vector machines. Unfortunately, Li writes, it was “only a slight improvement over cutting-edge work found elsewhere in our field.”

The second year of the ImageNet competition attracted fewer entries than the first. The winning entry in 2011 was another support vector machine, and it just barely improved on the performance of the 2010 winner. Li started to wonder if the critics had been right. Maybe “ImageNet was too much for most algorithms to handle.”

“For two years running, well-worn algorithms had exhibited only incremental gains in capabilities, while true progress seemed all but absent,” Li writes. “If ImageNet was a bet, it was time to start wondering if we’d lost.”

But when Li reluctantly staged the competition a third time in 2012, the results were totally different. Geoff Hinton’s team was the first to submit a model based on a deep neural network. And its top-5 accuracy was 85 percent—10 percentage points better than the 2011 winner.

Li’s initial reaction was incredulity: “Most of us saw the neural network as a dusty artifact encased in glass and protected by velvet ropes.”

The ImageNet winners were scheduled to be announced at the European Conference on Computer Vision in Florence, Italy. Li, who had a baby at home in California, was planning to skip the event. But when she saw how well AlexNet had done on her dataset, she realized this moment would be too important to miss: “I settled reluctantly on a twenty-hour slog of sleep deprivation and cramped elbow room.”

On an October day in Florence, Alex Krizhevsky presented his results to a standing-room-only crowd of computer vision researchers. Fei-Fei Li was in the audience. So was Yann LeCun.

Cade Metz reports that after the presentation, LeCun stood up and called AlexNet “an unequivocal turning point in the history of computer vision. This is proof.”

The success of AlexNet vindicated Hinton’s faith in neural networks, but it was arguably an even bigger vindication for LeCun.

AlexNet was a convolutional neural network, a type of neural network that LeCun had developed 20 years earlier to recognize handwritten digits on checks. (For more details on how CNNs work, see the in-depth explainer I wrote for Ars Technica in 2018.) Indeed, there were few architectural differences between AlexNet and LeCun’s image recognition networks from the 1990s.

AlexNet was simply far larger. In a 1998 paper, LeCun described a document recognition network with seven layers and 60,000 trainable parameters. AlexNet had eight layers, but these layers had 60 million trainable parameters.

LeCun could not have trained a model that large in the early 1990s because there were no computer chips with as much processing power as a 2012-era GPU. Even if LeCun had managed to build a big enough supercomputer, he would not have had enough images to train it properly. Collecting those images would have been hugely expensive in the years before Google and Amazon Mechanical Turk.

And this is why Fei-Fei Li’s work on ImageNet was so consequential. She didn’t invent convolutional networks or figure out how to make them run efficiently on GPUs. But she provided the training data that large neural networks needed to reach their full potential.

The technology world immediately recognized the importance of AlexNet. Hinton and his students formed a shell company with the goal to be “acquihired” by a big tech company. Within months, Google purchased the company for $44 million. Hinton worked at Google for the next decade while retaining his academic post in Toronto. Ilya Sutskever spent a few years at Google before becoming a cofounder of OpenAI.

AlexNet also made Nvidia GPUs the industry standard for training neural networks. In 2012, the market valued Nvidia at less than $10 billion. Today, Nvidia is one of the most valuable companies in the world, with a market capitalization north of $3 trillion. That high valuation is driven mainly by overwhelming demand for GPUs like the H100 that are optimized for training neural networks.

“That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time,” Li said in a September interview at the Computer History Museum. “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.”

Today leading AI labs believe the key to progress in AI is to train huge models on vast data sets. Big technology companies are in such a hurry to build the data centers required to train larger models that they’ve started to lease entire nuclear power plants to provide the necessary power.

You can view this as a straightforward application of the lessons of AlexNet. But I wonder if we ought to draw the opposite lesson from AlexNet: that it’s a mistake to become too wedded to conventional wisdom.

“Scaling laws” have had a remarkable run in the 12 years since AlexNet, and perhaps we’ll see another generation or two of impressive results as the leading labs scale up their foundation models even more.

But we should be careful not to let the lessons of AlexNet harden into dogma. I think there’s at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we’re going to need a new generation of stubborn nonconformists to notice that the old approach isn’t working and try something different.

...

Read the original on www.understandingai.org »
