10 interesting stories served every morning and every evening.




1 616 shares, 31 trendiness

Calculus Made Easy

...

Read the original on calculusmadeeasy.org »

2 469 shares, 17 trendiness

randar-explanation/README.md at master · spawnmason/randar-explanation


...

Read the original on github.com »

3 431 shares, 108 trendiness

Daniel Dennett (1942-2024)


news for & about the phi­los­o­phy pro­fes­sion

Daniel Dennett, pro­fes­sor emer­i­tus of phi­los­o­phy at Tufts University, well-known for his work in phi­los­o­phy of mind and a wide range of other philo­soph­i­cal ar­eas, has died.

Professor Dennett wrote ex­ten­sively about is­sues re­lated to phi­los­o­phy of mind and cog­ni­tive sci­ence, es­pe­cially con­scious­ness. He is also rec­og­nized as hav­ing made sig­nif­i­cant con­tri­bu­tions to the con­cept of in­ten­tion­al­ity and de­bates on free will. Some of Professor Dennett’s books in­clude Content and Consciousness (1969), Brainstorms: Philosophical Essays on Mind and Psychology (1981), The Intentional Stance (1987), Consciousness Explained (1992), Darwin’s Dangerous Idea (1995), Breaking the Spell (2006), and From Bacteria to Bach and Back: The Evolution of Minds (2017). He pub­lished a mem­oir last year en­ti­tled I’ve Been Thinking. There are also sev­eral books about him and his ideas. You can learn more about his work here.

Professor Dennett held a po­si­tion at Tufts University for nearly all his ca­reer. Prior to this, he held a po­si­tion at the University of California, Irvine from 1965 to 1971. He also held vis­it­ing po­si­tions at Oxford, Harvard, Pittsburgh, and other in­sti­tu­tions dur­ing his time at Tufts University. Professor Dennett was awarded his PhD from the University of Oxford in 1965 and his un­der­grad­u­ate de­gree in phi­los­o­phy from Harvard University in 1963.

Professor Dennett is the recipient of several awards and prizes including the Jean Nicod Prize, the Mind and Brain Prize, and the Erasmus Prize. He also held a Fulbright Fellowship, two Guggenheim Fellowships, and a Fellowship at the Center for Advanced Study in Behavioral Sciences. An outspoken atheist, Professor Dennett was dubbed one of the “Four Horsemen of New Atheism”. He was also a Fellow of the Committee for Skeptical Inquiry, an honored Humanist Laureate of the International Academy of Humanism, and was named Humanist of the Year by the American Humanist Association.

The fol­low­ing in­ter­view with Professor Dennett was recorded last year:

Related: Philosophers: Stop Being Self-Indulgent and Start Being Like Daniel Dennett, says Daniel Dennett“. (Other DN posts on Dennett can be found here.)

The eth­i­cal aca­d­e­mic should be op­posed to most of our cur­rent grad­ing prac­tices, but they still need to grade stu­dents any­way”

– John Danaher (Galway) on the whats, whys, and hows of eth­i­cal grad­ing

Kant saw rea­son’s po­ten­tial as a tool for lib­er­a­tion”

– Susan Neiman (Einstein Forum) in the NYT on why we should cel­e­brate Kant

Assisted evo­lu­tion is… an ac­knowl­edg­ment that there is no step­ping back, no fu­ture in which hu­mans do not pro­foundly shape the lives and fates of wild crea­tures”

– new ways of pro­tect­ing an­i­mals raise ques­tions about what con­ser­va­tion is and what species are

Metaphysics be­gins with the dis­tinc­tion be­tween ap­pear­ance and re­al­ity, be­tween seems and is, and the play con­stantly plays with this dis­tinc­tion”

– Brad Skow (MIT) on the phi­los­o­phy in Hamlet

Beliefs aim at the truth, you say?

– the New Yorker cov­ers work by philoso­phers and oth­ers in an ar­ti­cle about the com­pli­ca­tions of mis­in­for­ma­tion

Philosophical the­o­ries are very much like pictures’ or stories’ and… philo­soph­i­cal de­bates of­ten come down to temperamental dif­fer­ences’”

– Peter West (Northeastern U. London) on the metaphi­los­o­phy of Margaret MacDonald

The swift­ness and ease of the tech­nol­ogy sep­a­rates peo­ple from the re­al­ity of what they are tak­ing part in”

– and there’s a lot go­ing on

Any sur­pris­ing re­sults sci­en­tists achieved, whether they sup­ported or chal­lenged a pre­vi­ous as­sump­tion, were seen as the ul­ti­mate source of aes­thetic plea­sure”

– Milena Ivanova (Cambridge) on the role of aes­thet­ics in sci­ence

I could­n’t have jus­ti­fied spend­ing a ca­reer as an aca­d­e­mic philoso­pher. Not in this world.”

– Nathan J. Robinson on the im­moral­ity of phi­los­o­phy in a time of cri­sis

“Within the ring of light lies what is straightforwardly knowable through common sense or mainstream science” but philosophy lives in “the penumbra of darkness”

– and even as that light grows, says Eric Schwitzgebel (UC Riverside), just beyond it “there will always be darkness” – and philosophy

The sci­en­tific com­mu­nity has gen­er­ally done a poor job of ex­plain­ing to the pub­lic that sci­ence is what is known so far”

– H. Holden Thorp, the ed­i­tor in chief of Science, on why the his­tory and phi­los­o­phy of sci­ence should be part of the sci­ence cur­ricu­lum (via Nathan Nobis)

– Tamar Gendler (Yale) dis­cusses an ex­per­i­men­tal course she taught on phi­los­o­phy and its forms

If you’re go­ing to be a philoso­pher, learn about the world, learn about the sci­ence… Scientists are just as ca­pa­ble of mak­ing philo­soph­i­cal mis­takes… as any lay peo­ple [and] they need the help of in­formed philoso­phers”

I’m cu­ri­ous about why these kinds of places have such a spell­bind­ing aura, and I think it’s be­cause they are ana­log out­liers”

– Evan Selinger (RIT) re­flects on his ob­ses­sion with a small-town fam­ily-run ho­tel that serves sim­ple and de­li­cious food

The story that a sports fan en­gages with is a col­lab­o­ra­tively writ­ten story; [it is] a so­cial en­ter­prise fo­cused around knit­ting in­di­vid­ual games into nar­ra­tive arcs, sto­ries, leg­ends, and char­ac­ter­i­za­tions”

– Peter Kung and Shawn Klein (ASU) on imag­i­na­tion and sports fan­dom

Claude 3 Opus pro­duces ar­gu­ments that don’t sta­tis­ti­cally dif­fer in their per­sua­sive­ness com­pared to ar­gu­ments writ­ten by hu­mans”

– the meth­ods and re­sults of a study on AI per­sua­sive­ness

Limiting virtues [are] virtues that con­strain us in or­der to set us free”

– Sara Hendren (Northeastern), in­spired by David McPherson (Creighton) looks for lim­it­ing virtues in ar­chi­tec­ture

It is not only false but morally mis­lead­ing to de­scribe the re­sult­ing civil­ian deaths as unintentional’ or as what happens in war’”

– Jessica Wolfendale (Case Western) on the tools and tac­tics used in Gaza by Israel’s mil­i­tary

Both were an­a­lyt­i­cal philoso­phers, but their in­tel­lec­tual frame­works and their philo­soph­i­cal ap­proaches were markedly dif­fer­ent”

– Dan Little (UM-Dearborn) on Popper and Parfit

El Salvador seeks philoso­phers (and doc­tors, sci­en­tists, en­gi­neers, artists, and oth­ers)

– the na­tion’s pres­i­dent has of­fered 5000 free pass­ports along with tax ben­e­fits to those an­swer­ing his call

He has awak­ened us to the back­ground prac­tices in our cul­ture, and re­vealed to us that they have no ne­ces­sity, which of­fers us a kind of free­dom we may not have rec­og­nized”

– Mark Ralkowski (GWU) on the phi­los­o­phy of Larry David

I think [NASAs] re­quire­ments are clos­ing the as­tro­naut pro­gram off from im­por­tant in­sights from the hu­man­i­ties and so­cial sci­ences”

– a phi­los­o­phy PhD and US Air Force of­fi­cer on why we should send philoso­phers into space

“Before he was the little guy who spake about teaching of the Superman, he appeared in Nietzsche’s book ‘The Gay Science’” “Who is….?”

– phi­los­o­phy was a cat­e­gory in the sec­ond round of Jeopardy!” ear­lier this week (mouse over the $ to see the an­swers, er ques­tions)

Can phi­los­o­phy be done through nar­ra­tive films like Barbie?”

– that de­pends on what we mean by do­ing phi­los­o­phy, says Tom McClelland (Cambridge)

“There is no moral valence to someone just not liking us.” “There’s a goodness and richness in this sort of predestined suffering.”

– the moral sen­si­bil­i­ties of Lillian Fishman, ad­vice colum­nist at The Point

Philosophers write a lot about friend­ship and love, but they tend to do so in terms that leave out the cen­tral­ity of the heart and heart­felt con­nec­tion”

– as a re­sult, says Stephen Darwall (Yale), we miss some im­por­tant things

Wenar’s al­ter­na­tive to ef­fec­tive al­tru­ism is nei­ther vi­able nor de­sir­able nor in­deed any im­prove­ment on ef­fec­tive al­tru­ism”

While the shal­low pond may be a good model to help us think about our im­me­di­ate du­ties, it is a bad model to help us think about the re­la­tion­ship be­tween would be donors and the suf­fer­ing poor in the con­text of de­vel­op­ment”

– Eric Schliesser (Amsterdam) on Richard Pettigrew on Leif Wenar on ef­fec­tive al­tru­ism

...

Read the original on dailynous.com »

4 355 shares, 25 trendiness

now supports the S3 protocol

Supabase Storage is now of­fi­cially an S3-Compatible Storage Provider. This is one of the most-re­quested fea­tures and is avail­able to­day in pub­lic al­pha. Resumable Uploads are also tran­si­tion­ing from Beta to Generally Available.

The Supabase Storage Engine is fully open source and is one of the few stor­age so­lu­tions that of­fer 3 in­ter­op­er­a­ble pro­to­cols to man­age your files:

* S3 up­loads: for com­pat­i­bil­ity across a plethora of tools

We al­ways strive to adopt in­dus­try stan­dards at Supabase. Supporting stan­dards makes work­loads portable, a key prod­uct prin­ci­ple. The S3 API is un­doubt­edly a stor­age stan­dard, and we’re mak­ing it ac­ces­si­ble to de­vel­op­ers of var­i­ous ex­pe­ri­ence-lev­els.

The S3 pro­to­col is back­wards com­pat­i­ble with our other APIs. If you are al­ready us­ing Storage via our REST or TUS APIs, to­day you can use any S3 client to in­ter­act with your buck­ets and files: up­load with TUS, serve them with REST, and man­age them with the S3 pro­to­col.

The pro­to­col works on the cloud, lo­cal de­vel­op­ment, and self-host­ing. Check out the API com­pat­i­bil­ity in our docs

To au­then­ti­cate with Supabase S3 you have 2 op­tions:

The stan­dard ac­cess_key and se­cret_key cre­den­tials. You can gen­er­ate these from the stor­age set­tings page. This au­then­ti­ca­tion method is widely com­pat­i­ble with tools sup­port­ing the S3 pro­to­col. It is also meant to be used ex­clu­sively server­side since it pro­vides full ac­cess to your Storage re­sources.

We will add scoped ac­cess key cre­den­tials in the near fu­ture which can have ac­cess to spe­cific buck­ets.

User-scoped cre­den­tials with RLS. This takes ad­van­tage of a well-adopted con­cept across all Supabase ser­vices, Row Level Security. It al­lows you to in­ter­act with the S3 pro­to­col by scop­ing stor­age op­er­a­tions to a par­tic­u­lar au­then­ti­cated user or role, re­spect­ing your ex­ist­ing RLS poli­cies. This method is made pos­si­ble by us­ing the Session to­ken header which the S3 pro­to­col sup­ports. You can find more in­for­ma­tion on how to use the Session to­ken mech­a­nism in the doc.
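As a rough sketch of what that looks like in practice (not from the original post), the snippet below points a stock S3 client, boto3, at a Supabase Storage project using the access_key/secret_key option described above. The endpoint URL format, region, bucket name, and keys are placeholder assumptions; check your project’s storage settings for the real values.

import boto3

# Placeholder values: substitute your own project ref, keys, and bucket.
# The endpoint path shown here is an assumption for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<project-ref>.supabase.co/storage/v1/s3",
    aws_access_key_id="your-access-key",
    aws_secret_access_key="your-secret-key",
    region_name="us-east-1",
)

# Ordinary S3 calls then work against Supabase buckets.
s3.upload_file("report.parquet", "analytics", "reports/report.parquet")
for obj in s3.list_objects_v2(Bucket="analytics").get("Contents", []):
    print(obj["Key"], obj["Size"])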

With the sup­port of the S3 pro­to­col, you can now con­nect Supabase Storage to many 3rd-party tools and ser­vices by pro­vid­ing a pair of cre­den­tials which can be re­voked at any time.

You can use pop­u­lar tools for back­ups and mi­gra­tions, such as:

* and any other s3-com­pat­i­ble tool …

Check out our Cyberduck guide here.

S3 com­pat­i­bil­ity pro­vides a nice prim­i­tive for Data Engineers. You can use it with many pop­u­lar tools:

In this ex­am­ple our in­cred­i­ble data an­a­lyst, Tyler, demon­strates how to store Parquet files in Supabase Storage and query them di­rectly us­ing DuckDB:
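The demo itself isn’t reproduced here, but a minimal sketch of the same idea looks something like the following: DuckDB’s httpfs extension is pointed at the Supabase S3 endpoint and queries a Parquet object in place. The endpoint, credentials, and object path are placeholders, not values from the post.

import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# Point DuckDB's S3 support at the Supabase Storage endpoint (placeholder values;
# depending on your DuckDB version, the endpoint may need to be configured differently).
con.execute("SET s3_endpoint='<project-ref>.supabase.co/storage/v1/s3'")
con.execute("SET s3_region='us-east-1'")
con.execute("SET s3_access_key_id='your-access-key'")
con.execute("SET s3_secret_access_key='your-secret-key'")
con.execute("SET s3_url_style='path'")

# Query the Parquet file where it lives, without downloading it first.
print(con.execute(
    "SELECT count(*) FROM read_parquet('s3://analytics/reports/report.parquet')"
).fetchall())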

In ad­di­tion to the stan­dard up­loads and re­sum­able up­loads, we now sup­port mul­ti­part up­loads via the S3 pro­to­col. This al­lows you to max­i­mize up­load through­put by up­load­ing chunks in par­al­lel, which are then con­cate­nated at the end.
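For a sense of what that means client-side, here is one hedged way to exercise parallel multipart uploads through boto3’s transfer layer, which splits a file into chunks and uploads them concurrently. The threshold and concurrency numbers are arbitrary, and s3 is the client from the earlier sketch.

from boto3.s3.transfer import TransferConfig

# Force multipart for anything over 8 MiB and upload up to 8 parts in parallel.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=8)

# `s3` is the boto3 client configured against the Supabase endpoint above.
s3.upload_file("big-backup.tar", "analytics", "backups/big-backup.tar", Config=config)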

Along with the plat­form GA an­nounce­ment, we are also thrilled to an­nounce that re­sum­able up­loads are also gen­er­ally avail­able.

Resumable up­loads are pow­ered by the TUS pro­to­col. The jour­ney to get here was im­mensely re­ward­ing, work­ing closely with the TUS team. A big shoutout to the main­tain­ers of the TUS pro­to­col, @murderlon and @acconut, for their col­lab­o­ra­tive ap­proach to open source.

Supabase contributed some advanced features to the Node implementation of the TUS spec, including distributed locks, max file size, expiration extension and numerous bug fixes:

These fea­tures were es­sen­tial for Supabase, and since the TUS node server is open source, they are also avail­able for you to use. This is an­other core prin­ci­ple: wher­ever pos­si­ble, we use and sup­port ex­ist­ing tools rather than de­vel­op­ing from scratch.

* Cross-bucket transfers: We have added the ability to copy and move objects across buckets, where previously you could do these operations only within the same Supabase bucket.

* Standardized error codes: Error codes have been standardized across the Storage server, making it much easier to branch logic on specific errors. You can find the list of error codes here.

* Multi-tenant migrations: We made significant improvements to how migrations run across all our tenants. This has reduced migration errors across the fleet and enables us to run long-running migrations asynchronously. Stay tuned for a separate blog post with more details.

* Decoupled de­pen­den­cies: Storage is fully de­cou­pled from other Supabase prod­ucts, which means you can run Storage as a stand­alone ser­vice. Get started with this docker-com­pose file.

...

Read the original on supabase.com »

5 338 shares, 44 trendiness

Tesla recalls the Cybertruck for faulty accelerator pedals that can get stuck

Tesla is re­call­ing all 3,878 Cybertrucks that it has shipped to date, due to a prob­lem where the ac­cel­er­a­tor pedal can get stuck, putting dri­vers at risk of a crash, ac­cord­ing to the National Highway Traffic Safety Administration.

The re­call caps a tu­mul­tuous week for Tesla. The com­pany laid off more than 10% of its work­force on Monday, and lost two of its high­est-rank­ing ex­ec­u­tives. A few days later, Tesla asked share­hold­ers to re-vote on CEO Elon Musk’s mas­sive com­pen­sa­tion pack­age that was struck down by a judge ear­lier this year.

Reports of problems with the Cybertruck’s accelerator pedal started popping up in the last few weeks. Tesla even reportedly paused deliveries of the truck while it sorted out the issue. Musk said in a post on X that Tesla was “being very cautious,” and the company reported to NHTSA that it was not aware of any crashes or injuries related to the problem.

The com­pany has now con­firmed to NHTSA that the pedal can dis­lodge, mak­ing it pos­si­ble for it to slide up and get caught in the trim around the footwell.

Tesla said it first received a notice of one of these accelerator pedal incidents from a customer on March 31, and then a second one on April 3. After performing a series of tests, it decided on April 12 to issue a recall after determining that “[a]n unapproved change introduced lubricant (soap) to aid in the component assembly of the pad onto the accelerator pedal,” and that “[r]esidual lubricant reduced the retention of the pad to the pedal.”

Tesla says it will re­place or re­work the ac­cel­er­a­tor pedal on all ex­ist­ing Cybertrucks. It also told NHTSA that it has started build­ing Cybertrucks with a new ac­cel­er­a­tor pedal, and that it’s fix­ing the ve­hi­cles that are in tran­sit or sit­ting at de­liv­ery cen­ters.

While the Cybertruck only first started ship­ping late last year, this is not the ve­hi­cle’s first re­call. But the ini­tial one was mi­nor: Earlier this year, Tesla re­called the soft­ware on all of its ve­hi­cles be­cause the font sizes of its warn­ing lights were too small. The com­pany un­veiled the truck back in 2019.

...

Read the original on techcrunch.com »

6 266 shares, 11 trendiness

The Rust Calling Convention We Deserve · mcyoung

I’m Miguel. I write about com­pil­ers, per­for­mance, and silly com­puter things. I also draw Pokémon.

I will often say that the so-called C ABI is a very bad one, and a relatively unimaginative one when it comes to passing complicated types effectively. A lot of people ask me “ok, what would you use instead”, and I just point them to the Go register ABI, but it seems most people have trouble filling in the gaps of what I mean. This article explains what I mean in detail.

I have dis­cussed call­ing con­ven­tions in the past, but as a re­minder: the call­ing con­ven­tion is the part of the ABI that con­cerns it­self with how to pass ar­gu­ments to and from a func­tion, and how to ac­tu­ally call a func­tion. This in­cludes which reg­is­ters ar­gu­ments go in, which reg­is­ters val­ues are re­turned out of, what func­tion pro­logues/​epi­logues look like, how un­wind­ing works, etc.

This par­tic­u­lar post is pri­mar­ily about x86, but I in­tend to be rea­son­ably generic (so that what I’ve writ­ten ap­plies just as well to ARM, RISC-V, etc). I will as­sume a gen­eral fa­mil­iar­ity with x86 as­sem­bly, LLVM IR, and Rust (but not rustc’s in­ter­nals).

Today, like many other natively compiled languages, Rust defines an unspecified calling convention that lets it call functions however it likes. In practice, Rust lowers to LLVM’s built-in C calling convention, which LLVM’s prologue/epilogue codegen generates calls for.

Rust is fairly con­ser­v­a­tive: it tries to gen­er­ate LLVM func­tion sig­na­tures that Clang could have plau­si­bly gen­er­ated. This has two sig­nif­i­cant ben­e­fits:

* A good probability that debuggers won’t choke on it. This is not a concern on Linux, though, because DWARF is very general and does not bake in the Linux C ABI. We will concern ourselves only with ELF-based systems and assume that debuggability is a nonissue.

* It is less likely to tickle LLVM bugs due to using ABI codegen that Clang does not exercise. I think that if Rust tickles LLVM bugs, we should actually fix them (a very small number of rustc contributors do in fact do this).

However, we are too con­ser­v­a­tive. We get ter­ri­ble code­gen for sim­ple func­tions:

arr is 12 bytes wide, so you’d think it would be passed in registers, but no! It is passed by pointer! Rust is actually more conservative than what the Linux C ABI mandates, because it actually passes the [i32; 3] in registers when extern "C" is requested.

The ar­ray is passed in rdi and rsi, with the i32s packed into reg­is­ters. The func­tion moves rdi into rax, the out­put reg­is­ter, and shifts the up­per half down.

Not only does clang pro­duce patently bad code for pass­ing things by value, but it also knows how to do it bet­ter, if you re­quest a stan­dard call­ing con­ven­tion! We could be gen­er­at­ing way bet­ter code than Clang, but we don’t!

Hereforth, I will de­scribe how to do it.

Let’s suppose that we keep the current calling convention for extern "Rust", but we add a flag -Zcallconv that sets the calling convention used for extern "Rust" when compiling a crate. The supported values will be -Zcallconv=legacy for the current one, and -Zcallconv=fast for the one we’re going to design. We could even let -O set -Zcallconv=fast automatically.

Why keep the old calling convention? Although I did sweep debuggability under the rug, one nice property -Zcallconv=fast will not have: it does not place arguments in the C ABI order, which means that a reader relying on the “Diana’s silk dress cost $89” mnemonic on x86 will get fairly confused.

I am also assuming we may not even support -Zcallconv=fast for some targets, like WASM, where there is no concept of “registers” and “spilling”. It may not even make sense to enable it for debug builds, because it will produce much worse code with optimizations turned off.

There is also a mild wrinkle with function pointers and extern "Rust" {} blocks. Because this flag is per-crate, even though functions can advertise which version of extern "Rust" they use, function pointers have no such luxury. However, calling through a function pointer is slow and rare, so we can simply force them to use -Zcallconv=legacy. We can generate a shim to translate calling conventions as needed.

Similarly, we can, in prin­ci­ple, call any Rust func­tion like this:

However, this mech­a­nism can only be used to call un­man­gled sym­bols. Thus, we can sim­ply force #[no_mangle] sym­bols to use the legacy call­ing con­ven­tion.
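Bending LLVM to Our Will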

In an ideal world, LLVM would pro­vide a way for us to spec­ify the call­ing con­ven­tion di­rectly. E.g., this ar­gu­ment goes in that reg­is­ter, this re­turn goes in that one, etc. Unfortunately, adding a call­ing con­ven­tion to LLVM re­quires writ­ing a bunch of C++.

However, we can get away with specifying our own calling convention by following this procedure:

1. First, determine, for a given target triple, the maximum number of values that can be passed “by register”. I will explain how to do this below.

2. Decide how to pass the return value. It will either fit in the output registers, or it will need to be “returned by reference”, in which case we pass an extra ptr argument to the function (tagged with the sret attribute) and the actual return value of the function is that pointer.

3. Decide which arguments that have been passed by value need to be demoted to being passed by reference. This will be a heuristic, but generally will be approximately “arguments larger than the by-register space”. For example, on x86, this comes out to 176 bytes.

4. Decide which arguments get passed by register, so as to maximize register space usage. This problem is NP-hard (it’s the knapsack problem) so it will require a heuristic. All other arguments are passed on the stack.

5. Generate the function signature in LLVM IR. This will be all of the arguments that are passed by register encoded as various non-aggregates, such as i64, ptr, double, and vector types. What valid choices are for said non-aggregates depends on the target, but the above are what you will generally get on a 64-bit architecture. Arguments passed on the stack will follow the “register inputs”.

6. Generate a function prologue. This is code to decode each Rust-level argument from the register inputs, so that there are %ssa values corresponding to those that would be present when using -Zcallconv=legacy. This allows us to generate the same code for the body of the function regardless of calling convention. Redundant decoding code will be eliminated by DCE passes.

7. Generate a function exit block. This is a block that contains a single phi instruction for the return type as it would be for -Zcallconv=legacy. This block will encode it into the requisite output format and then ret as appropriate. All exit paths through the function should br to this block instead of ret-ing.

8. If a non-polymorphic, non-inline function may have its address taken (as a function pointer), either because it is exported out of the crate or the crate takes a function pointer to it, generate a shim that uses -Zcallconv=legacy and immediately tail-calls the real implementation. This is necessary to preserve function pointer equality.

The main up­shot here is that we need to cook up heuris­tics for fig­ur­ing out what goes in reg­is­ters (since we al­low re­order­ing ar­gu­ments to get bet­ter through­put). This is equiv­a­lent to the knap­sack prob­lem; knap­sack heuris­tics are be­yond the scope of this ar­ti­cle. This should hap­pen early enough that this in­for­ma­tion can be stuffed into rmeta to avoid need­ing to re­com­pute it. We may want to use dif­fer­ent, faster heuris­tics de­pend­ing on -Copt-level. Note that cor­rect­ness re­quires that we for­bid link­ing code gen­er­ated by mul­ti­ple dif­fer­ent Rust com­pil­ers, which is al­ready the case, since Rust breaks ABI from re­lease to re­lease.
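What Is LLVM Willing to Do?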

Assuming we do that, how do we actually get LLVM to pass things in the way we want it to? We need to determine the largest “by register” passing LLVM will permit. The following LLVM program is useful for determining this on a particular version of LLVM:

When you pass an ag­gre­gate by-value to an LLVM func­tion, LLVM will at­tempt to explode” that ag­gre­gate into as many reg­is­ters as pos­si­ble. There are dis­tinct reg­is­ter classes on dif­fer­ent sys­tems. For ex­am­ple, on both x86 and ARM, floats and vec­tors share the same reg­is­ter class (kind of).

The above val­ues are for x86. LLVM will pass six in­te­gers and eight SSE vec­tors by reg­is­ter, and re­turn half as many (3 and 4) by reg­is­ter. Increasing any of the val­ues gen­er­ates ex­tra loads and stores that in­di­cate LLVM gave up and passed ar­gu­ments on the stack.

The val­ues for aarch64-un­known-linux are 8 in­te­gers and 8 vec­tors for both in­puts and out­puts, re­spec­tively.

This is the max­i­mum num­ber of reg­is­ters we get to play with for each class. Anything ex­tra gets passed on the stack.

I rec­om­mend that every func­tion have the same num­ber of by-reg­is­ter ar­gu­ments. So on x86, EVERY -Zcallconv=fast func­tion’s sig­na­ture should look like this:

When passing pointers, the appropriate i64s should be replaced by ptr, and when passing doubles, they replace the vector slots.

But you’re probably saying, “Miguel, that’s crazy! Most functions don’t pass 176 bytes!” And you’d be right, if not for the magic of LLVM’s very well-specified poison semantics.

We can get away with not doing extra work if every argument we do not use is passed poison. Because poison is equal to “the most convenient possible value at the present moment”, when LLVM sees poison passed into a function via register, it decides that the most convenient value is “whatever happens to be in the register already”, and so it doesn’t have to touch that register!

For ex­am­ple, if we wanted to pass a pointer via rcx, we would gen­er­ate the fol­low­ing code.
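; This is a -Zcallconv=fast-style function.
; Note: the <2 x double> vector type for the xmm slots is an assumption; the
; original markup for the vector type was lost in this excerpt.
%Out = type {[3 x i64], [4 x <2 x double>]}

define %Out @load_rcx(
  i64 %rdi, i64 %rsi, i64 %rdx,
  ptr %rcx, i64 %r8, i64 %r9,
  <2 x double> %xmm0, <2 x double> %xmm1,
  <2 x double> %xmm2, <2 x double> %xmm3,
  <2 x double> %xmm4, <2 x double> %xmm5,
  <2 x double> %xmm6, <2 x double> %xmm7
) {
  %load = load i64, ptr %rcx
  %out = insertvalue %Out poison, i64 %load, 0, 0
  ret %Out %out
}

declare ptr @malloc(i64)

define i64 @make_the_call() {
  %1 = call ptr @malloc(i64 8)
  store i64 42, ptr %1
  %2 = call %Out @load_rcx(
    i64 poison, i64 poison, i64 poison,
    ptr %1, i64 poison, i64 poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison
  )
  ; The tail of this listing was cut off in the excerpt; returning the loaded
  ; value is a reconstruction.
  %3 = extractvalue %Out %2, 0, 0
  ret i64 %3
}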

It is perfectly legal to pass poison to a function, if it does not interact with the poisoned argument in any proscribed way. And as we see, load_rcx() receives its pointer argument in rcx, whereas make_the_call() takes no penalty in setting up the call: loading poison into the other thirteen registers compiles down to nothing, so it only needs to load the pointer returned by malloc into rcx.

This gives us al­most to­tal con­trol over ar­gu­ment pass­ing; un­for­tu­nately, it is not to­tal. In an ideal world, the same reg­is­ters are used for in­put and out­put, to al­low eas­ier pipelin­ing of calls with­out in­tro­duc­ing ex­tra reg­is­ter traf­fic. This is true on ARM and RISC-V, but not x86. However, be­cause reg­is­ter or­der­ing is merely a sug­ges­tion for us, we can choose to al­lo­cate the re­turn reg­is­ters in what­ever or­der we want. For ex­am­ple, we can pre­tend the or­der reg­is­ters should be al­lo­cated in is rdx, rcx, rdi, rsi, r8, r9 for in­puts, and rdx, rcx, rax for out­puts.
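%Out = type {[3 x i64], [4 x <2 x double>]}

define %Out @square(
  i64 %rdi, i64 %rsi, i64 %rdx,
  ptr %rcx, i64 %r8, i64 %r9,
  <2 x double> %xmm0, <2 x double> %xmm1,
  <2 x double> %xmm2, <2 x double> %xmm3,
  <2 x double> %xmm4, <2 x double> %xmm5,
  <2 x double> %xmm6, <2 x double> %xmm7
) {
  %sq = mul i64 %rdx, %rdx
  %out = insertvalue %Out poison, i64 %sq, 0, 1
  ret %Out %out
}

define i64 @make_the_call(i64 %0) {
  ; The caller's body was truncated in this excerpt; it is reconstructed here
  ; to match the square(square(%0)) chaining described below, with poison in
  ; every unused register slot.
  %2 = call %Out @square(
    i64 poison, i64 poison, i64 %0,
    ptr poison, i64 poison, i64 poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison
  )
  %3 = extractvalue %Out %2, 0, 1
  %4 = call %Out @square(
    i64 poison, i64 poison, i64 %3,
    ptr poison, i64 poison, i64 poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison
  )
  %5 = extractvalue %Out %4, 0, 1
  ret i64 %5
}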

square generates extremely simple code: the input and output register is rdi, so no extra register traffic needs to be generated. Similarly, when we effectively do @square(@square(%0)), there is no setup between the functions. This is similar to code seen on aarch64, which uses the same register sequence for input and output. We can see that the “naive” version of this IR produces the exact same code on aarch64 for this reason.

Now that we’ve es­tab­lished to­tal con­trol on how reg­is­ters are as­signed, we can turn to­wards max­i­miz­ing use of these reg­is­ters in Rust.

For simplicity, we can assume that rustc has already processed the user’s types into basic aggregates and unions; no enums here! We then have to make some decisions about which portions of the arguments to allocate to registers.

First, re­turn val­ues. This is rel­a­tively straight­for­ward, since there is only one value to pass. The amount of data we need to re­turn is not the size of the struct. For ex­am­ple, [(u64, u32); 2] mea­sures 32 bytes wide. However, eight of those bytes are padding! We do not need to pre­serve padding when re­turn­ing by value, so we can flat­ten the struct into (u64, u32, u64, u32) and sort by size into (u64, u64, u32, u32). This has no padding and is 24 bytes wide, which fits into the three re­turn reg­is­ters LLVM gives us on x86. We de­fine the ef­fec­tive size of a type to be the num­ber of non-un­def bits it oc­cu­pies. For [(u64, u32); 2], this is 192 bits, since it ex­cludes the padding. For bool, this is one. For char this is tech­ni­cally 21, but it’s sim­pler to treat char as an alias for u32.

The rea­son for count­ing bits this way is that it per­mits sig­nif­i­cant com­paction. For ex­am­ple, re­turn­ing a struct full of bools can sim­ply bit-pack the bools into a sin­gle reg­is­ter.

So, a return value is converted to a by-ref return if its effective size is larger than the output register space (on x86, this is three integer registers and four SSE registers, so we get 88 bytes total, or 704 bits).

Argument reg­is­ters are much harder, be­cause we hit the knap­sack prob­lem, which is NP-hard. The fol­low­ing rel­a­tively naive heuris­tic is where I would start, but it can be made in­fi­nitely smarter over time.

First, demote to by-ref any argument whose effective size is larger than the total by-register input space (on x86, 176 bytes or 1408 bits). This means we get a pointer argument instead. This is beneficial to do first, since a single pointer might pack better than the huge struct.

Enums should be replaced by the appropriate discriminant-union pair. For example, Option<i32> is, internally, (union { i32, () }, i1), while Option<Option<i32>> is (union { i32, (), () }, i2). Using a small non-power-of-two integer improves our ability to pack things, since enum discriminants are often quite tiny.

Next, we need to handle unions. Because mucking about with unions’ uninitialized bits behind our backs is allowed, we need to pass a union as an array of u8, unless it only has a single non-empty variant, in which case it is replaced with that variant.

Now, we can pro­ceed to flat­ten every­thing. All of the con­verted ar­gu­ments are flat­tened into their most prim­i­tive com­po­nents: point­ers, in­te­gers, floats, and bools. Every field should be no larger than the small­est ar­gu­ment reg­is­ter; this may re­quire split­ting large types such as u128 or f64.

This big list of prim­i­tives is next sorted by ef­fec­tive size, from small­est to largest. We take the largest pre­fix of this that will fit in the avail­able reg­is­ter space; every­thing else goes on the stack.

If part of a Rust-level in­put is sent to the stack in this way, and that part is larger than a small mul­ti­ple of the pointer size (e.g., 2x), it is de­moted to be­ing passed by pointer-on-the-stack, to min­i­mize mem­ory traf­fic. Everything else is passed di­rectly on the stack in the or­der those in­puts were be­fore the sort. This helps keep re­gions that need to be copied rel­a­tively con­tigu­ous, to min­i­mize calls to mem­cpy.

The things we choose to pass in reg­is­ters are al­lo­cated to reg­is­ters in re­verse size or­der, so e.g. first 64-bit things, then 32-bit things, etc. This is the same lay­out al­go­rithm that repr(Rust) structs use to move all the padding into the tail. Once we get to the bools, those are bit-packed, 64 to a reg­is­ter.
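To make the shape of that selection step concrete, here is a small, non-authoritative Python sketch (not from the article) of the “flatten, sort by effective size, take the largest prefix that fits” rule described above. Sizes are in bits, the budget is the 1408-bit x86 figure from the text, and the field names are made up.

# Sketch of the greedy selection step described above. Each flattened
# primitive is (name, effective_size_in_bits).
REGISTER_BUDGET_BITS = 1408  # 6 integer + 8 SSE registers on x86, per the text

def split_register_args(fields):
    ordered = sorted(fields, key=lambda f: f[1])  # smallest effective size first
    in_registers, used = [], 0
    for i, (name, bits) in enumerate(ordered):
        if used + bits > REGISTER_BUDGET_BITS:
            # Largest fitting prefix found; everything else goes on the stack.
            return in_registers, [n for n, _ in ordered[i:]]
        in_registers.append(name)
        used += bits
    return in_registers, []

# Hypothetical flattened argument list: a bool, a u32 discriminant, two u64 halves.
regs, stack = split_register_args([("flag", 1), ("discr", 32), ("lo", 64), ("hi", 64)])
print(regs, stack)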

Here’s a rel­a­tively com­pli­cated ex­am­ple. My Rust func­tion is as fol­lows:

The code­gen for this func­tion is quite com­plex, so I’ll only cover the pro­logue and epi­logue. After sort­ing and flat­ten­ing, our raw ar­gu­ment LLVM types are some­thing like this:

Everything fits in reg­is­ters! So, what does the LLVM func­tion look like on x86?

Above, !dbg meta­data for the ar­gu­ment val­ues should be at­tached to the in­struc­tion that ac­tu­ally ma­te­ri­al­izes it. This en­sures that gdb does some­thing halfway in­tel­li­gent when you ask it to print ar­gu­ment val­ues.

On the other hand, in cur­rent rustc, it gives LLVM eight pointer-sized pa­ra­me­ters, so it winds up spend­ing all six in­te­ger reg­is­ters, plus two val­ues passed on the stack. Not great!

This is not a com­plete de­scrip­tion of what a com­pletely over-en­gi­neered call­ing con­ven­tion could en­tail: in some cases we might know that we have ad­di­tional reg­is­ters avail­able (such as AVX reg­is­ters on x86). There are cases where we might want to split a struct across reg­is­ters and the stack.

This also is­n’t even get­ting into what re­turns could look like. Results are of­ten passed through sev­eral lay­ers of func­tions via ?, which can re­sult in a lot of re­dun­dant reg­is­ter moves. Often, a Result is large enough that it does­n’t fit in reg­is­ters, so each call in the ? stack has to in­spect an ok bit by load­ing it from mem­ory. Instead, a Result re­turn might be im­ple­mented as an out-pa­ra­me­ter pointer for the er­ror, with the ok vari­ant’s pay­load, and the is ok bit, re­turned as an Option. There are some fussy de­tails with Into calls via ?, but the idea is im­ple­mentable.

Now, be­cause we’re Rust, we’ve also got a trick up our sleeve that C does­n’t (but Go does)! When we’re gen­er­at­ing the ABI that all callers will see (for -Zcallconv=fast), we can look at the func­tion body. This means that a crate can ad­ver­tise the pre­cise ABI (in terms of reg­is­ter-pass­ing) of its func­tions.

This opens the door to more extreme optimization-based ABIs. We can start by simply throwing out unused arguments: if the function never does anything with a parameter, don’t bother spending registers on it.

Another ex­am­ple: sup­pose that we know that an &T ar­gu­ment is not re­tained (a ques­tion the bor­row checker can an­swer at this point in the com­piler) and is never con­verted to a raw pointer (or writ­ten to mem­ory a raw pointer is taken of, etc). We also know that T is fairly small, and T: Freeze. Then, we can re­place the ref­er­ence with the pointee di­rectly, passed by value.

The most obvious candidates for this are APIs like HashMap::get(). If the key is something like an i32, we need to spill that integer to the stack and pass a pointer to it! This results in unnecessary, avoidable memory traffic.

Profile-guided ABI is a step fur­ther. We might know that some ar­gu­ments are hot­ter than oth­ers, which might cause them to be pri­or­i­tized in the reg­is­ter al­lo­ca­tion or­der.

You could even imag­ine a case where a func­tion takes a very large struct by ref­er­ence, but three i64 fields are very hot, so the caller can pre­load those fields, pass­ing them both by reg­is­ter and via the pointer to the large struct. The callee does not see ad­di­tional cost: it had to is­sue those loads any­way. However, the caller prob­a­bly has those val­ues in reg­is­ters al­ready, which avoids some mem­ory traf­fic.

Instrumentation pro­files may even in­di­cate that it makes sense to du­pli­cate whole func­tions, which are iden­ti­cal ex­cept for their ABIs. Maybe they take dif­fer­ent ar­gu­ments by reg­is­ter to avoid costly spills.

This is a bit more ad­vanced (and ranty) than my usual writ­ing, but this is an as­pect of Rust that I find re­ally frus­trat­ing. We could be do­ing so much bet­ter than C++ ever can (because of their ABI con­straints). None of this is new ideas; this is lit­er­ally how Go does it!

So why don’t we? Part of the reason is that ABI codegen is complex, and as I described above, LLVM gives us very few useful knobs. It’s not a friendly part of rustc, and doing things wrong can have nasty consequences for usability. The other part is a lack of expertise. As of writing, only a handful of people contributing to rustc have the necessary grasp of LLVM’s semantics (and mood swings) to emit the Right Code such that we get good codegen and don’t crash LLVM.

Another rea­son is com­pi­la­tion time. The more com­pli­cated the func­tion sig­na­tures, the more pro­logue/​epi­logue code we have to gen­er­ate that LLVM has to chew on. But -Zcallconv is in­tended to only be used with op­ti­miza­tions turned on, so I don’t think this is a mean­ing­ful com­plaint. Nor do I think the pro­jec­t’s Goodhartization of com­pi­la­tion time as a met­ric is healthy… but I do not think this is ul­ti­mately a rel­e­vant draw­back.

I, un­for­tu­nately, do not have the spare time to dive into fix­ing rustc’s ABI code, but I do know LLVM re­ally well, and I know that this is a place where Rust has a low bus fac­tor. For that rea­son, I am happy to pro­vide the Rust com­piler team ex­pert knowl­edge on get­ting LLVM to do the right thing in ser­vice of mak­ing op­ti­mized code faster.

...

Read the original on mcyoung.xyz »

7 217 shares, 11 trendiness

The SeaMonkey® Project

Web-browser, ad­vanced e-mail, news­group and feed client, IRC chat, and HTML edit­ing made sim­ple—all your Internet needs in one ap­pli­ca­tion.


The SeaMonkey pro­ject is a com­mu­nity ef­fort to de­velop the SeaMonkey Internet Application Suite (see be­low). Such a soft­ware suite was pre­vi­ously made pop­u­lar by Netscape and Mozilla, and the SeaMonkey pro­ject con­tin­ues to de­velop and de­liver high-qual­ity up­dates to this con­cept. Containing an Internet browser, email & news­group client with an in­cluded web feed reader, HTML ed­i­tor, IRC chat and web de­vel­op­ment tools, SeaMonkey is sure to ap­peal to ad­vanced users, web de­vel­op­ers and cor­po­rate users.

Under the hood, SeaMonkey uses much of the same Mozilla Firefox source code which powers such products as Thunderbird. Legal backing is provided by the SeaMonkey Association (SeaMonkey e.V.).

The SeaMonkey project is proud to present SeaMonkey 2.53.18.2: The new release of the all-in-one Internet suite is available for free download now!

2.53.18.2 is a minor bugfix release on the 2.53.x branch and contains a crash fix and a few other fixes to the application from the underlying platform code.

SeaMonkey 2.53.18.2 is available in 23 languages, for Windows, macOS x64 and Linux.

Automatic upgrades from previous 2.53.x versions are enabled for this release, but if you have problems with it please download the full installer from the downloads section and install SeaMonkey 2.53.18.2 manually over the previous version.

For a more complete list of major changes in SeaMonkey 2.53.18.2, see the What’s New in SeaMonkey 2.53.18.2 section of the Release Notes, which also contains a list of known issues and answers to frequently asked questions. For a more general overview of the SeaMonkey project (and screen shots!), visit www.seamonkey-project.org.

We en­cour­age users to get in­volved in dis­cussing and re­port­ing prob­lems as well as fur­ther im­prov­ing the prod­uct.

The SeaMonkey project is proud to present SeaMonkey 2.53.18 Beta 1: The new beta test release of the all-in-one Internet suite is available for free download now!

2.53.18 will be an incremental update on the 2.53.x branch and incorporates a number of enhancements, changes and fixes to the application as well as those from the underlying platform code. Support for parsing and processing newer regexp expressions has been added, helping with web compatibility on more than a few sites. Crash reporting has been switched over to BugSplat. We also added many fixes and backports for overall platform stability.

Before installing the new version make a full backup of your profile and thoroughly read and follow the Release Notes. We encourage testers to get involved in discussing and reporting problems as well as further improving the product.

SeaMonkey 2.53.18 Beta 1 is avail­able in 23 lan­guages, for Windows, ma­cOS x64 and Linux.

Attention ma­cOS users! The cur­rent SeaMonkey re­lease crashes dur­ing startup af­ter up­grad­ing to ma­cOS 13 Ventura. Until we have a fix we ad­vise you not to up­grade your ma­cOS in­stal­la­tion to Ventura. No us­able crash in­for­ma­tion is gen­er­ated and this might take a bit longer than usual to fix. This is not a prob­lem with Monterey 12.6.1 or any lower sup­ported ma­cOS ver­sion so might even be an Apple bug.

SeaMonkey has inherited the successful all-in-one concept of the original Netscape Communicator and continues that product line based on the modern, cross-platform architecture provided by the Mozilla project.

* The Internet browser at the core of the SeaMonkey Internet Application Suite uses the same rendering engine and application platform as Mozilla Firefox, with popular features like tabbed browsing, feed detection, popup blocking, smart location bar, find as you type and a lot of other functionality for a smooth web experience.

* SeaMonkey’s Mail and Newsgroups client shares lots of code with Thunderbird and features adaptive Junk mail filtering, tags and mail views, web feeds reading, tabbed messaging, multiple accounts, S/MIME, address books with LDAP support and is ready for both private and corporate use.

* Additional components include an easy-to-use HTML Editor, the ChatZilla IRC chat application and web development tools like a DOM Inspector.

* If that’s still not enough, SeaMonkey can be extended with numerous Add-Ons that provide additional functionality and customization for a complete Internet experience.

...

Read the original on www.seamonkey-project.org »

8 206 shares, 29 trendiness

Blurmatic

...

Read the original on www.blurmatic.com »

9 203 shares, 16 trendiness

Discover the vast ranges of our visible and invisible world.

Scale of Universe is an in­ter­ac­tive ex­pe­ri­ence to in­spire peo­ple to learn about the vast ranges of the vis­i­ble and in­vis­i­ble world. Click on ob­jects to learn more. Use the scroll bar to zoom in and out. Remastered by Dave Caruso, Ben Plate, and more.

...

Read the original on scaleofuniverse.com »

10 186 shares, 10 trendiness

The Former Slave Who Became a Cowboy, a Rancher, and a Texas Legend

One day in the late 1930s, Daniel Webster Wallace—“80 John,” as he was known in ranch­ing cir­cles—rode his fa­vorite horse, Blondie, from his Mitchell County ranch to the Loraine post of­fice. It was a fa­mil­iar route for him in this mostly flat, sandy part of the state halfway be­tween Midland and Abilene. He had made the six-mile round trip dozens of times. Wallace col­lected his mail then walked back to Blondie. In his time, 80 John had bro­ken hun­dreds of broncs. His tougher-than-bull-hide body had never failed him. But on this day, he lacked the strength to swing his leg over the sad­dle. A group of men saw him strug­gling and hus­tled over to lift 80 John onto Blondie.

Wallace was Black. The men who helped him were white. One might imag­ine that such a scene would have been jaw-drop­ping in Depression-era Texas, where white hos­til­ity to­ward peo­ple of color was com­mon. But the West Texas cow­boy cul­ture of the time was dis­tinc­tive. Men of dif­fer­ent races of­ten sup­ported and re­spected one an­other. And no cow­boy was more re­spected than Wallace.

In fact he was one of the most re­mark­able fig­ures in our his­tory. By the time of his death from in­fluenza and pneu­mo­nia in 1939, Wallace had built a West Texas ranch of 8,800 acres and amassed a per­sonal for­tune pur­ported to be more than a mil­lion dol­lars—equiv­a­lent to about $22 mil­lion to­day. He brought about tech­ni­cal in­no­va­tions that are still used and de­voted much of his sav­ings to strength­en­ing his com­mu­nity—all at a time when it would have been dif­fi­cult for a white man, much less a Black man born en­slaved, to ac­com­plish any of that.

Yet few have heard of him to­day. Last year I at­tended the awards ban­quet at the National Cowboy & Western Heritage Museum, in Oklahoma City, at which Wallace was in­ducted into the Hall of Great Westerners, join­ing such lu­mi­nar­ies as Buffalo Bill and U. S. Supreme Court jus­tice Sandra Day O’Connor. I pride my­self on be­ing knowl­edge­able about the Old West, but I had­n’t heard of 80 John un­til that night, nor had many other at­ten­dees.

Fourteen mem­bers of Wallace’s fam­ily were pre­sent, and when they ar­rived, people were look­ing at us like, What are you do­ing here?’ ” says his great-grand­daugh­ter Daphne Fowler, who lives on her great-grand­par­ents’ ranch. But af­ter the ban­quet, we had all these peo­ple com­ing up to us like, Wow, we had no idea!’ ” The event, she says, was about twenty years in the mak­ing. We have been push­ing to get him rec­og­nized. And it’s not just recog­ni­tion for my great-grand­fa­ther. There have been a lot of cow­boys of color, and their sto­ries don’t get told.”

Larry Callies, founder of the Black Cowboy Museum, in Rosenberg, be­lieves that the Houston re­gion is the birth­place of the American cow­boy. The word cowboy’ it­self be­gan be­ing used in Fort Bend County and the sur­round­ing area in 1821,” Callies says. It was ap­plied to Black slaves who worked with cat­tle.” (He notes that just as an en­slaved per­son who worked in­side the man­sion would be re­ferred to as a houseboy,” one who took care of cat­tle was re­ferred to as a cowboy.” Even decades later, the term cowboy” re­mained un­pop­u­lar with white cow­punch­ers be­cause of its racial con­no­ta­tions.) As early as the 1840s, Black men who were en­slaved rounded up free-range Longhorns on the plains west of Houston and drove them north across Indian Territory to Kansas, es­tab­lish­ing Texas’s fa­bled cat­tle-drive era. 80 John’s story emerges from this rich cul­ture.

He was born in 1860 on a two-hun­dred-acre farm out­side Inez, in Victoria County, not far from the Gulf Coast, to par­ents who were en­slaved. His mother, Mary Wallace, had been bought by the far­m’s own­ers, Mary and Josiah O’Daniel, to serve as a maid and wet nurse. His fa­ther, William Wallace, worked as a farm­hand. The O’Daniels put young Wallace to work in the fields as a small child, and he re­ceived vir­tu­ally no for­mal ed­u­ca­tion. Schools were scarce,” he told his daugh­ter in the 1930s. I re­ceived most of my learn­ing by con­tact with oth­ers and ob­ser­va­tion.” 80 John and his fam­ily re­mained en­slaved un­til the Juneteenth eman­ci­pa­tion an­nounce­ment, in 1865.

The fol­low­ing year, the O’Daniels moved to a larger farm about eighty miles north, out­side Flatonia, tak­ing the Wallace fam­ily with them—this time as paid em­ploy­ees. The fam­i­lies main­tained ties for the next sev­enty years; 80 John stayed in touch with the O’Daniels’ sons, M. H. and Dial, un­til his death.

Mary Wallace saved her coins with the in­tent of some­day buy­ing real es­tate. The les­son was not lost on 80 John, who from an early age dreamed of own­ing a spread. He loathed chop­ping cot­ton for some­one else.

Cattle dri­ves of­ten passed through the area, and Wallace de­cided they were his es­cape route, since many of the cow­boys he saw were Black. One predawn morn­ing in March 1876, he ran away from the O’Daniel farm and joined a crew mov­ing cat­tle nearly three hun­dred miles north­west, to Buffalo Gap. Wallace was a green hand when he de­parted Flatonia. By the time the herd reached its des­ti­na­tion, he had plenty of ex­pe­ri­ence at trailing.” Thereafter, work was not hard to find.

He rode for the most fa­mous and re­spected cat­tle barons of these sem­i­nal days of the Texas cat­tle in­dus­try,” says his great-grand­son Alfred McMichael, who, along with his son, Keir, hosts a web­site de­voted to 80 John. He rode every ma­jor cat­tle trail, from the Chisholm Trail to the Goodnight-Loving Trail. He made long, dar­ing solo rides.” None of it was easy. He con­tended with thun­der­storms, bliz­zards, droughts, dust storms, out­law gangs, poi­so­nous snakes, dis­ease, in­fec­tion, sun­stroke, lack of food and wa­ter, stam­pedes, and skir­mishes with Native Americans. Once he spent days track­ing a hand­ful of Comanches who had stolen one of his fa­vorite horses.

A friend rec­om­mended that I read Lonesome Dove,” McMichael says. I got a sense of what my great-grand­fa­ther’s life on the cat­tle ranges was like from read­ing about Joshua Deets.” Deets was the much-re­spected Black cow­boy por­trayed by Danny Glover in the TV ren­di­tion of Larry McMurtry’s novel. On the other hand, 80 John’s time on the trails was noth­ing like the world por­trayed in the vast ma­jor­ity of Hollywood west­erns, in which all of the char­ac­ters—other than the evil Indians”—are white.

Even now, in 2024, a lot of peo­ple aren’t aware of how di­verse cow­boy cul­ture was,” says Robert Tidwell, in­terim di­rec­tor of col­lec­tions, ex­hibits, and re­search at Texas Tech’s National Ranching Heritage Center, in Lubbock. The main con­cern on a ranch is, Can you do the job? Can you pull your own weight? And when that’s the main cri­te­ria, you’d be sur­prised at how things shake out. You had more tol­er­ance than you’d find in other places where so­cial struc­tures were more de­fined. Between twenty and thirty per­cent of cow­boys were Black.” A com­pa­ra­ble num­ber of eth­nic Mexicans worked on the cat­tle trail.

Still, 80 John did oc­ca­sion­ally run into racists, and he re­fused to tol­er­ate them. His daugh­ter Hettye Wallace Branch writes in her book, The Story of 80 John,” that one white man bul­lied him at a cow camp in West Texas. The six-foot-three 80 John re­sponded by whup­ping the of­fender in a fist­fight, which ap­par­ently in­stilled re­spect in his ad­ver­sary. Improbable as it seems, the two men de­vel­oped a friend­ship.

In 1878 a let­ter in­formed Wallace that his mother, whom he had not seen since leav­ing the O’Daniels’ farm, two years ear­lier, was deathly ill. He hur­ried back home but ar­rived too late. He set­tled his moth­er’s es­tate—which in­cluded a few acres she had bought—and then re­turned to West Texas, where he went to work for Clay Mann. Neither man’s life would be the same.

Though largely for­got­ten now, Mann was a pro­to­type for the white Texas wheeler-dealer. In his time he was a leg­end in cow camps through­out the West, a fig­ure who had, as a teenager, started a cat­tle herd and killed his fa­ther’s mur­derer. At one point Mann owned tens of thou­sands of cat­tle and thou­sands of acres in Texas, New Mexico, and Wyoming. He also owned a 600,000-acre ranch in Mexico.

But there was a prob­lem.

Now, Clay, he liked to drink, and he liked to gam­ble,” his great-grand­son Tom Possum” Mann tells me. He loved to play a game called Mexican Monte, which was pop­u­lar in the West at the time. He might win ten thou­sand dol­lars in a night, then lose it the next night.” Mann knew he needed some­one in his out­fit who was lev­el­headed and trust­wor­thy. 80 John fit the bill. 80 John did­n’t drink or gam­ble or any­thing,” Possum says. Soon Wallace and Mann were head­ing up large dri­ves from Texas to cat­tle towns through­out the Midwest.

At the end of a drive,” Possum says, Clay would pay the hands then give most of the left­over money to 80 John to take back to Clay’s wife, Mary Mann, in Mitchell County. She’d de­posit it in the bank he founded in Colorado City. Clay would stay be­hind with some of the money to gam­ble and even­tu­ally catch a train home.” Mann once gave Wallace $30,000 (the equiv­a­lent of $900,000 to­day) in cash to take to Midland, a three-day ride west from Mann’s ranch. 80 John made sure the money ar­rived safely.

Mann’s op­er­a­tions were ex­ten­sive enough that he reg­is­tered 43 cat­tle brands, in­clud­ing, most fa­mously, a huge num­ber 80. Wallace over­saw the burn­ing of that brand onto Mann’s stock, and cat­tle­men as­so­ci­ated him with it. In West Texas,” Tidwell says, the name John’ was sort of a generic term for cowhands in gen­eral.” Hence the nick­name 80 John.

Wallace’s ex­per­tise ex­tended to more than brand­ing and bronc bust­ing. Mann dis­cov­ered his top em­ployee was freak­ishly good at arith­metic. He could add columns of num­bers in his head, which proved es­pe­cially use­ful when it came to fi­nan­cial trans­ac­tions. I have a method of fig­ur­ing of my own which has stood me in good stead,” 80 John told his daugh­ter. Very sel­dom I have missed any­thing due me any larger than a frac­tion.” He could also scan a pas­ture with hun­dreds of cat­tle and pro­duce a head count that was usu­ally off by fewer than a dozen an­i­mals.

Early in their busi­ness re­la­tion­ship, Wallace told Mann that his goal in life was to own land and a herd. Mann be­gan set­ting aside the li­on’s share of 80 John’s pay to ac­com­plish that. Bit by bit, Wallace ac­quired cat­tle as he con­tin­ued work­ing for Mann. Among his other re­spon­si­bil­i­ties, Wallace was the de facto fore­man of many of Mann’s prop­er­ties. Even amid the rel­a­tive racial comity of cow­boy cul­ture it was ex­tra­or­di­nar­ily un­usual—per­haps un­prece­dented—for a Black man to over­see white ranch hands.

In 1885, Wallace had saved enough to buy 1,280 acres of rail­road land south of Loraine, in­tend­ing to home­stead it. Realizing he needed to learn how to read and write be­fore he could be­come a suc­cess­ful rancher, he en­rolled as a sec­ond grader at a seg­re­gated Black school in Navarro County, al­most three hun­dred miles east of his newly ac­quired land. Over two terms of study­ing along­side chil­dren, he be­came func­tion­ally lit­er­ate. He also met and fell in love with Laura Dee Owens, a Corsicana beauty who was fin­ish­ing high school. They were mar­ried in 1888 and re­mained to­gether un­til his death al­most 51 years later. Laura sac­ri­ficed her plans to teach in or­der to help him build a ranch from scratch. They started life as a mar­ried cou­ple in a two-room cabin on one of Mann’s ranches.

In 1889, 80 John’s world was turned up­side down when Mann died from a stom­ach he­m­or­rhage, caused by decades of hard drink­ing. Wallace took charge of the cat­tle op­er­a­tions and be­came a fa­ther fig­ure to Mann’s sons, teach­ing them the fun­da­men­tals of ranch­ing. The Wallace and Mann fam­i­lies es­tab­lished a bond that con­tin­ued through gen­er­a­tions. (Fowler, Wallace’s great-grand­daugh­ter, says Mann’s great-great-grand­daugh­ter, Becca Mann George, was her child­hood best friend, and they re­main close.)

Over the next decades, 80 John and Laura lived as fru­gally as pos­si­ble and saved every dime they could to buy more land and cat­tle. They made for an ideal part­ner­ship. He was good with num­bers, while she was more ver­bal. She could parse the thick­est con­tracts and deeds and then read the salient points aloud to her hus­band. Wallace had a near-pho­to­graphic mem­ory when it came to busi­ness mat­ters. In meet­ing with lawyers, he could quote pas­sages ver­ba­tim that Laura had read to him. She also took charge of treat­ing sick and in­jured live­stock. Whenever 80 John had to be ab­sent to tend to busi­ness, she ran the ranch by her­self.

I just can’t imag­ine,” Fowler says, what it was like in the late 1800s and early 1900s with that pres­sure of be­ing the only Black rancher out here. Sometimes I’ll go by the ceme­tery and I’ll smile think­ing about what he and Laura did.”

As early as 1903, Wallace was ad­mit­ted to the Cattle Raisers Association of Texas and be­came a highly re­garded mem­ber. He was, so far as I can tell, the only Black man dur­ing that era who was in­vited to join the group. To at­tend each year, he had to board a seg­re­gated pas­sen­ger car at the Colorado City rail­road sta­tion to travel to Fort Worth. He had to find lodg­ing at a so-called col­ored ho­tel. From there he walked to the whites-only ho­tel that hosted the con­ven­tion. During one of his train trips, some of his white rancher friends joined him for con­ver­sa­tion in the seg­re­gated pas­sen­ger car. The con­duc­tor at­tempted to re­move the white men, but one re­fused to budge. I have known 80 John for thirty years,” the cat­tle­man said. We ate and slept on the ground to­gether. I see no rea­son that makes it im­pos­si­ble for me to sit here now.”

The Wallaces even­tu­ally lived in a four-room house that 80 John de­signed and built him­self. It has since been moved to the grounds of the National Ranching Heritage Center and re­stored to its orig­i­nal con­di­tion. Wallace dis­dained au­to­mo­biles, air­planes, tele­phones, gramo­phones, and in­door plumb­ing. But he was ahead of his time when it came to rais­ing cat­tle,” says Mark Merrell, a re­tired Mitchell County judge and ed­u­ca­tor. The man who started out herd­ing wild Longhorns ended up cross­breed­ing Herefords and Durhams (Shorthorns), decades be­fore do­ing so be­came com­mon­place.

He also was an early adopter of wind­mill tech­nol­ogy in West Texas (one of his wind­mills is on dis­play at the American Windmill Museum, in Lubbock) and cre­ated so­phis­ti­cated con­crete wa­ter­ing troughs that other ranch­ers copied. As the cou­ple be­came more af­flu­ent, they con­tributed to Mitchell County char­i­ties, in­clud­ing pay­ing the con­struc­tion costs for a Baptist church in Loraine. They also built the Wallace name­sake school, in Colorado City, that pro­vided area Black chil­dren their only ac­cess to ed­u­ca­tion.

In the 1930s, 80 John and Laura drew up a will that put the ranch into a trust. Their pri­mary goal was to pro­vide ed­u­ca­tion fund­ing for sub­se­quent gen­er­a­tions of their fam­ily. They also wanted to keep the ranch in­tact. The prop­erty is now held by seven trusts. Each one of the mem­bers of the fam­ily makes it very spe­cific: don’t sell the land, ex­cept to a fam­ily mem­ber,” says Dwayne Harris, who, as trust of­fi­cer at City National Bank, man­aged the ranch for more than twenty years be­fore his re­tire­ment. We had cat­tle-graz­ing leases and oil leases as well as rev­enue from cot­ton fields, a gravel quarry, and, more re­cently, wind tur­bines.” Harris es­ti­mates the ranch’s value at $15 mil­lion.

It’s said that around the time Wallace made that mail trip on Blondie, he sank a post in the sandy loam on his prop­erty and an­nounced he wanted to be buried at that ex­act spot. The tomb­stone that now sits in the mid­dle of the fam­ily grave­yard re­placed the post more than eight decades ago. Even fac­ing death, 80 John proved to be a man who knew what he wanted—and achieved it, stam­pedes and racism be damned.

Austin writer W. K. Stratton’s most re­cent book is The Wild Bunch: Sam Peckinpah, a Revolution in Hollywood, and the Making of a Legendary Film.

Photo Credits: Wallace: cour­tesy of the Wallace fam­ily; cat­tle drive: Bettmann/Getty; cow­boy: Erwin Smith/Library of Congress; map: UNT Libraries/The Portal to Texas History/Hardin-Simmons University Library

This article originally appeared in the May 2024 issue of Texas Monthly with the headline "The Immortal Life of 80 John."

...

Read the original on www.texasmonthly.com »
