10 interesting stories served every morning and every evening.
...
Read the original on github.com »
news for & about the philosophy profession
Daniel Dennett, professor emeritus of philosophy at Tufts University, well-known for his work in philosophy of mind and a wide range of other philosophical areas, has died.
Professor Dennett wrote extensively about issues related to philosophy of mind and cognitive science, especially consciousness. He is also recognized as having made significant contributions to the concept of intentionality and debates on free will. Some of Professor Dennett’s books include Content and Consciousness (1969), Brainstorms: Philosophical Essays on Mind and Psychology (1978), The Intentional Stance (1987), Consciousness Explained (1991), Darwin’s Dangerous Idea (1995), Breaking the Spell (2006), and From Bacteria to Bach and Back: The Evolution of Minds (2017). He published a memoir last year entitled I’ve Been Thinking. There are also several books about him and his ideas. You can learn more about his work here.
Professor Dennett held a position at Tufts University for nearly all his career. Prior to this, he held a position at the University of California, Irvine from 1965 to 1971. He also held visiting positions at Oxford, Harvard, Pittsburgh, and other institutions during his time at Tufts University. Professor Dennett was awarded his PhD from the University of Oxford in 1965 and his undergraduate degree in philosophy from Harvard University in 1963.
Professor Dennett was the recipient of several awards and prizes, including the Jean Nicod Prize, the Mind and Brain Prize, and the Erasmus Prize. He also held a Fulbright Fellowship, two Guggenheim Fellowships, and a Fellowship at the Center for Advanced Study in Behavioral Sciences. An outspoken atheist, Professor Dennett was dubbed one of the “Four Horsemen of New Atheism”. He was also a Fellow of the Committee for Skeptical Inquiry, an honored Humanist Laureate of the International Academy of Humanism, and was named Humanist of the Year by the American Humanist Association.
The following interview with Professor Dennett was recorded last year:
Related: “Philosophers: Stop Being Self-Indulgent and Start Being Like Daniel Dennett, says Daniel Dennett“. (Other DN posts on Dennett can be found here.)
“The ethical academic should be opposed to most of our current grading practices, but they still need to grade students anyway”
– John Danaher (Galway) on the whats, whys, and hows of ethical grading
“Kant saw reason’s potential as a tool for liberation”
– Susan Neiman (Einstein Forum) in the NYT on why we should celebrate Kant
“Assisted evolution is… an acknowledgment that there is no stepping back, no future in which humans do not profoundly shape the lives and fates of wild creatures”
– new ways of protecting animals raise questions about what conservation is and what species are
“Metaphysics begins with the distinction between appearance and reality, between seems and is, and the play constantly plays with this distinction”
– Brad Skow (MIT) on the philosophy in Hamlet
Beliefs aim at the truth, you say?
– the New Yorker covers work by philosophers and others in an article about the complications of misinformation
“Philosophical theories are very much like ‘pictures’ or ‘stories’ and… philosophical debates often come down to ‘temperamental differences’”
– Peter West (Northeastern U. London) on the metaphilosophy of Margaret MacDonald
“The swiftness and ease of the technology separates people from the reality of what they are taking part in”
– and there’s a lot going on
“Any surprising results scientists achieved, whether they supported or challenged a previous assumption, were seen as the ultimate source of aesthetic pleasure”
– Milena Ivanova (Cambridge) on the role of aesthetics in science
“I couldn’t have justified spending a career as an academic philosopher. Not in this world.”
– Nathan J. Robinson on the immorality of philosophy in a time of crisis
“Within the ring of light lies what is straightforwardly knowable through common sense or mainstream science” but philosophy “lives in the penumbra of darkness”
– and even as that light grows, says Eric Schwitzgebel (UC Riverside), just beyond it “there will always be darkness” – and philosophy
“The scientific community has generally done a poor job of explaining to the public that science is what is known so far”
– H. Holden Thorp, the editor in chief of Science, on why the history and philosophy of science should be part of the science curriculum (via Nathan Nobis)
– Tamar Gendler (Yale) discusses an experimental course she taught on philosophy and its forms
“If you’re going to be a philosopher, learn about the world, learn about the science… Scientists are just as capable of making philosophical mistakes… as any lay people [and] they need the help of informed philosophers”
“I’m curious about why these kinds of places have such a spellbinding aura, and I think it’s because they are analog outliers”
– Evan Selinger (RIT) reflects on his obsession with a small-town family-run hotel that serves simple and delicious food
“The story that a sports fan engages with is a collaboratively written story; [it is] a social enterprise focused around knitting individual games into narrative arcs, stories, legends, and characterizations”
– Peter Kung and Shawn Klein (ASU) on imagination and sports fandom
“Claude 3 Opus produces arguments that don’t statistically differ in their persuasiveness compared to arguments written by humans”
– the methods and results of a study on AI persuasiveness
“Limiting virtues [are] virtues that constrain us in order to set us free”
– Sara Hendren (Northeastern), inspired by David McPherson (Creighton) looks for limiting virtues in architecture
“It is not only false but morally misleading to describe the resulting civilian deaths as ‘unintentional’ or as what ‘happens in war’”
– Jessica Wolfendale (Case Western) on the tools and tactics used in Gaza by Israel’s military
“Both were analytical philosophers, but their intellectual frameworks and their philosophical approaches were markedly different”
– Dan Little (UM-Dearborn) on Popper and Parfit
El Salvador seeks philosophers (and doctors, scientists, engineers, artists, and others)
– the nation’s president has offered 5000 free passports along with tax benefits to those answering his call
“He has awakened us to the background practices in our culture, and revealed to us that they have no necessity, which offers us a kind of freedom we may not have recognized”
– Mark Ralkowski (GWU) on the philosophy of Larry David
“I think [NASA’s] requirements are closing the astronaut program off from important insights from the humanities and social sciences”
– a philosophy PhD and US Air Force officer on why we should send philosophers into space
“Before he was the little guy who spake about teaching of the Superman, he appeared in Nietzsche’s book ‘The Gay Science’” “Who is….?”
– philosophy was a category in the second round of “Jeopardy!” earlier this week (mouse over the $ to see the answers, er questions)
Can philosophy be done through narrative films like “Barbie?”
– that depends on what we mean by doing philosophy, says Tom McClelland (Cambridge)
“There is no moral valence to someone just not liking us.” “There’s a goodness and richness in this sort of predestined suffering.”
– the moral sensibilities of Lillian Fishman, advice columnist at The Point
“Philosophers write a lot about friendship and love, but they tend to do so in terms that leave out the centrality of the heart and heartfelt connection”
– as a result, says Stephen Darwall (Yale), we miss some important things
“Wenar’s alternative to effective altruism is neither viable nor desirable nor indeed any improvement on effective altruism”
“While the shallow pond may be a good model to help us think about our immediate duties, it is a bad model to help us think about the relationship between would be donors and the suffering poor in the context of development”
– Eric Schliesser (Amsterdam) on Richard Pettigrew on Leif Wenar on effective altruism
...
Read the original on dailynous.com »
Supabase Storage is now officially an S3-Compatible Storage Provider. This is one of the most-requested features and is available today in public alpha. Resumable Uploads are also transitioning from Beta to Generally Available.
The Supabase Storage Engine is fully open source and is one of the few storage solutions that offer 3 interoperable protocols to manage your files:
* Standard uploads: simple file uploads and downloads via the REST API

* Resumable uploads: large-file uploads via the TUS protocol

* S3 uploads: for compatibility across a plethora of tools
We always strive to adopt industry standards at Supabase. Supporting standards makes workloads portable, a key product principle. The S3 API is undoubtedly a storage standard, and we’re making it accessible to developers of all experience levels.
The S3 protocol is backwards compatible with our other APIs. If you are already using Storage via our REST or TUS APIs, today you can use any S3 client to interact with your buckets and files: upload with TUS, serve them with REST, and manage them with the S3 protocol.
The protocol works on the cloud, local development, and self-hosting. Check out the API compatibility in our docs.
To authenticate with Supabase S3 you have 2 options:
The standard access_key and secret_key credentials. You can generate these from the storage settings page. This authentication method is widely compatible with tools supporting the S3 protocol. It is also meant to be used exclusively server-side, since it provides full access to your Storage resources.
We will add scoped access key credentials in the near future which can have access to specific buckets.
User-scoped credentials with RLS. This takes advantage of a well-adopted concept across all Supabase services, Row Level Security. It allows you to interact with the S3 protocol by scoping storage operations to a particular authenticated user or role, respecting your existing RLS policies. This method is made possible by using the Session token header which the S3 protocol supports. You can find more information on how to use the Session token mechanism in the doc.
With the support of the S3 protocol, you can now connect Supabase Storage to many 3rd-party tools and services by providing a pair of credentials which can be revoked at any time.
You can use popular tools for backups and migrations, such as:
* and any other S3-compatible tool…
Check out our Cyberduck guide here.
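As a concrete illustration, a tool like rclone can treat Supabase Storage as an ordinary S3 remote. The endpoint URL below is an assumption based on the project-URL pattern; check your project's storage settings page for the real endpoint, region, and credentials:

```ini
# rclone.conf sketch (values are placeholders, not real credentials)
[supabase]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
# assumed endpoint shape; copy the actual one from your dashboard
endpoint = https://your-project-ref.supabase.co/storage/v1/s3
region = us-east-1
```

With this in place, commands like `rclone ls supabase:my-bucket` or `rclone sync ./backups supabase:my-bucket` work as they would against any S3 provider.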
S3 compatibility provides a nice primitive for Data Engineers. You can use it with many popular tools:
In this example our incredible data analyst, Tyler, demonstrates how to store Parquet files in Supabase Storage and query them directly using DuckDB:
In addition to the standard uploads and resumable uploads, we now support multipart uploads via the S3 protocol. This allows you to maximize upload throughput by uploading chunks in parallel, which are then concatenated at the end.
Along with the platform GA announcement, we are also thrilled to announce that resumable uploads are also generally available.
Resumable uploads are powered by the TUS protocol. The journey to get here was immensely rewarding, working closely with the TUS team. A big shoutout to the maintainers of the TUS protocol, @murderlon and @acconut, for their collaborative approach to open source.
Supabase contributed some advanced features to the Node implementation of the TUS spec, including distributed locks, max file size, an expiration extension, and numerous bug fixes.
These features were essential for Supabase, and since the TUS node server is open source, they are also available for you to use. This is another core principle: wherever possible, we use and support existing tools rather than developing from scratch.
* Cross-bucket transfers: We have added the ability to copy and move objects across buckets, where previously you could do these operations only within the same Supabase bucket.
* Standardized error codes: Error codes have now been standardized across the Storage server, making it much easier to branch logic on specific errors. You can find the list of error codes here.
* Multi-tenant migrations: We made significant improvements to running migrations across all our tenants. This has reduced migration errors across the fleet and enables us to run long-running migrations asynchronously. Stay tuned for a separate blog post with more details.
* Decoupled dependencies: Storage is fully decoupled from other Supabase products, which means you can run Storage as a standalone service. Get started with this docker-compose file.
...
Read the original on supabase.com »
Tesla is recalling all 3,878 Cybertrucks that it has shipped to date, due to a problem where the accelerator pedal can get stuck, putting drivers at risk of a crash, according to the National Highway Traffic Safety Administration.
The recall caps a tumultuous week for Tesla. The company laid off more than 10% of its workforce on Monday, and lost two of its highest-ranking executives. A few days later, Tesla asked shareholders to re-vote on CEO Elon Musk’s massive compensation package that was struck down by a judge earlier this year.
Reports of problems with the Cybertruck’s accelerator pedal started popping up in the last few weeks. Tesla even reportedly paused deliveries of the truck while it sorted out the issue. Musk said in a post on X that Tesla was “being very cautious” and the company reported to NHTSA that it was not aware of any crashes or injuries related to the problem.
The company has now confirmed to NHTSA that the pedal can dislodge, making it possible for it to slide up and get caught in the trim around the footwell.
Tesla said it first received a notice of one of these accelerator pedal incidents from a customer on March 31, and then a second one on April 3. After performing a series of tests, it decided on April 12 to issue a recall after determining that “[a]n unapproved change introduced lubricant (soap) to aid in the component assembly of the pad onto the accelerator pedal,” and that “[r]esidual lubricant reduced the retention of the pad to the pedal.”
Tesla says it will replace or rework the accelerator pedal on all existing Cybertrucks. It also told NHTSA that it has started building Cybertrucks with a new accelerator pedal, and that it’s fixing the vehicles that are in transit or sitting at delivery centers.
While the Cybertruck first started shipping only late last year, this is not the vehicle’s first recall. The initial one, however, was minor: earlier this year, Tesla recalled the software on all of its vehicles because the font sizes of its warning lights were too small. The company unveiled the truck back in 2019.
...
Read the original on techcrunch.com »
I’m Miguel. I write about compilers, performance, and silly computer things. I also draw Pokémon.
I will often say that the so-called “C ABI” is a very bad one, and a relatively unimaginative one when it comes to passing complicated types effectively. A lot of people ask me “ok, what would you use instead”, and I just point them to the Go register ABI, but it seems most people have trouble filling in the gaps of what I mean. This article explains what I mean in detail.
I have discussed calling conventions in the past, but as a reminder: the calling convention is the part of the ABI that concerns itself with how to pass arguments to and from a function, and how to actually call a function. This includes which registers arguments go in, which registers values are returned out of, what function prologues/epilogues look like, how unwinding works, etc.
This particular post is primarily about x86, but I intend to be reasonably generic (so that what I’ve written applies just as well to ARM, RISC-V, etc). I will assume a general familiarity with x86 assembly, LLVM IR, and Rust (but not rustc’s internals).
Today, like many other natively compiled languages, Rust defines an unspecified calling convention that lets it call functions however it likes. In practice, Rust lowers to LLVM’s built-in C calling convention, which LLVM’s prologue/epilogue codegen generates calls for.
Rust is fairly conservative: it tries to generate LLVM function signatures that Clang could have plausibly generated. This has two significant benefits:
1. There is a good probability debuggers won’t choke on it. This is not a concern on Linux, though, because DWARF is very general and does not bake in the Linux C ABI. We will concern ourselves only with ELF-based systems and assume that debuggability is a nonissue.

2. It is less likely to tickle LLVM bugs due to using ABI codegen that Clang does not exercise. I think that if Rust tickles LLVM bugs, we should actually fix them (a very small number of rustc contributors do in fact do this).
However, we are too conservative. We get terrible codegen for simple functions:
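A minimal reconstruction of the kind of function under discussion (the post's exact snippet isn't reproduced here; the function name and body are assumptions consistent with the codegen described below):

```rust
// A 12-byte array passed by value. Under the Linux C ABI it travels
// packed into rdi:rsi; returning the middle element then compiles to a
// mov plus a shift of rdi's upper 32 bits. Rust's default convention,
// by contrast, passes the same array behind a pointer.
pub extern "C" fn get_middle(arr: [i32; 3]) -> i32 {
    arr[1]
}
```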
arr is 12 bytes wide, so you’d think it would be passed in registers, but no! It is passed by pointer! Rust is actually more conservative than what the Linux C ABI mandates, because it actually passes the [i32; 3] in registers when extern “C” is requested.
The array is passed in rdi and rsi, with the i32s packed into registers. The function moves rdi into rax, the output register, and shifts the upper half down.
Not only does rustc produce patently bad code for passing things by value, but it also knows how to do it better, if you request a standard calling convention! We could be generating way better code than Clang, but we don’t!
Hereforth, I will describe how to do it.
Let’s suppose that we keep the current calling convention for extern “Rust”, but we add a flag -Zcallconv that sets the calling convention for extern “Rust” when compiling a crate. The supported values will be -Zcallconv=legacy for the current one, and -Zcallconv=fast for the one we’re going to design. We could even let -O set -Zcallconv=fast automatically.
Why keep the old calling convention? Although I did sweep debugability under the rug, one nice property -Zcallconv=fast will not have is that it does not place arguments in the C ABI order, which means that a reader relying on the “Diana’s silk dress cost $89” mnemonic on x86 will get fairly confused.
I am also assuming we may not even support -Zcallconv=fast for some targets, like WASM, where there is no concept of “registers” and “spilling”. It may not even make sense to enable it for debug builds, because it will produce much worse code with optimizations turned off.
There is also a mild wrinkle with function pointers, and extern “Rust” {} blocks. Because this flag is per-crate, even though functions can advertise which version of extern “Rust” they use, function pointers have no such luxury. However, calling through a function pointer is slow and rare, so we can simply force them to use -Zcallconv=legacy. We can generate a shim to translate calling conventions as needed.
Similarly, we can, in principle, call any Rust function like this:
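The mechanism in question (a sketch, not the post's original snippet): declare the function's symbol in an extern block and call it directly, which only resolves when the symbol is unmangled.

```rust
// An unmangled Rust function...
#[no_mangle]
pub fn add_one(x: u32) -> u32 {
    x + 1
}

// ...can be re-declared by symbol name and called through the linker,
// bypassing the normal Rust-level path to the function.
mod via_symbol {
    extern "Rust" {
        pub fn add_one(x: u32) -> u32;
    }
}
```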
However, this mechanism can only be used to call unmangled symbols. Thus, we can simply force #[no_mangle] symbols to use the legacy calling convention.
In an ideal world, LLVM would provide a way for us to specify the calling convention directly. E.g., this argument goes in that register, this return goes in that one, etc. Unfortunately, adding a calling convention to LLVM requires writing a bunch of C++.
However, we can get away with specifying our own calling convention via the following procedure.
1. Determine, for a given target triple, the maximum number of values that can be passed “by register”. I will explain how to do this below.

2. Decide how to pass the return value. It will either fit in the output registers, or it will need to be returned “by reference”, in which case we pass an extra ptr argument to the function (tagged with the sret attribute) and the actual return value of the function is that pointer.

3. Decide which arguments that have been passed by value need to be demoted to being passed by reference. This will be a heuristic, but generally will be approximately “arguments larger than the by-register space”. For example, on x86, this comes out to 176 bytes.

4. Decide which arguments get passed by register, so as to maximize register space usage. This problem is NP-hard (it’s the knapsack problem), so it will require a heuristic. All other arguments are passed on the stack.

5. Generate the function signature in LLVM IR. This will be all of the arguments that are passed by register, encoded as various non-aggregates, such as i64, ptr, and double. Which non-aggregates are valid choices depends on the target, but the above are what you will generally get on a 64-bit architecture. Arguments passed on the stack will follow the “register inputs”.

6. Generate a function prologue. This is code to decode each Rust-level argument from the register inputs, so that there are %ssa values corresponding to those that would be present when using -Zcallconv=legacy. This allows us to generate the same code for the body of the function regardless of calling convention. Redundant decoding code will be eliminated by DCE passes.

7. Generate a function exit block. This is a block that contains a single phi instruction for the return type as it would be for -Zcallconv=legacy. This block will encode the value into the requisite output format and then ret as appropriate. All exit paths through the function should br to this block instead of ret-ing directly.
If a non-polymorphic, non-inline function may have its address taken (as a function pointer), either because it is exported out of the crate or the crate takes a function pointer to it, generate a shim that uses -Zcallconv=legacy and immediately tail-calls the real implementation. This is necessary to preserve function pointer equality.
The main upshot here is that we need to cook up heuristics for figuring out what goes in registers (since we allow reordering arguments to get better throughput). This is equivalent to the knapsack problem; knapsack heuristics are beyond the scope of this article. This should happen early enough that this information can be stuffed into rmeta to avoid needing to recompute it. We may want to use different, faster heuristics depending on -Copt-level. Note that correctness requires that we forbid linking code generated by multiple different Rust compilers, which is already the case, since Rust breaks ABI from release to release.
Assuming we do that, how do we actually get LLVM to pass things in the way we want it to? We need to determine what the largest “by register” passing LLVM will permit is. The following LLVM program is useful for determining this on a particular version of LLVM:
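A hedged sketch of such a probe program (not the post's exact code): pass an aggregate by value, return part of it, and grow the field count until the generated assembly starts spilling to the stack.

```llvm
; Six i64 fields: on x86-64 this is the largest integer aggregate LLVM
; will still fully explode into registers. Add a seventh field and loads/
; stores appear, indicating LLVM gave up and used the stack.
%six_ints = type { i64, i64, i64, i64, i64, i64 }

define i64 @probe(%six_ints %x) {
  %first = extractvalue %six_ints %x, 0
  ret i64 %first
}
```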
When you pass an aggregate by-value to an LLVM function, LLVM will attempt to “explode” that aggregate into as many registers as possible. There are distinct register classes on different systems. For example, on both x86 and ARM, floats and vectors share the same register class (kind of).
The above values are for x86. LLVM will pass six integers and eight SSE vectors by register, and return half as many (3 and 4) by register. Increasing any of the values generates extra loads and stores that indicate LLVM gave up and passed arguments on the stack.
The values for aarch64-unknown-linux are 8 integers and 8 vectors for both inputs and outputs, respectively.
This is the maximum number of registers we get to play with for each class. Anything extra gets passed on the stack.
I recommend that every function have the same number of by-register arguments. So on x86, EVERY -Zcallconv=fast function’s signature should look like this:
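A hedged reconstruction of that uniform signature (the shape is what the text specifies; the concrete SSE vector type is an assumption):

```llvm
; Six integer slots and eight SSE slots in; three integer and four SSE
; slots out. Unused slots are filled with poison, as described below.
declare { i64, i64, i64,
          <2 x double>, <2 x double>, <2 x double>, <2 x double> }
    @fast_fn(i64, i64, i64, i64, i64, i64,
             <2 x double>, <2 x double>, <2 x double>, <2 x double>,
             <2 x double>, <2 x double>, <2 x double>, <2 x double>)
```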
When passing pointers, the appropriate i64s should be replaced by ptr, and when passing doubles, they replace the SSE vector slots.
But you’re probably saying, “Miguel, that’s crazy! Most functions don’t pass 176 bytes!” And you’d be right, if not for the magic of LLVM’s very well-specified poison semantics.
We can get away with not doing extra work if every argument we do not use is passed poison. Because poison is equal to “the most convenient possible value at the present moment”, when LLVM sees poison passed into a function via register, it decides that the most convenient value is “whatever happens to be in the register already”, and so it doesn’t have to touch that register!
For example, if we wanted to pass a pointer via rcx, we would generate the following code.
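A hedged reconstruction of that code, abbreviated to four integer slots for readability (the real signature would pass poison in all thirteen unused registers):

```llvm
declare ptr @malloc(i64)

; The fourth integer slot lands in rcx under the SysV ordering.
define ptr @load_rcx(i64 %rdi, i64 %rsi, i64 %rdx, ptr %rcx) {
  ret ptr %rcx
}

define ptr @make_the_call() {
  %p = call ptr @malloc(i64 8)
  ; poison in the unused slots compiles to nothing: those registers are
  ; simply left holding whatever they already held.
  %q = call ptr @load_rcx(i64 poison, i64 poison, i64 poison, ptr %p)
  ret ptr %q
}
```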
It is perfectly legal to pass poison to a function, if it does not interact with the poisoned argument in any proscribed way. And as we see, load_rcx() receives its pointer argument in rcx, whereas make_the_call() takes no penalty in setting up the call: loading poison into the other thirteen registers compiles down to nothing, so it only needs to load the pointer returned by malloc into rcx.
This gives us almost total control over argument passing; unfortunately, it is not total. In an ideal world, the same registers are used for input and output, to allow easier pipelining of calls without introducing extra register traffic. This is true on ARM and RISC-V, but not x86. However, because register ordering is merely a suggestion for us, we can choose to allocate the return registers in whatever order we want. For example, we can pretend the order registers should be allocated in is rdx, rcx, rdi, rsi, r8, r9 for inputs, and rdx, rcx, rax for outputs.
square generates extremely simple code: the input and output register is rdi, so no extra register traffic needs to be generated. Similarly, when we effectively do @square(@square(%0)), there is no setup between the functions. This is similar to code seen on aarch64, which uses the same register sequence for input and output. We can see that the “naive” version of this IR produces the exact same code on aarch64 for this reason.
Now that we’ve established total control on how registers are assigned, we can turn towards maximizing use of these registers in Rust.
For simplicity, we can assume that rustc has already processed the users’ types into basic aggregates and unions; no enums here! We then have to make some decisions about which portions of the arguments to allocate to registers.
First, return values. This is relatively straightforward, since there is only one value to pass. The amount of data we need to return is not the size of the struct. For example, [(u64, u32); 2] measures 32 bytes wide. However, eight of those bytes are padding! We do not need to preserve padding when returning by value, so we can flatten the struct into (u64, u32, u64, u32) and sort by size into (u64, u64, u32, u32). This has no padding and is 24 bytes wide, which fits into the three return registers LLVM gives us on x86.

We define the effective size of a type to be the number of non-undef bits it occupies. For [(u64, u32); 2], this is 192 bits, since it excludes the padding. For bool, this is one. For char this is technically 21, but it’s simpler to treat char as an alias for u32.
The reason for counting bits this way is that it permits significant compaction. For example, returning a struct full of bools can simply bit-pack the bools into a single register.
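The padding arithmetic above is directly checkable in today's Rust:

```rust
use std::mem::size_of;

// [(u64, u32); 2] occupies 32 bytes as laid out in memory...
pub fn laid_out_size() -> usize {
    size_of::<[(u64, u32); 2]>()
}

// ...but only 24 of those bytes are actual data; the remaining 8 are
// padding that a return convention has no obligation to preserve.
pub fn effective_bytes() -> usize {
    2 * (size_of::<u64>() + size_of::<u32>())
}
```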
So, a return value is converted to a by-ref return if its effective size is larger than the output register space (on x86, this is three integer registers and four SSE registers, so we get 88 bytes total, or 704 bits).
Argument registers are much harder, because we hit the knapsack problem, which is NP-hard. The following relatively naive heuristic is where I would start, but it can be made infinitely smarter over time.
First, demote to by-ref any argument whose effective size is larger than the total by-register input space (on x86, 176 bytes or 1408 bits). This means we get a pointer argument instead. This is beneficial to do first, since a single pointer might pack better than the huge struct.
Enums should be replaced by the appropriate discriminant-union pair. For example, Option&lt;i32&gt; is, internally, (union { i32, () }, i1), while Option&lt;Option&lt;i32&gt;&gt; is (union { i32, (), () }, i2). Using a small non-power-of-two integer improves our ability to pack things, since enum discriminants are often quite tiny.
Next, we need to handle unions. Because mucking about with unions’ uninitialized bits behind our backs is allowed, we need to either pass it as an array of u8, unless it only has a single non-empty variant, in which case it is replaced with that variant.
Now, we can proceed to flatten everything. All of the converted arguments are flattened into their most primitive components: pointers, integers, floats, and bools. Every field should be no larger than the smallest argument register; this may require splitting large types such as u128 or f64.
This big list of primitives is next sorted by effective size, from smallest to largest. We take the largest prefix of this that will fit in the available register space; everything else goes on the stack.
If part of a Rust-level input is sent to the stack in this way, and that part is larger than a small multiple of the pointer size (e.g., 2x), it is demoted to being passed by pointer-on-the-stack, to minimize memory traffic. Everything else is passed directly on the stack in the order those inputs were before the sort. This helps keep regions that need to be copied relatively contiguous, to minimize calls to memcpy.
The things we choose to pass in registers are allocated to registers in reverse size order, so e.g. first 64-bit things, then 32-bit things, etc. This is the same layout algorithm that repr(Rust) structs use to move all the padding into the tail. Once we get to the bools, those are bit-packed, 64 to a register.
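The allocation step can be sketched roughly as follows (a hedged illustration of the heuristic described above, not rustc code; sizes are in bits and `layout` is a hypothetical helper):

```rust
/// Sort primitive fields largest-first, so padding collects at the tail,
/// then bit-pack the bools 64 to a register-sized slot.
pub fn layout(mut field_bits: Vec<u32>, num_bools: u32) -> Vec<u32> {
    // Largest first: 64-bit things, then 32-bit things, etc.
    field_bits.sort_by(|a, b| b.cmp(a));
    // Each group of up to 64 bools becomes one 64-bit slot.
    let bool_slots = (num_bools + 63) / 64;
    field_bits.extend(std::iter::repeat(64).take(bool_slots as usize));
    field_bits
}
```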
Here’s a relatively complicated example. My Rust function is as follows:
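A hypothetical stand-in for that function (the post's actual example is not reproduced here): inputs mixing a wide integer, a float, sub-register fields, and bools, i.e. exactly the shapes the flatten/sort/pack pass has to handle.

```rust
#[derive(Clone, Copy)]
pub struct Input {
    pub x: u64,            // full register
    pub y: f64,            // SSE slot
    pub small: (u8, u16),  // sub-register fields to be packed together
    pub flags: [bool; 4],  // candidates for bit-packing
}

pub fn process(a: Input, b: u32) -> u64 {
    let set = a.flags.iter().filter(|&&f| f).count() as u64;
    a.x + a.small.0 as u64 + a.small.1 as u64 + set + b as u64 + a.y as u64
}
```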
The codegen for this function is quite complex, so I’ll only cover the prologue and epilogue. After sorting and flattening, our raw argument LLVM types are something like this:
Everything fits in registers! So, what does the LLVM function look like on x86?
Above, !dbg metadata for the argument values should be attached to the instruction that actually materializes it. This ensures that gdb does something halfway intelligent when you ask it to print argument values.
On the other hand, in current rustc, it gives LLVM eight pointer-sized parameters, so it winds up spending all six integer registers, plus two values passed on the stack. Not great!
This is not a complete description of what a completely over-engineered calling convention could entail: in some cases we might know that we have additional registers available (such as AVX registers on x86). There are cases where we might want to split a struct across registers and the stack.
This also isn’t even getting into what returns could look like. Results are often passed through several layers of functions via ?, which can result in a lot of redundant register moves. Often, a Result is large enough that it doesn’t fit in registers, so each call in the ? stack has to inspect an ok bit by loading it from memory. Instead, a Result return might be implemented as an out-parameter pointer for the error, with the ok variant’s payload, and the is ok bit, returned as an Option. There are some fussy details with Into calls via ?, but the idea is implementable.
Now, because we’re Rust, we’ve also got a trick up our sleeve that C doesn’t (but Go does)! When we’re generating the ABI that all callers will see (for -Zcallconv=fast), we can look at the function body. This means that a crate can advertise the precise ABI (in terms of register-passing) of its functions.
This opens the door to more extreme optimization-based ABIs. We can start by simply throwing out unused arguments: if the function never does anything with a parameter, don’t bother spending registers on it.
Another example: suppose that we know that an &T argument is not retained (a question the borrow checker can answer at this point in the compiler) and is never converted to a raw pointer (or written to memory that a raw pointer is taken of, etc.). We also know that T is fairly small, and T: Freeze. Then, we can replace the reference with the pointee directly, passed by value.
The most obvious candidates for this are APIs like HashMap::get(). If the key is something like an i32, we need to spill that integer to the stack and pass a pointer to it! This results in unnecessary, avoidable memory traffic.
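To make the contrast concrete, here is the call in question next to a hand-written wrapper that takes the small key by value. The wrapper is my own illustration of the demotion, not an existing API; HashMap::get itself takes a reference.

```rust
// Illustration of the by-reference-to-by-value demotion described above: a
// hypothetical wrapper takes the small Copy key by value, so the caller
// never spills an i32 to the stack just to form an &i32.
use std::collections::HashMap;

fn get_by_value(map: &HashMap<i32, String>, key: i32) -> Option<&String> {
    // Inside the callee, forming the reference is free.
    map.get(&key)
}

fn main() {
    let mut map = HashMap::new();
    map.insert(42, "answer".to_string());

    // Today: the caller materializes &42 in memory to call get().
    assert_eq!(map.get(&42).map(String::as_str), Some("answer"));

    // Under the hypothetical ABI, the i32 rides in a register instead.
    assert_eq!(get_by_value(&map, 42).map(String::as_str), Some("answer"));
    assert_eq!(get_by_value(&map, 7), None);
}
```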
Profile-guided ABI is a step further. We might know that some arguments are hotter than others, which might cause them to be prioritized in the register allocation order.
You could even imagine a case where a function takes a very large struct by reference, but three i64 fields are very hot, so the caller can preload those fields, passing them both by register and via the pointer to the large struct. The callee does not see additional cost: it had to issue those loads anyway. However, the caller probably has those values in registers already, which avoids some memory traffic.
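In source form, that trick might look like this. The struct and field names are hypothetical; the point is only that the hot fields arrive twice, once by register and once behind the pointer.

```rust
// Sketch of the "pass hot fields both ways" idea described above: the caller
// passes the big struct by reference *and* register copies of its three hot
// i64 fields. The callee pays nothing extra; it needed those values anyway.
struct Big {
    hot_a: i64,
    hot_b: i64,
    hot_c: i64,
    _cold: [u8; 512], // large cold payload, stays behind the pointer
}

fn process(big: &Big, hot_a: i64, hot_b: i64, hot_c: i64) -> i64 {
    // The hot path uses the register copies; a cold path would load
    // through `big` instead.
    let _ = big;
    hot_a + hot_b * hot_c
}

fn main() {
    let big = Big { hot_a: 1, hot_b: 2, hot_c: 3, _cold: [0; 512] };
    // The caller likely already has the hot fields in registers.
    assert_eq!(process(&big, big.hot_a, big.hot_b, big.hot_c), 7);
}
```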
Instrumentation profiles may even indicate that it makes sense to duplicate whole functions, which are identical except for their ABIs. Maybe they take different arguments by register to avoid costly spills.
This is a bit more advanced (and ranty) than my usual writing, but this is an aspect of Rust that I find really frustrating. We could be doing so much better than C++ ever can (because of its ABI constraints). None of these ideas are new; this is literally how Go does it!
So why don’t we? Part of the reason is that ABI codegen is complex, and as I described above, LLVM gives us very few useful knobs. It’s not a friendly part of rustc, and doing things wrong can have nasty consequences for usability. The other part is a lack of expertise. As of writing, only a handful of people contributing to rustc have the necessary grasp of LLVM’s semantics (and mood swings) to emit the Right Code such that we get good codegen and don’t crash LLVM.
Another reason is compilation time. The more complicated the function signatures, the more prologue/epilogue code we have to generate that LLVM has to chew on. But -Zcallconv is intended to only be used with optimizations turned on, so I don’t think this is a meaningful complaint. Nor do I think the project’s Goodhartization of compilation time as a metric is healthy… but I do not think this is ultimately a relevant drawback.
I, unfortunately, do not have the spare time to dive into fixing rustc’s ABI code, but I do know LLVM really well, and I know that this is a place where Rust has a low bus factor. For that reason, I am happy to provide the Rust compiler team expert knowledge on getting LLVM to do the right thing in service of making optimized code faster.
I will often say that the so-called “C ABI” is a very bad one, and a relatively unimaginative one when it comes to passing complicated types effectively. A lot of people ask me “ok, what would you use instead”, and I just point them to the Go register ABI, but it seems most people have trouble filling in the gaps of what I mean. This article explains what I mean in detail.

I have discussed calling conventions in the past, but as a reminder: the calling convention is the part of the ABI that concerns itself with how to pass arguments to and from a function, and how to actually call a function. This includes which registers arguments go in, which registers values are returned out of, what function prologues/epilogues look like, how unwinding works, etc.

This particular post is primarily about x86, but I intend to be reasonably generic (so that what I’ve written applies just as well to ARM, RISC-V, etc). I will assume a general familiarity with x86 assembly, LLVM IR, and Rust (but not rustc’s internals).

Today, like many other natively compiled languages, Rust defines an unspecified calling convention that lets it call functions however it likes. In practice, Rust lowers to LLVM’s built-in C calling convention, which LLVM’s prologue/epilogue codegen generates calls for. Rust is fairly conservative: it tries to generate LLVM function signatures that Clang could have plausibly generated. This has two significant benefits:

* There is a good probability debuggers won’t choke on it. This is not a concern on Linux, though, because DWARF is very general and does not bake in the Linux C ABI. We will concern ourselves only with ELF-based systems and assume that debuggability is a nonissue.

* It is less likely to tickle LLVM bugs due to using ABI codegen that Clang does not exercise. I think that if Rust tickles LLVM bugs, we should actually fix them (a very small number of rustc contributors do in fact do this).

However, we are too conservative.
We get terrible codegen for simple functions: arr is 12 bytes wide, so you’d think it would be passed in registers, but no! It is passed by pointer! Rust is actually more conservative than what the Linux C ABI mandates, because it actually passes the [i32; 3] in registers when extern “C” is requested. The array is passed in rdi and rsi, with the i32s packed into registers. The function moves rdi into rax, the output register, and shifts the upper half down.

Not only does rustc produce patently bad code for passing things by value, but it also knows how to do it better, if you request a standard calling convention! We could be generating way better code than Clang, but we don’t! Hereforth, I will describe how to do it.

Let’s suppose that we keep the current calling convention for extern “Rust”, but we add a flag -Zcallconv that sets the calling convention for extern “Rust” when compiling a crate. The supported values will be -Zcallconv=legacy for the current one, and -Zcallconv=fast for the one we’re going to design. We could even let -O set -Zcallconv=fast automatically.

Why keep the old calling convention? Although I did sweep debuggability under the rug, one nice property -Zcallconv=fast will not have: it does not place arguments in the C ABI order, which means that a reader relying on the “Diana’s silk dress cost $89” mnemonic on x86 will get fairly confused. I am also assuming we may not even support -Zcallconv=fast for some targets, like WASM, where there is no concept of “registers” and “spilling”. It may not even make sense to enable it for debug builds, because it will produce much worse code with optimizations turned off.

There is also a mild wrinkle with function pointers, and extern “Rust” {} blocks. Because this flag is per-crate, even though functions can advertise which version of extern “Rust” they use, function pointers have no such luxury.
However, calling through a function pointer is slow and rare, so we can simply force them to use -Zcallconv=legacy. We can generate a shim to translate calling conventions as needed. Similarly, we can, in principle, call any Rust function like this:

However, this mechanism can only be used to call unmangled symbols. Thus, we can simply force #[no_mangle] symbols to use the legacy calling convention.

Bending LLVM to Our Will

In an ideal world, LLVM would provide a way for us to specify the calling convention directly. E.g., this argument goes in that register, this return goes in that one, etc. Unfortunately, adding a calling convention to LLVM requires writing a bunch of C++. However, we can get away with specifying our own calling convention by following this procedure:

* First, determine, for a given target triple, the maximum number of values that can be passed “by register”. I will explain how to do this below.

* Decide how to pass the return value. It will either fit in the output registers, or it will need to be returned “by reference”, in which case we pass an extra ptr argument to the function (tagged with the sret attribute) and the actual return value of the function is that pointer.

* Decide which arguments that have been passed by value need to be demoted to being passed by reference. This will be a heuristic, but generally will be approximately “arguments larger than the by-register space”. For example, on x86, this comes out to 176 bytes.

* Decide which arguments get passed by register, so as to maximize register space usage. This problem is NP-hard (it’s the knapsack problem), so it will require a heuristic. All other arguments are passed on the stack.

* Generate the function signature in LLVM IR. This will be all of the arguments that are passed by register encoded as various non-aggregates, such as i64, ptr, double, and <2 x double>.
Which choices of non-aggregate are valid depends on the target, but the above are what you will generally get on a 64-bit architecture. Arguments passed on the stack will follow the “register inputs”.

* Generate a function prologue. This is code to decode each Rust-level argument from the register inputs, so that there are %ssa values corresponding to those that would be present when using -Zcallconv=legacy. This allows us to generate the same code for the body of the function regardless of calling convention. Redundant decoding code will be eliminated by DCE passes.

* Generate a function exit block. This is a block that contains a single phi instruction for the return type as it would be for -Zcallconv=legacy. This block will encode it into the requisite output format and then ret as appropriate. All exit paths through the function should br to this block instead of ret-ing.

* If a non-polymorphic, non-inline function may have its address taken (as a function pointer), either because it is exported out of the crate or the crate takes a function pointer to it, generate a shim that uses -Zcallconv=legacy and immediately tail-calls the real implementation. This is necessary to preserve function pointer equality.

The main upshot here is that we need to cook up heuristics for figuring out what goes in registers (since we allow reordering arguments to get better throughput). This is equivalent to the knapsack problem; knapsack heuristics are beyond the scope of this article. This should happen early enough that this information can be stuffed into rmeta to avoid needing to recompute it. We may want to use different, faster heuristics depending on -Copt-level.

Note that correctness requires that we forbid linking code generated by multiple different Rust compilers, which is already the case, since Rust breaks ABI from release to release.

What Is LLVM Willing to Do?

Assuming we do that, how do we actually get LLVM to pass things in the way we want it to?
We need to determine the largest “by register” passing LLVM will permit. The following LLVM program is useful for determining this on a particular version of LLVM:

When you pass an aggregate by-value to an LLVM function, LLVM will attempt to “explode” that aggregate into as many registers as possible. There are distinct register classes on different systems. For example, on both x86 and ARM, floats and vectors share the same register class (kind of).

The above values are for x86. LLVM will pass six integers and eight SSE vectors by register, and return half as many (3 and 4) by register. Increasing any of the values generates extra loads and stores that indicate LLVM gave up and passed arguments on the stack. The values for aarch64-unknown-linux are 8 integers and 8 vectors for both inputs and outputs, respectively.

This is the maximum number of registers we get to play with for each class. Anything extra gets passed on the stack.

I recommend that every function have the same number of by-register arguments. So on x86, EVERY -Zcallconv=fast function’s signature should look like this: When passing pointers, the appropriate i64s should be replaced by ptr, and when passing doubles, they replace <2 x double>s.

But you’re probably saying, “Miguel, that’s crazy! Most functions don’t pass 176 bytes!” And you’d be right, if not for the magic of LLVM’s very well-specified poison semantics. We can get away with not doing extra work if every argument we do not use is passed poison. Because poison is equal to “the most convenient possible value at the present moment”, when LLVM sees poison passed into a function via register, it decides that the most convenient value is “whatever happens to be in the register already”, and so it doesn’t have to touch that register!

For example, if we wanted to pass a pointer via rcx, we would generate the following code.

; This is a -Zcallconv=fast-style function.
%Out = type {[3 x i64], [4 x <2 x double>]}

define %Out @load_rcx(
  i64 %rdi, i64 %rsi, i64 %rdx,
  ptr %rcx, i64 %r8, i64 %r9,
  <2 x double> %xmm0, <2 x double> %xmm1,
  <2 x double> %xmm2, <2 x double> %xmm3,
  <2 x double> %xmm4, <2 x double> %xmm5,
  <2 x double> %xmm6, <2 x double> %xmm7
) {
  %load = load i64, ptr %rcx
  %out = insertvalue %Out poison,
    i64 %load, 0, 0
  ret %Out %out
}

declare ptr @malloc(i64)

define i64 @make_the_call() {
  %1 = call ptr @malloc(i64 8)
  store i64 42, ptr %1
  %2 = call %Out @load_rcx(
    i64 poison, i64 poison, i64 poison,
    ptr %1, i64 poison, i64 poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison,
    <2 x double> poison, <2 x double> poison
  )
  %3 = extractvalue %Out %2, 0, 0
  ret i64 %3
}
It is perfectly legal to pass poison to a function, if it does not interact with the poisoned argument in any proscribed way. And as we see, load_rcx() receives its pointer argument in rcx, whereas make_the_call() takes no penalty in setting up the call: loading poison into the other thirteen registers compiles down to nothing, so it only needs to load the pointer returned by malloc into rcx.

This gives us almost total control over argument passing; unfortunately, it is not total. In an ideal world, the same registers are used for input and output, to allow easier pipelining of calls without introducing extra register traffic. This is true on ARM and RISC-V, but not x86. However, because register ordering is merely a suggestion for us, we can choose to allocate the return registers in whatever order we want. For example, we can pretend the order registers should be allocated in is rdx, rcx, rdi, rsi, r8, r9 for inputs, and rdx, rcx, rax for outputs.

%Out = type {[3 x i64], [4 x <2 x double>]}
define %Out @square(
  i64 %rdi, i64 %rsi, i64 %rdx,
  ptr %rcx, i64 %r8, i64 %r9,
  <2 x double> %xmm0, <2 x double> %xmm1,
  <2 x double> %xmm2, <2 x double> %xmm3,
  <2 x double> %xmm4, <2 x double> %xmm5,
  <2 x double> %xmm6, <2 x double> %xmm7
) {
  %sq = mul i64 %rdx, %rdx
  %out = insertvalue %Out poison,
    i64 %sq, 0, 1
  ret %Out %out
}

define i64 @make_the_call(i64) {
  %2 = call %Out @square(
...
Read the original on mcyoung.xyz »
Web-browser, advanced e-mail, newsgroup and feed client, IRC chat, and HTML editing made simple—all your Internet needs in one application.
The SeaMonkey project is a community effort to develop the SeaMonkey Internet Application Suite (see below). Such a software suite was previously made popular by Netscape and Mozilla, and the SeaMonkey project continues to develop and deliver high-quality updates to this concept. Containing an Internet browser, email & newsgroup client with an included web feed reader, HTML editor, IRC chat and web development tools, SeaMonkey is sure to appeal to advanced users, web developers and corporate users.
Under the hood, SeaMonkey uses much of the same Mozilla Firefox source code which powers such products as
Thunderbird. Legal backing is provided by the SeaMonkey Association (SeaMonkey e. V.).
The SeaMonkey project is proud to present SeaMonkey 2.53.18.2: The new release of the all-in-one Internet suite is
available for free download now!
2.53.18.2 is a minor bugfix release on the 2.53.x branch and contains a crash fix and a few other fixes to the application from the underlying platform code.
SeaMonkey 2.53.18.2 is available in 23 languages, for Windows, macOS x64 and Linux.
Automatic upgrades from previous 2.53.x versions are enabled for this
release, but if you have problems with it please download the full installer
from the downloads section and install SeaMonkey 2.53.18.2 manually over the
previous version.
For a more complete list of major changes in SeaMonkey 2.53.18.2, see the
What’s New in SeaMonkey 2.53.18.2
section of the Release Notes, which also contains a list of known issues and answers to frequently asked questions. For a more general overview of the SeaMonkey project (and screen shots!), visit www.seamonkey-project.org.
We encourage users to get involved in discussing and reporting problems as well as further improving the product.
The SeaMonkey project is proud to present SeaMonkey 2.53.18 Beta 1: The new beta test release of the all-in-one Internet suite is
available for free download now!
2.53.18 will be an incremental update on the 2.53.x branch and incorporates a number of enhancements, changes and fixes to the application as well as those from the underlying platform code. Support for parsing and processing newer regexp expressions has been added helping with web compatibility on more than a few sites. Crash reporting has been switched over to
BugSplat. We also added many fixes and backports for overall platform stability.
Before installing the new version make a full backup of
your profile and thoroughly read and follow the
Release Notes. We encourage testers to get involved in discussing and reporting problems as well as further improving the product.
SeaMonkey 2.53.18 Beta 1 is available in 23 languages, for Windows, macOS x64 and Linux.
Attention macOS users! The current SeaMonkey release crashes during startup after upgrading to macOS 13 Ventura. Until we have a fix we advise you not to upgrade your macOS installation to Ventura. No usable crash information is generated and this might take a bit longer than usual to fix. This is not a problem with Monterey 12.6.1 or any lower supported macOS version so might even be an Apple bug.
SeaMonkey has inherited the successful all-in-one concept of the original Netscape Communicator and continues that product line based on the modern, cross-platform architecture provided by the
Mozilla project.
* The Internet browser at the core of
the SeaMonkey Internet Application Suite uses the same rendering engine and
application platform as Mozilla Firefox, with popular features like tabbed
browsing, feed detection, popup blocking, smart location bar, find as you
type and a lot of other functionality for a smooth web experience.
* SeaMonkey’s Mail and Newsgroups client
shares lots of code with Thunderbird and features adaptive Junk mail
filtering, tags and mail views, web feeds reading, tabbed messaging, multiple
accounts, S/MIME, address books with LDAP support and is ready for both
private and corporate use.
* Additional components include an easy-to-use
HTML Editor, the ChatZilla IRC chat
application and web development tools like a DOM Inspector.
* If that’s still not enough, SeaMonkey can be extended with numerous
Add-Ons that provide
additional functionality and customization for a complete Internet
experience.
...
Read the original on www.seamonkey-project.org »
Scale of Universe is an interactive experience to inspire people to learn about the vast ranges of the visible and invisible world. Click on objects to learn more. Use the scroll bar to zoom in and out. Remastered by Dave Caruso, Ben Plate, and more.
...
Read the original on scaleofuniverse.com »
One day in the late 1930s, Daniel Webster Wallace—“80 John,” as he was known in ranching circles—rode his favorite horse, Blondie, from his Mitchell County ranch to the Loraine post office. It was a familiar route for him in this mostly flat, sandy part of the state halfway between Midland and Abilene. He had made the six-mile round trip dozens of times. Wallace collected his mail then walked back to Blondie. In his time, 80 John had broken hundreds of broncs. His tougher-than-bull-hide body had never failed him. But on this day, he lacked the strength to swing his leg over the saddle. A group of men saw him struggling and hustled over to lift 80 John onto Blondie.
Wallace was Black. The men who helped him were white. One might imagine that such a scene would have been jaw-dropping in Depression-era Texas, where white hostility toward people of color was common. But the West Texas cowboy culture of the time was distinctive. Men of different races often supported and respected one another. And no cowboy was more respected than Wallace.
In fact he was one of the most remarkable figures in our history. By the time of his death from influenza and pneumonia in 1939, Wallace had built a West Texas ranch of 8,800 acres and amassed a personal fortune purported to be more than a million dollars—equivalent to about $22 million today. He brought about technical innovations that are still used and devoted much of his savings to strengthening his community—all at a time when it would have been difficult for a white man, much less a Black man born enslaved, to accomplish any of that.
Yet few have heard of him today. Last year I attended the awards banquet at the National Cowboy & Western Heritage Museum, in Oklahoma City, at which Wallace was inducted into the Hall of Great Westerners, joining such luminaries as Buffalo Bill and U. S. Supreme Court justice Sandra Day O’Connor. I pride myself on being knowledgeable about the Old West, but I hadn’t heard of 80 John until that night, nor had many other attendees.
Fourteen members of Wallace’s family were present, and when they arrived, “people were looking at us like, ‘What are you doing here?’ ” says his great-granddaughter Daphne Fowler, who lives on her great-grandparents’ ranch. “But after the banquet, we had all these people coming up to us like, ‘Wow, we had no idea!’ ” The event, she says, was about twenty years in the making. “We have been pushing to get him recognized. And it’s not just recognition for my great-grandfather. There have been a lot of cowboys of color, and their stories don’t get told.”
Larry Callies, founder of the Black Cowboy Museum, in Rosenberg, believes that the Houston region is the birthplace of the American cowboy. “The word ‘cowboy’ itself began being used in Fort Bend County and the surrounding area in 1821,” Callies says. “It was applied to Black slaves who worked with cattle.” (He notes that just as an enslaved person who worked inside the mansion would be referred to as a “houseboy,” one who took care of cattle was referred to as a “cowboy.” Even decades later, the term “cowboy” remained unpopular with white cowpunchers because of its racial connotations.) As early as the 1840s, Black men who were enslaved rounded up free-range Longhorns on the plains west of Houston and drove them north across Indian Territory to Kansas, establishing Texas’s fabled cattle-drive era. 80 John’s story emerges from this rich culture.
He was born in 1860 on a two-hundred-acre farm outside Inez, in Victoria County, not far from the Gulf Coast, to parents who were enslaved. His mother, Mary Wallace, had been bought by the farm’s owners, Mary and Josiah O’Daniel, to serve as a maid and wet nurse. His father, William Wallace, worked as a farmhand. The O’Daniels put young Wallace to work in the fields as a small child, and he received virtually no formal education. “Schools were scarce,” he told his daughter in the 1930s. “I received most of my learning by contact with others and observation.” 80 John and his family remained enslaved until the Juneteenth emancipation announcement, in 1865.
The following year, the O’Daniels moved to a larger farm about eighty miles north, outside Flatonia, taking the Wallace family with them—this time as paid employees. The families maintained ties for the next seventy years; 80 John stayed in touch with the O’Daniels’ sons, M. H. and Dial, until his death.
Mary Wallace saved her coins with the intent of someday buying real estate. The lesson was not lost on 80 John, who from an early age dreamed of owning a spread. He loathed chopping cotton for someone else.
Cattle drives often passed through the area, and Wallace decided they were his escape route, since many of the cowboys he saw were Black. One predawn morning in March 1876, he ran away from the O’Daniel farm and joined a crew moving cattle nearly three hundred miles northwest, to Buffalo Gap. Wallace was a green hand when he departed Flatonia. By the time the herd reached its destination, he had plenty of experience at “trailing.” Thereafter, work was not hard to find.
“He rode for the most famous and respected cattle barons of these seminal days of the Texas cattle industry,” says his great-grandson Alfred McMichael, who, along with his son, Keir, hosts a website devoted to 80 John. “He rode every major cattle trail, from the Chisholm Trail to the Goodnight-Loving Trail. He made long, daring solo rides.” None of it was easy. He contended with thunderstorms, blizzards, droughts, dust storms, outlaw gangs, poisonous snakes, disease, infection, sunstroke, lack of food and water, stampedes, and skirmishes with Native Americans. Once he spent days tracking a handful of Comanches who had stolen one of his favorite horses.
“A friend recommended that I read Lonesome Dove,” McMichael says. “I got a sense of what my great-grandfather’s life on the cattle ranges was like from reading about Joshua Deets.” Deets was the much-respected Black cowboy portrayed by Danny Glover in the TV rendition of Larry McMurtry’s novel. On the other hand, 80 John’s time on the trails was nothing like the world portrayed in the vast majority of Hollywood westerns, in which all of the characters—other than the “evil Indians”—are white.
“Even now, in 2024, a lot of people aren’t aware of how diverse cowboy culture was,” says Robert Tidwell, interim director of collections, exhibits, and research at Texas Tech’s National Ranching Heritage Center, in Lubbock. “The main concern on a ranch is, Can you do the job? Can you pull your own weight? And when that’s the main criteria, you’d be surprised at how things shake out. You had more tolerance than you’d find in other places where social structures were more defined. Between twenty and thirty percent of cowboys were Black.” A comparable number of ethnic Mexicans worked on the cattle trail.
Still, 80 John did occasionally run into racists, and he refused to tolerate them. His daughter Hettye Wallace Branch writes in her book, The Story of “80 John,” that one white man bullied him at a cow camp in West Texas. The six-foot-three 80 John responded by whupping the offender in a fistfight, which apparently instilled respect in his adversary. Improbable as it seems, the two men developed a friendship.
In 1878 a letter informed Wallace that his mother, whom he had not seen since leaving the O’Daniels’ farm, two years earlier, was deathly ill. He hurried back home but arrived too late. He settled his mother’s estate—which included a few acres she had bought—and then returned to West Texas, where he went to work for Clay Mann. Neither man’s life would be the same.
Though largely forgotten now, Mann was a prototype for the white Texas wheeler-dealer. In his time he was a legend in cow camps throughout the West, a figure who had, as a teenager, started a cattle herd and killed his father’s murderer. At one point Mann owned tens of thousands of cattle and thousands of acres in Texas, New Mexico, and Wyoming. He also owned a 600,000-acre ranch in Mexico.
But there was a problem.
“Now, Clay, he liked to drink, and he liked to gamble,” his great-grandson Tom “Possum” Mann tells me. “He loved to play a game called Mexican Monte, which was popular in the West at the time. He might win ten thousand dollars in a night, then lose it the next night.” Mann knew he needed someone in his outfit who was levelheaded and trustworthy. 80 John fit the bill. “80 John didn’t drink or gamble or anything,” Possum says. Soon Wallace and Mann were heading up large drives from Texas to cattle towns throughout the Midwest.
“At the end of a drive,” Possum says, “Clay would pay the hands then give most of the leftover money to 80 John to take back to Clay’s wife, Mary Mann, in Mitchell County. She’d deposit it in the bank he founded in Colorado City. Clay would stay behind with some of the money to gamble and eventually catch a train home.” Mann once gave Wallace $30,000 (the equivalent of $900,000 today) in cash to take to Midland, a three-day ride west from Mann’s ranch. 80 John made sure the money arrived safely.
Mann’s operations were extensive enough that he registered 43 cattle brands, including, most famously, a huge number 80. Wallace oversaw the burning of that brand onto Mann’s stock, and cattlemen associated him with it. “In West Texas,” Tidwell says, “the name ‘John’ was sort of a generic term for cowhands in general.” Hence the nickname 80 John.
Wallace’s expertise extended to more than branding and bronc busting. Mann discovered his top employee was freakishly good at arithmetic. He could add columns of numbers in his head, which proved especially useful when it came to financial transactions. “I have a method of figuring of my own which has stood me in good stead,” 80 John told his daughter. “Very seldom I have missed anything due me any larger than a fraction.” He could also scan a pasture with hundreds of cattle and produce a head count that was usually off by fewer than a dozen animals.
Early in their business relationship, Wallace told Mann that his goal in life was to own land and a herd. Mann began setting aside the lion’s share of 80 John’s pay to accomplish that. Bit by bit, Wallace acquired cattle as he continued working for Mann. Among his other responsibilities, Wallace was the de facto foreman of many of Mann’s properties. Even amid the relative racial comity of cowboy culture it was extraordinarily unusual—perhaps unprecedented—for a Black man to oversee white ranch hands.
In 1885, Wallace had saved enough to buy 1,280 acres of railroad land south of Loraine, intending to homestead it. Realizing he needed to learn how to read and write before he could become a successful rancher, he enrolled as a second grader at a segregated Black school in Navarro County, almost three hundred miles east of his newly acquired land. Over two terms of studying alongside children, he became functionally literate. He also met and fell in love with Laura Dee Owens, a Corsicana beauty who was finishing high school. They were married in 1888 and remained together until his death almost 51 years later. Laura sacrificed her plans to teach in order to help him build a ranch from scratch. They started life as a married couple in a two-room cabin on one of Mann’s ranches.
In 1889, 80 John’s world was turned upside down when Mann died from a stomach hemorrhage, caused by decades of hard drinking. Wallace took charge of the cattle operations and became a father figure to Mann’s sons, teaching them the fundamentals of ranching. The Wallace and Mann families established a bond that continued through generations. (Fowler, Wallace’s great-granddaughter, says Mann’s great-great-granddaughter, Becca Mann George, was her childhood best friend, and they remain close.)
Over the next decades, 80 John and Laura lived as frugally as possible and saved every dime they could to buy more land and cattle. They made for an ideal partnership. He was good with numbers, while she was more verbal. She could parse the thickest contracts and deeds and then read the salient points aloud to her husband. Wallace had a near-photographic memory when it came to business matters. In meeting with lawyers, he could quote passages verbatim that Laura had read to him. She also took charge of treating sick and injured livestock. Whenever 80 John had to be absent to tend to business, she ran the ranch by herself.
“I just can’t imagine,” Fowler says, “what it was like in the late 1800s and early 1900s with that pressure of being the only Black rancher out here. Sometimes I’ll go by the cemetery and I’ll smile thinking about what he and Laura did.”
As early as 1903, Wallace was admitted to the Cattle Raisers Association of Texas and became a highly regarded member. He was, so far as I can tell, the only Black man during that era who was invited to join the group. To attend each year, he had to board a segregated passenger car at the Colorado City railroad station to travel to Fort Worth. He had to find lodging at a so-called colored hotel. From there he walked to the whites-only hotel that hosted the convention. During one of his train trips, some of his white rancher friends joined him for conversation in the segregated passenger car. The conductor attempted to remove the white men, but one refused to budge. “I have known 80 John for thirty years,” the cattleman said. “We ate and slept on the ground together. I see no reason that makes it impossible for me to sit here now.”
The Wallaces eventually lived in a four-room house that 80 John designed and built himself. It has since been moved to the grounds of the National Ranching Heritage Center and restored to its original condition. Wallace disdained automobiles, airplanes, telephones, gramophones, and indoor plumbing. “But he was ahead of his time when it came to raising cattle,” says Mark Merrell, a retired Mitchell County judge and educator. The man who started out herding wild Longhorns ended up crossbreeding Herefords and Durhams (Shorthorns), decades before doing so became commonplace.
He also was an early adopter of windmill technology in West Texas (one of his windmills is on display at the American Windmill Museum, in Lubbock) and created sophisticated concrete watering troughs that other ranchers copied. As the couple became more affluent, they contributed to Mitchell County charities, including paying the construction costs for a Baptist church in Loraine. They also built the Wallace namesake school, in Colorado City, that provided area Black children their only access to education.
In the 1930s, 80 John and Laura drew up a will that put the ranch into a trust. Their primary goal was to provide education funding for subsequent generations of their family. They also wanted to keep the ranch intact. The property is now held by seven trusts. “Each one of the members of the family makes it very specific: don’t sell the land, except to a family member,” says Dwayne Harris, who, as trust officer at City National Bank, managed the ranch for more than twenty years before his retirement. “We had cattle-grazing leases and oil leases as well as revenue from cotton fields, a gravel quarry, and, more recently, wind turbines.” Harris estimates the ranch’s value at $15 million.
It’s said that around the time Wallace made that mail trip on Blondie, he sank a post in the sandy loam on his property and announced he wanted to be buried at that exact spot. The tombstone that now sits in the middle of the family graveyard replaced the post more than eight decades ago. Even facing death, 80 John proved to be a man who knew what he wanted—and achieved it, stampedes and racism be damned.
Austin writer W. K. Stratton’s most recent book is The Wild Bunch: Sam Peckinpah, a Revolution in Hollywood, and the Making of a Legendary Film.
Photo Credits: Wallace: courtesy of the Wallace family; cattle drive: Bettmann/Getty; cowboy: Erwin Smith/Library of Congress; map: UNT Libraries/The Portal to Texas History/Hardin-Simmons University Library
This article originally appeared in the May 2024 issue of Texas Monthly with the headline “The Immortal Life of ‘80 John.’”