10 interesting stories served every morning and every evening.
Update 2025/02/04: Oracle asks the USPTO to dismiss our petition. Read more
Update 2024/11/22: We’ve filed a petition to cancel with the USPTO. Read more
You have long ago abandoned the JavaScript trademark, and it is causing widespread, unwarranted confusion and disruption.
JavaScript is the world’s most popular programming language, powering websites everywhere. Yet, few of the millions who program in it realize that JavaScript is a trademark you, Oracle, control. The disconnect is glaring: JavaScript has become a general-purpose term used by countless individuals and companies, independent of any Oracle product.
Oracle’s hold on the JavaScript trademark clearly fits the legal definition of trademark abandonment. A previous blog post addressed this issue, requesting that you, Oracle, release the trademark. Unsurprisingly, the request was met with silence. It is therefore time to take active steps to bring the JavaScript trademark into the public domain, where it belongs.
A mark shall be deemed to be “abandoned” if either of the following occurs:

(1) When its use has been discontinued with intent not to resume such use. Intent not to resume may be inferred from circumstances. Nonuse for 3 consecutive years shall be prima facie evidence of abandonment. “Use” of a mark means the bona fide use of such mark made in the ordinary course of trade, and not made merely to reserve a right in a mark.

(2) When any course of conduct of the owner, including acts of omission as well as commission, causes the mark to become the generic name for the goods or services on or in connection with which it is used or otherwise to lose its significance as a mark. Purchaser motivation shall not be a test for determining abandonment under this paragraph.
In the case of JavaScript, both criteria apply.
The JavaScript trademark is currently held by Oracle America, Inc. (US Serial Number: 75026640, US Registration Number: 2416017). How did this come to be?
In 1995, Netscape partnered with Sun Microsystems to create interactive websites. Brendan Eich famously spent only 10 days creating the first version of JavaScript, a dynamic programming language with a rough syntactic lineage from Sun’s Java language. As a result of this partnership, Sun held the JavaScript trademark. In 2009, Oracle acquired Sun Microsystems and the JavaScript trademark as a result.
The trademark is simply a relic of this acquisition. Neither Sun nor Oracle has ever built a product using the mark. Legal staff, year after year, have renewed the trademark without question. It’s likely that only a few within Oracle even know they possess the JavaScript trademark, and even if they do, they likely don’t understand the frustration it causes within the developer community.
Oracle has abandoned the JavaScript trademark through nonuse.
Oracle has never seriously offered a product called JavaScript. In the 1990s and early 2000s, Netscape Navigator, which supported JavaScript as a browser feature, was a key player. However, Netscape’s usage and influence faded by 2003, and the browser saw its final release in 2008. JavaScript, meanwhile, evolved into a widely used, independent programming language, embedded in multiple browsers, entirely separate from Oracle.
The most recent specimen, filed with the USPTO in 2019, references nodejs.org (a project created by Ryan Dahl, the author of this letter) and Oracle’s JavaScript Extension Toolkit (JET). But Node.js is not an Oracle product, and JET is merely a set of JavaScript libraries for Oracle services, particularly Oracle Cloud. There are millions of JavaScript libraries; JET is not special.
Oracle also offers GraalVM, a JVM that can execute JavaScript, among other languages. But GraalVM is far from a canonical JavaScript implementation; engines like V8, JavaScriptCore, and SpiderMonkey hold that role. GraalVM’s product page doesn’t even mention “JavaScript”; you must dig into the documentation to find its support.
Oracle’s use of JavaScript in GraalVM and JET does not reflect genuine use of the trademark. These weak connections do not satisfy the requirement for consistent, real-world use in trade.
A mark can also be considered abandoned if it becomes a generic term.
In 1996, Netscape announced a meeting of the ECMA International standards organization to standardize the JavaScript programming language. Sun (now Oracle) refused to give up the “JavaScript” mark for this use, though, so it was decided that the language would be called “ECMAScript” instead. (Microsoft happily offered up “JScript”, but no-one else wanted that.) Brendan Eich, the creator of JavaScript and a co-signatory of this letter, wrote in 2006 that “ECMAScript was always an unwanted trade name that sounds like a skin disease.”
Ecma International formed TC39, a technical steering committee, which publishes ECMA-262, the specification for JavaScript. This committee includes participants from all major browsers, like Google’s Chrome, Apple’s Safari, and Mozilla’s Firefox, as well as representatives from server-side JavaScript runtimes like Node.js and Deno.
Oracle’s ownership of the JavaScript trademark only causes confusion. The term “JavaScript” is used freely by millions of developers, companies, and organizations around the world, with no interference from Oracle. Oracle has done nothing to assert its rights over the JavaScript name, likely because they do not believe their claim to the mark would hold up in court. Unlike typical trademark holders who protect their trademarks by extracting licensing fees or enforcing usage restrictions, Oracle has allowed the JavaScript name to be used by anyone. This inaction further supports the argument that the trademark has lost its significance and has become generic.
Programmers working with JavaScript have formed innumerable community organizations. These organizations, like the standards bodies, have been forced to painstakingly avoid naming the programming language they are built around—for example, JSConf. Sadly, without risking a legal trademark challenge against Oracle, there can be no “JavaScript Conference” nor a “JavaScript Specification.” The world’s most popular programming language cannot even have a conference in its name.
There is a vast misalignment between the trademark’s ownership and its widespread, generic use.
By law, a trademark is abandoned if it is either not used or becomes a generic term. Both apply to JavaScript.
It’s time for the USPTO to end the JavaScript trademark and recognize it as a generic name for the world’s most popular programming language, which has multiple implementations across the industry.
Oracle, you likely have no real business interest in the mark. It’s renewed simply because legal staff are obligated to renew all trademarks, regardless of their relevance or use.
We urge you to release the mark into the public domain. However, asking nicely has been tried before, and it was met with silence. If you do not act, we will challenge your ownership by filing a petition for cancellation with the USPTO.
...
Read the original on javascript.tm »
The students at America’s elite universities are supposed to be the smartest, most promising young people in the country. And yet, shocking percentages of them are claiming academic accommodations designed for students with learning disabilities.
In an article published this week in The Atlantic, education reporter Rose Horowitch lays out some shocking numbers. At Brown and Harvard, 20 percent of undergraduate students are disabled. At Amherst College, that’s 34 percent. At Stanford University, it’s a galling 38 percent. Most of these students are claiming mental health conditions and learning disabilities, like anxiety, depression, and ADHD.
Obviously, something is off here. The idea that some of the most elite, selective universities in America—schools that require 99th percentile SATs and sterling essays—would be educating large numbers of genuinely learning disabled students is clearly bogus. A student with real cognitive struggles is much more likely to end up in community college, or not in higher education at all, right?
The professors Horowitch interviewed largely back up this theory. “You hear ‘students with disabilities’ and it’s not kids in wheelchairs,” one professor told Horowitch. “It’s just not. It’s rich kids getting extra time on tests.” Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.
To be fair, some of the students receiving these accommodations do need them. But the current language of the Americans with Disabilities Act (ADA) allows students to get expansive accommodations with little more than a doctor’s note.
While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they’re using to get extra time on tests. Over the past few years, there’s been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents’ Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. “It’s almost like it’s part of their identity,” Lindstrom told her. “By the time we see them, they’re convinced they have a neurodevelopmental disorder.”
What’s driving this trend? Well, the way conditions like ADHD, autism, and anxiety get talked about online—the place where most young people first learn about these conditions—is probably a contributing factor. Online creators tend to paint a very broad picture of the conditions they describe. A quick scroll of TikTok reveals creators labeling everything from always wearing headphones, to being bad at managing your time, to doodling in class as a sign that someone may have a diagnosable condition. According to these videos, who isn’t disabled?
The result is a deeply distorted view of “normal.” If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a “normal,” nondisabled person has essentially no problems. A “neurotypical” person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buttressed by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest issue of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it’s easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they’re learning disabled, and any problems making friends are a sign they have autism.
Risk-aversion, too, seems like a compelling factor driving bright students to claim learning disabilities. Our nation’s most promising students are also its least assured. So afraid of failure—of bad grades, of a poorly-received essay—they take any sign of struggle as a diagnosable condition. A few decades ago, a student who entered college and found the material harder to master and their time less easily managed than in high school would have been seen as relatively normal. Now, every time she picks up her phone, a barrage of influencers is clamoring to tell her this is a sign she has ADHD. Discomfort and difficulty are no longer perceived as typical parts of growing up.
In this context, it’s easy to read the rise of academic accommodations among the nation’s most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you’ll also find yourself less capable of tackling the challenges of adult life.
...
Read the original on reason.com »
Lately I’ve been reading Sean Goedecke’s essays on being a Staff+ engineer. His work (particularly Software engineering under the spotlight and It’s Not Your Codebase) is razor-sharp and feels painfully familiar to anyone in Big Tech.
On paper, I fit the mold he describes: I’m a Senior Staff engineer at Google. Yet, reading his work left me with a lingering sense of unease. At first, I dismissed this as cynicism. After reflecting, however, I realized the problem wasn’t Sean’s writing but my reading.
Sean isn’t being bleak; he is accurately describing how to deal with a world where engineers are fungible assets and priorities shift quarterly. But my job looks nothing like that and I know deep down that if I tried to operate in that environment or in the way he described I’d burn out within months.
Instead I’ve followed an alternate path, one that optimizes for systems over spotlights, and stewardship over fungibility.
The foundational reason for our diverging paths is that Sean and I operate in entirely different worlds with different laws governing them.
From Sean’s resume, my understanding is that he has primarily worked in product teams building for external customers. Business goals pivot quarterly, and success is measured by revenue or MAU. Optimizing for the “Spotlight” makes complete sense in this environment. Product development at big tech scale is a crowded room: VPs, PMs and UX designers all have strong opinions. To succeed, you have to be agile and ensure you are working specifically on what executives are currently looking at.
On the other hand, I’ve spent my entire career much more behind the scenes: in developer tools and infra teams.
My team’s customers are thousands of engineers in Android, Chrome, and throughout Google. End users of Google products don’t even know we exist; our focus is on making sure developers have the tools to collect product and performance metrics and debug issues using detailed traces.
In this environment, our relationship with leadership is very different. We’re never the “hot project everyone wants,” so execs are not fighting to work with us. In fact, my team has historically struggled to hire PMs. The PM career ladder at Google incentivizes splashy external launches so we cannot provide good “promotion material” for them. Also, our feedback comes directly from engineers. Adding a PM in the middle causes a loss in translation, slowing down a tight, high-bandwidth feedback loop.
All of this together means our team operates “bottom-up”: instead of execs telling us “you should do X”, we figure out what we think will have the most impact for our customers and work on building those features and tools. Execs ensure that we’re actually solving these problems by considering our impact on the more product-facing teams.
In the product environments Sean describes, where goals pivot quarterly and features are often experimental, speed is the ultimate currency. You need to ship, iterate, and often move on before the market shifts. But in Infrastructure and Developer Experience, context is the currency.
Treating engineers as fungible assets destroys context. You might gain fresh eyes, but you lose the implicit knowledge of how systems actually break. Stewardship, staying with a system long-term, unlocks compounding returns that are impossible to achieve on a short rotation.
The first is efficiency via pattern matching. When you stay in one domain for years, new requests are rarely truly “new.” I am not just debugging code; I am debugging the intersection of my tools and hundreds of diverse engineering teams. When a new team comes to me with a “unique” problem, I can often reach back in time: “We tried this approach in 2021 with the Camera team; here is exactly why it failed, and here is the architecture that actually works”.
But the more powerful return is systemic innovation. If you rotate teams every year, you are limited to solving acute bugs that are visible right now. Some problems, however, only reveal their shape over long horizons.
Take Bigtrace, a project I recently led; it was a solution that emerged solely because I stuck around long enough to see the shape of the problem:
* Start of 2023 (Observation): I began noticing a pattern. Teams across Google were collecting terabytes or even petabytes of performance traces, but they were struggling to process them. Engineers were writing brittle, custom pipelines to parse data, often complaining about how slow and painful it was to iterate on their analysis.
* Most of 2023 (Research): I didn’t jump to build a production system. Instead, I spent the best part of a year prototyping quietly in the background while working on other projects. I gathered feedback from these same engineers who had complained and because I had established long-term relationships, they gave me honest and introspective feedback. I learned what sort of UX, latency and throughput requirements they had and figured out how I could meet them.
* End of 2023 to Start of 2024 (Execution): We built and launched Bigtrace, a distributed big data query engine for traces. Today, it processes over 2 billion traces a month and is a critical part of the daily workflow for 100+ engineers.
If I had followed the advice to “optimize for fungibility” (i.e. if I had switched teams in 2023 to chase a new project) Bigtrace would not exist.
Instead, I would have left during the research phase and my successor would have seen the same “noise” of engineers complaining. But without the historical context to recognize a missing puzzle piece, I think they would have struggled to build something like Bigtrace.
One of the most seductive arguments for chasing the “Spotlight” is that it guarantees resources and executive attention. But that attention is a double-edged sword.
High-visibility projects are often volatile. They come with shifting executive whims, political maneuvering, and often end up in situations where long-term quality is sacrificed for short-term survival. For some engineers, navigating this chaos is a thrill. For those of us who care about system stability, it feels like a trap.
The advantage of stewardship is that it generates a different kind of capital: trust. When you have spent years delivering reliable tools, you earn the political capital to say “No” to the spotlight when it threatens the product.
Recently, the spotlight has been on AI. Every team is under pressure to incorporate it. We have been asked repeatedly: “Why don’t you integrate LLMs into Perfetto?” If I were optimizing for visibility, the answer would be obvious: build an LLM wrapper, demo it to leadership, and claim we are “AI-first.” It would be an easy win for my career.
But as a steward of the system, I know that one of Perfetto’s core values is precision. When a kernel developer is debugging a race condition, they need exact timestamps, not a hallucination. Users trust that when we tell them “X is the problem”, it actually is the problem, and that they’re not going to spend the next week chasing their tail, debugging an issue which doesn’t exist.
But it’s important not to take this too far: skepticism shouldn’t become obstructionism. With AI, it’s not “no forever” but “not until it can be done right”.
A spotlight-seeking engineer might view this approach as a missed opportunity; I view it as protecting what makes our product great: user trust.
The most common fear engineers have about leaving the “Spotlight” is career stagnation. The logic goes: If I’m not launching flashy features at Google I/O, and my work isn’t on my VP’s top 5 list, how will I ever get promoted to Staff+?
It is true that you lose the currency of “Executive Visibility.” But in infrastructure, you gain two alternate currencies that are just as valuable, and potentially more stable.
In a product organization, you often need to impress your manager’s manager. In an infrastructure organization, you need to impress your customers’ managers.
I call this the Shadow Hierarchy. You don’t need your VP to understand the intricacies of your code. You need the Staff+ Engineers in other critical organizations to need your tools.
When a Senior Staff Engineer in Pixel tells their VP, “We literally cannot debug the next Pixel phone without Perfetto”, that statement carries immense weight. It travels up their reporting chain, crosses over at the Director/VP level, and comes back down to your manager.
This kind of advocacy is powerful because it is technical, not political. It is hard to fake. When you are a steward of a critical system, your promotion packet is filled with testimonials from the most respected engineers in the company saying, “This person’s work enabled our success”.
While product teams might be poring over daily active users or revenue, we rely on metrics tracking engineering health:
* Utility: Every bug fixed using our tools is an engineer finding us useful. It is the purest measure of utility.
* Criticality: If the Pixel team uses Perfetto to debug a launch-blocking stutter, or Chrome uses it to fix a memory leak, our impact is implicitly tied to their success.
* Ubiquity: Capturing a significant percentage of the engineering population proves you’ve created a technical “lingua franca”. This becomes especially obvious when you see disconnected parts of the company collaborating with each other, using shared Perfetto traces as a “reference everyone understands”.
* Scale: Ingesting petabytes of data or processing billions of traces proves architectural resilience better than any design doc.
When you combine Criticality (VIP teams need this) with Utility (bugs are being fixed), you create a promotion case that is immune to executive reorganizations.
I am far from the first to notice that there are multiple ways to be a Staff software engineer. In his book Staff Engineer, Will Larson categorizes Staff-plus engineers into four distinct archetypes.
Sean describes the Solver or the Right Hand: engineers who act as agents of executive will, dropping into fires and moving on once the problem is stabilized. I am describing the Architect or the Tech Lead: roles defined by long-term ownership of a specific domain and deep technical context.
I can hear the criticism already: “You just got lucky finding your team. Most of us don’t have that luxury.”
There are two caveats to all my advice in this post. First, the strategy I have employed so far requires a company profitable enough to sustain long-term infrastructure. This path generally does not exist in startups or early growth companies; it is optimized for Big Tech.
Second, luck does play a role in landing on a good team. It is very hard to accurately evaluate team and company culture from the outside. But while finding the team might have involved luck, staying there for almost a decade was a choice.
And, at least in my experience, my team is not particularly special: I can name five other teams in Android alone. Sure, they might have a director change here or a VP change there, but the core mission and the engineering team have remained stable.
The reason these teams seem rare is not that they don’t exist, but that they are often ignored. Because they don’t offer the rapid, visible “wins” of a product launch nor are they working on the “shiny cool features”, they attract less competition. If you are motivated by “shipping to billions of users” or seeing your friends and family use something you built, you won’t find that satisfaction here. That is the price of admission.
But if you want to build long-term systems and are willing to trade external validation for deep technical ownership, you just need to look behind the curtain.
The tech industry loves to tell you to move fast. But there is another path. It is a path where leverage comes from depth, patience, and the quiet satisfaction of building the foundation that others stand on.
You don’t have to chase the spotlight to have a meaningful, high-impact career at a big company. Sometimes, the most ambitious thing you can do is stay put, dig in, and build something that lasts. To sit with a problem space for years until you understand it well enough to build a Bigtrace.
...
Read the original on lalitm.com »
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”
The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.
According to The Information, one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry, which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year.
...
Read the original on arstechnica.com »
Parenting and leadership are similar. Teach a man to fish, etc.
I spent a couple of years managing a team, and I entered that role — like many — without knowing anything about how to do it. I tried to figure out how to be a good manager, and in doing so I ended up reading a lot about servant leadership. It never quite sat right with me, though. Servant leadership seems to me a lot like curling parenting: the leader/parent anticipates problems and sweeps the way clear for their direct reports/children.
To be clear, this probably feels very good (initially, anyway) for the direct reports/children. But the servant leader/curling parent quickly becomes an overworked single point of failure, and once they leave, nobody else knows how to handle the obstacles the leader used to move out of the way. In the worst cases, they leave behind a group of people who have been completely isolated from the rest of the organisation and have no idea what their purpose is or how to fit in with the rest of the world.
I would like to invent my own buzzword: transparent leadership. In my book, a good leader

* explains the values and principles embraced by the organisation, so that direct reports can make aligned decisions on their own,
* creates direct links between supply and demand (instead of deliberately making themselves a middle man), and
* allows their direct reports career growth by letting them gradually take over leadership responsibilities.
The middle manager that doesn’t perform any useful work is a fun stereotype, but I also think it’s a good target to aim for. The difference lies in what to do once one has rendered oneself redundant. A common response is to invent new work, ask for status reports, and add bureaucracy.
A better response is to go back to working on technical problems. This keeps the manager’s skills fresh and gets them more respect from their reports. The manager should turn into a high-powered spare worker, rather than a paper-shuffler.
...
Read the original on entropicthoughts.com »
Self-host and scale web apps
without the complexity
Take your Docker Compose apps to production with zero-downtime deployments, automatic HTTPS, and cross-machine scaling. No Kubernetes required.
# Start with any cloud VM or your own server
$ uc machine init [email protected]
# Deploy your app with automatic HTTPS
$ uc run --name my-app -p app.example.com:8000/https app-image:latest
✨ Your app is available at https://app.example.com
# Achieve high availability by adding more machines and scaling the app
$ uc machine add [email protected]
$ uc scale my-app 2
PaaS-like workflow on your own servers
Deploy with the simplicity of Heroku or Fly.io while keeping full control over your infrastructure.
Full control over your servers and data
SSH into machines and debug with standard tools
Build, push, and deploy with one command
No control plane or quorum to manage
Uncloud replaces complex clusters with a simple network of machines working seamlessly together — no maintenance overhead, just reliable infrastructure.
Each machine joins a WireGuard mesh network with automatic peer discovery and NAT traversal. Containers get unique IPs and can communicate directly across machines.

Unlike traditional orchestrators, there’s no central control plane to maintain. Each machine keeps a synchronised copy of the cluster state through peer-to-peer communication, keeping cluster operations functional even if some machines go offline.

Control your entire infrastructure using intuitive Docker-like commands from anywhere. Deploy, monitor, and scale applications across all your machines while the CLI only needs SSH access to a single machine.
Run your apps on any Linux machine — from cloud VMs and dedicated servers to bare metal at your office or home.
Get free TLS certificates and automatic HTTPS for your domains with zero configuration using built-in Caddy reverse proxy.
Distribute traffic across container replicas running on different machines for improved reliability and performance.
Access any service by its name from any container using the built-in DNS that automatically tracks services across your network.
Define your entire app stack in a familiar Docker Compose file. No need to learn a new config format.
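For illustration, here is a minimal sketch of such a Compose file. This is hypothetical, not taken from the Uncloud docs: the service name and image are invented, and expressing the CLI’s `app.example.com:8000/https` publishing syntax inside the Compose `ports` list is an assumption, not something this page confirms.

```yaml
# compose.yaml — hypothetical sketch. Uncloud consumes the standard
# Docker Compose format, so an existing file should work largely as-is.
services:
  my-app:
    image: app-image:latest            # same image as the CLI example above
    ports:
      # Assumed: domain-based publishing mirroring `uc run -p ...`,
      # served over HTTPS by the built-in Caddy reverse proxy.
      - app.example.com:8000/https
```

How `uc` ingests the file (a dedicated subcommand, a flag, or something else) isn’t shown on this page, so that step is left out here.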
Mix cloud providers and your own hardware freely to optimise costs and performance, without changing how you deploy or manage apps.
Deploy a highly available web app with automatic HTTPS across multiple regions and on-premises in just a couple minutes.
...
Read the original on uncloud.run »
Roula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.
Anthropic has tapped law firm Wilson Sonsini to begin work on one of the largest initial public offerings ever, which could come as soon as 2026, as the artificial intelligence start-up races OpenAI to the public market.
The maker of the Claude chatbot, which is in talks for a private funding round that would value it at more than $300bn, chose the US west coast law firm in recent days, according to two people with knowledge of the decision.
The start-up, led by chief executive Dario Amodei, had also discussed a potential IPO with big investment banks, according to multiple people with knowledge of those talks. The people characterised the discussions as preliminary and informal, suggesting that the company was not close to picking its IPO underwriters.
Nonetheless, these moves represent a significant step up in Anthropic’s preparations for an IPO that would test the appetite of public markets to back the massive, lossmaking research labs at the heart of the AI boom.
Wilson Sonsini has advised Anthropic since 2022, including on commercial aspects of multibillion-dollar investments from Amazon, and has worked on high-profile tech IPOs such as Google, LinkedIn and Lyft.
Its investors are enthusiastic about an IPO, arguing that Anthropic can seize the initiative from its larger rival OpenAI by listing first.
Anthropic could be prepared to list in 2026, according to one person with knowledge of its plans. Another person close to the company cautioned that an IPO so soon was unlikely.
“It’s fairly standard practice for companies operating at our scale and revenue level to effectively operate as if they are publicly traded companies,” said an Anthropic spokesperson. “We haven’t made any decisions about when or even whether to go public, and don’t have any news to share at this time.”
OpenAI was also undertaking preliminary work to ready itself for a public offering, according to people with knowledge of its plans, though they cautioned it was too soon to set even an approximate date for a listing.
But both companies may also be hampered by the fact that their rapid growth and the astronomical costs of training AI models make their financial performance difficult to forecast.
The pair will also be attempting IPOs at valuations that are unprecedented for US tech start-ups. OpenAI was valued at $500bn in October. Anthropic received a $15bn commitment from Microsoft and Nvidia last month, which will form part of a funding round expected to value the group between $300bn and $350bn.
Anthropic had been working through an internal checklist of changes required to go public, according to one person familiar with the process.
The San Francisco-headquartered start-up hired Krishna Rao, who worked at Airbnb for six years and was instrumental in that company’s IPO, as chief financial officer last year.
Wilson Sonsini did not respond to a request for comment.
...
Read the original on www.ft.com »
RAM is so expensive, Samsung won’t even sell it to Samsung
Due to rising prices from the “AI” bubble, Samsung Semiconductor reportedly refused a RAM order for new Galaxy phones from Samsung Electronics.
The price of eggs has nothing on the price of computer memory right now. Thanks to a supply crunch from the “AI” bubble, RAM chips are the new gold, with prices on consumer PC memory kits ballooning out of control. In an object lesson in the ridiculousness of an economic bubble, Samsung won’t even sell its memory to… Samsung.
Here’s the situation. Samsung makes everything from refrigerators to supermassive oil tankers. Getting all that stuff made requires an organization that’s literally dozens of affiliated companies and subsidiaries, which don’t necessarily work as closely or harmoniously as you might assume. For this story, we’re talking about Samsung Electronics, which makes Galaxy phones, tablets, laptops, watches, etc., and Samsung Semiconductor Global, which manufactures memory and other chips and supplies the global market. That global market includes both Samsung subsidiaries and their competitors—laptops from Samsung, Dell, and Lenovo sitting on a Best Buy store shelf might all have Samsung-manufactured memory sitting in their RAM slots.
Samsung subsidiaries are, naturally, going to look to Samsung Semiconductor first when they need parts. Such was reportedly the case for Samsung Electronics, in search of memory supplies for its newest smartphones as the company ramps up production for 2026 flagship designs. But with so much RAM hardware going into new “AI” data centers—and those companies willing to pay top dollar for their hardware—memory manufacturers like Samsung, SK Hynix, and Micron are prioritizing data center suppliers to maximize profits.
The end result, according to a report from SE Daily spotted by SamMobile, is that Samsung Semiconductor rejected the original order for smartphone DRAM chips from Samsung Electronics’ Mobile Experience division. The smartphone manufacturing arm of the company had hoped to nail down pricing and supply for another year. But reports say that due to “chipflation,” the phone-making division must renegotiate quarterly, with a long-term supply deal rejected by its corporate sibling. A short-term deal, with higher prices, was reportedly hammered out.
Assuming that this information is accurate—and to be clear, we can’t independently confirm it—consumers will see prices rise for Samsung phones and other mobile hardware. But that’s hardly a surprise. Finished electronics probably won’t see the same meteoric rise in prices as consumer-grade RAM modules, but this rising tide is flooding all the boats. Raspberry Pi, which strives to keep its mod-friendly electronics as cheap as possible, has recently had to bring prices up and called out memory costs as the culprit. Lenovo, the world’s largest PC manufacturer, is stockpiling memory supplies as a bulwark against the market.
But if you’re hoping to see prices fall in 2026, don’t hold your breath. According to a forecast from memory supplier TeamGroup, component prices have tripled recently, causing finished modules to jump in price by as much as 100 percent in a month. Absent some kind of disastrous market collapse, prices are expected to keep rising into next year, and supply could remain constrained well into 2027 or later.
...
Read the original on www.pcworld.com »
Memory price inflation comes for us all, and if you’re not affected yet, just wait.
I was building a new PC last month using some parts I had bought earlier this year. The 64 GB T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650!
Just in the past week, we found out Raspberry Pi is increasing its single board computer prices. Micron’s killing the Crucial brand of RAM and storage devices completely, meaning there’s gonna be one fewer consumer memory manufacturer. Samsung can’t even buy RAM from itself to build its own smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse, and they’re not even buying the latest RAM tech!
I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PCPartPicker, showing RAM prices going from like $30 to $120 for DDR4, or like $150 to $500 for 64 gigs of DDR5.
But the impacts are only just starting to hit other markets.
Libre Computer mentioned on Twitter that a single 4 gigabyte module of LPDDR4 memory costs $35. That’s more expensive than every other component on one of their single board computers combined! You can’t survive selling products at a loss, so once the current production batches are sold through, either prices will go up or certain product lines will go out of stock.
The smaller the company, the worse the price hit will be. Even Raspberry Pi, who I’m sure has a little more margin built in, already raised SBC prices (and introduced a 1 GB Pi 5—maybe a good excuse for developers to drop JavaScript frameworks and program for lower memory requirements again?).
Cameras, gaming consoles, tablets, almost anything that has memory will get hit sooner or later.
I can’t believe I’m saying this, but compared to the current market, Apple’s insane memory upgrade pricing is… actually in line with the rest of the industry.
The reason for all this, of course, is AI datacenter buildouts. I have no clue if there’s any price fixing going on like there was a few decades ago—that’s something conspiracy theorists can debate—but the problem is there’s only a few companies producing all the world’s memory supplies.
And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
So they’re shutting down their consumer memory lines, and devoting all production to AI.
Even GPU board manufacturers are getting shafted; Nvidia’s not supplying memory along with its chips like it used to, basically telling them “good luck, you’re on your own for VRAM now!”
Which is especially rich, because Nvidia’s profiting obscenely off of all this stuff.
That’s all bad enough, but some people see a silver lining. I’ve seen some people say “well, once the AI bubble bursts, at least we’ll have a ton of cheap hardware flooding the market!”
And yes, in past decades, that might be one outcome.
But the problem here is that a ton of the RAM they’re making is either integrated into specialized GPUs that won’t run on normal computers, or fitted into special types of memory modules that don’t work on consumer PCs, either. (See: HBM.)
That, and the GPUs and servers being deployed now don’t even run on normal power and cooling; they’re part of massive systems that would take a ton of effort to get running in even the most well-equipped homelabs. It’s not like the classic Dell R720 that just needs some air and a wall outlet to run.
That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn’t stockpile parts earlier this year is pretty much left in the lurch.
Even Lenovo admits to stockpiling RAM, making this like the toilet paper situation back in 2020, except for massive corporations. Not enough supply, so companies who can afford to get some will buy it all up, hoping to stave off the shortages that will probably last longer, partly because of that stockpiling.
I don’t think it’s completely outlandish to think some companies will start scavenging memory chips off other systems for stock (à la dosdude1), especially if RAM prices keep going up.
It’s either that, or just stop making products. There are echoes of the global chip shortages that hit in 2021-2022, which really shook up the market for smaller companies.
I hate to see it happening again, but somehow, here we are a few years later, except this time, the AI bubble is to blame.
Sorry for not having a positive note to end this on, but I guess… maybe it’s a good time to dig into that pile of old projects you never finished instead of buying something new this year.
How long will this last? That’s anybody’s guess. But I’ve already put off some projects I was gonna do for 2026, and I’m sure I’m not the only one.
...
Read the original on www.jeffgeerling.com »