10 interesting stories served every morning and every evening.
FTC failed to follow rulemaking process required by US law, judges rule.
A federal appeals court today struck down a “click-to-cancel” rule that would have required companies to make cancelling services as easy as signing up. The Federal Trade Commission rule was scheduled to take effect on July 14 but was vacated by the US Court of Appeals for the 8th Circuit.
A three-judge panel ruled unanimously that the Biden-era FTC, then led by Chair Lina Khan, failed to follow the full rulemaking process required under US law. “While we certainly do not endorse the use of unfair and deceptive practices in negative option marketing, the procedural deficiencies of the Commission’s rulemaking process are fatal here,” the ruling said.
Indicating their sympathy with the FTC’s motivations, judges wrote that many Americans “have found themselves unwittingly enrolled in recurring subscription plans, continuing to pay for unwanted products or services because they neglected to cancel their subscriptions.” Last year, the FTC updated its 1973 Negative Option Rule by “adding provisions that bar sellers from misrepresenting material facts and require disclosure of material terms, express consumer consent, and a simple cancellation mechanism,” the ruling said.
The FTC is required to conduct a preliminary regulatory analysis when a rule has an estimated annual economic effect of $100 million or more. The FTC estimated in a Notice of Proposed Rulemaking (NPRM) that the rule would not have a $100 million effect.
But an administrative law judge later found that the rule’s impact surpassed the threshold, observing that compliance costs would exceed $100 million “unless each business used fewer than twenty-three hours of professional services at the lowest end of the spectrum of estimated hourly rates,” the 8th Circuit ruling said. Despite the administrative law judge’s finding, the FTC did not conduct a preliminary regulatory analysis and instead “proceeded to issue only the final regulatory analysis alongside the final Rule,” the judges’ panel said.
Summarizing the FTC’s arguments, judges said the agency contended that US law “did not require the Commission to conduct the preliminary regulatory analysis later in the rulemaking process,” and that “any alleged error was harmless because the NPRM addressed alternatives to the proposed amendments to the 1973 [Negative Option] Rule and analyzed record-keeping and compliance costs.”
Judges disagreed with the FTC, writing that “the statutory language, ‘shall issue,’ mandates a separate preliminary analysis for public review and comment ‘in any case’ where the Commission issues a notice of proposed rulemaking and the $100 million threshold is surpassed.”
Numerous industry groups and businesses, including cable companies, sued the FTC in four federal circuit courts. The cases were consolidated at the 8th Circuit, where they were decided by Circuit Judges James Loken, Ralph Erickson, and Jonathan Kobes. Loken was appointed by George H. W. Bush, while Erickson and Kobes are Trump appointees.
The judges said the lack of a preliminary analysis meant that industry groups and businesses weren’t given enough time to contest the FTC’s findings:
By the time the final regulatory analysis was issued, Petitioners still did not have the opportunity to assess the Commission’s cost-benefit analysis of alternatives, an element of the preliminary regulatory analysis not required in the final analysis. And the Commission’s discussion of alternatives in the final regulatory analysis was perfunctory. It briefly mentioned two alternatives to the final Rule, either terminating the rulemaking altogether and continuing to rely on the existing regulatory framework or limiting the Rule’s scope to negative option plans marketed in-person or through the mail. While the Commission’s decision to bypass the preliminary regulatory analysis requirement was certainly not made in bad faith or an “outright dodge of APA [Administrative Procedure Act] procedures,” Petitioners have raised ‘enough uncertainty whether [their] comments would have had some effect if they had been considered,’ especially in the context of a closely divided Commission vote that elicited a lengthy dissenting statement.
The 8th Circuit ruling said the FTC’s tactics, if not stopped, “could open the door to future manipulation of the rulemaking process. Furnishing an initially unrealistically low estimate of the economic impacts of a proposed rule would avail the Commission of a procedural shortcut that limits the need for additional public engagement and more substantive analysis of the potential effects of the rule on the front end.”
The FTC issued the proposal in March 2023 and voted 3-2 to approve the rule in October 2024, with Republican Commissioners Melissa Holyoak and Andrew Ferguson voting against it. Ferguson is now chairman of the FTC, which has consisted only of Republicans since Trump fired the two Democrats who remained after Khan’s departure.
At the time of the vote, Holyoak’s dissenting statement accused the majority of hurrying to finalize the rule before the November 2024 election and warned that the new regulation “may not survive legal challenge.” Holyoak also argued that the rule is too broad, saying it “is nothing more than a back-door effort at obtaining civil penalties in any industry where negative option is a method to secure payment.”
Khan’s announcement of the now-vacated rule said that too many “businesses make people jump through endless hoops just to cancel a subscription. The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”
...
Read the original on arstechnica.com »
The Rust programming language is well known for its ownership-based type system, which offers strong guarantees like memory safety and data race freedom. However, Rust also provides unsafe escape hatches, for which safety is not guaranteed automatically and must instead be manually upheld by the programmer. This creates a tension. On the one hand, compilers would like to exploit the strong guarantees of the type system—particularly those pertaining to aliasing of pointers—in order to unlock powerful intraprocedural optimizations. On the other hand, those optimizations are easily invalidated by “badly behaved” unsafe code. To ensure correctness of such optimizations, it thus becomes necessary to clearly define what unsafe code is “badly behaved.” In prior work, Stacked Borrows defined a set of rules achieving this goal. However, Stacked Borrows rules out several patterns that turn out to be common in real-world unsafe Rust code, and it does not account for advanced features of the Rust borrow checker that were introduced more recently.
To resolve these issues, we present Tree Borrows. As the name suggests, Tree Borrows is defined by replacing the stack at the heart of Stacked Borrows with a tree. This overcomes the aforementioned limitations: our evaluation on the 30 000 most widely used Rust crates shows that Tree Borrows rejects 54% fewer test cases than Stacked Borrows does. Additionally, we prove (in Rocq) that it retains most of the Stacked Borrows optimizations and also enables important new ones, notably read-read reorderings.
...
Read the original on plf.inf.ethz.ch »
Ikea is relaunching its smart home line in a move that will make its low-cost products work with other brands, with or without Ikea’s own hub. Starting in January, the Swedish furniture giant will release more than 20 new Matter-over-Thread smart lights, sensors, and remotes with “more new product types and form factors to come,” David Granath of Ikea of Sweden tells The Verge in an exclusive interview.
Ikea is also rebooting its audio offerings as it seeks to fill the Sonos Symfonisk-shaped hole on its shelves. The first two models in the new line of inexpensive, easy-to-use Bluetooth speakers for the home are the $50 retro-style Nattbad and a speaker-slash-table-lamp, the Blomprakt, coming in October, with many more on the way.
These new products are part of the company’s ongoing effort to make its smart home as simple and affordable for as many people as possible. “A couple of years ago, we made some strategic decisions about how to move on with the smart range and the speaker range and make an impact in an Ikea way for the many people,” says Granath. He points to the company’s learnings from working with Zigbee and Sonos over the last few years, as well as its involvement in founding and developing the new smart home standard, Matter. “We feel we’ve reached that point. There’s a lot coming, but this is all the first step, getting things in place.”
Last week, Ikea released an update, currently in beta, to its Dirigera smart home hub that turns the hub into a Matter Controller and activates its long-dormant Thread radio, making it a Thread border router. This means it can now connect to and control any compatible Matter device, including those from other brands, and control them in its Home Smart app. It will also work with Ikea’s new Matter devices when they arrive, which will eventually replace its existing Zigbee devices, says Granath. It’s a major step toward a more open, plug-and-play smart home.
We don’t have a lot of details on the over 20 new devices coming next year, but Granath confirmed that they are replacing existing functions: new smart bulbs, plugs, sensors, remotes, buttons, and air-quality devices, including temperature and humidity monitors. They will also come with a new design, although “not necessarily what’s been leaked,” says Granath, referring to images of the Bilresa Dual Button that appeared earlier this year.
He did confirm that some new product categories will arrive in January, with more to follow in April and beyond, including potentially Matter-over-Wi-Fi products. Pricing will be comparable to or lower than that of previous products, which start under $10. “Affordability remains a key priority for us.”
“The premium to make a product smart is not that high anymore, so you can expect new product types and form factors coming,” he says. “Matter unlocks interoperability, ease of use, and affordability for us. The standardization process means more companies are sharing the workload of developing for this.”
Despite the move away from Zigbee, Ikea is keeping Zigbee’s Touchlink functionality. This point-to-point protocol allows devices to be paired directly to each other and work together out of the box, without an app or hub — such as the bulb and remote bundles Ikea sells.
Granath says Ikea’s goal is for customers to get the best value from their products — it doesn’t matter whether that’s with Apple Home, with their hub, or without any hub. This is why the company is embracing Matter’s open approach. “We want to remove barriers to complexity, we want it to be simple to use, and we just want it to work,” he says. “If you want the most user-friendly system, choose ours. But if you’re an Apple user, take our bulb and onboard it to your Apple Home.”
This reboot positions Ikea as one of the first major retailers to bring Matter to the mainstream market, a potentially risky move as Matter has struggled with fragmentation, slow adoption, and other issues since its launch. But Granath is confident that it’s the right move. “Ikea is a good catalyst for the mass market, as we’re not aiming for the techy people; we can make it affordable and easy enough for the many people.”
So far, Ikea has taken a slow and steady approach to the technology, waiting for some of the kinks to be ironed out before unleashing it on Ikea customers who expect things to be simple and to just work. “We don’t want people to have a bad experience — it’s been about timing. We’ve been waiting to find the balance of potential and user experience,” says Granath.
For Ikea, that time is here, and Granath says the team has done a lot of work to get the tech ready. But while Matter has undergone significant improvements recently, it’s yet to be fully proven in the mainstream. Is it really ready for such a big splash, I ask? “We definitely hope so,” says Granath.
...
Read the original on www.theverge.com »
When talking about REST, it is worth reading Roy Thomas Fielding’s dissertation. The original work that describes the RESTful web, “Architectural Styles and the Design of Network-based Software Architectures” by Roy T. Fielding (2000), introduces the Representational State Transfer (REST) architectural style as a framework for designing scalable, performant, and maintainable networked systems, particularly web services.
The paper aims to analyze architectural styles for network-based systems, identifying their strengths and weaknesses. It defines REST as a specific architectural style optimized for the modern web, focusing on scalability, simplicity, and adaptability.
Fielding demonstrates how REST principles shape the success of the web, advocating for its adoption in designing distributed systems with universal, stateless interfaces and clear resource-based interactions.
In his dissertation he does not prescribe the specific use of HTTP verbs (like GET, POST, PUT, DELETE) or focus on CRUD-style APIs as REST is often implemented today.
The dissertation clearly describes REST as an architectural style that provides principles and constraints for building network-based applications, using the web as its foundational example.
Roy Fielding has explicitly criticized the oversimplification of REST in CRUD-style APIs, emphasizing that many so-called “RESTful” APIs fail to implement key REST constraints, particularly the use of hypermedia for driving application state transitions. In his 2008 blog post, “REST APIs must be hypertext-driven,” Fielding states:
“If the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.” Source
This underscores his view that without hypermedia controls, an API does not fulfill the REST architectural style. Hypermedia controls are elements embedded in a resource representation that guide clients on what actions they can take next.
Common misconceptions about what people consider REST are, for example:
REST is CRUD (often it is, but not always)
A resource is an entity (often meaning a persistence entity).
RESTful API must not use verbs.
But those are actually design decisions made for the API at hand, chosen by its creators, and have nothing to do with REST.
What Does “Driven by Hypertext” Mean?
The phrase “not being driven by hypertext” in Roy Fielding’s criticism refers to the absence of Hypermedia as the Engine of Application State (HATEOAS) in many APIs that claim to be RESTful. HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).
The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Simply implementing HATEOAS will bring you closer to RESTful principles than debating whether verbs are allowed in your API.
Often people argue about what a resource in REST is. I’ve fairly commonly seen people express the opinion that a resource is a data structure coming from the server, unfortunately often even equated with a persistence entity.
Let’s check what Fielding says about it:
The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. “today’s weather in Los Angeles”), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author’s hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.
Semantics are a by-product of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI — they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.
Now let’s take a look at RFC 3986:
This specification does not limit the scope of what might be a resource; rather, the term “resource” is used in a general sense for whatever might be identified by a URI. Familiar examples include an electronic document, an image, a source of information with a consistent purpose (e.g., “today’s weather report for Los Angeles”), a service (e.g., an HTTP-to-SMS gateway), and a collection of other resources. A resource is not necessarily accessible via the Internet; e.g., human beings, corporations, and bound books in a library can also be resources. Likewise, abstract concepts can be resources, such as the operators and operands of a mathematical equation, the types of a relationship (e.g., “parent” or “employee”), or numeric values (e.g., zero, one, and infinity).
The following examples of URIs are taken from the RFC as well:
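ftp://ftp.is.co.za/rfc/rfc1808.txt
http://www.ietf.org/rfc/rfc2396.txt
ldap://[2001:db8::7]/c=GB?objectClass?one
mailto:John.Doe@example.com
news:comp.infosystems.www.servers.unix
tel:+1-816-555-1212
telnet://192.0.2.16:80/
urn:oasis:names:specification:docbook:dtd:xml:4.1.2

(These are the example URIs listed in RFC 3986, section 1.1.2.)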
A resource can be virtually anything that can be addressed by a URI. It can be a physical object, a concept, a document, a service, or even a virtual or abstract thing—as long as it can be uniquely identified and represented.
I am getting frustrated by the number of people calling any HTTP-based interface a REST API.
Fielding describes six rules that you should consider before calling your API a RESTful API Source.
API designers, please note the following rules before calling your creation a REST API:
A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]
A REST API should not contain any changes to the communication protocols aside from filling-out or fixing the details of underspecified bits of standard protocols, such as HTTP’s PATCH method or Link header field. Workarounds for broken implementations (such as those browsers stupid enough to believe that HTML defines HTTP’s method set) should be defined separately, or at least in appendices, with an expectation that the workaround will eventually be obsolete. [Failure here implies that the resource interfaces are object-specific, not generic.]
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling].
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names. [ditto]
A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
But what does all of that mean, exactly? To be honest, without spending some time and thought on it, I didn’t comprehend it either the first time I read it.
This is my understanding of them; feel free to disagree, and let’s have a conversation! I’m curious to hear a different point of view or opinion about them.
A REST API uses Uniform Resource Identifiers (URIs) to name things. A URL (http://…) is just one specific type of URI that also includes a location. The key principle here is that a resource’s fundamental identity should be separate from its access mechanism.
A URI (Uniform Resource Identifier) is a broad concept that refers to any string used to identify a resource, while a URL (Uniform Resource Locator) is a specific type of URI that not only identifies a resource but also provides a means to locate it by describing its primary access mechanism (such as its network location).
Your API should work with any URI and not rely on HTTP-specific mechanisms.
Stick to existing standards (like HTTP). If something in the standard is vague, clarify it. Don’t invent new behaviors or break existing ones.
Don’t hack or redefine how HTTP works. Fix the gaps, but don’t change the rules.
If HTTP doesn’t fully define how PATCH should work, you can clarify it. But don’t redefine GET to also delete data just because it’s convenient.
Your API should define how to understand and use the data it returns — through media types (like JSON, HTML) and links — not focus on the structure of URIs or what actions to call.
Put your energy into designing the format of the data and the links inside it, not into documenting what URLs to hit.
Instead of documenting that you must POST to /users/123/activate, your API should return a user representation in a hypermedia-aware format (like application/hal+json or a custom type like application/vnd.myapp.user+json).
The client code doesn’t know about the /users/123/activate path. It only knows that the media type defines an “activate” link relation, and it uses the href and method provided in the response to perform the action.
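As a sketch, such a representation might look like this (HAL-style; the "method" hint is a common convention layered on top, not part of the HAL spec itself):

{
  "id": 123,
  "status": "inactive",
  "_links": {
    "self": { "href": "/users/123" },
    "activate": { "href": "/users/123/activate", "method": "POST" }
  }
}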
This rule is the direct consequence of Rule #3. Clients shouldn’t assume or hardcode paths like /users/123/posts. Instead, they should discover URIs through links provided by the server. Clients should learn about URIs dynamically.
A client shouldn’t assume that a user’s posts are at /users/123/posts. It should read a link like this from the resource:
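{
  "id": 123,
  "name": "Jane Doe",
  "_links": {
    "self": { "href": "/users/123" },
    "posts": { "href": "/users/123/posts" }
  }
}

(An illustrative HAL-style representation; the exact shape depends on the media type you choose, and "Jane Doe" is just a placeholder.)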
The server’s internal classification of a resource (e.g., User vs. Admin) must be entirely irrelevant and invisible to the client.
The client shouldn’t care what kind of entity a resource represents internally (like User, Admin, Moderator). It should care only about the media type (like application/json) and the links/actions it sees.
Don’t expose internal object types or roles. Just send a well-structured response with useful links. Don’t require the client to know that a resource is of type Admin. Just give them a consistent application/json response with relevant links and data.
By “standardized relation names” he refers to the link relations registered with the IANA.
The client should only need one starting point (e.g., a base URL, a bookmark) and a knowledge of standard media types. Everything else — what to do, where to go — should come from the server responses.
Clients should follow links like browsing a website — starting from the home page and clicking through, not hardcoding paths.
Start with https://api.example.com/ and follow the _links in each response:
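{
  "_links": {
    "self": { "href": "https://api.example.com/" },
    "users": { "href": "https://api.example.com/users" },
    "orders": { "href": "https://api.example.com/orders{?page}", "templated": true }
  }
}

(An illustrative entry-point response; which relations exist and what they are called is entirely up to the server.)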
In practice, very few APIs adhere to these principles. The next section examines why.
Why aren’t most APIs truly RESTful?
The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: the ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams. These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out of the box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was, and probably still is, often seen as “good enough,” making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier. It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Fielding’s rules emphasize that a truly RESTful API should embrace hypermedia (HATEOAS) as the central mechanism for interaction, not just use HTTP as a transport. REST is protocol-independent at its core; HTTP is simply a convenient way of using it. Clients should discover and navigate resources dynamically through links and standardized relations embedded in representations — not rely on hardcoded URI structures, types, or external documentation.
This makes REST systems loosely coupled, evolvable, and aligned with how the web itself operates: through representation, discovery, and transitions. REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.
Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP-based” APIs. Build whatever makes sense for your project and the consumers of your API. Ignore the dogmatists who preach what RESTful APIs supposedly are and are not. An API should be easy to learn and hard to misuse in the first place. If it fulfills those criteria, it doesn’t matter whether it is RESTful or not. Follow Postel’s Law if it makes sense for your case: “Be conservative in what you do, be liberal in what you accept from others.”
Who is the consumer of your API? How easy will it be for it to learn and use the API? Will it be intuitive to use? What are possible constraints? How do you version it? Deprecation and sun-setting strategies? How are changes to the API effectively communicated to consumers? Those things are much more valuable than the actual format of your resource identifier.
By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically. This can support generic or self-adapting clients. If that aligns with your goals (e.g., client decoupling, evolvability, dynamic interaction), then it’s a valid and powerful design choice. If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.
...
Read the original on florian-kraemer.net »
Today the Council of the European Union formally approved the accession of Bulgaria to the euro area on 1 January 2026 and determined a Bulgarian lev conversion rate of 1.95583 per euro. This is the current central rate of the lev in the Exchange Rate Mechanism (ERM II), which the currency joined on 10 July 2020. The European Central Bank (ECB) and Българска народна банка (Bulgarian National Bank) agreed to monitor developments in the Bulgarian lev against the euro on the foreign exchange market until 1 January 2026. With the entry into force of the close cooperation framework between the ECB and Българска народна банка (Bulgarian National Bank), the ECB has been responsible for directly supervising four significant institutions and overseeing 13 less significant institutions in Bulgaria since 1 October 2020. The agreement to monitor the lev is in the context of ERM II. Participation in ERM II and observance of the normal fluctuation margins for at least the last two years is one of the convergence criteria to be fulfilled ahead of euro area accession. The conversion rate of the lev is set by way of an amendment to Regulation (EC) No 2866/98, which will become effective on 1 January 2026.
...
Read the original on www.ecb.europa.eu »
After migrating several projects from WordPress to Astro, I’ve become a massive fan of this framework.
Astro is a web framework that came out in 2021 and immediately felt different. While most JavaScript frameworks started with building complex applications and then tried to adapt to simpler sites, Astro went the opposite direction. It was built from day one for content-focused websites.
The philosophy is refreshingly simple. Astro believes in being content-driven and server-first, shipping zero JavaScript by default (yes, really), while still being easy to use with excellent tooling. It’s like someone finally asked, “What if we built a framework specifically for the types of websites most of us actually make?”
Astro introduced something called “Island Architecture,” and once you understand it, you’ll wonder why we’ve been doing things any other way.
Traditional frameworks hydrate entire pages with JavaScript. Even if you’ve got a simple blog post with one interactive widget, the whole page gets the JavaScript treatment. Astro flips this on its head. Your pages are static HTML by default, and only the bits that need interactivity become JavaScript “islands.”
Picture a blog post with thousands of words of content. In Astro, all that text stays as pure HTML. Only your comment section or image carousel loads JavaScript. Everything else remains lightning fast. It’s brilliantly simple.
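A minimal sketch of what that looks like in an Astro page (the Carousel component and its import path are hypothetical):

---
// Runs at build time; everything below ships as plain HTML
import Carousel from '../components/Carousel.jsx'; // hypothetical interactive component
const title = "My travel photos";
---
<article>
  <h1>{title}</h1>
  <p>Thousands of words of static content stay as pure HTML...</p>
  <!-- Only this island loads JavaScript; client:visible hydrates it
       when it scrolls into view -->
  <Carousel client:visible />
</article>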
Astro sites are fast; we’re talking 40% faster load times compared to traditional React frameworks. But here’s the thing: this isn’t just about impressing other developers. These performance gains translate directly to better search rankings, happier users, and yes, more conversions. On slower devices or dodgy mobile connections, the difference is even more dramatic.
The developer experience in Astro feels like someone actually thought about how we work. Setting up a new project is straightforward, and you’re guided through the process by Houston, their friendly setup assistant.
What I love most about Astro components is how they just make sense:
---
// This runs at build time
const { title } = Astro.props;
const posts = await fetchPosts();
---
See that code fence at the top? That runs at build time, not in the browser. Your data fetching, your logic - it all happens before the user even loads the page. You get brilliant TypeScript support without any of the complexity of hooks, state management, or lifecycle methods.
With Astro you’re not locked into a single way of doing things. Need React for a complex form? Chuck it in. Prefer Vue for data visualisation? Go for it. Want to keep most things as simple Astro components? Perfect.
Astro allows you to use React for the admin dashboard components, Vue for some interactive charts, and keep everything else as vanilla Astro. It all just works together seamlessly.
Markdown support in Astro isn’t bolted on as an afterthought. You can import Markdown files directly and use them like components:
import { Content, frontmatter } from "../content/post.md";
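Used in a component, the imported pieces look something like this (a minimal sketch):

---
import { Content, frontmatter } from "../content/post.md";
---
<h1>{frontmatter.title}</h1>
<Content />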
The build pipeline is modern and complete. TypeScript works out of the box, Sass compilation is built in, images get optimised automatically with Astro’s <Image /> component, and you get hot module replacement during development. No setting up Webpack configs or fighting with build tools.
You also get flexibility in how pages are rendered. Build everything statically for maximum speed, render on the server for dynamic content, or mix both approaches in the same project. Astro adapts to what you need.
I’ve found Astro perfect for marketing sites, blogs, e-commerce catalogues, and portfolio sites. Basically, anywhere content is the hero and you don’t need complex client-side state management, Astro excels.
Astro isn’t the answer to everything. If you’re building a complex single-page application (SPA) with lots of client-side routing, need ISR (hello Next.js), or you need heavy state management across components, you’ll probably want something else like Next.js.
The ecosystem is growing but still very small in comparison to something like Next.js. File-based routing can feel constraining on larger projects (though some people love it).
# Create project
npm create astro@latest my-site
cd my-site
# Add a framework if you want
npx astro add react
# Start developing
npm run dev
Put your pages in src/pages/, components in src/components/, and you’re ready to build something great.
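As a sketch, a typical layout maps files to routes roughly like this (file names are illustrative):

src/
  pages/
    index.astro          → /
    blog/
      [slug].astro       → /blog/:slug
  components/
    Header.astro
    Carousel.jsx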
After years of JavaScript frameworks getting more complex, Astro feels like a breath of fresh air. It’s a return to the fundamentals of the web - fast, accessible, content-first experiences - but with all the modern developer conveniences we’ve come to expect.
What struck me most after migrating several projects is how Astro makes the right thing the easy thing. Want a fast site? That’s the default. Want to add interactivity? Easy, but only where you need it. Want to use your favourite framework? Go ahead, Astro won’t judge.
If you’re building anything content-focused, from a simple blog to a full e-commerce site, give Astro a serious look. Your users will get a faster experience, you’ll enjoy the development process, and your Core Web Vitals will be spectacular.
Note - The website you’re reading this blog from is built with Astro.
...
Read the original on websmith.studio »
On StackExchange, someone asks why programmers talk about “calling” a function. Several possible allusions spring to mind:
* Calling a function is like calling on a friend — we go, we stay a while, we come back.
* Calling a function is like calling for a servant — a summoning to perform a task.
* Calling a function is like making a phone call — we ask a question and get an answer from outside ourselves.
The true answer seems to be the middle one — “calling” as in “calling up, summoning” — but indirectly, originating in the notion of “calling for” a subroutine out of a library of subroutines in the same way that we’d “call for” a book out of a closed-stack library of books.
The OED’s first citation for call number in the library-science sense comes from Melvil Dewey (yes, that Dewey) in 1876. The OED defines it as:
A mark, esp. a number, on a library book, or listed in a library’s catalogue, indicating the book’s location in the library; a book’s press mark or shelf mark.
I see librarians using the term “call-number” in The Library Journal 13.9 (1888) as if it was very well established already by that point:
Mr. Davidson read a letter from Mr. A. W. Tyler […] enclosing sample of the new call blank used at the Plainfield (N.J.) P. L., giving more room for the signature and address of the applicant. […] “In connection with Mr. Tyler’s new call slip […] I always feel outraged when I make up a long list of call numbers in order to make sure of a book, and then the librarian keeps the list, and the next time I have it all to do over again.”
According to The Organization of Information 4th ed. (Joudrey & Taylor, 2017):
Call number. A notation on a resource that matches the same notation in the metadata description and is used to identify and locate the item; it often consists of a classification notation and a cutter number, and it may also include a workmark and/or a date. It is the number used to “call” for an item in a closed-stack library; thus the source of the name “call number.”
Cutter number. A designation with the purpose of alphabetizing all works that have exactly the same classification notation. Named for Charles Ammi Cutter, who devised such a scheme, but spelled with a small c when referring to another such table that is not Cutter’s own.
John W. Mauchly’s article “Preparation of problems for EDVAC-type machines” (1947) uses the English word “call” only twice, yet this seems to be an important early attestation of the word in the context of a “library” of computer subroutines:
Important questions for the users of a machine are: How easily can reference be made to any of the subroutines? How hard is it to initiate a subroutine? What conditions can be used to terminate a subroutine? And with what facility can control revert to any part of the original sequence or some further sequence […] Facilities for conditional and other transfers to subroutines, transfers to still further subroutines, and transfers back again, are certain to be used frequently.
[…] the position in the memory at which arguments are placed can be standardized, so that whenever a subroutine is called in to perform a calculation, the subroutine will automatically know that the argument which is to be used is at a specified place.
[…] Some of them might be written out in a handbook and transferred to the coding of the problem as needed, but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.
[…] One of the problems which must be met in this case is the method of withdrawal from the library and of compilation in the proper sequence for the particular problem. […] It is possible […] to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use […] all one needs to do is make brief reference to them by number, as they are indicated in the coding.
The manual for the “MANIAC II assembly routine” (January 1956) follows Mauchly’s sketch pretty closely. MANIAC II has a paper-tape “library” of subroutines which can be summoned up (by the assembler) to become part of a fully assembled program, and in fact each item in the “library” has an identifying “call number,” just like every book in a real library has a call number:
The assembly routine for Maniac II is designed to translate descriptive code into absolute code. […] The bulk of the descriptive tape consists of a series of instructions, separated, by control words, into numbered groups called boxes [because flowcharts: today we’d say “basic blocks”]. The allowed box numbers are 01 through EF[. …] If the address [in an instruction’s address field] is FXXX, then the instruction must be a transfer, and the transfer is to the subroutine whose call number is XXX. The most common subroutines are on the same magnetic tape as the assembly routine, and are brought in automatically. For other subroutines, the assembly routine stops to allow the appropriate paper tapes to be put into the photoreader.
Notice that the actual instruction (or “order”) in MANIAC II is still known as “TC,” “transfer control,” and the program’s runtime behavior is known as a transfer of control, not yet as a call. The calling here is not the runtime behavior but rather the calling-up of the coded subroutine (at assembly time) to become part of the fully assembled program.
Fortran II (1958) introduced CALL and RETURN statements, with this description:
The additional facilities of FORTRAN II effectively enable the programmer to expand the language of the system indefinitely. […] Each [CALL statement] will constitute a call for the defining subprogram, which may carry out a procedure of any length or complexity […]
[The CALL] statement causes transfer of control to the subroutine NAME and presents the subroutine with the arguments, if any, enclosed in parentheses. […] A subroutine introduced by a SUBROUTINE statement is called into the main program by a CALL statement specifying the name of the subroutine. For example, the subroutine introduced by
could be called into the main program by the statement
Notice that Fortran II still describes the runtime behavior as “transfer of control,” but as the computer language becomes higher-level the English starts to blur and conflate the runtime transfer-of-control behavior with the assembly- or link-time “calling-in” behavior.
In Robert I. Sarbacher’s Encyclopedic dictionary of electronics and nuclear engineering (1959), the entry for Subroutine doesn’t use the word “call,” but Sarbacher does seem to be reflecting a mental model somewhere inside the union of Mauchly’s definition and Fortran II’s.
Call in. In computer programming, the transfer of control of a computer from a main routine to a subroutine that has been inserted into the sequence of calculating operations to perform a subsidiary operation.
Call number. In computer programming, a set of characters used to identify a subroutine. They may include information related to the operands, or may be used to generate the subroutine.
Call word. In computer programming, a call number exactly the length of one word.
Notice that Sarbacher defines “call in” as the runtime transfer of control itself; that’s different from how the Fortran II manual used the term. Maybe Sarbacher was accurately reflecting an actual shift in colloquial meaning that had already taken place between 1958 and 1959 — but personally I think he might simply have goofed it. (Sarbacher was a highly trained physicist, but not a computer guy, as far as I can tell.)
“JOVIAL: A Description of the Language” (February 1960) says:
A procedure call [today we’d say “call site”] is the link from the main program to a procedure. It is the only place from which a procedure may be entered.
An input parameter [today we’d say “argument”] is an arithmetic expression specified in the procedure call which represents a value on which the procedure is to operate[.] A dummy input parameter is an item specified in the procedure declaration which represents a value to be used by the procedure as an input parameter.
One or more Procedure Calls (of other procedures) may appear within a procedure. At present, only four “levels” of calls may exist.
That JOVIAL manual mentions not only the “procedure call” (the syntax for transferring control to a procedure declaration) but also the “SWITCH call” (the syntax for transferring control to a switch-case label). That is, JOVIAL (1960) has fully adopted the noun “call” to mean “the syntactic indicator of a runtime transfer of control.” However, JOVIAL never uses “to call” as a verb.
A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process […] The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters […] give complete information concerning the admissibility of parameters used in any procedure call[.]
Peter Naur’s “Algol 60 Report” (May 1960) avoids the verb “call,” but in a new development casually uses the noun “call” to mean “the period during which the procedure itself is working” — not the transfer of control but the period between the transfers in and out:
A procedure statement serves to invoke (call for) the execution of a procedure body. […] [When passing an array parameter, if] the formal parameter is called by value the local array created during the call will have the same subscript bounds as the actual array.
Finally, “Burroughs Algebraic Compiler: A representation of ALGOL for use with the Burroughs 220 data-processing system” (1961) attests a single (definitionary) instance of the preposition-less verb “call”:
The ENTER statement is used to initiate the execution of a subroutine (to call a subroutine).
The usage in “Advanced Computer Programming: A case study of a classroom assembly program” (Corbató, Poduska, & Saltzer, 1963) is entirely modern: “It is still convenient for pass one to call a subroutine to store the cards”; “In order to call EVAL, it is necessary to save away temporary results”; “the subroutine which calls PASS1 and PASS2”; etc.
Therefore my guesses at the moment are:
Fortran II (1958) rapidly popularized the phrasing “to call X” for the temporary transfer of control to X, because “CALL X” is literally what you write in a Fortran II program when you want to transfer control to the procedure named X.
Fortran’s own choice of the “CALL” mnemonic was an original neologism, inspired by the pre-existing use of “call (in/up)” as seen in the Mauchly and MANIAC II quotations but introducing wrinkles that had never been seen anywhere before Fortran.
By 1959, Algol had picked up “call” from Fortran. Algol’s “procedure statements” produced calls at runtime; a procedure could be called; during the call the procedure would perform its work.
By 1961, we see the first uses of the exact phrase “to call X.”
...
Read the original on quuxplusone.github.io »
A beautiful, non-destructive, and GPU-accelerated RAW image editor built with performance in mind.
RapidRAW is a modern, high-performance alternative to Adobe Lightroom®. It delivers a feature-rich, beautiful editing experience in a lightweight package (under 30MB) for Windows, macOS, and Linux.
I developed this project as a personal challenge at the age of 18. My goal was to create a high-performance tool for my own photography workflow while deepening my understanding of both React and Rust, with support from Google Gemini.
If you like the theme images and want to see more of my own images, check out my Instagram: @timonkaech.photography
As a photography enthusiast, I often found existing software to be sluggish and resource-heavy on my machine. Born from the desire for a more responsive and streamlined photo editing experience, I set out to build my own. The goal was to create a tool that was not only fast but also helped me learn the details of digital image processing and camera technology.
I set an ambitious goal to rapidly build a functional, feature-rich application from an empty folder. This personal challenge pushed me to learn quickly and focus intensely on the core architecture and user experience.
The foundation is built on Rust for its safety and performance, and Tauri for its ability to create lightweight, cross-platform desktop apps with a web frontend. The entire image processing pipeline is offloaded to the GPU via WGPU and a custom WGSL shader, ensuring that even on complex edits with multiple masks, the UI remains fluid.
I am immensely grateful for Google’s Gemini suite of AI models. As an 18-year-old without a formal background in advanced mathematics or image science, the AI Studio’s free tier was an invaluable assistant, helping me research and implement concepts like the Menon demosaicing algorithm.
While the core functionality is in place, I’m actively working on improving several key areas. Here’s a transparent look at the current focus:
RapidRAW features a two-tier approach to AI to provide both speed and power. It distinguishes between lightweight, integrated tools and heavy, optional generative features.
Built-in AI Masking: The core application includes lightweight, fast and open source AI models (SAM from Meta) for intelligent masking (e.g., Subject and Foreground selection). These tools run locally, are always available, and are designed to accelerate your standard editing workflow.
Optional Generative AI: For computationally intensive tasks like inpainting (Generative Replace), RapidRAW connects to an external ComfyUI backend. This keeps the main application small and fast, while offloading heavy processing to a dedicated, user-run server.
The Built-in AI Masking is fully functional for all users.
The Optional Generative AI features, however, currently require a manual setup of a ComfyUI backend. The official, easy-to-use Docker container is not yet provided.
This means the generative tools are considered a developer preview and are not ready for general, out-of-the-box use.
The initial work on generative AI focused on building a connection to the ComfyUI backend and implementing the first key features.
* Modular Backend: RapidRAW connects to a local ComfyUI server, which acts as the inference engine.
* Generative Replace (Inpainting): Users can paint a mask over an area of the image (or use the AI masking tool to create a precise selection) and provide a text prompt to fill that area with generated content.
* Non-Destructive Patches: Each generative edit is stored as a separate “patch” layer. These can be toggled, re-ordered, or deleted at any time, consistent with RapidRAW’s non-destructive philosophy.
This project began as an intensive sprint to build the core functionality. Here’s a summary of the initial progress and key milestones:
You have two options to run RapidRAW:
Grab the pre-built installer or application bundle for your operating system from the Releases page.
If you want to build the project yourself, you’ll need to have Rust and Node.js installed.
# 1. Clone the repository
git clone https://github.com/CyberTimon/RapidRAW.git
cd RapidRAW
# 2. Install frontend dependencies
npm install
# 3. Build and run the application in development mode
# Use --release for a build that runs much faster (image loading etc.)
npx tauri dev --release
Contributions are welcome and highly appreciated! Whether it’s reporting a bug, suggesting a feature, or submitting a pull request, your help makes this project better. Please feel free to open an issue to discuss your ideas.
A huge thank you to the following projects and tools that were very important in the development of RapidRAW:
* Google AI Studio: For providing amazing assistance in researching, implementing image processing algorithms and giving an overall speed boost.
* rawler: For the excellent Rust crate that provides the foundation for RAW file processing in this project.
As an 18-year-old developer balancing this project with an apprenticeship, your support means the world. If you find RapidRAW useful or exciting, please consider donating to help me dedicate more time to its development and cover any associated costs.
* Crypto:
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). I chose this license to ensure that RapidRAW and any of its derivatives will always remain open-source and free for the community. It protects the project from being used in closed-source commercial software, ensuring that improvements benefit everyone.
See the LICENSE file for more details.
...
Read the original on github.com »
Ruby 3.4 takes the first step in a multi-version transition to frozen string literals by default. Your Rails app will continue working exactly as before, but Ruby now provides opt-in warnings to help you prepare. Here’s what you need to know.
Ruby is implementing frozen string literals gradually over three releases:
Ruby 3.4 (Now): Opt-in warnings when you enable deprecation warnings
Ruby 3.7: Warnings enabled by default
Ruby 4.0: String literals frozen by default
By default, nothing changes. Your code runs exactly as before. But when you enable deprecation warnings:
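A minimal sketch of what that looks like (the file name and strings here are just for illustration):

# greeting.rb -- note there is no `# frozen_string_literal:` magic comment
Warning[:deprecation] = true   # or run: ruby -W:deprecation greeting.rb

message = "Hello"
message << ", world"           # mutating a string literal
# => warning along the lines of "literal string will be frozen in the future"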
Important: These warnings are opt-in. You won’t see them unless you explicitly enable them.
Performance improvements vary by application, but benchmarks have shown:
* Up to 20% reduction in garbage collection for string-heavy code
* Faster execution in hot paths that create many identical strings
For more Rails performance optimization strategies, check out our guide on finding the 20% of code causing 80% of performance problems.
The biggest impact won’t be your code - it’ll be your dependencies: gems that mutate string literals will generate most of the warnings, which is why updating them should come first.
Freezing string literals allows Ruby to:
* Reuse a single frozen object for identical literals instead of allocating a new string each time
* Reduce object allocations and garbage collection pressure
* Apply optimizations that are only safe when a string cannot change
The Ruby team is moving away from magic comments. For new code, write code that naturally works with frozen strings by treating all strings as immutable. See the Common Patterns to Fix section for techniques that work well with frozen strings.
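For example, a typical fix replaces in-place mutation of a string literal with an explicitly mutable string or with interpolation (a minimal sketch; `id` is just a stand-in for whatever value you are appending):

# Mutating a literal warns today and will raise FrozenError once literals are frozen:
label = "Order"
label << " ##{id}"

# Option 1: start from a mutable copy (unary + returns an unfrozen string)
label = +"Order"
label << " ##{id}"

# Option 2: build the value without mutating anything
label = "Order ##{id}"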
Don’t rush to remove magic comments - Files with the comment keep their current behavior
Fix warnings gradually - Use CI to track new warnings
Update gems first - Check for updates that fix string mutation warnings
Yes. Here’s why the transition is developer-friendly:
Nothing breaks by default - Your app runs exactly as before
Warnings are opt-in - You control when to see them
* Now (Ruby 3.4): Upgrade and run normally, enable warnings in development
* Before Ruby 3.7: Fix warnings at your own pace
* Ruby 3.7: Warnings become default, most issues should be fixed
Ruby 3.4’s opt-in warnings are the first step in a thoughtful, multi-version transition. Enable them when you’re ready, fix issues at your own pace, and prepare for better performance in Ruby 4.0.
...
Read the original on prateekcodes.dev »
Upgrading my Shopify email notification templates this morning, I asked Shopify’s LLM-powered developer documentation bot this question:
What’s the syntax, in Liquid, to detect whether an order in an email notification contains items that will be fulfilled through Shopify Collective?
I’d done a few traditional searches, but couldn’t find an answer, so I thought, okay, let’s try the doc bot!
The reply came, quick and clean:
{% if order.tags contains 'Shopify Collective' %}
  Some items in your order are being fulfilled by a partner merchant through Shopify Collective.
{% endif %}
Looks great. I added this code, but it didn’t work; the condition wasn’t being satisfied, even when orders had the necessary tag, visible on the admin screen.
Shopify doesn’t provide a way to test unconventional email formats without actually placing real orders, so I did my customary dance of order-refund, order-refund, order-refund. My credit card is going to get locked one of these days.
Turns out, the Shopify Collective tag isn’t present on the order at the time the confirmation email is generated. It’s added later, possibly just moments later, by some other cryptic Shopify process.
I don’t think this is documented anywhere, so in that sense, it’s not surprising the documentation bot got it wrong. My question is: what’s the point of a doc bot that just takes a guess?
More pointedly: is the doc bot docs, or not?
And very pointedly indeed: what are we even doing here??
I’ve sent other queries to the doc bot and received helpful replies; I’d categorize these basically as search: “How do I do X?” “What’s the syntax for Y?” But I believe this is a situation in which the cost of bad advice outweighs the benefit of quick help by 10X, at least. I can, in fact, figure out how to do X using the real docs. Only the doc bot can make things up.
If it was Claude making this kind of mistake, I’d be annoyed but not surprised. But this is Shopify’s sanctioned helper! It waits twinkling in the header of every page of the dev site. I suppose there are domains in which just taking a guess is okay; is the official documentation one of them?
I vote no, and I think a freestyling doc bot undermines the effort and care of the folks at Shopify who work — in my estimation very successfully — to produce documentation that is thorough and accurate.
P. S. In the end, I adapted the code from the section of the notification that lists the items in the order — a bit circuitous, but it works:
{% assign is_collective_order = false %}
{% for line_item in line_items %}
  {% if line_item.product.tags contains 'Shopify Collective' %}
    {% assign is_collective_order = true %}
  {% endif %}
{% endfor %}
The product-level Shopify Collective tag is available and reliable at the time the notification is generated.
To the blog home page
...
Read the original on www.robinsloan.com »