10 interesting stories served every morning and every evening.




1 536 shares, 25 trendiness

Court nullifies “click-to-cancel” rule that required easy methods of cancellation

FTC failed to follow rulemaking process required by US law, judges rule.

A federal appeals court today struck down a “click-to-cancel” rule that would have required companies to make cancelling services as easy as signing up. The Federal Trade Commission rule was scheduled to take effect on July 14 but was vacated by the US Court of Appeals for the 8th Circuit.

A three-judge panel ruled unanimously that the Biden-era FTC, then led by Chair Lina Khan, failed to follow the full rulemaking process required under US law. “While we certainly do not endorse the use of unfair and deceptive practices in negative option marketing, the procedural deficiencies of the Commission’s rulemaking process are fatal here,” the ruling said.

Indicating their sympathy with the FTC’s motivations, judges wrote that “many Americans have found themselves unwittingly enrolled in recurring subscription plans, continuing to pay for unwanted products or services because they neglected to cancel their subscriptions.” Last year, the FTC updated its 1973 Negative Option Rule by adding provisions that “bar sellers from misrepresenting material facts and require disclosure of material terms, express consumer consent, and a simple cancellation mechanism,” the ruling said.

The FTC is required to conduct a preliminary regulatory analysis when a rule has an estimated annual economic effect of $100 million or more. The FTC estimated in a Notice of Proposed Rulemaking (NPRM) that the rule would not have a $100 million effect.

But an administrative law judge later found that the rule’s impact surpassed the threshold, observing that compliance costs would exceed $100 million “unless each business used fewer than twenty-three hours of professional services at the lowest end of the spectrum of estimated hourly rates,” the 8th Circuit ruling said. Despite the administrative law judge’s finding, the FTC did not conduct a preliminary regulatory analysis and instead “proceeded to issue only the final regulatory analysis alongside the final Rule,” the judges’ panel said.

Summarizing the FTC’s arguments, judges said the agency contended that US law “did not require the Commission to conduct the preliminary regulatory analysis later in the rulemaking process,” and that any alleged error was harmless because the NPRM “addressed alternatives to the proposed amendments to the 1973 [Negative Option] Rule and analyzed record-keeping and compliance costs.”

Judges disagreed with the FTC, writing that “the statutory language, ‘shall issue,’ mandates a separate preliminary analysis for public review and comment ‘in any case’ where the Commission issues a notice of proposed rulemaking and the $100 million threshold is surpassed.”

Numerous industry groups and businesses, including cable companies, sued the FTC in four federal circuit courts. The cases were consolidated at the 8th Circuit, where the matter was decided by Circuit Judges James Loken, Ralph Erickson, and Jonathan Kobes. Loken was appointed by George H. W. Bush, while Erickson and Kobes are Trump appointees.

The judges said the lack of a preliminary analysis meant that industry groups and businesses weren’t given enough time to contest the FTC’s findings:

By the time the final regulatory analysis was issued, Petitioners still did not have the opportunity to assess the Commission’s cost-benefit analysis of alternatives, an element of the preliminary regulatory analysis not required in the final analysis. And the Commission’s discussion of alternatives in the final regulatory analysis was perfunctory. It briefly mentioned two alternatives to the final Rule, either terminating the rulemaking altogether and continuing to rely on the existing regulatory framework or limiting the Rule’s scope to negative option plans marketed in-person or through the mail. While the Commission’s decision to bypass the preliminary regulatory analysis requirement was certainly not made in bad faith or “an outright dodge of APA [Administrative Procedure Act] procedures,” Petitioners have raised enough uncertainty whether ‘[their] comments would have had some effect if they had been considered,’ especially in the context of a closely divided Commission vote that elicited a lengthy dissenting statement.

The 8th Circuit ruling said the FTC’s tactics, if not stopped, could open the door to future manipulation of the rulemaking process. “Furnishing an initially unrealistically low estimate of the economic impacts of a proposed rule would avail the Commission of a procedural shortcut that limits the need for additional public engagement and more substantive analysis of the potential effects of the rule on the front end.”

The FTC issued the proposal in March 2023 and voted 3-2 to approve the rule in October 2024, with Republican Commissioners Melissa Holyoak and Andrew Ferguson voting against it. Ferguson is now chairman of the FTC, which has consisted only of Republicans since Trump fired the two Democrats who remained after Khan’s departure.

At the time of the vote, Holyoak’s dissenting statement accused the majority of hurrying to finalize the rule before the November 2024 election and warned that the new regulation “may not survive legal challenge.” Holyoak also argued that the rule is too broad, saying it is “nothing more than a back-door effort at obtaining civil penalties in any industry where negative option is a method to secure payment.”

Khan’s announcement of the now-vacated rule said that “too many businesses make people jump through endless hoops just to cancel a subscription. The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”


...

Read the original on arstechnica.com »

2 409 shares, 32 trendiness

Tree Borrows

The Rust programming language is well known for its ownership-based type system, which offers strong guarantees like memory safety and data race freedom. However, Rust also provides unsafe escape hatches, for which safety is not guaranteed automatically and must instead be manually upheld by the programmer. This creates a tension. On the one hand, compilers would like to exploit the strong guarantees of the type system—particularly those pertaining to aliasing of pointers—in order to unlock powerful intraprocedural optimizations. On the other hand, those optimizations are easily invalidated by “badly behaved” unsafe code. To ensure correctness of such optimizations, it thus becomes necessary to clearly define what unsafe code is “badly behaved.” In prior work, Stacked Borrows defined a set of rules achieving this goal. However, Stacked Borrows rules out several patterns that turn out to be common in real-world unsafe Rust code, and it does not account for advanced features of the Rust borrow checker that were introduced more recently.

To resolve these issues, we present Tree Borrows. As the name suggests, Tree Borrows is defined by replacing the stack at the heart of Stacked Borrows with a tree. This overcomes the aforementioned limitations: our evaluation on the 30 000 most widely used Rust crates shows that Tree Borrows rejects 54% fewer test cases than Stacked Borrows does. Additionally, we prove (in Rocq) that it retains most of the Stacked Borrows optimizations and also enables important new ones, notably read-read reorderings.
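As a rough illustration of the kind of optimization at stake (a sketch, not an example from the paper): because Rust's shared references promise that the pointees are not being mutated, an aliasing model that permits read-read reorderings lets the compiler swap the two loads below.

fn sum(x: &u64, y: &u64) -> u64 {
    let a = *x; // first read
    let b = *y; // second read; may legally be reordered before the first
    a + b
}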

...

Read the original on plf.inf.ethz.ch »

3 367 shares, 23 trendiness

Ikea goes all in on Matter smart homes

Ikea is relaunching its smart home line in a move that will make its low-cost products work with other brands, with or without Ikea’s own hub. Starting in January, the Swedish furniture giant will release more than 20 new Matter-over-Thread smart lights, sensors, and remotes, “with more new product types and form factors to come,” David Granath of Ikea of Sweden tells The Verge in an exclusive interview.

Ikea is also rebooting its audio offerings as it seeks to fill the Sonos Symfonisk-shaped hole on its shelves. The first two models in the new line of inexpensive, easy-to-use Bluetooth speakers for the home are the $50 retro-style Nattbad and a speaker-slash-table-lamp, the Blomprakt, coming in October, with many more on the way.

These new products are part of the company’s ongoing effort to make its smart home as simple and affordable for as many people as possible. “A couple of years ago, we made some strategic decisions about how to move on with the smart range and the speaker range and make an impact in an Ikea way for the many people,” says Granath. He points to the company’s learnings from working with Zigbee and Sonos over the last few years, as well as its involvement in founding and developing the new smart home standard, Matter. “We feel we’ve reached that point. There’s a lot coming, but this is all the first step, getting things in place.”

Last week, Ikea released an update, currently in beta, to its Dirigera smart home hub that turns the hub into a Matter Controller and activates its long-dormant Thread radio, making it a Thread border router. This means it can now connect to and control any compatible Matter device, including those from other brands, and control them in its Home Smart app. It will also work with Ikea’s new Matter devices when they arrive, which will eventually replace its existing Zigbee devices, says Granath. It’s a major step toward a more open, plug-and-play smart home.

We don’t have a lot of details on the over 20 new devices coming next year, but Granath confirmed that they are replacing existing functions. So, new smart bulbs, plugs, sensors, remotes, buttons, and air-quality devices, including temperature and humidity monitors. They will also come with a new design. “Although not necessarily what’s been leaked,” says Granath, referring to images of the Bilresa Dual Button that appeared earlier this year.

He did confirm that some new product categories will arrive in January, with more to follow in April and beyond, including potentially Matter-over-Wi-Fi products. Pricing will be comparable to or lower than that of previous products, which start under $10. “Affordability remains a key priority for us.”

“The premium to make a product smart is not that high anymore, so you can expect new product types and form factors coming,” he says. “Matter unlocks interoperability, ease of use, and affordability for us. The standardization process means more companies are sharing the workload of developing for this.”

Despite the move away from Zigbee, Ikea is keeping Zigbee’s Touchlink functionality. This point-to-point protocol allows devices to be paired directly to each other and work together out of the box, without an app or hub — such as the bulb and remote bundles Ikea sells.

Granath says Ikea’s goal is for customers to get the best value from their products — it doesn’t matter whether that’s with Apple Home, with their hub, or without any hub. This is why the company is embracing Matter’s open approach. “We want to remove barriers to complexity, we want it to be simple to use, and we just want it to work,” he says. “If you want the most user-friendly system, choose ours. But if you’re an Apple user, take our bulb and onboard it to your Apple Home.”

This reboot positions Ikea as one of the first major retailers to bring Matter to the mainstream market, a potentially risky move as Matter has struggled with fragmentation, slow adoption, and other issues since its launch. But Granath is confident that it’s the right move. “Ikea is a good catalyst for the mass market, as we’re not aiming for the techy people; we can make it affordable and easy enough for the many people.”

So far, Ikea has taken a slow and steady approach to the technology, waiting for some of the kinks to be ironed out before unleashing it on Ikea customers who expect things to be simple and to just work. “We don’t want people to have a bad experience — it’s been about timing. We’ve been waiting to find the balance of potential and user experience,” says Granath.

For Ikea, that time is here, and Granath says the team has done a lot of work to get the tech ready. But while Matter has undergone significant improvements recently, it’s yet to be fully proven in the mainstream. Is it really ready for such a big splash, I ask? “We definitely hope so,” says Granath.

...

Read the original on www.theverge.com »

4 293 shares, 16 trendiness

Most RESTful APIs aren’t really RESTful

When talking about REST, it is worth reading the dissertation of Roy Thomas Fielding. The original paper that describes the RESTful web, “Architectural Styles and the Design of Network-based Software Architectures” by Roy T. Fielding (2000), introduces the Representational State Transfer (REST) architectural style as a framework for designing scalable, performant, and maintainable networked systems, particularly web services.

The paper aims to analyze architectural styles for network-based systems, identifying their strengths and weaknesses. It defines REST as a specific architectural style optimized for the modern web, focusing on scalability, simplicity, and adaptability.

Fielding demonstrates how REST principles shape the success of the web, advocating for its adoption in designing distributed systems with universal, stateless interfaces and clear resource-based interactions.

In his dissertation he does not prescribe the specific use of HTTP verbs (like GET, POST, PUT, DELETE) or focus on CRUD-style APIs as REST is often implemented today.

The dissertation clearly describes REST as an architectural style that provides principles and constraints for building network-based applications, using the web as its foundational example.

Roy Fielding has explicitly criticized the oversimplification of REST in CRUD-style APIs, emphasizing that many so-called “RESTful” APIs fail to implement key REST constraints, particularly the use of hypermedia for driving application state transitions. In his 2008 blog post, “REST APIs must be hypertext-driven,” Fielding states:

“If the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.” Source

This underscores his view that without hypermedia controls, an API does not fulfill the REST architectural style. Hypermedia controls are elements embedded in a resource representation that guide clients on what actions they can take next.

Common misconceptions about what people consider REST include, for example:

REST is CRUD (often it is, but not always).

A resource is an entity (often meaning a persistence entity).

A RESTful API must not use verbs.

But those are actually design decisions made for the API at hand, chosen by its creators, and they have nothing to do with REST.

What Does “Driven by Hypertext” Mean?

The phrase “not being driven by hypertext” in Roy Fielding’s criticism refers to the absence of Hypermedia as the Engine of Application State (HATEOAS) in many APIs that claim to be RESTful. HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).

The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.

Simply implementing HATEOAS will bring you closer to RESTful principles than debating whether verbs are allowed in your API.

Often people argue about what a resource in REST is. I’ve fairly commonly seen people express the opinion that a resource is a data structure coming from the server, unfortunately often even equated with a persistence entity.

Let’s check what Fielding says about it:

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. “today’s weather in Los Angeles”), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author’s hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.

Semantics are a by-product of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI — they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.

Now let’s take a look at RFC 3986:

This specification does not limit the scope of what might be a resource; rather, the term “resource” is used in a general sense for whatever might be identified by a URI. Familiar examples include an electronic document, an image, a source of information with a consistent purpose (e.g., “today’s weather report for Los Angeles”), a service (e.g., an HTTP-to-SMS gateway), and a collection of other resources. A resource is not necessarily accessible via the Internet; e.g., human beings, corporations, and bound books in a library can also be resources. Likewise, abstract concepts can be resources, such as the operators and operands of a mathematical equation, the types of a relationship (e.g., “parent” or “employee”), or numeric values (e.g., zero, one, and infinity).

The following examples of URIs are taken from the RFC as well:
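ftp://ftp.is.co.za/rfc/rfc1808.txt
http://www.ietf.org/rfc/rfc2396.txt
ldap://[2001:db8::7]/c=GB?objectClass?one
mailto:John.Doe@example.com
news:comp.infosystems.www.servers.unix
tel:+1-816-555-1212
telnet://192.0.2.16:80/
urn:oasis:names:specification:docbook:dtd:xml:4.1.2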

A resource can be virtually anything that can be addressed by a URI. It can be a physical object, a concept, a document, a service, or even a virtual or abstract thing — as long as it can be uniquely identified and represented.

I am getting frustrated by the number of people calling any HTTP-based interface a REST API.

Fielding describes six rules that you should consider before calling your API a RESTful API (Source).

API designers, please note the following rules before calling your creation a REST API:

A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]

A REST API should not contain any changes to the communication protocols aside from filling-out or fixing the details of underspecified bits of standard protocols, such as HTTP’s PATCH method or Link header field. Workarounds for broken implementations (such as those browsers stupid enough to believe that HTML defines HTTP’s method set) should be defined separately, or at least in appendices, with an expectation that the workaround will eventually be obsolete. [Failure here implies that the resource interfaces are object-specific, not generic.]

A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of-band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling.]

A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names. [ditto]

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

But what does all of that exactly mean? To be honest, without spending some time and thought I didn’t comprehend it either the first time I read it.

This is my understanding of them; feel free to disagree, and let’s have a conversation! I’m curious to learn a different point of view or opinion about them.

A REST API uses Uniform Resource Identifiers (URIs) to name things. A URL (http://…) is just one specific type of URI that also includes a location. The key principle here is that a resource’s fundamental identity should be separate from its access mechanism.

A URI (Uniform Resource Identifier) is a broad concept that refers to any string used to identify a resource, while a URL (Uniform Resource Locator) is a specific type of URI that not only identifies a resource but also provides a means to locate it by describing its primary access mechanism (such as its network location).

Your API should work with any URI and not rely on HTTP-specific mechanisms.

Stick to existing standards (like HTTP). If something in the standard is vague, clarify it. Don’t invent new behaviors or break existing ones.

Don’t hack or redefine how HTTP works. Fix the gaps, but don’t change the rules.

If HTTP doesn’t fully define how PATCH should work, you can clarify it. But don’t redefine GET to also delete data just because it’s convenient.

Your API should define how to understand and use the data it returns — through media types (like JSON, HTML) and links — not focus on the structure of URIs or what actions to call.

Put your energy into designing the format of the data and the links inside it, not into documenting what URLs to hit.

Instead of documenting that you must POST to /users/123/activate, your API should return a user representation in a hypermedia-aware format (like application/hal+json or a custom type like application/vnd.myapp.user+json).

The client code doesn’t know about the /users/123/activate path. It only knows that the media type defines an “activate” link relation, and it uses the href and method provided in the response to perform the action.
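For instance, such a response might look like this (a sketch following HAL conventions; the field names and the method hint are illustrative):

{
  "id": 123,
  "status": "inactive",
  "_links": {
    "self": { "href": "/users/123" },
    "activate": { "href": "/users/123/activate", "method": "POST" }
  }
}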

This rule is the direct consequence of Rule #3. Clients shouldn’t assume or hardcode paths like /users/123/posts. Instead, they should discover URIs through links provided by the server. Clients should learn about URIs dynamically.

A client shouldn’t assume that a user’s posts are at /users/123/posts. It should read a link like this from the resource:
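{
  "_links": {
    "posts": { "href": "/users/123/posts" }
  }
}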

The server’s internal classification of a resource (e.g., User vs. Admin) must be entirely irrelevant and invisible to the client.

The client shouldn’t care what kind of entity a resource represents internally (like User, Admin, Moderator). It should care only about the media type (like application/json) and the links/actions it sees.

Don’t expose internal object types or roles. Just send a well-structured response with useful links. Don’t require the client to know that a resource is of type Admin. Just give them a consistent application/json response with relevant links and data.

By “standardized relation names” he refers to the link relations registered with the IANA.

The client should only need one starting point (e.g., a base URL, a bookmark) and a knowledge of standard media types. Everything else — what to do, where to go — should come from the server responses.

Clients should follow links like browsing a website — starting from the home page and clicking through, not hardcoding paths.

Start with https://api.example.com/ and follow the _links in each response:
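{
  "_links": {
    "users": { "href": "/users" },
    "posts": { "href": "/posts" }
  }
}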

In practice, very few APIs adhere to these principles. The next section examines why.

Why aren’t most APIs truly RESTful?

The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: the ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams. These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was, and probably still is, often seen as “good enough,” making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.

Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier. It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
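Yet the dynamic client is only a few lines longer. A sketch (the endpoint and relation names are hypothetical):

const user = await fetch("https://api.example.com/users/123", {
  headers: { Accept: "application/hal+json" },
}).then((r) => r.json());

// The "orders" URI is discovered from the representation, not hardcoded.
const orders = await fetch(user._links.orders.href).then((r) => r.json());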

In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.

Fielding’s rules emphasize that a truly RESTful API should embrace hypermedia (HATEOAS) as the central mechanism for interaction, not just use HTTP as a transport. REST is protocol-independent at its core; HTTP is simply a convenient way of using it. Clients should discover and navigate resources dynamically through links and standardized relations embedded in representations — not rely on hardcoded URI structures, types, or external documentation.

This makes REST systems loosely coupled, evolvable, and aligned with how the web itself operates: through representation, discovery, and transitions. REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.

Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in this article and instead say “HTTP-based APIs.” Build whatever makes sense for your project and the consumers of your API. Ignore the dogmatists who preach what RESTful APIs might be and what not. An API should be easy to learn and hard to misuse in the first place. If it fulfills those criteria, it doesn’t matter whether it is RESTful or not. Follow Postel’s Law if it makes sense for your case: “Be conservative in what you do, be liberal in what you accept from others.”

Who is the consumer of your API? How easy will it be for them to learn and use the API? Will it be intuitive to use? What are possible constraints? How do you version it? Deprecation and sunsetting strategies? How are changes to the API effectively communicated to consumers? Those things are much more valuable than the actual format of your resource identifier.

By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically. This can support generic or self-adapting clients. If that aligns with your goals (e.g., client decoupling, evolvability, dynamic interaction), then it’s a valid and powerful design choice. If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.


...

Read the original on florian-kraemer.net »

5 268 shares, 4 trendiness

Bulgaria to join euro area on 1 January 2026

Today the Council of the European Union formally approved the accession of Bulgaria to the euro area on 1 January 2026 and determined a Bulgarian lev conversion rate of 1.95583 per euro. This is the current central rate of the lev in the Exchange Rate Mechanism (ERM II), which the currency joined on 10 July 2020. The European Central Bank (ECB) and Българска народна банка (Bulgarian National Bank) agreed to monitor developments in the Bulgarian lev against the euro on the foreign exchange market until 1 January 2026.

With the entry into force of the close cooperation framework between the ECB and Българска народна банка (Bulgarian National Bank), the ECB has been responsible for directly supervising four significant institutions and overseeing 13 less significant institutions in Bulgaria since 1 October 2020. The agreement to monitor the lev is in the context of ERM II. Participation in ERM II and observance of the normal fluctuation margins for at least the last two years is one of the convergence criteria to be fulfilled ahead of euro area accession. The conversion rate of the lev is set by way of an amendment to Regulation (EC) No 2866/98, which will become effective on 1 January 2026.
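At that fixed rate, conversion is simple arithmetic: a price of 100 levs, for example, will exchange for 100 / 1.95583 ≈ €51.13.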


...

Read the original on www.ecb.europa.eu »

6 266 shares, 14 trendiness

Astro is a developer’s f***ing dream — Websmith Studio

After migrating several projects from WordPress to Astro, I’ve become a massive fan of this framework.

Astro is a web framework that came out in 2021 and immediately felt different. While most JavaScript frameworks started with building complex applications and then tried to adapt to simpler sites, Astro went the opposite direction. It was built from day one for content-focused websites.

The philosophy is refreshingly simple. Astro believes in being content-driven and server-first, shipping zero JavaScript by default (yes, really), while still being easy to use with excellent tooling. It’s like someone finally asked, “What if we built a framework specifically for the types of websites most of us actually make?”

Astro introduced something called “Island Architecture,” and once you understand it, you’ll wonder why we’ve been doing things any other way.

Traditional frameworks hydrate entire pages with JavaScript. Even if you’ve got a simple blog post with one interactive widget, the whole page gets the JavaScript treatment. Astro flips this on its head. Your pages are static HTML by default, and only the bits that need interactivity become JavaScript “islands.”

Picture a blog post with thousands of words of content. In Astro, all that text stays as pure HTML. Only your comment section or image carousel loads JavaScript. Everything else remains lightning fast. It’s brilliantly simple.
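A sketch of what that looks like in a page component (Carousel is a hypothetical React component; the client:load directive is what marks it as an island):

---
import Carousel from '../components/Carousel.jsx';
---
<article>
  <h1>My post</h1>
  <p>Thousands of words of pure, static HTML…</p>
  <Carousel client:load />
</article>

Everything in the template ships as plain HTML; only the carousel is hydrated in the browser.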

Astro sites are fast: we’re talking 40% faster load times compared to traditional React frameworks. But here’s the thing: this isn’t just about impressing other developers. These performance gains translate directly to better search rankings, happier users, and yes, more conversions. On slower devices or dodgy mobile connections, the difference is even more dramatic.

The developer experience in Astro feels like someone actually thought about how we work. Setting up a new project is straightforward, and you’re guided through the process by Houston, their friendly setup assistant.

What I love most about Astro components is how they just make sense:

---
// This runs at build time
const { title } = Astro.props;
const posts = await fetchPosts();
---

See that code fence at the top? That runs at build time, not in the browser. Your data fetching, your logic - it all happens before the user even loads the page. You get brilliant TypeScript support without any of the complexity of hooks, state management, or lifecycle methods.

With Astro you’re not locked into a single way of doing things. Need React for a complex form? Chuck it in. Prefer Vue for data visualisation? Go for it. Want to keep most things as simple Astro components? Perfect.

Astro allows you to use React for the admin dashboard components, Vue for some interactive charts, and keep everything else as vanilla Astro. It all just works together seamlessly.

Markdown support in Astro isn’t bolted on as an afterthought. You can import Markdown files directly and use them like components:

import { Content, frontmatter } from "../content/post.md";

The build pipeline is modern and complete. TypeScript works out of the box, Sass compilation is built in, images get optimised automatically with Astro’s <Image /> tag, and you get hot module replacement during development. No setting up Webpack configs or fighting with build tools.

You also get flexibility in how pages are rendered. Build everything statically for maximum speed, render on the server for dynamic content, or mix both approaches in the same project. Astro adapts to what you need.

I’ve found Astro perfect for marketing sites, blogs, e-commerce catalogues, and portfolio sites. Basically, anywhere content is the hero and you don’t need complex client-side state management, Astro excels.

Astro isn’t the answer to everything. If you’re building a complex single-page application (SPA) with lots of client-side routing, need ISR (hello Next.js), or need heavy state management across components, you’ll probably want something else like Next.js.

The ecosystem is growing but still very small in comparison to something like Next.js. File-based routing can feel constraining on larger projects (though some people love it).

# Create a project
npm create astro@latest my-site
cd my-site

# Add a framework if you want
npx astro add react

# Start developing
npm run dev

Put your pages in src/pages/, components in src/components/, and you’re ready to build something great.

After years of JavaScript frameworks getting more complex, Astro feels like a breath of fresh air. It’s a return to the fundamentals of the web - fast, accessible, content-first experiences - but with all the modern developer conveniences we’ve come to expect.

What struck me most after migrating several projects is how Astro makes the right thing the easy thing. Want a fast site? That’s the default. Want to add interactivity? Easy, but only where you need it. Want to use your favourite framework? Go ahead, Astro won’t judge.

If you’re building anything content-focused, from a simple blog to a full e-commerce site, give Astro a serious look. Your users will get a faster experience, you’ll enjoy the development process, and your Core Web Vitals will be spectacular.

Note - The website you’re reading this blog from is built with Astro.

...

Read the original on websmith.studio »

7 258 shares, 10 trendiness

Why do we “call” functions?

On StackExchange, someone asks why programmers talk about “calling” a function. Several possible allusions spring to mind:

* Calling a function is like calling on a friend — we go, we stay a while, we come back.

* Calling a function is like calling for a servant — a summoning to perform a task.

* Calling a function is like making a phone call — we ask a question and get an answer from outside ourselves.

The true answer seems to be the middle one — “calling” as in “calling up, summoning” — but indirectly, originating in the notion of “calling for” a subroutine out of a library of subroutines in the same way that we’d “call for” a book out of a closed-stack library of books.

The OED’s first citation for “call number” in the library-science sense comes from Melvil Dewey (yes, that Dewey) in 1876. The OED defines it as:

A mark, esp. a number, on a library book, or listed in a library’s catalogue, indicating the book’s location in the library; a book’s press mark or shelf mark.

I see librarians using the term “call-number” in The Library Journal 13.9 (1888) as if it was very well established already by that point:

Mr. Davidson read a letter from Mr. A. W. Tyler […] enclosing sample of the new call blank used at the Plainfield (N.J.) P. L., giving more room for the signature and address of the applicant. […] In connection with Mr. Tyler’s new call slip […] “I always feel outraged when I make up a long list of call numbers in order to make sure of a book, and then the librarian keeps the list, and the next time I have it all to do over again.”

According to The Organization of Information, 4th ed. (Joudrey & Taylor, 2017):

Call number. A notation on a resource that matches the same notation in the metadata description and is used to identify and locate the item; it often consists of a classification notation and a cutter number, and it may also include a workmark and/or a date. It is the number used to “call” for an item in a closed-stack library; thus the source of the name “call number.”

Cutter number. A designation with the purpose of alphabetizing all works that have exactly the same classification notation. Named for Charles Ammi Cutter, who devised such a scheme, but spelled with a small c when referring to another such table that is not Cutter’s own.

John W. Mauchly’s article “Preparation of problems for EDVAC-type machines” (1947) uses the English word “call” only twice, yet this seems to be an important early attestation of the word in the context of a “library” of computer subroutines:

Important questions for the users of a machine are: How easily can reference be made to any of the subroutines? How hard is it to initiate a subroutine? What conditions can be used to terminate a subroutine? And with what facility can control revert to any part of the original sequence or some further sequence […] Facilities for conditional and other transfers to subroutines, transfers to still further subroutines, and transfers back again, are certain to be used frequently.

[…] the position in the memory at which arguments are placed can be standardized, so that whenever a subroutine is called in to perform a calculation, the subroutine will automatically know that the argument which is to be used is at a specified place.

[…] Some of them might be written out in a handbook and transferred to the coding of the problem as needed, but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.

[…] One of the problems which must be met in this case is the method of withdrawal from the library and of compilation in the proper sequence for the particular problem. […] It is possible […] to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use […] all one needs to do is make brief reference to them by number, as they are indicated in the coding.

The manual for the MANIAC II “assembly routine” (January 1956) follows Mauchly’s sketch pretty closely. MANIAC II has a paper-tape “library” of subroutines which can be summoned up (by the assembler) to become part of a fully assembled program, and in fact each item in the “library” has an identifying “call number,” just like every book in a real library has a call number:

The assembly routine for Maniac II is designed to translate descriptive code into absolute code. […] The bulk of the descriptive tape consists of a series of instructions, separated, by control words, into numbered groups called boxes [because flowcharts: today we’d say “basic blocks”]. The allowed box numbers are 01 through EF[. …] If the address [in an instruction’s address field] is FXXX, then the instruction must be a transfer, and the transfer is to the subroutine whose call number is XXX. The most common subroutines are on the same magnetic tape as the assembly routine, and are brought in automatically. For other subroutines, the assembly routine stops to allow the appropriate paper tapes to be put into the photoreader.

Notice that the actual instruction (or “order”) in MANIAC II is still known as “TC,” “transfer control,” and the program’s runtime behavior is known as a transfer of control, not yet as a call. The “calling” here is not the runtime behavior but rather the calling-up of the coded subroutine (at assembly time) to become part of the fully assembled program.

Fortran II (1958; also here) introduced CALL and RETURN statements, with this description:

The additional facilities of FORTRAN II effectively enable the programmer to expand the language of the system indefinitely. […] Each [CALL statement] will constitute a call for the defining subprogram, which may carry out a procedure of any length or complexity […]

[The CALL] statement causes transfer of control to the subroutine NAME and presents the subroutine with the arguments, if any, enclosed in parentheses. […] A subroutine introduced by a SUBROUTINE statement is called into the main program by a CALL statement specifying the name of the subroutine. For example, the subroutine introduced by

could be called into the main program by the statement

Notice that Fortran II still describes the runtime behavior as “transfer of control,” but as the computer language becomes higher-level the English starts to blur and conflate the runtime transfer-of-control behavior with the assembly- or link-time “calling-in” behavior.

In Robert I. Sarbacher’s Encyclopedic dictionary of electronics and nuclear engineering (1959), the entry for Subroutine doesn’t use the word “call,” but Sarbacher does seem to be reflecting a mental model somewhere inside the union of Mauchly’s definition and Fortran II’s.

Call in. In computer programming, the transfer of control of a computer from a main routine to a subroutine that has been inserted into the sequence of calculating operations to perform a subsidiary operation.

Call number. In computer programming, a set of characters used to identify a subroutine. They may include information related to the operands, or may be used to generate the subroutine.

Call word. In computer programming, a call number exactly the length of one word.

Notice that Sarbacher defines “call in” as the runtime transfer of control itself; that’s different from how the Fortran II manual used the term. Maybe Sarbacher was accurately reflecting an actual shift in colloquial meaning that had already taken place between 1958 and 1959 — but personally I think he might simply have goofed it. (Sarbacher was a highly trained physicist, but not a computer guy, as far as I can tell.)

“JOVIAL: A Description of the Language” (February 1960) says:

A procedure call [today we’d say “call site”] is the link from the main program to a procedure. It is the only place from which a procedure may be entered.

An input parameter [today we’d say “argument”] is an arithmetic expression specified in the procedure call which represents a value on which the procedure is to operate[.] A dummy input parameter is an item specified in the procedure declaration which represents a value to be used by the procedure as an input parameter.

One or more Procedure Calls (of other procedures) may appear within a procedure. At present, only four “levels” of calls may exist.

That JOVIAL manual mentions not only the “procedure call” (the syntax for transferring control to a procedure declaration) but also the “SWITCH call” (the syntax for transferring control to a switch-case label). That is, JOVIAL (1960) has fully adopted the noun “call” to mean “the syntactic indicator of a runtime transfer of control.” However, JOVIAL never uses “to call” as a verb.

A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process […] The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters […] give complete information concerning the admissibility of parameters used in any procedure call[.]

Peter Naur’s “Algol 60 Report” (May 1960) avoids the verb “call,” but in a new development casually uses the noun “call” to mean the period during which the procedure itself is “working” — not the transfer of control but the period between the transfers in and out:

A procedure statement serves to invoke (call for) the execution of a procedure body. […] [When passing an array parameter, if] the formal parameter is called by value the local array created during the call will have the same subscript bounds as the actual array.

Finally, “Burroughs Algebraic Compiler: A representation of ALGOL for use with the Burroughs 220 data-processing system” (1961) attests a single (definitionary) instance of the preposition-less verb “call”:

The ENTER statement is used to initiate the execution of a subroutine (to call a subroutine).

The usage in “Advanced Computer Programming: A case study of a classroom assembly program” (Corbató, Poduska, & Saltzer, 1963) is entirely modern: “It is still convenient for pass one to call a subroutine to store the cards”; “In order to call EVAL, it is necessary to save away temporary results”; “the subroutine which calls PASS1 and PASS2”; etc.

Therefore my guesses at the moment are:

Fortran II (1958) rapidly popularized the phrasing “to call X” for the temporary transfer of control to X, because “CALL X” is literally what you write in a Fortran II program when you want to transfer control to the procedure named X.

Fortran’s own choice of the CALL mnemonic was an original neologism, inspired by the pre-existing use of “call (in/up)” as seen in the Mauchly and MANIAC II quotations but introducing wrinkles that had never been seen anywhere before Fortran.

By 1959, Algol had picked up “call” from Fortran. Algol’s “procedure statements” produced calls at runtime; a procedure could be called; during the call the procedure would perform its work.

By 1961, we see the first uses of the exact phrase “to call X.”

...

Read the original on quuxplusone.github.io »

8 254 shares, 11 trendiness

CyberTimon/RapidRAW: A beautiful, non-destructive, and GPU-accelerated RAW image editor built with performance in mind.

A beautiful, non-destructive, and GPU-accelerated RAW image editor built with performance in mind.

RapidRAW is a modern, high-performance alternative to Adobe Lightroom®. It delivers a feature-rich, beautiful editing experience in a lightweight package (under 30MB) for Windows, macOS, and Linux.

I developed this project as a personal challenge at the age of 18. My goal was to create a high-performance tool for my own photography workflow while deepening my understanding of both React and Rust, with support from Google Gemini.

If you like the theme images and want to see more of my own images, check out my Instagram: @timonkaech.photography

As a photography enthusiast, I often found existing software to be sluggish and resource-heavy on my machine. Born from the desire for a more responsive and streamlined photo editing experience, I set out to build my own. The goal was to create a tool that was not only fast but also helped me learn the details of digital image processing and camera technology.

I set an ambitious goal to rapidly build a functional, feature-rich application from an empty folder. This personal challenge pushed me to learn quickly and focus intensely on the core architecture and user experience.

The foundation is built on Rust for its safety and performance, and Tauri for its ability to create lightweight, cross-platform desktop apps with a web frontend. The entire image processing pipeline is offloaded to the GPU via WGPU and a custom WGSL shader, ensuring that even on complex edits with multiple masks, the UI remains fluid.

I am immensely grateful for Google’s Gemini suite of AI models. As an 18-year-old without a formal background in advanced mathematics or image science, the AI Studio’s free tier was an invaluable assistant, helping me research and implement concepts like the Menon demosaicing algorithm.

While the core functionality is in place, I’m actively working on improving several key areas. Here’s a transparent look at the current focus:

RapidRAW features a two-tier approach to AI to provide both speed and power. It distinguishes between lightweight, integrated tools and heavy, optional generative features.

Built-in AI Masking: The core application includes lightweight, fast, open-source AI models (SAM from Meta) for intelligent masking (e.g., Subject and Foreground selection). These tools run locally, are always available, and are designed to accelerate your standard editing workflow.

Optional Generative AI: For computationally intensive tasks like inpainting (Generative Replace), RapidRAW connects to an external ComfyUI backend. This keeps the main application small and fast, while offloading heavy processing to a dedicated, user-run server.

The Built-in AI Masking is fully functional for all users.

The Optional Generative AI features, however, currently require a manual setup of a ComfyUI backend. The official, easy-to-use Docker container is not yet provided.

This means the generative tools are considered a developer preview and are not ready for general, out-of-the-box use.

The initial work on generative AI focused on building a connection to the ComfyUI backend and implementing the first key features.

* Modular Backend: RapidRAW connects to a local ComfyUI server, which acts as the inference engine.

* Generative Replace (Inpainting): Users can paint a mask over an area of the image (or use the AI masking tool to create a precise selection) and provide a text prompt to fill that area with generated content.

* Non-Destructive Patches: Each generative edit is stored as a separate “patch” layer. These can be toggled, re-ordered, or deleted at any time, consistent with RapidRAW’s non-destructive philosophy.

This project began as an intensive sprint to build the core functionality. Here’s a summary of the initial progress and key milestones:

You have two options to run RapidRAW:

Grab the pre-built installer or application bundle for your operating system from the Releases page.

If you want to build the project yourself, you’ll need to have Rust and Node.js installed.

# 1. Clone the repository
git clone https://github.com/CyberTimon/RapidRAW.git
cd RapidRAW

# 2. Install frontend dependencies
npm install

# 3. Build and run the application in development mode
# Use --release for a build that runs much faster (image loading etc.)
npx tauri dev --release

Contributions are wel­come and highly ap­pre­ci­ated! Whether it’s re­port­ing a bug, sug­gest­ing a fea­ture, or sub­mit­ting a pull re­quest, your help makes this pro­ject bet­ter. Please feel free to open an is­sue to dis­cuss your ideas.

A huge thank you to the fol­low­ing pro­jects and tools that were very im­por­tant in the de­vel­op­ment of RapidRAW:

* Google AI Studio: For providing amazing assistance in researching and implementing image-processing algorithms, and for an overall speed boost.

* rawler: For the ex­cel­lent Rust crate that pro­vides the foun­da­tion for RAW file pro­cess­ing in this pro­ject.

I’m an 18-year-old developer balancing this project with an apprenticeship, so your support means the world. If you find RapidRAW useful or exciting, please consider donating to help me dedicate more time to its development and cover any associated costs.

* Crypto:

This pro­ject is li­censed un­der the GNU Affero General Public License v3.0 (AGPL-3.0). I chose this li­cense to en­sure that RapidRAW and any of its de­riv­a­tives will al­ways re­main open-source and free for the com­mu­nity. It pro­tects the pro­ject from be­ing used in closed-source com­mer­cial soft­ware, en­sur­ing that im­prove­ments ben­e­fit every­one.

See the LICENSE file for more de­tails.

...

Read the original on github.com »

9 206 shares, 13 trendiness

What Rails Developers Actually Need to Know

Ruby 3.4 takes the first step in a multi-ver­sion tran­si­tion to frozen string lit­er­als by de­fault. Your Rails app will con­tinue work­ing ex­actly as be­fore, but Ruby now pro­vides opt-in warn­ings to help you pre­pare. Here’s what you need to know.

Ruby is im­ple­ment­ing frozen string lit­er­als grad­u­ally over three re­leases:

* Ruby 3.4 (Now): Opt-in warnings when you enable deprecation warnings

* Ruby 3.7 (Planned): Warnings become the default

* Ruby 4.0 (Planned): Frozen string literals become the default

By default, nothing changes. Your code runs exactly as before. But when you enable deprecation warnings (for example, by running Ruby with -W:deprecated or setting Warning[:deprecated] = true), mutating a string literal emits a warning that the literal will be frozen in a future release.

Important: These warn­ings are opt-in. You won’t see them un­less you ex­plic­itly en­able them.

Performance im­prove­ments vary by ap­pli­ca­tion, but bench­marks have shown:

* Up to 20% re­duc­tion in garbage col­lec­tion for string-heavy code

* Faster ex­e­cu­tion in hot paths that cre­ate many iden­ti­cal strings

For more Rails per­for­mance op­ti­miza­tion strate­gies, check out our guide on find­ing the 20% of code caus­ing 80% of per­for­mance prob­lems.

The biggest impact won’t be your code - it’ll be your dependencies: older gems that still mutate string literals will emit warnings you can’t fix directly, so you’ll be waiting on (or contributing) upstream updates.

This allows Ruby to deduplicate identical string literals, allocate fewer objects, and spend less time on garbage collection.

The Ruby team is mov­ing away from magic com­ments. For new code, write code that nat­u­rally works with frozen strings by treat­ing all strings as im­mutable. See the Common Patterns to Fix sec­tion for tech­niques that work well with frozen strings.

* Don’t rush to remove magic comments - Files with the comment keep their current behavior

* Fix warnings gradually - Use CI to track new warnings

* Update gems first - Check for updates that fix string mutation warnings

Yes. Here’s why the tran­si­tion is de­vel­oper-friendly:

* Nothing breaks by default - Your app runs exactly as before

* Warnings are opt-in - You control when to see them

* Now (Ruby 3.4): Upgrade and run nor­mally, en­able warn­ings in de­vel­op­ment

* Before Ruby 3.7: Fix warn­ings at your own pace

* Ruby 3.7: Warnings be­come de­fault, most is­sues should be fixed

Ruby 3.4’s opt-in warn­ings are the first step in a thought­ful, multi-ver­sion tran­si­tion. Enable them when you’re ready, fix is­sues at your own pace, and pre­pare for bet­ter per­for­mance in Ruby 4.0.

...

Read the original on prateekcodes.dev »

10 186 shares, 10 trendiness

Is the doc bot docs, or not?

Upgrading my Shopify email noti­fi­ca­tion tem­plates this morn­ing, I asked Shopify’s LLM-powered devel­oper doc­u­men­ta­tion bot this ques­tion:

What’s the syn­tax, in Liquid, to de­tect whether an or­der in an email noti­fi­ca­tion con­tains items that will be ful­filled through Shopify Collective?

I’d done a few tra­di­tional searches, but could­n’t find an an­swer, so I thought, okay, let’s try the doc bot!

The re­ply came, quick and clean:

{% if order.tags contains 'Shopify Collective' %}
  Some items in your order are being fulfilled by a partner merchant through Shopify Collective.
{% endif %}

Looks great. I added this code, but it did­n’t work; the con­di­tion was­n’t be­ing sat­is­fied, even when or­ders had the nec­es­sary tag, vis­ible on the ad­min screen.

Shopify does­n’t pro­vide a way to test uncon­ven­tional email for­mats with­out actu­ally plac­ing real or­ders, so I did my cus­tomary dance of or­der-re­fund, or­der-re­fund, or­der-re­fund. My credit card is go­ing to get locked one of these days.

Turns out, the Shopify Collective tag is­n’t pre­sent on the or­der at the time the con­fir­ma­tion email is gen­er­ated. It’s added later, pos­sibly just mo­ments later, by some other cryp­tic Shopify process.

I don’t think this is doc­u­mented any­where, so in that sense, it’s not sur­prising the doc­u­men­ta­tion bot got it wrong. My ques­tion is: what’s the point of a doc bot that just takes a guess?

More point­edly: is the doc bot docs, or not?

And very point­edly in­deed: what are we even do­ing here??

I’ve sent other queries to the doc bot and received helpful replies; I’d categorize these basically as search: “How do I do X?” “What’s the syntax for Y?” But I believe this is a situation in which the cost of bad advice outweighs the benefit of quick help by 10X, at least. I can, in fact, figure out how to do X using the real docs. Only the doc bot can make things up.

If it was Claude mak­ing this kind of mis­take, I’d be an­noyed but not sur­prised. But this is Shopify’s sanc­tioned helper! It waits twin­kling in the header of every page of the dev site. I suppose there are do­mains in which just tak­ing a guess is okay; is the offi­cial doc­u­men­ta­tion one of them?

I vote no, and I think a freestyling doc bot under­mines the ef­fort and care of the folks at Shopify who work — in my esti­ma­tion very suc­cess­fully — to pro­duce doc­u­men­ta­tion that is thor­ough and ac­cu­rate.

P. S. In the end, I adapted the code from the sec­tion of the noti­fi­ca­tion that lists the items in the or­der — a bit cir­cuitous, but it works:

{% assign is_collective_order = false %}
{% for line_item in line_items %}
  {% if line_item.product.tags contains 'Shopify Collective' %}
    {% assign is_collective_order = true %}
  {% endif %}
{% endfor %}

The prod­uct-level Shopify Collective tag is avail­able and reli­able at the time the noti­fi­ca­tion is gen­er­ated.


...

Read the original on www.robinsloan.com »
