10 interesting stories served every morning and every evening.
More than a decade ago, when I was applying to graduate school, I went through a period of deep uncertainty. I had tried the previous year and hadn’t gotten in anywhere. I wanted to try again, but I had a lot going against me.
I’d spent most of my undergrad building a student job-portal startup and hadn’t balanced it well with academics. My GPA needed explaining. My GMAT score was just okay. I didn’t come from a big-brand employer. And there was no shortage of people with similar or stronger profiles applying to the same schools.
Even though I had learned a few things from the first round, the second attempt was still difficult. There were multiple points after I submitted applications where I lost hope.
But during that stretch, a friend and colleague kept repeating one line to me:
“All it takes is for one to work out.”
He’d say it every time I spiraled. And as much as it made me smile, a big part of me didn’t fully believe it. Still, it became a little maxim between us. And eventually, he was right — that one did work out. And it changed my life.
I’ve thought about that framing so many times since then.
You don’t need every job to choose you. You just need the one that’s the right fit.
You don’t need every house to accept your offer. You just need the one that feels like home.
You don’t need every person to want to build a life with you. You just need the one.
You don’t need ten universities to say yes. You just need the one that opens the right door.
These processes — college admissions, job searches, home buying, finding a partner — can be emotionally brutal. They can get you down in ways that feel personal. But in those moments, that truth can be grounding.
All it takes is for one to work out.
And that one is all you need.
...
Read the original on alearningaday.blog »
For those unfamiliar, Zigtools was founded to support the Zig community, especially newcomers, by creating editor tooling such as ZLS, providing building blocks for language servers written in Zig with lsp-kit, working on tools like the Zigtools Playground, and contributing to Zig editor extensions like vscode-zig.
A couple weeks ago, a Zig resource called Zigbook was released with a bold claim of “zero AI” and an original “project-based” structure.
Unfortunately, even a cursory look at the nonsense chapter structure, book content, examples, generic website, or post-backlash issue-disabled repo reveals that the book is wholly LLM slop and the project itself is structured like some sort of sycophantic psy-op, with botted accounts and fake reactions.
We’re leaving out all direct links to Zigbook to not give them any more SEO traction.
We thought that the broad community backlash would be the end of the project, but Zigbook persevered, releasing just last week a brand new feature, a “high-voltage beta” Zig playground.
As we at Zigtools have our own Zig playground (repo, website), our interest was immediately piqued. The form and functionality looked pretty similar and Zigbook even integrated (in a non-functional manner) ZLS into their playground to provide all the fancy editor bells-and-whistles, like code completions and goto definition.
Knowing Zigbook’s history of deception, we immediately investigated the WASM blobs. Unfortunately, the WASM blobs are byte-for-byte identical to ours. This cannot be a coincidence given the two blobs (zig.wasm, a lightly modified version of the Zig compiler, and zls.wasm, ZLS with a modified entry point for WASI) are entirely custom-made for the Zigtools Playground.
We archived the WASM files for your convenience, courtesy of the great Internet Archive:
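For what it’s worth, a byte-for-byte check like the one described is easy to reproduce. Here is a small Python sketch; the file paths are placeholders of our own invention, not the actual blob locations:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large WASM blobs never need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder paths; identical digests mean byte-for-byte identical files.
# same = sha256_of("zigtools/zls.wasm") == sha256_of("zigbook/zls.wasm")
```

Comparing cryptographic digests is equivalent to comparing the raw bytes for any practical purpose, and is how such a claim is usually verified.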
We proceeded to look at the JavaScript code, which we quickly determined was similarly copied, but with LLM distortions, likely to prevent the code from being completely identical. Still, certain sections were copied one-to-one, like the JavaScript worker data-passing structure and logging (original ZLS playground code, plagiarized Zigbook code).
The following code from both files is identical:
try {
    // @ts-ignore
    const exitCode = wasi.start(instance);
    postMessage({
        stderr: `\n\n---\nexit with exit code ${exitCode}\n---\n`,
    });
} catch (err) {
    postMessage({ stderr: `${err}` });
}
postMessage({
    done: true,
});

onmessage = (event) => {
    if (event.data.run) {
        run(event.data.run);
    }
};
The \n\n---\nexit with exit code ${exitCode}\n---\n string is perhaps the most obviously copied one.
Funnily enough, despite copying many parts of our code, Zigbook didn’t copy the most important part of the ZLS integration code, the JavaScript ZLS API designed to work with the ZLS WASM binary’s API. That JavaScript code is absolutely required to interact with the ZLS binary which they did plagiarize. Zigbook either avoided copying that JavaScript code because they knew it would be too glaringly obvious, because they fundamentally do not understand how the Zigtools Playground works, or because they plan to copy more of our code.
To be clear, copying our code and WASM blobs is entirely permissible given that the playground and Zig are MIT licensed. Unfortunately, Zigbook has not complied with the terms of the MIT license at all, and seemingly claims the code and blobs as their own without correctly reproducing the license.
We sent Zigbook a neutral PR correcting the license violations, but they quickly closed it and deleted the description, seemingly to hide their misdeeds.
The original description (also available in the “edits” dropdown of the original PR comment) is reproduced below:
We (@zigtools) noticed you were using code from the Zigtools Playground, including byte-by-byte copies of our WASM blobs and excerpts of our JavaScript source code. This is a violation of the MIT license that the Zigtools Playground is licensed under, alongside a violation of the Zig MIT license (for the zig.wasm blob):
“The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.”
We’ve fixed this by adding the licenses in question to your repository. As your repository does not include a direct link to the *.wasm dependencies, we’ve added a license disclaimer on the playground page as well that mentions the licenses.
Zigbook’s aforementioned bad behavior and their continued violation of our license and unwillingness to fix the violation motivated us to write this blog post.
It’s sad that our first blog post is about the plagiarism of our coolest subproject. We challenged ourselves by creating a WASM-based client-side playground to enable offline usage, code privacy, and no server costs.
This incident has motivated us to invest more time into our playground and has generated a couple of ideas:
* We’d like to enable multifile support to allow more complex Zig projects to be run in the browser
* We’d like to collaborate with fellow Ziguanas to integrate the playground into their excellent Zig tutorials, books, and blog posts
* A perfect example use case would be enabling folks to hop into Ziglings online with the playground
* The Zig website itself would be a great target as well!
* We’d like to support stack traces using DWARF debug info which is not yet emitted by the self-hosted Zig compiler
As Zig community members, we advise all other members of the Zig community to steer clear of Zigbook.
If you’re looking to learn Zig, we strongly recommend looking at the excellent official Zig learn page which contains excellent resources from the previously mentioned Ziglings to Karl Seguin’s Learning Zig.
We’re also using this opportunity to mention that we’re fundraising to keep ZLS sustainable for our only full-time maintainer, Techatrix. We’d be thrilled if you’d be willing to give just $5 a month. You can check out our OpenCollective or GitHub Sponsors.
...
Read the original on zigtools.org »
Americans have grown sour on one of the longtime key ingredients of the American dream.
Almost two-thirds of registered voters say that a four-year college degree isn’t worth the cost, according to a new NBC News poll, a dramatic decline over the last decade.
Just 33% agree a four-year college degree is “worth the cost because people have a better chance to get a good job and earn more money over their lifetime,” while 63% agree more with the concept that it’s “not worth the cost because people often graduate without specific job skills and with a large amount of debt to pay off.”
In 2017, U.S. adults surveyed were virtually split on the question — 49% said a degree was worth the cost and 47% said it wasn’t. When CNBC asked the same question in 2013 as part of its All-America Economic Survey, 53% said a degree was worth it and 40% said it was not.
The eye-popping shift over the last 12 years comes against the backdrop of several major trends shaping the job market and the education world, from exploding college tuition prices to rapid changes in the modern economy — which seems once again poised for radical transformation alongside advances in AI.
“It’s just remarkable to see attitudes on any issue shift this dramatically, and particularly on a central tenet of the American dream, which is a college degree. Americans used to view a college degree as aspirational — it provided an opportunity for a better life. And now that promise is really in doubt,” said Democratic pollster Jeff Horwitt of Hart Research Associates, who conducted the poll along with the Republican pollster Bill McInturff of Public Opinion Strategies.
“What is really surprising about it is that everybody has moved. It’s not just people who don’t have a college degree,” Horwitt added.
National data from the Bureau of Labor Statistics shows that those with advanced degrees earn more and have lower unemployment rates than those with lower levels of education. That’s been true for years.
But what has shifted is the price of college. While there have been some small declines in tuition prices over the last decade, when adjusted for inflation, College Board data shows that the average, inflation-adjusted cost of public four-year college tuition for in-state students has doubled since 1995. Tuition at private, four-year colleges is up 75% over the same period.
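As a back-of-envelope check (assuming a roughly 30-year window from 1995 to today, which the article does not state precisely), those totals imply modest but relentless annual real growth:

```python
def implied_annual_growth(total_multiple: float, years: int) -> float:
    # Compound annual growth rate implied by a total price multiple.
    return total_multiple ** (1 / years) - 1

# Public in-state tuition: 2x in real terms; private four-year: 1.75x.
print(round(100 * implied_annual_growth(2.0, 30), 1))   # ~2.3% per year above inflation
print(round(100 * implied_annual_growth(1.75, 30), 1))  # ~1.9% per year above inflation
```

A couple of percent above inflation every year, compounded for three decades, is exactly the kind of quiet drift that eventually registers as a doubling.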
Poll respondents who spoke with NBC News all emphasized those rising costs as a major reason why the value of a four-year degree has been undercut.
Jacob Kennedy, a 28-year-old server and bartender living in Detroit, told NBC News that while he believes “an educated populace is the most important thing for a country to have,” if people can’t use those degrees because of the debt they’re carrying, it undercuts the value.
Kennedy, who has a two-year degree, reflected on “the number of people who I’ve met working in the service industry who have four-year degrees and then within a year of graduating immediately quit their ‘grown-up jobs’ to go back to the jobs they had.”
“The cost overwhelms the value,” he continued. “You go to school with all that student debt — the jobs you get out of college don’t pay that debt, so you have to go find something else that can pay that debt.”
The 20-point decline over the last 12 years among those who say a degree is worth it — from 53% in 2013 to 33% now — is reflected across virtually every demographic group. But the shift in sentiment is especially striking among Republicans.
In 2013, 55% of Republicans called a college degree worth it, while 38% said it wasn’t worth it. In the new poll, just 22% of Republicans say the four-year degree is worth it, while 74% say it’s not.
Democrats have seen a significant shift too, but not to the same extent — a decline from 61% who said a degree was worth it in 2013 to 47% this year.
Over the same period, the composition of both parties has changed, with the Republican Party garnering new and deeper support from voters without college degrees, while the Democratic Party drew in more degree-holders.
Remarkably, less than half of voters with college degrees see those degrees as worth the cost: 46% now, down from 63% in 2013.
Those without a college degree were about split on the question in 2013. Now, 71% say a four-year degree is not worth the cost, while 26% say it is.
Preston Cooper, a senior fellow at the right-leaning American Enterprise Institute, said enough cracks have proliferated under the long-standing narrative that a college degree always pays off to create a serious rupture.
“Some people drop out, or sometimes people end up with a degree that is not worth a whole lot in the labor market, and sometimes people pay way too much for a degree relative to the value of what that credential is,” he said. “These cases have created enough exceptions to the rule that a bachelor’s degree always pays off, so that people are now more skeptical.”
The upshot is that interest in technical, vocational and two-year degree programs has soared.
“I think students are more wary about taking on the risk of a four-year or even a two-year degree,” he said. “They’re now more interested in any pathway that can get them into the labor force more quickly.”
Josiah Garcia, a 24-year-old in Virginia, said he recently enrolled in a program to receive a four-year engineering degree after working as an electrician’s apprentice. He said he was motivated to go back to school because he saw the degree as having a direct effect on his future earning potential.
But he added that he didn’t feel that those who sought other degrees in areas like art or theater could say the same.
“A lot of my friends who went to school for art or dance didn’t get the job they thought they could get after graduating,” he said, arguing that degrees for “softer skills” should be cheaper than those in STEM fields.
Jessica Burns, a 38-year-old Iowa resident and bachelor’s degree-holder who works for an insurance company, told NBC News that for her, the worth of a four-year degree largely depends on the cost.
She went to a community college and then a state school to earn her degree, so she said she graduated without having to spend an “insane” amount of money.
But her husband went to a private college for his degree, and she quipped: “We are going to have student loan debt for him forever.”
Burns said she believes a college degree is “essential for a lot of jobs. You’re not going to get an interview if you don’t have a four-year degree for a lot of jobs in my field.”
But she framed the value of degrees more in terms of how society views them instead of intrinsic value.
“It’s not valuable because it’s brought a bunch of value added, it’s valuable because it’s the key to even getting in the door,” she said. “Our society needs to figure out that if we value it, we need to make it affordable.”
Burns said she believes that a lot more people in her millennial generation are “now saddled with a huge amount of debt, even as successful business professionals,” which will influence how her peers approach paying for college for their children.
There hasn’t just been a decline in the cost-benefit analysis of a degree. Gallup polling also shows a marked decline in public confidence in higher education over the last decade, albeit with a slight increase over the last year.
“This is a political problem. It’s also a real problem for higher education. Colleges and universities have lost that connection they’ve had with a large swath of the American people based on affordability,” Horwitt said. “They’re now seen as out of touch and not accessible to many Americans.”
The NBC News poll surveyed 1,000 registered voters Oct. 24-28 via a mix of telephone interviews and an online survey sent via text message. The margin of error is plus or minus 3.1 percentage points.
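The quoted margin of error is consistent with the standard worst-case formula for a simple random sample at 95% confidence. This is a sketch of that textbook formula, not NBC’s actual methodology, which may include design effects:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    # Worst-case (p = 0.5) sampling margin at roughly 95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # 3.1 percentage points
```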
...
Read the original on www.nbcnews.com »
Fed up with trillion-dollar companies exploiting your data? Forced to use their services? Your data held for ransom? Your data used to train their AI models? Opt-outs for data collection instead of opt-ins?
Join the movement to make companies more like Clippy. Set your profile picture to Clippy and make your voice heard.
Below is a video that explains the Be Like Clippy movement. It’s a call to action for developers, companies, and users alike to embrace a more open, transparent, and user-friendly approach to technology.
...
Read the original on be-clippy.com »
Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.
You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
If you’d like to support Advent of Code, you can do so indirectly by helping to [Share] it with others or directly via AoC++.
If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.
Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.
#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";
How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.
Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.
I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).
I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.
Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.
Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.
...
Read the original on adventofcode.com »
Landlock: What Is It?
Landlock is a Linux API that lets applications explicitly declare which resources they are allowed to access. Its philosophy is similar to OpenBSD’s unveil() and (less so) pledge(): programs can make a contract with the kernel stating, “I only need these files or resources — deny me everything else if I’m compromised.”
It provides a simple, developer-friendly way to add defense-in-depth to applications. Compared to traditional Linux security mechanisms, Landlock is vastly easier to understand and integrate.
This post is meant to be an accessible introduction, and hopefully persuade you to give Landlock a try.
How Does It Work?
Landlock is a Linux Security Module (LSM) available since Linux 5.13. Unlike MAC frameworks such as SELinux or AppArmor, Landlock applies transient restrictions: policies are created at runtime, enforced on the current thread and its future descendants, and disappear when the process exits.
You don’t tag files with labels or extended attributes. Instead, applications create policies dynamically from two building blocks:

* Handled accesses — the categories of operations you want to restrict (e.g., filesystem read/write).
* Access grants — an explicit allowlist of which objects are permitted for those operations.
For example, you could create a policy that handles all filesystem reads/writes and network binds, and grants read/write access to /home/user plus permission to bind to port 2222.
The application then calls landlock_restrict_self() to enter the restricted domain. From that point on, the calling thread and all of its future child threads and processes are permanently constrained. Restrictions cannot be revoked.
Policies can be layered (up to 16 layers). A child layer may further reduce access, but cannot reintroduce permissions the parent layer removed. For example, a child thread may add a layer to this policy to restrict itself to only reading /home/user, but it cannot regain permission to bind to port 2222 once a layer omits this grant.
Landlock is unprivileged — any application can sandbox itself. It also uses ABI versioning, allowing programs to apply best-effort sandboxing even on older kernels lacking newer features.
It’s also a stackable LSM, meaning you can combine it with SELinux or AppArmor as a supplemental layer.
Why Should You Use It?
Landlock shines when an application has a predictable set of files or directories it needs. For example, a web server could restrict itself to accessing only /var/www/html and /tmp.
Unlike SELinux or AppArmor, Landlock policies don’t require administrator involvement or system-wide configuration. Developers can embed policies directly in application code, making sandboxing a natural part of the development process.
Because Landlock requires no privileges to use, adding it to most programs is straightforward.
Bindings exist for languages such as Rust, Go, and Haskell, and several projects provide user-friendly unveil-style wrappers.
An official C library doesn’t exist yet, unfortunately, but there are several third-party ones you can try.
use landlock::{
    ABI, Access, AccessFs, Ruleset, RulesetAttr, RulesetCreatedAttr, RulesetStatus, RulesetError,
    path_beneath_rules,
};

fn restrict_thread() -> Result<(), RulesetError> {
    let abi = ABI::V1;
    let status = Ruleset::default()
        .handle_access(AccessFs::from_all(abi))?
        .create()?
        // Read-only access to /usr, /etc and /dev.
        .add_rules(path_beneath_rules(&["/usr", "/etc", "/dev"], AccessFs::from_read(abi)))?
        // Read-write access to /home and /tmp.
        .add_rules(path_beneath_rules(&["/home", "/tmp"], AccessFs::from_all(abi)))?
        .restrict_self()?;

    match status.ruleset {
        RulesetStatus::FullyEnforced => println!("Fully sandboxed."),
        RulesetStatus::PartiallyEnforced => println!("Partially sandboxed."),
        RulesetStatus::NotEnforced => println!("Not sandboxed! Please update your kernel."),
    }
    Ok(())
}
The State of Linux Sandboxing: Why This Matters
As Linux adoption grows, so does the amount of malware targeting desktop users. While Linux has historically enjoyed relative safety, this is largely due to smaller market share and higher technical barriers compared to Windows — not because Linux is inherently safer.
Linux is not a security panacea. For example, on most major distributions:

* Users can download and execute untrusted binaries with no warnings.
* Shell scripts can be piped from the internet and executed blindly.
* Many users run passwordless sudo, giving them root access on demand.

And unprivileged applications can typically read ~/.ssh, ~/.bashrc, browser cookies, and anything else in $HOME.
Several tools try to improve the state of security on Linux, but each has significant drawbacks:

* Containers (e.g., Docker): many users break isolation by using --privileged or --network host.
* Sandbox wrappers like Firejail: must be explicitly invoked each time, or you need a wrapper script.
* seccomp: blacklists are fragile, and new syscalls can break things. Argument filtering is difficult and full of TOCTOU hazards.
* SELinux: many users disable it due to complexity, and it is not enabled by default on most distributions (though it is used heavily in Android).
* AppArmor: easier than SELinux, but still requires admin-defined profiles. It is more commonly used on the desktop, yet many distributions ship with it disabled.
What Landlock could bring to the table:

* Long-running system daemons that run with elevated privileges could benefit from Landlock restrictions.
* Desktop applications dealing with binary formats, like PDF readers, image viewers, web browsers, and word processors, could be restricted to accessing only the files they originally opened.
* FTP and HTTP servers can be bound to the files they need. Even if nginx is running as root, an attacker who gets a full reverse shell won’t be able to access files outside the policy.
* If the supervisor proposal gets added, we could bring an Android-like permissions system to the Linux desktop. Flatpak does a decent job at this, but imagine if every process on your desktop had to explicitly ask (at least once) before accessing sensitive files or resources.

Pair that with an accessible GUI and a system for handling updates and saving permission grants, and we have the potential for a safer, more secure Linux user experience on the desktop.
Several promising features are under active development:
* Supervise Mode: lets a userspace “supervisor” interactively allow or deny access, similar to Android-style permission prompts.
* Socket Restrictions: fine-grained control over which types of sockets or ports processes may use.
* LANDLOCK_RESTRICT_SELF_TSYNC: ensures restrictions propagate to all threads in a process.
* LANDLOCK_ADD_RULE_NO_INHERIT (disclosure: this is my patch series): prevents rules from unintentionally inheriting permissions from parent directories, giving finer-grained filesystem control.
Landlock is a simple, unprivileged, deny-by-default sandboxing mechanism for Linux.
It’s easy to understand, easy to integrate, and has tremendous potential for improving desktop and application security.
Give it a try in your application.
...
Read the original on blog.prizrak.me »
Along with a few other integral tricks and techniques, Feynman’s trick was a big part of what made me love evaluating integrals. Although the technique itself goes back to Leibniz and is commonly known as the Leibniz integral rule, it was Richard Feynman who popularized it, which is why it is also referred to as Feynman’s trick. Here’s an excerpt from his book, Surely You’re Joking, Mr. Feynman!:
“One thing I never did learn was contour integration. I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me.
One day he told me to stay after class. “Feynman,” he said, “you talk too much and you make too much noise. I know why. You’re bored. So I’m going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that’s in this book, you can talk again.”
So every physics class, I paid no attention to what was going on with Pascal’s Law, or whatever they were doing. I was up in the back with this book: Advanced Calculus, by Woods. Bader knew I had studied Calculus for the Practical Man a little bit, so he gave me the real works — it was for a junior or senior course in college. It had Fourier series, Bessel functions, determinants, elliptic functions — all kinds of wonderful stuff that I didn’t know anything about.
That book also showed how to differentiate parameters under the integral sign — it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals.
The result was, when guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn’t do it with the standard methods they had learned in school. If it was contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.”
For me, employing this trick felt like I was using cheat codes to deal with integrals. At the same time, it enabled a lot of creativity and wishful thinking, which transformed integrals into puzzles. Unfortunately, this also means that there is no clear path on how and when to use this technique. In addition, what Feynman wrote still applies today since the method isn’t taught much, if at all, in universities. Therefore, the trick can seem obscure and difficult to grasp for newcomers.
In the following section, we will embark on a journey to develop some rules of thumb to have at our disposal when using Feynman’s trick. These are merely heuristics that I tend to use, so deviating from them can be perfectly acceptable. However, I hope they can provide a path to follow when nothing obvious or intuitive comes to mind while trying to use this trick, or better still, that they can serve as motivation for someone to start using the method.
Feynman already provided a significant hint about the trick when he mentioned differentiating under the integral sign, which is also an alternative name for the technique. More explicitly, if \(f(x,t)\) and \(\frac{\partial f(x,t)}{\partial t}\) are continuous with respect to both variables over the \([a,b]\) interval, then the following holds: \[\frac{d}{dt}\int_a^b f(x,t)\,dx=\int_a^b \frac{\partial f(x,t)}{\partial t}\,dx\]
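As a quick sanity check (assuming NumPy and SciPy are available), we can compare both sides of the rule numerically for \(f(x,t)=x^t\), where \(\int_0^1 x^t\,dx=\frac{1}{t+1}\):

```python
import numpy as np
from scipy.integrate import quad

# For f(x, t) = x**t on [0, 1]: the left-hand side of the Leibniz rule is
# d/dt [1/(t+1)] = -1/(t+1)**2, and the right-hand side is the integral of
# the t-derivative of the integrand, x**t * ln(x).
def rhs(t):
    val, _ = quad(lambda x: x**t * np.log(x), 0, 1)
    return val

t = 1.0
lhs = -1.0 / (t + 1) ** 2
print(lhs, rhs(t))  # both ≈ -0.25
```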
This is nice, but not so useful by itself, since it doesn’t say anything about how or when to apply it. Moreover, learning is not a spectator sport, and there are no shortcuts: one has to get their hands dirty. Take chess, for example: most people could read and understand the rules in a few minutes, yet if they then played a game, they would most likely get stomped by a more experienced player, because that player has learned strategies through practice.
Thus, with the goal of developing some strategies here as well, we will dive straight into action and approach Feynman’s trick using practical examples. As a “Hello, World!” introduction, let’s take a look at the following integral: \[\int_0^1 \frac{x-1}{\ln x}\,dx\]
You are encouraged to try and evaluate the integral using basic methods, but the logarithm in the denominator makes this integral quite stubborn to deal with. Feynman’s trick aims to get rid of this issue by differentiating under the integral sign, with respect to a parameter, in order to obtain an integral that is easier to evaluate.
Unfortunately, the integral above lacks a parameter, so the first step is to parameterise it (which can even mean introducing a whole function), but for this example we will simply consider: \[I(t)=\int_0^1 \frac{x^t-1}{\ln x}\,dx\]
Keep in mind that our original integral is just \(I(1)\). Of course, we could have placed a parameter in many different places, such as:
However, the main idea behind the trick is to obtain an integral that is easier to evaluate after differentiating with respect to the new parameter. Let’s put this into action and see what happens to \(I(t)\).
Notice how easy it was to evaluate the integral \(I’(t)=\int_0^1x^tdx\) from above. Had we kept \(I(a)\), \(I(b)\), or \(I(c)\), things wouldn’t have simplified at all after differentiating, and most significantly, we would still have the \(\ln x\) in the denominator, the very thing that made the integral hard to deal with in the first place.
We can already sense that the following might be an important question in the future: How to parameterise the integral when using Feynman’s Trick?
We will worry about that a bit later; for now, let’s finish the integral, as we have only found \(I’(t)\). Since we are looking for \(I(1)\), we need to integrate \(I’(t)\) back and set \(t=1\) in order to arrive there. Here it’s useful to recall that: \[f(b)-f(a)=\int_a^b f’(t)\,dt\]
For us, \(f\) plays the role of \(I\) in the above expression. Luckily \(I(0)=0\), and as we are looking for \(I=I(1)\), we have: \[I=I(1)-I(0)=\int_0^1 I’(t)\,dt=\int_0^1\frac{dt}{t+1}=\ln 2\]
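If you want to double-check the whole pipeline numerically (SciPy assumed), quadrature on the original integrand agrees with \(\ln 2\):

```python
import numpy as np
from scipy.integrate import quad

# The "Hello, World!" integrand (x - 1)/ln(x); Feynman's trick says the
# integral over [0, 1] equals ln(2). The integrand extends continuously to
# both endpoints, and quad never evaluates the endpoints themselves.
val, _ = quad(lambda x: (x - 1) / np.log(x), 0, 1)
print(val, np.log(2))  # both ≈ 0.693147
```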
So that is the big picture of Feynman’s trick: we have an integral that is hard to evaluate in its original form, so we differentiate under the integral sign to transform it into one that is easier to integrate, and at the end we undo the differentiation step.
As emphasized above, the main goal of the technique is to obtain an integral that is easier to evaluate after differentiating with respect to a parameter, and one issue is that it is not always obvious how to parameterise the integral. To build some intuition, we will play around with the integral below.
The most annoying thing is the logarithm, so if we get rid of it everything should be straightforward. There are a few parameter possibilities that make sense to consider, namely:
With the first one we are out of luck, as differentiating with respect to \(a\) gives:
Therefore, if we tried to go back to what we’re looking for, which is \(I=I(1)-I(0)\), we would end up with \(I=I+\text{other stuff}\); the \(I\) cancels out and we wouldn’t be able to recover it. Unfortunately, there’s no magic formula that tells us a priori whether placing a parameter in a specific place will succeed or fail in evaluating an integral, and sometimes we are simply unlucky.
In contrast, things work out nicely with the second choice from above.
Again, we are looking to find \(I(1)\), and as \(I(0) = 0\), we have:
This works, but we can do even better. Looking back at the Hello, World! integral, we can see that we simplified the logarithm in the denominator while performing \(\frac{\partial}{\partial t}x^t\). This is also the first thing I always look for when using this technique: simplifying something in the integrand that is independent of the parameter when differentiating. Sure, for the current integral we got rid of the logarithm, but the denominator remained intact.
In short, this will be our first rule of thumb: if possible, place the parameter so that something in the integral that is not related to the parameter gets simplified.
In order to achieve this with our integral we would need to get rid of \(1+x^2\), and by using \(\ln x=\frac12\ln(x^2)\) we can rewrite the integral as:
Finally, in this form it’s more natural to place the parameter so that it simplifies the \(1+x^2\) when differentiating with respect to \(t\); namely, we can consider:
As with \(I(b)\), we are looking to find \(I(1)\); however, here \(I(0)\) equals \(\frac12\int_0^1\frac{\ln(2x)}{1+x^2}dx\), not \(0\).
For this specific integral we only avoided doing partial fractions, so simplifying the denominator wasn’t a big improvement. However, I want to emphasize its importance because it makes deciding where to place the parameter come much more naturally. Of course, in case there’s no appropriate or immediate way to achieve this, it’s perfectly fine to place the parameter elsewhere too.
As mentioned previously, practice is the best way to get comfortable with new techniques, so below are more integrals to evaluate, alongside some hidden steps in case they are needed. However, I strongly recommend trying to deal with the integrals before looking at any hints, and only checking them afterwards for correctness.
Consider introducing the following parameter: \[I(t)=\int_0^\frac{\pi}{2} \frac{\ln(1-t\sin x)}{\sin x}dx \Rightarrow I’(t)= -\frac{2\arctan\left(\sqrt{\frac{1+t}{1-t}}\right)}{\sqrt{1-t^2}}\] This should lead to: \[\int_0^\frac{\pi}{2} \frac{\ln(1-\sin x)}{\sin x}dx = I(1) - I(0)=\int_0^1 I’(t) dt \overset{\sqrt{\frac{1-t}{1+t}}=x} = -\frac{3\pi^2}{8}\] But it would be even better if the integral were parameterised as: \[I(t)=\int_0^\frac{\pi}{2} \frac{\ln(1-\sin t\sin x)}{\sin x}dx\] That is because, usually, when trigonometric functions are present, parameterising the integral with another trigonometric function leads to a smoother result.
Consider introducing the following parameter: \[I(t)=\int_0^1 \frac{\ln(1-t(x-x^2))}{x-x^2}dx\Rightarrow I’(t) = \frac{4\arctan\left(\sqrt{\frac{t}{4-t}}\right)}{\sqrt{t(4-t)}}\] This should lead to: \[I(1)=\int_0^1 \frac{\ln(1-x+x^2)}{x-x^2}dx = I(1) - I(0) = \int_0^1 I’(t)dt \overset{\sqrt{\frac{4-t}{t}}= x}= -\frac{\pi^2}{9}\]
Consider introducing the following parameter: \[I(t)=\int_0^\frac{\pi}{2} \frac{\arctan(t\sin x)}{\sin x}dx\Rightarrow I’(t)=\frac{\pi}{2\sqrt{1+t^2}}\] This should lead to: \[I(1)=\int_0^\frac{\pi}{2} \frac{\arctan(\sin x)}{\sin x}dx = I(1)-I(0) = \int_0^1 I’(t)dt = \frac{\pi}{2}\ln(1+\sqrt 2)\] It will also work if the integral is parameterised as: \[I(t)=\int_0^\frac{\pi}{2} \frac{\arctan(\tan t\sin x)}{\sin x}dx\] However, in this case the first variant is simple enough to integrate back.
Consider introducing the following parameter: \[I(t)=\int_0^\infty x^2e^{-\left(4x^2+\frac{t}{x^2}\right)}dx\Rightarrow I’(t)=-\frac{\sqrt \pi}{4} e^{-4\sqrt t}\] Where the above result follows by using Glasser’s master theorem alongside the Gaussian integral. This should lead to: \[\int_0^\infty x^2e^{-\left(4x^2+\frac{9}{x^2}\right)}dx = I(9)- I(0) + I(0) = \int_0^9 I’(t) dt +\frac{\sqrt \pi}{32}=\frac{13}{32}\frac{\sqrt \pi}{e^{12}}\]
Consider parameterising the integral as: \[I(t)=\frac12\int_0^1\frac{\ln(1-t(1-x^2))}{1-x^2}dx\Rightarrow I’(t)=\frac{\arctan\left(\sqrt{\frac{t}{1-t}}\right)}{2\sqrt{t(1-t)}}\] This should lead to: \[\int_0^1 \frac{\ln x}{1-x^2}dx = I(1)- I(0) = \int_0^1 I’(t)dt \overset{\sqrt{\frac{1-t}{t}} = x}= -\frac{\pi^2}{8}\]
Consider parameterising the integral as: \[I(t)=\int_0^\infty \frac{e^{-t(1+x^2)}}{1+x^2}dx\Rightarrow I’(t) = -\frac{\sqrt \pi}{2\sqrt t}e^{-t}\] This should lead to: \[\int_0^\infty \frac{e^{-x^2}}{1+x^2}dx = e\left(I(1)-I(\infty)\right) = -e\int_1^\infty I’(t)dt= \frac{\pi e}{2}\operatorname{erfc}(1)\] Where \(\operatorname{erfc}(x)\) is the complementary error function.
Since \(1-x^2+x^4=(1+x^2)^2-3x^2\), consider parameterising the integral as: \[I(t)=\int_0^\infty \frac{\ln\left(\frac{t(1+x^2)^2-3x^2}{(1-x^2)^2}\right)}{(1+x^2)^2}dx\Rightarrow I’(t)=\frac{\pi}{2\sqrt{t(4t-3)}}\] And in order to go back it should be observed that \(\frac34(1+x^2)^2-3x^2=\frac34(1-x^2)^2\). \[\int_0^\infty \frac{\ln\left(\frac{1-x^2+x^4}{(1-x^2)^2}\right)}{(1+x^2)^2}dx=I(1)- I\left(\frac34\right)+ I\left(\frac34\right)\] \[=\int_\frac34^1 I’(t)dt + \frac{\pi}{4}\ln\left(\frac{3}{4}\right) = \frac{\pi}{2}\ln\left(\frac32\right)\]
The previous chapter emphasized parameterising integrals so that something in the integral that is not related to the parameter gets simplified when differentiating (if possible). However, there are times when, even though we can introduce a parameter to accomplish that, it isn’t enough to finish the integral.
In this chapter we will look at a different way to obtain this simplification. Let’s start by looking at a modified version of an integral that was previously given as an exercise.
With \(\int_{-\infty}^\infty \frac{e^{-x^2}}{1+x^2}dx\) it was quite direct to parameterise the integral as \(\int_{-\infty}^\infty \frac{e^{-t(1+x^2)}}{1+x^2}dx\), since this simplifies the denominator. However, the analogous parameterisation for our integral, \(\int_{-\infty}^\infty \frac{e^{-x^2-t(1+x^4)}}{1+x^4}dx\), doesn’t seem to work, as it complicates things a bit too much.
There is, however, a way to simplify the denominator and at the same time obtain a decent integral afterwards. Without getting into too much detail, I will parameterise the integral as:
This will seem obscure, but fear not, as we will never use this approach again. The whole point is to simplify \(1+x^4\), and the above function was created explicitly to achieve that, since \(\frac{\partial}{\partial t}e^{-tx^2}(x^2\sin t+\cos t)\) equals \(-(1+x^4)e^{-tx^2}\sin t\). Note that even though we introduced a couple of other terms, they aren’t disruptive.
Here we are looking for \(I=I(0)\), and we also have \(I(\infty)=0\); therefore:
Where \(S(x)\) and \(C(x)\) are the Fresnel integrals. The approach, however, is what matters here, not the result itself.
We can avoid the parameterisation above by directly using \(\frac{1}{1+x^4}=\int_0^\infty e^{-tx^2}\sin t \, dt\) and then switching to double integrals; in other words, we employ the accelerated Feynman’s trick, in which we skip the usual parameterisation step.
The rest goes exactly as with the previous method; all we did here was skip the differentiation step and switch to double integrals instead.
A natural question arises: where did \(\frac{1}{1+x^4}=\int_0^\infty e^{-tx^2}\sin t\, dt\) come from? Or better, how can someone come up with similar results for other integrals? In the case above, the Laplace transform of the sine function was used, but in general it’s useful to have a list of such identities. There are tables of integral results that can be consulted, for example Table of Integrals, Series, and Products by Gradshteyn and Ryzhik, but alternatively one can build up their own list of results that tend to appear often while evaluating other integrals.
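Identities like this are also easy to sanity-check numerically before committing to them (SciPy assumed); here is the Laplace-transform identity \(\frac{1}{1+x^4}=\int_0^\infty e^{-tx^2}\sin t\,dt\) checked at an arbitrary point:

```python
import numpy as np
from scipy.integrate import quad

# Laplace transform of sin evaluated at s = x**2: L[sin](s) = 1/(1 + s**2),
# so the t-integral should equal 1/(1 + x**4).
x = 1.3
val, _ = quad(lambda t: np.exp(-t * x**2) * np.sin(t), 0, np.inf)
print(val, 1 / (1 + x**4))  # both ≈ 0.2593
```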
Let’s conclude this chapter by evaluating one of the most popular integrals that appears when Feynman’s trick gets into the conversation.
Since \(\int_0^\infty e^{-xt} dt = \frac{1}{x}\), we can make use of this to rewrite the integral as: \[\int_0^\infty \frac{\sin x}{x}\,dx=\int_0^\infty\int_0^\infty e^{-xt}\sin x\,dt\,dx=\int_0^\infty\int_0^\infty e^{-xt}\sin x\,dx\,dt=\int_0^\infty\frac{dt}{1+t^2}=\frac{\pi}{2}\]
Alternatively, we could also consider the parameterised version of this integral, \(\int_0^\infty \frac{\sin x}{x}e^{-xt}dx\); however, I feel that switching to double integrals is far more intuitive.
It might be worth highlighting again that this method should preferably be used when parameterising the integral leads nowhere. For the above integral, the natural introduction of \(\int_0^\infty \frac{\sin(tx)}{x}dx\) unfortunately does fail, as we obtain a divergent integral after differentiating under the integral sign.
As in the previous chapter, below are more integrals alongside some hints for practicing the accelerated variation of Feynman’s trick. In this case, however, I do recommend peeking at the hints sooner if nothing obvious comes to mind, and afterwards trying to understand why the mentioned identity can be used.
Start by substituting \(x^2\to x\) and then switch to double integrals using: \[\int_0^\infty e^{-xt^2}dt = \frac{\sqrt \pi}{2\sqrt x}\] Where the latter result is due to the Gaussian integral. Also, this integral is one particular case of the Fresnel integral.
Switch directly to double integrals by using: \[\int_0^1 \frac{\ln t}{t-\frac{1}{x}}dt = \operatorname{Li}_2(x)\]
Switch to double integrals by using the following result: \[\int_0^x \frac{\arctan t}{1+xt}dt = \frac{\arctan x \ln(1+x^2)}{2x}\]
Consider switching to double integrals with: \[\frac{x}{\pi^2+x^2}=\Im\left(-\frac{1}{\pi+ix}\right)=-\Im\int_0^\infty e^{-(\pi+ix)t}dt\] It’s also really useful to try and see what happens when the Laplace transform of the cosine function is used instead, or the equivalent: \[\frac{x}{\pi^2+x^2}=\Re\left(\frac{1}{i\pi+x}\right)=\Re\int_0^\infty e^{-(i\pi+x)t}dt\]
Consider switching to double integrals using: \[\operatorname{Ci}^2(x)+\operatorname{si}^2(x)=\int_0^\infty \frac{e^{-xy}\ln(1+y^2)}{y}dy\]
Above \(\operatorname{Li}_2(x)\) denotes the dilogarithm function and \(\operatorname{Ci}(x)\), \(\operatorname{si}(x)\) are the cosine and the sine integral functions, defined as:
We already got familiar with a popular version of Feynman’s trick in the previous chapter. Now we will take a look at other interesting variants which, although they may appear less often, can still help expand the applicability of the technique.
We will start by taking a look at a much simpler case of Feynman’s trick, namely the situation in which it is enough to simply differentiate under the integral sign, without performing the “undo” step of integrating back.
As a small note, it’s true that “differentiating under the integral sign” tends to be used as an alternative name for Feynman’s trick; however, I prefer to reserve it for the variant where only differentiation takes place, i.e., when there’s no need to integrate the result back, since the name then describes quite literally what we are doing.
Let’s make this more clear by looking at the following integral:
We already know from the Hello, World! integral how \(\ln x\) can be simplified, since \(\frac{\partial}{\partial a}x^a = x^a \ln x\). However, introducing the parameter in that original form, as \(x^a \ln^2 x\), would just produce a third logarithm, which goes in the opposite direction.
Fortunately, if we take a step back, we can observe that once we find the result of \(\int_0^1 x^a dx\), differentiating it with respect to \(a\) gives us as many logarithms as we want. So, let’s put that integral to use.
Of course, the integral itself was quite simple this time, but the important takeaway is that we don’t always need to perform the “undo” step after differentiating under the integral sign, and sometimes knowing a general integral result can provide more useful integrals by differentiating it.
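For instance (SciPy assumed), differentiating \(\int_0^1 x^a\,dx=\frac{1}{a+1}\) twice with respect to \(a\) predicts \(\int_0^1 x^a\ln^2 x\,dx=\frac{2}{(a+1)^3}\), which is easy to confirm numerically:

```python
import numpy as np
from scipy.integrate import quad

# Second derivative of 1/(a+1) w.r.t. a is 2/(a+1)**3; each differentiation
# under the integral sign pulls down one factor of ln(x).
a = 2
val, _ = quad(lambda x: x**a * np.log(x) ** 2, 0, 1)
print(val, 2 / (a + 1) ** 3)  # both ≈ 0.0740741
```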
Further, we will take a look at how Feynman’s trick can be applied to indefinite integrals. Let’s consider:
In this form it makes no sense to differentiate the integral with respect to any parameter, but we can extend the integral with temporary bounds by writing:
After this we can go on to apply Feynman’s trick; however, first we are going to get rid of the square root via the substitution \(\frac{1}{\sqrt x}\to x\).
Here, we can notice that the derivative of \(ax-\frac{b}{x}\) is \(a+\frac{b}{x^2}\), so it would be quite helpful if we had that additional term. At the same time, if we differentiate the integrand with respect to \(b\), we produce \(a-\frac{b}{x^2}\), which is really useful since \((ax+b/x)^2\) is equal to \((ax-b/x)^2+4ab\) and the derivative of \(ax+\frac{b}{x}\) is \(a-\frac{b}{x^2}\). So let’s differentiate as mentioned above:
Where \(\operatorname{erfc}(x)\) is the complementary error function. Now we’ll go back to \(I(a,b,t)\), but we should be careful to replace the dummy variable \(b\) with something else, as the parameter \(b\) also appears in the bounds.
Or for the indefinite integral, this would lead to:
Next, we will take a look at how to combine Feynman’s trick with power series. For this we are going to look at:
We are already familiar with what to do when there is a logarithm in the denominator: we can get rid of it using \(\frac{d}{dt} x^t = x^t\ln x\). However, here the \(1-xy\) term also appears. To deal with it we’ll make use of the geometric series, namely \(\frac{1}{1-x}=\sum_{n=0}^\infty x^n\), but we will expand into series a bit later; for now, we continue with the following integral:
Now we have to get back to \(I(n)\):
And finally, we’ll put the geometric series to use.
So the result is simply \(1-2\gamma\), where \(\gamma\) is the Euler-Mascheroni constant.
In what’s to come we are going to take a look at a combination between Feynman’s trick and differential equations. Let’s consider the following integral:
We can start by parameterising the cosine function and then employing the accelerated Feynman’s trick:
We haven’t made much progress above, since we simply arrived at another integral with \(x\sin(tx)\) instead of \(\cos(tx)\), thus complexity is the same. However, as \(\frac{\partial}{\partial t}\cos(tx)\) is \(x\sin(tx)\), differentiating \(I(t)\) gives us a differential equation to work with, namely:
\[I’(t)=- \int_0^\infty \frac{x\sin(tx)}{1+x^2}dx = - I(t) \Rightarrow \frac{I’(t)}{I(t)}=-1\Rightarrow I(t) = C e^{-t}\]
\[I(0)=\int_0^\infty \frac{1}{1+x^2}dx=\frac{\pi}{2} \Rightarrow I(t)=\frac{\pi}{2}e^{-t}\]
\[ I = I(1) \Rightarrow I = \frac{\pi}{2e}\]
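The closed form \(\int_0^\infty\frac{\cos x}{1+x^2}dx=\frac{\pi}{2e}\) can be confirmed numerically (SciPy assumed); quad’s Fourier weight handles the slowly decaying oscillatory tail:

```python
import numpy as np
from scipy.integrate import quad

# weight='cos' with an infinite upper limit makes quad use a Fourier-integral
# routine (QAWF), which integrates cos(x)/(1+x**2) over [0, inf) reliably.
val, _ = quad(lambda x: 1 / (1 + x**2), 0, np.inf, weight='cos', wvar=1.0)
print(val, np.pi / (2 * np.e))  # both ≈ 0.577864
```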
As a small note on the starting step: although employing the accelerated Feynman’s trick was rather obvious, as it gets rid of the denominator, the additional introduction of the \(t\) parameter might seem weird at first. However, performing the same steps without this parameter gives us:
Which indicates that one might put the relation \(I(1)=-I’(1)\) to use by adding the additional \(t\) parameter.
So far we’ve seen the Feynman’s trick applied only when the parameter was inside the integrand, however it can also be used when the bounds are parameterised as well. More generally, the following holds:
We’ll put this to use with the integral from below.
Above we can see that the same \(\sqrt 2\) appears in both the lower bound and the \(\operatorname{arccosh}\) function, so we’ll parameterise the integral as:
We’re looking to find \(I=I\left(\sqrt 2\right)\), and since \(I\left(1\right)=0\), we have:
Now we’ll take a look at a fancier way to use Feynman’s trick, especially in order to generate new integrals, for this we’re considering:
Note that we are not trying to evaluate the above integral, instead we are simply using it in order to build up new integrals with the result that follows after differentiating w.r.t. \(t\).
We also have that \(I(\pi)=-\frac{\pi^2}{4}\) and \(I(0)=\frac{\pi^2}{8}\), therefore:
In retrospect, this integral also appeared as an exercise in the second chapter, and following the suggestion given there, we can evaluate it by applying Feynman’s trick to:
Admittedly, following this parameterisation is much more intuitive than what we’ve shown with the new variation; however, it’s also useful to have this trick in the bag.
To keep the practice going, listed underneath are some integrals that can be evaluated with one of the versions of Feynman’s trick described in this chapter.
Start by showing that: \[I(t)=\int_1^\infty \int_1^\infty e^{-t(x+y)}dxdy = \left(\frac{e^{-t}}{t}\right)^2\] Then differentiate both sides twice with respect to \(t\) and set \(t=1\).
Differentiate the following extended indefinite integral four times with respect to \(n\): \[ I(n,t) = \int_0^t \cos(nx) dx \]
Solve the resulting differential equation after differentiating twice the following integral: \[ I(t) = \int_0^\infty \frac{\sin^2 (tx)}{x^2(1+x^2)}dx \]
...
Read the original on zackyzz.github.io »
This note is to be removed before publishing as an RFC.¶
Discussion of this draft takes place on the HTTP working group mailing list (ietf-http-wg@w3.org), which is archived at https://lists.w3.org/Archives/Public/ietf-http-wg/.¶
Working Group information can be found at https://httpwg.org/; source code and issues list for this draft can be found at https://github.com/httpwg/http-extensions/labels/query-method.¶
The changes in this draft are summarized in Appendix C.14.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.“¶
This Internet-Draft will expire on 22 May 2026.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This specification defines the HTTP QUERY request method as a means of making a safe, idempotent request (Section 9.2 of [HTTP]) that encloses a representation describing how the request is to be processed by the target resource.¶
However, when the data conveyed is too voluminous to be encoded in the request’s URI, this pattern becomes problematic:¶
* often size limits are not known ahead of time, because a request can pass through many uncoordinated systems (but note that senders and recipients are recommended to support at least 8000 octets),¶
* expressing certain kinds of data in the target URI is inefficient because of the overhead of encoding that data into a valid URI,¶
* request URIs are more likely to be logged than request content, and may also turn up in bookmarks,¶
* encoding queries directly into the request URI effectively casts every possible combination of query inputs as distinct resources.¶
As an alternative to using GET, many implementations make use of the HTTP POST method to perform queries, as illustrated in the example below. In this case, the input to the query operation is passed as the request content as opposed to using the request URI’s query component.¶
A typical use of HTTP POST for requesting a query is:¶
In this variation, however, it is not readily apparent — absent specific knowledge of the resource and server to which the request is being sent — that a safe, idempotent query is being performed.¶
The QUERY method provides a solution that spans the gap between the use of GET and POST, with the example above being expressed as:¶
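Sketched concretely (the resource path, header values, and query content below are illustrative assumptions, not the draft’s own example):

```http
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded
Accept: application/json

q=smith&limit=10
```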
As with POST, the input to the query operation is passed as the content of the request rather than as part of the request URI. Unlike POST, however, the method is explicitly safe and idempotent, allowing functions like caching and automatic retries to operate.¶
Recognizing the design principle that any important resource ought to be identified by a URI, this specification describes how a server can assign URIs both to the query itself and to a specific query result, for later use in a GET request.¶
The QUERY method is used to initiate a server-side query. Unlike the GET method, which requests a representation of the resource identified by the target URI (as defined by Section 7.1 of [HTTP]), the QUERY method is used to ask the target resource to perform a query operation within the scope of that target resource.¶
The content of the request and its media type define the query. The origin server determines the scope of the operation based on the target resource.¶
Servers MUST fail the request if the Content-Type request field ([HTTP], Section 8.3) is missing or is inconsistent with the request content.¶
As for all HTTP methods, the target URI’s query part takes part in identifying the resource being queried. Whether and how it directly affects the result of the query is specific to the resource and out of scope for this specification.¶
QUERY requests are safe with regard to the target resource ([HTTP], Section 9.2.1) — that is, the client does not request or expect any change to the state of the target resource. This does not prevent the server from creating additional HTTP resources through which additional information can be retrieved (see Sections 2.3 and 2.4).¶
Furthermore, QUERY requests are idempotent ([HTTP], Section 9.2.2) — they can be retried or repeated when needed, for instance after a connection failure.¶
As per Section 15.3 of [HTTP], a 2xx (Successful) response code signals that the request was successfully received, understood, and accepted.¶
In particular, a 200 (OK) response indicates that the query was successfully processed and the results of that processing are enclosed as the response content.¶
The “Accept-Query” response header field can be used by a resource to directly signal support for the QUERY method while identifying the specific query format media type(s) that may be used.¶
Accept-Query contains a list of media ranges (Section 12.5.1 of [HTTP]) using “Structured Fields” syntax ([STRUCTURED-FIELDS]). Media ranges are represented by a List Structured Header Field of either Tokens or Strings, containing the media range value without parameters.¶
Media type parameters, if any, are mapped to Structured Field Parameters of type String or Token. The choice of Token vs. String is semantically insignificant. That is, recipients MAY convert Tokens to Strings, but MUST NOT process them differently based on the received type.¶
Media types do not map exactly to Tokens; for instance, media types allow a leading digit while Tokens do not. In cases like these, the String format needs to be used.¶
The only supported uses of wildcards are “*/*”, which matches any type, or “xxxx/*”, which matches any subtype of the indicated type.¶
The order of types listed in the field value is not significant.¶
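For illustration only (the media types below are arbitrary stand-ins, not registered query formats), a server supporting two query formats, one of whose subtypes starts with a digit and therefore cannot be expressed as a Token, might send:

```http
Accept-Query: application/sql, "example/2d-query"
```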
The value of the Accept-Query field applies to every URI on the server that shares the same path; in other words, the query component is ignored. If requests to the same resource return different Accept-Query values, the most recently received fresh value (per Section 4.2 of [HTTP-CACHING]) is used.¶
Although the syntax for this field appears to be similar to other fields, such as “Accept” (Section 12.5.1 of [HTTP]), it is a Structured Field and thus MUST be processed as specified in Section 4 of [STRUCTURED-FIELDS].¶
The QUERY method is subject to the same general security considerations as all HTTP methods as described in [HTTP].¶
It can be used as an alternative to passing request information in the URI (e.g., in the query component). This is preferred in some cases, as the URI is more likely to be logged or otherwise processed by intermediaries than the request content. In other cases, where the query contains sensitive information, the potential for logging of the URI might motivate the use of QUERY over GET.¶
If a server creates a temporary resource to represent the results of a QUERY request (e.g., for use in the Location or Content-Location field), assigns a URI to that resource, and the request contains sensitive information that cannot be logged, then that URI SHOULD be chosen such that it does not include any sensitive portions of the original request content.¶
Caches that normalize QUERY content incorrectly or in ways that are significantly different from how the resource processes the content can return an incorrect response if normalization results in a false positive.¶
A QUERY request from user agents implementing CORS (Cross-Origin Resource Sharing) will require a “preflight” request, as QUERY does not belong to the set of CORS-safelisted methods (see “Methods” in [FETCH]).¶
The examples below are for illustrative purposes only; if one needs to send queries that are actually this short, it is likely better to use GET.¶
The media type used in most examples is “application/x-www-form-urlencoded” (as used in POST requests from browser user clients, defined in “application/x-www-form-urlencoded” in [URL]). The Content-Length fields have been omitted for brevity.¶
The HTTP Method Registry (http://www.iana.org/assignments/http-methods) already contains three other methods with the properties “safe” and “idempotent”: “PROPFIND” ([RFC4918]), “REPORT” ([RFC3253]), and “SEARCH” ([RFC5323]).¶
It would have been possible to re-use any of these, updating it in a way that it matches what this specification defines as the new method “QUERY”. Indeed, the early stages of this specification used “SEARCH”.¶
The method name “QUERY” ultimately was chosen because:¶
* The alternatives use a generic media type for the request content (“application/xml”); the semantics of the request depends solely on the request content.¶
* Furthermore, they all originate from the WebDAV activity, about which many have mixed feelings.¶
* “QUERY” captures the relation with the URI’s query component well.¶
This section is to be removed before publishing as an RFC.¶
We thank all members of the HTTP Working Group for ideas, reviews, and feedback.¶
The following individuals deserve special recognition: Carsten Bormann, Mark Nottingham, Martin Thomson, Michael Thornburgh, Roberto Polli, Roy Fielding, and Will Hawkins.¶
Ashok Malhotra participated in early discussions leading to this specification:
Discussion on this HTTP method was reopened by Asbjørn Ulsberg during the HTTP Workshop in 2019:
...
Read the original on www.ietf.org »
In fairness to researchers, it can be difficult to run a randomized clinical trial for vitamin D supplements. That’s because most of us get the bulk of our vitamin D from sunlight. Our skin converts UVB rays into a form of the vitamin that our bodies can use. We get it in our diets, too, but not much. (The main sources are oily fish, egg yolks, mushrooms, and some fortified cereals and milk alternatives.)
The standard way to measure a person’s vitamin D status is to look at blood levels of 25-hydroxycholecalciferol (25(OH)D), which is formed when the liver metabolizes vitamin D. But not everyone can agree on what the “ideal” level is.
Even if everyone did agree on a figure, it isn’t obvious how much vitamin D a person would need to consume to reach this target, or how much sunlight exposure it would take. One complicating factor is that people respond to UV rays in different ways—a lot of that can depend on how much melanin is in your skin. Similarly, if you’re sitting down to a meal of oily fish and mushrooms and washing it down with a glass of fortified milk, it’s hard to know how much more you might need.
There is more consensus on the definition of vitamin D deficiency, though. (It’s a blood level below 30 nanomoles per liter, in case you were wondering.) And until we know more about what vitamin D is doing in our bodies, our focus should be on avoiding that.
For me, that means topping up with a supplement. The UK government advises everyone in the country to take a 10-microgram vitamin D supplement over autumn and winter. That advice doesn’t factor in my age, my blood levels, or the amount of melanin in my skin. But it’s all I’ve got for now.
...
Read the original on www.technologyreview.com »