10 interesting stories served every morning and every evening.
Web browsers are ubiquitous, but how do they work? This book explains, building a basic but complete web browser, from networking to JavaScript, in a couple thousand lines of Python.
Web Browser Engineering will be published by Oxford University Press before the end of the year. To get it as soon as it’s out, pre-order now!
Follow this book’s blog or Twitter for updates. You can also talk about the book with others in our discussion forum.
If you are enjoying the book, consider supporting us on Patreon.
Or just send us an email!
...
Read the original on browser.engineering »
The World Conker Championships is investigating cheating allegations after the men’s winner was found to have a steel chestnut in his pocket.
David Jakins won the annual title in Southwick, Northamptonshire, on Sunday for the first time after competing since 1977.
But the 82-year-old was found to have a metal replica in his pocket when he was searched by organisers after his victory.
The retired engineer has denied using the metal variety in the tournament.
Jakins was responsible for drilling and inserting strings into other competitors’ chestnuts as the competition’s top judge, known as the “King Conker”.
Alastair Johnson-Ferguson, who lost in the men’s final against Jakins, said he suspected “foul play”, the Telegraph reported.
The 23-year-old said: “My conker disintegrated in one hit, and that just doesn’t happen … I’m suspicious of foul play and have expressed my surprise to organisers.”
Kelci Banschbach, 34, from Indianapolis, defeated the men’s champion in the grand final to become the first American to win the competition. More than 200 people took part.
Jakins said: “I was found with the steel conker in my pocket, but I only carry [it] around with me for humour value and I did not use it during the event.
“Yes, I did help prepare the conkers before the tournament. But this isn’t cheating or a fix, and I didn’t mark the strings.”
St John Burkett, a spokesperson for the World Conker Championships, said the cheating claims were being investigated.
“Allegations of foul play have been received that somehow King Conker swapped his real conker for the metal one later found in his pocket.
“Players select conkers from a sack before each round.
“There are also suggestions that King Conker had marked the strings of harder nuts. We can confirm he was involved in drilling and lacing the nuts before the event.
More than 2,000 conkers had been prepared prior to the event.
...
Read the original on www.theguardian.com »
Routine dental X-rays are not backed by evidence—experts want it to stop
The actual recommendations might surprise you—along with the state of modern dentistry.
An expert looking at a dental X-ray and saying “look at that unnecessary X-ray,” probably.
Has your dentist ever told you that it’s recommended to get routine dental X-rays every year? My (former) dentist’s office did this year—in writing, even. And they claimed that the recommendation came from the American Dental Association.
It’s a common refrain from dentists, but it’s false. The American Dental Association does not recommend annual routine X-rays. And this is not new; it’s been that way for well over a decade.
The association’s guidelines from 2012 recommended that adults who don’t have an increased risk of dental caries (myself included) need only bitewing X-rays of the back teeth every two to three years. Even people with a higher risk of caries can go as long as 18 months between bitewings. The guidelines also note that X-rays should not be preemptively used to look for problems: “Radiographic screening for the purpose of detecting disease before clinical examination should not be performed,” the guidelines read. In other words, dentists are supposed to examine your teeth before they take any X-rays.
But, of course, the 2012 guidelines are outdated—the latest ones go further. In updated guidance published in April, the ADA doesn’t recommend any specific time window for X-rays at all. Rather, it emphasizes that patient exposure to X-rays should be minimized, and any X-rays should be clinically justified.
There’s a good chance you’re surprised. Dentistry’s overuse of X-rays is a problem dentists do not appear eager to discuss—and would likely prefer to skirt. My former dentist declined to comment for this article, for example, and other dentists have dodged the topic for years. Nevertheless, the problem is well-established. A New York Times article from 2016, titled “You Probably Don’t Need Dental X-Rays Every Year,” quoted a dental expert noting the exact problem:
“Many patients of all ages receive bitewing X-rays far more frequently than necessary or recommended. And adults in good dental health can go a decade between full-mouth X-rays.”
The problem has bubbled up again in a series of commentary pieces published in JAMA Internal Medicine today. The pieces were all sparked by a viewpoint that Ars reported on in May, in which three dental and health experts highlighted that many routine aspects of dentistry, including biannual cleanings, are not evidence-based and that the industry is rife with overdiagnosis and overtreatment. That viewpoint, titled “Too Much Dentistry,” also appeared in JAMA Internal Medicine.
The new pieces take a more specific aim at dental radiography. But, as in the May viewpoint, experts also blasted dentistry more generally for being out of step with modern medicine in its lack of data to support its practices—practices that continue amid financial incentives to overtreat and little oversight to stop it, they note.
In a piece titled “Too Much Dental Radiography,” Sheila Feit, a retired medical expert based in New York, pointed out that using X-rays for dental screenings is not backed by evidence. “Data are lacking about outcomes,” she wrote. If anything, the weak data we have makes it look ineffective. For instance, a 2021 systematic review of 77 studies that included data on a total of 15,518 tooth sites or surfaces found that using X-rays to detect early tooth decay led to a high degree of false-negative results. In other words, it led to missed cases.
Feit called for gold-standard randomized clinical trials to evaluate the risks and benefits of X-ray screenings for patients, particularly adults at low risk of caries. “Financial aspects of dental radiography also deserve further study,” Feit added. Overall, Feit called the May viewpoint “a timely call for evidence to support or refute common clinical dental practices.”
In a response published simultaneously in JAMA Internal Medicine, oral medicine expert Yehuda Zadik championed Feit’s point, calling it “an essential discussion about the necessity and risks of routine dental radiography, emphasizing once again the need for evidence-based dental care.”
Zadik, a professor of dental medicine at The Hebrew University of Jerusalem, noted that the overuse of radiography in dentistry is a global problem, one aided by dentistry’s unique delivery:
“Dentistry is among the few remaining health care professions where clinical examination, diagnostic testing including radiographs, diagnosis, treatment planning, and treatment are all performed in place, often by the same care practitioner,” Zadik wrote. “This model of care delivery prevents external oversight of the entire process.”
While routine X-rays continue at short intervals, Zadik notes that current data “favor the reduction of patient exposure to diagnostic radiation in dentistry,” while advancements in dentistry dictate that X-rays should be used at “longer intervals and based on clinical suspicion.”
Though the digital dental X-rays often used today provide smaller doses of radiation than the film X-rays used in the past, radiation’s harms are cumulative. Zadik emphasizes that with the primary tenet of medicine being “First, do no harm,” any unnecessary X-ray is an unnecessary harm. Further, other technology can sometimes be used instead of radiography, including electronic apex locators for root canal procedures.
“Just as it is now unimaginable that, in the past, shoe fittings for children were conducted using X-rays, in the future it will be equally astonishing to learn that the fit of dental crowns was assessed using radiographic imaging,” Zadik wrote.
X-rays do more harm than good in children
Feit’s commentary also prompted a reply from the three authors of the original May viewpoint: Paulo Nadanovsky, Ana Paula Pires dos Santos, and David Nunan. The three followed up on Feit’s point that data is weak on whether X-rays are useful for detecting early decay, specifically white spot lesions. The experts raise the damning point that even if dental X-rays were shown to be good at doing that, there’s still no evidence that that’s good for patients.
“[T]here is no evidence that detecting white spot lesions, with or without radiographs, benefits patients,” the researchers wrote. “Most of these lesions do not progress into dentine cavities,” and there’s no evidence that early treatments make a difference in the long run.
To bolster the point, the three note that data from children suggest that X-ray screening does more harm than good. In a randomized clinical trial published in 2021, 216 preschool children were split into two groups: one received only a visual-tactile dental exam, while the other received both a visual-tactile exam and X-rays. The study found that adding X-rays caused more harm than benefit because the X-rays led to false positives and overdiagnosis of cavitated caries needing restorative treatment. The authors of the trial concluded that “visual inspection should be conducted alone in regular clinical practice.”
Like Zadik, the three researchers note that screenings for decay and cavities are not the only questionable use of X-rays in dental practice. Other common dental and orthodontic treatments involving radiography—practices often used in children and teens—might also be unnecessary harms. They raise the argument against the preventive removal of wisdom teeth, which is also not backed by evidence.
Like Feit, the three researchers reiterate the call for well-designed trials to back up or refute common dental practices.
Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
...
Read the original on arstechnica.com »
This site is home to the documentation for the SQLite project’s WebAssembly- and JavaScript-related APIs, which enable the use of sqlite3 in modern WASM-capable browsers. These components were initially released for public beta with version 3.40 and will tentatively be made API-stable with the 3.41 release, pending community feedback.
Disclaimer: this site requires a modern, JavaScript-capable browser for full functionality. This site uses client-side storage for storing certain browsing preferences (like the bright/dark mode toggle) but does not store any user information server-side, except for logged-in developers. The only user-level details this site shares with any other systems are the public SCM-related details of this site’s own developers.
Making use of this project:
Third-party projects known to be using this project include (in order of their addition to this list)…
* Evolu is a local-first platform designed for privacy, ease of use, and no vendor lock-in.
* SQLiteNext provides a demo of integrating this project with next.js.
* sqlite-wasm-esm demonstrates how to use this project with the Vite toolchain.
* sqlite-wasm-http provides an SQLite VFS with read-only access to databases which are served directly over HTTP.
The following links reference articles and documentation published about SQLite WASM by third parties:
...
Read the original on sqlite.org »
Zamba2 performs exceptionally well on standard language modeling evaluation sets, especially given its latency and generation speed. Among small language models (≤8B), we lead the pack in both quality and performance.
Zyphra’s team is committed to democratizing advanced AI systems, exploring novel architectures on the frontier of performance, and advancing the scientific study and understanding of powerful models. We look forward to collaborating with others who share our vision.
Zamba2-7B will be released under an open source license, allowing researchers, developers, and companies to leverage its capabilities. We invite the broader AI community to explore Zamba’s unique architecture and continue pushing the boundaries of efficient foundation models. A Huggingface integration is available here, and a pure-pytorch implementation is available here.
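For readers who want to experiment, below is a minimal, hypothetical sketch of loading the model through the Hugging Face transformers API. The repository id "Zyphra/Zamba2-7B" and out-of-the-box transformers support are assumptions on my part, not details confirmed above, so consult the linked integration for the exact instructions.

```python
# Hypothetical usage sketch; the model id and transformers support are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-7B",
    torch_dtype=torch.bfloat16,  # half-precision weights to fit on a single large GPU
    device_map="auto",
)

prompt = "Small language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```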
Zamba2-7B was trained on 128 H100 GPUs for approximately 50 days using our internal training framework developed atop Megatron-LM. Zamba2-7B thus demonstrates that at the 7B scale the frontier is still reachable and surpassable with a small team and moderate budget.
Zamba2-7B utilizes and extends our original Zamba hybrid SSM-attention architecture. The core Zamba architecture consists of a backbone of Mamba layers interleaved with one or more shared attention layers (one shared attention in Zamba1, two in Zamba2). This attention has shared weights to minimize the parameter cost of the model. We find that concatenating the original model embeddings of the input to this attention block improves performance, likely due to better maintenance of information across depth. The Zamba2 architecture also applies LoRA projection matrices to the shared MLP to gain some additional expressivity in each block and allow each shared block to specialize slightly to its own unique position while keeping the additional parameter overhead small.
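To make the description above concrete, here is a highly simplified PyTorch sketch of the idea, not Zyphra's implementation: a single attention-plus-MLP block whose weights are shared across several depths, with the original input embeddings concatenated onto its input and a small per-call-site LoRA adapter on the shared MLP. All dimensions, the head count, and the wiring are illustrative assumptions.

```python
# Illustrative sketch only, not the actual Zamba2 code.
import torch
import torch.nn as nn

class SharedAttentionBlock(nn.Module):
    def __init__(self, d_model: int, n_call_sites: int, n_heads: int = 8, lora_rank: int = 8):
        super().__init__()
        d_in = 2 * d_model  # hidden state concatenated with the original input embeddings
        self.attn = nn.MultiheadAttention(d_in, n_heads, batch_first=True)
        self.shared_mlp = nn.Linear(d_in, d_model)  # weights shared across all call sites
        # One cheap low-rank (LoRA) adapter per depth at which the shared block is invoked.
        self.lora_down = nn.ModuleList(nn.Linear(d_in, lora_rank, bias=False) for _ in range(n_call_sites))
        self.lora_up = nn.ModuleList(nn.Linear(lora_rank, d_model, bias=False) for _ in range(n_call_sites))

    def forward(self, hidden: torch.Tensor, input_embeds: torch.Tensor, call_site: int) -> torch.Tensor:
        x = torch.cat([hidden, input_embeds], dim=-1)        # (batch, seq, 2 * d_model)
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        shared = self.shared_mlp(attn_out)
        lora = self.lora_up[call_site](self.lora_down[call_site](attn_out))
        return hidden + shared + lora                        # residual fed back into the Mamba backbone

# The same block reused at two depths of a (omitted) Mamba backbone.
block = SharedAttentionBlock(d_model=512, n_call_sites=2)
hidden = torch.randn(1, 16, 512)   # current hidden state
embeds = torch.randn(1, 16, 512)   # original token embeddings, kept around for concatenation
hidden = block(hidden, embeds, call_site=0)
hidden = block(hidden, embeds, call_site=1)
```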
Due to the exceptional quality of our pretraining and annealing datasets, Zamba2-7B performs extremely well on a per-training-token basis, sitting comfortably above the curve traced out by competitor models.
We present histograms depicting the distribution of cluster sizes in all the datasets (see Fig. 7-11). Please note that all the figures are in log-log scale. We see a significant drop in the number of clusters starting from a size of around 100. This drop is present both in DCLM and FineWeb-Edu2 (see Fig. 8 and 9 respectively) and is most likely explained by a combination of the deduplication strategy and quality filtering used when creating both datasets: DCLM deduplication was done individually within 10 shards, while FineWeb-Edu2 was deduplicated within every Common Crawl snapshot. We find that large clusters usually contain low-quality material (repeated advertisements, license agreement templates, etc.), so it’s not surprising that such documents were removed. Notably, DCLM still contained one cluster with a size close to 1 million documents, containing low-quality documents seemingly coming from advertisements (see Appendix).

We find that both Zyda-1 and Dolma-CC contain a small number of duplicates, which is expected, since both datasets were deduplicated globally by their authors. Remaining duplicates are likely false negatives from the initial deduplication procedure. Note that the distribution of duplicate-cluster sizes for these two datasets (Fig. 10 and 11) doesn’t contain any sharp drops, but rather decreases hyper-exponentially with cluster size.
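As a rough illustration of how such figures are produced (a generic sketch, not the authors' analysis code), one can count how many documents fall into each duplicate cluster and then plot the distribution of those cluster sizes on log-log axes:

```python
# Generic sketch: histogram of duplicate-cluster sizes on log-log axes.
from collections import Counter

import matplotlib.pyplot as plt

def plot_cluster_size_histogram(doc_to_cluster: dict[str, int], title: str) -> None:
    # Size of each duplicate cluster: how many documents map to the same cluster id.
    cluster_sizes = Counter(doc_to_cluster.values())
    # Distribution over sizes: how many clusters have exactly k documents.
    size_counts = Counter(cluster_sizes.values())
    sizes, counts = zip(*sorted(size_counts.items()))

    plt.figure()
    plt.scatter(sizes, counts, s=8)
    plt.xscale("log")  # the figures described above are in log-log scale
    plt.yscale("log")
    plt.xlabel("cluster size (documents)")
    plt.ylabel("number of clusters")
    plt.title(title)
    plt.show()
```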
Below is an example of a document from the largest cluster of duplicates in DCLM (~1M documents; quality score 0.482627):
Is safe? Is scam?
Is safe for your PC?
Is safe or is it scam?
Domain is SafeSafe score: 1
The higher the number, the more dangerous the website. Any number higher than 1 means DANGER.
Positive votes:
Negative votes:
Vote Up Vote Down review
Have you had bad experience with Warn us, please!
Below are a few documents from DCLM with different quality scores, all coming from the same duplicate cluster. The quality scores vary from ~0.2 to ~0.04.
...
Read the original on www.zyphra.com »
The C23 edition of Modern C is now available for free download from
This new edition has been the occasion to overhaul the presentation in many places, but its main purpose is the update to the new C standard, C23. The goal was to publish this new edition of Modern C at the same time as the new C standard goes through the procedure of ISO publication. The closest approximation of the contents of the new standard in a publicly available document can be found here. New releases of major compilers already implement most of the new features that it brings.
Among the most noticeable changes and additions that we handle are those for integers: there are new bit-precise types coined _BitInt(N), new C library headers <stdckdint.h> (for arithmetic with overflow check) and <stdbit.h> (for bit manipulation), possibilities for 128-bit types on modern architectures, and substantial improvements for enumeration types. Other new concepts in C23 include a nullptr constant and its underlying type, syntactic annotation with attributes, more tools for type-generic programming such as type inference with auto and typeof, default initialization with {}, even for variable length arrays, and constexpr for named constants of any type. Furthermore, new material has been added, discussing compound expressions and lambdas, so-called “internationalization”, and a comprehensive approach to program failure.
Also added are an appendix and a temporary include header for an easy transition to C23 on existing platforms, which will allow you to start off with C23 right away.
Manning’s early access program (MEAP) for the new edition is still open at
Unfortunately they were not yet able to tell me when their version of the C23 edition will finally be published.
...
Read the original on gustedt.wordpress.com »
Pumpkin is a Minecraft server built entirely in Rust, offering a fast, efficient, and customizable experience. It prioritizes performance and player enjoyment while adhering to the core mechanics of the game.
* Compatibility: Supports the latest Minecraft server version and adheres to vanilla game mechanics.
* Flexibility: Highly configurable with the ability to disable unnecessary features.
However, Pumpkin is not designed to:
* Be a drop-in replacement for vanilla or other servers
* Be compatible with plugins or mods for other servers
* Function as a framework for building a server from scratch.
Check out our Github Project to see current progress
See our Quick Start Guide to get Pumpkin running
Contributions are welcome! See CONTRIBUTING.md
The Documentation of Pumpkin can be found at https://snowiiii.github.io/Pumpkin/
Consider joining our discord to stay up-to-date on events, updates, and connect with other members.
If you want to fund me and help the project, check out my GitHub sponsors
A big thanks to wiki.vg for providing valuable information used in the development of this project.
...
Read the original on github.com »
...
Read the original on www.bloomberg.com »
Last year, I published an article on airships. The article was the fruit of a few years’ exploration of the industry. I spoke to multiple airship manufacturers under NDA. I traded ideas with other cargo airship enthusiasts. And ultimately, along with my friends Ian McKay and Matt Sattler, I hired an engineer to develop new data on a 500-ton cargo airship.
My article explained why airships could transform the freight market and offered my thoughts on how they should be designed and operated. Airships feature a tantalizing scaling property—the bigger they get, the better they perform. If you want a cargo airship that can compete in transpacific cargo, it needs to be big. No one in the industry was doing what I thought needed to be done—targeting the intercontinental cargo market with a large, rigid-body airship as quickly as possible using an iterative, hardware-rich approach.
Perhaps surprisingly, my airship article resonated with a lot of people. Veritasium made a great video based on it that has racked up 3.5 million views so far. Because so many people read the post and watched the video, I feel that I now must come clean and admit that I got something wrong. I aspire to be 100% accurate in all my posts, so I regret the error.
But I don’t regret it that much, because it turned out great.
One of the thousands of people who read my airship article was an engineer named Jim Coutre. Jim began his career at SpaceX, where he worked on complex multidisciplinary systems like the stage separation systems on Falcon 9 and the solar arrays on the Dragon capsule. He also spent several years at Hyperloop, where he rose to become chief engineer.
After reading the article, Jim started a spreadsheet in March 2023. He was methodical. He worked through all the systems that would be required to make an unmanned cargo airship work, and put cost and performance bounds around them. He devised a feature set to address cargo loading and unloading. He made manufacturing cost estimates. He did all the technoeconomics that were missing in my post.
Jim only found one major discrepancy. In my article, I noted that freight service operates in three tiers—ship, truck, and plane, or basically, slow, medium, and fast. There are no bridges across oceans, so on transpacific routes there is only slow and fast. There was an opportunity, I argued, to introduce a medium-speed mode at truck-like prices. Airships could be that medium-speed mode with truck-like economics.
The problem is that today’s air freight service is not as fast as I had assumed. You can cross the Pacific in a plane in less than a day. You can pay for parcel service that will get you your package in 2 to 3 days. But for air freight service, end-to-end delivery takes a week or more, involving multiple parties: in addition to the air carrier and freight forwarder, at both the origin and destination, there is a trucking company, a warehouse, a customs broker, and an airport. Each touchpoint adds cost, delay, and the risk of theft or breakage.
Once you account for all these delays and costs, the 4 to 5 days it takes to cross the Pacific on an airship starts to look pretty good. If you can pick up goods directly from a customer on one side and deliver them directly to a customer on the other, you can actually beat today’s air freight service on delivery time.
This changes everything. Since airships are, after all, competitive with 747s on delivery time, you can earn the full revenue associated with air freight, not just the lower trucking rates I had assumed. Cargo airship margins, therefore, can be much higher than I had realized.
Today’s 747 freighters have almost no margin. They operate in an almost perfectly competitive market and are highly sensitive to fuel costs. They simply won’t be able to compete with transpacific airships that are faster end to end, less subject to volatile fuel prices, and operating with cushy margins. A cargo airship designed to compete head to head in the air freight market could take the lion’s share of the revenue in the air cargo market while being highly profitable.
Last year, Casey Handmer introduced me to Jim, with and for whom he worked at Hyperloop. I got to know Jim a bit. We met up in person at Edge Esmeralda in June—he presented at the Hard Tech Weekend there that Ben Reinhardt and I co-organized. We talked about airships a lot. We strategized. We initiated some customer conversations. We received validation.
Over the summer, Jim incorporated Airship Industries. He hired a team of cracked ex-SpaceX engineers. And he raised a large pre-seed round, in which I’m delighted to say that my syndicate (which you should possibly join) is the largest investor.
Airship Industries is designing its vehicle to dominate transoceanic air freight. It checks all the right boxes. It shortens end-to-end freight delivery time. It lowers freight handling costs, delays, and breakage. It’s highly profitable on a unit basis. It lowers fuel burn and carbon emissions by 75 percent without any sustainable fuel breakthroughs.
In my last airship article, I expressed some doubt that cargo airships were startupable. Airship development and certification is capital intensive. I thought and still believe that airships can very likely match the economics of trucks. But even if you succeed at building a great cargo airship, if you are limited to charging trucking prices, the margins will be very thin. No one wants to invest a lot of capital in a risky endeavor for thin margins. But if, as I am now convinced, operating margins could be huge by competing directly in the air freight market, then airships are definitely startupable.
Here’s another way to look at it. Many software investors eschew hard tech startups because of their capital intensity, but it’s hard to deny that huge returns are possible in hard tech: just consider SpaceX. Bring me another SpaceX! the reluctant investors might say.
But even SpaceX looks like small potatoes next to an industry like global logistics. For a Falcon 9-sized investment, instead of revolutionizing a $2 billion/year (10 years ago) commercial launch market, you could transform a market that is at least 30 times bigger, with similar unit economics to SpaceX.
I am thrilled to see Airship Industries take shape. It’s happening. There will soon (soon in the grand scheme of things at least) be thousands of giant airships crossing our oceans, transforming global logistics and connecting economies. Cargo airships are going to be big.
...
Read the original on www.elidourado.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.