10 interesting stories served every morning and every evening.




1 1,060 shares, 43 trendiness

I Now Assume that All Ads on Apple News Are Scams

In 2024, Apple signed a deal with Taboola to serve ads in its app, no­tably Apple News. John Gruber, writ­ing in Daring Fireball said at the time:

If you told me that the ads in Apple News have been sold by Taboola for the last few years, I’d have said, “Oh, that makes sense.” Because the ads in Apple News — at least the ones I see — already look like chumbox Taboola ads. Even worse, they’re incredibly repetitious.

I use Apple News to keep up on top­ics that I don’t find in sources I pay for (The Guardian and The New York Times). But there’s no way I’m go­ing to pay the ex­or­bi­tant price Apple wants for Apple News+ — £13 — be­cause, while you get more pub­li­ca­tions, you still get ads.

And those ads have got­ten worse re­cently. Many if not most of them look like and prob­a­bly are scams. Here are a few ex­am­ples from Apple News to­day.

Here are three ads that are scammy; the first two were clearly gen­er­ated by AI, and the third may have been cre­ated by AI.

Why are they scams? When I looked up the registration records for their domains, I found that they were registered very recently.

This re­cent reg­is­tra­tion does­n’t nec­es­sar­ily mean they are scams, but they don’t in­spire much con­fi­dence.

Here’s one example: this ad from Tidenox, whose website says “I am retiring”, shows a photo of an elderly woman who says, “For 26 years, Tidenox has been port of your journey in creating earth and comfort at home” (sic). The image of the retiring owner is probably made by AI. (Update: someone on Hacker News pointed out the partly masked Google Gemini logo on the bottom right. I hadn’t spotted that, in part because I don’t use any AI image generation tools.)

These fake “going out of business” ads have been around for a few years, and even the US Better Business Bureau warns about them, as they take people’s money and then shut down. Does Apple care? Does Taboola care? Does Apple care that Taboola serves ads like this? My guess: no, no, and no.

Note the reg­is­tra­tion date for the tide­nox.com do­main. It’s nowhere near 26 years old, and it’s reg­is­tered in China:

Shame on Apple for cre­at­ing a hon­ey­pot for scam ads in what they con­sider to be a pre­mium news ser­vice. This com­pany can­not be trusted with ads in its prod­ucts any more.

...

Read the original on kirkville.com »

2 930 shares, 47 trendiness

A New Frontier For Autonomous Driving Simulation

The Waymo Driver has traveled nearly 200 million fully autonomous miles, becoming a vital part of the urban fabric in major U.S. cities and improving road safety. What riders and local communities don’t see is our Driver navigating billions of miles in virtual worlds, mastering complex scenarios long before it encounters them on public roads. Today, we are excited to introduce the Waymo World Model, a frontier generative model that sets a new bar for large-scale, hyper-realistic autonomous driving simulation.

Simulation of the Waymo Driver evading a vehicle going in the wrong direction. The simulation initially follows a real event, and seamlessly transitions to using camera and lidar images automatically generated by an efficient real-time Waymo World Model.

Simulation is a critical component of Waymo’s AI ecosystem and one of the three key pillars of our approach to demonstrably safe AI. The Waymo World Model, which we detail below, is the component responsible for generating hyper-realistic simulated environments.

The Waymo World Model is built upon Genie 3—Google DeepMind’s most advanced general-purpose world model that generates photorealistic and interactive 3D environments—and is adapted for the rigors of the driving domain. By leveraging Genie’s immense world knowledge, it can simulate exceedingly rare events—from a tornado to a casual encounter with an elephant—that are almost impossible to capture at scale in reality. The model’s architecture offers high controllability, allowing our engineers to modify simulations with simple language prompts, driving inputs, and scene layouts. Notably, the Waymo World Model generates high-fidelity, multi-sensor outputs that include both camera and lidar data.

This combination of broad world knowledge, fine-grained controllability, and multi-modal realism enhances Waymo’s ability to safely scale our service across more places and new driving environments. In the following sections we showcase the Waymo World Model in action, featuring simulations of the Waymo Driver navigating diverse rare edge-case scenarios.

Most simulation models in the autonomous driving industry are trained from scratch on only the on-road data they collect. That approach means the system learns only from limited experience. Genie 3’s strong world knowledge, gained from its pre-training on an extremely large and diverse set of videos, allows us to explore situations that were never directly observed by our fleet.

Through our specialized post-training, we are transferring that vast world knowledge from 2D video into 3D lidar outputs unique to Waymo’s hardware suite. While cameras excel at depicting visual details, lidar sensors provide valuable complementary signals like precise depth. The Waymo World Model can generate virtually any scene—from regular, day-to-day driving to rare, long-tail scenarios—across multiple sensor modalities.

Simulation: Driving on the Golden Gate Bridge, covered in light snow. Waymo’s shadow is visible in the front camera footage.

Simulation: Driving on a street with lots of palm trees in a tropical city, strangely covered in snow.

Simulation: The leading vehicle driving into the tree branches.

Simulation: Driving behind a vehicle with precariously positioned furniture on top.

Simulation: A malfunctioning truck facing the wrong way, blocking the road.

In the interactive viewers below, you can immersively view the realistic 4D point clouds generated by the Waymo World Model.

Interactive 3D visualization of an encounter with an elephant.

The Waymo World Model offers strong simulation controllability through three main mechanisms: driving action control, scene layout control, and language control.

Driving action control allows us to have a responsive simulator that adheres to specific driving inputs. This enables us to simulate “what if” counterfactual events, such as whether the Waymo Driver could have safely driven more confidently instead of yielding in a particular situation.

Counterfactual driving. We demonstrate simulations both under the original route in a past recorded drive and under a completely new route. While purely reconstructive simulation methods (e.g., 3D Gaussian Splats, or 3DGS) suffer from visual breakdowns due to missing observations when the simulated route is too different from the original driving, the fully learned Waymo World Model maintains good realism and consistency thanks to its strong generative capabilities.

Scene layout control allows for customization of the road layouts, traffic signal states, and the behavior of other road users. This way, we can create custom scenarios via selective placement of other road users, or by applying custom mutations to road layouts.

Language control is our most flexible tool, allowing us to adjust time of day, weather conditions, or even generate an entirely synthetic scene (such as the long-tail scenarios shown previously).

During a scenic drive, it is common to record videos of the journey on mobile devices or dashcams, perhaps capturing piled-up snow banks or a highway at sunset. The Waymo World Model can convert those kinds of videos, or any taken with a regular camera, into a multimodal simulation—showing how the Waymo Driver would see that exact scene. This process enables the highest degree of realism and factuality, since simulations are derived from actual footage.

Some scenes we want to simulate may take longer to play out, for example, negotiating passage in a narrow lane. That’s harder to do because the longer the simulation, the tougher it is to compute and maintain stable quality. However, through a more efficient variant of the Waymo World Model, we can simulate longer scenes with a dramatic reduction in compute while maintaining high realism and fidelity, enabling large-scale simulations.

🚀 Long rollout (4x speed playback) on an efficient variant of the Waymo World Model

Navigating around an in-lane stopper and fast traffic on the freeway.

Driving up a steep street and safely navigating around motorcyclists.

By simulating “the impossible”, we proactively prepare the Waymo Driver for some of the most rare and complex scenarios. This creates a more rigorous safety benchmark, ensuring the Waymo Driver can navigate long-tail challenges long before it encounters them in the real world.

The Waymo World Model is enabled by the key research, engineering and evaluation contributions from James Gunn, Kanaad Parvate, Lu Liu, Lucas Deecke, Luca Bergamini, Zehao Zhu, Raajay Viswanathan, Jiahao Wang, Sakshum Kulshrestha, Titas Anciukevičius, Luna Yue Huang, Yury Bychenkov, Yijing Bai, Yichen Shen, Stefanos Nikolaidis, Tiancheng Ge, Shih-Yang Su and Vincent Casser.

We thank Chulong Chen, Mingxing Tan, Tom Walters, Harish Chandran, David Wong, Jieying Chen, Smitha Shyam, Vincent Vanhoucke and Drago Anguelov for their support in defining the vision for this project, and for their strong leadership and guidance throughout.

We would like to additionally thank Jon Pedersen, Michael Dreibelbis, Larry Lansing, Sasho Gabrovski, Alan Kimball, Dave Richardson, Evan Birenbaum, Harrison McKenzie Chapter and Pratyush Chakraborty, Khoa Vo, Todd Hester, Yuliang Zou, Artur Filipowicz, Sophie Wang and Linn Bieske for their invaluable partnership in facilitating and enabling this project.

We thank our partners from Google DeepMind: Jack Parker-Holder, Shlomi Fruchter, Philip Ball, Ruiqi Gao, Songyou Peng, Ben Poole, Fei Xia, Allan Zhou, Sean Kirmani, Christos Kaplanis, Matt McGill, Tim Salimans, Ruben Villegas, Xinchen Yan, Emma Wang, Woohyun Han, Shan Han, Rundi Wu, Shuang Li, Philipp Henzler, Yulia Rubanova, and Thomas Kipf for helpful discussions and for sharing invaluable insights for this project.

...

Read the original on waymo.com »

3 631 shares, 40 trendiness

OpenCiv3 Home

OpenCiv3 (formerly known by the codename C7) is an open-source, cross-platform, mod-oriented, modernized reimagining of Civilization III by the fan community, built with the Godot Engine and C#, with capabilities inspired by the best of the 4X genre and lessons learned from modding Civ3. Our vision is to make Civ3 as it could have been, rebuilt for today’s modders and players: removing arbitrary limits, fixing broken features, expanding mod capabilities, and supporting modern graphics and platforms. A game that can go beyond C3C but retain all of its gameplay and content.

OpenCiv3 is un­der ac­tive de­vel­op­ment and cur­rently in an early pre-al­pha state. It is a rudi­men­tary playable game but lack­ing many me­chan­ics and late-game con­tent, and er­rors are likely. Keep up with our de­vel­op­ment for the lat­est up­dates and op­por­tu­ni­ties to con­tribute!

New Players Start Here: An Introduction to OpenCiv3 at CivFanatics

NOTE: OpenCiv3 is not af­fil­i­ated with civ­fa­nat­ics.com, Firaxis Games, BreakAway Games, Hasbro Interactive, Infogrames Interactive, Atari Interactive, or Take-Two Interactive Software. All trade­marks are prop­erty of their re­spec­tive own­ers.

The OpenCiv3 team is pleased to announce the first preview release of the v0.3 “Dutch” milestone. This is a major enhancement over the “Carthage” release, and our debut with standalone mode featuring placeholder graphics without the need for Civ3 media files. A local installation of Civ3 is still recommended for a more polished experience. See the release notes for a full list of new features in each version.

OpenCiv3 Dutch Preview 1 with the same game in Standalone mode (top) and with im­ported Civ3 graph­ics (bottom)

Download the ap­pro­pri­ate zip file for your OS from the Dutch Preview 1 re­lease

All of­fi­cial re­leases of OpenCiv3 along with more de­tailed re­lease notes can be found on the GitHub re­leases page.

64-bit Windows, Linux, or Mac OS. Other plat­forms may be sup­ported in fu­ture re­leases.

Minimum hard­ware re­quire­ments have not yet been iden­ti­fied. Please let us know if OpenCiv3 does not per­form well on your sys­tem.

Recommended: A lo­cal copy of Civilization III files (the game it­self does NOT have to run) from Conquests or the Complete edi­tion. Standalone mode is avail­able with place­holder graph­ics for those who do not have a copy.

Civilization III Complete is avail­able for a pit­tance from Steam or GOG

This is a Windows 64-bit ex­e­cutable. OpenCiv3 will look for a lo­cal in­stal­la­tion of Civilization III in the Windows reg­istry au­to­mat­i­cally, or you may use an en­vi­ron­ment vari­able to point to the files.

If the download is blocked, you may need to unblock it: right-click the zip file, select Properties, and check the “Unblock” checkbox near the bottom buttons in the “Security” section

If your Civilization III in­stal­la­tion is not de­tected, you can set the en­vi­ron­ment vari­able CIV3_HOME point­ing to it and restart OpenCiv3

This is an x86-64 Linux executable. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (sans “Complete” if your install was from pre-Complete CDs) and its contents to your Linux system, or install the game via Steam or GOG.

Set the CIV3_HOME en­vi­ron­ment vari­able to point to the Civ3 files, e.g. ex­port CIV3_HOME=“/path/to/civ3”

From that same ter­mi­nal where you set CIV3_HOME, run OpenCiv3.x86_64

To make this vari­able per­ma­nent, add it to your .profile or equiv­a­lent.

This is a universal 64-bit executable, so it should run on both Intel and M1 Macs. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (sans “Complete” if your install was from pre-Complete CDs) and its contents to your Mac system, or install the game via Steam or GOG.

Download the zip; your browser may complain bitterly, and you may have to tell it to keep the download instead of trashing it

Double click the zip file, and a folder with OpenCiv3.app and a json file will ap­pear

If you try to open OpenCiv3.app it will tell you it’s dam­aged and try to trash it; it is not dam­aged

To un­block the down­loaded app, from a ter­mi­nal run xattr -cr /path/to/OpenCiv3.app; you can avoid typ­ing the path out by typ­ing xattr -cr and then drag­ging the OpenCiv3.app icon onto the ter­mi­nal win­dow

Set the CIV3_HOME en­vi­ron­ment vari­able to point to the Civ3 files, e.g. ex­port CIV3_HOME=“/path/to/civ3”

From that same ter­mi­nal where you set CIV3_HOME, run OpenCiv3.app with open /path/to/OpenCiv3.app, or again just type open and drag the OpenCiv3 icon onto the ter­mi­nal win­dow and press en­ter

OpenCiv3 uses many prim­i­tive place­holder as­sets; load­ing files from a lo­cal Civilization III in­stall is rec­om­mended (see plat­form spe­cific setup in­struc­tions above)

Support for play­ing Civ3 BIQ or SAV files is in­com­plete; some files will not load cor­rectly and crashes may oc­cur

For Mac:

Mac will try hard not to let you run this; it will tell you the app is dam­aged and can’t be opened and help­fully of­fer to trash it for you. From a ter­mi­nal you can xattr -cr /path/to/OpenCiv3.app to en­able run­ning it.

Mac will crash if you hit buttons to start a new game (New Game, Quick Start, Tutorial, or Load Scenario) because it can’t find the “new game” save file we’re using as a stand-in for map generation. But you can use Load Game and load c7-static-map-save.json or open a Civ3 SAV file to open that map

Other spe­cific bugs will be tracked on the GitHub is­sues page.

© OpenCiv3 con­trib­u­tors. OpenCiv3 is free and open source soft­ware re­leased un­der the MIT License.

...

Read the original on openciv3.org »

4 478 shares, 23 trendiness

Animated Experience

...

Read the original on hackers-1995.vercel.app »

5 404 shares, 20 trendiness

An Update on Heroku

Today, Heroku is tran­si­tion­ing to a sus­tain­ing en­gi­neer­ing model fo­cused on sta­bil­ity, se­cu­rity, re­li­a­bil­ity, and sup­port. Heroku re­mains an ac­tively sup­ported, pro­duc­tion-ready plat­form, with an em­pha­sis on main­tain­ing qual­ity and op­er­a­tional ex­cel­lence rather than in­tro­duc­ing new fea­tures. We know changes like this can raise ques­tions, and we want to be clear about what this means for cus­tomers.

There is no change for cus­tomers us­ing Heroku to­day. Customers who pay via credit card in the Heroku dash­board—both ex­ist­ing and new—can con­tinue to use Heroku with no changes to pric­ing, billing, ser­vice, or day-to-day us­age. Core plat­form func­tion­al­ity, in­clud­ing ap­pli­ca­tions, pipelines, teams, and add-ons, is un­af­fected, and cus­tomers can con­tinue to rely on Heroku for their pro­duc­tion, busi­ness-crit­i­cal work­loads.

Enterprise Account con­tracts will no longer be of­fered to new cus­tomers. Existing Enterprise sub­scrip­tions and sup­port con­tracts will con­tinue to be fully hon­ored and may re­new as usual.

We’re fo­cus­ing our prod­uct and en­gi­neer­ing in­vest­ments on ar­eas where we can de­liver the great­est long-term cus­tomer value, in­clud­ing help­ing or­ga­ni­za­tions build and de­ploy en­ter­prise-grade AI in a se­cure and trusted way.

...

Read the original on www.heroku.com »

6 359 shares, 15 trendiness

microsoft/litebox: A security-focused library OS supporting kernel- and user-mode execution

This pro­ject is cur­rently ac­tively evolv­ing and im­prov­ing. While we are work­ing to­ward a sta­ble re­lease, some APIs and in­ter­faces may change as the de­sign con­tin­ues to ma­ture. You are wel­come to ex­plore and ex­per­i­ment, but if you need long-term sta­bil­ity, it may be best to wait for a sta­ble re­lease, or be pre­pared to adapt to up­dates along the way.

LiteBox is a sandboxing library OS that drastically cuts down the interface to the host, thereby reducing attack surface. It focuses on easy interop of various “North” shims and “South” platforms. LiteBox is designed for usage in both kernel and non-kernel scenarios.

LiteBox exposes a Rust-y nix/rustix-inspired “North” interface when it is provided a Platform interface at its “South”. These interfaces support a wide variety of use cases, allowing connection between any of the North–South pairs.

See the fol­low­ing files for de­tails:

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.

...

Read the original on github.com »

7 323 shares, 18 trendiness

Design at the speed of light


...

Read the original on vecti.com »

8 281 shares, 14 trendiness

Understanding Neural Network, Visually

An in­ter­ac­tive vi­su­al­iza­tion to un­der­stand how neural net­works work


I’ve al­ways been cu­ri­ous about how AI works.

But with the con­stant news and up­dates, I of­ten feel over­whelmed try­ing to keep up with it all.

So I de­cided to go back to the ba­sics and start learn­ing from the be­gin­ning, with neural net­works.

When I’m learn­ing, I find it eas­ier to un­der­stand how things work when I can vi­su­al­ize them in my mind.

So I made this vi­su­al­iza­tion, and I’m shar­ing it now.

I’m just hop­ing it can also be use­ful for those of you who are cu­ri­ous about AI and want to learn from the ba­sics.

But a quick dis­claimer: I’m not an ex­pert, and I might get things wrong here and there.

If you spot any­thing off, just let me know. I’d love to learn from you too!

So, what ex­actly is a neural net­work?

A neural net­work is in­spired by the struc­ture and func­tions of bi­o­log­i­cal neural net­works.

It works by tak­ing some data as in­put and pro­cess­ing it through a net­work of neu­rons.

Inside each neu­ron, there’s a rule that de­cides whether it should be ac­ti­vated.

When that hap­pens, it means the neu­ron found a pat­tern in the data that it has learned to rec­og­nize.

This process re­peats as the data moves through the lay­ers of the net­work.

The pat­tern of ac­ti­va­tion in the fi­nal layer rep­re­sents the out­put of the task.

Let’s start with a sim­ple use case for a neural net­work: rec­og­niz­ing a hand­writ­ten num­ber.

In this case, the in­put is an im­age of a num­ber, and we want the neural net­work to tell us what num­ber it is.

The out­put is de­ter­mined by which neu­rons in the last layer get ac­ti­vated.

Each one cor­re­sponds to a num­ber, and the one with the high­est ac­ti­va­tion tells us the net­work’s pre­dic­tion.

To do this, first we need to turn the im­age into data that the neural net­work can un­der­stand.

In this ex­am­ple, the data will be the bright­ness value of each pixel in the im­age.

The neu­ron will re­ceive a value de­pend­ing on how bright or dark that part of the im­age is.

The darker an area is (which means there is something written there), the higher the value that neuron will have.

Once this process is fin­ished, the in­put neu­rons will now have val­ues that re­sem­ble the in­put im­age.

These in­put val­ues are then passed on to the next layer of neu­rons to process.

But here’s the key part: be­fore be­ing passed, each value is mul­ti­plied by a cer­tain weight.

These weights will be var­ied for each con­nec­tion.

It might be pos­i­tive, neg­a­tive, less than 1, or more than 1.

The re­ceiv­ing neu­ron will then sum up all the weighted val­ues it gets.

Then comes the rule we men­tioned ear­lier — usu­ally called an ac­ti­va­tion func­tion.

There are ac­tu­ally dif­fer­ent types of ac­ti­va­tion func­tions, but let’s use a sim­ple rule for now:

If the total value is greater than a threshold, the neuron activates. Otherwise, it stays inactive.

If the neu­ron gets ac­ti­vated, it means it rec­og­nized some­thing in the im­age. Maybe a line, a curve, or a part of a num­ber.
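The weighted-sum-then-threshold rule described above can be sketched in a few lines of Python; the inputs, weights, and threshold below are made-up illustration values, not learned ones:

```python
# A single artificial neuron with a simple step activation:
# multiply each input by its connection weight, sum the results,
# and fire only if the total exceeds the threshold.

def neuron(inputs, weights, threshold):
    # Weighted sum over all incoming connections
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step activation: 1 (active) if above threshold, else 0 (inactive)
    return 1 if total > threshold else 0

# Three pixel brightness values feeding one neuron
print(neuron([0.9, 0.1, 0.8], [0.5, -1.0, 0.7], 0.6))
# total = 0.45 - 0.1 + 0.56 = 0.91 > 0.6, so the neuron activates
```

Note how a negative weight lets the neuron penalize brightness in a region where it does not expect anything to be written.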

Now imag­ine we have to do these same op­er­a­tions for every neu­ron in the next layer.

Each neuron has its own weights and threshold value, so it will react differently to the same input image.

To put it dif­fer­ently, each neu­ron is look­ing for a dif­fer­ent pat­tern in the im­age.

This process re­peats layer by layer un­til we reach the fi­nal layer.

At each layer, the neu­rons process the pat­terns de­tected by the pre­vi­ous layer, build­ing on them to rec­og­nize more com­plex pat­terns.

Until fi­nally, in the last layer, the net­work has enough in­for­ma­tion to de­duce what num­ber is in the im­age.

So that’s ba­si­cally how a neural net­work works in a nut­shell.

It’s a se­ries of sim­ple math op­er­a­tions that process in­put data to pro­duce an out­put.

With the right com­bi­na­tion of weights and thresh­olds, the net­work can learn to map in­puts to the right out­puts.

In this case, it’s used to map an im­age of a hand­writ­ten num­ber to the cor­rect num­ber.
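Putting the layers together, a toy forward pass might look like the sketch below; the layer sizes and random weights are illustrative stand-ins, not a trained network:

```python
# A tiny fully connected network run layer by layer: each layer's
# activations become the next layer's inputs, and the index of the
# most activated final neuron is the predicted digit.
import random

random.seed(0)

def step(total, threshold=0.0):
    # Simple step activation, as in the text
    return 1.0 if total > threshold else 0.0

def layer(inputs, weights):
    # One output per neuron: weighted sum of all inputs, then activation
    return [step(sum(x * w for x, w in zip(inputs, ws))) for ws in weights]

# 4 input "pixels" -> 3 hidden neurons -> 10 output neurons (digits 0-9)
hidden_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
output_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]

pixels = [0.9, 0.2, 0.7, 0.1]           # brightness values of the image
hidden = layer(pixels, hidden_w)         # patterns found in the raw pixels
scores = layer(hidden, output_w)         # one activation per digit
prediction = scores.index(max(scores))   # most activated output neuron
print(prediction)
```

With random weights the prediction is meaningless; training is exactly the search for weights and thresholds that make the right output neuron fire.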

I’ll stop here for now.

So far, we’ve looked at what a neural net­work is, how it reads in­put, per­forms cal­cu­la­tions, and gives an out­put.

But we haven’t an­swered the im­por­tant ques­tion:

How do we find the right weights and right thresh­olds, so that the cor­rect neu­ron is ac­ti­vated?

That part’s a lit­tle tricky — I’m still try­ing to wrap my head around it and find a good way to vi­su­al­ize it.

So I won’t go into it just yet.

But for now, I hope this gives you a ba­sic un­der­stand­ing of how neural net­works work.

See you in the next one 👋

vi­su­al­ram­bling.space is a per­sonal pro­ject by Damar, some­one who loves to learn about dif­fer­ent top­ics and ram­bling about them vi­su­ally.

If you also love this kind of stuff, feel free to fol­low me. I’ll try to post more con­tent like this in the fu­ture!

...

Read the original on visualrambling.space »

9 275 shares, 19 trendiness

Split a recovery key among friends

This is a tool that en­crypts files and splits the de­cryp­tion key among trusted friends us­ing Shamir’s Secret Sharing. For ex­am­ple, you can give pieces to 5 friends and re­quire any 3 of them to co­op­er­ate to re­cover the key. No sin­gle friend can ac­cess your data alone.

Each friend re­ceives a self-con­tained bun­dle with re­cover.html—a browser-based tool that works of­fline, with no servers or in­ter­net re­quired. If this web­site dis­ap­pears, re­cov­ery still works.

Your file is en­crypted, the key is split into shares, and friends com­bine shares to re­cover it.
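As an illustration of the splitting step, here is a minimal Python sketch of Shamir's Secret Sharing over a prime field. It shows the 3-of-5 idea only; the tool's actual share format and parameters differ:

```python
# Shamir's Secret Sharing: hide the secret as the constant term of a
# random degree-2 polynomial, hand out 5 points on it, and recover the
# secret from any 3 points by Lagrange interpolation at x = 0.
import random

PRIME = 2**127 - 1  # a prime large enough for a 16-byte key

def split(secret, n=5, k=3):
    # Random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 reconstructs the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789)
print(recover(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

With only 2 shares, every possible secret remains equally likely, which is why no single friend (or pair of friends) learns anything about the key.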

Different friend com­bi­na­tions can re­cover the file (any 3 of 5)

Add Bob’s and Carol’s shares (drag their README.txt files onto the page)

Watch the au­to­matic de­cryp­tion when thresh­old is met

This is the best way to un­der­stand what your friends would ex­pe­ri­ence dur­ing a real re­cov­ery.

* The code is open source—you can read it on GitHub

* Everything runs lo­cally in your browser; your files don’t leave your de­vice

* Try the demo bun­dles first to see ex­actly how it works be­fore us­ing it with real se­crets

I wanted a way to en­sure trusted friends could ac­cess im­por­tant files if some­thing hap­pened to me—with­out trust­ing any sin­gle per­son or ser­vice with every­thing. Shamir’s Secret Sharing seemed like the right ap­proach, but I could­n’t find a tool that gave friends a sim­ple, self-con­tained way to re­cover files to­gether. So I built one. I’m shar­ing it in case it’s use­ful to oth­ers.

...

Read the original on eljojo.github.io »

10 244 shares, 15 trendiness

How to effectively write quality code with AI

Write high-level specifications and tests yourself
Find and mark functions that have a high security risk
Do not generate blindly or too much complexity at once


You are a human: you know how this world behaves, how your team and colleagues behave, and what your users expect. You have experienced the world, and you want to work together with a system that has no experience of the world you live in. Every decision in your project that you don’t make and document will be made for you by the AI.

You cannot meet your responsibility to deliver quality code if even you don’t know where long-lasting, difficult-to-change decisions are being made.

You must know what parts of your code need to be thought through and what must be vig­or­ously tested.

Think about and dis­cuss the ar­chi­tec­ture, in­ter­faces, data struc­tures, and al­go­rithms you want to use. Think about how to test and val­i­date your code to these spec­i­fi­ca­tions.

You need to com­mu­ni­cate to the AI in de­tail what you want to achieve, oth­er­wise it will re­sult in code that is un­us­able for your pur­pose.

Other de­vel­op­ers also need to com­mu­ni­cate this in­for­ma­tion to the AI. That makes it ef­fi­cient to write as much doc­u­men­ta­tion as prac­ti­cal in a stan­dard­ized for­mat and into the code repos­i­tory it­self.

Document the re­quire­ments, spec­i­fi­ca­tions, con­straints, and ar­chi­tec­ture of your pro­ject in de­tail.

Document your cod­ing stan­dards, best prac­tices, and de­sign pat­terns.

Use flow­charts, UML di­a­grams, and other vi­sual aids to com­mu­ni­cate com­plex struc­tures and work­flows.

Write pseudocode for com­plex al­go­rithms and logic to guide the AI in un­der­stand­ing your in­ten­tions.

Develop ef­fi­cient de­bug sys­tems for the AI to use, re­duc­ing the need for mul­ti­ple ex­pen­sive CLI com­mands or browsers to ver­ify code func­tion­al­ity. This will save time and re­sources while sim­pli­fy­ing the process for the AI to iden­tify and re­solve code is­sues.

For example: build a system that collects logs from all nodes in a distributed system and provides abstracted information like “The data was sent to all nodes” or “The data X is saved on Node 1 but not on Node 2”.

Not all code is equally im­por­tant. Some parts of your code­base are crit­i­cal and need to be re­viewed with ex­tra care. Other parts are less im­por­tant and can be gen­er­ated with less over­sight.

Use a sys­tem that al­lows you to mark how thor­oughly each func­tion has been re­viewed.

For example, you can use a prompt that instructs the AI to append the comment //A to functions it wrote, indicating that the function was written by an AI and has not yet been reviewed by a human.
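Markers like this are only useful if you can find them again. A small sketch of a scanner for the //A convention described above; the function-detection pattern is an assumption and would need adjusting to your actual language and style:

```python
import re

# Matches the convention from the text: a trailing "//A" marks a
# function that was written by an AI and not yet reviewed by a human.
UNREVIEWED = re.compile(r"^\s*(?:def|function|fn)\s+(\w+).*//A\s*$")

def unreviewed_functions(source: str):
    """Return (line number, function name) pairs still awaiting review."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if (m := UNREVIEWED.search(line)):
            hits.append((lineno, m.group(1)))
    return hits
```

A reviewer deletes the //A marker once they have read and understood the function, so the list shrinks to zero as review progresses.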

AIs will cheat and use shortcuts eventually. They will write mocks, stubs, and hardcoded values to make the tests pass while the code itself does not work and is often dangerous. Often AIs will adapt or outright delete test code to let the code pass tests.

You must discourage this behavior by writing property-based, high-level specification tests yourself. Build them in a way that makes it hard for the AI to cheat without dedicating large, conspicuous code segments to the deception.

For example, use property-based testing, restart the server between runs, and check whether the database holds the correct values.

Separate these tests so the AI cannot edit them, and prompt the AI not to change them.

Let an AI write property-based interface tests for the expected behavior with as little context of the rest of the code as possible.

This will generate tests that are uninfluenced by the “implementation AI”, which prevents the tests from being adapted to the implementation in a way that makes them useless or less effective.

Separate these tests so the AI can­not edit them with­out ap­proval and prompt the AI not to change them.

Use strict lint­ing and for­mat­ting rules to en­sure code qual­ity and con­sis­tency. This will help you and your AI to find is­sues early.

Save time and money by utilizing path-specific coding agent prompts like CLAUDE.md.

You can generate them automatically, which gives your AI information it would otherwise have to create from scratch every time.

Try to pro­vide as much high level in­for­ma­tion as prac­ti­cal, such as cod­ing stan­dards, best prac­tices, de­sign pat­terns, and spe­cific re­quire­ments for the pro­ject. This will help the AI to gen­er­ate code that is more aligned with your ex­pec­ta­tions and will re­duce lookup time and cost.
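One way to sketch the “generate them automatically” step; the directory layout, the template wording, and the Python-only file filter are all assumptions to be adapted to your project:

```python
import os

TEMPLATE = """# Context for this directory
Modules here: {modules}
Follow the project-wide standards in the repository root CLAUDE.md.
Document deviations from those standards in this file.
"""

def write_agent_prompts(root):
    """Drop a path-specific CLAUDE.md into every directory with code."""
    for dirpath, dirnames, filenames in os.walk(root):
        modules = sorted(f for f in filenames if f.endswith(".py"))
        if not modules:
            continue
        prompt = TEMPLATE.format(modules=", ".join(modules))
        with open(os.path.join(dirpath, "CLAUDE.md"), "w") as f:
            f.write(prompt)
```

Regenerating these files in CI keeps the module lists current, so the agent never works from a stale picture of the directory.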

Identify and mark functions that carry a high security risk, such as authentication, authorization, and data handling. These functions should be reviewed and tested with extra care, such that a human has comprehended the function's logic in all its dimensions and is confident about its correctness and safety.

Make this explicit with comments like //HIGH-RISK-UNREVIEWED and //HIGH-RISK-REVIEWED, so that other developers are aware of the importance of these functions and will review them with extra care.

Make sure that the AI is in­structed to change the re­view state of these func­tions as soon as it changes a sin­gle char­ac­ter in the func­tion.

Developers must make sure that the sta­tus of these func­tions is al­ways cor­rect.
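These conventions can be enforced mechanically. A sketch of a check that could run in CI, using the two comment markers from the text (how the source files are gathered is left out as an assumption):

```python
def high_risk_status(source: str):
    """Count reviewed vs. unreviewed high-risk markers in a source file."""
    return {
        "reviewed": source.count("//HIGH-RISK-REVIEWED"),
        "unreviewed": source.count("//HIGH-RISK-UNREVIEWED"),
    }

def gate(source: str) -> bool:
    """Return True only if no high-risk function is awaiting review."""
    return high_risk_status(source)["unreviewed"] == 0
```

Failing the build while any //HIGH-RISK-UNREVIEWED marker exists makes it impossible for an AI-modified security-critical function to ship without a human having signed off on it.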

Aim to reduce the complexity of the generated code where possible. Every line of code eats into your context window and makes it harder for both the AI and you to keep track of the overall logic of your code.

Each avoidable line of code costs energy and money, and increases the probability that future AI tasks will fail.

AI-written code is cheap. Use this to your advantage by exploring different solutions to a problem with experiments and prototypes built from minimal specifications. This lets you find the best solution without investing too much time and resources in any single one.

Break down complex tasks into smaller, manageable tasks for the AI. Instead of asking the AI to generate the complete project or component at once, break it down into smaller tasks, such as generating individual functions or classes. This will help you maintain control over the code and its logic.

You have to check each com­po­nent or mod­ule for its ad­her­ence to the spec­i­fi­ca­tions and re­quire­ments.

If you have lost your overview of the complexity and inner workings of the code, you have lost control over it and must restart from a state where you were in control.

...

Read the original on heidenstedt.org »
