10 interesting stories served every morning and every evening.




1 671 shares, 31 trendiness

Leak confirms OpenAI is preparing ads on ChatGPT for public roll out

OpenAI is now internally testing ads inside ChatGPT that could redefine the web economy.

Up until now, the ChatGPT experience has been completely ad-free.

While there are pre­mium plans and mod­els, you don’t see GPT sell you prod­ucts or show ads. On the other hand, Google Search has ads that in­flu­ence your buy­ing be­hav­iour.

As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”

This move could dis­rupt the web econ­omy, as what most peo­ple don’t un­der­stand is that GPT likely knows more about users than Google.

For example, OpenAI could create personalised ads on ChatGPT that promote products you really want to buy. It might also slot ads into search results, similar to Google Search ads.

The leak sug­gests that ads will ini­tially be lim­ited to the search ex­pe­ri­ence only, but this may change in the fu­ture.

ChatGPT has roughly 800 mil­lion peo­ple us­ing it every week, up from 100 mil­lion weekly users in November 2023 and about 300 mil­lion weekly users in late 2024.

An OpenAI-backed study es­ti­mated 700 mil­lion users send­ing 18 bil­lion mes­sages per week by July 2025, which lines up with this growth, and other an­a­lysts now peg traf­fic at around 5–6 bil­lion vis­its per month.

GPT handles about 2.5 bil­lion prompts a day, and India has be­come the sin­gle biggest user base, ahead of the US.

ChatGPT has every­thing it needs for ads to suc­ceed. What do you think?

...

Read the original on www.bleepingcomputer.com »

2 524 shares, 38 trendiness

All it takes is for one to work out

More than a decade ago, when I was ap­ply­ing to grad­u­ate school, I went through a pe­riod of deep un­cer­tainty. I had tried the pre­vi­ous year and had­n’t got­ten in any­where. I wanted to try again, but I had a lot go­ing against me.

I’d spent most of my un­der­grad build­ing a stu­dent job-por­tal startup and had­n’t bal­anced it well with aca­d­e­mics. My GPA needed ex­plain­ing. My GMAT score was just okay. I did­n’t come from a big-brand em­ployer. And there was no short­age of peo­ple with sim­i­lar or stronger pro­files ap­ply­ing to the same schools.

Even though I had learned a few things from the first round, the sec­ond at­tempt was still dif­fi­cult. There were mul­ti­ple points af­ter I sub­mit­ted ap­pli­ca­tions where I lost hope.

But dur­ing that stretch, a friend and col­league kept re­peat­ing one line to me:

“All it takes is for one to work out.”

He’d say it every time I spi­raled. And as much as it made me smile, a big part of me did­n’t fully be­lieve it. Still, it be­came a lit­tle maxim be­tween us. And even­tu­ally, he was right — that one did work out. And it changed my life.

I’ve thought about that fram­ing so many times since then.

You don’t need every job to choose you. You just need the one that’s the right fit.

You don’t need every house to ac­cept your of­fer. You just need the one that feels like home.

You don’t need every per­son to want to build a life with you. You just need the one.

You don’t need ten uni­ver­si­ties to say yes. You just need the one that opens the right door.

These processes — col­lege ad­mis­sions, job searches, home buy­ing, find­ing a part­ner — can be emo­tion­ally bru­tal. They can get you down in ways that feel per­sonal. But in those mo­ments, that truth can be ground­ing.

All it takes is for one to work out.

And that one is all you need.

...

Read the original on alearningaday.blog »

3 345 shares, 17 trendiness

Iceland declares ocean-current instability a national security risk

Iceland has taken the rare step of treat­ing a cli­mate-linked ocean threat as a mat­ter of na­tional sur­vival, launch­ing a co­or­di­nated gov­ern­ment re­sponse to one of the most feared po­ten­tial tip­ping points in the cli­mate sys­tem.

Officials say the shift re­flects mount­ing ev­i­dence that a key Atlantic cur­rent sys­tem could be head­ing to­ward dan­ger­ous in­sta­bil­ity.

According to CNN, Iceland’s National Security Council for­mally la­belled the pos­si­ble col­lapse of the Atlantic Meridional Overturning Circulation (AMOC) a na­tional se­cu­rity risk in September — the first time the coun­try has ap­plied such a des­ig­na­tion to a cli­mate im­pact.

The move followed a government briefing on new research that raised “grave concerns” about the system’s future stability.

Jóhann Páll Jóhannsson, Iceland’s min­is­ter for en­vi­ron­ment, en­ergy and cli­mate, said the risks ex­tend far be­yond weather.

“Our climate, economy and security are deeply tied to the stability of the ocean currents around us,” he told CNN.

He later described it as “an existential threat,” warning that a breakdown could disrupt transport, damage infrastructure and hit the country’s fishing industry.

The AMOC — of­ten com­pared to a gi­ant con­veyor belt — car­ries warm wa­ter north­ward be­fore it cools and sinks, help­ing reg­u­late weather across the Atlantic basin.

CNN re­ported that sci­en­tists in­creas­ingly worry that warm­ing tem­per­a­tures and dis­rupted salin­ity lev­els are slow­ing the sys­tem.

Some stud­ies sug­gest a tip­ping point could be reached this cen­tury, though the ex­act time­line re­mains un­cer­tain.

Stefan Rahmstorf, an oceanographer at Potsdam University, told CNN that a collapse “cannot be considered a low likelihood risk anymore.”

The con­se­quences, he said, would be dra­matic: surg­ing sea lev­els along US and European coasts, ma­jor mon­soon dis­rup­tions across Africa and Asia, and a deep freeze across parts of Europe.

For Iceland, he said, the country would be “close to the center of a serious regional cooling,” with sea ice potentially surrounding the island.

The security designation means Iceland will now pursue a high-level, cross-government effort to analyse the threat and consider how to manage or reduce the consequences. Jóhannsson said the decision “reflects the seriousness of the issue and ensures that the matter gets the attention it deserves.”

Rahmstorf praised Iceland’s stance, telling CNN that other na­tions should treat the risk with sim­i­lar ur­gency.

Jóhannsson said the country is confronting a stark possibility: “What we do know is that the current climate might change so drastically that it could become impossible for us to adapt… this is not just a scientific concern — it’s a matter of national survival and security.”

...

Read the original on www.dagens.com »

4 321 shares, 56 trendiness

Boing

...

Read the original on boing.greg.technology »

5 292 shares, 13 trendiness

Datacenters in space are a terrible, horrible, no good idea.

In the in­ter­ests of clar­ity, I am a for­mer NASA en­gi­neer/​sci­en­tist with a PhD in space elec­tron­ics. I also worked at Google for 10 years, in var­i­ous parts of the com­pany in­clud­ing YouTube and the bit of Cloud re­spon­si­ble for de­ploy­ing AI ca­pac­ity, so I’m quite well placed to have an opin­ion here.

The short ver­sion: this is an ab­solutely ter­ri­ble idea, and re­ally makes zero sense what­so­ever. There are mul­ti­ple rea­sons for this, but they all amount to say­ing that the kind of elec­tron­ics needed to make a dat­a­cen­ter work, par­tic­u­larly a dat­a­cen­ter de­ploy­ing AI ca­pac­ity in the form of GPUs and TPUs, is ex­actly the op­po­site of what works in space. If you’ve not worked specif­i­cally in this area be­fore, I’ll cau­tion against mak­ing gut as­sump­tions, be­cause the re­al­ity of mak­ing space hard­ware ac­tu­ally func­tion in space is not nec­es­sar­ily in­tu­itively ob­vi­ous.

The first rea­son for do­ing this that seems to come up is abun­dant ac­cess to power in space. This re­ally is­n’t the case. You ba­si­cally have two op­tions: so­lar and nu­clear. Solar means de­ploy­ing a so­lar ar­ray with pho­to­voltaic cells — some­thing es­sen­tially equiv­a­lent to what I have on the roof of my house here in Ireland, just in space. It works, but it is­n’t some­how mag­i­cally bet­ter than in­stalling so­lar pan­els on the ground — you don’t lose that much power through the at­mos­phere, so in­tu­ition about the area needed trans­fers pretty well. The biggest so­lar ar­ray ever de­ployed in space is that of the International Space Station (ISS), which at peak can de­liver a bit over 200kW of power. It is im­por­tant to men­tion that it took sev­eral Shuttle flights and a lot of work to de­ploy this sys­tem — it mea­sures about 2500 square me­tres, over half the size of an American foot­ball field.

Taking the NVIDIA H200 as a reference, the per-GPU-device power requirements are on the order of 0.7kW per chip. These won’t work on their own, and power conversion isn’t 100% efficient, so in practice 1kW per GPU might be a better baseline. A huge, ISS-sized array could therefore power roughly 200 GPUs. This sounds like a lot, but let’s keep some perspective: OpenAI’s upcoming Norway datacenter is intended to house 100,000 GPUs, probably each more power hungry than the H200. To equal this capacity, you’d need to launch 500 ISS-sized satellites. In contrast, a single server rack (as sold by NVIDIA preconfigured) will house 72 GPUs, so each monster satellite is only equivalent to roughly three racks.
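As a sanity check, the arithmetic above is easy to script. The figures are the ones quoted in this article (200 kW array output, a 1 kW-per-GPU baseline, a 100,000-GPU target, 72-GPU racks):

```python
# Back-of-the-envelope power budget for an orbital datacenter,
# using the article's own assumptions.

ARRAY_POWER_KW = 200      # peak output of an ISS-sized solar array
POWER_PER_GPU_KW = 1.0    # H200 (~0.7 kW) plus conversion losses
TARGET_GPUS = 100_000     # planned Norway datacenter capacity
GPUS_PER_RACK = 72        # NVIDIA's preconfigured rack

gpus_per_satellite = int(ARRAY_POWER_KW / POWER_PER_GPU_KW)
satellites_needed = TARGET_GPUS // gpus_per_satellite
racks_equivalent = gpus_per_satellite / GPUS_PER_RACK

print(f"GPUs per ISS-sized satellite: {gpus_per_satellite}")      # 200
print(f"Satellites to match the Norway DC: {satellites_needed}")  # 500
print(f"Rack-equivalents per satellite: {racks_equivalent:.1f}")  # 2.8
```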

Nuclear won’t help. We are not talk­ing nu­clear re­ac­tors here — we are talk­ing about ra­dioiso­tope ther­mal gen­er­a­tors (RTGs), which typ­i­cally have a power out­put of about 50W - 150W. So not enough to even run a sin­gle GPU, even if you can per­suade some­one to give you a sub­crit­i­cal lump of plu­to­nium and not mind you hav­ing hun­dreds of chances to scat­ter it across a wide area when your launch ve­hi­cle ex­plo­sively self-dis­as­sem­bles.

I’ve seen quite a few comments about this concept where people are saying things like, “Well, space is cold, so that will make cooling really easy, right?”

Really, re­ally no.

Cooling on Earth is relatively straightforward. Air convection works pretty well — blowing air across a surface, particularly one designed to have a large surface-area-to-volume ratio like a heatsink, transfers heat from the surface to the air quite effectively. If you need more power density than can be directly cooled in this way (and higher-power GPUs are definitely in that category), you can use liquid cooling to transfer heat from the chip to a larger radiator/heatsink elsewhere. In datacenters on Earth, it is common to set up cooling loops where machines are cooled via chilled coolant (usually water) that is pumped around racks, with the heat extracted and cold coolant returned to the loop. Typically the coolant is cooled via convective cooling to the air, so one way or another this is how things work on Earth.

In space, there is no air. The environment is close enough to a hard, total vacuum as makes no practical difference, so convection just doesn’t happen. On the space engineering side, we typically think about thermal management, not just cooling. Thing is, space doesn’t really have a temperature as such. Only materials have a temperature. It may come as a surprise, but in the Earth-Moon system the average temperature of pretty much anything is basically the same as the average temperature of Earth, because that is, after all, why Earth has that particular temperature. If a satellite is rotating, a bit like a chicken on a rotisserie, it will tend toward a consistent temperature roughly similar to that of the Earth’s surface. If it isn’t rotating, the side pointing away from the sun will get progressively colder, with a limit set by the cosmic microwave background, around 4 Kelvin, just a little above absolute zero. On the sunward side, things can get a bit cooked, hitting hundreds of degrees Celsius. Thermal management therefore requires very careful design, making sure that heat is carefully directed where it needs to go. Because there is no convection in a vacuum, this can only be achieved by conduction, radiation, or via some kind of heat pump.

I’ve de­signed space hard­ware that has flown in space. In one par­tic­u­lar case, I de­signed a cam­era sys­tem that needed to be very small and light­weight, whilst still pro­vid­ing sci­ence-grade imag­ing ca­pa­bil­i­ties. Thermal man­age­ment was front and cen­tre in the de­sign process — it had to be, be­cause power is scarce in small space­craft, and ther­mal man­age­ment has to be achieved whilst keep­ing mass to a min­i­mum. So no heat pumps or fancy stuff for me — I went in the other di­rec­tion, de­sign­ing the sys­tem to draw a max­i­mum of about 1 watt at peak, drop­ping to around 10% of that when the cam­era was idle. All this elec­tri­cal power turns into heat, so if I can draw 1 watt only while cap­tur­ing an im­age, then turn the im­age sen­sor off as soon as the data is in RAM, I can halve the con­sump­tion, then when the im­age has been down­loaded to the flight com­puter I can turn the RAM off and drop the power down to a com­par­a­tive trickle. The only ther­mal man­age­ment needed was bolt­ing the edge of the board to the chas­sis so the in­ter­nal cop­per planes in the board could trans­fer any heat gen­er­ated.

Cooling even a sin­gle H200 will be an ab­solute night­mare. Clearly a heatsink and fan won’t do any­thing at all, but there is a liq­uid cooled H200 vari­ant. Let’s say this was used. This heat would need to be trans­ferred to a ra­di­a­tor panel — this is­n’t like the ra­di­a­tor in your car, no con­vec­tion, re­mem­ber? — which needs to ra­di­ate heat into space. Let’s as­sume that we can point this away from the sun.

The Active Thermal Control System (ATCS) on the ISS is an example of such a thermal control system. This is a very complex system, using an ammonia cooling loop and a large thermal radiator panel system. It has a dissipation limit of 16kW, so roughly 16 H200 GPUs — a bit over the equivalent of a quarter of a ground-based rack. The thermal radiator panel system measures 13.6m × 3.12m, i.e., roughly 42.5 square metres. If we use 200kW as a baseline and assume all of that power will be fed to GPUs, we’d need a system 12.5 times bigger, i.e., roughly 531 square metres, or about 2.6 times the size of the relevant solar array. This is now going to be a very large satellite, dwarfing the ISS in area, all for the equivalent of three standard server racks on Earth.
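A quick script reproduces the scaling above and adds an idealised Stefan-Boltzmann cross-check. The 300 K radiator temperature and 0.9 emissivity in the second half are my assumptions for illustration, not ISS design figures:

```python
# Scale the ISS ATCS radiator (16 kW over ~42.5 m^2) up to 200 kW,
# then cross-check against an idealised single-sided radiator using
# the Stefan-Boltzmann law. Temperature/emissivity below are assumed.

ATCS_KW = 16.0
ATCS_AREA_M2 = 13.6 * 3.12    # ~42.4 m^2
TARGET_KW = 200.0

scaled_area = ATCS_AREA_M2 * (TARGET_KW / ATCS_KW)
print(f"Scaled radiator area: {scaled_area:.0f} m^2")   # ~530 m^2

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.9     # assumption
T_RADIATOR = 300.0   # K, assumption; ignores absorbed environment heat

w_per_m2 = EMISSIVITY * SIGMA * T_RADIATOR**4
ideal_area = TARGET_KW * 1000 / w_per_m2
print(f"Idealised single-sided area at 300 K: {ideal_area:.0f} m^2")
```

The idealised number lands in the same ballpark as the ATCS scaling, which is reassuring: real radiators underperform the ideal because they also absorb heat and run cooler at the edges.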

This is get­ting into my PhD work now. Assuming you can both power and cool your elec­tron­ics in space, you have the fur­ther prob­lem of ra­di­a­tion tol­er­ance.

The first ques­tion is where in space?

If you are in low Earth orbit (LEO), you are inside the inner radiation belt, where the radiation dose is similar to that experienced by high-altitude aircraft — more than an airliner, but not terrible. Further out, in mid Earth orbit (MEO), where the GPS satellites live, spacecraft are not protected by the Van Allen belts — worse, that orbit is literally inside them. Outside the belts, you are essentially in deep space (details vary with how close to the Sun you happen to be, but the principles are similar).

There are two main sources of radiation in space — our own star, the Sun, and deep space. This basically involves charged particles moving at a substantial percentage of the speed of light, from electrons up to the nuclei of atoms with masses up to roughly that of oxygen. These can damage chips directly, by smashing into the material from which they are made, or indirectly, by travelling through the silicon die without hitting anything but still leaving a trail of charge behind them.

The most common consequence of this is a single-event upset (SEU), where a direct impact or (more commonly) a particle passing through a transistor briefly (approx 600 picoseconds) causes a pulse to happen where it shouldn’t have. If this causes a bit to be flipped, we call it an SEU. Other than damage to data, SEUs don’t cause permanent damage.
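One classic mitigation for SEUs is triple modular redundancy: keep three copies of every value and take a bitwise majority vote on read, so a flipped bit in one copy is outvoted by the other two. A minimal sketch of the idea — illustrative only, not how any particular flight computer implements it:

```python
# Triple modular redundancy (TMR) sketch: three copies of a value,
# bitwise majority vote on read. A single bit flip in one copy is
# corrected automatically by the vote.

def tmr_write(value: int) -> list[int]:
    """Store three independent copies of the value."""
    return [value, value, value]

def tmr_read(copies: list[int]) -> int:
    """Bitwise majority vote across the three copies."""
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

state = tmr_write(0b1011_0010)
state[1] ^= 0b0000_1000             # simulate an SEU flipping one bit
assert tmr_read(state) == 0b1011_0010  # the vote recovers the original
```

In real systems the vote happens in hardware (or in a periodic memory "scrub" pass), and ECC plays a similar role for bulk RAM.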

Worse is sin­gle-event latch-up. This hap­pens when a pulse from a charged par­ti­cle causes a volt­age to go out­side the power rails pow­er­ing the chip, caus­ing a tran­sis­tor es­sen­tially to turn on and stay on in­def­i­nitely. I’ll skip the semi­con­duc­tor physics in­volved, but the short ver­sion is that if this hap­pens in a bad way, you can get a path­way con­nected be­tween the power rails that should­n’t be there, burn­ing out a gate per­ma­nently. This may or may not de­stroy the chip, but with­out mit­i­ga­tion it can make it un­us­able.
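The usual system-level mitigation for latch-up is a supervisor circuit that watches supply current and power-cycles the affected rail when it spikes, collapsing the parasitic conduction path before anything burns out. A toy sketch of that logic, with invented thresholds:

```python
# Toy latch-up supervisor: if a rail's current exceeds a threshold,
# power-cycle the rail so the parasitic path collapses. The threshold
# and the sample values are illustrative, not from any real design.

NOMINAL_LIMIT_A = 2.0   # assumed overcurrent threshold for this rail

def supervise(current_samples: list[float]) -> list[str]:
    actions = []
    for amps in current_samples:
        if amps > NOMINAL_LIMIT_A:
            actions.append("power-cycle")  # clears a latched state
        else:
            actions.append("ok")
    return actions

print(supervise([1.2, 1.3, 5.8, 1.2]))
# the current spike in the third sample triggers a power cycle
```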

For longer du­ra­tion mis­sions, which would be the case with space based dat­a­cen­ters be­cause they would be so ex­pen­sive that they would have to fly for a long time in or­der to be eco­nom­i­cally vi­able, it’s also nec­es­sary to con­sider to­tal dose ef­fects. Over time, the per­for­mance of chips in space de­grades, be­cause re­peated par­ti­cle im­pacts make the tiny field-ef­fect tran­sis­tors switch more slowly and turn on and off less com­pletely. In prac­tice, this causes max­i­mum vi­able clock rates to de­cay over time, and for power con­sump­tion to in­crease. Though not the hard­est is­sue to deal with, this must still be mit­i­gated or you tend to run into a sit­u­a­tion where a chip that was work­ing fine at launch stops work­ing be­cause ei­ther the power sup­ply or cool­ing has be­come in­ad­e­quate, or the clock is run­ning faster than the chip can cope with. It’s there­fore nec­es­sary to have a clock gen­er­a­tor that can throt­tle down to a lower speed as needed — this can also be used to con­trol power con­sump­tion, so rather than a chip ceas­ing to func­tion it will just get slower.
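The throttling scheme described above amounts to a derating curve: maximum safe clock as a function of accumulated total dose, with a floor below which the part is retired. A sketch with invented coefficients (real degradation curves are measured per part and per process):

```python
# Clock derating sketch: as total ionising dose accumulates, the
# transistors switch more slowly, so the maximum safe clock drops.
# The linear model and all coefficients here are illustrative.

F_MAX_LAUNCH_MHZ = 200.0    # assumed clock at beginning of life
DERATE_MHZ_PER_KRAD = 1.5   # assumed degradation slope
F_MIN_MHZ = 50.0            # floor; below this the part is retired

def safe_clock_mhz(dose_krad: float) -> float:
    """Maximum safe clock after a given accumulated dose."""
    f = F_MAX_LAUNCH_MHZ - DERATE_MHZ_PER_KRAD * dose_krad
    return max(f, F_MIN_MHZ)

for dose in (0, 40, 80, 120):
    print(f"{dose:>4} krad -> {safe_clock_mhz(dose):.0f} MHz")
```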

The next FAQ is: can’t you just use shielding? No, not really, or only up to a point. Some kinds of shielding can make the problem worse — an impact on the shield can cause a shower of particles that then cause multiple impacts at once, which is far harder to mitigate. The very strongest cosmic rays can go through an astonishing amount of solid lead — since mass is always at a premium, it’s rarely possible to deploy significant amounts of shielding, so radiation tolerance must be built into the system (this is often described as Radiation Hardness By Design, RHBD).

GPUs and TPUs and the high band­width RAM they de­pend on are ab­solutely worst case for ra­di­a­tion tol­er­ance pur­poses. Small geom­e­try tran­sis­tors are in­her­ently much more prone both to SEUs and latch-up. The very large sil­i­con die area also makes the fre­quency of im­pacts higher, since that scales with area.

Chips gen­uinely de­signed to work in space are taped out with dif­fer­ent gate struc­tures and much larger geome­tries. The proces­sors that are typ­i­cally used have the per­for­mance of roughly a 20-year-old PowerPC from 2005. Bigger geome­tries are in­her­ently more tol­er­ant, both to SEUs and to­tal dose, and the dif­fer­ent gate topolo­gies are im­mune to latch up, whilst pro­vid­ing some de­gree of SEU mit­i­ga­tion via fine-grained re­dun­dancy at the cir­cuit level. Taping out a GPU or TPU with this kind of ap­proach is cer­tainly pos­si­ble, but the per­for­mance would be a tiny frac­tion of that of a cur­rent gen­er­a­tion Earth-based GPU/TPU.

There is a you-only-live-once (my ter­mi­nol­ogy!) ap­proach, where you launch the thing and hope for the best. This is com­mon­place in small cube­sats, and also why small cube­sats of­ten fail af­ter a few weeks on or­bit. Caveat emp­tor!

Most satellites communicate with the ground via radio. It is difficult to get much more than about 1Gbps reliably. There is some interesting work using lasers to communicate with satellites, but this depends on good atmospheric conditions to be feasible. Contrast this with a typical server rack on Earth, where 100Gbps rack-to-rack interconnect would be considered the low end, and it’s easy to see that this is also a significant gap.
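The gap is easy to quantify. Moving, say, 1 TB (my example payload, not the article’s) over each link:

```python
# Time to move 1 TB over a 1 Gbps satellite downlink versus a
# 100 Gbps rack-to-rack link. Ignores protocol overhead and pass
# availability, both of which make the satellite case worse.

PAYLOAD_BITS = 1e12 * 8    # 1 TB expressed in bits

def transfer_hours(link_gbps: float) -> float:
    return PAYLOAD_BITS / (link_gbps * 1e9) / 3600

print(f"1 Gbps downlink: {transfer_hours(1):.1f} h")   # ~2.2 h
print(f"100 Gbps fabric: {transfer_hours(100) * 60:.1f} min")
```

And unlike the ground fabric, the radio link is only usable while a ground station is in view.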

I suppose this is just about possible if you really want to do it, but I think I’ve demonstrated above that it would be extremely difficult to achieve, disproportionately costly in comparison with Earth-based datacenters, and would offer mediocre performance at best.

If you still think this is worth do­ing, good luck, space is hard. Myself, I think it’s a cat­a­stroph­i­cally bad idea, but you do you.

...

Read the original on taranis.ie »

6 286 shares, 19 trendiness

Be Like Clippy

Fed up with tril­lion-dol­lar com­pa­nies ex­ploit­ing your data? Forced to use their ser­vices? Your data held for ran­som? Your data used to train their AI mod­els? Opt-outs for data col­lec­tion in­stead of opt-ins?

Join the move­ment to make com­pa­nies more like Clippy. Set your pro­file pic­ture to Clippy, make your voice heard.

Below is a video that ex­plains the Be Like Clippy move­ment. It’s a call to ac­tion for de­vel­op­ers, com­pa­nies, and users alike to em­brace a more open, trans­par­ent, and user-friendly ap­proach to tech­nol­ogy.

...

Read the original on be-clippy.com »

7 265 shares, 24 trendiness

In a dramatic shift, Americans no longer see four-year college degrees as worth the cost

Americans have grown sour on one of the long­time key in­gre­di­ents of the American dream.

Almost two-thirds of reg­is­tered vot­ers say that a four-year col­lege de­gree is­n’t worth the cost, ac­cord­ing to a new NBC News poll, a dra­matic de­cline over the last decade.

Just 33% agree a four-year college degree is “worth the cost because people have a better chance to get a good job and earn more money over their lifetime,” while 63% agree more with the concept that it’s “not worth the cost because people often graduate without specific job skills and with a large amount of debt to pay off.”

In 2017, U. S. adults sur­veyed were vir­tu­ally split on the ques­tion — 49% said a de­gree was worth the cost and 47% said it was­n’t. When CNBC asked the same ques­tion in 2013 as part of its All American Economic Survey, 53% said a de­gree was worth it and 40% said it was not.

The eye-pop­ping shift over the last 12 years comes against the back­drop of sev­eral ma­jor trends shap­ing the job mar­ket and the ed­u­ca­tion world, from ex­plod­ing col­lege tu­ition prices to rapid changes in the mod­ern econ­omy — which seems once again poised for rad­i­cal trans­for­ma­tion along­side ad­vances in AI.

“It’s just remarkable to see attitudes on any issue shift this dramatically, and particularly on a central tenet of the American dream, which is a college degree. Americans used to view a college degree as aspirational — it provided an opportunity for a better life. And now that promise is really in doubt,” said Democratic pollster Jeff Horwitt of Hart Research Associates, who conducted the poll along with the Republican pollster Bill McInturff of Public Opinion Strategies.

“What is really surprising about it is that everybody has moved. It’s not just people who don’t have a college degree,” Horwitt added.

National data from the Bureau of Labor Statistics shows that those with ad­vanced de­grees earn more and have lower un­em­ploy­ment rates than those with lower lev­els of ed­u­ca­tion. That’s been true for years.

But what has shifted is the price of col­lege. While there have been some small de­clines in tu­ition prices over the last decade, when ad­justed for in­fla­tion, College Board data shows that the av­er­age, in­fla­tion-ad­justed cost of pub­lic four-year col­lege tu­ition for in-state stu­dents has dou­bled since 1995. Tuition at pri­vate, four-year col­leges is up 75% over the same pe­riod.

Poll re­spon­dents who spoke with NBC News all em­pha­sized those ris­ing costs as a ma­jor rea­son why the value of a four-year de­gree has been un­der­cut.

Jacob Kennedy, a 28-year-old server and bartender living in Detroit, told NBC News that while he believes an educated populace is “the most important thing for a country to have,” if people can’t use those degrees because of the debt they’re carrying, it undercuts the value.

Kennedy, who has a two-year degree, reflected on the number of people “who I’ve met working in the service industry who have four-year degrees and then within a year of graduating immediately quit their ‘grown-up jobs’ to go back to the jobs they had.”

“The cost overwhelms the value,” he continued. “You go to school with all that student debt — the jobs you get out of college don’t pay that debt, so you have to go find something else that can pay that debt.”

The 20-point de­cline over the last 12 years among those who say a de­gree is worth it — from 53% in 2013 to 33% now — is re­flected across vir­tu­ally every de­mo­graphic group. But the shift in sen­ti­ment is es­pe­cially strik­ing among Republicans.

In 2013, 55% of Republicans called a col­lege de­gree worth it, while 38% said it was­n’t worth it. In the new poll, just 22% of Republicans say the four-year de­gree is worth it, while 74% say it’s not.

Democrats have seen a sig­nif­i­cant shift too, but not to the same ex­tent — a de­cline from 61% who said a de­gree was worth it in 2013 to 47% this year.

Over the same pe­riod, the com­po­si­tion of both par­ties has changed, with the Republican Party gar­ner­ing new and deeper sup­port from vot­ers with­out col­lege de­grees, while the Democratic Party drew in more de­gree-hold­ers.

Remarkably, less than half of vot­ers with col­lege de­grees see those de­grees as worth the cost: 46% now, down from 63% in 2013.

Those with­out a col­lege de­gree were about split on the ques­tion in 2013. Now, 71% say a four-year de­gree is not worth the cost, while 26% say it is.

Preston Cooper, a se­nior fel­low at the right-lean­ing American Enterprise Institute, said enough cracks have pro­lif­er­ated un­der the long-stand­ing nar­ra­tive that a col­lege de­gree al­ways pays off to cre­ate a se­ri­ous rup­ture.

“Some people drop out, or sometimes people end up with a degree that is not worth a whole lot in the labor market, and sometimes people pay way too much for a degree relative to the value of what that credential is,” he said. “These cases have created enough exceptions to the rule that a bachelor’s degree always pays off, so that people are now more skeptical.”

The up­shot is that in­ter­est in tech­ni­cal, vo­ca­tional and two-year de­gree pro­grams has soared.

“I think students are more wary about taking on the risk of a four-year or even a two-year degree,” he said. “They’re now more interested in any pathway that can get them into the labor force more quickly.”

Josiah Garcia, a 24-year-old in Virginia, said he re­cently en­rolled in a pro­gram to re­ceive a four-year en­gi­neer­ing de­gree af­ter work­ing as an elec­tri­cian’s ap­pren­tice. He said he was mo­ti­vated to go back to school be­cause he saw the de­gree as hav­ing a di­rect ef­fect on his fu­ture earn­ing po­ten­tial.

But he added that he did­n’t feel that those who sought other de­grees in ar­eas like art or the­ater could say the same.

“A lot of my friends who went to school for art or dance didn’t get the job they thought they could get after graduating,” he said, arguing that degrees for “softer skills” should be cheaper than those in STEM fields.

Jessica Burns, a 38-year-old Iowa res­i­dent and bach­e­lor’s de­gree-holder who works for an in­sur­ance com­pany, told NBC News that for her, the worth of a four-year-de­gree largely de­pends on the cost.

She went to a community college and then a state school to earn her degree, so she said she graduated without having to spend an “insane” amount of money.

But her husband went to a private college for his degree, and she quipped: “We are going to have student loan debt for him forever.”

Burns said she believes a college degree is “essential for a lot of jobs. You’re not going to get an interview if you don’t have a four-year degree for a lot of jobs in my field.”

But she framed the value of de­grees more in terms of how so­ci­ety views them in­stead of in­trin­sic value.

“It’s not valuable because it’s brought a bunch of value added, it’s valuable because it’s the key to even getting in the door,” she said. “Our society needs to figure out that if we value it, we need to make it affordable.”

Burns said she believes that a lot more people in her millennial generation are now saddled with a huge amount of debt, “even as successful business professionals,” which will influence how her peers approach paying for college for their children.

There has­n’t just been a de­cline in the cost-ben­e­fit analy­sis of a de­gree. Gallup polling also shows a marked de­cline in pub­lic con­fi­dence in higher ed­u­ca­tion over the last decade, al­beit with a slight in­crease over the last year.

“This is a political problem. It’s also a real problem for higher education. Colleges and universities have lost that connection they’ve had with a large swath of the American people based on affordability,” Horwitt said. “They’re now seen as out of touch and not accessible to many Americans.”

The NBC News poll sur­veyed 1,000 reg­is­tered vot­ers Oct. 24-28 via a mix of tele­phone in­ter­views and an on­line sur­vey sent via text mes­sage. The mar­gin of er­ror is plus or mi­nus 3.1 per­cent­age points.

...

Read the original on www.nbcnews.com »

8 261 shares, 11 trendiness

It’s Always the Process, Stupid!

Let’s rip the Band-Aid off immediately: if your underlying business process is a mess, sprinkling “AI dust” on it won’t turn it into gold. It will just speed up the rate at which you generate garbage. In the world of Business IT, we get seduced by the shiny new toy. Right now, that toy is Artificial Intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning. Executives are frantically asking, “What is our AI strategy?”

Like every major technological shift before it — from the steam engine to the spreadsheet — AI does not inherently make an organization smarter. AI, like any other tool, only makes things faster. If you automate a stupid decision, you just make stupid decisions at light speed. If you apply an agentic AI workflow to a bureaucratic nightmare of an approval chain, you haven’t fixed the bureaucracy; you’ve just built a robot that hates its job as much as your employees do.

For decades, traditional software demanded structure. Rows, columns, booleans, and fixed fields. If data didn’t fit the box, the computer couldn’t read it.

Because computers couldn’t handle the mess, humans handled it (before AI). And humans don’t always follow a flow chart. These processes — like “handling a complex customer complaint” or “brainstorming a marketing campaign” — are often ad-hoc, intuitive, and completely undocumented. They live in the heads of your senior staff, not in your SOPs.

If you want to use AI to process unstructured data, you must first bring structure to the workflow itself. You need to improve your process design to account for the ambiguity that AI handles. What is the transformation? (What exactly is the human — or now the AI — supposed to extract or deduce from that mess?)

The Old Way: An analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.

The AI Way: An AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.

Go back to the whiteboard. Map out your value chain — especially the messy, human-centric parts involving unstructured data that you previously ignored. Find the bottlenecks. Identify the waste.

Technology changes.

The rules of busi­ness ef­fi­ciency do not.

It’s al­ways the process, stu­pid!
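To make the Old Way / AI Way contrast concrete, here is a minimal sketch of a structured extraction step. Everything in it is hypothetical: the risk categories, the marker phrases, and the sample contract are illustrative stand-ins, and a real system would use far richer parameters. The point is only that "Review Contracts -> Identify Risk -> Summarize" must be rigorously defined before it can be automated.

```python
# Hypothetical risk definitions: a human expert decides what "risk" means
# BEFORE any automation runs. These categories and phrases are made up.
RISK_PARAMETERS = {
    "auto_renewal": ["automatically renew", "auto-renewal"],
    "unlimited_liability": ["unlimited liability", "without limitation of liability"],
    "unilateral_termination": ["terminate at any time", "sole discretion"],
}

def extract_risk_clauses(contract_text: str) -> dict:
    """Review Contract -> Identify Risk: scan each sentence for defined markers."""
    findings = {risk: [] for risk in RISK_PARAMETERS}
    for sentence in contract_text.split("."):
        lowered = sentence.lower()
        for risk, markers in RISK_PARAMETERS.items():
            if any(marker in lowered for marker in markers):
                findings[risk].append(sentence.strip())
    return findings

def summarize(findings: dict) -> str:
    """Summarize: report only the risk categories that were actually found."""
    hits = [f"{risk}: {len(clauses)} clause(s)"
            for risk, clauses in findings.items() if clauses]
    return "; ".join(hits) if hits else "no defined risks found"

# Hypothetical sample contract text.
contract = ("This agreement will automatically renew each year. "
            "Either party may terminate at any time in its sole discretion.")
report = summarize(extract_risk_clauses(contract))
```

Swapping the keyword matcher for an LLM changes how the Identify Risk step is implemented, not the shape of the workflow; the definitions still have to come from a human.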

And that’s where most current AI tools miss the point, because they weren’t built for that.

From idea to app without writing a single line of code

From data graveyard to gold mine: using AI with fast ROI (Promptcast)

How to put AI to profitable use as quickly as possible (this time only as a Promptcast)


Live long and pros­per 😉🖖

Silicon Valley’s AI moat has a leak, and its name is open source

The myth of the impregnable fortress

In the strategy suites of Silicon Valley, people love to tell the story of the impregnable moats. The AI race, so the legend goes, is a game for giants with budgets the size of small nations. Only a handful of US tech corporations can play along; the rest of the world watches

What if the loudest participant in the room isn’t necessarily the leading one?

...

Read the original on its.promp.td »

9 232 shares, 47 trendiness

Zigbook is Plagiarizing the Zigtools Playground

For those un­fa­mil­iar, Zigtools was founded to sup­port the Zig com­mu­nity, es­pe­cially new­com­ers, by cre­at­ing ed­i­tor tool­ing such as ZLS, pro­vid­ing build­ing blocks for lan­guage servers writ­ten in Zig with lsp-kit, work­ing on tools like the Zigtools Playground, and con­tribut­ing to Zig ed­i­tor ex­ten­sions like vs­code-zig.

A couple of weeks ago, a Zig resource called Zigbook was released with a bold claim of zero AI and an original “project-based” structure.

Unfortunately, even a cur­sory look at the non­sense chap­ter struc­ture, book con­tent, ex­am­ples, generic web­site, or post-back­lash is­sue-dis­abled repo re­veals that the book is wholly LLM slop and the pro­ject it­self is struc­tured like some sort of syco­phan­tic psy-op, with bot­ted ac­counts and fake re­ac­tions.

We’re leav­ing out all di­rect links to Zigbook to not give them any more SEO trac­tion.

We thought that the broad community backlash would be the end of the project, but Zigbook persevered, releasing just last week a brand new feature, a “high-voltage beta” Zig playground.

As we at Zigtools have our own Zig play­ground (repo, web­site), our in­ter­est was im­me­di­ately piqued. The form and func­tion­al­ity looked pretty sim­i­lar and Zigbook even in­te­grated (in a non-func­tional man­ner) ZLS into their play­ground to pro­vide all the fancy ed­i­tor bells-and-whis­tles, like code com­ple­tions and goto de­f­i­n­i­tion.

Knowing Zigbook’s his­tory of de­cep­tion, we im­me­di­ately in­ves­ti­gated the WASM blobs. Unfortunately, the WASM blobs are byte-for-byte iden­ti­cal to ours. This can­not be a co­in­ci­dence given the two blobs (zig.wasm, a lightly mod­i­fied ver­sion of the Zig com­piler, and zls.wasm, ZLS with a mod­i­fied en­try point for WASI) are en­tirely cus­tom-made for the Zigtools Playground.

We archived the WASM files for your con­ve­nience, cour­tesy of the great Internet Archive:
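Byte-for-byte identity of two blobs is easy to check independently: hash both files and compare digests. A minimal sketch of that check (the file paths in the comment are hypothetical placeholders, not the actual archive layout):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large WASM blobs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Equal digests mean the files are byte-for-byte identical in practice.
# Hypothetical paths for the two downloaded blobs:
# identical = sha256_of("zigtools/zls.wasm") == sha256_of("zigbook/zls.wasm")
```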

We pro­ceeded to look at the JavaScript code, which we quickly de­ter­mined was sim­i­larly copied, but with LLM dis­tor­tions, likely to pre­vent the code from be­ing com­pletely iden­ti­cal. Still, cer­tain sec­tions were copied one-to-one, like the JavaScript worker data-pass­ing struc­ture and log­ging (original ZLS play­ground code, pla­gia­rized Zigbook code).

The fol­low­ing code from both files is iden­ti­cal:

try {
  // @ts-ignore
  const exitCode = wasi.start(instance);
  postMessage({
    stderr: `\n\n---\nexit with exit code ${exitCode}\n---\n`,
  });
} catch (err) {
  postMessage({ stderr: `${err}` });
}
postMessage({
  done: true,
});

onmessage = (event) => {
  if (event.data.run) {
    run(event.data.run);
  }
};

The \n\n---\nexit with exit code ${exitCode}\n---\n is perhaps the most obviously copied string.

Funnily enough, de­spite copy­ing many parts of our code, Zigbook did­n’t copy the most im­por­tant part of the ZLS in­te­gra­tion code, the JavaScript ZLS API de­signed to work with the ZLS WASM bi­na­ry’s API. That JavaScript code is ab­solutely re­quired to in­ter­act with the ZLS bi­nary which they did pla­gia­rize. Zigbook ei­ther avoided copy­ing that JavaScript code be­cause they knew it would be too glar­ingly ob­vi­ous, be­cause they fun­da­men­tally do not un­der­stand how the Zigtools Playground works, or be­cause they plan to copy more of our code.

To be clear, copy­ing our code and WASM blobs is en­tirely per­mis­si­ble given that the play­ground and Zig are MIT li­censed. Unfortunately, Zigbook has not com­plied with the terms of the MIT li­cense at all, and seem­ingly claims the code and blobs as their own with­out cor­rectly re­pro­duc­ing the li­cense.

We sent Zigbook a neu­tral PR cor­rect­ing the li­cense vi­o­la­tions, but they quickly closed it and deleted the de­scrip­tion, seem­ingly to hide their mis­deeds.

The orig­i­nal de­scrip­tion (also avail­able in the edits” drop­down of the orig­i­nal PR com­ment) is re­pro­duced be­low:

We (@zigtools) noticed you were using code from the Zigtools Playground, including byte-by-byte copies of our WASM blobs and excerpts of our JavaScript source code. This is a violation of the MIT license that the Zigtools Playground is licensed under, alongside a violation of the Zig MIT license (for the zig.wasm blob):

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

We’ve fixed this by adding the li­censes in ques­tion to your repos­i­tory. As your repos­i­tory does not in­clude a di­rect link to the *.wasm de­pen­den­cies, we’ve added a li­cense dis­claimer on the play­ground page as well that men­tions the li­censes.

Zigbook’s afore­men­tioned bad be­hav­ior and their con­tin­ued vi­o­la­tion of our li­cense and un­will­ing­ness to fix the vi­o­la­tion mo­ti­vated us to write this blog post.

It’s sad that our first blog post is about the pla­gia­rism of our coolest sub­pro­ject. We chal­lenged our­selves by cre­at­ing a WASM-based client-side play­ground to en­able of­fline us­age, code pri­vacy, and no server costs.

This in­ci­dent has mo­ti­vated us to in­vest more time into our play­ground and has gen­er­ated a cou­ple of ideas:

* We’d like to en­able mul­ti­file sup­port to al­low more com­plex Zig pro­jects to be run in the browser

* We’d like to collaborate with fellow Ziguanas to integrate the playground into their excellent Zig tutorials, books, and blog posts

* A per­fect ex­am­ple use­case would be en­abling folks to hop into Ziglings on­line with the play­ground

* The Zig web­site it­self would be a great tar­get as well!

* We’d like to sup­port stack traces us­ing DWARF de­bug info which is not yet emit­ted by the self-hosted Zig com­piler

As Zig com­mu­nity mem­bers, we ad­vise all other mem­bers of the Zig com­mu­nity to steer clear of Zigbook.

If you’re looking to learn Zig, we strongly recommend the official Zig learn page, which contains excellent resources, from the previously mentioned Ziglings to Karl Seguin’s Learning Zig.

We’re also us­ing this op­por­tu­nity to men­tion that we’re fundrais­ing to keep ZLS sus­tain­able for our only full-time main­tainer, Techatrix. We’d be thrilled if you’d be will­ing to give just $5 a month. You can check out our OpenCollective or GitHub Sponsors.

...

Read the original on zigtools.org »

10 194 shares, 9 trendiness

Major AI conference flooded with peer reviews written fully by AI

What can re­searchers do if they sus­pect that their man­u­scripts have been peer re­viewed us­ing ar­ti­fi­cial in­tel­li­gence (AI)? Dozens of aca­d­e­mics have raised con­cerns on so­cial me­dia about man­u­scripts and peer re­views sub­mit­ted to the or­ga­niz­ers of next year’s International Conference on Learning Representations (ICLR), an an­nual gath­er­ing of spe­cial­ists in ma­chine learn­ing. Among other things, they flagged hal­lu­ci­nated ci­ta­tions and sus­pi­ciously long and vague feed­back on their work.

Graham Neubig, an AI re­searcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who re­ceived peer re­views that seemed to have been pro­duced us­ing large lan­guage mod­els (LLMs). The re­ports, he says, were very ver­bose with lots of bul­let points” and re­quested analy­ses that were not the stan­dard sta­tis­ti­cal analy­ses that re­view­ers ask for in typ­i­cal AI or ma­chine-learn­ing pa­pers.”

But Neubig needed help prov­ing that the re­ports were AI-generated. So, he posted on X (formerly Twitter) and of­fered a re­ward for any­one who could scan all the con­fer­ence sub­mis­sions and their peer re­views for AI-generated text. The next day, he got a re­sponse from Max Spero, chief ex­ec­u­tive of Pangram Labs in New York City, which de­vel­ops tools to de­tect AI-generated text. Pangram screened all 19,490 stud­ies and 75,800 peer re­views sub­mit­ted for ICLR 2026, which will take place in Rio de Janeiro, Brazil, in April. Neubig and more than 11,000 other AI re­searchers will be at­tend­ing.

Pangram’s analy­sis re­vealed that around 21% of the ICLR peer re­views were fully AI-generated, and more than half con­tained signs of AI use. The find­ings were posted on­line by Pangram Labs. People were sus­pi­cious, but they did­n’t have any con­crete proof,” says Spero. Over the course of 12 hours, we wrote some code to parse out all of the text con­tent from these pa­per sub­mis­sions,” he adds.

The con­fer­ence or­ga­niz­ers say they will now use au­to­mated tools to as­sess whether sub­mis­sions and peer re­views breached poli­cies on us­ing AI in sub­mis­sions and peer re­views. This is the first time that the con­fer­ence has faced this is­sue at scale, says Bharath Hariharan, a com­puter sci­en­tist at Cornell University in Ithaca, New York, and se­nior pro­gramme chair for ICLR 2026. After we go through all this process … that will give us a bet­ter no­tion of trust.”

The Pangram team used one of its own tools, which pre­dicts whether text is gen­er­ated or edited by LLMs. Pangram’s analy­sis flagged 15,899 peer re­views that were fully AI-generated. But it also iden­ti­fied many man­u­scripts that had been sub­mit­ted to the con­fer­ence with sus­pected cases of AI-generated text: 199 man­u­scripts (1%) were found to be fully AI-generated; 61% of sub­mis­sions were mostly hu­man-writ­ten; but 9% con­tained more than 50% AI-generated text.
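As a sanity check, the flagged counts and the headline percentages above are consistent with each other: 15,899 of 75,800 reviews is about 21%, and 199 of 19,490 manuscripts is about 1%.

```python
# Reproduce the two headline percentages from the raw counts in the article.
fully_ai_reviews = 15_899
total_reviews = 75_800
review_share = fully_ai_reviews / total_reviews  # ~0.21, i.e. about 21%

fully_ai_papers = 199
total_papers = 19_490
paper_share = fully_ai_papers / total_papers  # ~0.01, i.e. about 1%
```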

Pangram de­scribed the model in a preprint1, which it sub­mit­ted to ICLR 2026. Of the four peer re­views re­ceived for the man­u­script, one was flagged as fully AI-generated and an­other as lightly AI-edited, the team’s analy­sis found.


For many re­searchers who re­ceived peer re­views for their sub­mis­sions to ICLR, the Pangram analy­sis con­firmed what they had sus­pected. Desmond Elliott, a com­puter sci­en­tist at the University of Copenhagen, says that one of three re­views he re­ceived seemed to have missed the point of the pa­per”. His PhD stu­dent who led the work sus­pected that the re­view was gen­er­ated by LLMs, be­cause it men­tioned nu­mer­i­cal re­sults from the man­u­script that were in­cor­rect and con­tained odd ex­pres­sions.

When Pangram re­leased its find­ings, Elliott adds, the first thing I did was I typed in the ti­tle of our pa­per be­cause I wanted to know whether my stu­den­t’s gut in­stinct was cor­rect”. The sus­pect peer re­view, which Pangram’s analy­sis flagged as fully AI-generated, gave the man­u­script the low­est rat­ing, leav­ing it on the bor­der­line be­tween ac­cept and re­ject”, says Elliott. It’s deeply frus­trat­ing”.

...

Read the original on www.nature.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.