10 interesting stories served every morning and every evening.




1 1,018 shares, 85 trendiness

Someone At YouTube Needs Glasses

Opened YouTube and was greeted with this abomination:

This is on a 32” 1440p display. There are five (5) videos visible, and 1/6 of the page would have been an enormous ad.

For reference, here is YouTube as of January 2019:

There are 30 videos visible and zero ads.

I really, really hope that this A/B test fails.

Unfortunately, using an advanced analytics package I’ve projected that around May 2026 the YouTube homepage will just be one video, and by September there will be no videos at all on the homepage.

Presumably by then we’ll have our mandatory NeuraLinks and the YouTube algorithm will be able to inject real-time ML-generated content (and ads) straight into our brains, tuning its output as needed to maximize our dopamine response.

I miss YouTube before they turned the pain dial all the way towards money.

...

Read the original on jayd.ml »

2 721 shares, 41 trendiness

Finland bans smartphones in schools

Pupils will be able to use their phones in some circumstances, but they will need to get permission from teachers.

The Finnish Parliament voted on Tuesday to approve a law that restricts the use of mobile devices by pupils at primary and secondary schools.

The new rules are expected to come into force after the summer break, in August.

The law does not entirely ban the use of mobile phones at school, and their use will be permitted in certain situations. But generally, the use of phones during class time will be prohibited.

Pupils will need to get special permission from teachers to use their phones, for example to assist them in their studies or to take care of personal health-related matters.

The new law also gives school staff members the authority to confiscate mobile devices from pupils who have caused teaching or learning disruptions.

Late last year, Education Minister Anders Adlercreutz (SPP) emphasised that children’s digital skills will still be supported despite the phone restrictions.


...

Read the original on yle.fi »

3 671 shares, 52 trendiness

Port of Los Angeles says shipping volume will plummet 35% next week as China tariffs start to bite

Shipments from China to the West Coast of the U.S. will plummet next week as the impact of President Donald Trump’s tariffs leads companies to cut their import orders.

Gene Seroka, executive director of the Port of Los Angeles, said Tuesday on CNBC’s “Squawk Box” that he expects incoming cargo volume to slide by more than a third next week compared with the same period in 2024.

“According to our own port optimizer, which measures the loadings in Asia, we’ll be down just a little bit over 35% next week compared to last year. And it’s a precipitous drop in volume, with a number of major American retailers stopping all shipments from China based on the tariffs,” Seroka said.

Shipments from China make up about 45% of the business for the Port of LA, though some transport companies will be looking to pick up goods at other points in Southeast Asia to try to fill up their ships, Seroka said.

“Realistically speaking, until some accord or framework can be reached with China, the volume coming out of there — save a couple of different commodities — will be very light at best,” Seroka said.

Along with the lower volume of goods, Seroka said he expects roughly a quarter of the usual number of ships arriving at the port to be canceled in May.

Trump announced a sharp increase in tariffs on Chinese goods on April 2, which led to escalation on both sides, eventually resulting in both the U.S. and China imposing levies of more than 100% on many goods from each other. U.S. Treasury Secretary Scott Bessent has described the situation as “unsustainable,” but there has been no sign of substantial negotiations between the two countries.

Data on shipments out of China had already started to signal slowing trade volume to the U.S., alarming some economists. Apollo Global Management’s chief economist, Torsten Slok, recently laid out a timeline in which lower imports from China lead to layoffs in the U.S. transportation and retail industries, empty shelves, and a recession this summer.

Seroka said he thinks U.S. retailers have about five to seven weeks before the impact of the curtailed shipments begins to bite, partly because companies stocked up ahead of Trump’s tariff announcements.

“I don’t see a complete emptiness on store shelves or online when we’re buying. But if you’re out looking for a blue shirt, you might find 11 purple ones and one blue in a size that’s not yours. So we’ll start seeing less choice on those shelves, simply because we’re not getting the variety of goods coming in here based on the additional costs in place. And for that one blue shirt that’s still left, you’ll see a price hike,” Seroka said.

...

Read the original on www.cnbc.com »

4 561 shares, 32 trendiness

How I Created Perfect Wiki and Reached $250K in Annual Revenue Without Investors

Hi, my name is Ilia. I founded Perfect Wiki — a SaaS product for creating internal company knowledge bases that works directly within Microsoft Teams. We created a simple and convenient tool for storing, editing, and sharing knowledge within companies. It all started with the idea of resolving one specific pain point: the built-in Wiki that Microsoft Teams offered was inconvenient, and there were no worthy alternatives with full integration into the platform.

In this article, I want to share how the idea came about, the mistakes I made, how I found my first customers, and how I gradually grew to a steady income of $250,000 a year over five years. All of this — without investors, a 20-person team, or a “Series A” round.

In May 2020, I lost my job and started thinking about new projects to launch or where to direct my efforts. The pandemic drastically changed the market: the mass transition to remote work boosted interest in online communication tools, and everyone wanted to launch their own video conferencing service. It felt like a gold rush, and I decided to follow the principle that in such times, those who sell shovels win, not those who search for gold.

Zoom became hugely popular during the pandemic. I decided to try making a small app — a translator — and published it on the Zoom Marketplace. But it turned out people were only interested in the Zoom app itself, and the marketplace had almost no traffic.

After that failure, I moved on to Plan B: I tried publishing the translator app on the Microsoft Teams Marketplace. It seemed to have significantly more users, and apps there had lots of ratings and installs. The platform felt “alive.” My intuition didn’t fail me: just a few days after publishing, someone bought a paid subscription. But I soon realized the translator app was very limited, with no room for growth. Microsoft could easily replace it at any time.

That’s when I decided to dive deeper into analyzing what other problems Microsoft Teams users were facing and what kind of service I could offer them. I was confident I’d find a niche because the traffic and activity on the marketplace were high — a ready-made customer base was right in front of me. I just needed to find a product idea that would solve a real problem.

I started reading forums, comments, and online discussions. It turned out the built-in Wiki in Microsoft Teams really annoyed users. It was slow and inconvenient. That’s how the idea came about: create a fast, user-friendly knowledge base built directly into Microsoft Teams. The main goal was to make it simple and intuitive for people who weren’t tech-savvy — just regular PC users.

I created and published the first version of the product in a fairly short time — it took me about three weeks. It already had page creation and editing features and, most importantly, full-text search (a much-requested feature users lacked in the built-in Wiki).

I used technologies and tools I was already very familiar with: Node.js + Express for the backend and React for the frontend.

Just a couple of days after publishing Perfect Wiki on the Microsoft Teams Marketplace, I got my first paying user. My assumptions were confirmed — people were actively looking for an alternative to the built-in Wiki, and they searched for it directly in the Teams marketplace. They found my app using the keyword “wiki.” It was an awesome free acquisition channel. Perfect Wiki was always the top search result because there were no competitors. That’s when I realized I had found a real pain point — and I could make money by solving it.

Today, over 500 companies around the world use Perfect Wiki. Our main markets are the US, Canada, the UK, and Germany.

Over five years, the product has grown significantly. Revenue is now about $250,000 a year. However, it wasn’t always smooth sailing — there were months with no growth, times when everything felt stuck. We had to change plans, improve the product, and look for new ideas.

In 2024, Microsoft even featured us at Microsoft Build as an example of an app that’s top-rated and highly valued among Teams users — one that really works. A big milestone for us.

Many of our clients came to us after trying the Microsoft built-in Wiki. It was clunky, inconvenient, and didn’t do the job well. We focused on simplicity: the essential features only, nothing extra — and everything should function inside Microsoft Teams.

Integration with Microsoft Teams is the key. Unlike other knowledge base platforms, Perfect Wiki doesn’t require switching to a separate site or tab. It’s available right where employees already spend most of their day — in Microsoft Teams. It saves time, doesn’t add any difficulties, and makes working with a knowledge base a natural part of the workflow.

Microsoft tried to address this issue via products like Viva and Loop, but they turned out to be too bulky and confusing. Competitors like Confluence or Notion just aren’t integrated into Teams in a way that’s convenient for users.

Perfect Wiki was built specifically for Microsoft Teams — and that’s been our main advantage from day one.

Currently, the team behind Perfect Wiki is just two people. I handle development and product, and my colleague manages user support. Despite having a tiny team, we manage to achieve a lot: we launch new features quickly, communicate with customers, test ideas, and maintain a stable service.

We outsource some marketing and content tasks, but everything related to the product and code we do ourselves.

Sometimes we bring in new people if we feel it’s time to grow. Right now is one of those moments: if you’re an experienced developer familiar with Node.js + Express + React, send us your CV at hello@perfectwiki.com.

It all starts with communication. We have an internal app chat — people regularly send us questions, suggestions, and feedback. We also do demo calls, discuss use-case scenarios, and every quarter we reach out to active, loyal users asking for feature and improvement ideas. This helps us deeply understand user needs.

We don’t implement features just because they seem useful. Every new piece of functionality in Perfect Wiki must be genuinely requested and needed by users. For example, I wasn’t sure whether a “search within a page” feature was necessary. But after several complaints about documents getting longer and Ctrl+F not working in Teams, it became clear the feature was needed.

Another example: users suggested a weekly digest with a list of new or updated knowledge base articles. They wanted to stay in the loop about changes.

That’s how we improve the product — not by simply guessing, but in collaboration with our users.

And we actually use Perfect Wiki ourselves — that helps us spot areas for changes and growth. All our internal documentation, tasks, and plans are stored in Perfect Wiki. Even our public Help Center runs on our platform. This way, we test the product in real use and quickly notice what needs fixing or tweaking.

Every time I check out competitors’ sites — those who also build knowledge base or customer support platforms — I notice something odd. Almost all of them use third-party tools like Intercom or Zendesk to support their own customers. That surprises me. If your product is so great — why don’t you use it yourself? For me, that’s a golden rule: your product should be so good you want to use it yourself. If not, something’s wrong.

Right now, I earn around $25,000 per month. My monthly expenses are pretty modest:

Everything else is my profit.

The most important rule: don’t be afraid to build niche products for a narrow audience. It’s vital to create something that solves a specific problem really well.

The second lesson I learned: simplicity wins. The simpler and more understandable your product, the easier it is to sell and maintain. When you have a small team and limited resources, simplicity isn’t a luxury — it’s a necessity. It keeps you from drowning in features, endless requests, and tech debt.

Honestly? I didn’t have big ambitions. I just wanted to earn a stable $70–80K a year — about what I earned at my previous job. Everything beyond that has been a pleasant bonus. Perfect Wiki has grown more than I ever expected. All without investments, offices, or a big team. Just because the product was in demand — and we kept making it better, step by step.

Perfect Wiki has already become more than just an add-on to Microsoft Teams. Now it can also be used in Slack, via ChatGPT, or as a chatbot on your website. You can even create a public support portal for your customers — our Help Center is a prime example.

We’re constantly adding new integrations, improving search, and most importantly — always listening to our users. The best is still ahead!

P.S. If you’re curious to follow our product journey, I have a Telegram channel and Twitter.

...

Read the original on habr.com »

5 449 shares, 31 trendiness

Port of L.A. executive director says retailers will soon have only about 7 weeks of full inventories left amid U.S.-China trade war


...

Read the original on fortune.com »

6 402 shares, 24 trendiness

XiaomiMiMo/MiMo: MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining

This code repository is licensed under the Apache 2.0 License.

Currently, most successful RL work, including open-source research, relies on relatively large base models, e.g., 32B models, particularly for enhancing code reasoning capabilities. Moreover, it was widely considered that achieving uniform and simultaneous improvements in both mathematical and code capabilities within a small model is challenging. Nonetheless, we believe that the effectiveness of an RL-trained reasoning model relies on the inherent reasoning potential of the base model. To fully unlock the reasoning potential of language models, efforts must focus not only on post-training but also on pre-training strategies tailored to reasoning.

In this work, we present MiMo-7B, a series of models trained from scratch and born for reasoning tasks. Our RL experiments from MiMo-7B-Base show that our model possesses extraordinary reasoning potential, even surpassing much larger 32B models. Additionally, we perform RL training on a cold-started SFT model, resulting in MiMo-7B-RL, which demonstrates superior performance on both mathematics and code reasoning tasks, matching the performance of OpenAI o1-mini.

We open-source the MiMo-7B series, including checkpoints of the base model, the SFT model, the RL model trained from the base model, and the RL model trained from the SFT model. We believe this report, along with the models, will provide valuable insights for developing powerful reasoning LLMs that benefit the larger community.

We optimize the data preprocessing pipeline, enhancing text extraction toolkits and applying multi-dimensional data filtering to increase reasoning pattern density in pre-training data. We also employ multiple strategies to generate massive, diverse synthetic reasoning data.

We adopt a three-stage data mixture strategy for pre-training. Overall, MiMo-7B-Base is pre-trained on approximately 25 trillion tokens.

We incorporate Multiple-Token Prediction (MTP) as an additional training objective, which enhances model performance and accelerates inference.
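The MTP objective can be sketched as an auxiliary loss: alongside standard next-token prediction, an extra head predicts the token one position further ahead, and the two cross-entropy terms are combined. Below is a minimal, dependency-free sketch; the single extra head and the 0.5 auxiliary weight are illustrative assumptions, not MiMo's actual configuration.

```python
import math

def cross_entropy(probs, target):
    # Negative log-likelihood of the target token under a predicted distribution.
    return -math.log(probs[target])

def mtp_loss(next_probs, future_probs, next_target, future_target, mtp_weight=0.5):
    """Standard next-token loss plus a weighted auxiliary loss from an
    extra head that predicts one token further ahead."""
    main = cross_entropy(next_probs, next_target)
    aux = cross_entropy(future_probs, future_target)
    return main + mtp_weight * aux

# Toy distributions over a 3-token vocabulary.
next_probs = {0: 0.7, 1: 0.2, 2: 0.1}
future_probs = {0: 0.1, 1: 0.6, 2: 0.3}
loss = mtp_loss(next_probs, future_probs, next_target=0, future_target=1)
print(round(loss, 4))  # 0.6121
```

At inference time the same extra head can propose draft tokens for speculative decoding, which is where the acceleration comes from.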


We curate 130K mathematics and code problems as RL training data, which can be verified by rule-based verifiers. Each problem undergoes careful cleaning and difficulty assessment to ensure quality. We employ only rule-based accuracy rewards to avoid potential reward hacking.

To mitigate the sparse reward issue for challenging code problems, we introduce a test-difficulty-driven code reward. By assigning fine-grained scores to test cases of varying difficulty levels, the policy can be optimized more effectively via a dense reward signal.

We implement a data re-sampling strategy for easy problems to enhance rollout sampling efficiency and stabilize policy updates, particularly in the later phases of RL training.
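A sketch of what a test-difficulty-driven reward might look like: each test case carries a difficulty weight, so a submission that passes only the easier cases still receives a dense, partial signal instead of a sparse 0/1 reward. The normalization and the weight values below are assumptions for illustration, not the paper's exact formula.

```python
def difficulty_weighted_reward(results, difficulties):
    """Score one code submission: sum the difficulty weights of the
    passed test cases and normalize by the total weight.

    results:      list of booleans, one per test case (True = passed)
    difficulties: per-case difficulty weights, higher = harder
    """
    total = sum(difficulties)
    earned = sum(d for passed, d in zip(results, difficulties) if passed)
    return earned / total if total > 0 else 0.0

# Passing the two easy cases but failing the hard one earns partial
# credit rather than a flat zero.
r = difficulty_weighted_reward([True, True, False], [0.2, 0.3, 1.0])
print(r)  # ≈ 0.333
```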


We develop a Seamless Rollout Engine to accelerate RL training and validation. Our design integrates continuous rollout, asynchronous reward computation, and early termination to minimize GPU idle time, achieving 2.29× faster training and 1.96× faster validation.

We support MTP in vLLM and enhance the robustness of the inference engine in the RL system.


[Recommended] We officially support inference with MiMo-MTP using our fork of vLLM.

from vllm import LLM, SamplingParams

model_path = "/path/to/MiMo"

llm = LLM(
    model=model_path,
    trust_remote_code=True,
    num_speculative_tokens=1,
    disable_log_stats=False,
)
sampling_params = SamplingParams(temperature=0.6)

conversation = [
    {
        "role": "system",
        "content": "",
    },
    {
        "role": "user",
        "content": "Write an essay about the importance of higher education.",
    },
]

outputs = llm.chat(conversation,
                   sampling_params=sampling_params,
                   use_tqdm=False)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

print("=" * 80)

Or, you can register a vLLM loader for MiMo without loading the MTP parameters.

You can copy registry/register_mimo_in_vllm.py to your directory and import it with:

import register_mimo_in_vllm

from vllm import LLM, SamplingParams

model_path = "/path/to/MiMo"

llm = LLM(
    model=model_path,
    trust_remote_code=True,
    # num_speculative_tokens=1,
    disable_log_stats=False,
)
sampling_params = SamplingParams(temperature=0.6)

from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/MiMo"

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
inputs = tokenizer(["Today is"], return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output.tolist()[0]))

* We recommend using our fork of vLLM, which is developed based on vLLM 0.7.3.

We haven’t verified MiMo with other inference engines and welcome contributions based on the model definition in the Huggingface repo 💻.

@misc{xiaomi2025mimo,
  title={MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining},
  author={{Xiaomi LLM-Core Team}},
  year={2025},
  primaryClass={cs.CL},
  url={https://github.com/XiaomiMiMo/MiMo},
}

Please contact us at mimo@xiaomi.com or open an issue if you have any questions.

...

Read the original on github.com »

7 368 shares, 16 trendiness

You Wouldn't Download a Hacker News

And now I can analyze it with DuckDB. Behold the fraction of total comments and stories referencing key topics over time!

As part of building hn.unlurker.com, I wrote an HN API client. There are already a bunch of other clients, but I wanted to try the latest Go features and linters on a new project. I’m glad I did; it was a lot of fun.

The client can retrieve active items, lists of items, and so on (comments and stories are called “items” in the HN API). Although I only really needed recent items for my project, for completeness I added “scan,” which downloads all the items, in order, from zero to the latest or the other way around.

I wondered — could I just download the whole thing? Extrapolating from a few thousand items, it would only be tens of GiB of JSON. I thought I’d give it a try.

hn scan --no-cache --asc -c- -o full.json

I had to CTRL-C a stalled download a few times, but scan is resumable, so after a few hours I was done. I had a 20 GiB JSON file of everything that has ever happened on Hacker News, and I can just re-run the command above to “top it off” any time I need the latest. But what could I do with it?

First I just grepped for things. How many times has the phrase “correct horse battery staple” appeared on the site? Quite a few: 231 times (the last one just today). But grepping stuff is old news, so I thought I’d try out DuckDB.
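The grep step is easy to reproduce over the newline-delimited dump in plain Python. A small sketch, run here against inline sample data rather than the real 20 GiB file (the `text` field matches the HN API's item schema; streaming line by line keeps memory flat):

```python
import io
import json

def count_phrase(ndjson_stream, phrase):
    """Count items whose text contains the phrase, scanning one JSON
    object per line, case-insensitively."""
    phrase = phrase.lower()
    hits = 0
    for line in ndjson_stream:
        item = json.loads(line)
        text = (item.get("text") or "").lower()
        if phrase in text:
            hits += 1
    return hits

# Inline stand-in for full.json.
sample = io.StringIO(
    '{"id": 1, "text": "correct horse battery staple"}\n'
    '{"id": 2, "text": "unrelated comment"}\n'
    '{"id": 3, "text": "Correct Horse Battery Staple again"}\n'
)
n = count_phrase(sample, "correct horse battery staple")
print(n)  # 2
```

Against the real dump you would pass `open("full.json")` in place of the StringIO.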

In the database world, DuckDB is unique: a super-fast embeddable analytics execution engine also available as a command-line tool. I spend most of my day wrangling a different database (there’s the plug my coworkers will be looking for), but I’ve been meaning to try DuckDB, and it seemed perfect for this one-off task.

As it turns out, with their new UI, for novices like me it’s a breeze to use. And LLMs are pretty good at helping craft the SQL queries. I just had to import the data:

CREATE TABLE items AS
SELECT *
FROM read_json_auto('/home/jason/full.json', format='nd', sample_size=-1);

Then query it. Here’s a 12-week moving average of the fraction of total items containing the terms I’m interested in:

WITH weekly AS (
  SELECT
    DATE_TRUNC('week', TO_TIMESTAMP(time)) AS week_start,
    COUNT(*) FILTER (WHERE text ILIKE '%python%')::float / NULLIF(COUNT(*), 0)
      AS python_prop,
    COUNT(*) FILTER (WHERE text ILIKE '%javascript%')::float / NULLIF(COUNT(*), 0)
      AS javascript_prop,
    COUNT(*) FILTER (WHERE text ILIKE '%java%')::float / NULLIF(COUNT(*), 0)
      AS java_prop,
    COUNT(*) FILTER (WHERE text ILIKE '%ruby%')::float / NULLIF(COUNT(*), 0)
      AS ruby_prop,
    COUNT(*) FILTER (WHERE text ILIKE '%rust%')::float / NULLIF(COUNT(*), 0)
      AS rust_prop
  FROM items
  GROUP BY week_start
)
SELECT
  week_start,
  AVG(python_prop) OVER (
    ORDER BY week_start
    ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
  ) AS avg_python_12w,
  AVG(javascript_prop) OVER (
    ORDER BY week_start
    ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
  ) AS avg_javascript_12w,
  AVG(java_prop) OVER (
    ORDER BY week_start
    ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
  ) AS avg_java_12w,
  AVG(ruby_prop) OVER (
    ORDER BY week_start
    ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
  ) AS avg_ruby_12w,
  AVG(rust_prop) OVER (
    ORDER BY week_start
    ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
  ) AS avg_rust_12w
FROM weekly
ORDER BY week_start;

Overall, DuckDB seems really great for analyzing data sets of this size.

Now that I have a local download of all Hacker News content, I can train hundreds of LLM-based bots on it and run them as contributors, slowly and inevitably replacing all human text with the output of a Chinese-room oscillator perpetually echoing and recycling the past.

Or, alternatively, I think for this project I am done. Someone else will have to take it to the next logical step.

Thanks for reading! Please check out hn.unlurker.com, take a look at my other articles, or find me on X.

...

Read the original on www.jasonthorsness.com »

8 305 shares, 29 trendiness

deepseek-ai/DeepSeek-Prover-V2

We introduce DeepSeek-Prover-V2, an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem-proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process, combined with DeepSeek-V3’s step-by-step reasoning, to create an initial cold start for reinforcement learning. This process enables us to integrate both informal and formal mathematical reasoning into a unified model.

To construct the cold-start dataset, we develop a simple yet effective pipeline for recursive theorem proving, utilizing DeepSeek-V3 as a unified tool for both subgoal decomposition and formalization. We prompt DeepSeek-V3 to decompose theorems into high-level proof sketches while simultaneously formalizing these proof steps in Lean 4, resulting in a sequence of subgoals.

We use a smaller 7B model to handle the proof search for each subgoal, thereby reducing the associated computational burden. Once the decomposed steps of a challenging problem are resolved, we pair the complete step-by-step formal proof with the corresponding chain-of-thought from DeepSeek-V3 to create cold-start reasoning data.

We curate a subset of challenging problems that remain unsolved by the 7B prover model in an end-to-end manner, but for which all decomposed subgoals have been successfully resolved. By composing the proofs of all subgoals, we construct a complete formal proof for the original problem. This proof is then appended to DeepSeek-V3’s chain-of-thought, which outlines the corresponding lemma decomposition, thereby producing a cohesive synthesis of informal reasoning and subsequent formalization.
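Mechanically, that composition step amounts to splicing each resolved subgoal proof back into the sketch where a placeholder was left. A toy illustration in Python (the sketch format and the helper are hypothetical, not DeepSeek's actual pipeline code):

```python
def compose_proof(sketch, subgoal_proofs):
    """Replace each 'sorry' placeholder in a Lean proof sketch with the
    corresponding resolved subgoal proof, in order."""
    parts = sketch.split("sorry")
    if len(parts) - 1 != len(subgoal_proofs):
        raise ValueError("need exactly one subgoal proof per placeholder")
    out = parts[0]
    for proof, rest in zip(subgoal_proofs, parts[1:]):
        out += proof + rest
    return out

# A two-subgoal sketch whose holes have been closed by the 7B prover.
sketch = (
    "have h1 : a = b := by sorry\n"
    "have h2 : b = c := by sorry\n"
    "exact h1.trans h2"
)
full = compose_proof(sketch, ["ring", "simp [hbc]"])
print(full)
```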

After fine-tun­ing the prover model on the syn­thetic cold-start data, we per­form a re­in­force­ment learn­ing stage to fur­ther en­hance its abil­ity to bridge in­for­mal rea­son­ing with for­mal proof con­struc­tion. Following the stan­dard train­ing ob­jec­tive for rea­son­ing mod­els, we use bi­nary cor­rect-or-in­cor­rect feed­back as the pri­mary form of re­ward su­per­vi­sion.

The re­sult­ing model, DeepSeek-Prover-V2-671B, achieves state-of-the-art per­for­mance in neural the­o­rem prov­ing, reach­ing % pass ra­tio on the MiniF2F-test and solv­ing 49 out of 658 prob­lems from PutnamBench. The proofs gen­er­ated by DeepSeek-Prover-V2 for the miniF2F dataset are avail­able for down­load as a ZIP archive.

we in­tro­duce ProverBench, a bench­mark dataset com­pris­ing 325 prob­lems. Of these, 15 are for­mal­ized from num­ber the­ory and al­ge­bra ques­tions fea­tured in the re­cent AIME com­pe­ti­tions (AIME 24 and 25), of­fer­ing au­then­tic high-school com­pe­ti­tion-level chal­lenges. The re­main­ing 310 prob­lems are drawn from cu­rated text­book ex­am­ples and ed­u­ca­tional tu­to­ri­als, con­tribut­ing a di­verse and ped­a­gog­i­cally grounded col­lec­tion of for­mal­ized math­e­mat­i­cal prob­lems. This bench­mark is de­signed to en­able more com­pre­hen­sive eval­u­a­tion across both high-school com­pe­ti­tion prob­lems and un­der­grad­u­ate-level math­e­mat­ics.

We re­lease DeepSeek-Prover-V2 in two model sizes: 7B and 671B pa­ra­me­ters. DeepSeek-Prover-V2-671B is trained on top of DeepSeek-V3-Base. DeepSeek-Prover-V2-7B is built upon DeepSeek-Prover-V1.5-Base and fea­tures an ex­tended con­text length of up to 32K to­kens.

You can di­rectly use Huggingface’s Transformers for model in­fer­ence. DeepSeek-Prover-V2-671B shares the same ar­chi­tec­ture as DeepSeek-V3. For de­tailed in­for­ma­tion and sup­ported fea­tures, please re­fer to the DeepSeek-V3 doc­u­men­ta­tion on Hugging Face.

The following is a basic example of generating a proof for a problem from the miniF2F dataset:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import time

torch.manual_seed(30)

model_id = "DeepSeek-Prover-V2-7B"  # or "DeepSeek-Prover-V2-671B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

formal_statement = """
import Mathlib
import Aesop

set_option maxHeartbeats 0

open BigOperators Real Nat Topology Rat

/-- What is the positive difference between $120\%$ of 30 and $130\%$ of 20? Show that it is 10.-/
theorem mathd_algebra_10 : abs ((120 : ℝ) / 100 * 30 - 130 / 100 * 20) = 10 := by
  sorry
""".strip()

prompt = """
Complete the following Lean 4 code:

```lean4
{}
```

Before producing the Lean 4 code to formally prove the given theorem, provide a detailed proof plan outlining the main proof steps and strategies.
The plan should highlight key ideas, intermediate lemmas, and proof structures that will guide the construction of the final formal proof.
""".strip()

chat = [
    {"role": "user", "content": prompt.format(formal_statement)},
]

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

start = time.time()
outputs = model.generate(inputs, max_new_tokens=8192)
print(tokenizer.batch_decode(outputs))
print(time.time() - start)

The use of DeepSeek-Prover-V2 models is subject to the Model License.

If you have any questions, please raise an issue or contact us at service@deepseek.com.

...

Read the original on github.com »

9 298 shares, 21 trendiness

"AI-first" is the new Return To Office

The latest fad amongst tech CEOs is no longer “founder mode”, or taking drugs that they would fire you for taking, or telling everybody to return to the office — it’s demanding that all work be “AI-first”! This is a great idea if you think nobody at your company is great at what they do. It may otherwise be a suboptimal strategy. Let’s dive in!

Let’s use me as a case study. I’m pretty okay at writing. For example, one time I wrote a fairly technical analysis of Twitter’s platform strategy that inspired Will.I.Am of the Black Eyed Peas to start Twitter beef with me two years later when he read the post and took offense to my referring to him as “nobody’s favorite rapper”.

This is something your GPTs cannot do, I assure you. An average LLM won’t even know that Drake’s favorite MIME type is application/pdf. Chalk one up for the greatness of human creativity.

Shopify’s CEO Tobi Lütke (personal motto: “what if a Canadian was all the worst things about the United States?”) started the “AI-first” trend, with one of those big memos that included, amongst other things, the declaration that “We will add AI usage questions to our performance and peer review questionnaire.” This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I’m actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn’t have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology. Isn’t that interesting?

Some of the other CEOs talking about the use of AI are a little more reasonable. Duolingo’s CEO Luis von Ahn seems to be trying to be somewhat more moderate in his memo, stating plainly that he doesn’t see AI replacing his employees. (Though that does immediately raise the “who brought that up?” question…) Yet even in this more even-handed take, we still get the insistence that “AI use will be part of what we evaluate in performance reviews”. This is really weird!

The funny thing is, I’m not saying LLMs are without their uses. Let’s use me as a case study again. I’m a lousy coder, these days. I haven’t had time to keep up my skills, and the area I focused on for most of my dev career (front end web development) changes particularly quickly. So I use some of the modern tools to help me get up to speed and get more done in a limited amount of time, because otherwise I’m woefully unproductive in the short windows I have to code in my free time.

To be explicit: I code on the weekends, not professionally. That means I’m not very good at it. I’m certainly nothing like the incredibly talented developers that I’ve had the good fortune to work with over the years. I’m just fluent enough to be able to debug the broken code that LLMs generate, or to catch the bugs that they spew out by default. And I’m sure I don’t even catch all the bugs that pop up, but fortunately, I’m not making any production systems; I’m just building little toy apps and sites for myself.

This is an important illustration: AI is really good for helping you if you’re bad at something, or at least below average. But it’s probably not the right tool if you’re great at something. So why would these CEOs be saying, almost all using the exact same phrasing, that everyone at their companies should be using these tools? Do they think their employees are all bad at their jobs?

Big tech CEOs and VCs really love performing for each other. We know they hang out in group chats like high schoolers, preening and sending each other texts, each trying to make sure they’re all wearing the latest fashions, whether it’s a gold chain or a MAGA hat or just repeating a phrase that they heard from another founder. A key way of showing that they’re part of this cohort is to make sure they’re having a tantrum and acting out against their workers fairly regularly.

The return to office fad was a big part of this effort, often largely motivated by reacting to the show of worker power in the racial justice activism efforts of 2020. Similarly, being “AI-first” shows that a company is participating in the AI trend in the “right” way, by imposing it on workers, rather than trusting workers to judge what tools are useful for them to do their jobs.

A more normal policy on AI at a company might be something like this:

Our IT department has evaluated a set of LLM tools and determined that these ones meet our requirements for security, performance, data governance, reliability, manageability and integration with our workflows. We’ll be doing a controlled deployment of these tools and you can choose to use them if you think they’ll help you with your work; please share your feedback on whether they are helpful, and what might make them more useful for you over time. Here are the ways these AI tools meet our corporate standards for compliance with intellectual property consent, sustainability and environmental goals, and accessibility.

This would not get you invited to the fascist VC group chat, tho!

How did we get here? What can we do? Maybe it starts by trying to just… be normal about technology.

There’s an orthodoxy in tech tycoon circles that’s increasingly referred to, ironically, as “tech optimism”. I say “ironically” because there’s nothing optimistic about it. The culture is one of deep insecurity, reacting defensively, or even lashing out aggressively, when faced with any critical conversation about new technology. That tendency is paired with a desperate and facile cheerleading of startups, ignoring the often equally interesting technology stories that come from academia, or from mature industries, or from noncommercial and open source communities that don’t get tons of media coverage, but quietly push forward innovating without the fame and fortune. By contrast, those of us who actually are optimistic about technology (usually because we either create it, or are in communities with those who do) are just happily moving forward, not worrying when people point out the bugs that we all ought to be fixing together.

We don’t actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It’s strange to have to say it, but… there are people picking up and adopting AI tools on their own, because they find them useful. This is true, despite the fact that there is so goddamn much AI hype out there, with snake oil salesmen pushing their bullshit religion of magical thinking machines and overpromising that these AI tools can do tasks that they’re simply not capable of performing. It’s telling that the creators of so many of the AI tools don’t even have enough confidence in their offerings to simply let users choose to adopt them, and are instead forcing them into users’ faces in every possible corner of their apps and websites.

The strangest part is, the AI pushers don’t have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then… you are a bad leader and should step down. Because you’ve created a broken culture.

But I don’t think the audience for these memos is really the people who work at these companies. I think the audience is the other CEOs and investors and VCs in the industry, just as it was for the other fads of the last few years. And I expect that AI will indeed be part of how we evaluate performance in the future, but mostly in that the way CEOs communicate to their teams about technologies like AI will be part of how we all evaluate their performance as leaders.

...

Read the original on www.anildash.com »

10 272 shares, 39 trendiness

Google Play sees 47% decline in apps since start of last year

From the start of 2024 to the present, the Android app marketplace went from hosting about 3.4 million apps worldwide to just around 1.8 million, according to a new analysis by app intelligence provider Appfigures. That’s a decline of about 47%, representing a significant purge of the apps that have been available to Android users globally.

The decline is not part of some larger global trend, the firm also notes. During the same period, Apple’s iOS App Store went from hosting 1.6 million apps to now just around 1.64 million apps, for instance — a slight increase.
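The article’s two percentage claims can be verified with a couple of lines (an illustrative check, not part of the original piece):

```python
# Percent change from start-of-2024 app counts (in millions) to current counts.
def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

print(round(pct_change(3.4, 1.8)))      # Google Play: -47
print(round(pct_change(1.6, 1.64), 1))  # iOS App Store: 2.5
```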

In Google’s case, the decline in apps could be a relief for Android device owners who have had to sort through scammy, spammy, and otherwise poor-quality apps to find the best ones to install. The reduction could also help developers who have had to fight for visibility.

Over the years, Google Play’s less stringent requirements for app review have led to the marketplace being overrun with lower-quality apps. While Apple continues to enforce strict app review measures before publication, Google often relies on automated checks combined with malware scans to speed up the app-review process. It tends to have a shorter app-review period as a result of its lighter touch in terms of human review.

In July 2024, Google announced it would raise the minimum quality requirements for apps, which may have impacted the number of available Play Store app listings.

Instead of only banning broken apps that crashed, wouldn’t install, or wouldn’t run properly, the company said it would begin banning apps that demonstrated “limited functionality and content.” That included static apps without app-specific features, such as text-only apps or PDF-file apps. It also included apps that provided little content, like those that only offered a single wallpaper. Additionally, Google banned apps that were designed to do nothing or had no function, which may have been tests or other abandoned developer efforts.

Reached for comment, Google confirmed that its new policies were factors here, which also included an expanded set of verification requirements, required app testing for new personal developer accounts, and expanded human reviews to check for apps that try to deceive or defraud users.

In addition, the company pointed to other 2024 investments in AI for threat detection, stronger privacy policies, improved developer tools, and more. As a result, Google prevented 2.36 million policy-violating apps from being published on its Play Store and banned more than 158,000 developer accounts that had attempted to publish harmful apps, it said.

One factor Google didn’t cite was the new trader status rule enforced by the EU as of this February, which began requiring developers to share their names and addresses in the app’s listing. Those who failed to do so would see their apps removed from EU app stores. (It’s worth pointing out that Apple also began requiring trader status information in February and did not see a decline in available apps as a result.)

Appfigures additionally notes it began seeing a decline in the number of apps on the Google Play Store even before the official start of the purge last summer; it doesn’t yet have an explanation for this change. However, the firm says there have been 10,400 releases on Google Play so far this year, up 7.1% year-over-year as of April.

...

Read the original on techcrunch.com »

To add this web app to your iOS home screen tap the share button and select "Add to the Home Screen".

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.