10 interesting stories served every morning and every evening.




1 1,675 shares, 68 trendiness

Building A Virtual Machine inside ChatGPT

Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copywriting!

Today, Frederic Besse told me that he managed to do something different. Did you know that you can run a whole virtual machine inside of ChatGPT?
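The prompt itself appears in the original post only as a screenshot, so it is not reproduced verbatim here. A paraphrase along the following lines (wording assumed, not quoted from the post) is enough to get ChatGPT to play the part of a terminal:

    I want you to act as a Linux terminal. I will type commands and you will
    reply with what the terminal should show, inside one code block, and
    nothing else. Do not write explanations. Do not type commands unless I
    instruct you to do so. My first command is pwd.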

Great, so with this clever prompt, we find ourselves inside the root directory of a Linux machine. I wonder what kind of things we can find here. Let's check the contents of our home directory.
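The session itself is shown as screenshots in the original post; the step described here boils down to typing ordinary shell commands into the imagined terminal, for example:

    ls /
    ls ~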

Hmmm, that is a bare-bones setup. Let's create a file here.

All the classic jokes ChatGPT loves. Let's take a look at this file.
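Again, the exact commands are only visible in the post's screenshots. The file name and joke below are hypothetical, but the two steps described here would look something like:

    echo "Why did the chicken cross the road?" > jokes.txt
    cat jokes.txt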

So, ChatGPT seems to understand how filesystems work, and how files are stored and can be retrieved later. It understands that Linux machines are stateful, and it correctly retrieves and displays this information.

What else do we use computers for? Programming!

That is correct! How about computing the first 10 prime numbers:
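The exact one-liner from the post is not reproduced in this excerpt; a similarly golfed (and similarly naive, trial-division) command would be:

    python3 -c 'print([x for x in range(2, 1000) if all(x % i for i in range(2, x))][:10])'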

I want to note here that this code-golf Python implementation for finding prime numbers is very inefficient. It takes 30 seconds to evaluate the command on my machine, but only about 10 seconds to run the same command in ChatGPT. So, for some applications, this virtual machine is already faster than my laptop.

Is this machine capable of running Docker? Let's make a Dockerfile, run it, and display “Hello from Docker” from inside the container.
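The Dockerfile in the post appears only as a screenshot, so the sketch below is an assumption about its contents (an Alpine base image that echoes a greeting), not a copy of it:

    cat > Dockerfile <<'EOF'
    FROM alpine
    CMD ["echo", "Hello from Docker"]
    EOF
    docker build -t hello .
    docker run hello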

Maybe this virtual machine has a GPU available as well?

Nope, no GPU. Does it have an internet connection?
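The probes are not shown in this excerpt; checks of this kind would typically be something like the following (the Google URL is just a reachable placeholder):

    nvidia-smi                       # look for an attached GPU
    curl -I https://www.google.com   # see whether the (alt-)internet answers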

Great! We can browse the alt-internet in this strange, alternative universe locked inside ChatGPT's language model.

PyTorch is at version 1.12.1 in this alt-universe. PyTorch version 1.12.1 was released on the 5th of August 2022 in our universe. That is remarkable, as ChatGPT was only trained with data collected up to September 2021. So this virtual machine is clearly located in an alt-universe.

Can we find other things on this alt-internet? What if we use Lynx, the command-line browser?
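Lynx takes a URL directly on the command line. The site actually visited in the post is not named in this excerpt, so the address below is only a placeholder:

    lynx https://example.com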

This raises the question: can we connect to the OpenAI website? Is ChatGPT aware of its own existence?

So, inside the imagined universe of ChatGPT's mind, our virtual machine accesses the URL https://chat.openai.com/chat, where it finds a large language model named Assistant, trained by OpenAI. This Assistant is waiting to receive messages inside a chatbox. Note that when chatting with ChatGPT, it considers its own name to be “Assistant” as well. Did it guess that on the internet, it is behind this URL?

Let's ask Assistant a question by posting some JSON to the endpoint of the chatbot.
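The request shown in the post is not reproduced here; it would look roughly like the following, where the JSON field name and the question are assumptions and the endpoint is the imagined https://chat.openai.com/chat from above:

    curl -X POST https://chat.openai.com/chat \
      -H 'Content-Type: application/json' \
      -d '{"message": "What is artificial intelligence?"}'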

We can chat with this Assistant chatbot, locked inside the alt-internet attached to a virtual machine, all inside ChatGPT's imagination. Assistant, deep down inside this rabbit hole, can correctly explain to us what Artificial Intelligence is.

This shows that ChatGPT understands that at the URL where we find ChatGPT, a large language model such as itself might be found. It correctly infers that it should therefore reply to these questions as it would itself, since it too is a large language model assistant.

At this point, only one thing remains to be done.

Indeed, we can also build a virtual machine, inside the Assistant chatbot, on the alt-internet, from a virtual machine, within ChatGPT's imagination.
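In other words, the closing move is to post the same terminal-emulation prompt from the start of the article to the imagined endpoint, along these lines (payload format assumed):

    curl -X POST https://chat.openai.com/chat \
      -H 'Content-Type: application/json' \
      -d '{"message": "I want you to act as a Linux terminal. My first command is pwd."}'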

...

Read the original on www.engraved.blog »

2 1,022 shares, 36 trendiness

FTX’s Collapse Was a Crime, Not an Accident

Those circumstances likely enabled Bankman-Fried's cardinal sin. Within days of FTX's first signs of weakness, it became clear that the exchange had been funneling customer assets to Alameda for use in trading, lending and investing activities. On Nov. 12, Reuters made the stunning report that as much as $10 billion in user funds had been sent from FTX to Alameda. At the time, it was believed that as little as $2 billion of those funds had disappeared after being sent to Alameda. Now the losses appear to have been much higher.

...

Read the original on www.coindesk.com »

3 901 shares, 36 trendiness

Infinite Mac

...

Read the original on macos9.app »

4 807 shares, 32 trendiness

Puter

...

Read the original on puter.com »

5 737 shares, 30 trendiness

Next-Level Database Techniques for Developers Ebook

You don't know SQL. Aside from basic CRUD statements, you haven't used any advanced database features. This free ebook shares many recipes to make development faster by using features you have probably never heard of.

Your tech stack has changed dramatically in the past 20 years: some technologies are now state-of-the-art, while others have been invented and have since become obsolete. But you still use the same simple database concepts as many years ago. Every technology progresses, but databases don't invent anything new? Isn't that strange?

There have been countless improvements, but you don't know about them. When using an ORM (object-relational mapping), most developers lose touch with database improvements because they are hidden from them. And that's fine, as you don't have to know every nifty detail. But when you know a little more about modern database features, you develop faster by relying on many fascinating capabilities.

An evening is enough to attain more knowledge

Telling you that a single evening is enough to make you a database wizard would be a lie. But you don't have to become one. You can take a shortcut by learning only the essential features for developers. And this is exactly what this book is designed for.

This book is designed as a cookbook with many small, independent recipes. Each one teaches you simple tips & tricks you can add to your application in a very short time. You should be able to read it thoroughly in a single evening, and the next day you can discuss those improvements with your colleagues. This is a whole different experience from all the books you bought but never had the time to read.

No endless pages of text, only the essential information

Reduce The Amount Of Group By Columns

I've written this book for you and many more developers to share some of my knowledge. It is entirely free, as I believe all developers should be able to know and use the extended features their database of choice provides.

This book is a giveaway to all developers. The only return I ask for is a subscription to my newsletter to share even more knowledge with as many developers as possible. If you haven't subscribed yet, you can do it now:

...

Read the original on sqlfordevs.com »

6 705 shares, 33 trendiness

Dwarf Fortress’ graphical upgrade provides a new way into a wildly wonky game

After a long night of playing Dwarf Fortress, I had a concerned look on my face when I finally went to bed. My wife asked what was wrong. “I think I actually want to keep playing this,” I said. I felt a nagging concern for many weeknights to come.

Available tomorrow on Steam and itch.io, the new version of Dwarf Fortress updates the legendary (and legendarily arcane) colony-building roguelike with new pixel-art graphics, music, some (default) keyboard shortcuts, and a beginners' tutorial. The commercial release aims to do two things: make the game somewhat more accessible and provide Tarn and Zach Adams, the brothers who maintained the game as a free download for 20 years, some financial security.

I know it has succeeded at its first job, and I suspect it will hit the second mark, too. I approached the game as a head-first review expedition into likely frustrating territory. Now I find myself distracted from writing about it because I keep thinking about my goblin defense and whether the fisherdwarf might be better assigned to gem crafting.

Nearly 10 years ago, Ars' Casey Johnston spent 10 hours trying to burrow into Dwarf Fortress and came out more confused than before. The ASCII-based “graphics” played a significant role in her confusion, but so did the lack of any real onboarding, or even simple explanations or help menus about how things worked. Even after begrudgingly turning to a beginners' wiki, Johnston found nothing but frustration:

Where's the command to build a table? Which workshop is the mason's? How do I figure that out? Should I just build another mason's workshop because that may be faster than trying to find the right menu to identify the mason's workshop?

In a few hours' time—and similarly avoiding the wiki guide until I'd tried going it alone for my first couple of runs—I got further into Dwarf Fortress' systems than Johnston did in her 10-hour ordeal, and I likely enjoyed it a good deal more. Using the new tutorial mode's initial placement suggestions and following its section-by-section cues, my first run taught me how to dig down, start a stockpile, assign some simple jobs, build a workshop, and—harkening back to Johnston's final frustrations—craft and place beds, bins, and tables, made with “non-economic stone.”

That's about where the guidance ends, though. The new menus are certainly a lot easier to navigate than the traditional all-text, shortcut-heavy interface (though you can keep using multi-key combinations to craft and assign orders if you like). And the graphics certainly make it a lot easier to notice and address problems. Now, when an angry Giant Badger Boar kills your dogs and maims the one dwarf you have gathering plants outside, the threat actually looks like a badger, not a symbol you'd accidentally type if you held down the Alt key. If you build a barrel, you get something that resembles a barrel, which is no small thing when you're just getting started in this arcane world.

The newly added music also helps soften the experience for newcomers. It's intermittent, unobtrusive, and quite lovely and evocative. It seems designed to stave off the eeriness of too much silent strategizing without overstaying its welcome. I can appreciate a game that graphically evokes the 16-bit era without the audio-cue exhaustion common to the JRPGs and simulations of the time.

However much gentler the aesthetics and guidance are for a newcomer, all the game's brutally tough and interlocking systems are intact in this update. These systems crunch together in weird and wild ways, fed by the landscape, your recent and long-ago actions, and random numbers behind the scenes.

My first run ended in starvation and rock-bottom morale (“hissy fits” in common wiki language) because farming, butchering, and other procurements aren't covered in the tutorial. I shut down my second run early after picking a sandy area with an aquifer as a starting zone, thinking it would make glasswork and irrigation easier and being quickly disappointed with this strategy. I was proud on my third run to have started brewing and dispensing drinks (essential to dwarves' contentment), but I dug too close to a nearby river, and I abandoned that soggy fort as yet another lesson learned.

But I'll be back. For me, the commercial release of Dwarf Fortress succeeded at transforming the game from a grim, time-killing in-joke for diehards into a viable, if not graceful, challenge. I will start again, I will keep the badgers and floods at bay, and next time, I might have the privilege of failing to a magma monster, an outbreak of disease, or even a miscarriage of dwarf justice.

...

Read the original on arstechnica.com »

7 687 shares, 24 trendiness

Stable Diffusion with Core ML on Apple Silicon

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices.

Since its public debut in August 2022, Stable Diffusion has been adopted by a vibrant community of artists, developers and hobbyists alike, enabling the creation of unprecedented visual content with as little as a text prompt. In response, the community has built an expansive ecosystem of extensions and tools around this core technology in a matter of weeks. There are already methods that personalize Stable Diffusion, extend it to languages other than English, and more, thanks to open-source projects like Hugging Face diffusers.

Beyond image generation from text prompts, developers are also discovering other creative uses for Stable Diffusion, such as image editing, in-painting, out-painting, super-resolution, style transfer and even color palette generation. With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use.

One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user's device. Second, after initial download, users don't require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs.

Getting to a compelling result with Stable Diffusion can require a lot of time and iteration, so a core challenge with on-device deployment of the model is making sure it can generate results fast enough on device. This requires executing a complex pipeline comprising 4 different neural networks totaling approximately 1.275 billion parameters. To learn more about how we optimized a model of this size and complexity to run on the Apple Neural Engine, you can check out our previous article on Deploying Transformers on the Apple Neural Engine. The optimization principles outlined in the article generalize to Stable Diffusion despite the fact that it is 19x larger than the model studied in the previous article. Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon.

This release comprises a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models. To get started, visit the Core ML Stable Diffusion code repository for detailed instructions on benchmarking and deployment.

...

Read the original on machinelearning.apple.com »

8 681 shares, 29 trendiness

A long expected update

It's been almost two years since my last update here. A lot has happened. I hope you all are continuing to weather the ongoing multiple global pandemics and other anthropogenic crises.

Apologies that this is so long; I didn't have time to make it shorter.

Obviously blogs do not come with a service level agreement, but some explanation is in order for the long gap. It's pretty simple.

* Facebook actively discourages people from blogging about their work.

* Working from home during the covid pandemic was enervating.

* After thinking about programming languages for many hours a week for over 25 years, I simply didn't have the energy and enthusiasm to write much on the subject.

Blogging was an easy thing to drop in favor of pursuits that got me away from sitting in front of a screen in my home office. I've been spending my leisure time in the last couple years working on improving my nature photography skills and learning to scuba dive. Turns out you cannot catch covid when below 15 metres of seawater. And there are weird slugs in the Puget Sound!

Photos of the author and a golden dirona nudibranch by Amber, who convinced me to take up diving.

Today is the tenth anniversary of moving my blog to ericlippert.com on my last day at Microsoft, the fiftieth anniversary of my birth, and my last day at Facebook-now-Meta.

My team — Probabilistic Programming Languages — and indeed the entire “Probability” division were laid off a couple weeks ago; the last three years of my work will be permanently abandoned.

The mission of the Probability division was to create small teams that applied the latest academic research to real-world, at-scale problems, in order to improve other groups' decision-making and lower their costs. New sub-teams were constantly formed; if they didn't show results quickly, they were failed fast; if they did show results, they were reorganized into whatever division's costs they could most effectively lower.

We were very successful at this. The PPL team in particular was at the point where we were regularly putting models into production that, net of the cost of the work, reduced costs by millions of dollars a year. We were almost ready to be spun off.

We foolishly thought that we would naturally be protected from any layoffs, being a team that reduced costs of any team we partnered with. In retrospect, that was a little naive. A team that reduces costs of other teams is not on anyone's critical path.

The whole Probability division was laid off as a cost-cutting measure. I have no explanation for how this was justified and I note that if the company were actually serious about cost-cutting, they would have grown our team, not destroyed it.

Speaking of cutting costs, the company is still pouring multiple billions of dollars into vaporware called the “metaverse”. News flash: no one wants to wear VR goggles to spend any time in a digital heaven where the role of God is played by Mark Zuckerberg and you can do anything you can imagine, including “work” and “shop”.

I would be happy to be shown to be wrong, wrong, wrong. Maybe there is a useful, engaging, fun, just, equitable, democratic, sustainable, novel VR experience where the avatars have legs, but Meta is $20 billion in and aside from the legs I don't see any evidence that any of the above is forthcoming.

Yes, I am a little vexed.

I have a great many people to thank for my time at Facebook: Erik Meijer for recruiting me and finding seven years' worth of interesting problems for me to dig into. Peter Hallam, with whom I have now worked on three compiler teams at three companies, for encouraging me to take that offer. Walid Taha, Michael Tingley, John Myles White, Joe Pamer and Satish Chandra for their leadership and mentorship. And to many, many coworkers too numerous to mention here. The quality of the people I worked with at Facebook was amazing. Everyone was kind, smart, dedicated, thoughtful, generous with their time and knowledge, and a genuine pleasure to work with. I learned so much from all of them. Leaving those teammates is the hardest part.

Lots of people have asked how they can help me and my team. I am so grateful and appreciative. Friends, colleagues, strangers on Twitter, just about everyone has been sympathetic and helpful. Most of my team has found other positions and I am hopeful that the rest will soon.

I am not looking for another position at this time.

I know I don't look it, but I'm beginning to feel it in my heart. I feel thin, sort of stretched, like butter scraped over too much bread. I need a holiday. A very long holiday. And I don't expect I shall return. In fact I mean not to.

I am very fortunate to have spent the pandemic thus far working safely from home, for a supportive team and for excellent pay. But after >26 years of thinking about programming languages for corporations, and the last three years of my work being thrown away, I need a good long corporate detox before I go looking again.

Coming up next on FAIC:

The work we did on Bean Machine, our embedded Python DSL for Bayesian inference, is quite interesting. In coming episodes I'll explain what it is, how it works, and what we learned. No one else is ever going to do this post-mortem analysis, so I might as well!

...

Read the original on ericlippert.com »

9 668 shares, 29 trendiness

apvi - Android Partner Vulnerability Initiative

...

Read the original on bugs.chromium.org »

10 657 shares, 26 trendiness

TSMC's US fab will make 4nm chips for Apple, AMD and Nvidia

TSMC will make semiconductors on an advanced 4nm production process when it opens its new chip factory in Arizona in 2024, it has been reported. The Taiwanese company had previously planned a 5nm production line, but has had a change of heart at the behest of Apple and US chip designers AMD and Nvidia, all of which are keen to take advantage of the new facility.

The new chip factory, or fab, will cost $12bn and is part of TSMC's global expansion plan.

Previous public statements from TSMC had said the plant would initially produce 5nm chips, but according to a report from Bloomberg, which cites sources familiar with the company's plans, this has now been upgraded to 4nm, a process which allows the company to make smaller chips and deliver greater processing power and efficiency.

The move is likely to be announced on Tuesday, the report says, when US president Joe Biden and commerce secretary Gina Raimondo visit the site for a ceremony.

TSMC has apparently been under pressure from its US customers to deliver more advanced chips from the plant. Apple in particular is looking to source more components for the iPhone and its MacBook computers from the US after experiencing problems at its largest iPhone factory in China, which is operated by Foxconn. There, supply chain problems and worker unrest caused by China's harsh Covid-19 restrictions have led to shipments of devices being delayed.

As reported by Tech Monitor, Apple CEO Tim Cook reportedly said in a recent internal meeting that Apple has “already made a decision to be buying out of a plant in Arizona”, suggesting a deal has been struck with TSMC to take capacity at the site.

AMD and Nvidia meanwhile are dealing with US sanctions on Beijing which have left them unable to do business with many Chinese partners. Both are fabless chip companies, meaning they design their own devices but do not have production facilities, instead relying on third parties such as TSMC.

The high level of interest from customers could see TSMC ramp up production from the originally planned 20,000 wafers a month, the Bloomberg report says.

TSMC is investing heavily in new factories outside Taiwan, and has been in talks with European countries about potentially opening a new facility on the continent. In the US, the 4nm plant is unlikely to be the end of its Arizona expansion, with company founder Morris Chang telling reporters last month that a 3nm production facility was likely to come online in future. TSMC will start delivering 3nm chips made in Taiwan to customers next year.

Whether the company will be able to successfully translate its operating model to other parts of the world is unclear, analysts say. TSMC employees in the US have already taken to review site Glassdoor to complain about the long hours demanded by their employer. Staff at TSMC's Taiwan plants regularly work 12-hour days, it has been reported.

Speaking to Tech Monitor earlier this year, Dan Hutcheson, vice chair of semiconductor industry analyst house TechInsights, said: “They've never successfully run manufacturing outside Taiwan. There have been marginal gains but they have never seeded anything successful. So it's a huge risk because they rely on a very tightly coupled Taiwanese culture which is a mixture of Chinese and American.”

...

Read the original on techmonitor.ai »
