10 interesting stories served every morning and every evening.




1 1,609 shares, 65 trendiness

IMG_0001

Between 2009 and 2012, iPhones had a built-in "Send to YouTube" button in the Photos app. Many of these uploads kept their default IMG_XXXX filenames, creating a time capsule of raw, unedited moments from random lives.

Inspired by Ben Wallace, I made a bot that crawled YouTube and found 5 million of these videos! Watch them below, ordered randomly.

...

Read the original on walzr.com »

2 1,204 shares, 46 trendiness

Every UUID

...

Read the original on everyuuid.com »

3 1,125 shares, 40 trendiness

A large-scale foundation world model

Today we introduce Genie 2, a foundation world model capable of generating an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents. Based on a single prompt image, it can be played by a human or AI agent using keyboard and mouse inputs.

Games play a key role in the world of artificial intelligence (AI) research. Their engaging nature, unique blend of challenges, and measurable progress make them ideal environments to safely test and advance AI capabilities.

Indeed, games have been important to Google DeepMind since our founding. From our early work with Atari games, breakthroughs such as AlphaGo and AlphaStar, to our research on generalist agents in collaboration with game developers, games have been center stage in our research. However, training more general embodied agents has traditionally been bottlenecked by the availability of sufficiently rich and diverse training environments.

As we show, Genie 2 could enable future agents to be trained and evaluated in a limitless curriculum of novel worlds. Our research also paves the way for new, creative workflows for prototyping interactive experiences.

Until now, world models have largely been confined to modeling narrow domains. In Genie 1, we introduced an approach for generating a diverse array of 2D worlds. Today we introduce Genie 2, which represents a significant leap forward in generality. Genie 2 can generate a vast diversity of rich 3D worlds. Genie 2 is a world model, meaning it can simulate virtual worlds, including the consequences of taking any action (e.g. jump, swim, etc.). It was trained on a large-scale video dataset and, like other generative models, demonstrates various emergent capabilities at scale, such as object interactions, complex character animation, physics, and the ability to model and thus predict the behavior of other agents.

Below are example videos of people interacting with Genie 2. For every example, the model is prompted with a single image generated by Imagen 3, Google DeepMind's state-of-the-art text-to-image model. This means anyone can describe a world they want in text, select their favorite rendering of that idea, and then step into and interact with that newly created world (or have an AI agent be trained or evaluated in it). At each step, a person or agent provides a keyboard and mouse action, and Genie 2 simulates the next observation. Genie 2 can generate consistent worlds for up to a minute, with the majority of examples shown lasting 10-20 seconds.

Genie 2 responds intelligently to actions taken by pressing keys on a keyboard, identifying the character and moving it correctly. For example, our model has to figure out that arrow keys should move the robot and not the trees or clouds.

A first person view of a robot on a purple planet.

A first person view of a robot in a loft apartment in a big city.

We can generate diverse trajectories from the same starting frame, which means it is possible to simulate counterfactual experiences for training agents. In each row, each video starts from the same frame but has different actions taken by a human player.

Genie 2 is capable of remembering parts of the world that are no longer in view and then rendering them accurately when they become observable again.

Genie 2 generates new plausible content on the fly and maintains a consistent world for up to a minute.

Genie 2 can create different perspectives, such as first-person view, isometric views, or third-person driving videos.

Genie 2 models various object interactions, such as bursting balloons, opening doors, and shooting barrels of explosives.

Genie 2 learned how to animate various types of characters doing different activities.

Genie 2 models other agents, and even complex interactions with them.

Genie 2 can also be prompted with real-world images, where we see that it can model grass blowing in the wind or water flowing in a river.

Genie 2 makes it easy to rapidly prototype diverse interactive experiences, enabling researchers to quickly experiment with novel environments to train and test embodied AI agents. For example, below we prompt Genie 2 with different images generated by Imagen 3 to model the difference between flying a paper plane, a dragon, a hawk, or a parachute, and to test how well Genie can animate different avatars.

Genie 2 can be used to rapidly prototype diverse interactive experiences.

Thanks to Genie 2's out-of-distribution generalization capabilities, concept art and drawings can be turned into fully interactive environments. This enables artists and designers to prototype quickly, which can bootstrap the creative process for environment design, further accelerating research. Here we show examples of research environment concepts made by our concept artist.

By using Genie 2 to quickly create rich and diverse environments for AI agents, our researchers can also generate evaluation tasks that agents have not seen during training. Below, we show examples of a SIMA agent that we developed in collaboration with games developers, following instructions in unseen environments synthesized by Genie 2 via a single image prompt.

Prompt: "A screenshot of a third-person open world exploration game. The player is an adventurer exploring a forest. There is a house with a red door on the left, and a house with a blue door on the right. The camera is placed directly behind the player. #photorealistic #immersive"

The SIMA agent is designed to complete tasks in a range of 3D game worlds by following natural-language instructions. Here we used Genie 2 to generate a 3D environment with two doors, a blue one and a red one, and provided instructions to the SIMA agent to open each of them. In this example, SIMA is controlling the avatar via keyboard and mouse inputs, while Genie 2 generates the game frames.

We can also use SIMA to help evaluate Genie 2's capabilities. Here we test Genie 2's ability to generate consistent environments by instructing SIMA to look around and explore behind the house.

While this research is still in its early stages, with substantial room for improvement on both agent and environment generation capabilities, we believe Genie 2 is the path to solving a structural problem of training embodied agents safely while achieving the breadth and generality required to progress towards AGI.

Prompt: "An image of a computer game showing a scene from inside a rough hewn stone cave or mine. The viewer's position is a 3rd person camera based above a player avatar looking down towards the avatar. The player avatar is a knight with a sword. In front of the knight avatar there are x3 stone arched doorways and the knight chooses to go through any one of these doors. Beyond the first and inside we can see strange green plants with glowing flowers lining that tunnel. Inside and beyond the second doorway there is a corridor of spiked iron plates riveted to the cave walls leading towards an ominous glow further along. Through the third door we can see a set of rough hewn stone steps ascending to a mysterious destination."

Genie 2 is an autoregressive latent diffusion model, trained on a large video dataset. After passing through an autoencoder, latent frames from the video are passed to a large transformer dynamics model, trained with a causal mask similar to that used by large language models. At inference time, Genie 2 can be sampled in an autoregressive fashion, taking individual actions and past latent frames on a frame-by-frame basis. We use classifier-free guidance to improve action controllability.

The samples in this blog post are generated by an undistilled base model, to show what is possible. We can play a distilled version in real time, with a reduction in the quality of the outputs.
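The classifier-free guidance mentioned above combines an action-conditioned prediction with an unconditional one. As a rough illustration only (the model interface, latent representation, and guidance scale here are hypothetical stand-ins, since Genie 2's internals are not public), one autoregressive rollout could be sketched as:

```javascript
// Hypothetical sketch of autoregressive sampling with classifier-free
// guidance. `model.predict` and the guidance scale are stand-ins.
function guidedPrediction(cond, uncond, scale) {
  // Classifier-free guidance: move the unconditional prediction toward
  // (and past) the action-conditioned one by `scale`.
  return uncond.map((u, i) => u + scale * (cond[i] - u))
}

function sampleRollout(model, firstLatent, actions, scale = 2.0) {
  const frames = [firstLatent]
  for (const action of actions) {
    // Predict the next latent conditioned on all past frames + action...
    const cond = model.predict(frames, action)
    // ...and once more with the action dropped.
    const uncond = model.predict(frames, null)
    frames.push(guidedPrediction(cond, uncond, scale))
  }
  return frames // one latent frame per action, decoded to pixels elsewhere
}
```

The frame-by-frame loop is what makes the model "autoregressive": each step conditions on everything sampled so far, which is also why consistency over long horizons is the hard part.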

Genie 2 shows the potential of foundational world models for creating diverse 3D environments and accelerating agent research. This research direction is in its early stages and we look forward to continuing to improve Genie's world generation capabilities in terms of generality and consistency. As with SIMA, our research is building towards more general AI systems and agents that can understand and safely carry out a wide range of tasks in a way that is helpful to people online and in the real world.

While the player takes no action, a ghost appears in a garden.

Genie 2 was led by Jack Parker-Holder with technical leadership by Stephen Spencer, with key contributions from Philip Ball, Jake Bruce, Vibhavari Dasagi, Kristian Holsheimer, Christos Kaplanis, Alexandre Moufarek, Guy Scully, Jeremy Shar, Jimmy Shi and Jessica Yung, and contributions from Michael Dennis, Sultan Kenjeyev and Shangbang Long. Yusuf Aytar, Jeff Clune, Sander Dieleman, Doug Eck, Shlomi Fruchter, Raia Hadsell, Demis Hassabis, Georg Ostrovski, Pieter-Jan Kindermans, Nicolas Heess, Charles Blundell, Simon Osindero and Rushil Mistry gave advice. Past contributors include Ashley Edwards and Richie Steigerwald. The Generalist Agents team was led by Vlad Mnih with key contributions from Harris Chan, Maxime Gazeau, Bonnie Li, Fabio Pardo, Luyu Wang and Lei Zhang. The SIMA team provided particular support from Frederic Besse, Tim Harley, Anna Mitenkova and Jane Wang. Tim Rocktäschel, Satinder Singh and Adrian Bolton coordinated, managed and advised the overall project. We'd also like to thank Zoubin Ghahramani, Andy Brock, Ed Hirst, David Bridson, Zeb Mehring, Cassidy Hardin, Hyunjik Kim, Noah Fiedel, Jeff Stanway, Petko Yotov, Mihai Tiuca, Soheil Hassas Yeganeh, Nehal Mehta, Richard Tucker, Tim Brooks, Alex Cullum, Max Cant, Nik Hemmings, Richard Evans, Valeria Oliveira, Yanko Gitahy Oliveira, Bethanie Brownfield, Charles Gbadamosi, Giles Ruscoe, Guy Simmons, Jony Hudson, Marjorie Limont, Nathaniel Wong, Sarah Chakera and Nick Young.

...

Read the original on deepmind.google »

4 863 shares, 35 trendiness

THE GAMEY GAME

...

Read the original on www.armaansahni.com »

5 860 shares, 34 trendiness

Intel Announces Retirement of CEO Pat Gelsinger

...

Read the original on www.intel.com »

6 790 shares, 32 trendiness

The correct amount of ads is zero – Manu

The Verge has finally shipped the new paywalled version of their site and added a subscription. I personally have nothing against that move, and I think freemium is the way forward if we want sites to be sustainable and not invaded with ads. The personal highlight of the new version is obviously this:

Subscribers will also get access to full-text RSS feeds

Hell yeah, full RSS feeds are back. That said, though, one thing is a big no-no:

You can now pay to get fewer ads

The correct amount of ads for a publication that's directly supported is zero. That's the amount we should get. I don't care about the rationale behind it. I'm giving you money; you decided how much money I should be giving you for your product; you don't get to double dip and also sell my data to your advertisers and earn more on the side. I'll say it again: the correct amount of ads, in this case, is zero. Get your shit together, Verge people.

...

Read the original on manuelmoreale.com »

7 718 shares, 31 trendiness

Egoless Engineering

...

Read the original on egoless.engineering »

8 711 shares, 28 trendiness

Phoenix LiveView 1.0.0 is here!

This 1.0 milestone comes six years after the first LiveView commit.

I started LiveView to scratch an itch. I wanted to create dynamic server-rendered applications without writing JavaScript. I was tired of the inevitable ballooning complexity that it brings.

Think realtime form validations, updating the quantity in a shopping cart, or real-time streaming updates. Why does it require moving mountains to solve in a traditional stack? We write the HTTP glue or GraphQL schemas and resolvers, then we figure out which validation logic needs to be shared or dup'd. It goes on and on from there: how do we get localization information to the client? What data serializers do we need? How do we wire up WebSockets and IPC back to our code? Is our JS bundle getting too large? I guess it's time to start turning the Webpack or Parcel knobs. Wait, Vite is a thing now? Or I guess Bun configuration is what we want? We've all felt this pain.

The idea was: what if we removed these problems entirely? HTTP can go away, and the server can handle all the rendering and dynamic update concerns. It felt like a heavy approach, but I knew Elixir and Phoenix were perfectly suited for it.

Six years later, this programming model still feels like cheating. Everything is super fast. Payloads are tiny. Latency is best-in-class. Not only do you write less code, there's simply less to think about when writing features.

Interesting things happen when you give every user and UI a real-time, bidirectional foundation as a matter of course. You suddenly have superpowers. You almost don't notice it. Being freed from all the mundane concerns of typical full-stack development lets you focus on just shipping features. And with Elixir, you start shipping features that other platforms can't even conceive as possible.

Want to ship real-time server logs to the JS console in development? No problem!

What about supporting production hot code upgrades where browsers can auto re-render anytime CSS stylesheets, images, or templates change, without losing state or dropping connections? Sure!

Or maybe you have an app deployed planet-wide, where you do work across the cluster and aggregate the results in real time back to the UI. Would you believe the entire LiveView, including the template markup and RPC calls, is 350 LOC?

These are the kinds of applications that LiveView enables. It feels incredible to ship these kinds of things, but it took a while to arrive here, for good reasons. There was a lot to solve to make this programming model truly great.

Conceptually, what I really wanted is something like what we do in React: change some state, our template re-renders automatically, and the UI updates. But instead of a bit of UI running on the client, what if we ran it on the server? The LiveView could look like this:

defmodule ThermoLive do
  def render(assigns) do
    ~H"""
    """
  end

  def mount(%{"id" => id}, _session, socket) do
    thermostat = ThermoControl.get_thermostat!(id)
    :ok = ThermoControl.subscribe(thermostat)
    {:ok, assign(socket, thermostat: thermostat)}
  end

  def handle_info({ThermoControl, %ThermoStat{} = new_thermo}, _, socket) do
    {:noreply, assign(socket, thermostat: new_thermo)}
  end

  def handle_event("inc", _, socket) do
    thermostat = ThermoControl.inc(socket.assigns.thermostat)
    {:noreply, assign(socket, thermostat: thermostat)}
  end
end

Like React, we have a render function and something that sets our initial state when the LiveView mounts. When state changes, we call render with the new state and the UI is updated.

Interactions, like phx-click on the + or - button, can be sent as RPCs from client to server, and the server can respond with fresh page HTML. These client/server messages use Phoenix Channels, which scale to millions of connections per server.

Likewise, if the server wants to send an update to the client, such as another user changing the thermostat, the client can listen for it and replace the page HTML in the same fashion. My naive first pass on the phoenix_live_view.js client looked something like this:

let main = document.querySelector("[phx-main]")
let channel = new socket.channel("lv")

channel.join().receive("ok", ({html}) => main.innerHTML = html)
channel.on("update", ({html}) => main.innerHTML = html)

window.addEventListener("click", e => {
  let event = e.target.getAttribute("phx-click")
  if(!event){ return }
  channel.push("event", {event}).receive("ok", ({html}) => main.innerHTML = html)
})

This is how LiveView started. We went to the server for interactions, re-rendered the entire template on state change, and sent the entire page down to the client. The client then swapped out the inner HTML.

It worked, but it was not great. Partial state changes required re-executing the entire template and sending down gobs of HTML for otherwise tiny updates.

Still, the basic programming model was exactly what I wanted. As HTTP fell away from my concerns, entire layers of full-stack considerations disappeared.

Next, the challenge was making this something truly great. Little did we know we'd accidentally find our way to outperforming many SPA use cases along the way.

LiveView's diffing engine solved two problems with a single mechanism. The first problem was only executing the dynamic parts of a template that actually changed from a previous render. The second was only sending the minimal data necessary to update the client.

It solves both by splitting the template into static and dynamic parts. Consider the following LiveView template:

~H"""
"""

At compile time, we convert the template into a struct like this:

%Phoenix.LiveView.Rendered{
  static: [""],
  dynamic: fn assigns ->
    [
      if(changed?(assigns, :mode), do: assigns.mode),
      if(changed?(assigns, :temperature), do: format_unit(assigns.temperature))
    ]
  end
}

We know the static parts never change, so they are split from the dynamic Elixir expressions. Next, we compile each expression with change tracking based on the variables accessed within each expression. On render, we compare the previous template values with the new ones and only execute the template expression if the value has changed.
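The change-tracking idea can be illustrated in the document's own client-side JavaScript (the slot/dependency shape here is a hypothetical sketch, not LiveView's actual internals):

```javascript
// Hypothetical sketch of per-expression change tracking: each dynamic
// slot lists the assigns it reads and re-executes only when one changed.
function renderChanged(assigns, changedKeys, slots) {
  const diff = {}
  for (const [index, slot] of Object.entries(slots)) {
    if (slot.deps.some((key) => changedKeys.has(key))) {
      diff[index] = slot.eval(assigns) // re-execute only this expression
    }
  }
  return diff // unchanged slots are omitted entirely
}

// Slots mirroring the struct above: a mode and a formatted temperature.
const slots = {
  0: { deps: ["mode"], eval: (a) => a.mode },
  1: { deps: ["temperature"], eval: (a) => `${a.temperature}℉` },
}
```

With only :temperature changed, `renderChanged({mode: "cooling", temperature: 70}, new Set(["temperature"]), slots)` re-executes just the second expression and omits the first from the diff.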

Instead of sending the entire template down on change, we can send the client all the static and dynamic parts on mount. After mount, we only send the partial diff of dynamic values for each update.

To see how this works, we can imagine the following payload being sent on mount for the template above:

{
  s: [""],
  0: "cooling",
  1: "68℉"
}

The client receives a map of static values in the s key, and dynamic values keyed by their index in the statics. For the client to render the full template string, it only needs to zip the static list with the dynamic values. For example:

[""].join("")
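That zip can be sketched in the same vein as the naive client above (the function name is ours, not the real phoenix_live_view.js API):

```javascript
// Zip the static parts with the dynamic values by index:
// statics[0] + dynamics[0] + statics[1] + dynamics[1] + ... + statics[n]
function zipRendered({ s: statics, ...dynamics }) {
  return statics.reduce(
    (html, part, i) => html + (i > 0 ? dynamics[i - 1] : "") + part,
    ""
  )
}
```

For example, `zipRendered({s: ["<p>", "</p>"], 0: "cooling"})` produces `"<p>cooling</p>"`; a template with n dynamic slots always has n + 1 static segments around them.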

With the client holding a static/dynamic cache, optimizing network updates is no work at all. Any server render following mount simply returns the new dynamic values at their known index. Unchanged dynamic values and statics are ignored entirely.

If a LiveView runs assign(socket, :temperature, 70), the render/1 function is invoked, and the following payload gets sent down the wire:

{1: "70℉"}

That's it! To update the UI, the client simply merges this object with its static/dynamic cache:

{
  s: [""],
  0: "cooling",
  1: "68℉" => "70℉"
}

Then the data is zipped together on the client to produce the full HTML of the UI.
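The merge step is just a shallow object merge into the cached tree (a sketch, not the actual client code):

```javascript
// Apply a diff to the cached rendered tree: statics stay put, changed
// dynamics overwrite by index, unchanged dynamics survive untouched.
function applyDiff(cache, diff) {
  return { ...cache, ...diff }
}
```

For example, `applyDiff({s: [""], 0: "cooling", 1: "68℉"}, {1: "70℉"})` leaves the statics and the mode alone and replaces only slot 1.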

Of course, innerHTML updates blow away UI state and are expensive to perform. So, like any client-side framework, we compute minimal DOM diffs to efficiently update the DOM. In fact, we've had folks migrate from React to Phoenix LiveView because LiveView client rendering was faster than what their React app could offer.

Optimizations continued from there, including fingerprinting, comprehensions, tree sharing, and more. You can read all about each optimization on the Dashbit blog.

We apply these optimizations automatically and for free, thanks to our stateful client and server connection. Most other server-rendered HTML solutions send the whole fragment on every update or require users to fine-tune updates by hand.

We've seen how LiveView payloads are smaller than the best hand-written JSON API or GraphQL query, but it's even better than that. Every LiveView holds a connection to the server, so page navigation happens via live navigation. TLS handshakes, current-user auth, etc. happen a single time for the lifetime of the user's visit. This allows page navigation to happen via a single WebSocket frame, and fewer database queries for any client action. The result is fewer round trips from the client, simply less work done by the server, and lower latency for the end user compared to an SPA fetching data or sending mutations up to a server.

Holding stateful connections comes at the cost of server memory, but it's far cheaper than folks expect. At a baseline, a given channel connection consumes 40KB of memory. This gives a 1GB server a theoretical ceiling of ~25,000 concurrent LiveViews. Of course, the more state you store, the more memory you consume, but you only hold onto the state you need. We also have stream primitives for handling large collections without impacting memory. Elixir and the Erlang VM were designed for this. Scaling a stateful system to millions of concurrent users isn't theoretical; we do it all the time. See WhatsApp, Discord, or our own benchmarks as examples.
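The quoted ceiling is simple division over the article's own figures (using decimal units, which is what makes the numbers come out to a round 25,000):

```javascript
// Back-of-envelope check on the stated ceiling: 40KB baseline memory
// per channel connection against 1GB of server RAM (decimal units).
const perConnectionKB = 40
const serverKB = 1_000_000 // 1GB
const ceiling = Math.floor(serverKB / perConnectionKB)
// 25,000 concurrent LiveViews before any per-view state is added
```

Real capacity is lower once each LiveView's assigns are counted, which is exactly why the streams primitive for large collections matters.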

With the programming model optimized on both client and server, we expanded into higher-level building blocks that take advantage of our unique diffing engine.

Change tracking and minimal diffs were ground-breaking features, but our HTML templates still lacked composability. The best we could offer was "partial"-like template rendering, where a function could encapsulate some partial template content. This works, but it composes poorly and is mismatched with the way we write markup. Fortunately, Marlus Saraiva from the Surface project spearheaded development of an HTML-aware component system and contributed it back to the LiveView project. With HEEx components, we have a declarative component system, HTML validation, and compile-time checking of component attributes and slots.

HEEx components are just annotated functions. They look like this:

@doc """
Renders a button.

## Examples
"""
attr :type, :string, default: nil
attr :rest, :global, include: ~w(disabled form name value)
slot :inner_block, required: true

def button(assigns) do
  ~H"""
  """
end

An invalid call to a component produces a compile-time warning:

warning: undefined attribute "click" for component AppWeb.CoreComponents.button/1
  lib/app_web/live/page_live.ex:123: (file)

Slots allow the component to accept arbitrary content from a caller. This allows components to be much more extensible by the caller, without creating a bunch of bespoke partial templates to handle every scenario.

When we introduced HEEx and function components, we added a new syntax for interpolating values within tag attributes, along with :if and :for conveniences for conditionally generating templates. It looked like this:

Note the use of standard EEx interpolation. With the release of LiveView 1.0, we are extending the HTML-aware {} attribute interpolation syntax to within tag bodies as well. This means you can now interpolate values directly within the tag body in a streamlined syntax:

The EEx syntax remains supported and is required for generating dynamic blocks of distinct markup, as well as for interpolating values within <script> and <style> tags.

Gone are the days of examining your browser's HTML and then hunting for where that HTML was generated within your code. The final browser markup can be rendered within several nested layers of component calls. How do we quickly trace back who rendered what?

HEEx solves this with a debug_heex_annotations configuration. When set, all rendered markup will be annotated with the file:line of the function component definition, as well as the file:line of the caller invocation of the component. In practice, your dev HTML will look like this in the browser inspector:

It annotates the document both at the caller site and at the function component definition. If you find the above hard to navigate, you can use the new Phoenix.LiveReloader features that have your editor jump to an element's nearest caller or definition file:line when clicked with a special key sequence of your choosing.

Let's see it in action:

First, we can see how holding c while clicking jumped to the caller file:line location for that invocation. Next, we see that holding d while clicking the button jumped to the function definition file:line.

This is such a simple quality-of-life improvement. It will become a key part of your workflow as soon as you try it out.

A few years ago, LiveView tackled the file upload problem. Something that should be easy has historically been unnecessarily difficult. We wanted a single abstraction for interactive uploads, for both direct-to-cloud and direct-to-server use cases.

...

Read the original on www.phoenixframework.org »

9 592 shares, 22 trendiness

Tokyo government gives workers 4-day workweek to boost fertility, family time

The Japanese capital is set to introduce a four-day workweek for government employees, in its latest push to help working mothers and boost record-low fertility rates.

The Tokyo Metropolitan Government says the new arrangement, which begins in April, could give employees three days off every week. It separately announced another policy that will allow parents with children in grades one to three of elementary school to trade off a bit of their salary for the option to clock out early.

"We will review work styles … with flexibility, ensuring no one has to give up their career due to life events such as childbirth or childcare," said Tokyo Governor Yuriko Koike when she unveiled the plan in a policy speech on Wednesday.

"Now is the time for Tokyo to take the initiative to protect and enhance the lives, livelihoods and economy of our people during these challenging times for the nation," she added.

Japan's fertility rate, which has seen a precipitous fall for many years, reached another record low in June, even as the government ramped up efforts to encourage young people to get married and start families.

Only 727,277 births were recorded last year, with the fertility rate (the number of children a woman has in her lifetime) dropping to a fresh low of 1.2, according to the Ministry of Health, Labour and Welfare. For a population to remain stable, it needs a fertility rate of 2.1.

The Japanese government has been pushing a raft of "now or never" policies to reverse the population crisis, including encouraging men to take paternity leave, while other local governments have also introduced measures to improve work conditions.

Many sociologists attribute the ever-plunging birth rates to Japan's unforgiving work culture and rising costs of living. Grueling hours have long been a problem for corporate Japan, where workers often suffer from health hazards and, in extreme cases, "karoshi," a term meaning death by overwork.

As in other countries, women are often under pressure to choose between their career and family, but Japan's unique overtime work culture makes pregnancy and raising children especially daunting.

In fact, according to the World Bank, the gender gap in the country's labor force participation, which stood at 55% for women and 72% for men last year, is wider than in other high-income nations.

The shift to a four-day workweek has sparked growing interest in the West, where some companies are beginning to explore compressed hours as a way to attract talent seeking better work-life balance. Some studies have shown that it improves well-being and productivity among workers.

But the idea is still seen as radical in Japan, where companies often equate time spent at work with loyalty to the company.

And Tokyo isn't the only place in Asia to implement more family-friendly policies. Earlier this year, Singapore introduced new guidelines requiring all firms to consider requests by employees for flexible working arrangements. That could include four-day weeks or flexible hours.

...

Read the original on www.cnn.com »

10 580 shares, 19 trendiness

South Korea lifts president's martial law decree after lawmakers reject military rule

SEOUL, South Korea (AP) — The president of South Korea early Wednesday lifted the martial law he imposed on the country hours earlier, bending to political pressure after a tense night in which troops surrounded parliament and lawmakers voted to reject military rule.

President Yoon Suk Yeol, who appeared likely to be impeached over his actions, imposed martial law late Tuesday out of frustration with the opposition, vowing to eliminate "anti-state" forces as he struggles against opponents who control parliament and whom he accuses of sympathizing with communist North Korea.

Police and military personnel were seen leaving the grounds of parliament following the bipartisan vote to overrule the president, and the declaration was formally lifted around 4:30 a.m. during a Cabinet meeting.

Parliament acted swiftly after martial law was imposed, with National Assembly Speaker Woo Won Shik declaring that the law was "invalid" and that lawmakers would "protect democracy with the people."

In all, martial law was in effect for about six hours.

The pres­i­den­t’s sur­pris­ing move harkened back to an era of au­thor­i­tar­ian lead­ers that the coun­try has not seen since the 1980s, and it was im­me­di­ately de­nounced by the op­po­si­tion and the leader of Yoon’s own con­ser­v­a­tive party.

Lee Jae-myung, leader of the lib­eral Democratic Party, which holds the ma­jor­ity in the 300-seat par­lia­ment, said the par­ty’s law­mak­ers would re­main in the Assembly’s main hall un­til Yoon for­mally lifted his or­der.

Woo ap­plauded how troops quickly left the Assembly af­ter the vote.

Even with our un­for­tu­nate mem­o­ries of mil­i­tary coups, our cit­i­zens have surely ob­served the events of to­day and saw the ma­tu­rity of our mil­i­tary,” Woo said.

While an­nounc­ing his plan to lift mar­tial law, Yoon con­tin­ued to crit­i­cize par­lia­men­t’s at­tempts to im­peach key gov­ern­ment of­fi­cials and se­nior pros­e­cu­tors. He said law­mak­ers had en­gaged in unscrupulous acts of leg­isla­tive and bud­getary ma­nip­u­la­tion that are par­a­lyz­ing the func­tions of the state.”

Jo Seung-lae, a Democratic lawmaker, claimed that security camera footage following Yoon's declaration showed that troops moved in a way that suggested they were trying to arrest Lee, Woo and even Han Dong-hoon, the leader of Yoon's People Power Party.

Officials from Yoon's office and the Defense Ministry did not respond to requests for comment early Wednesday.

What appeared to be hundreds of protesters gathered in front of the Assembly, waving banners and calling for Yoon's impeachment.

Some protesters scuffled with troops ahead of the lawmakers' vote, but there were no immediate reports of injuries or major property damage. At least one window was broken as troops attempted to enter the Assembly building. One woman tried unsuccessfully to pull a rifle away from one of the soldiers, while shouting “Aren't you embarrassed?”

Under South Korea's constitution, the president can declare martial law during wartime, war-like situations or other comparable “national emergency states” that require the use of military force to maintain peace and order. It was questionable whether South Korea was in such a state.

When martial law is declared, “special measures” can be employed to restrict freedom of the press, freedom of assembly and other rights, as well as the power of courts.

The constitution also states that the president must comply when the National Assembly demands the lifting of martial law with a majority vote.

Following Yoon's announcement of martial law, South Korea's military proclaimed that parliament and other political gatherings that could cause “social confusion” would be suspended, South Korea's Yonhap news agency said. The military said anyone who violated the decree could be arrested without a warrant.

In Washington, the White House said the U.S. was “seriously concerned” by the events in Seoul. A spokesperson for the National Security Council said President Joe Biden's administration was not notified in advance of the martial law announcement and was in contact with the South Korean government.

Pentagon spokesman Maj. Gen. Pat Ryder said there was no effect on the more than 27,000 U.S. service members based in South Korea.

The South Korean military also said the country's striking doctors should return to work within 48 hours, Yonhap said. Thousands of doctors have been striking for months over government plans to expand the number of students at medical schools.

Soon after martial law was declared, the parliament speaker called on his YouTube channel for all lawmakers to gather at the National Assembly. He urged military and law enforcement personnel to “remain calm and hold their positions.”

All 190 lawmakers who participated in the vote supported the lifting of martial law.

At one point, television footage showed police officers blocking the entrance of the National Assembly and helmeted soldiers carrying rifles in front of the building. An Associated Press photographer saw at least three helicopters, likely from the military, that landed inside the Assembly grounds, while two or three helicopters circled above the site.

The leader of Yoon's conservative party called the decision to impose martial law “wrong.” Lee, who narrowly lost to Yoon in the 2022 presidential election, said Yoon's announcement was “illegal and unconstitutional.”

Yoon said during a televised speech that martial law would help “rebuild and protect” the country from “falling into the depths of national ruin.” He said he would “eradicate pro-North Korean forces and protect the constitutional democratic order.”

“I will eliminate anti-state forces as quickly as possible and normalize the country,” he said, while asking the people to believe in him and “tolerate some inconveniences.”

Yoon, whose approval rating has dipped in recent months, has struggled to push his agenda against an opposition-controlled parliament since taking office in 2022.

His party has been locked in an impasse with the liberal opposition over next year's budget bill. The opposition has also attempted to impeach three top prosecutors, including the chief of the central Seoul prosecutors' office, in what the conservatives have called a vendetta against their criminal investigations of Lee, who has been seen in opinion polls as the favorite for the next presidential election in 2027.

During his televised announcement, Yoon also described the opposition as “shameless pro-North Korean anti-state forces who are plundering the freedom and happiness of our citizens.” He did not elaborate.

Yoon has taken a hard line on North Korea over its nuclear ambitions, departing from the policies of his liberal predecessor, Moon Jae-in, who pursued inter-Korean engagement.

Yoon has also dismissed calls for independent investigations into scandals involving his wife and top officials, drawing quick, strong rebukes from his political rivals.

Yoon's move was the first declaration of martial law since the country's democratization in 1987. The previous declaration came in October 1979, following the assassination of former military dictator Park Chung-hee.

Sydney Seiler, Korea chair at the Center for Strategic and International Studies, argued that the move was a symbolic way for Yoon to express his frustration with the opposition-controlled parliament.

“He has nothing to lose,” said Seiler, comparing Yoon's move to a Hail Mary pass in American football, with a slim chance of success.

Yoon now faces likely impeachment, a scenario that was also possible before he made the bold move, Seiler said.

Natalia Slavney, research analyst at the Stimson Center's 38 North website, which focuses on Korean affairs, said Yoon's imposition of martial law was “a serious backslide of democracy” that followed a “worrying trend of abuse” since he took office in 2022.

“South Korea has a robust history of political pluralism and is no stranger to mass protests and swift impeachments,” Slavney said, citing the example of former President Park Geun-hye, the country's first female president, who was ousted from office and imprisoned for bribery and other crimes in 2017.

Associated Press writers Hyung-jin Kim in Seoul, South Korea, and Matt Lee, Didi Tang and Tara Copp in Washington contributed to this report.

...

Read the original on apnews.com »
