10 interesting stories served every morning and every evening.
Between 2009 and 2012, iPhones had a built-in “Send to YouTube” button in the Photos app. Many of these uploads kept their default IMG_XXXX filenames, creating a time capsule of raw, unedited moments from random lives.
Inspired by Ben Wallace, I made a bot that crawled YouTube and found 5 million of these videos! Watch them below, ordered randomly.
...
Read the original on walzr.com »
Today we introduce Genie 2, a foundation world model capable of generating an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents. Based on a single prompt image, it can be played by a human or AI agent using keyboard and mouse inputs.
Games play a key role in the world of artificial intelligence (AI) research. Their engaging nature, unique blend of challenges, and measurable progress make them ideal environments to safely test and advance AI capabilities.
Indeed, games have been important to Google DeepMind since our founding. From our early work with Atari games, breakthroughs such as AlphaGo and AlphaStar, to our research on generalist agents in collaboration with game developers, games have been center stage in our research. However, training more general embodied agents has been traditionally bottlenecked by the availability of sufficiently rich and diverse training environments.
As we show, Genie 2 could enable future agents to be trained and evaluated in a limitless curriculum of novel worlds. Our research also paves the way for new, creative workflows for prototyping interactive experiences.
Until now, world models have largely been confined to modeling narrow domains. In Genie 1, we introduced an approach for generating a diverse array of 2D worlds. Today we introduce Genie 2, which represents a significant leap forward in generality: it can generate a vast diversity of rich 3D worlds.

Genie 2 is a world model, meaning it can simulate virtual worlds, including the consequences of taking any action (e.g. jump, swim, etc.). It was trained on a large-scale video dataset and, like other generative models, demonstrates various emergent capabilities at scale, such as object interactions, complex character animation, physics, and the ability to model and thus predict the behavior of other agents.

Below are example videos of people interacting with Genie 2. For every example, the model is prompted with a single image generated by Imagen 3, Google DeepMind's state-of-the-art text-to-image model. This means anyone can describe a world they want in text, select their favorite rendering of that idea, and then step into and interact with that newly created world (or have an AI agent be trained or evaluated in it). At each step, a person or agent provides a keyboard and mouse action, and Genie 2 simulates the next observation. Genie 2 can generate consistent worlds for up to a minute, with the majority of examples shown lasting 10-20 seconds.
Genie 2 responds intelligently to actions taken by pressing keys on a keyboard, identifying the character and moving it correctly. For example, our model has to figure out that arrow keys should move the robot and not the trees or clouds.
A first person view of a robot on a purple planet.
A first person view of a robot in a loft apartment in a big city.
We can generate diverse trajectories from the same starting frame, which means it is possible to simulate counterfactual experiences for training agents. In each row, the videos start from the same frame but diverge as the human player takes different actions.
Genie 2 is capable of remembering parts of the world that are no longer in view and then rendering them accurately when they become observable again.
Genie 2 generates new plausible content on the fly and maintains a consistent world for up to a minute.
Genie 2 can create different perspectives, such as first-person view, isometric views, or third person driving videos.
Genie 2 models various object interactions, such as bursting balloons, opening doors, and shooting barrels of explosives.
Genie 2 learned how to animate various types of characters doing different activities.
Genie 2 models other agents and even complex interactions with them.
Genie 2 can also be prompted with real world images, where we see that it can model grass blowing in the wind or water flowing in a river.
Genie 2 makes it easy to rapidly prototype diverse interactive experiences, enabling researchers to quickly experiment with novel environments to train and test embodied AI agents. For example, below we prompt Genie 2 with different images generated by Imagen 3 to model the difference between flying a paper plane, a dragon, a hawk, or a parachute and test how well Genie can animate different avatars.
Genie 2 can be used to rapidly prototype diverse interactive experiences.
Thanks to Genie 2's out-of-distribution generalization capabilities, concept art and drawings can be turned into fully interactive environments. This enables artists and designers to prototype quickly, which can bootstrap the creative process for environment design and further accelerate research.

Here we show examples of research environment concepts made by our concept artist.
By using Genie 2 to quickly create rich and diverse environments for AI agents, our researchers can also generate evaluation tasks that agents have not seen during training. Below, we show examples of a SIMA agent, developed in collaboration with game developers, following instructions in unseen environments that Genie 2 synthesized from a single image prompt.
Prompt: “A screenshot of a third-person open world exploration game. The player is an adventurer exploring a forest. There is a house with a red door on the left, and a house with a blue door on the right. The camera is placed directly behind the player. #photorealistic #immersive”
The SIMA agent is designed to complete tasks in a range of 3D game worlds by following natural-language instructions. Here we used Genie 2 to generate a 3D environment with two doors, a blue and a red one, and provided instructions to the SIMA agent to open each of them. In this example, SIMA is controlling the avatar via keyboard and mouse inputs, while Genie 2 generates the game frames.
We can also use SIMA to help evaluate Genie 2’s capabilities. Here we test Genie 2’s ability to generate consistent environments by instructing SIMA to look around and explore behind the house.
While this research is still in its early stage with substantial room for improvement on both agent and environment generation capabilities, we believe Genie 2 is the path to solving a structural problem of training embodied agents safely while achieving the breadth and generality required to progress towards AGI.
Prompt: “An image of a computer game showing a scene from inside a rough hewn stone cave or mine. The viewer’s position is a 3rd person camera based above a player avatar looking down towards the avatar. The player avatar is a knight with a sword. In front of the knight avatar there are x3 stone arched doorways and the knight chooses to go through any one of these doors. Beyond the first and inside we can see strange green plants with glowing flowers lining that tunnel. Inside and beyond the second doorway there is a corridor of spiked iron plates riveted to the cave walls leading towards an ominous glow further along. Through the third door we can see a set of rough hewn stone steps ascending to a mysterious destination.”
Genie 2 is an autoregressive latent diffusion model, trained on a large video dataset. After passing through an autoencoder, latent frames from the video are passed to a large transformer dynamics model, trained with a causal mask similar to that used by large language models. At inference time, Genie 2 can be sampled autoregressively, taking individual actions and past latent frames on a frame-by-frame basis. We use classifier-free guidance to improve action controllability.

The samples in this blog post are generated by an undistilled base model, to show what is possible. A distilled version can be played in real time, at a reduced output quality.
Genie 2 shows the potential of foundational world models for creating diverse 3D environments and accelerating agent research. This research direction is in its early stages and we look forward to continuing to improve Genie’s world generation capabilities in terms of generality and consistency. As with SIMA, our research is building towards more general AI systems and agents that can understand and safely carry out a wide range of tasks in a way that is helpful to people online and in the real world.
While the player takes no action, a ghost appears in a garden.
Genie 2 was led by Jack Parker-Holder with technical leadership by Stephen Spencer, with key contributions from Philip Ball, Jake Bruce, Vibhavari Dasagi, Kristian Holsheimer, Christos Kaplanis, Alexandre Moufarek, Guy Scully, Jeremy Shar, Jimmy Shi and Jessica Yung, and contributions from Michael Dennis, Sultan Kenjeyev and Shangbang Long. Yusuf Aytar, Jeff Clune, Sander Dieleman, Doug Eck, Shlomi Fruchter, Raia Hadsell, Demis Hassabis, Georg Ostrovski, Pieter-Jan Kindermans, Nicolas Heess, Charles Blundell, Simon Osindero and Rushil Mistry gave advice. Past contributors include Ashley Edwards and Richie Steigerwald. The Generalist Agents team was led by Vlad Mnih with key contributions from Harris Chan, Maxime Gazeau, Bonnie Li, Fabio Pardo, Luyu Wang and Lei Zhang. The SIMA team provided particular support from Frederic Besse, Tim Harley, Anna Mitenkova and Jane Wang. Tim Rocktäschel, Satinder Singh and Adrian Bolton coordinated, managed and advised the overall project. We'd also like to thank Zoubin Ghahramani, Andy Brock, Ed Hirst, David Bridson, Zeb Mehring, Cassidy Hardin, Hyunjik Kim, Noah Fiedel, Jeff Stanway, Petko Yotov, Mihai Tiuca, Soheil Hassas Yeganeh, Nehal Mehta, Richard Tucker, Tim Brooks, Alex Cullum, Max Cant, Nik Hemmings, Richard Evans, Valeria Oliveira, Yanko Gitahy Oliveira, Bethanie Brownfield, Charles Gbadamosi, Giles Ruscoe, Guy Simmons, Jony Hudson, Marjorie Limont, Nathaniel Wong, Sarah Chakera and Nick Young.
...
Read the original on deepmind.google »
...
Read the original on www.intel.com »
The Verge has finally shipped the new paywalled version of their site and added a subscription. I personally have nothing against that move and I think freemium is the way forward if we want sites to be sustainable and not be invaded with ads. The personal highlight of the new version is obviously this:
Subscribers will also get access to full-text RSS feeds
Hell yeah, full RSS feeds are back. That said though, one thing is a big no-no:
You can now pay to get fewer ads
The correct amount of ads for a publication that's directly supported is zero. That's the amount we should get. I don't care about the rationale behind it. I'm giving you money; you decided how much money I should be giving you for your product. You don't get to double dip and also sell my data to your advertisers to earn more on the side. I'll say it again: the correct amount of ads, in this case, is zero. Get your shit together, Verge people.
...
Read the original on manuelmoreale.com »
This 1.0 milestone comes six years after the first LiveView commit.
I started LiveView to scratch an itch. I wanted to create dynamic server-rendered applications without writing JavaScript. I was tired of the inevitable ballooning complexity that it brings.
Think real-time form validations, updating the quantity in a shopping cart, or real-time streaming updates. Why does it require moving mountains to solve in a traditional stack? We write the HTTP glue or GraphQL schemas and resolvers, then we figure out which validation logic needs to be shared or duplicated. It goes on and on from there — how do we get localization information to the client? What data serializers do we need? How do we wire up WebSockets and IPC back to our code? Is our JS bundle getting too large? I guess it's time to start turning the Webpack or Parcel knobs. Wait, Vite is a thing now? Or is Bun configuration what we want? We've all felt this pain.
The idea was: what if we removed these problems entirely? HTTP can go away, and the server can handle all the rendering and dynamic update concerns. It felt like a heavy approach, but I knew Elixir and Phoenix were perfectly suited for it.
Six years later this programming model still feels like cheating. Everything is super fast. Payloads are tiny. Latency is best-in-class. Not only do you write less code, there’s simply less to think about when writing features.
Interesting things happen when you give every user and UI a real-time, bidirectional foundation as a matter of course. You suddenly have superpowers. You almost don't notice it. Being freed from all the mundane concerns of typical full-stack development lets you focus on just shipping features. And with Elixir, you start shipping features that other platforms can't even conceive of as possible.
Want to ship real-time server logs to the JS console in development? No problem!
What about supporting production hot code upgrades where browsers can auto re-render anytime CSS stylesheets, images, or templates change — without losing state or dropping connections? Sure!
Or maybe you have an app deployed planet-wide where you do work across the cluster and aggregate the results in real-time back to the UI. Would you believe the entire LiveView, including the template markup and RPC calls, is 350 LOC?
These are the kinds of applications that LiveView enables. It feels incredible to ship these kinds of things, but it took a while to arrive here for good reasons. There was a lot to solve to make this programming model truly great.
Conceptually, what I really wanted is something like what we do in React – change some state, our template re-renders automatically, and the UI updates. But instead of a bit of UI running on the client, what if we ran it on the server? The LiveView could look like this:
defmodule ThermoLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~H"""
    <div>
      Temperature: <%= @thermostat.temperature %>℉
      <button phx-click="inc">+</button>
      <button phx-click="dec">-</button>
    </div>
    """
  end

  def mount(%{"id" => id}, _session, socket) do
    thermostat = ThermoControl.get_thermostat!(id)
    :ok = ThermoControl.subscribe(thermostat)
    {:ok, assign(socket, thermostat: thermostat)}
  end

  def handle_info({ThermoControl, %ThermoStat{} = new_thermo}, socket) do
    {:noreply, assign(socket, thermostat: new_thermo)}
  end

  def handle_event("inc", _, socket) do
    thermostat = ThermoControl.inc(socket.assigns.thermostat)
    {:noreply, assign(socket, thermostat: thermostat)}
  end

  def handle_event("dec", _, socket) do
    thermostat = ThermoControl.dec(socket.assigns.thermostat)
    {:noreply, assign(socket, thermostat: thermostat)}
  end
end
Like React, we have a render function and something that sets our initial state when the LiveView mounts. When state changes, we call render with the new state and the UI is updated.
Interactions like phx-click on the + or - button can be sent as RPCs from client to server, and the server can respond with fresh page HTML. These client/server messages use Phoenix Channels, which scale to millions of connections per server.
Likewise, if the server wants to send an update to the client, such as another user changing the thermostat, the client can listen for it and replace the page HTML in the same fashion. My naive first pass on the phoenix_live_view.js client looked something like this:
import {Socket} from "phoenix"

let socket = new Socket("/live")
socket.connect()

let main = document.querySelector("[phx-main]")
let channel = socket.channel("lv")

channel.join().receive("ok", ({html}) => main.innerHTML = html)
channel.on("update", ({html}) => main.innerHTML = html)

window.addEventListener("click", e => {
  let event = e.target.getAttribute("phx-click")
  if(!event){ return }
  channel.push("event", {event}).receive("ok", ({html}) => main.innerHTML = html)
})
This is how LiveView started. We went to the server for interactions, re-rendered the entire template on state change, and sent the entire page down to the client. The client then swapped out the inner HTML.
It worked, but it was not great. Partial state changes required re-executing the entire template and sending down gobs of HTML for otherwise tiny updates.
Still the basic programming model was exactly what I wanted. As HTTP fell away from my concerns, entire layers of full-stack considerations disappeared.
Next, the challenge was making this something truly great. Little did we know we'd accidentally find our way to outperforming many SPA use-cases along the way.
LiveView’s diffing engine solved two problems with a single mechanism. The first problem was only executing those dynamic parts of a template that actually changed from a previous render. The second was only sending the minimal data necessary to update the client.
It solves both by splitting the template into static and dynamic parts. Consider the following LiveView template:

~H"""
<p>Mode is <%= @mode %></p>
<p>Temperature: <%= format_unit(@temperature) %></p>
"""
At compile time, we convert the template into a struct like this:
%Phoenix.LiveView.Rendered{
  static: ["<p>Mode is ", "</p>\n<p>Temperature: ", "</p>"],
  dynamic: fn assigns ->
    [
      if(changed?(assigns, :mode), do: assigns.mode),
      if(changed?(assigns, :temperature), do: format_unit(assigns.temperature))
    ]
  end
}
We know the static parts never change, so they are split from the dynamic Elixir expressions. Next, we compile each expression with change tracking based on the variables accessed within each expression. On render, we compare the previous template values with the new and only execute the template expression if the value has changed.
Instead of sending the entire template down on change, we can send the client all the static and dynamic parts on mount. After mount we only send the partial diff of dynamic values for each update.
To see how this works, we can imagine the following payload being sent on mount for the template above:
{
  s: ["<p>Mode is ", "</p>\n<p>Temperature: ", "</p>"],
  0: "cooling",
  1: "68℉"
}
The client receives a map of static values in the s key, and dynamic values keyed by their index in the statics. For the client to render the full template string, it only needs to zip the static list with the dynamic values. For example:
["<p>Mode is ", "cooling", "</p>\n<p>Temperature: ", "68℉", "</p>"].join("")
With the client holding a static/dynamic cache, optimizing network updates is no work at all. Any server render following mount simply returns the new dynamic values at their known index. Unchanged dynamic values and statics are ignored entirely.
If a LiveView runs assign(socket, :temperature, 70), the render/1 function is invoked, and the following payload gets sent down the wire:
{1: "70℉"}
That's it! To update the UI, the client simply merges this object with its static/dynamic cache:
{
  s: ["<p>Mode is ", "</p>\n<p>Temperature: ", "</p>"],
  0: "cooling",
  1: "70℉"  // previously "68℉"
}
Then the data is zipped together on the client to produce the full HTML of the UI.
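To make that concrete, here is a minimal Elixir sketch of the client's merge-and-zip step (illustrative only; the real client performs this in JavaScript):

# merge the diff into the cached dynamics, then zip with the statics
statics = ["<p>Mode is ", "</p>\n<p>Temperature: ", "</p>"]
cache = %{0 => "cooling", 1 => "68℉"}
diff = %{1 => "70℉"}

dynamics = Map.merge(cache, diff)

html =
  statics
  |> Enum.with_index()
  |> Enum.map_join(fn {static, i} -> static <> Map.get(dynamics, i, "") end)

# html == "<p>Mode is cooling</p>\n<p>Temperature: 70℉</p>"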
Of course, innerHTML updates blow away UI state and are expensive to perform. So like any client-side framework, we compute minimal DOM diffs to efficiently update the DOM. In fact, we've had folks migrate from React to Phoenix LiveView because LiveView's client rendering was faster than what their React app could offer.
Optimizations continued from there, including fingerprinting, for-comprehension optimizations, tree sharing, and more. You can read all about each optimization on the Dashbit blog.
We apply these optimizations automatically and for free thanks to our stateful client and server connection. Most other server rendered HTML solutions send the whole fragment on every update or require users to fine tune updates by hand.
We've seen how LiveView payloads are smaller than the best hand-written JSON API or GraphQL query, but it's even better than that. Every LiveView holds a connection to the server, so page navigation happens via live navigation. TLS handshakes, current-user auth, etc. happen a single time for the lifetime of the user's visit. This allows page navigation to happen in a single WebSocket frame, with fewer database queries for any client action. The result is fewer round trips from the client and simply less work done by the server, which means lower latency for the end-user compared to an SPA fetching data or sending mutations up to a server.
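In today's LiveView, live navigation is what the <.link navigate={...}> component provides (this sketch assumes Phoenix verified routes for the ~p sigil):

<.link navigate={~p"/rooms/#{@room}"}>Jump to room</.link>

Clicking it mounts the next LiveView over the existing WebSocket connection, within the same live_session, instead of issuing a fresh HTTP request.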
Holding stateful connections comes at the cost of server memory, but it's far cheaper than folks expect. At a baseline, a given channel connection consumes 40KB of memory, which gives a 1GB server a theoretical ceiling of ~25,000 concurrent LiveViews. Of course, the more state you store, the more memory you consume, but you only hold onto the state you need. We also have stream primitives for handling large collections without impacting memory. Elixir and the Erlang VM were designed for this. Scaling a stateful system to millions of concurrent users isn't theoretical – we do it all the time. See WhatsApp, Discord, or our own benchmarks as examples.
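As a sketch of the stream primitive just mentioned (the :songs collection and MusicApp.list_songs/0 are hypothetical):

def mount(_params, _session, socket) do
  # streamed items are rendered to the client, then pruned from server memory
  {:ok, stream(socket, :songs, MusicApp.list_songs())}
end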
With the programming model optimized on both client and server, we expanded into higher level building blocks that take advantage of our unique diffing engine.
Change tracking and minimal diffs were ground-breaking features, but our HTML templates still lacked composability. The best we could offer was "partial"-like template rendering, where a function could encapsulate some partial template content. This works, but it composes poorly and is mismatched with the way we write markup. Fortunately, Marlus Saraiva from the Surface project spearheaded development of an HTML-aware component system and contributed it back to the LiveView project. With HEEx components, we have a declarative component system, HTML validation, and compile-time checking of component attributes and slots.
HEEx components are just annotated functions. They look like this:

@doc """
Renders a button.

## Examples

    <.button>Send!</.button>
"""
attr :type, :string, default: nil
attr :rest, :global, include: ~w(disabled form name value)
slot :inner_block, required: true

def button(assigns) do
  ~H"""
  <button type={@type} {@rest}>
    <%= render_slot(@inner_block) %>
  </button>
  """
end
An invalid call to a component, such as <.button click="close">Send!</.button>, produces a compile-time warning:
warning: undefined attribute “click” for component AppWeb.CoreComponents.button/1
lib/app_web/live/page_live.ex:123: (file)
Slots allow the component to accept arbitrary content from a caller. This makes components much more extensible by the caller without creating a bunch of bespoke partial templates to handle every scenario.
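For example, a caller can pass arbitrary markup into the button's inner_block slot (a hypothetical call site):

<.button type="submit">
  Save <em>draft</em>
</.button>

Whatever the caller places between the tags is rendered wherever the component invokes render_slot(@inner_block).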
When we introduced HEEx and function components, we added a new syntax for interpolating values within tag attributes, along with :if and :for conveniences for conditionally generating templates. It looked like this:

<div id={@id} :if={@visible}>
  <%= @user.name %>
</div>

Note the use of standard EEx interpolation inside the tag body. With the release of LiveView 1.0, we are extending the HTML-aware {} attribute interpolation syntax to tag bodies as well. This means you can now interpolate values directly within the tag body in a streamlined syntax:

<div id={@id} :if={@visible}>
  {@user.name}
</div>

EEx interpolation remains supported and is required for generating dynamic blocks of distinct markup, as well as for interpolating values within <script> and <style> tags.
Gone are the days of examining your browser’s HTML and then hunting for where that HTML was generated within your code. The final browser markup can be rendered within several nested layers of component calls. How do we quickly trace back who rendered what?
HEEx solves this with a debug_heex_annotations configuration. When set, all rendered markup is annotated with the file:line of the function component definition, as well as the file:line of the caller invocation of the component. In practice, your dev HTML will look something like this in the browser inspector:

<!-- @caller lib/app_web/live/page_live.ex:123 -->
<!-- <AppWeb.CoreComponents.button> lib/app_web/components/core_components.ex:44 -->
<button type="submit">Save</button>
It annotates the document both at the caller site and at the function component definition. If you find the above hard to navigate, you can use the new Phoenix.LiveReloader features that have your editor jump to an element's nearest caller or definition file:line when clicked with a special key sequence of your choosing.
Let’s see it in action:
First, we can see how holding c while clicking jumped to the caller file:line location for that invocation. Next, we see that holding d while clicking the button jumped to the function definition file:line.
This is such a simple quality of life improvement. It will become a key part of your workflow as soon as you try it out.
A few years ago, LiveView tackled the file upload problem. Something that should be easy has historically been unnecessarily difficult. We wanted a single abstraction for interactive uploads for both direct-to-cloud and direct-to-server use-cases.
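Uploads start with allow_upload in mount (the :avatar name and constraints here are illustrative):

def mount(_params, _session, socket) do
  {:ok, allow_upload(socket, :avatar, accept: ~w(.jpg .jpeg .png), max_entries: 1)}
end

From there, the same reactive pipeline reports progress and validation errors, whether the file is uploaded directly to your server or straight to cloud storage.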
...
Read the original on www.phoenixframework.org »
The Japanese capital is set to introduce a four-day workweek for government employees, in its latest push to help working mothers and boost record-low fertility rates.
The Tokyo Metropolitan Government says the new arrangement, which begins in April, could give employees three days off every week. It separately announced another policy that will allow parents with children in grades one to three in elementary schools to trade off a bit of their salary for the option to clock out early.
“We will review work styles … with flexibility, ensuring no one has to give up their career due to life events such as childbirth or childcare,” said Tokyo Governor Yuriko Koike when she unveiled the plan in a policy speech on Wednesday.
“Now is the time for Tokyo to take the initiative to protect and enhance the lives, livelihoods and economy of our people during these challenging times for the nation,” she added.
Japan’s fertility rate, which has seen a precipitous fall for many years, reached another record low in June, even as the government ramped up efforts to encourage young people to get married and start families.
Only 727,277 births were recorded last year, with the fertility rate - the number of children a woman has in her lifetime - dropping to a fresh low of 1.2, according to the Ministry of Health, Labour and Welfare. For a population to remain stable, it needs a fertility rate of 2.1.
The Japanese government has been pushing a raft of "now or never" policies to reverse the population crisis, including encouraging men to take paternity leave, while other local governments have also introduced measures to improve work conditions.
Many sociologists attribute the ever-plunging birth rates to Japan's unforgiving work culture and rising costs of living. Grueling hours have long been a problem for corporate Japan, where workers often suffer health hazards and, in extreme cases, "karoshi," a term meaning death by overwork.
As in other countries, women are often under pressure to choose between their career or family, but Japan’s unique overtime work culture makes pregnancy and raising children especially daunting.
In fact, according to the World Bank, the gender gap in the country's labor force participation (55% for women versus 72% for men last year) is wider than in other high-income nations.
The shift to a four-day workweek has sparked growing interest in the West, where some companies are beginning to explore compressed hours as a way to attract talent seeking better work-life balance. Some studies have shown that it improves well-being and productivity among workers.
But the idea is still seen as radical in Japan, where companies often equate time spent at work with loyalty to the company.
And Tokyo isn't the only place in Asia implementing more family-friendly policies. Earlier this year, Singapore introduced new guidelines requiring all firms to consider requests by employees for flexible work arrangements, which could include four-day weeks or flexible hours.
...
Read the original on www.cnn.com »
SEOUL, South Korea (AP) — The president of South Korea early Wednesday lifted the martial law he imposed on the country hours earlier, bending to political pressure after a tense night in which troops surrounded parliament and lawmakers voted to reject military rule.
President Yoon Suk Yeol, who appeared likely to be impeached over his actions, imposed martial law late Tuesday out of frustration with the opposition, vowing to eliminate "anti-state" forces as he struggles against opponents who control parliament and whom he accuses of sympathizing with communist North Korea.
Police and military personnel were seen leaving the grounds of parliament following the bipartisan vote to overrule the president, and the declaration was formally lifted around 4:30 a.m. during a Cabinet meeting.
Parliament acted swiftly after martial law was imposed, with National Assembly Speaker Woo Won Shik declaring that the law was “invalid” and that lawmakers would “protect democracy with the people.”
In all, martial law was in effect for about six hours.
The president’s surprising move harkened back to an era of authoritarian leaders that the country has not seen since the 1980s, and it was immediately denounced by the opposition and the leader of Yoon’s own conservative party.
Lee Jae-myung, leader of the liberal Democratic Party, which holds the majority in the 300-seat parliament, said the party’s lawmakers would remain in the Assembly’s main hall until Yoon formally lifted his order.
Woo applauded how troops quickly left the Assembly after the vote.
“Even with our unfortunate memories of military coups, our citizens have surely observed the events of today and saw the maturity of our military,” Woo said.
While announcing his plan to lift martial law, Yoon continued to criticize parliament’s attempts to impeach key government officials and senior prosecutors. He said lawmakers had engaged in “unscrupulous acts of legislative and budgetary manipulation that are paralyzing the functions of the state.”
Jo Seung-lae, a Democratic lawmaker, claimed that security camera footage following Yoon’s declaration showed that troops moved in a way that suggested they were trying to arrest Lee, Woo and even Han Dong-hoon, the leader of Yoon’s People Power Party.
Officials from Yoon’s office and the Defense Ministry did not respond to requests for comment early Wednesday.
Seemingly hundreds of protesters gathered in front of the Assembly, waving banners and calling for Yoon’s impeachment.
Some protesters scuffled with troops ahead of the lawmakers’ vote, but there were no immediate reports of injuries or major property damage. At least one window was broken as troops attempted to enter the Assembly building. One woman tried unsuccessfully to pull a rifle away from one of the soldiers, while shouting “Aren’t you embarrassed?”
Under South Korea’s constitution, the president can declare martial law during “wartime, war-like situations or other comparable national emergency states” that require the use of military force to maintain peace and order. It was questionable whether South Korea is currently in such a state.
When martial law is declared, “special measures” can be employed to restrict freedom of press, freedom of assembly and other rights, as well as the power of courts.
The constitution also states that the president must oblige when the National Assembly demands the lifting of martial law with a majority vote.
Following Yoon’s announcement of martial law, South Korea’s military proclaimed that parliament and other political gatherings that could cause “social confusion” would be suspended, South Korea’s Yonhap news agency said. The military said anyone who violated the decree could be arrested without a warrant.
In Washington, the White House said the U.S. was "seriously concerned" by the events in Seoul. A spokesperson for the National Security Council said President Joe Biden's administration was not notified in advance of the martial law announcement and was in contact with the South Korean government.
Pentagon spokesman Maj. Gen. Pat Ryder said there was no effect on the more than 27,000 U.S. service members based in South Korea.
The South Korean military also said that the country’s striking doctors should return to work within 48 hours, Yonhap said. Thousands of doctors have been striking for months over government plans to expand the number of students at medical schools.
Soon after martial law was declared, the parliament speaker called on his YouTube channel for all lawmakers to gather at the National Assembly. He urged military and law enforcement personnel to "remain calm and hold their positions."
All 190 lawmakers who participated in the vote supported the lifting of martial law.
At one point, television footage showed police officers blocking the entrance of the National Assembly and helmeted soldiers carrying rifles in front of the building. An Associated Press photographer saw at least three helicopters, likely from the military, that landed inside the Assembly grounds, while two or three helicopters circled above the site.
The leader of Yoon’s conservative party called the decision to impose martial law “wrong.” Lee, who narrowly lost to Yoon in the 2022 presidential election, said Yoon’s announcement was “illegal and unconstitutional.”
Yoon said during a televised speech that martial law would help “rebuild and protect” the country from “falling into the depths of national ruin.” He said he would “eradicate pro-North Korean forces and protect the constitutional democratic order.”
“I will eliminate anti-state forces as quickly as possible and normalize the country,” he said, while asking the people to believe in him and tolerate “some inconveniences.”
Yoon — whose approval rating dipped in recent months — has struggled to push his agenda against an opposition-controlled parliament since taking office in 2022.
His party has been locked in an impasse with the liberal opposition over next year’s budget bill. The opposition has also attempted to impeach three top prosecutors, including the chief of the central Seoul prosecutors’ office, in what the conservatives have called a vendetta against their criminal investigations of Lee, who has been seen as the favorite for the next presidential election in 2027 in opinion polls.
During his televised announcement, Yoon also described the opposition as “shameless pro-North Korean anti-state forces who are plundering the freedom and happiness of our citizens.” He did not elaborate.
Yoon has taken a hard line on North Korea over its nuclear ambitions, departing from the policies of his liberal predecessor, Moon Jae-in, who pursued inter-Korean engagement.
Yoon has also dismissed calls for independent investigations into scandals involving his wife and top officials, drawing quick, strong rebukes from his political rivals.
Yoon’s move was the first declaration of martial law since the country’s democratization in 1987. The country’s last previous martial law was in October 1979, following the assassination of former military dictator Park Chung-hee.
Sydney Seiler, Korea chair at the Center for Strategic and International Studies, argued that the move was symbolic, a way for Yoon to express his frustration with the opposition-controlled parliament.
“He has nothing to lose,” said Seiler, comparing Yoon’s move to the Hail Mary pass in American football, with a slim chance of success.
Now Yoon faces likely impeachment, a scenario that was also possible before he made the bold move, Seiler said.
Natalia Slavney, research analyst at the Stimson Center’s 38 North website that focuses on Korean affairs, said Yoon’s imposition of martial law was “a serious backslide of democracy” that followed a “worrying trend of abuse” since he took office in 2022.
South Korea “has a robust history of political pluralism and is no stranger to mass protests and swift impeachments,” Slavney said, citing the example of former President Park Geun-hye, the country’s first female president, who was ousted from office and imprisoned for bribery and other crimes in 2017.
Associated Press writers Hyung-jin Kim in Seoul, South Korea, and Matt Lee, Didi Tang and Tara Copp in Washington contributed to this report.
...
Read the original on apnews.com »