10 interesting stories served every morning and every evening.
In the small town of Kawara in Fukuoka Prefecture, something unexpected is happening at the Saidosho Community Center. While kids in most parts of Japan are obsessed with Pokémon cards — or perhaps the franchise’s latest smartphone game, Pokémon TCG Pocket — the children of Kawara are clutching something a little closer to home.
They are playing a trading card game (TCG) where the stars aren’t fantasy creatures, anime heroes or even famous baseball players, but ojisan (middle-aged or older men) from the local community of Saidosho.
On the surface, this Ojisan TCG looks like any other collectible card game. As of March 18, there are 47 different cards in the collection, including 28 featuring local men with stats and special abilities.
Take the Firewall card, for example. It features Mr. Honda (74), a former fire brigade chief who helped keep the town safe for decades. Then there’s Soba Master Mr. Takeshita (81), who runs a local soba noodle-making class and now holds legendary status among the town’s youth.
The most popular of them all is probably All-Rounder Mr. Fujii (68), a former prison officer turned community volunteer. His card is so sought after that local kids have even started asking him for autographs.
“I was honestly shocked when they asked me to sign it,” Mr. Fujii said, laughing. “I never imagined I’d become a trading card, let alone have fans.”
Each card comes with different attacks and an assigned elemental type — just like a classic fantasy card game — but with a humorous, real-world edge. For instance, Firewall is, unsurprisingly, a fire type, equipped with a Super Guard move, which deals a hefty 200 damage.
A card called Plasma Conductor, featuring a smiling, bespectacled ojisan juxtaposed with an image of a lightbulb, is an electric type. At the bottom of the card, there is a line of text explaining how he can fix any electrical appliance without fail.
The creator of the game is Eri Miyahara, the Secretary General of the Saidosho Community Council.
“We wanted to strengthen the connection between the children and the older generations in the community. There are so many amazing people here. I thought it was such a shame that no one knew about them,” she said in an interview with Fuji News Network (FNN). “Since the card game went viral, so many kids are starting to look up to these men as heroic figures.”
The plan worked. Kids have started attending local events and volunteering for community activities — just for a chance to meet the ojisan from their cards. Participation in town events has reportedly doubled since the game launched.
Interestingly, Ojisan TCG didn’t start as a competitive game. The first set of cards was designed purely for collecting, but the local children quickly turned it into something more dynamic. They began comparing stats and declaring, “My card is stronger than yours.”
Seeing this, the game’s creator decided to take it to the next level. New rules were introduced, allowing the cards to be used in actual battles. The objective isn’t to defeat the opponent’s card but to outplay it based on the characters’ skills and abilities.
The rarity of a card isn’t based on fantasy stats — it’s tied to real-world contributions. The more actively the ojisan engages in volunteer work or community service, the higher the chances of their card being upgraded to a shiny version with a glossy laminated effect.
While the cards — all made entirely by hand — are currently in high demand and often out of stock, they are only available for sale at the Saidosho Community Center.
A pack of three cards costs ¥100, while a pack of six cards — including a shiny card — costs ¥500. Despite the limited supply, demand remains high, with many kids saving their pocket money for a chance to score a shiny version of their favorite ojisan.
...
Read the original on www.tokyoweekender.com »
I’m new to Substack—this is my first post.
Hey, I’m Matt. I founded Vizzly as a first-time entrepreneur and exited in under three years, selling the company to WPP, the world’s largest advertising organization. A few months after starting the company, we were accepted into Y Combinator, and I spent about four months engrossed in San Francisco’s tech hustle culture, learning from people much smarter than I am. I’m not anti-SF (far from it; I have some incredible memories there), but I didn’t buy into the all-or-nothing mindset of startups.
It’s worth noting that after Demo Day, we set out to raise more than we ended up with—essentially, a failed Seed round. At the time, I saw this as a negative, assuming fundraising was a form of validation (turns out it’s not, by the way).
A year later, I came to realize that this was the best thing to happen to both the company and myself. For starters, it forced us to become very revenue-centric.
Additionally, it opened the door to smaller acquisitions. Because our preference stack (more on this later) was tiny, even a relatively small deal would produce a life-changing outcome for the founding team.
Writing this, I could go in a million directions—and I probably will in future posts, diving into different aspects. For now, I’ll focus on the ‘missing middle path.’
On my LinkedIn feed, it feels like there are two dominant narratives in B2B SaaS: the VC path—go big or go home, maybe take a secondary at Series A or B, but push hard no matter what—or the Bootstrapper route—build sustainable value with an asset you own on a shoestring budget, and reap the rewards of cash flow and dividends.
What’s rarely talked about is the third path: raise a small amount—say, less than $1M—while keeping 90%+ of your equity, avoiding board seats, and building for profitability and strong asset value. All without draining your personal finances or spending a decade bootstrapping through revenue.
The reason it’s not more popular is that it’s a founder-centric approach that doesn’t fit the venture capital model, making it a controversial stance. In fact, if you pitched this pathway to a VC, I’m sure they’d ghost you quicker than their last Hinge date.
In short: the top 3% of investments are expected to return the fund and more. So, if you’re aiming for a $10M outcome, they won’t be interested. Success and returns often increase with more funding, which is why VCs push for constant fundraising. With a preference stack, they get paid back first, leaving you effectively in debt. This forces you to scale to justify the time and effort needed to generate a meaningful outcome.
For those wondering what a preference stack is, it’s basically the order in which different investors get paid when a startup is sold or liquidated. You might see a headline in TechCrunch saying a startup has sold for $40M. For all you know, they raised $30M, grinded for eight years, and the deal is 10% cash and 90% equity exchange—meaning the founders involved (assume there are two) are basically exchanging overinflated equity that will probably be subject to a four-year vest.
This is obviously a hypothetical and a great outcome based on the probability of a startup’s success, but it’s still a pretty modest outcome if you think that’s 12 years of prime life one would have sacrificed (sorry, invested).
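To make the preference-stack arithmetic concrete, here is a toy Python calculation under a few stated assumptions: a 1x non-participating preference taken off the top, a $40M all-cash sale, and founders holding 60% of the common stock (all hypothetical numbers, not drawn from any real deal):

# Toy preference-stack illustration with made-up numbers.
raised = 30_000_000        # total invested; paid back first (1x non-participating, assumed)
sale_price = 40_000_000    # headline acquisition price
founder_common = 0.60      # founders' share of the common stock (assumed)

to_investors = min(raised, sale_price)           # the preference comes off the top
to_common = max(sale_price - to_investors, 0)    # what's left for common holders
to_founders = to_common * founder_common

print(f"Investors:  ${to_investors:,.0f}")   # $30,000,000
print(f"All common: ${to_common:,.0f}")      # $10,000,000
print(f"Founders:   ${to_founders:,.0f}")    # $6,000,000, before tax and any vesting

Even in this rosy all-cash version, the founders split a fraction of the headline number, which is exactly the dynamic the preference stack creates.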
I have an incredible amount of admiration for folks on the venture journey — it’s high risk, high reward. I have the same admiration for folks who bootstrap and pursue a more unconventional route. The problem with bootstrapping is simple: not everyone can sustain it.
Many bootstrapped B2B SaaS startups were agencies beforehand, such as Mailchimp, Basecamp, Lemlist, etc. I probably don’t need to labor this point too much, but this pathway can also take a really long time to create and drive value.
Of course it’s subjective, but I think the middle path provides the optimal combination of return size and time to return, all with minimal financial risk to the founders involved. For most B2B SaaS businesses, you shouldn’t need more than $1M in capital to get to PMF, find a GTM motion that works (not that it needs to scale), and reach an ARR figure where the revenue multiple for valuation starts to look pretty tasty—enough to offer significant upside even after investors are paid out.
What no VC will tell you is that there’s a window of opportunity — somewhere between Pre-Seed and Series A — where founders can tap into significant upside. This is rarely discussed, and too many companies fail because they raise too much, only to end up with little to nothing in a liquidation event (as described above).
We stumbled into the middle path, and I’m grateful for it.
There are still drawbacks and important factors to consider—particularly how you choose to source your investment. While most VCs may not align with this philosophy, angel investors are generally more open to a 2-3x ROI.
As with everything, there are trade-offs. The key is understanding your values, defining what you want from the journey and the outcome, and knowing what you’re willing—and, more importantly, unwilling—to sacrifice along the way. Once you have that clarity, you can make an informed decision about the financing route to pursue.
* A middle path exists; you don’t need a moonshot or bootstrapping.
* There’s a sweet spot between Pre-Seed and Series A.
* Know your ‘why’ and avoid getting drawn into a journey you don’t need.
In future posts—or during caffeine-induced stints of motivation—I’ll dive deeper into topics such as fundraising, M&A, GTM, PMF and a few other needless acronyms.
...
Read the original on mattgiustwilliamson.substack.com »
My favorite part about getting older is that sometimes I wake up at 3:30 AM for no reason and can’t fall back asleep. This happened this weekend and by 5:30 or so I had exhausted my usual phone games (Wordle, Quordle, Blossom, Immaculate, Strands, etc). Reading my book about the extinction of the dinosaurs and rewatching the 9th inning of a thrilling Texas baseball win only woke me up more.
Then I saw an article from Axios about how travel to the US from other countries fell off a cliff in March. As a true nerd, what excited me about the article was that they linked to an official government data source that I wasn’t familiar with.
I went to the CBP’s Average Wait Time website and found a rich dataset that can be extremely useful for describing current trends. The system is kind of clunky in that you have to select individual agencies and terminals, and the online data appears to only go back for 3 years despite the data collection starting in 2008.
But the data was easily exportable and, with a little bit of elbow grease, you can break down US and non-US passengers being processed by CBP each and every day at the nation’s busiest international airports. That’s really useful for showing if, say, some sort of policy had been implemented that was making people from abroad question whether they should come to the U.S.
Having downloaded the data for 8 of the busiest airports in the country, I wanted to show how entries are changing. Travel can be very seasonal, so I wanted to compare this year to the same time last year. And I rolled it over 30 days to be able to show any recent changes while not getting lost in the noise of daily change.
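For anyone who wants to reproduce this, here is a minimal Python sketch of that rolling comparison, assuming the CBP exports have been combined into one CSV with date, passenger_type and passengers columns (hypothetical names; the real export layout differs by airport and terminal):

import pandas as pd

# Minimal sketch: 30-day rolling totals of passengers processed by CBP,
# compared with the same 30-day window one year earlier.
# Assumes one row per calendar day per passenger type after grouping.
df = pd.read_csv("cbp_awt_export.csv", parse_dates=["date"])

daily = (df.groupby(["date", "passenger_type"])["passengers"]
           .sum()
           .unstack("passenger_type")
           .sort_index())

rolling = daily.rolling(30).sum()     # smooth out day-to-day noise
prior_year = rolling.shift(365)       # the same window last year (row-based shift)
pct_change = (rolling / prior_year - 1) * 100

print(pct_change.tail())              # most recent 30-day change vs. last year

Splitting by passenger type here is also what makes the US-traveler series useful later as a rough control group.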
The result is below:
The last 30 days in this group of major airports has seen a greater than 10 percent reduction in foreign travelers being processed. I sent out an early version of this graph on Bluesky and some attention was paid to it. This immediately made me nervous because there is always the risk of systemic underreporting any time you’re working with very recent data.
Working with the CBP data immediately made me think about similar concerns that come to mind when working with crime data — especially early in the year.
I’ve talked before about how much I like the Texas Department of Public Safety Uniform Crime Report website. It gives you a ton of options for downloading pretty much everything Texas is reporting to the FBI as soon as agencies report it to the state.
San Antonio Police Department is one agency that does a great job of posting the most recent completed month’s data early in the following month. The February 2025 data for SAPD, for example, was published very early in March 2025. So if I want to see how thefts this year compare to last year I can easily pull up the YTD NIBRS report and there it is!
Some back of the spreadsheet math points to an 18 percent decline in theft offenses in San Antonio through February 2025 compared to the first two months of 2024.
But you have to be careful when using this data.
Running an Ad Hoc Query on thefts in San Antonio this year clearly shows why. This report lets you break down offenses by type and day. There were 121 thefts reported per day in San Antonio between January 1 and February 18, 105 per day from February 19 through 24, and 88, 67, 75, and 18 on February 25, 26, 27, and 28 respectively.
Law enforcement agencies have over a year until they have to submit final 2025 data to the FBI, so these numbers will be adjusted (usually up). The January numbers are prone to adjustment as well though the amount of change tends to diminish substantially after a month or two.
Theft fell 5 percent in San Antonio in 2024 and the early data from January and February points to a decline so far in 2025. But uncertainty with the data suggests caution is warranted when guesstimating what that decline may end up being.
Carl Sagan preached that extraordinary claims require extraordinary evidence. Claiming that theft in San Antonio is falling a lot isn’t exactly extraordinary, but the degree of decline shown in the topline figures screams for more evidence to substantiate it. The closer look helps to evaluate and contextualize the likelihood of a decline so far this year allowing for an appropriate level of analytic caution.
Which brings us back to analyzing air travelers coming to the US.
As I said, I’d never worked with this dataset until very recently, so it made me nervous that my first usage pointed to a huge, sudden, very recent decline. One easy way to figure out if the decline was a real change or a data reporting issue is to do the same process but for US travelers being processed by CBP. I assume that US travelers haven’t changed behavior, so if there’s a huge decline there then we’re probably looking at a data reporting issue.
Here is what we get doing this exercise with New York’s JFK airport.
And just for fun I did Orlando which shows a wild change:
All four airports show an identical trend over the last 30 days with a huge drop in non-US passengers going through CBP processing and an increase in US passengers which doesn’t stand out. This strongly suggests that what we’re seeing is not slow data entry — because I’d expect US and non-US passengers to show the same trend if that was the case.
Then, to be sure it isn’t a data reporting issue I went back the next day and downloaded the last week of CBP processing at JFK. The old data showed 260,250 entries between 3/29 and 4/3/2025 while the newly updated data showed 260,250 entries over that span. An exact match gives very strong confidence that we’re seeing a trend rather than a data issue here.
There are quirks that need to be remembered when contextualizing this data.
Travelers heading to the US from Canadian airports tend to go through customs at the Canadian airport so the data we have now may actually be understating the decline. And some posters on Bluesky pointed out that Easter is much later this year which may mean fewer travelers to Orlando for Spring Break, but no such impact exists with Easter 2024 being earlier than Easter 2023, so I’m skeptical that it’s playing much of a role.
The evidence clearly shows that foreigners have stopped coming to the United States as much as they were at this point last year. This newsletter doesn’t delve into politics, and the reason for this change is both pretty obvious and not in need of further discussion here.
From a data perspective, however, it’s an interesting case study in finding an important trend and working to ensure what you’re seeing is real. This is an issue that deserves close attention as time goes on and the CBP’s dataset is a great way to follow it to see if things get better, worse, or stay the same as the year goes on.
...
Read the original on jasher.substack.com »
What is this? If you want to automate actions on a website, like repeatedly filling out a form, you normally can’t do it with AI apps like Cursor or Claude because they don’t have access to a web browser. With Browser MCP, you can connect AI apps to your browser so they can automate tasks on your behalf.
Speed up and automate your workflows: enable your AI editor to automatically test your code end-to-end. Test user flows, validate UI elements, and ensure your application works as expected across different scenarios. Automate repetitive web-based tasks with ease. From data collection and form filling to workflow automation, save time and reduce errors in your day-to-day operations.
Automation happens locally on your machine, resulting in better performance without network latency. Since automation happens locally, your browser activity stays on your device and isn’t sent to remote servers. It uses your existing browser profile, keeping you logged into all your services, and avoids bot detection and CAPTCHAs by using your real browser fingerprint.
Get started with Browser MCP in three simple steps: add the Browser MCP extension to your browser, add the Browser MCP server to your AI application to connect with the browser extension, and begin creating automated workflows, letting your AI assistant handle repetitive tasks in your browser.
Everything you need to automate your browser is included: go forward to the next page, wait for a specified time in seconds, perform drag and drop between two elements, and get the console logs from the browser.
...
Read the original on browsermcp.io »
JPGs are ready to post just about anywhere, while RAWs yield an unfinished file filled with extra data that allows for much richer post-processing. That option for a RAW file (and even the generic name, RAW) has been standardized across the camera industry — but despite that, the camera world has never actually settled on one standardized RAW format.
Most cameras capture RAW files in proprietary formats, like Canon’s CR3, Nikon’s NEF, and Sony’s ARW. The result is a world of compatibility issues. Photo editing software needs to specifically support not just each manufacturer’s file type but also make changes for each new camera that shoots it. That creates pain for app developers and early camera adopters who want to know that their preferred software will just work.
Adobe tried to solve this problem years ago with a universal RAW format, DNG (Digital Negative), which it open-sourced for anyone to use. A handful of camera manufacturers have since adopted DNG as their RAW format. But the largest names in the space still use their own proprietary files. And there’s no sign of that changing anytime soon.
Some smaller camera manufacturers have been using the DNG format for years, while others like Sigma have adopted it more recently. The whole point of DNG is to offer flexibility, ease of use, and ideally, a little more futureproofing — since the format is open to use and royalty-free.
DNG was created in 2004 by Thomas Knoll, one of the co-creators of Photoshop, and was based on the even older TIFF image specification. DNG is capable of holding additional camera metadata embedded within it. While other RAW formats are usually coupled with an XMP sidecar file for storing metadata, DNG is slightly streamlined since it’s just one file that allows nondestructive metadata edits within it.
Regardless of which camera brand you use, a RAW processing workflow is mostly the same: you take a picture, import it to a computer, open and process the file using editing software, and then export a “finished” file in a universal format (like JPG) for sharing or printing. Where things often get the messiest is with software compatibility.
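As a concrete, simplified example of that workflow, here is a short Python sketch; it assumes the third-party rawpy and imageio packages and a file named photo.dng, and is only meant to show how mechanical the RAW-to-JPG step is once a format is supported:

import rawpy              # LibRaw wrapper; reads DNG and most proprietary RAW formats
import imageio.v3 as iio

# Open the RAW file, run a default demosaic / white-balance / tone pass,
# then export a universally readable JPG.
with rawpy.imread("photo.dng") as raw:
    rgb = raw.postprocess(use_camera_wb=True)   # 8-bit RGB array

iio.imwrite("photo.jpg", rgb)

The catch, as the rest of the article explains, is that none of this works until the library underneath actually supports your camera’s files.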
Sigma: “Adding proprietary data allows the camera information to be given more accurately to the developing software. However, it is less versatile,” Sigma said in a statement provided by US PR rep Jack Howard. (Some of its cameras use DNG, while others use the proprietary X3F format.)
Pentax: “The advantage of proprietary formats is that they can evolve on their own; the disadvantage is that they may not be supported by third-party applications,” said Shigeru Wakashiro, General Manager of Product Planning for Ricoh Imaging Company (which owns Pentax). Of course, Pentax is the lone manufacturer that gives users a choice between a proprietary PEF file and DNG. The Ricoh executive added, “The disadvantage of using DNG is that if all manufacturers use the DNG format, it will be difficult to manage the format separately for each manufacturer.”
Sony’s software for processing ARW RAW files is called Imaging Edge. Like most first-party software from camera manufacturers, it’s terrible and unintuitive to use — and should be saved for situations like a high-resolution multishot mode where it’s the only method to use a camera’s proprietary feature. The same goes for other first-party apps like Canon Digital Photo Professional and Nikon NX Studio.
The only other time it may be necessary to use those apps is if you buy a camera as soon as it’s released and its RAW files aren’t yet supported by Adobe or other software makers. That moment of friction is when a proprietary RAW format is most annoying, primarily affecting early adopters. It’s a restriction that severely affects camera reviewers and YouTubers, who often can’t try out RAW files in any initial hands-on testing.
Getting that software support out to users as soon as possible takes a bunch of testing and work, and it’s not always completed as quickly as new camera owners would like. “For new cameras, this means making sure that we add support for new or changed behaviors in RAW file formats,” Eric Chan, a digital imaging fellow at Adobe, tells The Verge. “For example, new compression modes, capture modes such as High Dynamic Range and more. In addition, measuring each new device sensor for characteristics such as color and noise.”
“I have yet to hear a good reason for using proprietary RAW formats. The underlying data is the same. If a manufacturer comes up with additional data that isn’t included in the DNG standard, the format is extensible enough that a camera manufacturer can throw it in there, anyway.” That’s what Ben Sandofsky, developer at Lux Optics, makers of Halide, told me. So maybe some camera brands are set in their ways and like having full control. Ultimately, we’re sort of at their mercy and whether they choose to be more guarded with a proprietary format or use an open one like DNG.
...
Read the original on www.theverge.com »
Knowing where your engineer salary comes from
With the recent flurry of US federal firings, many people are pointing and laughing at the Trump-voting federal employees who are just now finding out that they’ve voted for themselves to be let go. How could you have this poor a mental model of what your job even is? Well. In my opinion, many software engineers are operating under a mental model that’s just as bad, and are often doing the equivalent of voting for the person promising to fire them.
I won’t quote the tweets, but I regularly see stories like “I convinced my idiot bosses to finally let me do only tech debt work, and would you believe they fired me after a few months?” Or “I’ve been busting my ass on this underfunded project and I still got a bad performance review”. Or “it’s so unfair that I haven’t been promoted - look at all this amazing accessibility/standards/open-source work I’ve been doing!” The basic structure goes like this:
A bright-eyed engineer joins a tech company, excited to go and make the world a better place
They throw themselves into various pieces of work that don’t make money (improving FCP performance, better screenreader support, refactoring)
Their managers desperately try to redirect them to work that does make money, causing a long frustrating power struggle
Eventually the bright-eyed engineer gives up and unhappily focuses on Profitable Product X, or
The bright-eyed engineer leaves or is fired, and goes on Twitter to complain about their important work not being valued
The complaint in this story is basically equivalent to “I can’t believe Trump is firing me from the IRS when I voted for the guy”. It represents a fundamental misunderstanding of what tech companies are.
So what is the right understanding? Let’s start as simple as possible. Tech companies are run by small groups of people with the goal of making money. Successful tech companies make a lot of money, by definition. They hire software engineers in order to continue doing the things that make that money, or to do new things that make more money.
At successful tech companies, engineering work is valued in proportion to how much money it makes the company (directly or indirectly). Patrick McKenzie has an excellent post on this:
Profit Centers are the part of an organization that bring in the bacon: partners at law firms, sales at enterprise software companies, “masters of the universe” on Wall Street, etc etc. Cost Centers are, well, everybody else. You really want to be attached to Profit Centers because it will bring you higher wages, more respect, and greater opportunities for everything of value to you. It isn’t hard: a bright high schooler, given a paragraph-long description of a business, can usually identify where the Profit Center is. If you want to work there, work for that. If you can’t, either a) work elsewhere or b) engineer your transfer after joining the company.
Companies value work more the closer it is to a profit center. I don’t think you necessarily have to work in profit centers (in most tech companies, that would mean abandoning your engineering title and role). But you need to demonstrate your value to the profit center in order for your own work to be valued. I’m not saying you need to do this to keep your job. Companies pay lots of people who don’t deliver value, sometimes for many years. What I’m saying is this:
If your work isn’t clearly connected to company profit, your position is unstable
In other words, you’re probably depending on a kind-hearted manager (or CEO) who personally values your work. When they leave, you’re in trouble. Or you’re depending on a large company not really caring to check if a small team is bringing in profits. When they look, you’re in trouble. Or you’re depending on a cultural climate where your work has temporary cultural cachet (e.g. biofuels in the early 2000s). When that changes, you’re in trouble. The only way to have a stable position is to be connected to the way the company makes money.
In order to know if your work is connected to company profits, you have to know two things:
What is your company’s business model? How do they make money?
How does your work support the business model?
Publicly traded companies must publish their business model and finances every year, which means you can either just go read that or read what people are writing about it on business blogs, magazines, and so on. (If you’re working for a well-known company, you can probably just ask a LLM). If you’re working for a private company, that can be harder, but usually it’s not too hard to get the gist. For instance, the broad strokes of where Valve get their money from are pretty clear: it’s Steam, not their first-party games.
Being an engineer at a company will give you much more visibility into the business model. For instance, you can run analytics queries that identify the ten largest customers. Often you won’t have to run these queries directly - they’ll be shared among the product-and-business folks, in channels that most engineers have no interest in. It’s worth trying to learn about the business model. For instance, if I were working for Valve, I’d want a much clearer answer than “it’s Steam”: I’d want to know which kind of games were bringing in the most money, the distribution between new users and existing users, and so on.
Once you have an idea how your company makes money, you can gauge how your work supports it. If you build a product that many people are buying, this is easy: calculate how big a percentage of company profit your product is. What if you don’t build a product? Say you’re on the accessibility team, or the German localization team. In that case, you ought to figure out why your company is investing in these things. For example, working on accessibility might be valuable because:
It allows us to sell to (e.g.) visually-impaired customers, growing the total addressable customer base by X%
It enables us to meet specific regulatory requirements that allow us to sell to large enterprise customers (e.g. governments)
It makes us look good, or at least avoids us looking bad
It’s just good work that is worth doing because it’s the right thing to do
Some reasons are only important when times are good. If the company is doing great and has more money than it knows what to do with, the last two points are probably worth spending money on. (If the company is doing really great, like a lot of companies in 2019, “literally anything” is worth spending money on in order to accumulate engineers). When interest rates rise, those reasons vanish.
These reasons will apply to some companies more than others. For instance, if you work at Google, the first reason is important because growing a customer base worth ~270B by 2% unlocks ~5.5B of new revenue. If Google is paying your team less than 5B in total, your team is probably making enough money to justify its existence. But if you work at a smaller company with revenue measured in millions, that math goes the other way.
There’s an obvious consequence to this: if you want your work to be valued (i.e. you don’t want to be reshuffled or fired), and you want to work on personally-satisfying features like accessibility, or UI polishing, or anything else not directly connected to profit - you need to go and work for a very profitable company.
When I wrote about this idea - that very large tech companies deliver marginal features as a way of slightly growing their massive addressable customer base - some readers found the idea depressing. Maybe so! But at least it’s a theory for how it might be possible to work on these kinds of features and get paid for it. The alternative theory is something like:
Accessibility, clean code, good performance, and so on are all Good Features
I just need to keep looking until I find a Good Company and not a Bad Company
I don’t think that a smart engineer who thinks about this problem will come away believing this. But lots of smart engineers don’t like thinking about how their work connects to company profits, so their implicit beliefs often add up to something like this. These engineers will often go through the five-step process I mentioned in the introduction to this post. I hate to see technically strong, motivated, kind-hearted engineers run headlong into burnout for completely predictable reasons.
It’s easy to fall into the trap of thinking that you get paid for work because it’s important
You get paid for work because it makes money. If your work doesn’t contribute to that, your position is inherently unstable
If you want a stable position, you should try and figure out how your work connects to company profits, and strengthen that connection if possible
All kinds of seemingly-unprofitable work makes money, particularly at large companies where small percentages are a lot
If you want to work on seemingly-unprofitable work, you’re probably better off working for large successful tech companies
...
Read the original on www.seangoedecke.com »
An artificial intelligence (AI) system has for the first time figured out how to collect diamonds in the hugely popular video game Minecraft — a difficult task requiring multiple steps — without being shown how to play. Its creators say the system, called Dreamer, is a step towards machines that can generalize knowledge learned in one domain to new situations, a major goal of AI.
“Dreamer marks a significant step towards general AI systems,” says Danijar Hafner, a computer scientist at Google DeepMind in San Francisco, California. “It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do.” Hafner and his colleagues describe Dreamer in a study in Nature published on 2 April1.
In Minecraft, players explore a virtual 3D world containing a variety of terrains, including forests, mountains, deserts and swamps. Players use the world’s resources to create objects, such as chests, fences and swords — and collect items, among the most prized of which are diamonds.
Importantly, says Hafner, no two experiences are the same. “Every time you play Minecraft, it’s a new, randomly generated world,” he says. This makes it useful for challenging an AI system that researchers want to be able to generalize from one situation to the next. “You have to really understand what’s in front of you; you can’t just memorize a specific strategy,” he says.
Collecting a diamond is “a very hard task”, says computer scientist Jeff Clune at the University of British Columbia in Vancouver, Canada, who was part of a separate team that trained a program to find diamonds using videos of human play2. “There is no question this represents a major step forward for the field.”
AI researchers have focused on finding diamonds, says Hafner, because it requires a series of complicated steps, including finding trees and breaking them down to gather wood, which players can use to build a crafting table.
This, together with more wood, can be used to make a wooden pickaxe — and so on, until players have assembled the correct tools to collect a diamond, which is buried deep underground. “There’s a long chain of these milestones, and so, it requires very deep exploration,” he says.
Previous attempts to get AI systems to collect diamonds relied on using videos of human play2 or researchers leading systems through the steps.
By contrast, Dreamer explores everything about the game on its own, using a trial-and-error technique called reinforcement learning — it identifies actions that are likely to beget rewards, repeats them and discards others. Reinforcement learning underpins some major advances in AI. But previous programs were specialists — they could not apply knowledge in new domains from scratch.
Key to Dreamer’s success, says Hafner, is that it builds a model of its surroundings and uses this ‘world model’ to ‘imagine’ future scenarios and guide decision-making. Rather like our own abstract thoughts, the world model is not an exact replica of its surroundings. But it allows the Dreamer agent to try things out and predict the potential rewards of different actions using less computation than would be needed to complete those actions in Minecraft. “The world model really equips the AI system with the ability to imagine the future,” says Hafner.
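As a rough illustration of that idea (and emphatically not the actual Dreamer algorithm, which learns its world model from pixels and trains an actor-critic inside it), here is a toy Python sketch in which a hand-written model stands in for the learned one and the agent scores candidate action sequences by their imagined reward:

import random

ACTIONS = ["gather_wood", "craft_pickaxe", "dig"]

def world_model(state, action):
    # Stand-in for a learned dynamics model: predicts (next_state, reward).
    state = dict(state)
    reward = 0.0
    if action == "gather_wood":
        state["wood"] += 1
        reward = 0.1
    elif action == "craft_pickaxe" and state["wood"] >= 2:
        state["wood"] -= 2
        state["pickaxe"] = True
        reward = 1.0
    elif action == "dig" and state["pickaxe"]:
        reward = 10.0   # stands in for "found a diamond"
    return state, reward

def imagined_return(state, plan):
    # Roll the model forward "in imagination" and sum the predicted rewards,
    # without ever touching the (expensive) real environment.
    total = 0.0
    for action in plan:
        state, reward = world_model(state, action)
        total += reward
    return total

start = {"wood": 0, "pickaxe": False}
plans = [[random.choice(ACTIONS) for _ in range(6)] for _ in range(200)]
best = max(plans, key=lambda p: imagined_return(start, p))
print(best, imagined_return(start, best))

The real system differs in almost every detail (latent states, learned rewards, a policy trained on imagined trajectories rather than random search), but the "predict, imagine, then act" structure is the part being described above.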
This ability could also help to create robots that can learn to interact in the real world — where the costs of trial and error are much higher than in a video game, says Hafner.
Testing Dreamer on the diamond challenge was an afterthought. “We built this whole algorithm without that in mind,” says Hafner. But it occurred to the team that it was the ideal way to test whether its algorithm could work, out of the box, on an unfamiliar task.
...
Read the original on www.nature.com »
Twenty years ago today, Linus Torvalds made the very first commit to Git, the information manager from hell.
Over these last 20 years, Git went from a small, simple, personal project to the most massively dominant version control system ever built.
I have personally had a hell of a ride on this particular software roller coaster.
I started using Git for something you might not imagine it was intended for, only a few months after its first commit. I then went on to cofound GitHub, write arguably the most widely read book on Git, build the official website of the project, start the annual developer conference, etc - this little project has changed the world of software development, but more personally, it has massively changed the course of my life.
I thought it would be fun today, as the Git project rolls into its third decade, to remember the earliest days of Git and explain a bit why I find this project so endlessly fascinating.
Before we get into the history of Git and my relationship with it, I want to start with why Git exists and the mindset that it was started with.
Git started from frustration in the Linux kernel development community over version control and collaboration.
The kernel community has always used mailing lists for collaboration. It’s actually a pretty fascinating method of collaboration - it is massively scalable, highly distributed, local first, capable of fine grained discussion of patches, cryptographically securable, etc.
The gist of the mailing list collaboration flow is:
* publish a tarball (a sort of zip file) of a known state of the project
* people download that and expand it locally
* modify it with whatever feature or fix they want to change
* run GNU diff on it to create a patch that the maintainer can apply to that initial known state to add that feature
* email that patch or a series of them to a mailing list
* the list discusses the changes
* the maintainer applies the patch for the next tarball release or asks for changes
I would love to do a whole blog post about how mailing list collaboration works and how cool various aspects of it are, but that’s for another time.
However, in this world, the version control systems of the time were simply not helpful - they seemed like a step backward in functionality. They had clunky access control mechanisms, they weren’t distributed, they were incredibly slow.
The community worked primarily with patches and tarballs and existing SCMs were just not good enough.
If you think about it, the patches and tarballs workflow is sort of the first distributed version control system - everyone has a local copy, the changes can be made locally, and access to “merge” is whoever can push a new tarball to the server.
However, the process was still a bit cumbersome - managing the patches, remembering what was applied and who contributed it, keeping multiple series in flight, dealing with conflicts or rebasing changes.
The Bitkeeper tool was developed specifically for the kernel’s use case to try to build a version control system that worked for this workflow and Linus did like it, but the licensing they wanted to use didn’t vibe with the community it was built for.
It’s important to understand that this is why Git was created. Not to be a version control system really, but fundamentally to be a better way to do patches and tarballs - snapshot a set of files and show differences that can be discussed.
This is primarily how its data structure was designed (linked lists of trees of files, content addressable blob storage) and that structure is fundamentally unchanged, from the first commit to this very day.
Since we’re on the topic, what did the first commit look like? What could Git do from its first moment of existence?
Well, it was a stupid content tracker. As Linus himself put it from day one:
The first commit was a collection of seven simple stand alone tools. They weren’t things like git commit, they were very low level database tools like write-tree and commit-tree (this changed a few weeks into the project, when everything started being prefixed with git-).
Some of them evolved into the plumbing commands that still exist to this day, like git cat-file and git write-tree, others were fundamentally different (for example, git read-tree is a current Git plumbing command but the original read-tree was more like the current git ls-files), however, at a low level the concepts all exist still.
Essentially, with the very first commit, Git could:
* Build up a “snapshot” by using update-cache to build a cache of contents (essentially a tarball), and write-tree to write it as an object to the database.
* Write a “changeset” (commit) with commit-tree that commented on the changes introduced with a new tarball and the parent it was based on in order to setup a history of “tarballs”.
* Read out those database structures with cat-file (pull an object out of the database), read-tree (list out what the cache looks like) and show-diff (show the diff of the cache to the working directory).
From the very first days, Linus mentions that he really only wanted to build this plumbing and have it be the backend for some UI (“porcelain”) scripted on top of it.
He meant to build an efficient tarball history database toolset, not really a version control system. He assumed that someone else would write that layer.
More on that in a minute. But first…
I was personally first introduced to Git around this point in the timeline by my friend and coworker, Nick Hengeveld at the ill fated startup we both worked at called Reactrix.
Interestingly, we used Git more in the way that Linus thought of the tools - as a distributed content tracker - rather than as the version control system that you probably mainly think of it as today.
We essentially worked for an advertising company that managed a bunch of digital signage displays with rather heavy-weight assets. Each of our hundreds of displays had a unique combination of ads they needed to run, most were on slow cell data uplinks, and the ads changed a lot. So we needed an efficient way to say “for machine A, we need ads 1, 2 and 3 (v1). for machine B, we need ads 2, 3 (v2) and 4” and incrementally update them if there were new versions of an existing ad.
We used Git - not to track changes in source code, but as a content distribution mechanism. We would use a script to look at the upcoming scheduling, write out unique trees of only the ads that each machine needed, commit that tree to a branch for the machine and then have each machine fetch and hard checkout every night.
This approach had a bunch of interesting advantages.
* If an ad updated, we only transferred the changed files and the changes were delta compressed against an object that was potentially already on the machine.
* All the shared assets had a single blob that could be checked out in multiple contexts - the content addressable filesystem aspect of Git was great for this.
* We could have thousands of combinations of hundreds of assets without storing any content twice anywhere or transferring the same thing over the network more than once.
Nick was a fairly heavy contributor to the early Git project to get it to work for our use case (adding SSL support to http-fetch, adding resumable and parallel HTTP transfers, the first HTTP based push solution, etc). His first patch was in September, a mere 6 months after Linus’s first commit.
His introduction of Git to me, my struggle to grasp it, and my eventual lightbulb moment of thinking it was pretty cool was what motivated me to write about it and try to make it easier for people to learn.
This pushed me to compile the Git Community Book, the Git Internals Peepcode PDF, build the git-scm.com website and write the Pro Git book - all of which eventually led me to GitHub.
So, how did this stupid content tracker become the most widely used SCM in the world?
Well, I go over a lot of the reasons I think Git and GitHub “won” in a previous blog post, but I do think it’s worth a quick glance at why Git itself ended up looking the way that it does today. Also maybe tell some fun anecdotes about the origins of stuff you know and love along the way.
As you may have deduced by the occasionally unfriendly, obscure or inconsistent nature of the Git commands, this wasn’t a system that someone sat down and meticulously designed from a usability standpoint from day one.
For the first several months, the git commands were all incredibly low level - even if you know the existing plumbing commands, you may not recognize a single one of the commands that existed in June of 2005 (rev-tree, mkdelta, tar-tree?)
It was fairly clear from the very beginning that the approach was that Git would just be this very low level database/filesystem type toolset and that (possibly several) other tools would use Git as their infrastructure.
So if Linus and the early Git team didn’t initially imagine Git being an actual version control tool and just wanted to build the plumbing, where did the “porcelain“ commands that we know today actually come from?
The answer is that they sort of slowly eked their way in over several years, mostly as shell scripts that evolved to scratch a series of itches.
In the early days, there were a number of user interfaces that scripted Linus’s backend tooling to be more user friendly. The earliest and, for the first few years, the most popular was git-pasky, which quickly became “Cogito” by Petr Baudis. The first versions of these scripts were released only days after Git.
In early release announcements, you can get a feeling for the tooling that would start to become Git.
A few months in, the effort to keep a line between porcelain and plumbing begins to break down as tooling in Git starts to compete with tooling in the porcelain scripts.
Over the next year or two, more and more scripts continued to make their way into the core Git code until it eventually became clear that people’s time would be better spent working on the tools distributed with Git rather than trying to maintain this plumbing/porcelain distinction in the tooling.
In 2007, Cogito finally got “put up for sale” and the idea of some other porcelain being the main way that Git was used was more or less abandoned.
Looking back through these commits and emails from 20 years ago, it’s fascinating to see the birth of some of the infamous tools that many of us use every day.
The first version of git log was a wrapper script that called git-rev-list --pretty, piped it through a pager and was hard-coded to start at HEAD. Here is the original “git log” program in its entirety:
#!/bin/sh
git-rev-list --pretty HEAD | LESS=-S ${PAGER:-less}
Actually, a lot of the current commands started out this way - as shell or Perl scripts a few lines long that ran some core plumbing commands. Eventually almost everything was rewritten in C as builtins for portability, but there were first versions of many commands in these scripting languages.
There are lots of these “firsts”, but I’ll just do one more because I find it so interesting. The infamous “rebase” command was born from a conversation about workflow between Junio and Linus in June 2005.
Junio tells Linus what his workflow has been:
Linus comments that the type of merge that developers really want here is to “re-base” the work:
Junio then responds with a simple script to use a new command called git cherry to “re-base” a series of commits.
As far as I can tell, this is the first time the phrase “rebase” was used in version control. It’s fun to see history born.
I’ve been asked many times where GitHub came up with the “Octocat” and the answer also lies in these early archives.
The first usage of the word “octopus” that I’ve seen in the Git mailing list is Junio telling Linus that his patches were applied sequentially, not “octopus”.
This referred to creating a merge commit with multiple parents, which is the other way the different patches could have been merged. Eventually, “octopus merge” became one of the valid merge strategies that Git had in its quiver. (Fun fact: Git once had “stupid” as a merge strategy.)
At some point in the very, very early days of GitHub, Tom was looking for anything that could be anthropomorphically used as a Git totem, and “Octopus” was the only term in the Git lexicon that seemed to fit the bill. Tom searched for clipart featuring an octopus, and this Simon Oxley image was the cutest of the results. So the “octocat” was born.
Twenty years and a day later, one might ask what the future of this unlikely hero is.
The funny thing is that I’m still using Git, in some ways, in the same way that it was originally intended. GitButler not only uses Git to do normal commits for tracking code changes, but also uses the Git database to track the history of your project. In the end, it’s still a damn good stupid content tracker, as Linus first intended.
So, happy birthday Git. You are still weird. You are still wonderful. Thanks for all the fish.
...
Read the original on blog.gitbutler.com »
...
Read the original on www.axios.com »
Diane Papan, a California Assembly member, introduced a bill, California AB-501, that would stand in the way of converting OpenAI from a nonprofit into a for-profit. (Stuart Russell, Yann LeCun, Lawrence Lessig, and many others, including me, signed a petition in its support.) Tens of billions of dollars are at stake. Here’s what AB-501 looked like back when I signed the petition a few days ago.
Here’s what it looks like now. Can you spot the difference?
My sources assure me that this is NOT a clerical error and that Papan’s office confirms that the change is real.
Rumor has it that Altman called Papan just before the change. What did he tell her? StopAI, perhaps the first to report this, took a dark view:
Media, get out there and find out what happened.
...
Read the original on garymarcus.substack.com »