10 interesting stories served every morning and every evening.
Cast your mind back to when you learned to drive, ride a bike, speak a foreign language, perform a tracheostomy, or acquire any other skill. Wasn’t confidence the key to your success? Whatever we do in life, confidence is always critical. If you run a business, one of the metrics that are likely to be collected is confidence in your business, as that’s such an important economic indicator. Confidence is every bit as important in computing.
Over the last few weeks I’ve been discovering problems that have been eroding confidence in macOS. From text files that simply won’t show up in Spotlight search, to Clock timers that are blank and don’t function, there’s one common feature: macOS encounters an error or fault, but doesn’t report that to the user, instead just burying it deep in the log.
When you can spare the time, the next step is to contact Apple Support, who seem equally puzzled. You’re eventually advised to reinstall macOS or, in the worst case, to wipe a fairly new Apple silicon Mac and restore it in DFU mode, but you have no reason to believe that will stop the problem from recurring. You know that Apple Support doesn’t understand what’s going wrong, and despite the involvement of support engineers, they seem as perplexed as you.
One reason for this is that macOS so seldom reports errors, and when it does, it’s uninformative if not downright misleading. Here’s a small gallery of examples I’ve encountered over the last few years, to bring back unhappy memories.
Maybe you saved an important webpage in Safari 26.1 using its Web Archive format, then a couple of days later discovered you couldn’t open it. There’s no error message, just a blank window, so you try again with the same result. Another site shows the same problem, forcing you to conclude that it’s a bug in Safari. Are you now going to devote your time to obtaining sufficient information to report that to Apple using Feedback? Or to contact Apple Support and pursue its escalation to an engineer who might fortuitously discover the cause?
Silent failures like these are least likely to be reported to Apple. In most cases, we find ourselves a workaround, here to abandon Web Archives and switch to saving webpages as PDF instead. When someone else mentions they too have the same problem, we advise them that Web Archives are broken, and our loss of confidence spreads by contagion.
Honest and understandable error reporting is essential to confidence. It enables us to tackle problems rather than just giving up in frustration, assuming that it’s yet another feature we used to rely on that has succumbed in the rush to get the next version of macOS out of the door.
Eroding confidence is also a problem that the vendors of AI appear to have overlooked, or at least seriously underestimated. It’s all very well using the euphemism of hallucination to play down the severity of errors generated by LLMs. But those can only cause users to lose confidence, no matter how ‘intelligent’ you might think your AI is becoming. Go ask the lawyers who have been caught out by courts after submitting AI fabrications whether they still have full confidence in your product.
...
Read the original on eclecticlight.co »
At the start of each year, on January 1st, a new crop of works enter the public domain and become free to enjoy, share, and reuse for any purpose. Due to differing copyright laws around the world, there is no one single public domain — and here we focus on three of the most prominent. Newly entering the public domain in 2026 will be:
* works by people who died in 1955, for countries with a copyright term of “life plus 70 years” (e.g. UK, Russia, most of EU and South America);
* works by people who died in 1975, for countries with a term of “life plus 50 years” (e.g. New Zealand, and most of Africa and Asia);
* films and books (incl. artworks featured) published in 1930 for the United States.
In our advent-style calendar below, find our top pick of what lies in store for 2026. Each day, as we move through December, we’ll open a new window to reveal our highlights! By public domain day on January 1st they will all be unveiled — look out for a special blogpost from us on that day. (And, of course, if you want to dive straight in and explore the vast swathe of new entrants for yourself, just visit the links above).
...
Read the original on publicdomainreview.org »
Ghostty for the web with xterm.js API compatibility — giving you a proper VT100 implementation in the browser.
* Migrate from xterm by changing your import: @xterm/xterm → ghostty-web
* WASM-compiled parser from Ghostty—the same code that runs the native app
Originally created for Mux (a desktop app for isolated, parallel agentic development), but designed to be used anywhere.
Live Demo on an ephemeral VM (thank you to Greg from disco.cloud for hosting).
npx @ghostty-web/demo@next
This starts a local HTTP server with a real shell on http://localhost:8080. Works best on Linux and macOS.
xterm.js is everywhere—VS Code, Hyper, countless web terminals. But it has fundamental issues:
xterm.js reimplements terminal emulation in JavaScript. Every escape sequence, every edge case, every Unicode quirk—all hand-coded. Ghostty’s emulator is the same battle-tested code that runs the native Ghostty app.
npm install ghostty-web
ghostty-web aims to be API-compatible with xterm.js.
import { init, Terminal } from 'ghostty-web';

await init();

const term = new Terminal({
  fontSize: 14,
  theme: {
    background: '#1a1b26',
    foreground: '#a9b1d6',
  },
});

term.open(document.getElementById('terminal'));
term.onData((data) => websocket.send(data));
websocket.onmessage = (e) => term.write(e.data);
For a comprehensive client-server example, refer to the demo.
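If you just want a minimal backend to wire the snippet above to, something along these lines works (a sketch using ws and node-pty, not the actual demo implementation):

import { WebSocketServer } from 'ws';
import * as pty from 'node-pty';

// One shell per WebSocket connection, bridged byte-for-byte.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  const shell = pty.spawn(process.env.SHELL || 'bash', [], {
    name: 'xterm-256color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env as { [key: string]: string },
  });

  // Shell output -> browser (term.write in the client snippet).
  shell.onData((data) => ws.send(data));

  // Keystrokes from term.onData -> shell input.
  ws.on('message', (msg) => shell.write(msg.toString()));

  ws.on('close', () => shell.kill());
});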
ghostty-web builds from Ghostty’s source with a patch to expose additional functionality.
bun run build
Mitchell Hashimoto (author of Ghostty) has been working on libghostty which makes this all possible. The patches are very minimal thanks to the work the Ghostty team has done, and we expect them to get smaller.
This library will eventually consume a native Ghostty WASM distribution once available, and will continue to provide an xterm.js compatible API.
At Coder we’re big fans of Ghostty, so kudos to that team for all the amazing work.
...
Read the original on github.com »
I’ve written about JPEG XL in the past. First, I noted Google’s move to kill the format in Chromium in favor of the homegrown and inferior AVIF. Then, I had a deeper look at the format, and visually compared JPEG XL with AVIF on a handful of images.
The latter post started with a quick support test:
“If you are browsing this page around 2023, chances are that your browser supports AVIF but does not support JPEG XL.”
Well, here we are at the end of 2025, and this very sentence still holds true. Unless you are one of the 17% of users using Safari, or are adventurous enough to use a niche browser like Thorium, LibreWolf or the newer Zen Browser, chances are you see the AVIF banner in green and the JPEG XL image in black/red.
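For what it’s worth, a format check like the one behind that banner needs nothing exotic; here’s a minimal sketch (my illustration, not this site’s actual code; the test image path and element id are placeholders):

// Try to decode a tiny test image; onload fires only if the format is supported.
// '/img/tiny.jxl' and 'jxl-banner' are placeholder names.
function supportsFormat(url: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(img.naturalWidth > 0);
    img.onerror = () => resolve(false);
    img.src = url;
  });
}

supportsFormat('/img/tiny.jxl').then((ok) => {
  const banner = document.getElementById('jxl-banner');
  if (banner) {
    banner.textContent = ok ? 'Your browser supports JPEG XL' : 'No JPEG XL support';
  }
});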
The good news is, this will change soon. In a dramatic turn of events, the Chromium team has reversed its Obsolete tag, and has decided to support the format in Blink (the engine behind Chrome/Chromium/Edge). Given Chrome’s position in the browser market share, I predict the format will become a de facto standard for images in the near future.
I’ve been following JPEG XL since its experimental support in Blink. What started as a promising feature was quickly axed by the team in a bizarre and ridiculous manner. First, they asked the community for feedback on the format. Then, the community responded very positively. And I don’t only mean a couple of guys in their basement. Meta, Intel, Cloudinary, Adobe, ffmpeg, libvips, Krita, and many more. After that came the infamous comment:
Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:
* Experimental flags and code should not remain indefinitely
* There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
* The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
* By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome
Yes, right, “not enough interest from the entire ecosystem”. Sure.
Anyway, following this comment, a steady stream of messages pointed out how wrong that was, from all the organizations mentioned above and many more. The pushback spilled over into blog posts, videos, and social media interactions.
Strangely, the few years that followed were pretty calm for JPEG XL. However, a few notable events did take place. First, the Firefox team showed interest in a JPEG XL Rust decoder, after describing their stance on the matter as “neutral”. They were concerned about the increased attack surface resulting from including the current 100K+ line C++ libjxl reference decoder, even though most of those lines are testing code. In any case, they kind of requested a “memory-safe” decoder. This seems to have kick-started the Rust implementation, jxl-rs, from Google Research.
To top it off, a couple of weeks ago, the PDF Association announced their intent to adopt JPEG XL as a preferred image format in their PDF specification. The CTO of the PDF Association, Peter Wyatt, expressed their desire to include JPEG XL as the preferred format for HDR content in PDF files.
All of this pressure exerted steadily over time made the Chromium team reconsider the format. They tried to kill it in favor of AVIF, but that hasn’t worked out. Rick Byers, on behalf of Chromium, made a comment in the Blink developers Google group about the team welcoming a performant and memory-safe JPEG XL decoder in Chromium. He stated that the change of stance was in light of the positive signs from the community we have exposed above (Safari support, Firefox updating their position, PDF, etc.). Quickly after that, the Chromium issue state was changed from Obsolete to Assigned.
This is great news for the format, and I believe it will give it the final push for mass adoption. The format is excellent for all kinds of purposes, and I’ll be adopting it pretty much instantly for this and the Gaia Sky website when support is shipped. Some of the features that make it superior to the competition are:
* Lossless re-compression of JPEG images. This means you can re-compress your current JPEG library without losing information and benefit from a ~30% reduction in file size for free. This is a killer feature that no other format has.
* Support for image sizes of up to 1,073,741,823x1,073,741,824. You won’t run out of image space anytime soon. AVIF is ridiculous in this aspect, capping at 8,193x4,320. WebP goes up to 16K², while the original 1992 JPEG supports 64K².
* Maximum of 32 bits per channel. No other format (except for the defunct JPEG 2000) offers this.
* Maximum of 4,099 channels. Most other formats support 4 or 5, with the exception of JPEG 2000, which supports 16,384.
* JXL supports progressive decoding, which is essential for web delivery, IMO. WebP or HEIC have no such feature. Progressive decoding in AVIF was added a few years back.
For a full codec feature breakdown, see Battle of the Codecs.
JPEG XL is the future of image formats. It checks all the right boxes, and it checks them well. Support in the overwhelmingly most popular browser engine is probably going to be a crucial stepping stone in the format’s path to stardom. I’m happy that the Chromium team reconsidered their inclusion, but I am sad that it took so long and so much pressure from the community to achieve it.
...
Read the original on tonisagrista.com »
...
Read the original on docs.google.com »
In November 2015, I started a blog on Blogger. My first post was a book review of The Martian by Andy Weir. 10 years and a couple of blog migrations later, I’m still writing. I wanted to share some thoughts and learnings I picked up throughout this time. Some of it is specific to writing a blog, but some is generally applicable to writing in any format.
One of the main reasons I maintain this blog is to become a better writer. I really appreciate when someone’s writing feels effortless. Whether it’s in a book, an article, or even a technical document—communicating effectively is a fine art. I’m not there yet, but I enjoy the process of improving.
My style has certainly improved since my early days of writing. Reading my old stuff is painful. I would use too many qualifiers and verbose phrases. It was a direct translation of the way I spoke, which, it turns out, is a bad strategy for writing. If your goal is to have other people read—and hopefully enjoy—your writing, you should make an effort to edit your thoughts.
Here’s a sample of the useless phrases I would add to the start or end of almost every sentence:
This was my worst habit when I started. It’s just fluff that makes it exhausting to read. It’s redundant to say “I think” at any point in an opinion piece.
keep all that pondering to yourself buddy
Using this “careful” language just softens your ideas to the point of being inarguable. If you start a sentence with “I feel…” then no one can dispute anything that follows, since it’s just your feeling. This is boring to read.
Writing a blog, or anything really, is your contribution to public discourse. Sure, this blog only averages 10 page views a week (9 are bots and 1 is me) but I’m still throwing my ideas out there into the digital ether. If you’re publishing something on the internet, you might as well stand tall behind your words and wait for someone to call bullshit.
Using multiple adjectives is another bad habit I struggled with in the past. Phrases like:
These are unnecessarily descriptive and, more often than not, redundant. Just use one punctilious adjective instead. Open a thesaurus if you need to.
My goal now is to use fewer words to convey an idea. Everyone’s interpretation of words is different, so using more precise language will just muddle your ideas. To use a metaphor from electronic communication—there’s so much noise in the channel that modulating your signal doesn’t provide any extra information.
The writing process should be highly iterative—many drafts are needed before you arrive at something you’re happy with. Taking time between drafts can help too, so you come back to it with a different perspective on what you wrote. If we’re talking about a blog, there’s really no strict timeline for getting a piece of content out, so when you choose to publish is up to you. Even after publishing, there’s nothing that stops you from updating the content afterwards.
You should write down ideas when you have them. Literally, I wrote the genesis of this paragraph while in bed at 5am in January. You never know when inspiration will strike, so I find it best to get the thought down quickly and then expand on it later.
It really helps to make the ability to write as accessible to you as possible. For example, I use Obsidian for all my drafts now. It has cross-device support with cloud syncing, so “writing from anywhere” (mostly my phone) is easy now.
I can now publish my smart toaster review directly from my smart toaster
There’s a lot of talk about the value of “manual” writing in the age of generative AI. GenAI, specifically Large Language Models, can be thought of as calculators for writing; they can generate coherent written ideas instantly from any input. So just like how nobody does long division by hand anymore, maybe people won’t do much writing by hand one day.
The introduction of GenAI has increased the surplus of written content to infinity, essentially. So from an economics standpoint, without any resource scarcity the value of written words has been reduced to zero. But is there still value in human produced writing? Subjectively, yes. Objectively? I’m not sure. I think there’s a lot of personal value in writing though.
Book reviews, for example, are essential for gaining a better understanding of what you read. They help crystallize the knowledge in some way and integrate it into your mental map of the world. The reviews I post vary in content—sometimes it’s a critique, or a summary, or an extrapolation of a concept from the book I’ll do additional research on. Either way, this process helps me remember something about the book long-term.
I think of it like breathing but for ideas. We do so much reading all day—there should be a natural balance with producing words too. Inhale, exhale, inhale, exhale…
And I’m still not a great writer by any means. There’s a lot of ways to improve, which is kind of motivating and excites me to keep writing.
I often write “too much” and struggle to really condense my thoughts into a sharpened essay. Most of my posts are 2000+ words…nowadays I’m trying to restrict myself to 1000 words. The limit forces me to really think about the core idea I want to share.
...
Read the original on flowtwo.io »
STARFlow-V is the first normalizing flow-based causal video generator demonstrating that normalizing flows can match video diffusion models in visual quality while offering end-to-end training, exact likelihood estimation, and native multi-task support across T2V/I2V/V2V generation.
Normalizing flows (NFs) are end-to-end likelihood-based generative models for continuous data, and have recently regained attention with encouraging progress on image generation. Yet in the video generation domain, where spatiotemporal complexity and computational cost are substantially higher, state-of-the-art systems almost exclusively rely on diffusion-based models. In this work, we revisit this design space by presenting STARFlow-V, a normalizing flow-based video generator with substantial benefits such as end-to-end learning, robust causal prediction, and native likelihood estimation. Building upon the recently proposed STARFlow, STARFlow-V operates in the spatiotemporal latent space with a global-local architecture which restricts causal dependencies to a global latent space while preserving rich local within-frame interactions. This eases error accumulation over time, a common pitfall of standard autoregressive diffusion model generation. Additionally, we propose flow-score matching, which equips the model with a light-weight causal denoiser to improve the video generation consistency in an autoregressive fashion. To improve the sampling efficiency, STARFlow-V employs a video-aware Jacobi iteration scheme that recasts inner updates as parallelizable iterations without breaking causality. Thanks to the invertible structure, the same model can natively support text-to-video, image-to-video as well as video-to-video generation tasks. Empirically, STARFlow-V achieves strong visual fidelity and temporal consistency with practical sampling throughput relative to diffusion-based baselines. These results present the first evidence, to our knowledge, that NFs are capable of high-quality autoregressive video generation, establishing them as a promising research direction for building world models.
Figure: STARFlow-V pipeline. The model processes text prompts and noise through a Deep Autoregressive Block (global temporal reasoning) to produce intermediate latents, which are then refined by Shallow Flow Blocks (local within-frame details). A Learnable Causal Denoiser (trained via Flow-Score Matching) cleans the output. The model is trained end-to-end with two objectives: Maximum Likelihood for the flow and Flow-Score Matching for the denoiser.
A novel two-level architecture that separates global temporal reasoning from local within-frame details. A deep causal Transformer block processes the video autoregressively in compressed latent space to capture long-range spatiotemporal dependencies, while shallow flow blocks operate independently on each frame to model rich local structures. This design mitigates compounding errors common in pixel-space autoregressive models.
A unified training framework that combines normalizing flow maximum likelihood with flow-score matching for denoising. Instead of using imperfect or non-causal denoisers, we train a lightweight causal neural denoiser alongside the main flow model. This denoiser learns to predict the score (gradient of log-probability) of the model’s own distribution, enabling high-quality single-step refinement while preserving causality.
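As background, both training signals are standard quantities. For an invertible map $f_\theta$ with base density $p_Z$, the exact log-likelihood follows from the change-of-variables formula, and flow-score matching targets the score of the model’s own density; in generic form (our notation, not the paper’s exact losses):

\[
\log p_\theta(x) = \log p_Z\!\big(f_\theta(x)\big) + \log\left|\det \frac{\partial f_\theta(x)}{\partial x}\right|,
\qquad
s_\theta(x) = \nabla_x \log p_\theta(x).
\]

The denoiser $s_\phi$ can then be trained with a denoising score-matching style objective, e.g. $\mathbb{E}_{x,\tilde{x}}\big[\|s_\phi(\tilde{x}) - \nabla_{\tilde{x}} \log q(\tilde{x}\mid x)\|^2\big]$ for a noising kernel $q(\tilde{x}\mid x)$.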
Generation (flow inversion) is recast as solving a nonlinear system, enabling block-wise parallel updates of multiple latents simultaneously instead of one-by-one generation. Combined with video-aware initialization that uses temporal information from adjacent frames and pipelined execution between deep and shallow blocks, this achieves significant speedup while maintaining generation quality.
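Concretely, a Jacobi-style scheme updates every latent block from the previous iterate at once instead of sweeping sequentially; as a generic illustration (not the paper’s exact update rule):

\[
z_t^{(k+1)} = g_t\!\big(z_1^{(k)}, \dots, z_T^{(k)}\big), \qquad t = 1, \dots, T,
\]

so all $T$ blocks are computed in parallel within an iteration, and the loop stops once successive iterates agree to within a tolerance.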
STARFlow-V is trained on 70M text-video pairs and 400M text-image pairs, with a final 7B parameter model that can generate 480p video at 16fps. The model operates in a compressed latent space and leverages the invertible nature of normalizing flows to natively support multiple generation tasks without any architectural changes or retraining.
Navigate through the tabs above to see our model’s capabilities across different generation tasks. Each category demonstrates specific aspects of STARFlow-V, from standard text-to-video generation to long-form video creation and comparisons with diffusion-based baselines.
If you find STARFlow-V useful in your research, please consider citing our work:
@article{gu2025starflowv,
  title={STARFlow-V: End-to-End Video Generative Modeling with Scalable Normalizing Flows},
  author={Gu, Jiatao and Shen, Ying and Chen, Tianrong and Dinh, Laurent and Wang, Yuyang and Bautista, Miguel {\'A}ngel and Berthelot, David and Susskind, Josh and Zhai, Shuangfei},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
Generate videos from input images while maintaining temporal consistency. Due to the autoregressive nature of our model, we don’t need to change the architecture at all—one model handles all tasks seamlessly.
Our model can extend and transform existing videos while maintaining temporal consistency. Due to the autoregressive nature of our model, we don’t need to change the architecture at all—one model handles all tasks seamlessly.
Extended video generation (10s, 15s, 30s) using autoregressive segment-by-segment generation. The tail of each 5s segment is re-encoded as the prefix for the next segment, leveraging the invertibility of normalizing flows.
Side-by-side comparisons with baseline Autoregressive diffusion models. All prompts are sampled from VBench (Huang, 2023). Each video shows three methods from left to right: NOVA (https://github.com/baaivision/NOVA), WAN-Causal (finetuned from WAN provided by https://huggingface.co/gdhe17/Self-Forcing/blob/main/checkpoints/ode_init.pt), and STARFlow-V (Ours).
Examples where our model struggles or produces suboptimal results, particularly on complex motion and physical interactions. These limitations stem from: (1) insufficient training due to resource constraints, (2) low-quality training data, and (3) the absence of post-training refinement—we perform only pretraining without supervised fine-tuning (SFT) or reinforcement learning (RL).
...
Read the original on starflow-v.github.io »
Instagram chief Adam Mosseri is ordering most US staff in his organization back to the office five days a week starting February 2, according to an internal memo obtained by Business Insider.
The memo, titled “Building a Winning Culture in 2026,” says the change applies to employees in US offices with assigned desks and is part of a broader push to make Instagram “more nimble and creative” as competition intensifies.
“I believe that we are more creative and collaborative when we are together in-person,” Mosseri wrote. “I felt this pre-COVID and I feel it any time I go to our New York office where the in-person culture is strong.”
Earlier this year, Amazon told many corporate employees to return to the office five days a week. Other tech giants such as Alphabet, Apple, and Microsoft have taken a slightly softer approach, generally requiring staff to be in the office at least three days a week.
The memo, first reported by Alex Heath’s Sources newsletter, also announced a slew of other changes. Recurring meetings will be canceled every six months and only re-added if “absolutely necessary.” Employees are encouraged to decline meetings that interfere with focus time.
“I want most of your time focused on building great products, not preparing for meetings,” Mosseri wrote.
The Instagram chief also called for more product prototypes than slide decks.
“Prototypes allow us to establish a proof of concept and get a real sense for social dynamics, and we use them far too infrequently,” Mosseri wrote.
“2026 is going to be tough, as was 2025, but I’m excited about our momentum and our plans for next year,” Mosseri wrote. “These changes are going to meaningfully help us move Instagram forward in a way we can all be proud of — with creativity, boldness, and craft.”
We’ve made good progress this year on Instagram standing for creativity and Threads standing for perspectives, but we still need to do more if we want to lead in both of these areas. A big part of this will come down to strategy, and I feel good about the plan we’ve put together for next half. Equally important is how well we work. I’ve been thinking a lot about how we can be more nimble and creative in order to stay competitive. It’s clear we have to evolve, so we’re going to make a series of changes next year:
1. Back to the office: I believe that we are more creative and collaborative when we are together in-person. I felt this pre-COVID and I feel it any time I go to our New York office where the in-person culture is strong.
Starting February 2, I’m asking everyone in my rollup based in a US office with assigned desks to come back full time (five days a week). The specifics:
* You’ll still have the flexibility to work from home when you need to, since I recognize there will be times you won’t be able to come into the office. I trust you all to use your best judgment in figuring out how to adapt to this schedule.
* In the NY office, we won’t expect you to come back full time until we’ve alleviated the space constraints. We’ll share more once we have a better sense of timeline.
* In MPK, we’ll move from MPK21 to MPK22 on January 26 so everyone has an assigned desk. We’re also offering the option to transfer from the MPK to SF office for those people whose commute would be the same or better with that change. We’ll reach out directly to those people with more info.
* XFN partners will continue to follow their own org norms.
* There is no change for employees who are currently remote.
2. Fewer meetings: We all spend too much time in meetings that are not effective, and it’s slowing us down. Every six months, we’ll cancel all recurring meetings and only re-add the ones that are absolutely necessary. I also support everyone in making recurring 1:1s biweekly by default and declining meetings if they fall during your focus blocks.
3. More demos, less decks: Most product overviews should be prototypes instead of decks. Prototypes allow us to establish a proof of concept and get a real sense for social dynamics, and we use them far too infrequently. If a strategy doc is appropriate, it should be three pages, max, and follow this template. If a deck is necessary, it should be as tight as possible. For all reviews, make it very clear up front what the goal of the meeting is and what the key points are that you need to discuss. I want most of your time focused on building great products, not preparing for meetings.
4. Faster decision-making: We’re going to have a more formalized unblocking process with DRIs, and I’ll be at the priorities progress unblocking meeting every week. (On weeks where I’m not able to attend, I’ll delegate decision-making to one of my directs.) This way open decisions don’t sit for more than a few days, max.
At next week’s All Hands, I’ll talk more about these changes, and you’ll hear from people around the team about our priorities for next year. 2026 is going to be tough, as was 2025, but I’m excited about our momentum and our plans for next year. These changes are going to meaningfully help us move Instagram forward in a way we can all be proud of — with creativity, boldness, and craft.
Have a tip? Contact Pranav Dixit via email at pranavdixit@protonmail.com or Signal at 1-408-905-9124. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.
...
Read the original on www.businessinsider.com »
Most metros are adding jobs more slowly than normal. Charlotte leads in job growth among major metros, while Austin and Denver fall far short of their historically strong pace.
High-income sectors are contracting, while Education and Healthcare are expanding faster than normal across most metros.
Employment composition matters as much as total growth for local housing market strength. Metros reliant on lower-wage job growth are likely to face softer for-sale demand.
The national labor market is softening, with implications for local housing markets. Most major metros are adding jobs more slowly than normal. We analyzed employment performance by metro and industry, comparing today’s growth to long-term trends since 2010. Red represents job losses, yellow shows slower-than-normal growth, and green represents faster-than-normal growth.
The job market drives housing demand, but the type of jobs created or lost impacts the type of housing. High-income sectors—Information, Professional Services, and Financial Activities—are shrinking across most major metros. Workers in these industries drive for-sale housing demand more than rental demand. Nationally, high-income sector employment remained flat YOY in August, well below its long-term compound annual growth of +1.6%.
The Education and Healthcare sectors account for the bulk of new jobs added in most metros and are growing faster than normal in almost every market. Many of these jobs pay lower wages on average and often generate rental demand more than homebuying activity. Nationally, education and healthcare employment rose +3.3% YOY in August, well above its long-term compound annual growth of +2.1%.
Philadelphia (+1.8% YOY) and New York (+1.7% YOY) show stronger job growth than their historical trends (+1.1% and +1.6%, respectively). However, this improvement reflects recovery from weak post-Great Financial Crisis baselines rather than genuine outperformance. Charlotte (+2.6% YOY) is a standout performer, maintaining robust job growth supported by Professional Services expansion (+4.5% YOY)—a rare bright spot for for-sale demand.
Austin (+0.8% YOY) and Denver (+0.0% YOY) are growing much more slowly than their historically strong employment trends (+3.8% and +2.3%, respectively). Tech and Professional Services jobs are declining in both markets, and even healthcare—which is expanding faster than normal in most metros—shows weak growth here. This reduction in high-paying jobs is weakening demand for both home purchases and rentals.
The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn’t added meaningful employment to the region.
What this means for your business
Whether you build, invest, or advise in housing markets, these employment shifts will impact your growth opportunities in 2026 and beyond:
Rental operators: Prepare for sustained demand from renters employed in healthcare and education.
Our Metro and Regional Housing research package includes analysis of the latest demand, supply, and affordability fundamentals for each metro and region as well as results from our proprietary surveys. Our consulting team continually evaluates market feasibility, absorption/pricing/product recommendations, and overall investment/expansion strategy in markets nationwide. Combining these two areas of expertise yields qualitative and quantitative insight for more intelligent decision-making.
This package provides a complete picture of housing supply, demand, and affordability through local insight, proprietary surveys, and extensive data analysis. We currently provide an overview of major housing and economic trends across 100 MSAs nationwide.
Our research services enable our clients to gauge housing market conditions and better align their business and strategic investments in the housing industry. We provide a thoughtful and unique holistic approach of both quantitative and qualitative analysis to help clients make informed housing investment decisions.
Our experienced team of consultants helps clients make sound housing investment decisions. We thrive on their success and work with many clients over multiple years and numerous projects.
John leads JBREC’s Southern California market coverage for the Metro Analysis and Forecast reports, produces the Regional Analysis and Forecast and Homebuilder Analysis and Forecast reports, and assists with coverage of the public homebuilder space.
If you have any questions about our services or if you would like to speak to one of our experts about how we can help your business, please contact Client Relations at clientservices@jbrec.com.
...
Read the original on jbrec.com »
In the last week we’ve had three major model updates: Gemini 3 Pro, Codex Max 5.1, Claude Opus 4.5. We thought we’d give them a challenge:
Build a basic version of Counter Strike. The game had to have a 3D UI and it had to be multiplayer.
If you’re curious, pop open an (ideally large) computer screen and try out each model’s handiwork yourself:
We have a full video of us going through the build here, but for those who prefer text, you get this post.
We’ll go over some of our high-level impressions on each model, then dive deeper into the performance of specific prompts.
We signed up for the highest-tier plan on each model provider and used the defaults set for their CLI. For Codex, that’s 5.1 codex-max on the medium setting. For Claude it’s Opus 4.5. And with Gemini it’s 3 pro.
We then gave each model about 7 consecutive prompts. Prompts were divided into two categories:
Frontend: At first, agents only had to worry about the game mechanics: design the scene, the enemies, the logic for shooting, and some sound effects.
Backend: Once that was done, agents would make the game multiplayer. They would need to build a selection of rooms that users could join and start shooting.
So, how’d each model do?
In a familiar tune with the other Anthropic models, Opus 4.5 won out on the frontend. It made nicer maps, nicer characters, nicer guns, and generally had the right scene from the get-go.
Once the design was done, Gemini 3 Pro started to win in the backend. It made fewer errors adding multiplayer and persistence. In general, Gemini did the best with making logical rather than visual changes.
Codex Max felt like an “in-between” model on both frontend and backend. It got a lot of “2nd place” points in our book. It did reasonably well on the frontend and reasonably well on the backend, but felt less spikey than the other models.
Okay, now let’s get deeper into each prompt.
Goal number 1 was to set up the physics for the game. Models needed to design a map with a first-person viewpoint, and the ability to shoot enemies.
I want you to create a browser-based version of counter strike, using three js.
For now, just make this local: don’t worry about backends, Instant, or
anything like that.
For the first version, just make the main character a first-person view with
a cross hair. Put enemies at random places. Enemies have HP. You can
shoot them, and kill them. When an enemy is killed, they respawn.
Here’s a side-by-side comparison of the visuals each model came up with:
Visually Claude came up with the most interesting map. There were obstacles, a nice floor, and you could see everything well.
Gemini got something nice working too.
Codex had an error on its first run [1] (it called a function without importing it), but it fixed it real quick. Once bugs were fixed, its map was the least visually pleasing. Things were darker, there were no obstacles, and it was hard to make out the floor.
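For reference, the shooting mechanic all three models converged on boils down to raycasting from the crosshair. Here’s a minimal three.js sketch of the idea (our illustration, not any model’s actual code; the damage and respawn values are made up):

import * as THREE from 'three';

// Cast a ray from the screen centre (the crosshair) and damage the first enemy hit.
const raycaster = new THREE.Raycaster();
const crosshair = new THREE.Vector2(0, 0); // centre of the screen in NDC

function shoot(camera: THREE.Camera, enemies: THREE.Object3D[]) {
  raycaster.setFromCamera(crosshair, camera);
  const hits = raycaster.intersectObjects(enemies, true);
  if (hits.length === 0) return;

  const enemy = hits[0].object;
  enemy.userData.hp = (enemy.userData.hp ?? 100) - 25; // made-up damage value
  if (enemy.userData.hp <= 0) {
    // Respawn: reset HP and move to a random spot on the map.
    enemy.userData.hp = 100;
    enemy.position.set((Math.random() - 0.5) * 40, 0, (Math.random() - 0.5) * 40);
  }
}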
Now that we had a map and some polygons, we asked the models to style up the characters. This was our prompt:
I want you to make the enemies look more like people. Use a bunch of square polygons to represent a person, and maybe a little gun
Here’s the result of their work:
Again it feels like Claude did the best job here. The characters look quite human — almost at the level of design in Minecraft. Gemini did well too. Codex made its characters better, but everything was a single color, which really diminished it compared to the others.
We then asked each model to add a gun to our first-person view. When we shoot, we wanted a recoil animation.
I want you to make it so I also have a gun in my field of view. When I shoot, the gun moves a bit.
Here’s the side-by-side of how the recoil felt for each model:
Here both Claude and Codex got the gun working in one shot. Claude’s gun looks like a real darn pistol though.
Gemini had an issue trying to stick the gun to the camera. This got us in quite a back and forth, until we realized that the gun was transparent.
We were almost done with the frontend: the final step was sound. Here’s what we asked:
I want you to use chiptunes to animate the sound of shots. I also want to animate deaths.
All models added sounds pretty easily. The ending part of our prompt, “I also want to animate deaths.”, was added on the spur of the moment in the video. Our intention was to add sound to deaths. But that’s not what happened.
All 3 models misunderstood the sentence in the same way: they thought we wanted to animate how the characters died. Fair enough, re-reading the sentence again, we would understand it that way too.
Here’s the results they came up with:
All the models got the sound done easily. They all got animations, but we thought Claude’s animation felt the most fun.
Now that all models had a real frontend, we asked them to make it multiplayer.
We didn’t want the models to worry about shots just yet: goal 1 was to share the movement positions. Here’s what we asked it to do:
I want you to use Instant presence.
Don’t save anything in the database, just use presence and topics. You can
look up the docs.
There should just be one single room.
You no longer need to have the enemies that are randomly placed. All the players are what get placed.
For now, don’t worry about shots. Let’s just make it so the positions of the players are what get set in presence.
Gemini got this right in one shot. Both Codex and Claude needed some more prodding.
It was interesting to see how each model tried to solve problems:
Codex used lots of introspection. It would constantly look at the TypeScript library and check which functions were available. It didn’t seem to look at the docs as much.
Claude looked at the docs a bunch. It read and re-read our docs on presence, but rarely introspected the library like Codex did.
Gemini seemed to do both. It looked at the docs, but then, I think because it constantly ran the build step, it found any TypeScript errors it had and fixed them up.
Gemini made the fastest progress here, though all of them got through, as long as we pasted the errors back.
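For context, the presence wiring looks roughly like this with Instant’s React SDK (a simplified sketch, not the code any model produced; the room name and position fields are illustrative, and the exact hook names are worth checking against the current presence docs):

import * as React from 'react';
import { init } from '@instantdb/react';

// Sketch only: replace YOUR_APP_ID with a real app id.
const db = init({ appId: 'YOUR_APP_ID' });
const room = db.room('game', 'main');

function usePlayerPresence(position: { x: number; y: number; z: number }) {
  // publishPresence shares this client's state; peers holds everyone else's.
  const { peers, publishPresence } = db.rooms.usePresence(room);

  React.useEffect(() => {
    publishPresence(position); // re-publish whenever our position changes
  }, [position.x, position.y, position.z]);

  return peers; // render one player mesh per peer id
}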
Then we moved to getting shots to work. Here was the prompt:
Now let’s make shots work. When I shoot, send the shot as a topic, and
make it affect the target’s HP. When the target HP goes to zero, they should die and respawn.
Claude got this right in one shot. Gemini and Codex had a few issues to fix, but just pasting the errors got them through.
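Shot handling is similar, but goes through topics instead of presence so the events are fire-and-forget (again a simplified sketch; hook names per Instant’s topics docs, while the payload shape and applyDamage callback are made up):

import { init } from '@instantdb/react';

// Sketch only: hook names per Instant's topics docs; payload and applyDamage
// are illustrative.
const db = init({ appId: 'YOUR_APP_ID' });
const room = db.room('game', 'main');

type Shot = { targetId: string; damage: number };

function useShots(applyDamage: (targetId: string, damage: number) => void) {
  // Fire-and-forget event to all peers in the room.
  const publishShot = db.rooms.usePublishTopic(room, 'shots');

  // Handle shots published by peers.
  db.rooms.useTopicEffect(room, 'shots', (shot: Shot) => {
    applyDamage(shot.targetId, shot.damage);
  });

  return publishShot; // call publishShot({ targetId, damage: 25 }) when firing
}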
Now that all models had a single room working, it was time to get them supporting multiple rooms.
The reason we added this challenge, was to see (a) how they would deal with a new API (persistence), and (b) how they would deal with the refactor necessary for multiple rooms.
So, now I want you to make it so the front page is actually a list of
maps. Since our UI is using lots of polygons, make the style kind of
polygonish
Make the UI look like the old counter strike map selection screen.
I want you to save these maps in the database. Each map has a name.
Use a script to generate 5 random maps with cool names.
Then, push up some permissions so that anyone can view maps, but they cannot
create or edit them.
When you join a map, you can just use the map id as the room id for
presence.
All models did great with the UI. Here’s how each looked:
We kind of like Gemini’s UI the most, but they were all pretty cool.
And the persistence worked well too. They all dutifully created schema for maps, pushed a migration, and seeded 5 maps.
But things got complicated in the refactor.
Gemini got things done in one shot. It also chose to keep the map id in the URL, which made it much handier to use. Codex took one back and forth with a query error.
But Claude really got stuck. The culprit was hooks. Because useEffect can run multiple times, it ended up having a few very subtle bugs. For example, it made 2 canvas objects instead of 1. It also had multiple animation refs running at once.
It was hard to get it to fix things by itself. We had to put our engineer hats on and actually look at the code to unblock Claude here.
This did give us a few ideas though:
Claude’s issues were human-like. How many of us get tripped up with useEffect running twice, or getting dependency arrays wrong? I think improving the React DX on these two issues could really push humans and agents further.
And what would have happened if a non-programmer was building this? They would have gotten really stuck. We think there needs to be more tooling to go from “strictly vibe coding” to “real programming”. Right now the jump feels too steep.
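To make the failure mode concrete: a three.js setup effect without a cleanup runs twice under React StrictMode, leaving two canvases and two animation loops behind. A minimal sketch of the pattern that avoids it (our illustration, not Claude’s actual code):

import { useEffect, useRef } from 'react';
import * as THREE from 'three';

function GameCanvas() {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const container = containerRef.current;
    if (!container) return;

    // Set up the renderer and animation loop once per mount.
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    container.appendChild(renderer.domElement);

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      75, window.innerWidth / window.innerHeight, 0.1, 1000
    );

    let frame = 0;
    const animate = () => {
      frame = requestAnimationFrame(animate);
      renderer.render(scene, camera);
    };
    animate();

    // Without this cleanup, StrictMode's double invocation leaves a second
    // canvas and a second animation loop running.
    return () => {
      cancelAnimationFrame(frame);
      container.removeChild(renderer.domElement);
      renderer.dispose();
    };
  }, []);

  return <div ref={containerRef} />;
}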
At the end, all models built a real multiplayer FPS, with zero code written by hand! That’s pretty darn cool.
Well, models have definitely improved. They can take much higher-level feedback, and much higher-level documentation. What really strikes us though is how much they can iterate on their own work thanks to the CLI.
There’s still lots to go though. The promise that you never have to look at the code doesn’t quite feel real yet.
...
Read the original on www.instantdb.com »