10 interesting stories served every morning and every evening.
Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.
Install Ghostty and run!
Zero configuration required to get up and running.
Ready-to-run binaries for macOS. Packages or build from source for Linux.
...
Read the original on ghostty.org »
A satirical (but real!) demo of what AI chat could look like in an ad-supported future. Chat with an AI while experiencing every monetization pattern imaginable — banners, interstitials, sponsored responses, freemium gates, and more.
Sample ad copy rendered in the demo:

- “Join 2 million professionals who think faster, focus better, and accomplish more. AI-powered goal tracking, habit building, and memory enhancement. First 30 days FREE!”
- “Think 10x Faster with AI. First Month FREE! 🧠”
- “Fun fact: Just 10 minutes of daily meditation reduces stress by 35%. Start your free ZenFocus journey today!”
- “Your AI assistant, proudly powered by the finest advertising money can buy 💸”
- “⚠️ Warning: This AI may spontaneously recommend products at any time”
- “🏷️ This conversation is proudly powered by BrainBoost Pro™ • Ad-supported free tier • Remove ads”
- “Stressed by all these ads? 10 minutes of AI-guided meditation changes everything.”
- “AI-curated meal prep kits delivered weekly. $30 off your first box!”
- “🎨 Today’s chat theme sponsored by BrainBoost Pro • Colors, fonts, and vibes curated by our advertising team”
This tool is a satirical but fully functional demonstration of what AI chat assistants could look like if they were monetized through advertising — similar to how free apps, websites, and streaming services fund themselves today. As AI chat becomes mainstream, companies face a fundamental question: how do you make it free for users while covering the significant compute costs? Advertising is one obvious answer — and this demo shows every major ad pattern that could be applied to a chat interface.

We built this as an educational tool to help marketers, product managers, and developers understand the landscape of AI monetization, and to give users a glimpse of the future they might want to avoid (or embrace, depending on your perspective).
This demo covers the full spectrum of advertising patterns that could appear in an AI chat product.
This tool is educational and useful for a wide range of professionals thinking about the future of AI products.
Are the ads in this demo real?
No — all brands and ads are completely fictional and created for this demo. BrainBoost Pro, QuickLearn Academy, ZenFocus, TaskMaster AI, ReadyMeal, and all other brands are made up. No actual advertising revenue is being generated.

Does this show what AI chat will actually look like?
It shows one possible future. Some ad-supported AI products already exist and use several of these patterns. Others are speculative. The goal is to make these possibilities concrete and tangible so people can have informed conversations about what kind of AI future they want.

Is the AI actually working or is everything scripted?
The AI is real — your messages are processed by a live language model and you get genuine responses. The ads are the scripted part. Some AI responses will include sponsored product mentions as part of the demonstration.

What happens to my chat data?
Like all our free tools, conversations are logged to improve the service. We do not sell this data to advertisers — this is a demo, not an actual ad network.

How does the freemium gate work?
After 5 free messages, you can either ‘watch an ad’ (a simulated 5-second countdown) to unlock 5 more messages, or you can upgrade to our actual ad-free service. This mirrors how real freemium products work.
All of our tools are genuinely free — no ads, no paywalls, no sponsored responses. Just AI that works.
Build Your Own AI Chatbot — No Ads Required

Now that you’ve seen what ad-supported AI looks like, imagine giving your customers a clean, focused AI experience with zero interruptions. With 99helpers, you can deploy an AI chatbot trained on your content in minutes. No credit card required • Setup in minutes • No ads, ever
...
Read the original on 99helpers.com »
Motorola, a Lenovo Company, announced the addition of new consumer and enterprise solutions to its portfolio today at Mobile World Congress. The company unveiled a partnership with the GrapheneOS Foundation to bring cutting-edge security to everyday users across the globe. In addition, Motorola introduced a new Moto Secure feature and Moto Analytics to expand its B2B ecosystem with advanced security and deeper operational insights for organizations across industries. These announcements reinforce Motorola’s commitment to delivering intelligent and highly capable technology with enhanced security for customers worldwide.
GrapheneOS Foundation Partnership
Motorola is introducing a new era of smartphone security through a long‑term partnership with the GrapheneOS Foundation, the leading nonprofit in advanced mobile security and creator of a hardened operating system based on the Android Open Source Project. Together, Motorola and the GrapheneOS Foundation will work to strengthen smartphone security and collaborate on future devices engineered with GrapheneOS compatibility.
“We are thrilled to be partnering with Motorola to bring GrapheneOS’s industry‑leading privacy and security‑focused mobile operating system to their next-generation smartphone,” said a spokesperson at GrapheneOS. “This collaboration marks a significant milestone in expanding the reach of GrapheneOS, and we applaud Motorola for taking this meaningful step towards advancing mobile security.”
By combining GrapheneOS’s pioneering engineering with Motorola’s decades of security expertise, real‑world user insights, and Lenovo’s ThinkShield solutions, the collaboration will advance a new generation of privacy and security technologies. In the coming months, Motorola and the GrapheneOS Foundation will continue to collaborate on joint research, software enhancements, and new security capabilities, with more details and solutions to roll out as the partnership evolves.
Moto Analytics
Today, Motorola also introduced Moto Analytics, an enterprise‑grade analytics platform designed to give IT administrators real‑time visibility into device performance across their fleet. Unlike traditional EMM tools that focus primarily on access control, Moto Analytics provides deep operational insights, from app stability to battery health and connectivity performance.
With this data, IT teams can troubleshoot more efficiently, prevent issues before they escalate, and maintain employee productivity. As part of the ThinkShield ecosystem, Moto Analytics integrates seamlessly with existing enterprise environments and scales effortlessly as organizations grow.
Private Image Data
Motorola is also expanding its Moto Secure platform with a new feature, Private Image Data. This tool gives users greater control over the hidden data stored in their photos. When enabled, it automatically removes sensitive metadata from all new camera images on the device, helping protect details like location and device information. This protection runs quietly in the background, preserving the image itself while clearing some of the private data attached to it.
Private Image Data joins a growing set of protections within the Moto Secure app, Motorola’s central hub for essential privacy and security tools powered by ThinkShield. From managing app permissions to securing sensitive files and monitoring device integrity, Moto Secure brings key Android and Motorola safeguards together in one place, making it easier for users to understand and manage their device’s security.
Private Image Data will begin rolling out to motorola signature devices in the coming months, with additional updates and refinements expected over time.
With the introduction of these new solutions, Motorola is expanding its enterprise portfolio with offerings built for today’s most demanding business environments. From advanced security to operational efficiency and intelligent device management, these innovations reflect Motorola’s commitment to empowering organizations with technology that is security-focused, reliable, and ready for the future.
Legal Disclaimers
Certain features, functionality, and product specifications may be network-dependent and subject to additional terms, conditions, and charges. All are subject to change without notice. MOTOROLA, the Stylized M Logo, MOTO, and the MOTO family of marks are trademarks of Motorola Trademark Holdings, LLC. LENOVO and THINKSHIELD are trademarks of Lenovo. Android is a trademark of Google, LLC. All other trademarks are the property of their respective owners. ©2026 Motorola Mobility LLC. All rights reserved.
...
Read the original on motorolanews.com »
Yes, writing code is easier than ever.
AI assistants autocomplete your functions. Agents scaffold entire features. You can describe what you want in plain English and watch working code appear in seconds. The barrier to producing code has never been lower.
And yet, the day-to-day life of software engineers has gotten more complex, more demanding, and more exhausting than it was two years ago.
This is not a contradiction. It is the reality of what happens when an industry adopts a powerful new tool without pausing to consider the second-order effects on the people using it.
If you are a software engineer reading this and feeling like your job quietly became harder while everyone around you celebrates how easy everything is now, you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.
There is a phenomenon happening right now that most engineers feel but struggle to articulate. The expected output of a software engineer in 2026 is dramatically higher than it was in 2023. Not because anyone held a meeting and announced new targets. Not because your manager sat you down and explained the new rules. The baseline just moved.
It moved because AI tools made certain tasks faster. And when tasks become faster, the assumption follows immediately: you should be doing more. Not in the future. Now.
A February 2026 study published in Harvard Business Review tracked 200 employees at a U.S. tech company over eight months. The researchers found something that will sound familiar to anyone living through this shift. Workers did not use AI to finish earlier and go home. They used it to do more. They took on broader tasks, worked at a faster pace, and extended their hours, often without anyone asking them to. The researchers described a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed. Higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted. And a wider scope further expanded the quantity and density of work.
The numbers tell the rest of the story. Eighty-three percent of workers in the study said AI increased their workload. Burnout was reported by 62 percent of associates and 61 percent of entry-level workers. Among C-suite leaders? Just 38 percent. The people doing the actual work are carrying the intensity. The people setting the expectations are not feeling it the same way.
This gap matters enormously. If leadership believes AI is making everything easier while engineers are drowning in a new kind of complexity, the result is a slow erosion of trust, morale, and eventually talent.
A separate survey of over 600 engineering professionals found that nearly two-thirds of engineers experience burnout despite their organizations using AI in development. Forty-three percent said leadership was out of touch with team challenges. Over a third reported that productivity had actually decreased over the past year, even as their companies invested more in AI tooling.
The baseline moved. The expectations rose. And for many engineers, no one acknowledged that the job they signed up for had fundamentally changed.
Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Now they are being told to stop.
Not explicitly, of course. Nobody walks into a standup and says “stop writing code.” But the message is there, subtle and persistent. Use AI to write it faster. Let the agent handle the implementation. Focus on higher-level tasks. Your value is not in the code you write anymore, it is in how well you direct the systems that write it for you.
For early adopters, this feels exciting. It feels like evolution. For a significant portion of working engineers, it feels like being told that the thing they spent years mastering, the skill that defines their professional identity, is suddenly less important.
One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer. Every day felt like being a judge on an assembly line that never stops. You just keep stamping those pull requests. The production volume went up. The sense of craftsmanship went down.
This is not a minor adjustment. It is a fundamental shift in professional identity. Engineers who built their careers around deep technical skill are being asked to redefine what they do and who they are, essentially overnight, without any transition period, training, or acknowledgment that something significant was lost in the process.
Having led engineering teams for over two decades, I have seen technology shifts before. New frameworks, new languages, new methodologies. Engineers adapt. They always have. But this is different because it is not asking engineers to learn a new way of doing what they do. It is asking them to stop doing the thing that made them engineers in the first place and become something else entirely.
That is not an upgrade. That is a career identity crisis. And pretending it is not happening does not make it go away.
While engineers are being asked to write less code, they are simultaneously being asked to do more of everything else.
More product thinking. More architectural decision-making. More code review. More context switching. More planning. More testing oversight. More deployment awareness. More risk assessment.
The scope of what it means to be a “software engineer” expanded dramatically in the last two years, and it happened without a pause to catch up.
This is partly a direct consequence of AI acceleration. When code gets produced faster, the bottleneck shifts. It moves from implementation to everything surrounding implementation: requirements clarity, architecture decisions, integration testing, deployment strategy, monitoring, and maintenance. These were always part of the engineering lifecycle, but they were distributed across roles. Product managers handled requirements. QA handled testing. DevOps handled deployment. Senior architects handled system design.
Now, with AI collapsing the implementation phase, organizations are quietly redistributing those responsibilities to the engineers themselves. The Harvard Business Review study documented this exact pattern. Product managers began writing code. Engineers took on product work. Researchers started doing engineering tasks. Roles that once had clear boundaries blurred as workers used AI to handle jobs that previously sat outside their remit.
The industry is openly talking about this as a positive development. Engineers should be “T-shaped” or “full-stack” in a broader sense. Nearly 45 percent of engineering roles now expect proficiency across multiple domains. AI tools augment generalists more effectively, making it easier for one person to handle multiple components of a system.
On paper, this sounds empowering. In practice, it means that a mid-level backend engineer is now expected to understand product strategy, review AI-generated frontend code they did not write, think about deployment infrastructure, consider security implications of code they cannot fully trace, and maintain a big-picture architectural awareness that used to be someone else’s job.
That is not empowerment. That is scope creep without a corresponding increase in compensation, authority, or time.
From my experience building and scaling teams in fintech and high-traffic platforms, I can tell you that role expansion without clear boundaries always leads to the same outcome: people try to do everything, nothing gets done with the depth it requires, and burnout follows. The engineers who survive are the ones who learn to say no, to prioritize ruthlessly, and to push back when the scope of their role quietly doubles without anyone acknowledging it.
There is an irony at the center of the AI-assisted engineering workflow that nobody wants to talk about: reviewing AI-generated code is often harder than writing the code yourself.
When you write code, you carry the context of every decision in your head. You know why you chose this data structure, why you handled this edge case, why you structured the module this way. The code is an expression of your thinking, and reviewing it later is straightforward because the reasoning is already stored in your memory.
When AI writes code, you inherit the output without the reasoning. You see the code, but you do not see the decisions. You do not know what tradeoffs were made, what assumptions were baked in, what edge cases were considered or ignored. You are reviewing someone else’s work, except that someone is not a colleague you can ask questions. It is a statistical model that produces plausible-looking code without any understanding of your system’s specific constraints.
A survey by Harness found that 67 percent of developers reported spending more time debugging AI-generated code, and 68 percent spent more time reviewing it than they did with human-written code. This is not a failure of the tools. It is a structural property of the workflow. Code review without shared context is inherently more demanding than reviewing code you participated in creating.
Yet the expectation from management is that AI should be making everything faster. So engineers find themselves in a bind: they are producing more code than ever, but the quality assurance burden has increased, the context-per-line-of-code has decreased, and the cognitive load of maintaining a system they only partially built is growing with every sprint.
This is the supervision paradox. The faster AI generates code, the more human attention is required to ensure that code actually works in the context of a real system with real users and real business constraints. The production bottleneck did not disappear. It moved from writing to understanding, and understanding is harder to speed up.
What makes all of this especially difficult is the self-reinforcing nature of the cycle.
AI makes certain tasks faster. Faster tasks create the perception of more available capacity. More perceived capacity leads to more work being assigned. More work leads to more AI reliance. More AI reliance leads to more code that needs review, more context that needs to be maintained, more systems that need to be understood, and more cognitive load on engineers who are already stretched thin.
The Harvard Business Review researchers described this as “workload creep.” Workers did not consciously decide to work harder. The expansion happened naturally, almost invisibly. Each individual step felt reasonable. In aggregate, it produced an unsustainable pace.
Before AI, there was a natural ceiling on how much you could produce in a day. That ceiling was set by thinking speed, typing speed, and the time it takes to look things up. It was frustrating sometimes, but it was also a governor. A natural speed limit that prevented you from outrunning your own ability to maintain quality.
AI removed the governor. Now the only limit is your cognitive endurance. And most people do not know their cognitive limits until they have already blown past them.
This is where many engineers find themselves right now. Shipping more code than any quarter in their career. Feeling more drained than any quarter in their career. The two facts are not unrelated.
The trap is that it looks like productivity from the outside. Metrics go up. Velocity charts look great. More features shipped. More pull requests merged. But underneath the numbers, quality is quietly eroding, technical debt is accumulating faster than it can be addressed, and the people doing the work are running on fumes.
If the picture is difficult for experienced engineers, it is even harder for those starting their careers.
Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.
AI is rapidly consuming that training ground. If an agent can handle the routine API hookup, the boilerplate module, the straightforward CRUD endpoint, what is left for a junior engineer to learn from? The expectation is shifting toward needing to contribute at a higher level almost from day one, without the gradual ramp-up that previous generations of engineers relied on.
Entry-level hiring at the 15 largest tech firms fell 25 percent from 2023 to 2024. The HackerRank 2025 Developer Skills Report confirmed that expectations are rising faster than productivity gains, and that early-career hiring remains sluggish compared to senior-level roles. Companies are prioritizing experienced talent, but the pipeline that produces experienced talent is being quietly dismantled.
This is a problem that extends beyond individual career concerns. If junior engineers do not get the opportunity to build foundational skills through hands-on work, the industry will eventually face a shortage of senior engineers who truly understand the systems they oversee. You cannot supervise what you never learned to build.
As I have written before, code is for humans to read. If the next generation of engineers never develops the fluency to read, understand, and reason about code at a deep level, no amount of AI tooling will compensate for that gap.
If you lead engineering teams, the most important thing you can do right now is acknowledge that this transition is genuinely difficult. Not theoretically. Not abstractly. For the actual people on your team.
The career they signed up for changed fast. The skills they were hired for are being repositioned. The expectations they are working under shifted without a clear announcement. Acknowledging this reality is not a sign of weakness. It is a prerequisite for maintaining a team that trusts you.
Start with empathy, but do not stop there.
Give your team real training. Not a lunch-and-learn about prompt engineering. Real investment in the skills that the new engineering landscape actually requires: system design, architectural thinking, product reasoning, security awareness, and the ability to critically evaluate code they did not write. These are not trivial skills. They take time to develop, and your team needs structured support to build them.
Give them space to experiment without the pressure of immediate productivity gains. The engineers who will thrive in this environment are the ones who have room to figure out how AI fits into their workflow without being penalized for the learning curve. Every experienced technologist I know who has successfully integrated AI tools went through an adjustment period where they were less productive before they became more productive. That adjustment period is normal, and it needs to be protected.
Set explicit boundaries around role scope. If you are asking engineers to take on product thinking, planning, and risk assessment in addition to their technical work, name it. Define it. Compensate for it. Do not let it happen silently and then wonder why your team is burned out.
Rethink your metrics. If your engineering success metrics are still centered on velocity, tickets closed, and lines of code, you are measuring the wrong things in an AI-assisted world. System stability, code quality, decision quality, customer outcomes, and team health are better indicators of whether your engineering organization is actually producing value or just producing volume.
Protect the junior pipeline. If you have stopped hiring junior engineers because AI can handle entry-level tasks, you are solving a short-term efficiency problem by creating a long-term talent crisis. The senior engineers you rely on today were junior engineers who learned by doing the work that AI is now consuming. That path still matters.
And finally, keep challenging your team. I have never met a good engineer who did not love a good challenge. The engineers on your team are not fragile. They are capable, intelligent people who signed up for hard problems. They can handle this transition. Just make sure they are set up to meet it.
If you are an engineer navigating this shift, here is what I would tell you based on two decades of watching technology cycles reshape this profession.
First, do not abandon your fundamentals. The pressure to become an “AI-first” engineer is real, but the engineers who will be most valuable in five years are the ones who deeply understand the systems they work on. AI is a tool. Understanding architecture, debugging complex systems, reasoning about performance and security: these skills are not becoming less important. They are becoming more important because someone needs to be the adult in the room when AI-generated code breaks in production at 2 AM.
Second, learn to set boundaries with the acceleration trap. Just because you can produce more does not mean you should. Sustainable pace matters. The engineers who burn out trying to match the theoretical maximum output AI makes possible are not the ones who build lasting careers. The ones who learn to work with AI deliberately, choosing when to use it and when to think independently, are the ones who will still be thriving in this profession a decade from now.
Third, embrace the parts of the expanded role that genuinely interest you. If the engineering role now includes more product thinking, more architectural decision-making, more cross-functional communication, treat that as an opportunity rather than an imposition. These are skills that senior engineers and technical leaders need. You are being given access to a broader set of capabilities earlier in your career than any previous generation of engineers. That is not a burden. It is a head start.
Fourth, talk about what you are experiencing. The isolation of feeling like you are the only one struggling with this transition is one of the most damaging aspects of the current moment. You are not the only one. The data confirms it. Two-thirds of engineers report burnout. The expectation gap between leadership and engineering teams is well documented. Talking openly about these challenges, with your team, with your manager, with your broader network, is not complaining. It is professional honesty.
And fifth, remember that this profession has survived every prediction of its demise. COBOL was supposed to eliminate programmers. Expert systems were supposed to replace them. Fourth-generation languages, CASE tools, visual programming, no-code platforms, outsourcing. Every decade brings a new technology that promises to make software engineers obsolete, and every decade the demand for skilled engineers grows. AI will not be different. The tools change. The fundamentals endure.
AI made writing code easier and made being an engineer harder. Both of these things are true at the same time, and pretending that only the first one matters is how organizations lose their best people.
The engineers who are struggling right now are not struggling because they are bad at their jobs. They are struggling because their jobs changed underneath them while the industry celebrated the part that got easier and ignored the parts that got harder.
Expectations rose without announcement. Roles expanded without boundaries. Output demands increased without corresponding increases in support, training, or acknowledgment. And the engineers who raised concerns were told, implicitly or explicitly, that they just needed to adapt faster.
That is not how you build a sustainable engineering culture. That is how you build a burnout machine.
The industry needs to name this paradox honestly. AI is an incredible tool. It is also placing enormous new demands on the people using it. Both things can be true. Both things need to be addressed.
The organizations that get this right, that invest in their people alongside their tools, that acknowledge the human cost of rapid technological change while still pushing forward, those are the organizations that will attract and retain the best engineering talent in the years ahead.
The ones that do not will discover something that every technology cycle eventually teaches: tools do not build products. People do. And people have limits that no amount of AI can automate away.
If this resonated with you, I would love to hear your perspective. What has changed most about your engineering role in the last year? Drop me a message or connect with me on LinkedIn. I write regularly about the intersection of AI, software engineering, and leadership at ivanturkovic.com. Follow along if you want honest, experience-driven perspectives on how technology is actually changing this profession.
...
Read the original on www.ivanturkovic.com »
I’m going to make a bold claim: MCP is already dying. We may not fully realize it yet, but the signs are there. OpenClaw doesn’t support it. Pi doesn’t support it. And for good reason.
When Anthropic announced the Model Context Protocol, the industry collectively lost its mind. Every company scrambled to ship MCP servers as proof they were “AI first.” Massive resources poured into new endpoints, new wire formats, new authorization schemes, all so LLMs could talk to services they could already talk to.
I’ll admit, I never fully understood the need for it. You know what LLMs are really good at? Figuring things out on their own. Give them a CLI and some docs and they’re off to the races.
I tried to avoid writing this for a long time, but I’m convinced MCP provides no real-world benefit, and that we’d be better off without it. Let me explain.
LLMs are really good at using command-line tools. They’ve been trained on millions of man pages, Stack Overflow answers, and GitHub repos full of shell scripts. When I tell Claude to use gh pr view 123, it just works.
MCP promised a cleaner interface, but in practice I found myself writing the same documentation anyway: what each tool does, what parameters it accepts, and more importantly, when to use it. The LLM didn’t need a new protocol.
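To make the duplication concrete, here is a minimal sketch of an MCP tool declaration (the tool name, description, and fields are illustrative, following the protocol's `name`/`description`/`inputSchema` shape), restating information a `--help` string already carries:

```json
{
  "name": "pr_view",
  "description": "Show details of a pull request. Use when the user asks about a specific PR.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "number": { "type": "integer", "description": "Pull request number" }
    },
    "required": ["number"]
  }
}
```

Everything in that declaration is documentation you would have written anyway — `gh pr view --help` already says it, in a form the model was trained on.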
When Claude does something unexpected with Jira, I can run the same jira issue view command and see exactly what it saw. Same input, same output, no mystery.
With MCP, the tool only exists inside the LLM conversation. Something goes wrong and now I’m spelunking through JSON transport logs instead of just running the command myself. Debugging shouldn’t require a protocol decoder.
This is where the gap gets wide. CLIs compose. I can pipe through jq, chain with grep, redirect to files. This isn’t just convenient; it’s often the only practical approach.
With MCP, your options are dumping the entire plan into the context window (expensive, often impossible) or building custom filtering into the MCP server itself. Either way, you’re doing more work for a worse result. The CLI approach uses tools that already exist, are well-documented, and that both humans and agents understand.
MCP is unnecessarily opinionated about auth. Why should a protocol for giving an LLM tools to use need to concern itself with authentication?
CLI tools don’t care. aws uses profiles and SSO. gh uses gh auth login. kubectl uses kubeconfig. These are battle-tested auth flows that work the same whether I’m at the keyboard or Claude is driving. When auth breaks, I fix it the way I always would: aws sso login, gh auth refresh. No MCP-specific troubleshooting required.
Local MCP servers are processes. They need to start up, stay running, and not silently hang. In Claude Code, they’re spawned as child processes, which works until it doesn’t.
CLI tools are just binaries on disk. No background processes, no state to manage, no initialization dance. They’re there when you need them and invisible when you don’t.
Beyond the design philosophy, MCP has real day-to-day friction:
Initialization is flaky. I’ve lost count of the times I’ve restarted Claude Code because an MCP server didn’t come up. Sometimes it works on retry, sometimes I’m clearing state and starting over.
Re-auth never ends. Using multiple MCP tools? Have fun authenticating each one. CLIs with SSO or long-lived credentials just don’t have this problem. Auth once and you’re done.
Permissions are all-or-nothing. Claude Code lets you allowlist MCP tools by name, but that’s it. You can’t scope to read-only operations or restrict parameters. With CLIs, I can allowlist gh pr view but require approval for gh pr merge. That granularity matters.
I’m not saying MCP is completely useless. If a tool genuinely has no CLI equivalent, MCP might be the right call. I still use plenty in my day-to-day, when it’s the only option available.
I might even argue there’s some value in having a standardized interface, and that there are probably use cases where it makes more sense than a CLI.
But for the vast majority of work, the CLI is simpler, faster to debug, and more reliable.
The best tools are the ones that work for both humans and machines. CLIs have had decades of design iteration. They’re composable, debuggable, and they piggyback on auth systems that already exist.
MCP tried to build a better abstraction. Turns out we already had a pretty good one.
If you’re a company investing in an MCP server but you don’t have an official CLI, stop and rethink what you’re doing. Ship a good API, then ship a good CLI. The agents will figure it out.
...
Read the original on ejholmes.github.io »
Researchers at Oregon State University have created a new nanomaterial designed to destroy cancer cells from the inside. The material activates two separate chemical reactions once inside a tumor cell, overwhelming it with oxidative stress while leaving surrounding healthy tissue unharmed.
The work, led by Oleh Taratula, Olena Taratula, and Chao Wang from the OSU College of Pharmacy, was published in Advanced Functional Materials.
The discovery strengthens the growing field of chemodynamic therapy or CDT. This emerging cancer treatment strategy takes advantage of the unique chemical conditions found inside tumors. Compared with normal tissue, cancer cells tend to be more acidic and contain higher levels of hydrogen peroxide.
Traditional CDT uses these tumor conditions to spark the formation of hydroxyl radicals, highly reactive molecules made of oxygen and hydrogen that contain an unpaired electron. These reactive oxygen species damage cells through oxidation, stripping electrons from essential components such as lipids, proteins, and DNA.
More recent CDT approaches have also succeeded in generating singlet oxygen inside tumors. Singlet oxygen is another reactive oxygen species, named for its single electron spin state rather than the three spin states seen in the more stable oxygen molecules present in the air.
“However, existing CDT agents are limited,” Oleh Taratula said. “They efficiently generate either radical hydroxyls or singlet oxygen but not both, and they often lack sufficient catalytic activity to sustain robust reactive oxygen species production. Consequently, preclinical studies often only show partial tumor regression and not a durable therapeutic benefit.”
To address these shortcomings, the team developed a new CDT nanoagent built from an iron-based metal-organic framework or MOF. This structure is capable of producing both hydroxyl radicals and singlet oxygen, increasing its cancer-fighting potential. The MOF demonstrated strong toxicity across multiple cancer cell lines while causing minimal harm to noncancerous cells.
“When we systemically administered our nanoagent in mice bearing human breast cancer cells, it efficiently accumulated in tumors, robustly generated reactive oxygen species and completely eradicated the cancer without adverse effects,” Olena Taratula said. “We saw total tumor regression and long-term prevention of recurrence, all without seeing any systemic toxicity.”
In these preclinical experiments, tumors disappeared entirely and did not return, and the animals showed no signs of harmful side effects.
Before moving into human trials, the researchers plan to test the treatment in additional cancer types, including aggressive pancreatic cancer, to determine whether the approach can be effective across a wide range of tumors.
Other contributors to the study included Oregon State researchers Kongbrailatpam Shitaljit Sharma, Yoon Tae Goo, Vladislav Grigoriev, Constanze Raitmayr, Ana Paula Mesquita Souza, and Manali Parag Phawde. Funding was provided by the National Cancer Institute of the National Institutes of Health and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.
...
Read the original on www.sciencedaily.com »
As the agentic web evolves, we want to help websites play an active role in how AI agents interact with them. WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision.
By defining these tools, you tell agents how and where to interact with your site, whether it’s booking a flight, filing a support ticket, or navigating complex data. This direct communication channel eliminates ambiguity and allows for faster, more robust agent workflows.
WebMCP proposes two new APIs that allow browser agents to take action on behalf of the user:
Declarative API: Perform standard actions that can be defined directly in HTML forms.
Imperative API: Register custom tools in JavaScript for more complex, dynamic actions.
These APIs serve as a bridge, making your website “agent-ready” and enabling more reliable and performant agent workflows compared to raw DOM actuation.
Imagine an agent that can handle complex tasks for your users with confidence and speed.
Customer support: Help users create detailed customer support tickets, by enabling agents to fill in all of the necessary technical details automatically.
Ecommerce: Users can better shop your products when agents can easily find what they’re looking for, configure particular shopping options, and navigate checkout flows with precision.
Travel: Users could more easily get the exact flights they want, by allowing the agent to search, filter results, and handle bookings using structured data to ensure accurate results every time.
WebMCP is available for prototyping to early preview program participants.
Sign up for the early preview program to gain access to the documentation and demos, stay up-to-date with the latest changes, and discover new APIs.
...
Read the original on developer.chrome.com »
Trying my best to visualize it. I’m a n00b at machine learning though
Andrej Karpathy wrote a 200-line Python script that trains and runs a GPT from scratch, with no libraries or dependencies, just pure Python. The script contains the algorithm that powers LLMs like ChatGPT.
Let’s walk through it piece by piece and watch each part work. Andrej did a walkthrough on his blog, but here I take a more visual approach, tailored for beginners.
The model trains on 32,000 human names, one per line: emma, olivia, ava, isabella, sophia… Each name is a document. The model’s job is to learn the statistical patterns in these names and generate plausible new ones that sound like they could be real.
By the end of training, the model produces names like “kamon”, “karai”, “anna”, and “anton”. The model has learned which characters tend to follow which, which sounds are common at the start vs. the end, and how long a typical name runs. From ChatGPT’s perspective, your conversation is just a document. When you type a prompt, the model’s response is a statistical document completion.
Neural networks work with numbers, not characters. So we need a way to convert text into a sequence of integers and back. The simplest possible tokenizer assigns one integer to each unique character in the dataset. The 26 lowercase letters get ids 0 through 25, and we add one special token called BOS (Beginning of Sequence) with id 26 that marks where a name starts and ends.
Type a name below and watch it get tokenized. Each character maps to its integer id, and BOS tokens wrap both ends:
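In code, the whole tokenizer is a couple of dictionaries. A minimal sketch matching the scheme above (the script’s actual implementation may differ in the details):

```python
# Character-level tokenizer: 26 lowercase letters -> ids 0..25,
# plus a BOS token (id 26) wrapped around each name.
chars = "abcdefghijklmnopqrstuvwxyz"
stoi = {ch: i for i, ch in enumerate(chars)}   # 'a' -> 0, ..., 'z' -> 25
itos = {i: ch for ch, i in stoi.items()}
BOS = 26

def encode(name):
    return [BOS] + [stoi[ch] for ch in name] + [BOS]

def decode(ids):
    return "".join(itos[i] for i in ids if i != BOS)
```

Here `encode("emma")` yields `[26, 4, 12, 12, 0, 26]`: the name’s character ids wrapped in BOS tokens on both ends.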
Here’s the core task: given the tokens we’ve seen so far, predict what comes next. We slide through the sequence one position at a time. At position 0, the model sees only BOS and must predict the first letter. At position 1, it sees BOS and the first letter and must predict the second letter. And so on.
Step through the sequence below and watch the context grow while the target shifts forward:
Each step produces one training example: the context on the left is the input, the green token on the right is what the model should predict. For the name “emma”, that’s five input-target pairs. This sliding window is how all language models train, including ChatGPT.
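The sliding window is one list comprehension over the encoded tokens (an illustrative sketch, not the script’s exact code):

```python
def training_pairs(tokens):
    # At step t the input is everything up to and including position t,
    # and the target is the token at position t+1.
    return [(tokens[:t + 1], tokens[t + 1]) for t in range(len(tokens) - 1)]

# For "emma" encoded as [BOS, e, m, m, a, BOS]:
pairs = training_pairs([26, 4, 12, 12, 0, 26])
```

That produces the five input-target pairs described above; the last pair teaches the model to emit BOS (“I’m done”) after “emma”.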
At each position, the model outputs 27 raw numbers, one per possible next token. These numbers (called logits) can be anything: positive, negative, large, small. We need to convert them into probabilities that are positive and sum to 1. Softmax does this by exponentiating each score and dividing by the total.
Adjust the logits below and watch the probability distribution change. Notice how one large logit dominates, and the exponential amplifies differences.
Here’s the actual softmax code from microgpt. Step through it to see the intermediate values at each line:
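The function itself is tiny. A numerically stable sketch (subtracting the max logit before exponentiating so that large values don’t overflow; the script’s version is equivalent in spirit):

```python
import math

def softmax(logits):
    m = max(logits)                           # for numerical stability
    exps = [math.exp(x - m) for x in logits]  # all positive
    total = sum(exps)
    return [e / total for e in exps]          # sums to 1
```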
How wrong was the prediction? We need a single number that captures “the model thought the correct answer was unlikely.” If the model assigns probability 0.9 to the correct next token, the loss is low (0.1). If it assigns probability 0.01, the loss is high (4.6). The formula is -log(p), where p is the probability the model assigned to the correct token. This is called cross-entropy loss.
Drag the slider to adjust the probability of the correct token and watch the loss change:
The curve has two properties that make it useful. First, it’s zero when the model is perfectly confident in the right answer (p = 1). Second, it goes to infinity as the model assigns near-zero probability to the truth (p → 0), which punishes confident wrong answers severely. Training minimizes this number.
To improve, the model needs to answer: “for each of my 4,192 parameters, if I nudge it up by a tiny amount, does the loss go up or down, and by how much?” Backpropagation computes this by walking the computation backward, applying the chain rule at each step.
Every mathematical operation (add, multiply, exp, log) is a node in a graph. Each node remembers its inputs and knows its local derivative. The backward pass starts at the loss (where the gradient is trivially 1.0) and multiplies local derivatives along every path back to the inputs.
Step through the forward pass, then the backward pass for a small example expression:
Now step through the actual Value class code. Watch how each operation records its children and local gradients, then how backward() walks the graph in reverse, accumulating gradients:
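A stripped-down version of such a Value class shows the mechanics. This sketch supports only + and * (the real class also needs exp, log, and more), but the backward pass is the same idea:

```python
class Value:
    # Minimal scalar autograd node: each node stores its data, its gradient,
    # the child nodes it was built from, and the local derivatives w.r.t. them.
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._local_grads = local_grads

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    visit(c)
                order.append(v)
        visit(self)
        self.grad = 1.0  # d(loss)/d(loss) = 1
        for v in reversed(order):
            for child, lg in zip(v._children, v._local_grads):
                child.grad += lg * v.grad
```

For example, with `c = a * b + a` where a = 2 and b = -3, calling `c.backward()` gives a.grad = b + 1 = -2 and b.grad = a = 2.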
We know how to measure error and how to trace that error back to every parameter. Now let’s build the model itself, starting with how it represents tokens.
A raw token id like 4 is just an index. The model can’t do math with a bare integer. So each token looks up a learned vector (a list of 16 numbers) from an embedding table. Think of it as each token having a 16-dimensional “personality” that the model can adjust during training.
Position matters too. The letter “a” at position 0 plays a different role than “a” at position 4. So there’s a second embedding table indexed by position. The token embedding and position embedding are added together to form the input to the rest of the network.
Click a token below to see its embedding vectors and how they combine:
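The lookup-and-add is all there is to it. A sketch using the sizes above (16-dimensional embeddings, 27 tokens; the block size and initialization scale are assumptions):

```python
import random

vocab_size, block_size, n_embd = 27, 16, 16
rng = random.Random(0)
# One learned 16-dim vector per token, and one per position.
wte = [[rng.gauss(0, 0.02) for _ in range(n_embd)] for _ in range(vocab_size)]
wpe = [[rng.gauss(0, 0.02) for _ in range(n_embd)] for _ in range(block_size)]

def embed(token_id, position):
    # Input to the network = token embedding + position embedding, elementwise.
    return [t + p for t, p in zip(wte[token_id], wpe[position])]
```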
This is how attention works. At each position, the model needs to gather information from previous positions. It does this through self-attention: each token produces three vectors from its embedding.
A Query (“what am I looking for?”), a Key (“what do I contain?”), and a Value (“what information do I offer if selected?”). The query at the current position is compared against all keys from previous positions via dot products. High dot product means high relevance. Softmax converts these scores into attention weights, and the weighted sum of values is the output.
Explore the attention weights below. Each cell shows how much one position attends to another. Switch between the four attention heads to see different patterns:
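The weights in that grid come from a few dot products and a softmax. A single-head sketch (the model has four heads, each doing exactly this with its own learned query/key projections):

```python
import math

def attention_weights(queries, keys):
    # queries[t], keys[s]: vectors of equal dimension.
    # For each position t, score q_t against keys 0..t (causal mask),
    # scale by sqrt(dim), then softmax the scores into weights.
    weights = []
    for t, q in enumerate(queries):
        scores = [sum(qi * ki for qi, ki in zip(q, keys[s])) / math.sqrt(len(q))
                  for s in range(t + 1)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights.append([e / total for e in exps])
    return weights
```

Each row sums to 1 and only covers positions up to the current one: a token can look back but never forward.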
The model pipes each token through: embed, normalize, attend, add residual, normalize, MLP, add residual, project to output logits. The MLP (multilayer perceptron) is a two-layer feed-forward network: project up to 64 dimensions, apply ReLU (zero out negatives), project back to 16. If attention is how tokens communicate, the MLP is where each position thinks independently.
Step through the pipeline for one token and watch data flow through each stage:
Here’s the actual gpt() function from microgpt. Step through to see the code executing line by line, with the intermediate vector at each stage:
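The MLP stage in that pipeline is two matrix multiplies with a ReLU in between. A sketch operating on plain lists (weight shapes follow the 16 → 64 → 16 description; biases omitted for brevity):

```python
def mlp(x, w1, w2):
    # w1: one row of len(x) weights per hidden unit (project up).
    # w2: one row of len(hidden) weights per output unit (project back).
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)))  # ReLU
              for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]
```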
The training loop repeats 1,000 times: pick a name, tokenize it, run the model forward over every position, compute the cross-entropy loss at each position, average the losses, backpropagate to get gradients for every parameter, and update the parameters to make the loss a bit lower.
The optimizer is Adam, which is smarter than naive gradient descent. It maintains a running average of each parameter’s recent gradients (momentum) and a running average of the squared gradients (adaptive step sizes). Parameters that have been getting consistent gradients take larger steps. Parameters that have been oscillating take smaller ones.
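For a single scalar parameter, one Adam step looks like this (standard hyperparameter defaults; the script’s exact values may differ):

```python
def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad         # momentum: running mean of gradients
    v = b2 * v + (1 - b2) * grad * grad  # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)            # bias correction (t = step count, from 1)
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

Dividing by the root of the squared-gradient average is what gives each parameter its own effective step size.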
Watch the loss decrease over 1,000 training steps. The model starts at ~3.3 (random guessing among 27 tokens: ln 27 ≈ 3.3) and settles around 2.37. The generated names evolve from gibberish to plausible:
Step through the code for one complete training iteration. Watch it pick a name, run the forward pass at each position, compute the loss, run backward, and update the parameters:
Once training is done, inference is straightforward. Start with BOS, run the forward pass, get 27 probabilities, randomly sample one token, feed it back in, and repeat until the model outputs BOS again (meaning “I’m done”) or we hit the maximum length.
Temperature controls how we sample. Before softmax, we divide the logits by the temperature. A temperature of 1.0 samples directly from the learned distribution. Lower temperatures sharpen the distribution (the model picks its top choices more often). Higher temperatures flatten it (more diverse but potentially less coherent output).
Adjust the temperature and watch the probability distribution change:
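Temperature scaling plus sampling fits in a few lines (a sketch; the seeded RNG is only for reproducibility):

```python
import math, random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    # Divide logits by temperature before softmax: <1 sharpens, >1 flattens.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index from the resulting distribution.
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```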
Step through the inference loop to see a name being generated character by character. At each step, the model runs forward, produces probabilities, and samples the next token:
This 200-line script contains the complete algorithm. Between this and ChatGPT, little changes conceptually. The differences are things like: trillions of tokens instead of 32,000 names. Subword tokenization (100K vocabulary) instead of characters. Tensors on GPUs instead of scalar Value objects in Python. Hundreds of billions of parameters instead of 4,192. Hundreds of layers instead of one. Training across thousands of GPUs for months.
But the loop is the same. Tokenize, embed, attend, compute, predict the next token, measure surprise, walk the gradients backward, nudge the parameters. Repeat.
...
Read the original on growingswe.com »
git-memento is a Git extension that records the AI coding session used to produce a commit.
It runs a commit and then stores a cleaned markdown conversation as a git note on the new commit.
* Attach the AI session trace to the commit (git notes).
* Keep provider support extensible (Codex first, others later).
git memento init
git memento init codex
git memento init claude
git memento commit
You can pass -m multiple times, and each value is forwarded to git commit in order. When -m is omitted, git commit opens your default editor.
* Without a session id, it copies the note(s) from the previous HEAD onto the amended commit.
* With a session id, it copies previous note(s) and appends the new fetched session as an additional session entry.
* A single commit note can contain sessions from different AI providers.
git memento share-notes
git memento share-notes upstream
This pushes refs/notes/* and configures the local remote so notes can be fetched by teammates.
Push your branch and sync notes to the same remote in one command (default: origin):
git memento push
git memento push upstream
This runs git push and then performs the same notes sync as share-notes.
git memento notes-sync
git memento notes-sync upstream
git memento notes-sync upstream --strategy union
* Merges remote notes into local notes and pushes synced notes back to the remote.
git memento notes-rewrite-setup
Carry notes from a rewritten range (for squash/rewrite flows) onto a new target commit:
git memento notes-carry --onto
This reads notes from commits in .. and appends a provenance block to .
git memento audit --range main..HEAD
git memento audit --range origin/main..HEAD --strict --format json
git memento doctor
git memento doctor upstream --format json
git memento help
git memento --version
Provider defaults can come from env vars, and init persists the selected provider + values in local git config:
* If the repository is not configured yet, commit, amend, push, share-notes, notes-sync, notes-rewrite-setup, and notes-carry fail with a message to run git memento init first.
If a session id is not found, git-memento asks Codex for available sessions and prints them.
dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r osx-arm64 -p:PublishAot=true
dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r linux-x64 -p:PublishAot=true
dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r win-x64 -p:PublishAot=true
Copy the produced executable to a directory in your PATH.
Ensure the binary name is git-memento (or git-memento.exe on Windows).
git memento commit
curl -fsSL https://raw.githubusercontent.com/mandel-macaque/memento/main/install.sh | sh
* Release assets are built with NativeAOT (PublishAot=true) and packaged as a single executable per platform.
* If the workflow runs from a tag push (for example v1.2.3), that tag is used as the GitHub release tag/name.
* If the workflow runs from main without a tag, the release tag becomes (for example 1.0.0-a1b2c3d4).
* install.sh always downloads from releases/latest, so the installer follows the latest published GitHub release.
CI runs install smoke tests on Linux, macOS, and Windows that verify:
* install.sh downloads the latest release asset for the current OS/architecture.
* The binary is installed for the current user into the configured install directory.
* git memento --version and git memento help both execute after installation.
dotnet test GitMemento.slnx
npm run test:js
This repository includes a reusable marketplace action with two modes:
* mode: gate: runs git memento audit as a CI gate and fails if note coverage checks fail. git-memento must already be installed in the job.
name: memento-note-comments
on:
push:
pull_request:
types: [opened, synchronize, reopened]
permissions:
contents: write
pull-requests: read
jobs:
comment-memento-notes:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: mandel-macaque/memento@v1
with:
mode: comment
github-token: ${{ secrets.GITHUB_TOKEN }}
name: memento-note-gate
on:
pull_request:
types: [opened, synchronize, reopened]
permissions:
contents: read
jobs:
enforce-memento-notes:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: mandel-macaque/memento/install@v1
with:
memento-repo: mandel-macaque/memento
- uses: mandel-macaque/memento@v1
with:
mode: gate
strict: "true"
- uses: mandel-macaque/memento/install@v1
with:
memento-repo: mandel-macaque/memento
npm ci
npm run build:action
...
Read the original on github.com »
EVERETT, Wash. — The City of Everett has shut down its entire network of Flock license plate reader cameras after a Snohomish County judge ruled the footage those cameras collect qualifies as a public record.
The decision came after a Washington man filed public records requests seeking access to data captured by the cameras.
Jose Rodriguez of Walla Walla, represented by attorney Tim Hall, requested the footage from multiple jurisdictions in Washington state, to see what information the automated license plate reader system was collecting.
“He started noticing that the cameras were everywhere — he wanted to see what kind of data they collect,” Hall said.
The requests revealed that Flock cameras continuously capture thousands of images, regardless of whether a vehicle is linked to a crime.
When several cities, including Everett, moved to block the request, the case went to court.
On Tuesday, a Snohomish County judge ruled that footage captured by Flock cameras qualifies as a public record under Washington law, meaning members of the public can request access to the data.
Everett Mayor Cassie Franklin said the city disagrees with the ruling and is concerned about who could obtain the footage.
“We were very disappointed,” Franklin said. “That means perpetrators of crime, people who are maybe engaged in domestic abuse or stalkers, they can request footage and that could cause a lot of harm.”
Following the ruling, Everett temporarily turned off all 68 of its Flock cameras.
At the same time, lawmakers in Olympia are debating a bill that would exempt Flock footage from public records law.
Supporters of the proposed legislation argue that public access to the data could create safety risks, including the possibility that federal immigration agents could attempt to obtain footage through public disclosure requests.
Hall pushed back on those concerns, saying public records requests are typically a lengthy process and unlikely to be useful for real-time tracking.
“As somebody who has made hundreds of public records requests myself, and represented many, many people in public records lawsuits, it’s generally a lengthy process,” Hall said. “Same would be true for ICE. They’re going to get data from where you were three months, two months ago.”
Franklin said if lawmakers pass legislation allowing cities to shield Flock data from public disclosure, Everett would consider turning the cameras back on. She said the city is not dismantling or removing the cameras in the meantime.
...
Read the original on www.king5.com »