10 interesting stories served every morning and every evening.
Passengers at Britain’s biggest airport, Heathrow, can leave liquids in containers of up to two litres in their bags while going through security, after it finally completed the rollout of new high-tech CT scanners. Electronics such as laptops can also be left in luggage, and clear plastic bags for liquids no longer have to be used.

Heathrow now says it is the biggest airport in the world to have the new equipment fully rolled out across all its terminals. But while it is the largest airport to complete the rollout, it is far from the UK’s first: Gatwick, Edinburgh and Birmingham airports have upgraded to the scanners in recent years and increased their limits to two litres.
At most UK airports, passengers can keep liquid containers of up to 100ml in their luggage, without having to remove them and use clear plastic bags. Bristol and Belfast airports have also raised their liquid limits to two litres. However, other airports that have the new scanners installed are waiting for the green light from the Department for Transport (DfT) to raise the limit from 100ml.

A recent report by consumer group Which? found that the sensitivity of the new scanners means that at some airports, more bag searches end up being carried out by hand after bags pass through them. Heathrow said the scanners, which provide better images of cabin bags, could service “thousands of passengers an hour with significantly greater efficiency, while maintaining high safety and security standards”.

The rule change only applies to flights leaving Heathrow, and passengers must check luggage restrictions at the airports they are returning from before boarding flights to the UK.

The rollout of the new high-tech scanners across the UK has suffered a series of setbacks over the past few years. Boris Johnson promised in 2019 that the rules requiring liquids to be carried in containers of no more than 100ml, inside plastic bags, would be scrapped by the end of 2022. The pandemic eventually put paid to that. In December 2022, the Conservative government promised state-of-the-art scanning equipment would be installed in security lanes by June 2024 in the “biggest shake-up of airport security rules in decades”.
Then-Transport Secretary Mark Harper said the dominance of the “tiny toiletry” was nearly over. But, as it turned out, the June 2024 deadline was not achievable for the biggest airports, although a number of smaller ones, with fewer lanes to upgrade, did have the scanners in place before that date.

Then, on the evening of Friday 13 June 2024, the government said those smaller airports which had already introduced the new scanners and dropped their 100ml liquids rules would have to reinstate them. This triggered anger among airport operators. The EU also announced a reversion to the 100ml rule in July that year. There has since been a period of inconsistency: last summer, the Transport Secretary was telling passengers to assume the 100ml rule still applied.
Heathrow chief executive Thomas Woldbye said the £1bn package of upgrades would mean passengers could spend “less time preparing for security and more time enjoying their journey”. Of the world’s 10 busiest airports, Heathrow is the only one to have scrapped the 100ml rule for liquid containers on international flights.

A DfT spokesperson said: “Heathrow is the latest UK airport to complete its rollout of next-generation security equipment for passengers, helping ensure security checks remain robust and can be completed smoothly. Airports are responsible for the installation and operation of security equipment. Passengers should continue to check security requirements with airports before they travel and come prepared with liquids in containers no larger than 100ml in hand baggage unless advised otherwise.”

The Advantage Travel Partnership, a network of travel agents, said airports setting their own timelines for lifting the 100ml cap had “led to confusion and frustration” and passengers had been “tripped up”. Chief executive Julia Lo Bue-Said said: “We would urge UK airports to work collectively with the government to ensure there is clear messaging around the rules to avoid confusion and delays where possible.”
...
Read the original on www.bbc.com »
FBI Director Kash Patel said Monday that he had opened an investigation into the Signal group text chats that Minnesota residents are using to share information about federal immigration agents’ movements, launching a new front in the Trump administration’s conflict there with potential free speech implications.
Patel said in an interview with conservative podcaster Benny Johnson that he wanted to know whether any Minnesota residents had put federal agents “in harm’s way” with activities such as sharing agents’ license plate numbers and locations.
“You cannot create a scenario that illegally entraps and puts law enforcement in harm’s way,” he said in the interview, which was posted to YouTube.
The investigation quickly drew skepticism from free speech advocates who said the First Amendment protects members of the public who share legally obtained information, such as the names of federal agents or where they are conducting enforcement operations.
“There are legitimate reasons to share such information, including enabling members of the public to observe and document law enforcement activity and to hold officials accountable for misconduct,” Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said in an email.
“Given this administration’s poor track record of distinguishing protected speech from criminal conduct, any investigation like this deserves very close scrutiny,” he said.
For months, digital tools have been at the center of how people have pushed back against immigration enforcement efforts in Minnesota and across the country. The administration’s opponents have used group text chats to track Immigration and Customs Enforcement operations, share photos of suspected ICE vehicles and raise awareness for neighbors. In June, administration officials criticized ICEBlock, an app designed to share information about ICE sightings. Apple removed the app from its app store in October, prompting a lawsuit from the app’s developer alleging the administration unlawfully pressured Apple to remove it.
In the past few days, the group text chats — especially those on the encrypted messaging app Signal — have drawn attention from right-wing media. On Saturday, Cam Higby, a conservative journalist based near Seattle, said in a thread on X that he had “infiltrated” Signal groups from around Minneapolis that he alleged were obstructing law enforcement. His thread, which got 20 million views, focused on how the groups share such information as the license plate numbers of suspected federal vehicles. NBC News has not verified Higby’s claims.
Patel said he got the idea for the investigation from Higby.
“As soon as Higby put that post out, I opened an investigation on it,” he said. “We immediately opened up that investigation, because that sort of Signal chat — being coordinated with individuals not just locally in Minnesota, but maybe even around the country — if that leads to a break in the federal statute or a violation of some law, then we are going to arrest people.”
The Signal Foundation, the nonprofit organization that operates the Signal app, did not immediately respond to a request for comment.
Signal, which is considered one of the most secure chat apps, is a go-to resource for people concerned about privacy. It is perhaps best known as the app Defense Secretary Pete Hegseth used to share sensitive military information last year in a group chat that accidentally included a journalist.
In the Twin Cities, Signal group chats have been a standard part of toolkits — along with walkie-talkies and whistles — used by activists, parents and neighborhood-watch members who have organized as volunteers to warn families about immigration enforcement activities by relaying real-time information, especially near schools. Patrol volunteers have said that, with more than 3,000 federal immigration agents in Minnesota, they are motivated by a desire to protect parents, children and school staff members who are not U.S. citizens.
Patel did not say which laws he thought Minnesota residents may have violated. An FBI spokesperson said the bureau had no further information to provide.
The announcement seemed likely to have implications for the First Amendment’s guarantee of free speech. Alex Abdo, litigation director at the Knight First Amendment Institute at Columbia University, said the First Amendment protects the right to record law enforcement officers as they carry out their official responsibilities.
“The ability of everyday citizens to hold government agents to account, by observing them and advocating for change, is what has distinguished the American experiment with democracy from authoritarian regimes around the world,” Abdo said in an email.
“Unless the FBI has evidence of a crime, and not just evidence of activity the Constitution protects, it should stand down,” he said.
Patel acknowledged in the interview with Johnson that an investigation into group text chats would raise free speech concerns and said the FBI would “balance” the rights guaranteed by the First and Second amendments with what he said were potential violations of federal law.
“Now, we will balance the First and Second amendment constantly, but we have to let the community know that we will not tolerate acts of violence and an escalation and a violation of the federal code,” he said. The Second Amendment could be at issue because Alex Pretti, the nurse shot and killed by a federal agent Saturday in Minneapolis, was permitted to carry a gun in public and had one with him.
Terr, of the Foundation for Individual Rights and Expression, said the government does not get to “balance” the First Amendment against its other interests.
“The Constitution takes precedence over any conflicting state or federal law, and over any official’s desire to suppress speech they dislike,” he said in his email.
He added: “There is a First Amendment exception for speech intended and likely to provoke imminent unlawful action, but that doesn’t apply to just any speech the government claims puts officials in harm’s way. By contrast, if individuals are threatening federal agents or conspiring to physically harm them, that is illegal. But conspiracy requires an agreement to commit a specific crime and a substantial step toward carrying it out.”
Patel also said the FBI had made “substantial progress” in an investigation into groups and people responsible for funding resistance to immigration enforcement. He alleged that the protests and neighborhood monitoring are “not happening organically” but did not immediately provide evidence.
...
Read the original on www.nbcnews.com »
...
Read the original on tech.lgbt »
Today, we are introducing Kimi K2.5, the most powerful open-source model to date. Kimi K2.5 builds on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens. Built as a native multimodal model, K2.5 delivers state-of-the-art coding and vision capabilities and a self-directed agent swarm paradigm.

For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls. Compared with a single-agent setup, this reduces execution time by up to 4.5x. The agent swarm is automatically created and orchestrated by Kimi K2.5 without any predefined subagents or workflow.

Kimi K2.5 is available via Kimi.com, the Kimi App, the API, and Kimi Code. Kimi.com and the Kimi App now support 4 modes: K2.5 Instant, K2.5 Thinking, K2.5 Agent, and K2.5 Agent Swarm (Beta). Agent Swarm is currently in beta on Kimi.com, with free credits available for high-tier paid users.

Across three agentic benchmarks—HLE, BrowseComp, and SWE-Verified—Kimi K2.5 delivers strong performance at a fraction of the cost.

Kimi K2.5 is the strongest open-source model to date for coding, with particularly strong capabilities in front-end development. K2.5 can turn simple conversations into complete front-end interfaces, implementing interactive layouts and rich animations such as scroll-triggered effects. Below are examples generated by K2.5 from a single prompt with the image-gen tool.

Beyond text prompts, K2.5 excels at coding with vision. By reasoning over images and video, K2.5 improves image/video-to-code generation and visual debugging, lowering the barrier for users to express intent visually. Here is an example of K2.5 reconstructing a website from video. This capability stems from massive-scale vision-text joint pre-training: at scale, the trade-off between vision and text capabilities disappears, and they improve in unison. Below is an example of K2.5 reasoning over a puzzle and marking the shortest path using code.

K2.5 excels in real-world software engineering tasks. We evaluate it using Kimi Code Bench, our internal coding benchmark covering diverse end-to-end tasks — from building to debugging, refactoring, testing, and scripting — across multiple programming languages. On this benchmark, K2.5 shows consistent and meaningful improvements over K2 across task types.

To try out K2.5’s agentic coding capabilities, K2.5 Agent offers a set of preconfigured tools for immediate, hands-on experiences. For software engineering use cases, we recommend pairing Kimi K2.5 with our new coding product, Kimi Code. Kimi Code works in your terminal and can be integrated with various IDEs including VSCode, Cursor, Zed, etc. Kimi Code is open-sourced and supports images and videos as inputs. It also automatically discovers and migrates existing skills and MCPs into your working environment.

Here’s an example using Kimi Code to translate the aesthetic of Matisse’s La Danse into the Kimi App. This demo highlights a breakthrough in autonomous visual debugging: using visual inputs and documentation lookup, K2.5 visually inspects its own output and iterates on it autonomously, creating an art-inspired webpage end to end.

Scaling Out, Not Just Up.
We release K2.5 Agent Swarm as a research preview, marking a shift from single-agent scaling to self-directed, coordinated swarm-like execution. Trained with Parallel-Agent Reinforcement Learning (PARL), K2.5 learns to self-direct an agent swarm of up to 100 sub-agents, executing parallel workflows across up to 1,500 coordinated steps, without predefined roles or hand-crafted workflows.

PARL uses a trainable orchestrator agent to decompose tasks into parallelizable subtasks, each executed by dynamically instantiated, frozen subagents. Running these subtasks concurrently significantly reduces end-to-end latency compared to sequential agent execution.

Training a reliable parallel orchestrator is challenging due to delayed, sparse, and non-stationary feedback from independently running subagents. A common failure mode is serial collapse, where the orchestrator defaults to single-agent execution despite having parallel capacity. To address this, PARL employs staged reward shaping that encourages parallelism early in training and gradually shifts focus toward task success. We define the reward as

$r = r_{\text{task}} + \lambda \cdot r_{\text{parallel}}$,

where the auxiliary weight $\lambda$ anneals from its initial value toward zero over training. Early on, the auxiliary reward $r_{\text{parallel}}$ incentivizes subagent instantiation and concurrent execution, promoting exploration of the parallel scheduling space. As training progresses, optimization shifts toward end-to-end task quality $r_{\text{task}}$, preventing degenerate solutions where parallelism is enabled in name only.

To further force parallel strategies to emerge, we introduce a computational bottleneck that makes sequential execution impractical. Instead of counting total steps, we evaluate performance using Critical Steps, a latency-oriented metric inspired by the critical path in parallel computation:

$\text{CriticalSteps} = \sum_{k} \big( s^{\text{orch}}_{k} + \max_{i} s^{\text{sub}}_{k,i} \big)$,

where $s^{\text{orch}}_{k}$ captures orchestration overhead, while $\max_{i} s^{\text{sub}}_{k,i}$ reflects the slowest subagent at each stage $k$. Under this metric, spawning more subtasks only helps if it shortens the critical path.

An agent swarm has an orchestrator that dynamically creates specialized subagents (e.g., AI Researcher, Physics Researcher, Fact Checker) and decomposes complex tasks into parallelizable subtasks for efficient distributed execution. In our parallel-agent reinforcement learning environment, the reward increases smoothly as training progresses, and the level of parallelism during training also gradually increases.

K2.5 Agent Swarm improves performance on complex tasks through parallel, specialized execution. In our internal evaluations, it leads to an 80% reduction in end-to-end runtime while enabling more complex, long-horizon workloads. Agent Swarm reduces the minimum critical steps required to achieve target performance by 3×–4.5× compared to single-agent execution in wide-search scenarios, with savings scaling as targets rise—translating to up to 4.5× wall-clock time reduction via parallelization. Here are representative trajectories demonstrating K2.5 Agent Swarm in action.

K2.5 Agent can handle high-density, large-scale office work end to end. It reasons over large, high-density inputs, coordinates multi-step tool use, and delivers expert-level outputs: documents, spreadsheets, PDFs, and slide decks—directly through conversation. With a focus on real-world professional tasks, we design two internal expert productivity benchmarks. The AI Office Benchmark evaluates end-to-end Office output quality, while the General Agent Benchmark measures multi-step, production-grade workflows against human expert performance.
Across both benchmarks, K2.5 shows 59.3% and 24.3% improvements over K2 Thinking, reflecting stronger end-to-end performance on real-world tasks.
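As a concrete illustration of the PARL signals described above, here is a minimal sketch in Python. The additive reward combination, the linear anneal schedule, and all function names are assumptions made for illustration; the post does not publish the actual schedule or weights.

def shaped_reward(r_task, r_parallel, step, total_steps, lambda0=1.0):
    # The auxiliary parallelism reward is weighted by a coefficient that
    # anneals from lambda0 toward 0 over training: early optimization
    # favors spawning concurrent subagents, later optimization favors
    # end-to-end task success. (Linear annealing is an assumption.)
    lam = lambda0 * max(0.0, 1.0 - step / total_steps)
    return r_task + lam * r_parallel

def critical_steps(stages):
    # stages: list of (orchestrator_steps, [subagent_steps, ...]) tuples.
    # Each stage costs the orchestration overhead plus the slowest
    # subagent running in that stage, so spawning more subtasks only
    # pays off if it shortens this critical path.
    return sum(orch + (max(subs) if subs else 0) for orch, subs in stages)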
K2.5 Agent supports advanced tasks such as adding annotations in Word, constructing financial models with Pivot Tables, and writing LaTeX equations in PDFs, while scaling to long-form outputs like 10,000-word papers or 100-page documents. Tasks that once took hours or days now complete in minutes. Here are some examples.

Grounded in advances in coding with vision, agent swarms, and office productivity, Kimi K2.5 represents a meaningful step toward AGI for the open-source community, demonstrating strong capability on real-world tasks under real-world constraints. Looking ahead, we will push further into the frontier of agentic intelligence, redefining the boundaries of AI in knowledge work.

To reproduce official Kimi K2.5 benchmark results, we recommend using the official API. For third-party providers, refer to Kimi Vendor Verifier (KVV) to choose high-accuracy services. Details: https://kimi.com/blog/kimi-vendor-verifier.html

We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.

Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens. Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*). We could not evaluate GPT-5.2 xhigh on all benchmarks due to service stability issues; benchmarks that were not tested are marked with “-”.

HLE, AIME 2025, HMMT 2025 (Feb), GPQA-Diamond and IMO-AnswerBench were evaluated with a maximum completion budget of 96k tokens. Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).

For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Hugging Face access was blocked to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained.

Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks. Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were directly counted as failed. The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information. Full prompts will be provided in the technical report.

Results for Seal-0 and WideSearch are averaged over four runs (avg@4). ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning. MMMU-Pro follows the official protocol, preserving input order and prepending images. GPT-5.2 xhigh had a ~10% failure rate (no output despite 3 retries), treated as incorrect; its reported scores likely underestimate true performance. The OmniDocBench Score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy.

Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser.
In our implementation, we evaluated Terminal-Bench 2.0 under non-thinking mode, because our current context management strategy for thinking mode is incompatible with Terminus-2.

For the SWE-Bench series of evaluations (including Verified, Multilingual, and Pro), we used an internally developed evaluation framework. This framework includes a minimal set of tools—bash tool, createfile tool, insert tool, view tool, strreplace tool, and submit tool—along with tailored system prompts designed for the tasks. The highest scores were achieved under non-thinking mode.

The score of Claude Opus 4.5 on CyberGym is reported under the non-thinking setting. All reported scores for coding tasks are averaged over 5 independent runs.
...
Read the original on www.kimi.com »
Clawdbot is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant.
If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.
Preferred setup: run the onboarding wizard (clawdbot onboard). It walks through gateway, workspace, channels, and skills. The CLI wizard is the recommended path and works on macOS, Linux, and Windows (via WSL2; strongly recommended). Works with npm, pnpm, or bun. New install? Start here: Getting started
Model note: while any model is supported, I strongly recommend Anthropic Pro/Max (100/200) + Opus 4.5 for long‑context strength and better prompt‑injection resistance. See Onboarding.
npm install -g clawdbot@latest
# or: pnpm add -g clawdbot@latest
clawdbot onboard --install-daemon
The wizard installs the Gateway daemon (launchd/systemd user service) so it stays running.
clawdbot onboard --install-daemon
clawdbot gateway --port 18789 --verbose
# Send a message
clawdbot message send --to +1234567890 --message "Hello from Clawdbot"
# Talk to the assistant (optionally deliver back to any connected channel: WhatsApp/Telegram/Slack/Discord/Google Chat/Signal/iMessage/BlueBubbles/Microsoft Teams/Matrix/Zalo/Zalo Personal/WebChat)
clawdbot agent --message "Ship checklist" --thinking high
Prefer pnpm for builds from source. Bun is optional for running TypeScript directly.
git clone https://github.com/clawdbot/clawdbot.git
cd clawdbot
pnpm install
pnpm ui:build # auto-installs UI deps on first run
pnpm build
pnpm clawdbot onboard --install-daemon
# Dev loop (auto-reload on TS changes)
pnpm gateway:watch
Note: pnpm clawdbot … runs TypeScript directly (via tsx). pnpm build produces dist/ for running via Node / the packaged clawdbot binary.
* DM pairing (dmPolicy="pairing" / channels.discord.dm.policy="pairing" / channels.slack.dm.policy="pairing"): unknown senders receive a short pairing code and the bot does not process their message.
* Approve with: clawdbot pairing approve (the sender is then added to a local allowlist store).
* Public inbound DMs require an explicit opt-in: set dmPolicy="open" and include "*" in the channel allowlist (allowFrom / channels.discord.dm.allowFrom / channels.slack.dm.allowFrom).
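As a rough sketch of the pairing policy in config (the nesting below is inferred from the key path channels.discord.dm.policy, and the array form of allowFrom is an assumption; the onboarding wizard writes the authoritative file):

channels: {
  discord: {
    dm: {
      policy: "pairing"  // unknown senders get a short pairing code
      // public DMs instead require: policy: "open" plus allowFrom: ["*"]
    }
  }
}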
Clawdbot can auto-configure Tailscale Serve (tailnet-only) or Funnel (public) while the Gateway stays bound to loopback. Configure gateway.tailscale.mode:
* serve: tailnet-only HTTPS via tailscale serve (uses Tailscale identity headers by default).
* gateway.bind must stay loopback when Serve/Funnel is enabled (Clawdbot enforces this).
* Serve can be forced to require a password by setting gateway.auth.mode: "password" or gateway.auth.allowTailscale: false.
* Funnel refuses to start unless gateway.auth.mode: "password" is set.
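Putting the above together, a minimal sketch of a tailnet-only setup (the key paths come from this section; the literal values and nesting are illustrative assumptions):

gateway: {
  bind: "127.0.0.1",             // must stay loopback while Serve/Funnel is enabled
  tailscale: { mode: "serve" },  // tailnet-only HTTPS; "funnel" exposes public HTTPS
  auth: { mode: "password" }     // required for funnel; optional hardening for serve
}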
It’s perfectly fine to run the Gateway on a small Linux instance. Clients (macOS app, CLI, WebChat) can connect over Tailscale Serve/Funnel or SSH tunnels, and you can still pair device nodes (macOS/iOS/Android) to execute device‑local actions when needed.
* Gateway host runs the exec tool and channel connections by default.
* Device nodes run device‑local actions (system.run, camera, screen recording, notifications) via node.invoke.
In short: exec runs where the Gateway lives; device actions run where the device lives.
The macOS app can run in node mode and advertises its capabilities + permission map over the Gateway WebSocket (node.list / node.describe). Clients can then execute local actions via node.invoke:
* system.run runs a local command and returns stdout/stderr/exit code; set needsScreenRecording: true to require screen-recording permission (otherwise you’ll get PERMISSION_MISSING).
* system.notify posts a user notification and fails if notifications are denied.
* canvas.*, camera.*, screen.record, and location.get are also routed via node.invoke and follow TCC permission status.
* Use /elevated on|off to toggle per‑session elevated access when enabled + allowlisted.
* Gateway persists the per‑session toggle via sessions.patch (WS method) alongside thinkingLevel, verboseLevel, model, sendPolicy, and groupActivation.
* Use these to coordinate work across sessions without jumping between chat surfaces.
ClawdHub is a minimal skill registry. With ClawdHub enabled, the agent can search for skills automatically and pull in new ones as needed.
Send these in WhatsApp/Telegram/Slack/Google Chat/Microsoft Teams/WebChat (group commands are owner-only):
* /new or /reset — reset the session
The Gateway alone delivers a great experience. All apps are optional and add extra features.
If you plan to build/run companion apps, follow the platform runbooks below.
* Menu bar control for the Gateway and health.
Note: signed builds required for macOS permissions to stick across rebuilds (see docs/mac/permissions.md).
* Pairs as a node via the Bridge.
* Pairs via the same Bridge + pairing flow as iOS.
agent: {
  model: "anthropic/claude-opus-4-5"
}
* Default: tools run on the host for the main session, so the agent has full access when it’s just you.
* Group/channel safety: set agents.defaults.sandbox.mode: "non-main" to run non‑main sessions (groups/channels) inside per‑session Docker sandboxes; bash then runs in Docker for those sessions.
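A sketch of that setting, with the nesting inferred from the key path agents.defaults.sandbox.mode (illustrative, not the authoritative schema):

agents: {
  defaults: {
    sandbox: {
      mode: "non-main"  // non-main sessions (groups/channels) run in per-session Docker sandboxes
    }
  }
}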
* Allowlist who can talk to the assistant via channels.whatsapp.allowFrom.
* If channels.whatsapp.groups is set, it becomes a group allowlist; include "*" to allow all.
* Optional: set channels.telegram.groups (with channels.telegram.groups."*".requireMention); when set, it is a group allowlist (include "*" to allow all). Also set channels.telegram.allowFrom or channels.telegram.webhookUrl as needed.
channels: {
  telegram: {
    botToken: "123456:ABCDEF"
  }
}
* Optional: set commands.native, commands.text, or commands.useAccessGroups, plus channels.discord.dm.allowFrom, channels.discord.guilds, or channels.discord.mediaMaxMb as needed.
channels: {
  discord: {
    token: "1234abcd"
  }
}
* macOS only; Messages must be signed in.
* If channels.imessage.groups is set, it becomes a group allowlist; include "*" to allow all.
* Allowlist who can talk via msteams.allowFrom; group access via msteams.groupAllowFrom or msteams.groupPolicy: "open".
browser: {
  enabled: true,
  color: "#FF4500"
}
Use these when you’re past the onboarding flow and want the deeper reference.
Clawdbot was built for Clawd, a space lobster AI assistant. 🦞 by Peter Steinberger and the community.
See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! 🤖
Special thanks to Mario Zechner for his support and for pi-mono.
Thanks to all clawtributors:
...
Read the original on github.com »
Moltbot is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant.
If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.
Preferred setup: run the onboarding wizard (moltbot onboard). It walks through gateway, workspace, channels, and skills. The CLI wizard is the recommended path and works on macOS, Linux, and Windows (via WSL2; strongly recommended). Works with npm, pnpm, or bun. New install? Start here: Getting started
Model note: while any model is supported, I strongly recommend Anthropic Pro/Max (100/200) + Opus 4.5 for long‑context strength and better prompt‑injection resistance. See Onboarding.
npm install -g moltbot@latest
# or: pnpm add -g moltbot@latest
moltbot onboard --install-daemon
The wizard installs the Gateway daemon (launchd/systemd user service) so it stays running. Legacy note: clawdbot remains available as a compatibility shim.
moltbot onboard --install-daemon
moltbot gateway --port 18789 --verbose
# Send a message
moltbot message send --to +1234567890 --message "Hello from Moltbot"
# Talk to the assistant (optionally deliver back to any connected channel: WhatsApp/Telegram/Slack/Discord/Google Chat/Signal/iMessage/BlueBubbles/Microsoft Teams/Matrix/Zalo/Zalo Personal/WebChat)
moltbot agent --message "Ship checklist" --thinking high
Prefer pnpm for builds from source. Bun is optional for running TypeScript directly.
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm ui:build # auto-installs UI deps on first run
pnpm build
pnpm moltbot onboard --install-daemon
# Dev loop (auto-reload on TS changes)
pnpm gateway:watch
Note: pnpm moltbot … runs TypeScript directly (via tsx). pnpm build produces dist/ for running via Node / the packaged moltbot binary.
* DM pairing (dmPolicy="pairing" / channels.discord.dm.policy="pairing" / channels.slack.dm.policy="pairing"): unknown senders receive a short pairing code and the bot does not process their message.
* Approve with: moltbot pairing approve (the sender is then added to a local allowlist store).
* Public inbound DMs require an explicit opt-in: set dmPolicy="open" and include "*" in the channel allowlist (allowFrom / channels.discord.dm.allowFrom / channels.slack.dm.allowFrom).
Moltbot can auto-configure Tailscale Serve (tailnet-only) or Funnel (public) while the Gateway stays bound to loopback. Configure gateway.tailscale.mode:
* serve: tailnet-only HTTPS via tailscale serve (uses Tailscale identity headers by default).
* gateway.bind must stay loopback when Serve/Funnel is enabled (Moltbot enforces this).
* Serve can be forced to require a password by setting gateway.auth.mode: "password" or gateway.auth.allowTailscale: false.
* Funnel refuses to start unless gateway.auth.mode: "password" is set.
It’s perfectly fine to run the Gateway on a small Linux instance. Clients (macOS app, CLI, WebChat) can connect over Tailscale Serve/Funnel or SSH tunnels, and you can still pair device nodes (macOS/iOS/Android) to execute device‑local actions when needed.
* Gateway host runs the exec tool and channel connections by default.
* Device nodes run device‑local actions (system.run, camera, screen recording, notifications) via node.invoke.
In short: exec runs where the Gateway lives; device actions run where the device lives.
The macOS app can run in node mode and advertises its capabilities + permission map over the Gateway WebSocket (node.list / node.describe). Clients can then execute local actions via node.invoke:
* system.run runs a local command and returns stdout/stderr/exit code; set needsScreenRecording: true to require screen-recording permission (otherwise you’ll get PERMISSION_MISSING).
* system.notify posts a user notification and fails if notifications are denied.
* canvas.*, camera.*, screen.record, and location.get are also routed via node.invoke and follow TCC permission status.
* Use /elevated on|off to toggle per‑session elevated access when enabled + allowlisted.
* Gateway persists the per‑session toggle via sessions.patch (WS method) alongside thinkingLevel, verboseLevel, model, sendPolicy, and groupActivation.
* Use these to coordinate work across sessions without jumping between chat surfaces.
ClawdHub is a minimal skill registry. With ClawdHub enabled, the agent can search for skills automatically and pull in new ones as needed.
Send these in WhatsApp/Telegram/Slack/Google Chat/Microsoft Teams/WebChat (group commands are owner-only):
* /new or /reset — reset the session
The Gateway alone delivers a great experience. All apps are optional and add extra features.
If you plan to build/run companion apps, follow the platform runbooks below.
* Menu bar control for the Gateway and health.
Note: signed builds required for macOS permissions to stick across rebuilds (see docs/mac/permissions.md).
* Pairs as a node via the Bridge.
* Pairs via the same Bridge + pairing flow as iOS.
agent: {
  model: "anthropic/claude-opus-4-5"
}
* Default: tools run on the host for the main session, so the agent has full access when it’s just you.
* Group/channel safety: set agents.defaults.sandbox.mode: "non-main" to run non‑main sessions (groups/channels) inside per‑session Docker sandboxes; bash then runs in Docker for those sessions.
* Allowlist who can talk to the assistant via channels.whatsapp.allowFrom.
* If channels.whatsapp.groups is set, it becomes a group allowlist; include "*" to allow all.
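A sketch combining the two WhatsApp settings above (the key paths come from this section; the array values are illustrative assumptions):

channels: {
  whatsapp: {
    allowFrom: ["+15551234567"],  // hypothetical allowlisted sender
    groups: ["*"]                 // group allowlist; "*" allows all groups
  }
}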
* Optional: set channels.telegram.groups (with channels.telegram.groups."*".requireMention); when set, it is a group allowlist (include "*" to allow all). Also set channels.telegram.allowFrom or channels.telegram.webhookUrl as needed.
channels: {
  telegram: {
    botToken: "123456:ABCDEF"
  }
}
* Optional: set commands.native, commands.text, or commands.useAccessGroups, plus channels.discord.dm.allowFrom, channels.discord.guilds, or channels.discord.mediaMaxMb as needed.
channels: {
  discord: {
    token: "1234abcd"
  }
}
* macOS only; Messages must be signed in.
* If channels.imessage.groups is set, it becomes a group allowlist; include "*" to allow all.
* Allowlist who can talk via msteams.allowFrom; group access via msteams.groupAllowFrom or msteams.groupPolicy: "open".
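A sketch of the Teams access settings (the top-level msteams key and setting names come from this section; the values are illustrative assumptions):

msteams: {
  allowFrom: ["user@example.com"],  // hypothetical allowlisted sender
  groupPolicy: "open"               // or set groupAllowFrom for a group allowlist
}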
browser: {
  enabled: true,
  color: "#FF4500"
}
Use these when you’re past the onboarding flow and want the deeper reference.
Moltbot was built for Molty, a space lobster AI assistant. 🦞 by Peter Steinberger and the community.
See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! 🤖
Special thanks to Mario Zechner for his support and for pi-mono.
Thanks to all clawtributors:
...
Read the original on github.com »
Researchers working in southern Greece have identified the oldest known handheld wooden tools, dated to about 430,000 years ago. The objects came from Marathousa 1, a site in the Megalopolis Basin in the central Peloponnese. The area once held a lakeshore during the Middle Pleistocene, a period between about 774,000 and 129,000 years ago.
Excavations at Marathousa 1 have produced stone flakes, animal bones with cut marks, and the remains of a straight-tusked elephant. Archaeologists link these finds to repeated visits by early humans who processed large carcasses near water. Waterlogged sediments at the site created low oxygen conditions. Such conditions slowed decay and preserved pieces of wood that usually rot away over long spans of time.
Researchers examined dozens of wood fragments under microscopes. They studied surface marks, internal structure, and wood species. This work helped the team separate human modification from damage caused by roots, sediment pressure, or animals. Two fragments showed clear signs of shaping and use.
One piece comes from alder. The surface shows cut marks from stone tools and rounded areas formed through repeated contact with soil. The shape and wear fit use as a digging stick near the lakeshore. Such a tool would have helped with loosening wet ground or extracting plant foods. The second artifact, a very small fragment from willow or poplar, shows carved edges and smoothing from handling. The size points to a finger-held tool. Researchers link this piece to fine tasks, such as adjusting stone flakes during tool production.
A third alder fragment drew attention during sorting. Deep parallel grooves run across the surface, with crushed fibers along the edges. Microscopic study matched these marks to claw damage from a large carnivore, likely a bear. This evidence places large predators at the same location where humans butchered elephants. Both groups used the lakeshore and may have competed for access to carcasses.
Before this work, the oldest known handheld wooden tools came from sites in Africa, Europe, and Asia, all younger than 430,000 years. One older wooden structure from Kalambo Falls in Zambia dates to about 476,000 years ago, but researchers interpret that wood as part of a built feature rather than a handheld implement. The Marathousa finds push the record for shaped wooden tools back by at least 40,000 years and provide the first such evidence from southeastern Europe.
The tools show careful selection of local trees that grow in wet settings, including alder, willow, and poplar. Alongside stone and bone artifacts from the same layers, the wooden pieces show broad knowledge of natural materials and varied technical skill during the Middle Pleistocene.
...
Read the original on archaeologymag.com »
The text based internet can be exciting, informative, and fun. Using telnet, you can access a variety of these resources on the internet. Below you’ll find lists of a few places to get you started.
If you have an interesting item to add, just send an email to us:
Rainmaker was pretty great, and it lasted at least until 2018. I don’t recall what happened to it.
* nyancat.dakko.us
ANSI art animation of “poptart cat”, with support for many different terminals (cool screenshots!)
The telnet server is offline, but the website is still up for this one!
Both are offline at the time of this update.
A large active listing of Dial-Up and Telnet accessible Bulletin Board Systems on the Internet:
Jumpjet has a nice list of telnet locations organized by category:
http://www.jumpjet.info/Offbeat-Internet/Public/TelNet/url.htm
Mudconnect keeps a good list of muds and moos:
Hytelnet is an old (and now unmaintained) directory:
http://www.lights.ca/hytelnet/
...
Read the original on telnet.org »
Thinking about doing the thing is not doing the thing.
Dreaming about doing the thing is not doing the thing.
Visualizing success from doing the thing is not doing the thing.
Waiting to feel ready to do the thing is not doing the thing.
Talking about doing the thing is not doing the thing.
Explaining the thing to others is not doing the thing.
Arguing online about the thing is not doing the thing.
Announcing that you’ll start the thing is not doing the thing.
Listening to podcasts about doing the thing is not doing the thing.
Watching tutorials about doing the thing is not doing the thing.
Reading threads about how others did the thing is not doing the thing.
Planning the perfect system for the thing is not doing the thing.
Buying tools for the thing is not doing the thing.
Reorganizing your workspace for the thing is not doing the thing.
Feeling guilty about not doing the thing is not doing the thing.
Being “busy” instead of doing the thing is not doing the thing.
Telling yourself you’ll start tomorrow is not doing the thing.
Failing while doing the thing is doing the thing.
Doing it badly is doing the thing.
Doing it timidly is doing the thing.
Doing a small part of the thing is doing the thing.
Writing a blog about doing the thing is not doing the thing.
I should probably get back to work.
...
Read the original on www.softwaredesign.ing »