I’ve been obsessed with procedural maps since I was a kid rolling dice on the random dungeon tables in the AD&D Dungeon Master’s Guide. There was something magical about it — you didn’t design the dungeon, you discovered it, one room at a time, and the dice decided whether you got a treasure chamber or a dead end full of rats.
Years later, I decided to build my own map generator. It creates little medieval island worlds — with roads, rivers, coastlines, cliffs, forests, and villages — entirely procedurally. Built with Three.js WebGPU and TSL shaders, about 4,100 hex cells across 19 grids, generated in ~20 seconds.
If you’ve ever played Carcassonne, you already understand WFC. You have a stack of tiles and place them so everything lines up. Each tile has edges — grass, road, city. Adjacent tiles must have matching edges. A road edge must connect to another road edge. Grass must meet grass. The only difference is that the computer does it faster, and complains less when it gets stuck.
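The edge-matching rule can be sketched in a few lines. This is an illustrative toy (the tile names and structure are mine, not the project's actual tile set), but it captures the core constraint: two tiles may sit next to each other only if the touching edges carry the same label.

```python
# Minimal sketch of WFC-style edge matching. Each tile lists one edge type
# per side; on a hex grid there are 6 sides, and side i of one tile touches
# the opposite side (i + 3) of its neighbor.

def compatible(tile_a, tile_b, side, n_sides=6):
    """Can tile_b sit on `side` of tile_a?"""
    opposite = (side + n_sides // 2) % n_sides
    return tile_a["edges"][side] == tile_b["edges"][opposite]

grass = {"edges": ["grass"] * 6}
road  = {"edges": ["road", "grass", "grass", "road", "grass", "grass"]}

print(compatible(road, road, 0))   # True: road edge meets road edge
print(compatible(road, grass, 0))  # False: road edge meets grass edge
```

The solver's whole job is to keep placing tiles while this predicate holds on every seam.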
The twist: hex tiles have 6 edges instead of 4. That’s 50% more constraints per tile, and the combinatorial explosion is real. Square WFC is well-trodden territory. Hex WFC is… less so.
WFC is reliable for small grids. But as the grid gets bigger, the chance of painting yourself into a dead end goes up fast. A 217-cell hex grid almost never fails. A 4123-cell grid fails regularly.
The solution: modular WFC. Instead of one giant solve, the map is split into 19 hexagonal grids arranged in two rings around a center — about 4,100 cells total. Each grid is solved independently, but it has to match whatever tiles were already placed in neighboring grids. Those border tiles become fixed constraints.
And sometimes those constraints are simply incompatible. No amount of backtracking inside the current grid can fix a problem that was baked in by a neighbor. This is where I spent a lot of dev time.
Here’s the dirty secret of WFC: it fails. A lot. You make a series of random choices, propagate constraints, and eventually back yourself into a corner where some cell has zero valid options left. Congratulations, the puzzle is unsolvable.
The textbook solution is backtracking — undo your last decision and try a different tile. My solver tracks every possibility it removes during propagation (a “trail” of deltas), so it can rewind cheaply without copying the entire grid state. It’ll try up to 500 backtracks before giving up.
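The trail idea can be sketched like this (a hypothetical structure, not the author's actual solver): every possibility pruned during propagation is logged, so undoing a decision means re-adding a few entries instead of copying the whole grid.

```python
# Sketch of a "trail of deltas" for cheap backtracking: log every removed
# possibility, mark where each decision started, and rewind by re-adding.

class Trail:
    def __init__(self):
        self.deltas = []   # (cell, removed_tile) pairs, in removal order
        self.marks = []    # trail indices where each decision began

    def mark(self):
        self.marks.append(len(self.deltas))

    def remove(self, domains, cell, tile):
        domains[cell].discard(tile)
        self.deltas.append((cell, tile))

    def rewind(self, domains):
        start = self.marks.pop()
        while len(self.deltas) > start:
            cell, tile = self.deltas.pop()
            domains[cell].add(tile)

domains = {0: {"grass", "road"}, 1: {"grass", "road"}}
trail = Trail()
trail.mark()                      # about to make a random choice
trail.remove(domains, 1, "road")  # propagation prunes a possibility
trail.rewind(domains)             # contradiction found: undo cheaply
print(sorted(domains[1]))         # ['grass', 'road'] is restored
```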
But backtracking alone isn’t enough. The real problem is cross-grid boundaries.
After many failed approaches, I landed on a layered recovery system:
Layer 1: Unfixing. During the initial constraint propagation, if a neighbor cell creates a contradiction, the solver converts it from a fixed constraint back into a solvable cell. Its own neighbors (two cells out — “anchors”) become the new constraints. This is cheap and handles easy cases.
Layer 2: Local-WFC. If the main solve fails, the solver runs a mini-WFC on a small radius-2 region around the problem area — re-solving 19 cells in the overlap area to create a more compatible boundary. Up to 5 attempts, each targeting a different problem cell. Local-WFC was the breakthrough. Instead of trying to solve the impossible, go back and change the problem.
Layer 3: Drop and hide. Last resort. Drop the offending neighbor cell entirely and place mountain tiles to cover the seams. Mountains are great — their cliff edges match anything, and they look intentional. Nobody questions a mountain.
This map isn’t flat — it has 5 levels of elevation. Ocean and Grass start at level 0, but slopes and cliffs can move up or down a level. Low slopes go up 1 level, high slopes go up 2 levels. A road tile at level 3 needs to connect to another road tile at level 3, or a slope tile that transitions between levels. Get it wrong and you end up with roads that dead-end into cliff faces or rivers flowing uphill into the sky. The elevation axis turns a 2D constraint problem into a 3D one, and it’s where a lot of the tile variety (and a lot of the solver failures) comes from.
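One way to fold elevation into the same machinery (my simplification, not necessarily how the project encodes it) is to make the level part of the edge label. A slope tile then simply exposes different levels on different sides, and plain equality still decides compatibility.

```python
# Sketch: an edge is now (kind, level) instead of just kind. A low slope's
# uphill side exposes level 3 while its downhill side exposes level 2, so
# the slope is what bridges two otherwise-incompatible road edges.

def edges_match(edge_a, edge_b):
    kind_a, level_a = edge_a
    kind_b, level_b = edge_b
    return kind_a == kind_b and level_a == level_b

road_l3  = ("road", 3)
road_l2  = ("road", 2)
slope_up = ("road", 3)   # the uphill face of a level-2-to-3 slope tile

print(edges_match(road_l3, slope_up))  # True: the slope bridges the gap
print(edges_match(road_l3, road_l2))   # False: a road dead-ending into a cliff
```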
The better approach: cube coordinates (q, r, s where s = -q-r). It’s a 3D coordinate system for the three hex axes. Neighbor finding becomes trivial — just add or subtract 1 from two coordinates.
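This is the standard cube-coordinate construction from Red Blob Games: q + r + s is always zero, and each of the six neighbors is reached by adding +1 to one coordinate and -1 to another.

```python
# Hex neighbors in cube coordinates. Every direction vector sums to zero,
# so adding one to a valid coordinate always yields another valid one.

CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def neighbors(q, r, s):
    assert q + r + s == 0, "invalid cube coordinate"
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]

for n in neighbors(0, 0, 0):
    assert sum(n) == 0          # every neighbor is still a valid hex
print(len(neighbors(0, 0, 0)))  # 6
```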
The good news is that WFC doesn’t really care about geometry. It’s concerned with which edges match which — it’s essentially a graph problem. The hex coordinates only matter for rendering and for the multi-grid layout, where the 19 grids are themselves arranged as a hex-of-hexes with their own offset positions.
If you’ve ever worked with hex grids, you owe Amit Patel at Red Blob Games a debt of gratitude. His hex grid guide is the definitive reference.
Trees, Buildings, and Why Not Everything Should Be WFC
Early on, I tried using WFC for tree and building placement. Bad idea. WFC is great at local edge matching but terrible at large-scale patterns. You’d get trees scattered randomly instead of clustered into forests, or buildings spread evenly instead of gathered into villages.
The solution: good old Perlin noise. A global noise field determines tree density and building placement, completely separate from WFC. Areas where the noise is above a threshold get trees; slightly different noise drives buildings. This gives you organic clustering — forests, clearings, villages — that WFC could never produce. I also used some additional logic to place buildings at the end of roads, ports and windmills on coasts, henges on hilltops etc.
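The thresholding step looks roughly like this. For brevity I use a simple deterministic stand-in instead of true Perlin noise; the point is only that a smooth field plus a threshold produces clumps rather than uniform scatter.

```python
# Thresholding a smooth noise field to get clustered placement (sketch).
import math

def fake_noise(x, y):
    # smooth, deterministic stand-in for a Perlin sample in [0, 1]
    return 0.5 + 0.5 * math.sin(x * 0.7) * math.cos(y * 0.7)

TREE_THRESHOLD = 0.75

def has_tree(x, y):
    return fake_noise(x, y) > TREE_THRESHOLD

# Because the field is smooth, cells above the threshold clump together,
# which is what gives you forests and clearings instead of salt-and-pepper.
forest_cells = [(x, y) for x in range(20) for y in range(20) if has_tree(x, y)]
print(len(forest_cells) > 0)  # True
```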
WFC handles the terrain. Noise handles the decorations. Each tool does what it’s good at.
I wanted that ‘Zelda: The Wind Waker’ cartoon shimmer on the water surface. Originally I tried generating caustics procedurally with four layers of Voronoi noise. This turned out to be very GPU heavy and did not look great. The solution was sampling a small scrolling caustic texture with a simple noise mask, which looks way better and is super cheap. Sometimes the easy solution is the correct solution.
Waves are sine bands that radiate outward from coastlines, inspired by Bad North’s gorgeous shoreline effect. To know “how far from the coast” each pixel is, the system renders a coast mask — a top down orthographic render of the entire map with white for land and black for water — then dilates and blurs it into a gradient. The wave shader reads this gradient to place animated sine bands at regular distance intervals, with noise to break up the pattern.
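The band placement is essentially a sine over the coast gradient. Here is the shader logic transcribed to Python for illustration (parameter names and values are mine): g fades from 1 at the coast to 0 offshore, the sine turns that into repeating bands, and the time term scrolls them.

```python
# Sketch of shoreline sine bands driven by a coast-distance gradient.
import math

def wave_band(g, t, bands=4.0, sharpness=8.0):
    """g: coast gradient in [0, 1]; t: time. Returns band intensity."""
    s = math.sin((g * bands - t) * 2.0 * math.pi)
    return max(0.0, s) ** sharpness  # keep only crests, sharpen into lines

print(round(wave_band(0.0625, 0.0), 3))  # 1.0: right on a crest
print(round(wave_band(0.5, 0.0), 3))     # 0.0: dark water between crests
```

In the real shader a noise term perturbs g before the sine, which is what breaks up the perfectly regular rings.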
This worked great on straight coastlines. In concave coves and inlets, the wave lines got thick and ugly. The blur-based gradient spreads the same value range over a wider physical area in coves, stretching the wave bands out.
I tried screen-space derivatives to detect gradient stretching, which worked at one zoom level but broke at others. I tried texture-space gradient magnitude to detect opposing coast edges canceling out, but it only caught narrow rivers, not the actual problem coves.
The fundamental issue: blur encodes “how much land is nearby,” not “how far is the nearest coast edge.” These are different questions, and no amount of post-processing the blur can extract true distance.
The fix was a CPU-side “surroundedness” probe that checks each water cell’s neighbors to detect coves, writing a separate mask texture that thins the waves in enclosed areas. It’s kind of a hack, but it works, and the wave bands thin out nicely inside the coves.
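The probe itself is simple. This sketch uses a hypothetical grid representation (a dict of axial coordinates to terrain strings): count how many of a water cell's six neighbors are land, and treat heavily surrounded water as a cove.

```python
# Sketch of a "surroundedness" probe for detecting coves on a hex grid.

def hex_neighbors(cell):
    q, r = cell
    return [(q+1, r), (q, r+1), (q-1, r+1), (q-1, r), (q, r-1), (q+1, r-1)]

def surroundedness(grid, cell):
    land = sum(1 for n in hex_neighbors(cell) if grid.get(n) == "land")
    return land / 6.0   # 0.0 = open sea, 1.0 = fully enclosed

# toy grid: water at (0, 0) ringed by land on 4 of its 6 sides
grid = {(1, 0): "land", (0, 1): "land", (-1, 1): "land", (-1, 0): "land",
        (0, -1): "water", (1, -1): "water"}

print(round(surroundedness(grid, (0, 0)), 3))  # 0.667: enclosed enough to thin
```

The resulting per-cell values get baked into a mask texture that the wave shader multiplies against band intensity.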
No dice required this time — but the feeling is the same. You hit a button, the map builds itself, and you discover what the algorithm decided to put there. It’s super satisfying to see the road and river systems matching up perfectly. Every time it’s different, and every time I find myself exploring for a while. The kid rolling dice on the dungeon tables would be into this.
...
Read the original on felixturner.github.io »
Last week, Dan Blanchard, the maintainer of chardet—a Python library for detecting text encodings used by roughly 130 million projects a month—released a new version. Version 7.0 is 48 times faster than its predecessor, supports multiple cores, and was redesigned from the ground up. Anthropic’s Claude is listed as a contributor. The license changed from LGPL to MIT.
Blanchard’s account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch. The resulting code shares less than 1.3% similarity with any prior version, as measured by JPlag. His conclusion: this is an independent new work, and he is under no obligation to carry forward the LGPL. Mark Pilgrim, the library’s original author,
opened a GitHub issue to object. The LGPL requires that modifications be distributed under the same license, and a reimplementation produced with ample exposure to the original codebase cannot, in Pilgrim’s view, pass as a clean-room effort.
The dispute drew responses from two prominent figures in the open source world. Armin Ronacher, the creator of Flask, welcomed the relicensing. Salvatore Sanfilippo (antirez), the creator of Redis,
published a broader defense of AI reimplementation, grounding it in copyright law and the history of the GNU project. Both conclude, by different routes, that what Blanchard did is legitimate. I respect both writers, and I think both are wrong—or more precisely, both are evading the question that actually matters.
That question is this: does legal mean legitimate? Neither piece answers it. Both move from “this is legally permissible” to “this is therefore fine,” without pausing at the gap between those two claims. Law sets a floor; clearing it does not mean the conduct is right. That gap is where this essay begins.
Antirez builds his case on history. When the GNU project reimplemented the UNIX userspace, it was lawful. So was Linux. Copyright law prohibits copying “protected expressions”—the actual code, its structure, its specific mechanisms—but it does not protect ideas or behavior. AI-assisted reimplementation occupies the same legal ground. Therefore, it is lawful.
The legal analysis is largely correct, and I am not disputing it. The problem lies in what antirez does next: he presents the legal conclusion as if it were also a social one, and uses a historical analogy that, examined more carefully, argues against his own position.
When GNU reimplemented the UNIX userspace, the vector ran from proprietary to free. Stallman was using the limits of copyright law to turn proprietary software into free software. The ethical force of that project did not come from its legal permissibility—it came from the direction it was moving, from the fact that it was expanding the commons. That is why people cheered.
The vector in the chardet case runs the other way. Software protected by a copyleft license—one that guarantees users the right to study, modify, and redistribute derivative works under the same terms—has been reimplemented under a permissive license that carries no such guarantee. This is not a reimplementation that expands the commons. It is one that removes the fencing that protected the commons. Derivative works built on chardet 7.0 are under no obligation to share their source code. That obligation, which applied to a library downloaded 130 million times a month, is now gone.
Antirez does not address this directional difference. He invokes the GNU precedent, but that precedent is a counterexample to his conclusion, not a supporting one.
Ronacher’s argument is different. He discloses upfront that he has a stake in the outcome: “I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.” He goes on to write that he considers “the GPL to run against that spirit by restricting what can be done with it”—the spirit being that society is better off when we share.
This claim rests on a fundamental misreading of what the GPL does.
Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL’s conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.
The requirement that improvements be returned to the commons is not a mechanism that suppresses sharing. It is a mechanism that makes sharing recursive and self-reinforcing. The claim that imposing contribution obligations on users of a commons undermines sharing culture does not hold together logically.
The contrast with the MIT license clarifies the point. Under MIT, anyone may take code, improve it, and close it off into a proprietary product. You can receive from the commons without giving back. If Ronacher calls this structure “more share-friendly,” he is using a concept of sharing with a specific directionality built in: sharing flows toward whoever has more capital and more engineers to take advantage of it.
The historical record bears this out. In the 1990s, companies routinely absorbed GPL code into proprietary products—not because they had chosen permissive licenses, but because copyleft enforcement was slack. The strengthening of copyleft mechanisms closed that gap. For individual developers and small projects without the resources to compete on anything but reciprocity, copyleft was what made the exchange approximately fair.
The creator of Flask knows this distinction. If he elides it anyway, the argument is not naïve—it is convenient.
The most interesting moment in Ronacher’s piece is not the argument but a detail he mentions in passing: Vercel reimplemented GNU Bash using AI and published it, then got visibly upset when Cloudflare
reimplemented Next.js the same way.
Ronacher notes this as an irony and moves on. But the irony cuts deeper than he lets on. Next.js is MIT licensed. Cloudflare’s vinext did not violate any license—it did exactly what Ronacher calls a contribution to the culture of openness, applied to a permissively licensed codebase. Vercel’s reaction had nothing to do with license infringement; it was purely competitive and territorial. The implicit position is: reimplementing GPL software as MIT is a victory for sharing, but having our own MIT software reimplemented by a competitor is cause for outrage. This is what the claim that permissive licensing is “more share-friendly” than copyleft looks like in practice. The spirit of sharing, it turns out, runs in one direction only: outward from oneself.
Ronacher registers the contradiction and does not stop. “This development plays into my worldview,” he writes. When you present evidence that cuts against your own position, acknowledge it, and then proceed to your original conclusion unchanged, that is a signal that the conclusion preceded the argument.
Back to the question posed at the start. Is legal the same as legitimate?
Antirez closes his careful legal analysis as though it settles the matter. Ronacher acknowledges that “there is an obvious moral question here, but that isn’t necessarily what I’m interested in.” Both pieces treat legal permissibility as a proxy for social legitimacy. But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has done something legal, but not therefore fine. Legality is a necessary condition; it is not a sufficient one.
In the chardet case, the distinction is sharper still. What the LGPL protected was not Blanchard’s labor alone. It was a social compact agreed to by everyone who contributed to the library over twelve years. The terms of that compact were: if you take this and build on it, you share back under the same terms. This compact operated as a legal instrument, yes, but it was also the foundation of trust that made contribution rational. The fact that a reimplementation may qualify legally as a new work, and the fact that it breaks faith with the original contributors, are separate questions. If a court eventually rules in Blanchard’s favor, that ruling will tell us what the law permits. It will not tell us that the act was right.
Zoë Kooyman, executive director of the FSF, put it plainly: “Refusing to grant others the rights you yourself received as a user is highly antisocial, no matter what method you use.”
Reading this debate, I keep returning to a question about position. From where are these two writers looking at the situation?
Antirez created Redis. Ronacher created Flask. Both are figures at the center of the open source ecosystem, with large audiences and well-established reputations. For them, falling costs of AI reimplementation means something specific: it is easier to reimplement things they want in a different form. Ronacher says explicitly that he had begun reimplementing GNU Readline precisely because of its copyleft terms.
For the people who have spent years contributing to a library like chardet, the same shift in costs means something else entirely: the copyleft protection around their contributions can be removed. The two writers are speaking from the former position to people in the latter, telling them that this was always lawful, that historical precedent supports it, and that the appropriate response is adaptation.
When positional asymmetry of this kind is ignored, and the argument is presented as universal analysis, what you get is not analysis but rationalization. Both writers arrive at conclusions that align precisely with their own interests. Readers should hold that fact in mind.
Bruce Perens, who wrote the original Open Source Definition, said: “The entire economics of software development are dead, gone, over, kaput!” He meant it as an alarm. Antirez, from a similar assessment of the situation, draws the conclusion: adapt. Ronacher says he finds the direction exciting.
None of the three responses addresses the central question. When copyleft becomes technically easier to circumvent, does that make it less necessary, or more?
I think more. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.
That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.
This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.
In previous writing, I argued for a training copyleft (TGPL) as the next step in this line of development. The chardet situation suggests the argument has to go further: to a specification copyleft covering the layer below source code. If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard’s own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.
The history of the GPL is the history of licensing tools evolving in response to new forms of exploitation: GPLv2 to GPLv3, then AGPL. What drove each evolution was not a court ruling but a community reaching a value judgment first and then seeking legal instruments to express it. The same sequence is available now. Whatever courts eventually decide about AI reimplementation, the question we need to answer first is not a legal one. It is a social one. Do those who take from the commons owe something back? I think they do. That judgment does not require a verdict.
What makes the pieces by antirez and Ronacher worth reading is not that they are right. It is that they make visible, with unusual clarity, what they are choosing not to see. When legality is used as a substitute for a value judgment, the question that actually matters gets buried in the footnotes of a law it has already outgrown.
...
Read the original on writings.hongminhee.org »
A Broward County judge has dismissed a red-light camera ticket, ruling that the state law used to issue the citation improperly shifts the burden of proof onto vehicle owners.
In a 21-page order signed March 3, Judge Steven P. DeLuca granted a defendant’s motion to dismiss a photo-enforced traffic citation issued under Florida’s red-light camera law.
The case involved a Sunrise red-light camera citation issued to a registered vehicle owner after automated cameras captured a vehicle entering an intersection against a red signal. The defendant argued the statute unconstitutionally requires the registered owner to prove they were not driving — instead of requiring the government to prove who was behind the wheel.
In the order, the court found that red-light camera cases, although labeled as civil infractions, function as “quasi-criminal” proceedings because they can result in monetary penalties, a formal finding of guilt, and consequences tied to a driver’s record.
Under Florida Statute 316.0083, once a camera captures a violation, the registered owner is presumed responsible unless they submit an affidavit identifying another driver. The court ruled that the framework improperly shifts the burden of proof away from the state.
Because traffic infractions that move to county court must be proven “beyond a reasonable doubt,” the judge wrote that the statute’s presumption violates constitutional due process protections.
As a result, the citation in this case was formally dismissed.
Click here to view the PDF file
Joel Mumford is an attorney with The Ticket Clinic. He said that if the case is quasi-criminal, meaning almost criminal or criminal-like, the state has to follow procedural due process.
“The state or the agencies that issue the tickets, it’s their burden to prove all the elements of the crime beyond and to the extent of each and every reasonable doubt. And the first element, which should be, who’s driving the car,” Mumford said. “The statute in Florida presumes that the registered owner is the driver of the car.”
Mumford said that even though the court order only applies in Broward County, it could open the door to similar challenges in other counties across the state.
“What could probably happen is they the court could then get an appeal, and then if it goes up to the district court of appeal level and then make a decision on it, depending on what that says, that could then be applied to the entire state, if there’s no similar appellate cases throughout the state. So that could make it statewide,” he said.
Drivers in Boynton Beach want to see the red-light cameras gone.
Fifteen red-light camera systems are already running at seven intersections in Boynton Beach alone. Congress Avenue and Gateway Boulevard is just one of them.
“I’ve been ticketed here twice, and it’s ridiculous because it’s just not fair,” said one driver, who didn’t want to be identified. “The person that does the determination when you ran the light, it’s just random. Whoever they want to pick, pick you to say, okay, you’re gonna pay the ticket.”
That driver had to pay his $158 ticket, but he’s hopeful Palm Beach County can soon follow suit. He wants the red-light cameras gone, so future violations could be thrown out.
“I think they need to outlaw it and get rid of it,” he said.
Advocacy group StopTheCams, which has long opposed automated traffic enforcement, called the ruling a major victory. In a press release, the group said the decision confirms what critics have argued for years — that red-light camera laws punish vehicle owners without requiring proof they committed the violation.
Supporters of red-light cameras argue the systems improve safety by deterring dangerous driving at intersections. Florida’s red-light camera law, known as the Mark Wandall Traffic Safety Act, allows local governments to use automated enforcement systems.
It remains unclear whether the ruling will be appealed or how broadly it could affect similar cases statewide. For now, the decision applies to this specific case in Broward County, but legal observers say it could fuel renewed challenges to Florida’s red-light camera enforcement system.
...
Read the original on cbs12.com »
Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle’s debt-fueled expansion.
OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.
The current Abilene site is expected to use Nvidia’s Blackwell processors, and the power isn’t projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia’s next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality.
Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity “false and incorrect,” but the post only said existing projects are on track and didn’t address expansion plans.
Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger.
It’s a logical decision for OpenAI, which doesn’t want older chips. Nvidia used to release a new generation of data center processors every two years. Now, CEO Jensen Huang has the company shipping one every year, and each generation offers a leap in capability. Vera Rubin, unveiled at CES in January and already in production, delivers five times the inference performance of Blackwell.
For the companies building frontier models, the smallest improvement in performance could equate to huge gaps in model benchmarks and rankings, which are closely followed by developers and translate directly to usage, revenue, and valuation.
That all points to a bigger problem at play. For infrastructure companies, securing a site, connecting power and standing up a facility takes 12 to 24 months at minimum. But customers want the latest and greatest, and they’re tracking the yearly chip upgrades.
Oracle’s added challenge is that it’s the only hyperscaler funding its buildout primarily with debt, to the tune of $100 billion and counting. Google, Amazon and Microsoft, by contrast, are leaning on their enormous cash-generating businesses.
Meanwhile, Oracle partner Blue Owl is declining to fund an additional facility, and plans to cut up to 30,000 jobs.
Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.
The stock is down 23% so far this year and has lost over half its value since peaking in September.
Beyond Oracle, GPU depreciation is a risk for the broader market and could have ramifications across the AI landscape. Every infrastructure deal signed today may result in a commitment to outdated hardware before the power is even connected.
...
Read the original on www.cnbc.com »
After several intense and incredible years building Bluesky from the ground up, I’ve decided to step back as CEO and transition to a new role as Bluesky’s Chief Innovation Officer.
In 2019, I set out to build an open protocol for social media, with the goal of enabling the development of a new generation of applications designed to empower users. The Bluesky app started off as a reference client for the protocol, but has since grown into a thriving platform with over 40 million users. Last year, we grew a world-class team, expanded the AT Protocol ecosystem, and proved that a values-driven social network could thrive at scale.
As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things. As part of this transition, Toni Schneider, former CEO of Automattic and partner at True Ventures, will join our team as interim CEO, while our board runs a search for a permanent chief executive.
Toni believes deeply in the Bluesky mission, and has been an advisor to the company and me personally for over a year. Both Automattic and True Ventures are also investors in Bluesky, and support the development of a more open, user-driven internet. Automattic, the company behind WordPress.com, has built their business on open source software, working to make the web a better and more participatory place. Toni was a key part of guiding that mission, and is someone who leads with genuine curiosity, takes the long view, and deeply understands what it means to build a company around a mission. I am confident that he is the right person to lead us into this next chapter.
Scaling up this company has been a learning experience unlike anything else. I’ve grown a lot as a leader and had the privilege of assembling the best team I’ve ever worked with. As we’ve grown, I’ve found that people thrive when they’re in a role where their passions overlap with their strengths. This is as true for me as it is for our team. I’m most energized by exploring new ideas, bringing a vision to life, and helping people discover their strengths. Transitioning to a more focused role where I can do what brings me energy is my way of putting that belief into practice.
I’m excited to dig into the next frontier of what decentralized social can be, while bringing Toni in to support our team as an experienced operator and leader. The work that got us here was just the beginning, and I’m grateful to keep building alongside this team and community.
...
Read the original on bsky.social »
My LinkedIn and Twitter feeds are full of screenshots from the recent Forbes article on Cursor claiming that Anthropic’s $200/month Claude Code Max plan can consume $5,000 in compute. The relevant quote:
Today, that subsidization appears to be even more aggressive, with that $200 plan able to consume about $5,000 in compute, according to a different person who has seen analyses on the company’s compute spend patterns.
This is being shared as proof that Anthropic is haemorrhaging money on inference. It doesn’t survive basic scrutiny.
I’m fairly confident the Forbes sources are confusing retail API prices with actual compute costs. These are very different things.
Anthropic’s current API pricing for Opus 4.6 is $5 per million input tokens and $25 per million output tokens. At those prices, yes - a heavy Claude Code Max 20 user could rack up $5,000/month in API-equivalent usage. That maths checks out.
But API pricing is not what it costs Anthropic to serve those tokens.
The best way to estimate what inference actually costs is to look at what open-weight models of similar size are priced at on OpenRouter - where multiple providers compete on price.
Qwen 3.5 397B-A17B is a good comparison point. It’s a large MoE model, broadly comparable in architectural size to what Opus 4.6 is likely to be. So is Kimi K2.5, a 1T-parameter model with 32B active, which is probably approaching the upper limit of what you can serve efficiently.
Here’s what the pricing looks like:
The Qwen 3.5 397B model on OpenRouter (via Alibaba Cloud) costs $0.39 per million input tokens and $2.34 per million output tokens. Compare that to Opus 4.6’s API pricing of $5/$25. Kimi K2.5 is even cheaper at $0.45 per million input tokens and $2.25 per million output tokens.
And this ratio holds for cached tokens too - DeepInfra charges $0.07/MTok for cache reads on Kimi K2.5 vs Anthropic’s $0.50/MTok.
These OpenRouter providers are running a business. They have to cover their compute costs, pay for GPUs, and make a margin. They’re not charities. If so many of them can serve a model of comparable size at ~10% of Anthropic’s API price and stay in business, it is hard for me to believe that they are all taking enormous losses - at roughly the same price point, no less.
If a heavy Claude Code Max user consumes $5,000 worth of tokens at Anthropic’s retail API prices, and the actual compute cost is roughly 10% of that, Anthropic is looking at approximately $500 in real compute cost for the heaviest users.
Against the $200/month subscription fee, that’s a loss of about $300/month on the most extreme power users - not $4,800.
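The back-of-envelope maths above can be sketched in a few lines. All figures are the post's assumptions (the published per-MTok prices, the Forbes $5,000 usage figure, and a ~10% serving-cost fraction), not Anthropic's actual economics:

```python
# Sketch of the article's argument. Every number here is an assumption
# quoted in the post, not a real figure from Anthropic's books.
OPUS_API = {"input": 5.00, "output": 25.00}        # $/MTok, Anthropic retail
QWEN_OPENROUTER = {"input": 0.39, "output": 2.34}  # $/MTok, competitive open-weight pricing

# Open-weight pricing as a fraction of Anthropic's retail price
ratio_in = QWEN_OPENROUTER["input"] / OPUS_API["input"]     # ~8%
ratio_out = QWEN_OPENROUTER["output"] / OPUS_API["output"]  # ~9%

retail_equivalent_usage = 5000   # $/month, the Forbes figure for a heavy user
estimated_cost_fraction = 0.10   # assumption: serving costs ~10% of retail API price
subscription = 200               # $/month, Claude Code Max plan

compute_cost = retail_equivalent_usage * estimated_cost_fraction  # estimated real cost
monthly_margin = subscription - compute_cost                      # negative = loss

print(f"input ratio {ratio_in:.0%}, output ratio {ratio_out:.0%}")
print(f"estimated compute cost ${compute_cost:.0f}, margin ${monthly_margin:.0f}/month")
```

Running this gives an estimated compute cost of $500 and a margin of -$300/month for the heaviest users - the numbers the argument rests on.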
However, most users don’t come anywhere near the limit. Anthropic themselves said, when they introduced weekly caps, that fewer than 5% of subscribers would be affected. I personally use the Max 20x plan and probably consume around 50% of my weekly token budget - and even that is hard to reach without getting serious RSI. At that level of usage, the maths works out to roughly break-even or profitable for Anthropic.
The real story is actually in the article. The $5,000 figure comes from Cursor’s internal analysis. And for Cursor, the number probably is roughly correct - because Cursor has to pay Anthropic’s retail API prices (or close to it) for access to Opus 4.6.
So to provide a Claude Code-equivalent experience using Opus 4.6, it would cost Cursor ~$5,000 per power user per month. But it would cost Anthropic perhaps $500 max.
And the real issue for Cursor is that developers want to use the Anthropic models, even in Cursor itself. They have real “brand awareness”, and they are genuinely better than the cheaper open weights models - for now at least. It’s a real conundrum for them.
Obviously Anthropic isn’t printing free cashflow. The costs of training frontier models, the enormous salaries required to hire top AI researchers, the multi-billion dollar compute commitments - these are genuinely massive expenses that dwarf inference costs.
But on a per-user, per-token basis for inference? I believe Anthropic is very likely profitable - potentially very profitable - on the average Claude Code subscriber.
The “AI inference is a money pit” narrative is misinformation that actually plays into the hands of the frontier labs. If everyone believes that serving tokens is wildly expensive, nobody questions the 10x+ markups on API pricing. It discourages competition and makes the moat look deeper than it is.
If you want to understand the real economics of AI inference, don’t take API prices at face value. Look at what competitive open-weight model providers charge on OpenRouter. That’s a much closer proxy for what it actually costs to run these models - and it’s a fraction of what the frontier labs charge.
...
Read the original on martinalderson.com »
I’ve been maintaining Emacs Solo for a while now, and I think it’s time to talk about what happened in this latest cycle as the project reaches its two-year mark.
For those who haven’t seen it before, Emacs Solo is my daily-driver Emacs configuration with one strict rule: no external packages. Everything is either built into Emacs or written from scratch by me in the lisp/ directory. No package-install, no straight.el, no use-package :ensure t pointing at ELPA or MELPA. Just Emacs and Elisp. I’m keeping this post text only, but if you’d like to check how Emacs Solo looks and feels, the repository has screenshots and more details.
Why? Partly because I wanted to understand what Emacs actually gives you out of the box. Partly because I wanted my config to survive without breakage across Emacs releases. Partly because I was tired of dealing with package repositories, mirrors going down in the middle of the workday, native compilation hiccups, and the inevitable downtime when something changed somewhere upstream and my job suddenly became debugging my very long (at the time) config instead of doing actual work. And partly, honestly, because it’s a lot of fun!
This post covers the recent refactor, walks through every section of the core config, introduces all 35 self-contained extra modules I’ve written, and shares some thoughts on what I’ve learned.
Now, I’ll be the first to admit: this config is long. But there’s a principle behind it. I only add features when they are not already in Emacs core, and when I do, I try to build them myself. That means the code is sketchy sometimes, sure, but it’s in my control. I wrote it, I understand it, and when it breaks, I know exactly where to look. The refactor I’m about to describe makes this distinction crystal clear: what is “Emacs core being tweaked” versus what is “a really hacky outsider I built in because I didn’t want to live without it”.
The single biggest change in this cycle was architectural. Emacs Solo used to be one big init.el with everything crammed together. That worked, but it had problems:
— It was hard to navigate (even with outline-mode)
— If someone wanted just one piece, say my Eshell config or my VC extensions, they had to dig through thousands of lines
— It was difficult to tell where “configuring built-in Emacs” ended and “my own hacky reimplementations” began
The solution was clean and simple: split the config into two layers.
This file configures only built-in Emacs packages and features. Every use-package block in here has :ensure nil, because it’s pointing at something that ships with Emacs. This is pure, standard Emacs customization.
The idea is that anyone can read init.el, find a section they like, and copy-paste it directly into their own config. No dependencies. No setup. It just works, because it’s configuring things Emacs already has.
These are my own implementations: replacements for popular external packages, reimagined as small, focused Elisp files. Each one is a proper provide/require module. They live under lisp/ and are loaded at the bottom of init.el via a simple block of require statements.
If you don’t want one of them, just comment out the require line. If you want to use one in your own config, just copy the .el file into your own lisp/ directory and require it. That’s it.
This separation made the whole project dramatically easier to maintain, understand, and share.
The init.el file is organized into clearly labeled sections (using outline-mode-friendly headers, so you can fold and navigate them inside Emacs). Here’s every built-in package and feature it touches, and why.
The emacs use-package block is the largest single section. It sets up sensible defaults that most people would want:
— Window layout commands bound under C-x w (these are upcoming Emacs 31 features: window-layout-transpose, window-layout-rotate-clockwise, window-layout-flip-leftright, window-layout-flip-topdown)
— Disabling C-z (suspend) because accidentally suspending Emacs in a terminal is never fun
— Sensible file handling: backups and auto-saves in a cache/ directory, recentf for recent files, clean buffer naming with uniquify
— Tree-sitter auto-install and auto-mode (treesit-auto-install-grammar t and treesit-enabled-modes t, both Emacs 31)
— delete-pair-push-mark, kill-region-dwim, ibuffer-human-readable-size, all the small quality-of-life settings coming in Emacs 31
A full abbrev-mode setup with a custom placeholder system. You define abbreviations with ###1###, ###2### markers, and when the abbreviation expands, it prompts you to fill in each placeholder interactively. The ###@### marker tells it where to leave point after expansion. I wrote a whole article about it.
Configures auth-source to use ~/.authinfo.gpg for credential storage. Simple but essential if you use Gnus, ERC, or any network-facing Emacs feature.
Makes buffers automatically refresh when files change on disk. Essential for any Git workflow.
Configuration file mode settings and a compilation-mode setup with ANSI color support, so compiler output actually looks readable.
Custom window management beyond the defaults, because Emacs window management out of the box is powerful but needs a little nudging.
Tab-bar configuration for workspace management. Emacs has had tabs since version 27, and they’re genuinely useful once you configure them properly.
Two IRC clients, both built into Emacs, both configured. ERC gets the bigger treatment: logging, scrolltobottom, fill, match highlighting, and even inline image support (via one of the extra modules). The Emacs 31 cycle brought nice improvements here too, including a fix for the scrolltobottom/fill-wrap dependency issue.
This is where Emacs Solo’s completion story lives. Instead of reaching for Vertico, Consult, or Helm, I use icomplete-vertical-mode, which is built into Emacs. With the right settings it’s surprisingly capable:
I’ve also been contributing patches upstream to improve icomplete’s vertical rendering with prefix indicators. Some of those features are already landing in Emacs 31, which means the polyfill code I carry today will eventually become unnecessary.
A heavily customized Dired setup. Custom listing switches, human-readable sizes, integration with system openers (open on macOS, xdg-open on Linux), and the dired-hide-details-hide-absolute-location option from Emacs 31.
Writable Dired, so you can rename files by editing the buffer directly.
This one I’m particularly proud of. Emacs Solo’s Eshell configuration includes:
— Shared history across all Eshell buffers: Every Eshell instance reads from and writes to a merged history, so you never lose a command just because you ran it in a different buffer
— Custom prompts: Multiple prompt styles you can toggle between with C-c t (full vs. minimal) and C-c T (lighter vs. heavier full prompt)
This is one of the largest sections and one I’m most invested in. Emacs’s built-in vc is an incredible piece of software that most people overlook in favor of Magit. I’m not saying it replaces Magit entirely, but with the right configuration it covers 95% of daily Git operations:
— Git add/reset from vc-dir: S to stage, U to unstage, directly in the vc-dir buffer. Admittedly, I almost never use this because I’m now used to the Emacs-style VC workflow: C-x v D or C-x v =, then killing what I don’t want, splitting what isn’t ready yet, and finishing with C-c C-c. Amending with C-c C-e is awesome. Still useful once or twice a semester.
— Browse remote: C-x v B opens your repository on GitHub/GitLab in a browser; with a prefix argument it jumps to the current file and line
— Jump to current hunk: C-x v = opens the diff buffer scrolled to the hunk containing your current line
— Switch between modified files: C-x C-g lets you completing-read through all modified/untracked files in the current repo
Merge conflict resolution and diff viewing. Ediff configured to split windows sanely (side by side, not in a new frame).
Documentation at point, with eldoc-help-at-pt (Emacs 31) for showing docs automatically.
The LSP client that ships with Emacs. Configured with:
— Custom server programs, including rassumfrassum for multiplexing TypeScript + ESLint + Tailwind (I wrote a whole post about that)
— Keybindings under C-c l for code actions, rename, format, and inlay hints
— Automatic enabling for all prog-mode buffers except emacs-lisp-mode and lisp-mode
Diagnostics, spell checking, and whitespace visualization. All built-in, all configured.
The Emacs newsreader and email client. Configured for IMAP/SMTP usage.
RSS/Atom feed reader built into Emacs. Customized with some extras I built myself for dealing with YouTube feeds: thumbnails, transcripts, sending to AI for a quick summary, and so on.
Org-mode configuration, because of course.
File tree navigation in a side window. With Emacs 31, speedbar gained speedbar-window support, so it can live inside your existing frame instead of spawning a new one.
Buffer name disambiguation when you have multiple files with the same name open.
Quick web searches from the minibuffer. Configured with useful search engines.
Specific configurations for every language I work with, organized into three areas:
Common Lisp: inferior-lisp and lisp-mode with custom REPL interaction, evaluation commands, and a poor man’s SLIME/SLY setup that actually works quite well for basic Common Lisp development.
Non-Tree-sitter: sass-mode for when tree-sitter grammars aren’t available.
Tree-sitter modes: ruby-ts-mode, js-ts-mode, json-ts-mode, typescript-ts-mode, bash-ts-mode, rust-ts-mode, toml-ts-mode, markdown-ts-mode (Emacs 31), yaml-ts-mode, dockerfile-ts-mode, go-ts-mode. Each one configured with tree-sitter grammar sources (which Emacs 31 is starting to define internally, so those definitions will eventually become unnecessary).
This is where the fun really is. Each of these is a complete, standalone Elisp file that reimplements functionality you’d normally get from an external package. They’re all in lisp/ and can be used independently.
I call them “hacky reimplementations” in the spirit of Emacs Solo: they’re not trying to be feature-complete replacements for their MELPA counterparts. They’re trying to be small, understandable, and good enough for daily use while keeping the config self-contained.
Custom color themes based on Modus. Provides several theme variants: Catppuccin Mocha, Crafters (the default), Matrix, and GITS. All built on top of Emacs’s built-in Modus themes by overriding faces, so you get the accessibility and completeness of Modus with different aesthetics.
Custom mode-line format and configuration. A hand-crafted mode-line that shows exactly what I want: buffer state indicators, file name, major mode, Git branch, line/column, and nothing else. No doom-modeline, no telephone-line, just format strings and faces.
Enhanced navigation and window movement commands. Extra commands for moving between windows, resizing splits, and navigating buffers more efficiently.
Configurable format-on-save with a formatter registry. You register formatters by file extension (e.g., prettier for .tsx, black for .py), and the module automatically hooks into after-save-hook to format the buffer. All controllable via a defcustom, so you can toggle it on and off globally.
Frame transparency for GUI and terminal. Toggle transparency on your Emacs frame. Works on both graphical and terminal Emacs, using the appropriate mechanism for each.
Sync shell PATH into Emacs. The classic macOS problem: GUI Emacs doesn’t inherit your shell’s PATH. This module solves it the same way exec-path-from-shell does, but in about 20 lines instead of a full package.
Rainbow coloring for matching delimiters. Colorizes nested parentheses, brackets, and braces in different colors so you can visually match nesting levels. Essential for any Lisp, and helpful everywhere else.
Interactive project finder and switcher. A completing-read interface for finding and switching between projects, building on Emacs’s built-in project.el.
Vim-like keybindings and text objects for Viper. If you use Emacs’s built-in viper-mode (the Vim emulation layer), this extends it with text objects and additional Vim-like commands. No Evil needed.
Highlight TODO and similar keywords in comments. Makes TODO, FIXME, HACK, NOTE, and similar keywords stand out in source code comments with distinctive faces. A small thing that makes a big difference.
Git diff gutter indicators in buffers. Shows added, modified, and deleted line indicators in the margin, like diff-hl or git-gutter. Pure Elisp, using vc-git under the hood.
Quick window switching with labels. When you have three or more windows, this overlays single-character labels on each window so you can jump to any one with a single keystroke. A minimal reimplementation of the popular ace-window package.
Centered document layout mode. Centers your text in the window with wide margins, like olivetti-mode. Great for prose writing, Org documents, or any time you want a distraction-free centered layout.
Upload text and files to 0x0.st. Select a region or a file and upload it to the 0x0.st paste service. The URL is copied to your kill ring. Quick and useful for sharing snippets.
Edit files as root via TRAMP. Reopen the current file with root privileges using TRAMP’s /sudo:: prefix. A reimplementation of the
...
Read the original on www.rahuljuliato.com »
Yann LeCun’s AI start-up raises more than $1bn in Europe’s largest seed round
...
Read the original on www.ft.com »
Background: Why I put my whole life into a single database
Back in 2019, I started collecting all kinds of metrics about my life. Every single day for the last 3 years I tracked over 100 different data types - ranging from fitness & nutrition to social life, computer usage and weather.
The goal of this project was to answer questions about my life, like:
How does living in different cities affect other factors like fitness, productivity and happiness?
How does sleep affect my day, my fitness level, and happiness?
How does the weather, and the different seasons affect my life?
Are there any trends over the last few years?
How does computer time, work and hours in meetings affect my personal life?
Since the start of this project, I collected ~380,000 data points, with the biggest data sources being:
Naturally after I started collecting this data, I wanted to visualize what I was learning, so I created this page. Initially, the domain whereisFelix.today (now renamed to howisFelix.today) started as a joke to respond to friends asking when I’d be back in NYC or San Francisco. Rather than send them my schedule, I’d point them to this domain. However, now it’s more than my location: it’s all of me.
Use a single database, owned and hosted by me, with all the data I’ve collected over the years
Be able to easily add and remove questions on the fly, as I learn what’s beneficial to track
Full control of how the data is visualized
Works well for frequent flyers with mixed time zones
I selected 48 graphs to show publicly on this page. For privacy reasons, and to prevent any accidental data leaks, the graphs below are snapshots taken on a given day.
Visualization of the number of data entries in FxLifeSheet over the last 10 years, and where the data came from.
Initially (2014) the only data used was RescueTime and Foursquare Swarm location data
Once I started the FxLifeSheet project in April 2019, I manually tracked metrics ranging from mood, sleep, and social life to fitness data
I was able to retrospectively fetch the historic weather data based on my location on a given day
I also implemented other import sources, like fetching my historic weight and the number of steps from Apple Health
Days where I tracked my Mood as Happy & Excited
On days where I tracked my mood to be “happy” & “excited”, the following other factors of my life were affected
50% more likely to have pushed my comfort zone
44% more likely to have meditated that day
33% more excited about what’s ahead in the future
31% more likely to drink alcohol that day (parties, good friends and such)
28% more time spent reading or listening to audio books
26% more likely to have worked on interesting technical challenges
20% more likely to have learned something new that day
45% less time spent in video & audio calls that day
All flights taken within the last 7 years, tracked using Foursquare Swarm, analyzed by JetLovers.
The stats clearly show the impact of COVID starting 2020
Sunday has been my “commute” day, flying between San Francisco, New York City and Vienna
All flights taken within the last 7 years, tracked using Foursquare Swarm, analyzed by JetLovers.
Frankfurt - Vienna was the flight connecting me with most US airports
Germany is high up on the list due to layovers, even though I didn’t actually spend much time there
Inspired by Your Life in Weeks by WaitButWhy, I use Google Sheets to visualize every week of my life, with little notes on what city/country I was in, and other life events that have happened.
The first 14 years I didn’t really get much done
I can highly recommend taking a few weeks (or even months) off between jobs (if you have the possibility)
Shades of blue indicate my full-time employments
You can create your own version using my template
Average daily steps measured through the iPhone’s Apple Health app. I decided against using SmartWatch data for steps, as SmartWatches have changed over the last 8 years.
I walked a total of steps over the last 8 years
I walk more than twice as much when I’m in New York, compared to any other city
In NYC I had the general rule of thumb to walk instead of taking public transit whenever it’s less than 40 minutes. I used that time to call friends & family, or listen to audio books
Although Vienna is very walkable, the excellent public transit system with subway trains coming every 3-5 minutes, has caused me to walk less
San Francisco was always scary to walk in
This graph clearly shows the correlation between my body weight and my sleeping/resting heart rate. The resting heart rate is measured by the Withings ScanWatch while sleeping, and indicates how hard your heart has to work while not being active. Generally the lower the resting heart rate, the better.
I started my lean bulk (controlled weight gain combined with 5 workouts a week) in August 2020
My resting heart rate went from 58bpm to 67bpm from August 2020 to March 2021, with a weight gain of +19lbs as part of a controlled lean-bulk combined with a 5-day/week workout routine
The spike in resting heart rate in July & August 2021 was due to bars and nightclubs opening up again in Austria
After a night of drinking, my resting/sleeping heart rate was about 50% higher than after a night without any alcohol
The spike in resting heart rate in Oct/Nov/Dec 2021 was due to having bronchitis and a cold/flu, not getting correct treatment early enough
How healthy have I been over the Years?
Every day I answered the question on how healthy I felt. In the graph, the yellow color indicates that I felt a little under the weather, not sick per se. Red means I was sick and had to stay home. Green means I felt energized and healthy.
During the COVID lockdowns I tended to stay healthier. This may be due to not going out, no heavy drinking, less close contact with others, etc. which resulted in me having better sleep.
Usually during excessive traveling I get sick (cold/flu)
Q4 2021 I had bronchitis, however, I didn’t know about it at the time and didn’t get proper treatment
Overall I’m quite prone to getting sick (cold/flu)
Days with more than 4 Alcoholic Drinks
On days where I had more than 4 alcoholic beverages (meaning I was partying), the following other factors were affected
21x more likely to dance
80% more likely to take a nap the day of, or the day after
40% warmer temperatures, and 40% less precipitation. There weren’t many opportunities for parties in Winter due to lockdowns in the last 2 years. Also, people are more motivated to go out when it’s nice outside.
My FxLifeSheet bot asks me 4 times a day how I’m feeling at the moment.
This graph groups the entries by month, and shows the % of entries for each value (0 - 5) with 5 being very excited, and 0 being worried.
I designed the ranges so that 0 or 5 are not entered as much. 0 is rendered as dark green at the top, whereas 5 is rendered as light green at the bottom.
For privacy reasons I won’t get into some of the details on why certain months were worse than others.
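The grouping described above (entries bucketed by month, with each mood value shown as a percentage of that month's entries) can be sketched with the standard library. The sample records below are hypothetical; the real data lives in the FxLifeSheet database:

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical mood entries (0 = worried ... 5 = very excited), several per day,
# standing in for the bot's 4x-daily check-ins.
entries = [
    (date(2021, 7, 1), 4), (date(2021, 7, 1), 5), (date(2021, 7, 2), 3),
    (date(2021, 8, 3), 2), (date(2021, 8, 4), 4), (date(2021, 8, 4), 4),
]

# Count how often each mood value was entered in each month
by_month = defaultdict(Counter)
for day, mood in entries:
    by_month[(day.year, day.month)][mood] += 1

# Convert counts to the % share of each value within its month
percentages = {
    month: {mood: count / sum(counts.values()) * 100
            for mood, count in sorted(counts.items())}
    for month, counts in by_month.items()
}
```

Each month's percentages sum to 100, which is what lets the graph stack the values into full-height bands.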
Every Swarm check-in over the last 7 years visualized on a map, including the actual trip (flight, drive, etc.)
Every Swarm check-in over the last 7 years visualized, zoomed in
Each time I did a check-in at a place (e.g. Coffee, Restaurant, Airport, Gym) on Foursquare Swarm at a given city, this is tracked as a single entry.
Each check-in at a given city is counted as a single entry, grouped by years
2018 and 2019 I lived in New York City
The longer it’s been since I moved away from Austria, the more time I actually spent back home in Austria for visits and vacations
2020 clearly shows the impact of COVID
Each check-in at a given category is tracked, and summed up over the last years
In 2020 and 2021, check-ins at Offices went down to zero due to COVID, and a distributed work setup
Airports being the #4 most visited category was a surprise, but is accurate. A total of 403 airport check-ins, whereas a flight with a layover would count as 3 airport check-ins
Earlier in my life, I didn’t always check into ‘commute’ places like public transit and supermarkets
Number of Foursquare Swarm check-ins on each quarter over the last 10 years. I didn’t use Foursquare Swarm as seriously before 2015. Once I moved to San Francisco in Q3 2015 I started my habit of checking into every point of interest (POI) I visit.
Q3 2015 I moved to San Francisco, however I couldn’t use Swarm yet, since my move was a secret until the official announcement at the Twitter Flight conference
Q2 2020 clearly shows the impact of COVID with Q3 already being open in Austria
Q3 2021 the vaccine was already widely available and I was able to travel/visit more again
My time in New York was the most active when it comes to check-ins. When I’m in NYC, I tend to eat/drink out more, and grab to-go food, which I do way less in Vienna
Every Swarm check-in visualized on a map. Only areas where I’ve had multiple check-ins are rendered.
Number of days per year that I’ve spent in full lockdown, meaning restaurants, bars and non-essential stores were closed.
I escaped parts of the Austrian lockdown by spending time in the US when I was already vaccinated
Surprisingly, in 2021 I spent more days in a full lockdown than in 2020, even with vaccines available
How was my life affected by the recent COVID lockdowns? I classify as a lockdown day every day when places like restaurants, gyms and non-essential stores were closed.
200% more time spent in audio & video calls with friends (non-work related)
60% more likely to follow my meal plan (macros & calories)
50% colder temperatures: Lockdowns tended to happen in Autumn and Winter
100% less likely to dance
Alcoholic drinks per day. Days with no data are rendered as white
Friday and Saturday nights are clearly visible on those graphs
2021 and summer/winter of 2019 also show the Wednesday night party in Vienna
Q2 and Q4 2020 clearly show the COVID lockdowns, as well as Q2 2021
Summer of 2021 all bars and dance clubs were open in Vienna
...
Read the original on howisfelix.today »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.