10 interesting stories served every morning and every evening.
Why else would they keep them around for so long?
Every bug is different. But the math is always real.
Think our numbers are wrong? Edit them yourself.
Users Affected × Frequency × Time Per Incident
How many Apple users hit this bug, how often, and how long they suffer each time.
Σ (Workaround Time × Participation Rate)
The extra time spent by people who try to fix what Apple won’t.
Years Unfixed × Pressure Factor
How long Apple has known about this and how urgent the task usually is.
Human Hours Wasted ÷ Engineering Hours to Fix
How many times over Apple could have fixed it with the productivity they’ve destroyed.
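To make the arithmetic concrete, here is a worked example with entirely made-up numbers (the site lets you plug in your own):

1,000,000 affected users × 1 incident/day × 30 seconds
= 30,000,000 seconds/day ≈ 8,333 human hours wasted per day
8,333 hours/day ÷ 80 engineering hours to fix ≈ 104
That is, a single day of wasted time could have paid for the fix about 104 times over.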
...
Read the original on www.bugsappleloves.com »
API Error: 400 {
  "error": {
    "type": "invalid_request_error",
    "message": "This organization has been disabled."
  }
}
One minute I’m a €220/month “Max 20x” AI “power user” (is this even a thing?). The next, I am a disabled non-person “organization”.
Like a lot of my peers, I was using the Claude Code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code that I could now try and validate at a very fast pace. I would run it inside tmux and let it do the work while I went on to do something else.
Until in one of these sessions I got presented with that response.
If you are automating prompts that look like system instructions (e.g. scaffolding context files, or using Claude to find the errors of another Claude and iterate on its CLAUDE.md), you are walking on a minefield.
My account was banned! No warning and no feedback, just that message saying that my request was invalid because I am a disabled organization.
I wasn’t even doing anything groundbreaking. In fact, I was asking it to tune a tool I use to do project scaffolding.
Yes you read that right: project scaffolding! Probably one of the most boring things you can think of doing!
So I asked Claude to update my scaffolding tool so that it would include a CLAUDE.md file with baked-in instructions for a particular homemade framework (cough, boreDOM, cough).
I was acting as a kind of “human-in-the-loop” middleware for these LLM tools. It was like watching one instance of Claude try to “boss around” another instance of itself, while the platform’s security guards mistook it for a riot.
To help understand this, there are three main characters in this story:
* Me (the soon-to-be “disabled organization”)
* Claude A (an instance of Claude in a tmux pane)
* Claude B (another instance in another tmux pane)
The loop was like this:
* The disabled organization (me) asked Claude A to update the scaffold tool with a cool CLAUDE.md
* I went on and started a new project with the tool, opened a Claude in there (Claude B), and asked for a complex task to be done
* Whenever Claude B made a mistake, I would go to Claude A, paste the error, and say something like “hey, Claude B made this error”
So I was running two instances of Claude, with one updating the CLAUDE.md of the other as it made mistakes.
The loop repeated until I was told I was a disabled organization.
I just wanted a standard context file for new projects.
At one point Claude A got somewhat annoyed with Claude B, and started shouting! Writing in en-US instead of en-GB, that is: ALL CAPS.
I went on to check the file, and it was getting littered with these kinds of instructions, meant to make Claude B do something instead of what it would otherwise try to do.
My guess is that this likely tripped the “Prompt Injection” heuristics that the non-disabled organization has.
I would love to see the face of that AI when it saw its own “system prompt” language being echoed back to it.
Or I don’t know. This is all just a guess from me.
So I went running to read their docs. What was going on here?
I made an appeal, which was a link to a Google Docs form with a textbox, where I tried to convince some Claude C in the multi-trillion-quadrillion-dollar non-disabled organization that I was not only a human but also a well-intended one.
I got no reply. Not even an automatic response. 0 comms.
So I wrote to their support, this time composing the text with the help of an LLM from another non-disabled organization.
I got no reply. Not even an automatic response.
And to think that people complain about civil servants. Eh, wait until you have to deal with one of these expensive machines!
After a couple of days I got an e-mail:
Yes, the only e-mail I got was a credit note giving my money back.
It’s like they’re saying “We don’t want to talk to you anymore, here is some hush money”. But hey guys, it is not a conversation if it is one-way only, and here I am talking to a wall.
I didn’t even get to have a “It’s not you, it’s us.” I just got a credit note.
I’m glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools… then I would be out of e-mail, photos, documents, and phone OS.
AI moderation is currently a “black box” that prioritizes safety over accuracy to an extreme degree.
If you are automating prompts that look like system instructions (i.e. scaffolding context files), you are walking on a minefield.
I got my €220 back (ouch that’s a lot of money for this kind of service, thanks capitalism). I have reframed the whole scaffolding project, and reverted all the code Claude did there.
Soon I will re-release boreDOM with a new angle and approach, without the help of Claude. I am trying to turn it into a JS framework for LLMs (LLM-first, or even LLM-only; it now has no API), to produce and iterate on those single.html files that these tools are now bringing to the world.
If you want to take a look at the CLAUDE.md that Claude A was making Claude B run with, I committed it and it is available here.
Again to wrap this up: this whole post is just my hypothesis. Claude was not doing anything other than iterating on this file at the moment I got the ban. And I haven’t heard from them about this anymore (or ever).
“you got to understand that these organizations have a lot of users…”
...
Read the original on hugodaniel.com »
Here are a few lines of summarized tcpdump output for an ssh session where I send a single keystroke:
$ ./first_lines_of_pcap.sh single-key.pcap
1 0.000s CLIENT->SERVER 36 bytes
2 0.007s SERVER->CLIENT 564 bytes
3 0.015s CLIENT->SERVER 0 bytes
4 0.015s CLIENT->SERVER 36 bytes
5 0.015s SERVER->CLIENT 36 bytes
6 0.026s CLIENT->SERVER 0 bytes
7 0.036s CLIENT->SERVER 36 bytes
8 0.036s SERVER->CLIENT 36 bytes
9 0.046s CLIENT->SERVER 0 bytes
10 0.059s CLIENT->SERVER 36 bytes
I said a “few” because there are a lot of these lines.
$ ./summarize_pcap.sh single-key.pcap
Total packets: 270
36-byte msgs: 179 packets ( 66.3%) 6444 bytes
Other data: 1 packet ( 0.4%) 564 bytes
TCP ACKs: 90 packets ( 33.3%)
Data sent: 6444 bytes in 36-byte messages, 564 bytes in other data
Ratio: 11.4x more data in 36-byte messages than other data
Data packet rate: ~90 packets/second (avg 11.1 ms between data packets)
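His helper scripts aren’t reproduced here, but if you want to poke at a pcap like this yourself, a rough equivalent of the size histogram (a sketch, assuming tshark is installed) is a one-liner like:

# Count data packets by TCP payload length (not the author's actual script)
tshark -r single-key.pcap -T fields -e tcp.len | sort -n | uniq -c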
That is a lot of packets for one keypress. What’s going on here? Why do I care?
I am working on a high-performance game that runs over ssh. The TUI for the game is created in bubbletea and sent over ssh via wish.
The game is played in an 80x60 window that I update 10 times a second. I’m targeting at least 2,000 concurrent players, which means updating ~100 million cells a second. I care about performance.
So I have a script that connects a few hundred bots over ssh and has them make a move a second. Then I use go’s outstanding profiling tools to look at what’s going on.
Yesterday I inadvertently broke my test harness. Instead of regularly sending game data, my server sent the bots a single message that said “your screen is too small.” This cut my game’s CPU and bandwidth usage in half.
At first I was disappointed. I (briefly) thought I had a free massive speedup on my hands, but it was actually a testing error.
If I wasn’t sending game data back to my bots, why did CPU usage drop by 50% instead of 100%?
As part of debugging the test harness issue, I used tcpdump to log game traffic with and without the breaking change. Something like:
# The game runs on port 22
timeout 30s tcpdump -i eth0 'port 22' -w with-breaking-change.pcap
# Revert change
timeout 30s tcpdump -i eth0 'port 22' -w without-breaking-change.pcap
Our breaking change stopped us from rendering our game over ssh. So with-breaking-change.pcap contains packets that represent the overhead of each connection without actually rendering the game.
I was debugging this with Claude Code, so I asked it to summarize what it saw in the pcap.
Wanna take a look yourself? I put with-breaking-change.pcap in this directory
Wow! Here’s what I found:
Packet Size Distribution (413,703 total packets):
274,907 packets (66%): Exactly 36 bytes
138,778 packets (34%): 0 bytes (TCP ACKs)
18 packets (<0.1%): everything else
Further analysis on a smaller pcap pointed to these mysterious packets arriving ~20ms apart.
This was baffling to me (and to Claude Code). We kicked around several ideas like:
* Some quirk of bubbletea or wish
One thing stood out - these exchanges were initiated by my ssh client (stock ssh installed on macOS) - not by my server.
On a hunch, I took a tcpdump of a regular ssh session.
# on my mac, in one tab
sudo tcpdump -i en0 'port 22'
# on my mac, in another tab
ssh $some_vm_of_mine
I waited for the initial connection chatter to die down, sent one keystroke to my remote vm, and looked at the tcpdump output.
I saw the exact same pattern! What in the world?
Once I realized that this was a property of stock ssh and not my game, debugging got a lot easier.
Running ssh -vvv gave me a pretty good sense of what was going on:
debug3: obfuscate_keystroke_timing: starting: interval ~20ms
debug3: obfuscate_keystroke_timing: stopping: chaff time expired (49 chaff packets sent)
debug3: obfuscate_keystroke_timing: starting: interval ~20ms
debug3: obfuscate_keystroke_timing: stopping: chaff time expired (101 chaff packets sent)
That 20ms is a smoking gun - it lines up perfectly with the mysterious pattern we saw earlier! And the rest of the message is pretty helpful too - we sent 49 “chaff” packets around the first keystroke and 101 around the second one.
In 2023, ssh added keystroke timing obfuscation. The idea is that the speed at which you type different letters betrays some information about which letters you’re typing. So ssh sends lots of “chaff” packets along with your keystrokes to make it hard for an attacker to determine when you’re actually entering keys.
That makes a lot of sense for regular ssh sessions, where privacy is critical. But it’s a lot of overhead for an open-to-the-whole-internet game where latency is critical.
Keystroke obfuscation can be disabled client-side. After reverting my original breaking change, I tried updating my test harness to pass ObscureKeystrokeTiming=no when starting up ssh sessions.
This worked great. CPU usage dropped dramatically and bots still received valid data.
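For reference, on a stock OpenSSH client (9.5 or newer) the same thing can be done per-invocation, with something like:

# Disable keystroke-timing chaff for a single session
ssh -o ObscureKeystrokeTiming=no user@host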
But this is hardly a solution in the real world. I want ssh mygame to Just Work without asking users to pass options that they might not understand.
Claude Code originally didn’t have much faith that we could disable this functionality server-side.
Fortunately, the description I found of SSH keystroke obfuscation made it easy to look up the relevant code in go’s ssh library (which I was transitively depending on).
Log message:
Introduce a transport-level ping facility
This adds a pair of SSH transport protocol messages SSH2_MSG_PING/PONG
to implement a ping capability. These messages use numbers in the “local
extensions” number space and are advertised using a “ping@openssh.com”
ext-info message with a string version number of “0”.
The “chaff” messages that ssh uses to obscure keystrokes are SSH2_MSG_PING messages. And they’re sent to servers that advertise the availability of the ping@openssh.com extension. What if we just…don’t advertise ping@openssh.com?
I searched go’s ssh library for ping@openssh.com and found the commit where support was added. The commit was tiny and seemed very easy to revert.
I cloned the go crypto repo and told Claude to revert this change and update our dependencies to use our clone (go’s replace directive makes forking a library very easy).
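For anyone unfamiliar with the mechanics: the swap is a single directive in go.mod, which you can add with a command along these lines (the local path here is illustrative):

# Point x/crypto at the local patched clone
go mod edit -replace golang.org/x/crypto=../crypto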
Then I re-ran my test harness. The results were…very good:
Total CPU 29.90% -> 11.64%
Syscalls 3.10s -> 0.66s
Crypto 1.6s -> 0.11s
Bandwidth ~6.5 Mbit/sec -> ~3 Mbit/sec
Obviously forking go’s crypto library is a little scary, and I’m gonna have to do some thinking about how to maintain my little patch in a safe way.
But this is a huge improvement. I’ve spent much of the last week squeezing out small single-digit performance wins. A >50% drop was unimaginable to me.
I’ve been thinking about whether LLMs remove parts of the problem-solving process that I enjoy. But I’ve gotta say, debugging this problem using Claude Code was super fun.
I am familiar enough with tcpdump, tshark, and friends to know what they can do. But I don’t use them regularly enough to be fast with them. Being able to tell an agent “here’s a weird pcap - tell me what’s going on” was really lovely. And by watching commands as the agent ran them I was able to keep my mental model of the problem up to date.
There were still edge cases. At some point in my confusion I switched to ChatGPT and it very confidently told me that my tcpdump output was normal ssh behavior:
And then doubled down when I pushed back:
Similarly, I had to push Claude Code to consider forking go’s ssh library. And I had to make the original leap of “wait…if our test harness was broken, why was usage not 0%?”
When you say “LLMs did not fully solve this problem” some people tend to respond with “you’re holding it wrong!”
I think they’re sometimes right! Interacting with LLMs is a new skill, and it feels pretty weird if you’re used to writing software like it’s 2020. A more talented user of LLMs may have trivially solved this problem.
But the best way to develop a skill is by practicing it. And for me, that means figuring out how to transfer my problem-solving intuitions to the tools that I’m using.
Besides. Being in the loop is fun. How else would I write this post?
...
Read the original on eieio.games »
...
Read the original on github.com »
On Jan 14th Proton sent out an email newsletter with the subject line:
Introducing Projects - Try Lumo’s powerful new feature now
There is a problem with this email. And I’m not talking about the question of how exactly AI aligns with Proton’s core values of privacy and security.
The problem is I had already explicitly opted out of Lumo emails.
That toggle for “Lumo product updates” is unchecked. Lumo is the only topic I’m not subscribed to. Proton has over a dozen newsletters, including some crypto nonsense. I opt-in to everything but Lumo, I gave an undeniable no to Lumo emails.
So the email I received from Proton is spam, right?
My understanding is that spam is a violation of GDPR and UK data protection laws. Regardless, Proton’s email is a clear abuse of their own service towards a paying business customer.
Despite the subject line and contents, and despite the “From Lumo” name and @lumo.proton.me address, maybe this was an honest mistake?
Proton’s first reply explained how to opt-out.
Thank you for contacting us. You can unsubscribe from the newsletters if you do the following:
- Log in to your account at https://account.protonvpn.com/login
- If you need additional assistance, let me know.
[screenshot of the same opt-out toggle]
John Support directs me to the exact same “Lumo product updates” toggle I had already unchecked. I replied explaining that I had already opted out. Support replies saying they’re “checking this with the team” then later replies again asking for screenshots.
Can you make sure to send me a screenshot of this newsletter option disabled, as well as the date when the last message was sent to you regarding the Lumo offer? You can send me a screenshot of the whole message, including the date. Is it perhaps 14 January 2026 that you received the message?
I found that last line curious, are they dealing with other unhappy customers? Maybe I’m reading too much into it.
I sent the screenshots and signed off with “Don’t try to pretend this fits into another newsletter category.”
After more “checking this with the team” I got a response today.
In this case, the mentioned newsletter is for promoting Lumo Business Suit to Business-related plans. Hence, why you received it, as Product Updates and Email Subscription are two different things. In the subscription section, you will see the “Email Subscription” category, where you can disable the newsletter in order to avoid getting it in the future.
If I understand correctly, Proton are claiming this email is the “Proton for Business newsletter”. Not the “Lumo product updates” newsletter.
I don’t know about you, but I think that’s baloney. Proton Support had five full business days to come up with a better excuse. Please tell me, how can I have been any more explicit about opting out of Lumo emails, only to receive “Try Lumo” “From Lumo”, and be told that is not actually a Lumo email?
Has anyone else noticed that the AI industry can’t take “no” for an answer? AI is being force-fed into every corner of tech. It’s unfathomable to them that some of us aren’t interested.
The entire AI industry is built upon a common principle of non-consent. They laugh in the face of IP and copyright law. AI bots DDoS websites and lie about user-agents. Can it get worse than the sickening actions of Grok? I dread to think.
As Proton has demonstrated above, and Mozilla/Firefox recently too, the AI industry simply will not accept “no” as an answer. Some examples like spam are more trivial than others, but the growing trend is vile and disturbing.
I do not want your AI.
I guess someone at Microsoft read my post and said “hold my beer”. This morning I woke up to a lovely gift in my inbox: “Build AI agents with the new GitHub Copilot SDK”.
GitHub Ensloppification is moving faster than I can delete my account for good. (It’s an unfortunate requirement for client projects.) For the record, I have never said “yes” to any GitHub newsletter. Even before Copilot I disabled every possible GitHub email notification.
The “Unsubscribe” link provides the hidden newsletter list. There is nothing within GitHub account settings I can find to disable spam.
As expected, Microsoft has opted me in without my consent. The wheels are falling off at GitHub. The brutally slow front-end UI. The embarrassingly lacklustre Actions CI. Now this sloppy tripe everywhere. Reminder to developers: GitHub is not Git.
...
Read the original on dbushell.com »
Currently, Gemini CLI restricts IDE integration features to environments where TERM_PROGRAM is vscode (or other hardcoded values). This forces third-party integrations like jetbrains-ide-companion to mock VS Code by spoofing environment variables to enable core features; otherwise, they could not be discovered by Gemini CLI.
For some reason, process-based detection is not working properly on Windows/Linux (reported by users on the JetBrains Plugin Review page, in #9273, and in a few other bug report emails I’ve received), which makes this native IDE detection logic a must for Gemini CLI to discover and connect to the IDE via environment variables instead of the port info file.
This PR adds JetBrains IDE Series to the IDE_DEFINITIONS and updates the detection logic to recognize TERMINAL_EMULATOR=JetBrains-JediTerm as a first-class supported environment.
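A minimal sketch of the kind of check this adds (hypothetical TypeScript, not the actual patch):

// Hypothetical sketch: treat JetBrains terminals as a first-class environment
function isJetBrainsTerminal(env: NodeJS.ProcessEnv = process.env): boolean {
  return env.TERMINAL_EMULATOR === 'JetBrains-JediTerm';
}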
...
Read the original on github.com »
You can find a collection with all the optical illusions in this article (and more!) on CodePen. You can move your mouse over many of the demos below to reveal the effect or stop the animations.
The Poggendorff illusion is an optical illusion in which a diagonal line interrupted by a vertical bar appears misaligned, even when both segments are actually continuous.
A simple version of this effect can be seen in the following demo. I used the ::before and ::after pseudo-elements to create the diagonal line and the vertical bar, respectively.
The effect can also be seen in a more elaborate version with multiple diagonal lines and vertical bars:
This drawing can easily be achieved using two CSS gradients: one tilted at 70 degrees and another consisting of a series of vertical columns. I applied it to the body, although I could have used :root instead.
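The CodePen source isn’t reproduced here, but a minimal sketch of that two-gradient idea (colors and sizes are my own guesses) could look like:

/* Sketch only: vertical bars layered over lines tilted at 70 degrees */
body {
  background:
    repeating-linear-gradient(90deg, #555 0 30px, transparent 30px 90px),
    repeating-linear-gradient(70deg, #000 0 3px, transparent 3px 60px);
}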
Another variation of this illusion is the Münsterberg Poggendorff Arch, in which the two sides of an arch appear misaligned and seem as though they will not meet at the top - but they do (mouse over to see it).
The following illusions combine gradients and flat colors. Surprisingly, some of the gradients do not actually exist. They are simple gray bars that, when placed over a gradient, appear to have gradients themselves.
Take the following demo: all three bars (two vertical ones on the sides and one horizontal bar in the center) are the same shade of gray. The only real gradient is behind them, which tricks our brain into believing that the bars are different colors and even contain gradients.
Here is another variation of this effect. It looks like the central line has a repeating gradient of dark and light grays, but in reality it is a flat color. If you mouse over the demo, the bar will expand, making it clear that there is no gradient at all.
The next few optical illusions share a common idea: some colors are identical, but they do not look the same. This typically happens when regions of the same color or brightness are surrounded by areas with different contrast.
For example, in the following demo, the left and right ends are the same shade of gray. However, one looks lighter because it is closer to white, while the other looks darker because it is closer to black. Mouse over to reveal that they are, in fact, the same color.
Run the following demo. You will see two gray columns in a black-and-white grid. Both columns are the same shade of gray, but the one surrounded by black appears darker than the one surrounded by white.
I coded this demo using mix-blend-mode so I could try something a bit different. That worked well, but it also made it harder to showcase the effect on hover. In hindsight, I should have planned that better.
This optical illusion also works with colors. For example, these two squares appear to be different shades of blue, but they are the same color. This time, you can mouse over to reveal the effect:
The ring in the following illustration has the same color all the way around. However, one side is placed over white and the other over black, which makes them look different. If you mouse over the demo, the red bar will disappear, making it more obvious that the ring is a single, uniform color.
You have probably seen the illusion involving a checkerboard and an object casting a shadow, where two tiles - one seemingly light and one seemingly dark - turn out to be the same color.
This demo follows the same principle. You will see two tiles labeled A and B. Both have the same shade of gray, but most people cannot tell at first glance (or second, or even third).
The circle at the center of this flower-shaped element is the same white as the rest of the page, but it gives the impression of being brighter, as if it were emitting light.
This is one of my favorite illusions in the collection. The circles (or spheres) look red, blue, or green, but in reality they are all the same grayish color. Our brain “colorizes” them based on the lines that overlap the shapes. Don’t believe it? Mouse over the illustration.
In the following illustration, the lines inside the yellow section appear blue, while the lines inside the blue section appear red… but they are all black (or very dark gray). The white contour creates the illusion of color. Mouse over to remove the contour and the lines will clearly appear black.
One set of lines looks straighter (top) while the other looks more curved (bottom). In reality, both sets are equally wavy. The only difference is how they are colored: changing the color at the peaks makes the lines look straighter. Changing it at the inflection points makes them look more curved.
The CSS code for the wavy lines is adapted from a Temani Afif snippet on CSS-Tricks and his wavy shape generator.
This is a classic optical illusion and an easy one to code in CSS. Three gradients are all that is needed to generate the effect in which the horizontal lines appear slanted, even though they are perfectly parallel.
This optical illusion depicts an impossible shape. Parts that should be in front appear in the back, top becomes right, and everything feels contradictory. I coded this one some time ago for the 2024 Divtober event.
Which orange circle is larger: the one on the right or the one on the left? It is a trick question: both are the same size. However, having smaller surrounding elements gives the impression that one is larger.
I also created an animated version of this illusion (see below), as well as another version using a square shape instead of a flower shape:
When people look at this illustration, they usually say they see a white square over black circles. However, the square is not actually there. The “Pac-Man” shapes create the illusion of a square and a sense of depth. Our brain fills in the missing information.
There are no circles or discs in this illustration, only vertical and horizontal lines forming crosses. Our visual system completes the shape and makes us perceive a disc that does not exist.
This illustration shows concentric circles, some of which have a green-and-black pattern. Our brain perceives a central patterned circle and four concentric circles around it, beneath the green circle.
I cheated a little when creating this in CSS, as I actually used a green circle blended with the other backgrounds.
Perspective-based illusions are fascinating. Even when we know we are looking at a flat image, our brain insists on interpreting depth.
In the Hering illusion, the red lines appear to curve outward, even though they are straight.
The opposite effect is the Wundt illusion. When the lines expand from the sides toward the center, the red lines appear to curve inward (this effect is more subtle).
Both yellow lines are the same length, but the top one looks longer due to perceived depth and perspective. I tried a different approach when coding this one by applying a three-dimensional rotation in CSS… so the perspective is technically real.
This illusion is easy to code in CSS and easy to fall for. Both the vertical and horizontal lines are the same length, but the vertical line appears longer.
A classic illusion: the horizontal lines are the same length, but inward- or outward-pointing edges dramatically change how we perceive them. I could swear the top one is longer. But it is not.
From a coding perspective, each shape is a pseudo-element. I ensured the horizontal lines were identical by using the same gradients and only repositioning the edges in the ::before and ::after.
It looks like the top rectangle is leaning to the left, but it is actually parallel to the one at the bottom. The trick lies in the direction of the diagonal lines used to “color” each rectangle.
This illusion works better on larger screens. The effect is diminished when you can see the whole picture.
This is a simple effect: the black lines are parallel, but they appear not to be because of the direction of the bars crossing them.
I slightly overcomplicated this one while coding it. I initially built the black-and-red version below and tried to reuse more code than I probably should have.
Here is the original version I created. The effect is also visible there:
Good news! There are more optical illusions below - but first, a warning.
ATTENTION: The following optical illusions are static, but they give the impression of movement. Proceed accordingly.
This is a trippy optical illusion. It is completely static, yet it looks like the black hole at the center is expanding - especially when you are not looking at it directly, creating the sensation of falling into a pit.
From a coding perspective, this one was very simple: a background pattern made with two radial gradients, plus a blurred pseudo-element for the “expanding” hole.
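Roughly, and with made-up sizes and colors, that setup could look like:

/* Sketch only: a dotted background plus a blurred dark "hole" */
body {
  background:
    radial-gradient(circle, #000 2px, transparent 2.5px),
    radial-gradient(circle, #000 2px, transparent 2.5px);
  background-size: 24px 24px;
  background-position: 0 0, 12px 12px;
}
body::before {
  content: "";
  position: fixed;
  inset: 50% auto auto 50%;
  width: 220px;
  height: 220px;
  translate: -50% -50%;
  border-radius: 50%;
  background: #000;
  filter: blur(28px); /* the soft edge sells the "expanding" feel */
}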
This is one of only two optical illusions in this collection where I used HTML elements instead of relying exclusively on CSS. It is a classic effect: when you look at the illustration, the peripheral discs appear to rotate, even though nothing is actually moving.
Another classic illusion. Focus on the white dots and the adjacent dots will appear to turn black. There is no animation, no transition, and nothing dynamic. Just intersecting lines and small white circles, yet it looks like motion.
This pattern consists of repeating black and white dots across the page. If you focus on one dot, the others will begin to disappear. At first it may happen by row or column, but after a short while, most of them vanish.
If you do not immediately see the effect, try focusing on one black dot. Mouse over it, wait a few seconds while keeping your focus, and then mouse out.
This is a static image, but it gives the impression that the pattern inside the circle is moving sideways. This happens because our eyes are constantly making small movements, even when we are not aware of it.
If you cannot see the illusion, try slightly moving the screen (or your head) while looking just outside the circle.
When you look around this pattern, the central area appears to slide and sway, even though it is completely static. This illusion makes me dizzy… but that may also be because I had to stare at it for a long time while coding it.
This illusion is particularly interesting. There is a pink circle surrounded by concentric pink and purple rings. If you focus on the pink circle, the rings appear to spin or scintillate, as if there were some activity in them. Of course, nothing is actually moving.
This demo was challenging to code and takes a long time to load. Mainly because it uses a large number of conic gradients behind the scenes, which browsers struggle to render efficiently. There is probably a better way to implement it, but I have not explored that yet.
If you look closely at the illustration, you may notice wave-like motion. As with the previous illusions in this section, the image is entirely static.
Good news! There are more optical illusions below - but first, another warning.
ATTENTION: The following optical illusions actually move, and the illusion is created by motion itself. Some of them can be dizzying, so proceed accordingly.
Earlier, we saw two static versions of the Ebbinghaus illusion. This one is animated. The elements move side to side, and the surrounding shapes grow and shrink, giving the impression that the orange circle is changing size - when it definitely is not.
This looks like a three-dimensional tower spinning in space, as seen from above. In reality, it is a flat, two-dimensional image rotating.
Mouse over the demo to stop the rotation and the illusion of depth disappears entirely.
This optical illusion requires only two gradients: a conic gradient for the fan-shaped arms and a radial gradient for the circles and discs.
If you focus on the black dot, the illustration may appear to develop a darker greenish or brownish border. However, the colors never change.
This illusion is delightful and disorienting. While the background colors of the wheel are spinning, the spokes remain fixed. However, they appear to rotate in the opposite direction. In reality, only the background is moving.
What do you see in this animation? Most people report two sets of lines operating independently: one moving horizontally and another moving vertically. And that is exactly how it looks.
In reality, it is a single shape moving uniformly. Run the demo, mouse over the lines, and the true motion will be revealed.
Focus on one of the red dots. You will notice it moves straight up and down along a vertical path. Now shift your focus to one of the black crosses in the center. Suddenly, the red dots appear to zigzag instead of moving straight.
The CSS code for the wavy lines is adapted from a Temani Afif snippet on CSS-Tricks and his wavy shape generator.
It may look like the boxes are moving at different speeds or like a set of walking feet. In reality, all elements move at the same pace and in parallel. Mouse over the demo to reveal the effect.
The illusion also works when the “feet” move in circles, as shown in this alternative version:
Follow the red dot as it moves sideways. From the corner of your vision, it may appear that the dashed black-and-white lines are moving closer together (when the dot moves left) or farther apart (when it moves right). In reality, the lines are completely static.
These dots always have the same color. However, when placed against alternating backgrounds, they appear to jump or move out of sync because of how they blend with their surroundings.
Mouse over the demo to remove the background and the illusion disappears.
This illusion gives the impression that a blue square is growing and shrinking rhythmically, almost as if it were breathing or beating like a heart.
Although the image is rotating, its size never changes. Mouse over the illustration to remove the green boxes and reveal the rotating blue square.
This illustration shows a circle made of pink dots, with one dot missing. Focus on the cross at the center and the missing dot will appear as a yellow or green dot, giving the impression that it is “eating” the pink dots. Just like Pac-Man.
I could have used CSS trigonometric functions to calculate the exact positions of the dots, but since they never change, I chose to hardcode the values instead.
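For reference, the trig-function version he skipped might have looked something like this (all values hypothetical):

/* Hypothetical: place the 3rd of 12 dots on a 120px-radius circle */
.dot:nth-child(3) {
  --i: 3;
  transform:
    translateX(calc(cos(var(--i) / 12 * 1turn) * 120px))
    translateY(calc(sin(var(--i) / 12 * 1turn) * 120px));
}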
Here is a related effect. Follow the light gray circle as it spins, and the darker circles will appear to change from gray to greenish. Focus on the cross at the center, and after a short time, the darker circles may begin to fade entirely.
This illusion is particularly dizzying. Follow the bluish dot as it moves from right to left and back again. It will appear as though parts of the tiled background are shifting, even though they are static. The only moving element is the dot.
From a CSS perspective, I coded the pattern using conic gradients, and applied it to the ::before and ::after pseudo-elements. I then flipped one upside down and clipped it.
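As a rough sketch of that structure (selectors and sizes are invented):

/* Sketch only: the same conic pattern on both pseudo-elements;
   the second is flipped upside down and clipped to one half */
.board::before,
.board::after {
  content: "";
  position: absolute;
  inset: 0;
  background: repeating-conic-gradient(#000 0 25%, #fff 0 50%);
  background-size: 40px 40px;
}
.board::after {
  transform: scaleY(-1);
  clip-path: inset(50% 0 0 0);
}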
The radii of a wheel, when viewed through a palisade, appear to curve. In reality, they are perfectly straight. Mouse over the demo to remove the palisade and you will see that the radii never bend.
This animation demonstrates how our minds infer motion that may not actually be there. Consider the two blue dots. Different people perceive different movements: side to side, top to bottom, or even circular motion.
Cover the right side of the animation so that you see only one dot at a time. The motion now appears vertical. Cover the bottom part instead, and the motion appears horizontal. This is our brain trying to complete the movement.
These two illustrations are identical - same shapes, same animation. The only difference is the CSS timing function.
The top animation moves smoothly from right to left. The bottom one appears to move choppily in the same direction, but if you focus on it, it may suddenly seem to reverse direction and move faster.
Most of the inspiration for these optical illusions came from two excellent resources:
You can also find this article on:
(You can leave comments on those platforms and I will reply there).
...
Read the original on alvaromontoro.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.