10 interesting stories served every morning and every evening.
“The Kernel in The Mind”

Based on “The Kernel in The Mind” by Moon Hee Lee. The kernel isn’t a process; it is the system itself. It serves user processes, reacts to context, and enforces separation and control. The kernel is not a process: it is the always-present authority bridging hardware and software. Serving the process: it orchestrates syscalls, interrupts, and scheduling to keep user tasks running. A system of layers: virtual, mapped, isolated, and controlled; structure at runtime.
...
Read the original on reverser.dev »
...
Read the original on replaceyourboss.ai »
...
Read the original on www.qualcomm.com »
Social media has become a reminder of something precious we are losing in the age of LLMs: unique voices.
Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager.
If you rely on an LLM to write all your posts, you are making a mistake.
Your voice is an asset. Not just what you want to say, but how you say it.
Your voice is unique. It is formed from your lifetime of lived experiences. No one’s voice will be exactly like yours.
Your voice becomes recognizable. Over many posts it becomes something people subconsciously connect with, recognize, trust, and look forward to.
Your voice provides the framework for the impression you leave in a job interview, while networking at a meet-up, or with a co-worker.
Years ago I got a job thanks to my blog posts. A manager wanted my voice influencing their organization. Your voice is an asset.
Your voice matures and becomes even more unique with time and practice.
LLMs can rob you of that voice, and the rest of us lose something precious in the process.
Having an LLM write “in your voice” is not the same. Your voice is not static. It changes with the tides of your life and state of mind. Your most impactful message may come because it was the right moment and you were in the right frame of mind.
Let your voice grow with use. Let it be unique.
Do not let one of your greatest assets fade into atrophy, wilted by cognitive laziness.
I do not care what the linguistic remix machine juggles into being.
I care what you have to say.
...
Read the original on tonyalicea.dev »
The chip made for the AI inference era — the Google TPU

As I find the topic of Google TPUs extremely important, I am publishing a comprehensive deep dive: not just a technical overview, but also strategic and financial coverage of the Google TPU. It covers:

The history of the TPU and why it all even started
The difference between a TPU and a GPU
Where the problems are for the wider adoption of TPUs
Why Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
How many TPUs Google produces today, and how big that can get
Gemini 3 and the aftermath of Gemini 3 on the whole chip industry

The history of the TPU and why it all even started

The story of the Google Tensor Processing Unit (TPU) begins not with a breakthrough in chip manufacturing, but with a realization about math and logistics. Around 2013, Google’s leadership—specifically Jeff Dean, Jonathan Ross (now the CEO of Groq), and the Google Brain team—ran a projection that alarmed them. They calculated that if every Android user utilized Google’s new voice search feature for just three minutes a day, the company would need to double its global data center capacity just to handle the compute load.

At the time, Google was relying on standard CPUs and GPUs for these tasks. While powerful, these general-purpose chips were inefficient for the specific heavy lifting required by deep learning: massive matrix multiplications. Scaling up with existing hardware would have been a financial and logistical nightmare.

This sparked a new project. Google decided to do something rare for a software company: build its own custom silicon. The goal was to create an ASIC (Application-Specific Integrated Circuit) designed for one job only: running TensorFlow neural networks.

2013-2014: The project moved really fast, as Google both hired a very capable team and, to be honest, had some luck in its first steps.
The team went from design concept to deploying silicon in data centers in just 15 months—a very short cycle for hardware engineering.

2015: Before the world knew they existed, TPUs were already powering Google’s most popular products. They were silently accelerating Google Maps navigation, Google Photos, and Google Translate.

This urgency to solve the “data center doubling” problem is why the TPU exists. It wasn’t built to sell to gamers or render video; it was built to save Google from its own AI success. With that in mind, Google has been thinking about the »costly« AI inference problem for over a decade now. This is also one of the main reasons why the TPU is so good today compared to other ASIC projects.

The difference between a TPU and a GPU

To understand the difference, it helps to look at what each chip was originally built to do. A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a systolic array.

The systolic array is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between memory and the computing units for every calculation.
This constant shuffling creates a bottleneck (the von Neumann bottleneck). In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”): inputs pass through a massive grid of multipliers, and each result is handed directly to the next unit in the array without being written back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.

Google’s new TPU design, called Ironwood, also addressed some of the key areas where the TPU was lacking:

It enhanced the SparseCore for efficiently handling large embeddings (good for recommendation systems and LLMs).

It increased HBM capacity and bandwidth (up to 192 GB per chip). For comparison, Nvidia’s Blackwell B200 has 192 GB per chip, while Blackwell Ultra, also known as the B300, has 288 GB per chip.

It improved the Inter-Chip Interconnect (ICI) for linking thousands of chips into massive clusters, also called TPU Pods (needed for AI training as well as some test-time compute inference workloads). When it comes to ICI, it is important to note that it is very performant, with a peak bandwidth of 1.2 TB/s vs. Blackwell NVLink 5 at 1.8 TB/s. But Google’s ICI, together with its specialized compiler and software stack, still delivers superior performance on some specific AI tasks.

The key thing to understand is that because the TPU doesn’t need to decode complex instructions or constantly access memory, it can deliver significantly higher operations per joule.

For scale-out, Google uses Optical Circuit Switching (OCS) and its 3D torus network, which compete with Nvidia’s InfiniBand and Spectrum-X Ethernet. The main difference is that OCS is extremely cost-effective and power-efficient, as it eliminates electrical switches and O-E-O conversions, but because of this it is not as flexible as the other two.
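The systolic data flow described above is easy to visualize with a toy model. The sketch below is not Google's design, just a cycle-by-cycle simulation of the classic systolic matrix multiply: operands enter with a skew, every processing element multiplies whatever arrives and accumulates locally, and nothing is written back to memory until the array drains.

```python
def systolic_matmul(A, B):
    """Toy cycle-by-cycle model of an n x n systolic array computing A @ B."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    # With skewed inputs, PE(i, j) receives A[i][k] from the left and
    # B[k][j] from above on cycle i + j + k; it multiplies them and adds
    # the product to its local accumulator -- no write-back to memory.
    for t in range(3 * n - 2):      # the pipeline drains after 3n - 2 cycles
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

Each input element is read from memory exactly once and then propagated between neighboring units, which is the source of the operations-per-joule advantage the article describes.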
So again, the Google stack is extremely specialized for the task at hand and doesn’t offer the flexibility that GPUs do.

Now that we have defined the differences, let’s look at real numbers showing how the TPU performs compared to the GPU. Since Google isn’t revealing these numbers, it is really hard to get details on performance. I studied many articles and alternative data sources, including interviews with industry insiders, and here are some of the key takeaways.

The first important thing is that there is very limited information on Google’s newest TPUv7 (Ironwood), as Google introduced it in April 2025 and it is only now starting to become available to external clients (internally, Google is said to have been using Ironwood since April, possibly even for Gemini 3.0). Why does this matter? Consider a comparison of TPUv7 with an older but still widely used version, TPUv5p, based on SemiAnalysis data:

TPUv7 has 192 GB of memory capacity vs. TPUv5p’s 96 GB
TPUv7’s memory bandwidth is 7,370 GB/s vs. 2,765 GB/s for v5p

We can see that the performance leaps between v5 and v7 are very significant. To put that in context, most of the comments that we will look at focus more on TPUv6 or TPUv5 than v7.

Based on analyzing a ton of interviews with former Google employees, customers, and competitors (people from AMD, NVDA & others), the summary of the results is as follows. Most agree that TPUs are more cost-effective compared to Nvidia GPUs, and most agree that the performance per watt for TPUs is better. This view is not applicable across all use cases, though.

»If it is the right application, then they can deliver much better performance per dollar compared to GPUs. They also require much lesser energy and produces less heat compared to GPUs.
They’re also more energy efficient and have a smaller environmental footprint, which is what makes them a desired outcome. The use cases are slightly limited to a GPU, they’re not as generic, but for a specific application, they can offer as much as 1.4X better performance per dollar, which is pretty significant saving for a customer that might be trying to use GPU versus TPUs.«

Similarly, a very insightful comment from a Former Unit Head at Google around TPUs materially lowering AI-search cost per query vs. GPUs:

»TPU v6 is 60-65% more efficient than GPUs, prior generations 40-45%«

This interview was in November 2024, so the expert is probably comparing the v6 TPU with Nvidia Hopper. Today, we already have Blackwell vs. v7.

Many experts also mention the speed benefit that TPUs offer, with a Former Google Head saying that TPUs are 5x faster than GPUs for training dynamic models (like search-like workloads).

There was also a very eye-opening interview with a client who used both Nvidia GPUs and Google TPUs, in which he describes the economics in great detail:

»If I were to use eight H100s versus using one v5e pod, I would spend a lot less money on one v5e pod. In terms of price point money, performance per dollar, you will get more bang for TPU. If I already have a code, because of Google’s help or because of our own work, if I know it already is going to work on a TPU, then at that point it is beneficial for me to just stick with the TPU usage.

In the long run, if I am thinking I need to write a new code base, I need to do a lot more work, then it depends on how long I’m going to train. I would say there is still some, for example, of the workload we have already done on TPUs that in the future because as Google will add newer generation of TPU, they make older ones much cheaper.
For example, when they came out with v4, I remember the price of v2 came down so low that it was practically free to use compared to any NVIDIA GPUs. Google has got a good promise so they keep supporting older TPUs and they’re making it a lot cheaper. If you don’t really need your model trained right away, if you’re willing to say, “I can wait one week,” even though the training is only three days, then you can reduce your cost 1/5.«

Another valuable interview was with a current AMD employee, acknowledging the benefits of ASICs:

»I would expect that an AI accelerator could do about probably typically what we see in the industry. I’m using my experience at FPGAs. I could see a 30% reduction in size and maybe a 50% reduction in power vs a GPU.«

We also got some numbers from a former Google employee who worked in the chip segment:

»When I look at the published numbers, they (TPUs) are anywhere from 25%-30% better to close to 2x better, depending on the use cases compared to Nvidia. Essentially, there’s a difference between a very custom design built to do one task perfectly versus a more general purpose design.«

What is also known is that the real edge of TPUs lies not in the hardware but in the software, and in the way Google has optimized its ecosystem for the TPU.

A lot of people mention the problem that every Nvidia »competitor« like the TPU faces, which is the fast pace of Nvidia’s development and the constant problem of »catching up« to Nvidia.
This month, a former Google Cloud employee addressed that concern head-on, as he believes the rate at which TPUs are improving is faster than Nvidia’s:

»The amount of performance per dollar that a TPU can generate from a new generation versus the old generation is a much significant jump than Nvidia«

In addition, recent data from Google’s presentation at the Hot Chips 2025 event backs that up, as Google stated that the TPUv7 is 100% better in performance per watt than their TPUv6e (Trillium).

Even for hard Nvidia advocates, TPUs are not to be shrugged off easily, as even Jensen Huang thinks very highly of Google’s TPUs. In a podcast with Brad Gerstner, he mentioned that when it comes to ASICs, Google with TPUs is a »special case«. A few months ago, we also got an article from the WSJ saying that after the news publication The Information published a report stating that OpenAI had begun renting Google TPUs for ChatGPT, Jensen called Altman, asking him if it was true, and signaled that he was open to getting the talks back on track (investment talks). Also worth noting: Nvidia’s official X account posted a screenshot of an article in which OpenAI denied plans to use Google’s in-house chips. To say the least, Nvidia is watching TPUs very closely.

OK, but after looking at some of these numbers, one might think: why aren’t more clients using TPUs?

Where are the problems for the wider adoption of TPUs

The main problem for TPU adoption is the ecosystem. Nvidia’s CUDA is engraved in the minds of most AI engineers, as they have been learning CUDA in universities. Google has developed its ecosystem internally but not externally, as it has used TPUs only for its internal workloads until now. TPUs use a combination of JAX and TensorFlow, while the industry skews to CUDA and PyTorch (although TPUs also support PyTorch now).
While Google is working hard to make its ecosystem more supportive of and convertible with other stacks, it is also a matter of libraries and ecosystem formation that takes years to develop.

It is also important to note that, until recently, the GenAI industry’s focus has largely been on training workloads. In training workloads, CUDA is very important, but when it comes to inference, even reasoning inference, CUDA is not that important, so the chances of expanding the TPU footprint in inference are much higher than in training (although TPUs do really well in training as well — Gemini 3 being the prime example).

The fact that most clients are multi-cloud also poses a challenge for TPU adoption, as AI workloads are closely tied to data and its location (cloud data transfer is costly). Nvidia is accessible via all three hyperscalers, while TPUs are available only on GCP so far. A client who uses both TPUs and Nvidia GPUs explains it well:

»Right now, the one biggest advantage of NVIDIA, and this has been true for past three companies I worked on is because AWS, Google Cloud and Microsoft Azure, these are the three major cloud companies. Every company, every corporate, every customer we have will have data in one of these three. All these three clouds have NVIDIA GPUs. Sometimes the data is so big and in a different cloud that it is a lot cheaper to run our workload in whatever cloud the customer has data in.

I don’t know if you know about the egress cost, that is, moving data out of one cloud is one of the bigger cost. In that case, if you have NVIDIA workload, if you have a CUDA workload, we can just go to Microsoft Azure, get a VM that has NVIDIA GPU, same GPU in fact, no code change is required and just run it there.

With TPUs, once you are all relied on TPU and Google says, “You know what? Now you have to pay 10X more,” then we would be screwed, because then we’ll have to go back and rewrite everything. That’s why.
That’s the only reason people are afraid of committing too much on TPUs. The same reason is for Amazon’s Trainium and Inferentia.«

These problems are well known at Google, so it is no surprise that, internally, the debate over keeping TPUs inside Google or starting to sell them externally is a constant topic. Keeping them internal enhances the GCP moat, but at the same time, many former Google employees believe that at some point Google will start offering TPUs externally as well, maybe through some neoclouds, not necessarily with its two biggest competitors, Microsoft and Amazon. Opening up the ecosystem, providing support, and making TPUs more widely usable are the first steps toward making that possible.

A former Google employee also mentioned that Google only last year formed a more sales-oriented team to push and sell TPUs, so it’s not as if they have been pushing hard to sell TPUs for years; it is a fairly new dynamic in the organization.

Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years

The most valuable thing for me about TPUs is their impact on GCP. As we witness the transformation of cloud businesses from the pre-AI era to the AI era, the biggest takeaway is that the industry has gone from an oligopoly of AWS, Azure, and GCP to a more commoditized landscape, with Oracle, CoreWeave, and many other neoclouds competing for AI workloads. The problem with AI workloads is the competition and Nvidia’s 75% gross margin, which together result in low margins for AI workloads. The cloud industry is moving from a 50-70% gross margin industry to a 20-35% gross margin industry. For cloud investors, this should be concerning, as the future profile of some of these companies looks more like that of a utility than an attractive, high-margin business.
But there is a solution for avoiding that future and returning to a normal margin: the ASIC.

The cloud providers who control the hardware and are not beholden to Nvidia and its 75% gross margin will be able to return to the world of 50% gross margins. It is no surprise that all three of AWS, Azure, and GCP are developing their own ASICs. The most mature by far is Google’s TPU, followed by Amazon’s Trainium, and lastly Microsoft’s MAIA (although Microsoft owns the full IP of OpenAI’s custom ASICs, which could help it in the future).

While even with ASICs you are not 100% independent, as you still have to work with someone like Broadcom or Marvell, whose margins are lower than Nvidia’s but still not negligible, Google is again in a very good position. Over the years of developing TPUs, Google has managed to bring much of the chip design process in-house. According to a current AMD employee, Broadcom no longer knows everything about the chip. At this point, Google is the front-end designer (the actual RTL of the design), while Broadcom is only the back-end physical design partner. Google, on top of that, also owns the entire software optimization stack for the chip, which makes it as performant as it is. Based on this work split, the AMD employee thinks Broadcom is lucky if it gets a 50-point gross margin on its part.

Without having to pay Nvidia for the accelerator, a cloud provider can either price its compute similarly to others and maintain a better margin profile, or lower costs and gain market share. Of course, all of this depends on having a very capable ASIC that can compete with Nvidia. So far, it looks like Google is the only one that has achieved that, as the number-one-performing model, Gemini 3, was trained on TPUs. According to some former Google employees, internally Google is also using TPUs for inference across its entire AI stack, including Gemini and models like Veo.
Google buys Nvidia GPUs for GCP, as clients want them because they are familiar with them and the ecosystem, but internally, Google is full-on with TPUs.

As the complexity of each generation of ASICs increases, similar to the complexity and pace of Nvidia, I predict that not all ASIC programs will make it. I believe that outside of TPUs, the only real hyperscaler shot right now is AWS Trainium, but even that faces much bigger uncertainties than the TPU. With that in mind, Google and its cloud business can come out of this AI era as a major beneficiary and market-share gainer.

Recently, we even got comments from the SemiAnalysis team praising the TPU:

»Google’s silicon supremacy among hyperscalers is unmatched, with their TPU 7th Gen arguably on par with Nvidia Blackwell. TPU powers the Gemini family of models which are improving in capability and sit close to the pareto frontier of $ per intelligence in some tasks«

How many TPUs does Google produce today, and how big can that get?

Here are the numbers that I researched:

This post is for paid subscribers
...
Read the original on www.uncoveralpha.com »
What we did in response
...
Read the original on mixpanel.com »
I was working on a bug in Chocolate Quake netcode. The issue was an edge case where starting two clients on the same machine resulted in the second one zombifying the first. When the bug occurred there was no disconnection, but the client could no longer move. Instead, the screen showed an “indicator” looking like an unplugged Ethernet cable in the upper left corner.
As I dug into the code, I learned there were more of these. Located inside pak0.pak, nested in gfx.wad, are the files TURTLE, DISC, RAM, and NET. I could not find anything written about these “indicators”, so I documented them here.
The TURTLE indicator shows up on screen when the framerate goes below 10 fps. It is unlikely to have been intended for players, but rather for people at id Software during development. Programmers could see where the engine was not fast enough. More importantly, map designers could see if they had too many polygons in specific areas of their map.
The TURTLE indicator can be enabled/disabled with command showturtle 1/0. The code is all in function SCR_DrawTurtle, where host_frametime is the time in seconds it took to draw the last frame.
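The check is small enough to sketch. Below is a Python rendering of the SCR_DrawTurtle logic (the original is C): the 0.1-second threshold is the 10 fps line, and the consecutive-frame counter mirrors the released Quake source, which waits for a few slow frames in a row so a single hitch doesn't flash the icon.

```python
SLOW_FRAME = 0.1  # seconds per frame, i.e. below 10 fps

_slow_count = 0  # consecutive slow frames, like the static counter in C

def should_draw_turtle(host_frametime):
    """Return True when the TURTLE icon should be drawn this frame.

    host_frametime is the time in seconds it took to draw the last
    frame, as in the Quake source.
    """
    global _slow_count
    if host_frametime < SLOW_FRAME:
        _slow_count = 0          # a fast frame resets the streak
        return False
    _slow_count += 1
    return _slow_count >= 3      # only shown after a few slow frames in a row
```

Treat the "3 consecutive frames" constant as a reading of the released source rather than gospel; the 10 fps threshold is the documented behavior.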
There is a scr_showturtle in Quake 2 source code but it does not do anything.
The icon doesn’t actually depict a turtle but a tortoise. A turtle swims in the water while a tortoise walks on land.
Quake does not render polygons directly from a texture and a lightmap. Instead it combines the two into a “surface”, which is then fed to the rasterizer. After being used, surfaces are not discarded but cached, because the next frame is likely to need the same surface again.
The RAM indicator warns when the engine evicts surfaces from the cache that were generated and cached during the same frame. This means the geometry of the map forces the engine to operate beyond its surface cache capacity. Under this condition, the renderer enters a catastrophic “death spiral” where it evicts surfaces that will be needed again later in the frame. Needless to say, the framerate suffers greatly.
This was likely a feature intended for map designers, to warn them of scenes going beyond the amount of surface cache memory Quake provisioned. See D_SCAlloc, where thrashing is detected, to learn more about it.
Like the turtle one, this indicator can also be enabled/disabled with command showram 1/0.
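The thrash test itself can be sketched with a toy ring allocator. The rover-and-wrap idea follows D_SCAlloc, but the fixed-size ring below is purely illustrative; the real allocator manages variable-sized blocks with ownership links.

```python
CACHE_SIZE = 8  # illustrative; the real size depends on screen resolution

class SurfaceCache:
    """Toy model of Quake's surface cache ring and its thrash detection."""

    def __init__(self):
        self.rover = 0               # next allocation position in the ring
        self.frame_start = 0         # where the rover was when the frame began
        self.wrapped_this_frame = False
        self.thrash = False          # plays the role of r_cache_thrash

    def begin_frame(self):
        self.frame_start = self.rover
        self.wrapped_this_frame = False
        self.thrash = False

    def alloc(self, size):
        if self.rover + size > CACHE_SIZE:   # ran off the end: wrap the ring
            self.rover = 0
            self.wrapped_this_frame = True
        start = self.rover
        self.rover += size
        # Wrapped AND caught up with this frame's own allocations:
        # we are now evicting surfaces built earlier in the same frame.
        if self.wrapped_this_frame and self.rover > self.frame_start:
            self.thrash = True
        return start
```

Wrapping by itself is normal (old frames' surfaces get recycled); the RAM icon only cares about the case where the wrap eats the current frame's own work.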
The DISC indicator wraps HDD access done via Sys_FileRead. It is unlikely it was used by developers to diagnose anything, since its screen location overlaps with the TURTLE indicator. It is just there to give feedback to players that the game is loading.
Because the icon is hidden when Sys_FileRead returns, it is normal to see it flicker on the screen (and it also looks kinda cool). The code for this indicator is in Draw_BeginDisc.
The NET indicator is displayed when a client has not received any packets from the server in the last 300ms. This was likely aimed at players to help them determine how bad their connection was (a distant server would easily have a 500ms ping in these dial-up over PPP modem days) or if they had plainly lost connection to the server.
The code for this indicator is in SCR_DrawNet.
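The condition is essentially a one-liner, sketched here in Python (the original C compares the current time against the time of the last received server message; the demo-playback exemption is how I read the released source):

```python
NET_TIMEOUT = 0.3  # seconds: the 300 ms threshold from the article

def should_draw_net(realtime, last_received_message, demo_playback=False):
    """True when the NET icon should be drawn.

    realtime and last_received_message are in seconds, as in the source.
    During demo playback there is no live server, so the icon is suppressed.
    """
    if demo_playback:
        return False
    return realtime - last_received_message >= NET_TIMEOUT
```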
The NET indicator is present and active in Quake 2. The code is still in SCR_DrawNet but the image is no longer in a wad. It is stored in pak0.pak at pics/net.pcx.
Below, a terrible user experience in which a frame made the engine thrash its surface cache, the framerate dropped below 10 fps, and the engine last received packets from the server more than 300ms ago.
...
Read the original on fabiensanglard.net »
Under the harsh lights of an operating theatre in the Indian capital, Delhi, a woman lies motionless as surgeons prepare to remove her gallbladder. She is under general anaesthesia: unconscious, insensate and rendered completely still by a blend of drugs that induce deep sleep, block memory, blunt pain and temporarily paralyse her muscles.

Yet, amid the hum of monitors and the steady rhythm of the surgical team, a gentle stream of flute music plays through the headphones placed over her ears.

Even as the drugs silence much of her brain, its auditory pathway remains partly active. When she wakes up, she will regain consciousness more quickly and clearly because she required lower doses of anaesthetic drugs such as propofol and opioid painkillers than patients who heard no music.

That, at least, is what a new peer-reviewed study from Delhi’s Maulana Azad Medical College and Lok Nayak Hospital suggests. The research, published in the journal Music and Medicine, offers some of the strongest evidence yet that music played during general anaesthesia can modestly but meaningfully reduce drug requirements and improve recovery.

The study focuses on patients undergoing laparoscopic cholecystectomy, the standard keyhole operation to remove the gallbladder. The procedure is short - usually under an hour - and demands a particularly swift, “clear-headed” recovery.

To understand why the researchers turned to music, it helps to decode the modern practice of anaesthesia.

“Our aim is early discharge after surgery,” says Dr Farah Husain, senior specialist in anaesthesia and certified music therapist for the study. “Patients need to wake up clear-headed, alert and oriented, and ideally pain-free. With better pain management, the stress response is curtailed.”

Achieving that requires a carefully balanced mix of five or six drugs that together keep the patient asleep, block pain, prevent memory of the surgery and relax the muscles.
In procedures like laparoscopic gallbladder removal, anaesthesiologists now often supplement this drug regimen with regional “blocks” - ultrasound-guided injections that numb nerves in the abdominal wall. “General anaesthesia plus blocks is the norm,” says Dr Tanvi Goel, primary investigator and a former senior resident of Maulana Azad Medical College. “We’ve been doing this for decades.”

But the body does not take to surgery easily. Even under anaesthesia, it reacts: heart rate rises, hormones surge, blood pressure spikes. Reducing and managing this cascade is one of the central goals of modern surgical care. Dr Husain explains that the stress response can slow recovery and worsen inflammation, highlighting why careful management is so important.

The stress starts even before the first cut, with intubation - the insertion of a breathing tube into the windpipe. To do this, the anaesthesiologist uses a laryngoscope to lift the tongue and soft tissues at the base of the throat, obtain a clear view of the vocal cords, and guide the tube into the trachea. It’s a routine step in general anaesthesia that keeps the airway open and allows precise control of the patient’s breathing while they are unconscious.

“The laryngoscopy and intubation are considered the most stressful response during general anaesthesia,” says Dr Sonia Wadhawan, director-professor of anaesthesia and intensive care at Maulana Azad Medical College and supervisor of the study. “Although the patient is unconscious and will remember nothing, their body still reacts to the stress with changes in heart rate, blood pressure, and stress hormones.”

To be sure, the drugs have evolved. The old ether masks have vanished. In their place are intravenous agents - most notably propofol, the hypnotic made infamous by Michael Jackson’s death but prized in operating theatres for its rapid onset and clean recovery. “Propofol acts within about 12 seconds,” notes Dr Goel.
“We prefer it for short surgeries like laparoscopic cholecystectomy because it avoids the ‘hangover’ caused by inhalational gases.”

The team of researchers wanted to know whether music could reduce how much propofol and fentanyl (an opioid painkiller) patients required. Fewer drugs mean faster awakening, steadier vital signs and fewer side effects.

So they designed a study. A pilot involving eight patients led to a full 11-month trial of 56 adults, aged roughly 20 to 45, randomly assigned to two groups. All received the same five-drug regimen: a drug that prevents nausea and vomiting, a sedative, fentanyl, propofol and a muscle relaxant. Both groups wore noise-cancelling headphones - but only one heard music.

“We asked patients to select from two calming instrumental pieces - soft flute or piano,” says Dr Husain. “The unconscious mind still has areas that remain active. Even if the music isn’t explicitly recalled, implicit awareness can lead to beneficial effects.”
Patients exposed to music required lower doses of propofol and fentanyl. They experienced smoother recoveries, lower cortisol or stress-hormone levels and much better control of blood pressure during the surgery. “Since the ability to hear remains intact under anaesthesia,” the researchers write, “music can still shape the brain’s internal state.”

Clearly, music seemed to quieten the internal storm. “The auditory pathway remains active even when you’re unconscious,” says Dr Wadhawan. “You may not remember the music, but the brain registers it.”

The idea that the mind behind the anaesthetic veil is not entirely silent has long intrigued scientists. Rare cases of “intraoperative awareness” show patients recalling fragments of operating-room conversation. If the brain is capable of picking up and remembering stressful experiences during surgery - even when a patient is unconscious - then it might also be able to register positive or comforting experiences, like music, even without conscious memory.

“We’re only beginning to explore how the unconscious mind responds to non-pharmacological interventions like music,” says Dr Husain. “It’s a way of humanising the operating room.”

Music therapy is not new to medicine; it has long been used in psychiatry, stroke rehabilitation and palliative care. But its entry into the intensely technical, machine-governed world of anaesthesia marks a quiet shift. If such a simple intervention can reduce drug use and speed recovery - even modestly - it could reshape how hospitals think about surgical wellbeing.

As the research team prepares its next study exploring music-aided sedation, building on earlier findings, one truth is already humming through the data: even when the body is still and the mind asleep, it appears a few gentle notes can help the healing begin.
...
Read the original on www.bbc.com »
Arthur Conan Doyle delved into men’s mental health through his Sherlock Holmes stories. (Image: Wikimedia Commons/Canva, Public domain)
Note: This article is republished from The Conversation under a Creative Commons license. It includes links to external sites that may earn a commission for purchases. We did not add these links and have kept the original content intact.
Arthur Conan Doyle was not just one of the world’s best crime fiction writers. He was a progressive wordsmith who brought light to controversial and taboo subjects. One of those taboo subjects was male vulnerability and mental health problems — a topic of personal significance to the author.
Doyle was a vulnerable child. His father, Charles, was an alcoholic, which led to financial troubles in the family. Charles was admitted to an asylum in 1881 and spent the next 12 years in various mental care establishments. So began Doyle’s interest in male vulnerability and mental health.
The character of Sherlock Holmes is a true expression of male vulnerability that does not equate it with weakness. Doyle does not represent Holmes as infallible, but as a man others can relate to — he battles with drug addiction, loneliness and depression. His genius thrives in part because of these vulnerabilities, not despite them.
Many of Doyle’s Sherlock Holmes stories examine male characters facing emotional catastrophe, betrayal or moral dilemmas. In works such as The Man with the Twisted Lip (1891), The Adventure of the Engineer’s Thumb (1892) and The Stockbroker’s Clerk (1894), Holmes’s male clients approach him with problems layered with emotional turmoil, fear and failure.
In The Man with the Twisted Lip, for example, a man named Neville St Clair hides his double life. He tells his family that he is a respectable entrepreneur going to London on business. In reality he is begging on the city streets. He lives this double life due to fear and shame over the inability to pay off his debts. “It was a long fight between my pride and the money,” he explains, “but the dollars won at last.”
“I would have endured imprisonment, ay, even execution, rather than have left my miserable secret as a family blot to my children,” St Clair says. In having his character consider execution to protect his and his family’s reputation, Doyle explored the societal expectations of Victorian masculinity and how men struggled with such pressures.
The Stockbroker’s Clerk also examines male suicide, as well as economic and professional anxieties. When Holmes reveals the crimes of Harry Pinner, the man attempts suicide rather than face prison.
In The Engineer’s Thumb, hydraulic engineer Victor is treated physically by Watson and mentally by Holmes. As Doyle writes: “Round one of his hands he had a handkerchief wrapped, which was mottled all over with bloodstains. He was young, not more than five-and-twenty, I should say, with a strong masculine face; but he was exceedingly pale and gave me the impression of a man who was suffering from some strong agitation, which it took all his strength of mind to control.”
The physical injury marks Victor as a victim of physical violence. Watson suggests that Victor is using all his mental capabilities to keep calm about his severe pain. Holmes treats Victor’s mind as he listens to his story: “Pray lie down there and make yourself absolutely at home. Tell us what you can, but stop when you are tired, and keep up your strength with a little stimulant.”
Holmes is a protector, a confidant and a comforter in this scene. He provides Victor with breakfast, induces him to lie down and offers him a stimulant (more than likely brandy).
The extreme violence Victor has endured has escalated into mental trauma. In having Holmes treat Victor’s mental trauma while Watson treats his physical pain, Doyle showed the importance of psychological support for men of the age.
Holmes was a highly popular character. To contemporary readers, his drug use and dysfunctional clients were seen as markers of his genius rather than a reflection of the significant social issues that men faced during this period. But today, they offer a window into the mental struggles of Victorian men, and a point of connection between readers of the past and present.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Emma Linford, Honorary research associate, English literature, University of Hull
This article is republished from The Conversation under a Creative Commons license. Read the original article.
...
Read the original on scienceclock.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.