10 interesting stories served every morning and every evening.




1 554 shares, 25 trendiness

Linux Kernel Explorer

Open a file from the explorer to begin.

“The Kernel in The Mind” — based on “The Kernel in The Mind” by Moon Hee Lee. 🧠 The kernel isn’t a process—it’s the system. It serves user processes, reacts to context, and enforces separation and control.

The Kernel Is Not a Process: it’s the always-present authority bridging hardware and software. Serving the Process: it orchestrates syscalls, interrupts, and scheduling to keep user tasks running. System of Layers: virtual, mapped, isolated, and controlled—structure at runtime.

1. What is the fundamental difference between the kernel and a process? (The kernel is not a process—it’s the system itself that serves processes / The kernel is just a library that processes link against / There is no difference; they are the same thing)
2. How does the kernel primarily serve user processes?
3. What characterizes the kernel’s system of layers? (Simple and flat with no hierarchy …)

...

Read the original on reverser.dev »

2 386 shares, 17 trendiness

AI CEO – Replace Your Boss Before They Replace You

...

Read the original on replaceyourboss.ai »

3 368 shares, 28 trendiness

Same-day upstream Linux support for Snapdragon 8 Elite Gen 5 mobile platform

...

Read the original on www.qualcomm.com »

4 337 shares, 19 trendiness

We're Losing Our Voice to LLMs

Social media has become a reminder of something precious we are losing in the age of LLMs: unique voices.

Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager.

If you rely on an LLM to write all your posts, you are making a mistake.

Your voice is an asset. Not just what you want to say, but how you say it.

Your voice is unique. It is formed from your lifetime of lived experiences. No one’s voice will be exactly like yours.

Your voice becomes recognizable. Over many posts it becomes something people subconsciously connect with, recognize, trust, and look forward to.

Your voice provides the framework for the impression you leave in a job interview, while networking at a meet-up, or with a co-worker.

Years ago I got a job thanks to my blog posts. A manager wanted my voice influencing their organization. Your voice is an asset.

Your voice matures and becomes even more unique with time and practice.

LLMs can rob you of that voice, and the rest of us lose something precious in the process.

Having an LLM “write in your voice” is not the same. Your voice is not static. It changes with the tides of your life and state of mind. Your most impactful message may come because it was the right moment and you were in the right frame of mind.

Let your voice grow with use. Let it be unique.

Do not let one of your greatest assets fade into atrophy, wilted by cognitive laziness.

I do not care what the linguistic remix machine juggles into being.

I care what you have to say.

...

Read the original on tonyalicea.dev »

5 302 shares, 20 trendiness

The chip made for the AI inference era – the Google TPU

As I find the topic of Google TPUs extremely important, I am publishing a comprehensive deep dive: not just a technical overview, but also strategic and financial coverage of the Google TPU. It covers:

The history of the TPU and why it all even started
The difference between a TPU and a GPU
Where are the problems for the wider adoption of TPUs
Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
How many TPUs does Google produce today, and how big can that get?
Gemini 3 and the aftermath of Gemini 3 on the whole chip industry

The history of the TPU and why it all even started

The story of the Google Tensor Processing Unit (TPU) begins not with a breakthrough in chip manufacturing, but with a realization about math and logistics. Around 2013, Google’s leadership—specifically Jeff Dean, Jonathan Ross (the CEO of Groq), and the Google Brain team—ran a projection that alarmed them. They calculated that if every Android user utilized Google’s new voice search feature for just three minutes a day, the company would need to double its global data center capacity just to handle the compute load.

At the time, Google was relying on standard CPUs and GPUs for these tasks. While powerful, these general-purpose chips were inefficient for the specific heavy lifting required by deep learning: massive matrix multiplications. Scaling up with existing hardware would have been a financial and logistical nightmare.

This sparked a new project. Google decided to do something rare for a software company: build its own custom silicon. The goal was to create an ASIC (Application-Specific Integrated Circuit) designed for one job only: running TensorFlow neural networks.

2013-2014: The project moved really fast, as Google both hired a very capable team and, to be honest, had some luck in its first steps. The team went from design concept to deploying silicon in data centers in just 15 months—a very short cycle for hardware engineering.

2015: Before the world knew they existed, TPUs were already powering Google’s most popular products. They were silently accelerating Google Maps navigation, Google Photos, and Google Translate.

This urgency to solve the “data center doubling” problem is why the TPU exists. It wasn’t built to sell to gamers or render video; it was built to save Google from its own AI success. With that in mind, Google has been thinking about the costly AI inference problem for over a decade now. This is also one of the main reasons why the TPU is so good today compared to other ASIC projects.

The difference between a TPU and a GPU

To understand the difference, it helps to look at what each chip was originally built to do. A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a systolic array.

The “systolic array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between memory and the computing units for every calculation.
This constant shuffling creates a bottleneck (the von Neumann bottleneck). In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”). Inputs pass through a massive grid of multipliers, and the data is passed directly to the next unit in the array without being written back to memory. What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.

Google’s new TPU design, also called Ironwood, also addressed some of the key areas where the TPU was lacking:

It enhanced the SparseCore for efficiently handling large embeddings (good for recommendation systems and LLMs).
It increased HBM capacity and bandwidth (up to 192 GB per chip). For a better understanding, Nvidia’s Blackwell B200 has 192 GB per chip, while Blackwell Ultra, also known as the B300, has 288 GB per chip.
It improved the Inter-Chip Interconnect (ICI) for linking thousands of chips into massive clusters, also called TPU Pods (needed for AI training as well as some test-time-compute inference workloads).

When it comes to ICI, it is important to note that it is very performant, with a peak bandwidth of 1.2 TB/s vs Blackwell NVLink 5 at 1.8 TB/s. But Google’s ICI, together with its specialized compiler and software stack, still delivers superior performance on some specific AI tasks. The key thing to understand is that because the TPU doesn’t need to decode complex instructions or constantly access memory, it can deliver significantly higher operations per joule.

For scale-out, Google uses Optical Circuit Switches (OCS) and its 3D torus network, which compete with Nvidia’s InfiniBand and Spectrum-X Ethernet.
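The systolic dataflow described above can be sketched as a toy simulation. This is purely illustrative (an output-stationary array with invented names, not Google’s design): A operands enter skewed from the left edge and hop one unit to the right per cycle, B operands enter from the top and hop down, and each multiply-accumulate unit keeps its own partial sum, so every input is fetched from memory once instead of once per multiplication.

```c
#include <stdio.h>

/* Toy output-stationary systolic array computing C = A x B.
 * Each processing element (PE) at (i, j) owns the accumulator for C[i][j];
 * operands hop between neighbouring PEs each cycle rather than being
 * re-read from memory for every multiply. Names and sizes are invented. */
#define M 2
#define K 2
#define N 2

void systolic_matmul(int A[M][K], int B[K][N], int C[M][N])
{
    int a_reg[M][N] = {{0}}, b_reg[M][N] = {{0}}; /* per-PE operand registers */
    int a_new[M][N], b_new[M][N];

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0;

    /* The skewed wavefront drains after M + N + K - 2 cycles. */
    for (int t = 0; t < M + N + K - 2; t++) {
        for (int i = 0; i < M; i++) {
            for (int j = 0; j < N; j++) {
                /* Edge PEs read the skewed input streams; inner PEs read
                 * their neighbour's register from the previous cycle. */
                int a = (j == 0) ? ((t - i >= 0 && t - i < K) ? A[i][t - i] : 0)
                                 : a_reg[i][j - 1];
                int b = (i == 0) ? ((t - j >= 0 && t - j < K) ? B[t - j][j] : 0)
                                 : b_reg[i - 1][j];
                C[i][j] += a * b;   /* multiply-accumulate in place */
                a_new[i][j] = a;    /* pass operands on to neighbours */
                b_new[i][j] = b;
            }
        }
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++) {
                a_reg[i][j] = a_new[i][j];
                b_reg[i][j] = b_new[i][j];
            }
    }
}
```

For 2x2 inputs the wave drains in four cycles, and C ends up equal to the ordinary matrix product; the point of the exercise is that each A[i][k] and B[k][j] is read from memory exactly once.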
The main difference is that OCS is extremely cost-effective and power-efficient, as it eliminates electrical switches and O-E-O conversions, but because of this it is not as flexible as the other two. So again, the Google stack is extremely specialized for the task at hand and doesn’t offer the flexibility that GPUs do.

Now that we have defined the differences, let’s look at real numbers showing how the TPU performs compared to the GPU. Since Google isn’t revealing these numbers, it is really hard to get details on performance. I studied many articles and alternative data sources, including interviews with industry insiders, and here are some of the key takeaways.

The first important thing is that there is very limited information on Google’s newest TPUv7 (Ironwood), as Google introduced it in April 2025 and it is just now starting to become available to external clients (internally, it is said that Google has already been using Ironwood since April, possibly even for Gemini 3.0). And why is this important? If we, for example, compare TPUv7 with the older but still widely used TPUv5p, based on SemiAnalysis data:

TPUv7 has 192 GB of memory capacity vs 96 GB for TPUv5p.
TPUv7 memory bandwidth is 7,370 GB/s vs 2,765 GB/s for v5p.

We can see that the performance leaps between v5 and v7 are very significant. To put that in context, most of the comments that we will look at are more focused on TPUv6 or TPUv5 than v7.

Based on analyzing a ton of interviews with former Google employees, customers, and competitors (people from AMD, NVDA & others), the summary of the results is as follows. Most agree that TPUs are more cost-effective compared to Nvidia GPUs, and most agree that the performance per watt for TPUs is better. This view is not applicable across all use cases, though:

»If it is the right application, then they can deliver much better performance per dollar compared to GPUs. They also require much lesser energy and produces less heat compared to GPUs. They’re also more energy efficient and have a smaller environmental footprint, which is what makes them a desired outcome. The use cases are slightly limited to a GPU, they’re not as generic, but for a specific application, they can offer as much as 1.4X better performance per dollar, which is pretty significant saving for a customer that might be trying to use GPU versus TPUs.«

Similarly, a very insightful comment from a former unit head at Google, around TPUs materially lowering AI-search cost per query vs GPUs:

»TPU v6 is 60-65% more efficient than GPUs, prior generations 40-45%«

This interview was in November 2024, so the expert is probably comparing the v6 TPU with Nvidia Hopper. Today, we already have Blackwell vs v7.

Many experts also mention the speed benefit that TPUs offer, with a former Google head saying that TPUs are 5x faster than GPUs for training dynamic models (like search-like workloads).

There was also a very eye-opening interview with a client who used both Nvidia GPUs and Google TPUs, in which he describes the economics in great detail:

»If I were to use eight H100s versus using one v5e pod, I would spend a lot less money on one v5e pod. In terms of price point money, performance per dollar, you will get more bang for TPU. If I already have a code, because of Google’s help or because of our own work, if I know it already is going to work on a TPU, then at that point it is beneficial for me to just stick with the TPU usage. In the long run, if I am thinking I need to write a new code base, I need to do a lot more work, then it depends on how long I’m going to train. I would say there is still some, for example, of the workload we have already done on TPUs that in the future because as Google will add newer generation of TPU, they make older ones much cheaper.

For example, when they came out with v4, I remember the price of v2 came down so low that it was practically free to use compared to any NVIDIA GPUs. Google has got a good promise so they keep supporting older TPUs and they’re making it a lot cheaper. If you don’t really need your model trained right away, if you’re willing to say, “I can wait one week,” even though the training is only three days, then you can reduce your cost 1/5.«

Another valuable interview was with a current AMD employee, acknowledging the benefits of ASICs:

»I would expect that an AI accelerator could do about probably typically what we see in the industry. I’m using my experience at FPGAs. I could see a 30% reduction in size and maybe a 50% reduction in power vs a GPU.«

We also got some numbers from a former Google employee who worked in the chip segment:

»When I look at the published numbers, they (TPUs) are anywhere from 25%-30% better to close to 2x better, depending on the use cases compared to Nvidia. Essentially, there’s a difference between a very custom design built to do one task perfectly versus a more general purpose design.«

What is also known is that the real edge of TPUs lies not in the hardware but in the software, and in the way Google has optimized its ecosystem for the TPU.

A lot of people mention the problem that every Nvidia »competitor« like the TPU faces, which is Nvidia’s fast pace of development and the constant »catching up to Nvidia« problem. This month a former Google Cloud employee addressed that concern head-on, as he believes the rate at which TPUs are improving is faster than Nvidia’s:

»The amount of performance per dollar that a TPU can generate from a new generation versus the old generation is a much significant jump than Nvidia«

In addition, recent data from Google’s presentation at the Hot Chips 2025 event backs that up, as Google stated that the TPUv7 is 100% better in performance per watt than their TPUv6e (Trillium).

Even for hard Nvidia advocates, TPUs are not to be shrugged off easily, as even Jensen thinks very highly of Google’s TPUs. In a podcast with Brad Gerstner, he mentioned that when it comes to ASICs, Google with TPUs is a »special case«. A few months ago, we also got an article from the WSJ saying that after the news publication The Information published a report stating that OpenAI had begun renting Google TPUs for ChatGPT, Jensen called Altman, asking him if it was true, and signaled that he was open to getting the (investment) talks back on track. Also worth noting: Nvidia’s official X account posted a screenshot of an article in which OpenAI denied plans to use Google’s in-house chips. To say the least, Nvidia is watching TPUs very closely.

Ok, but after looking at some of these numbers, one might think: why aren’t more clients using TPUs?
Where are the problems for the wider adoption of TPUs

The main problem for TPU adoption is the ecosystem. Nvidia’s CUDA is engraved in the minds of most AI engineers, as they have been learning CUDA in universities. Google has developed its ecosystem internally but not externally, as it has used TPUs only for its internal workloads until now. TPUs use a combination of JAX and TensorFlow, while the industry skews to CUDA and PyTorch (although TPUs also support PyTorch now). While Google is working hard to make its ecosystem more supportive of and compatible with other stacks, it is also a matter of libraries and ecosystem formation that takes years to develop.

It is also important to note that, until recently, the GenAI industry’s focus has largely been on training workloads. In training workloads, CUDA is very important, but when it comes to inference, even reasoning inference, CUDA is not that important, so the chances of expanding the TPU footprint in inference are much higher than in training (although TPUs do really well in training as well—Gemini 3 being the prime example).

The fact that most clients are multi-cloud also poses a challenge for TPU adoption, as AI workloads are closely tied to data and its location (cloud data transfer is costly). Nvidia is accessible via all three hyperscalers, while TPUs are available only on GCP so far. A client who uses TPUs and Nvidia GPUs explains it well:

»Right now, the one biggest advantage of NVIDIA, and this has been true for past three companies I worked on is because AWS, Google Cloud and Microsoft Azure, these are the three major cloud companies. Every company, every corporate, every customer we have will have data in one of these three. All these three clouds have NVIDIA GPUs. Sometimes the data is so big and in a different cloud that it is a lot cheaper to run our workload in whatever cloud the customer has data in. I don’t know if you know about the egress cost that is moving data out of one cloud is one of the bigger cost. In that case, if you have NVIDIA workload, if you have a CUDA workload, we can just go to Microsoft Azure, get a VM that has NVIDIA GPU, same GPU in fact, no code change is required and just run it there. With TPUs, once you are all relied on TPU and Google says, “You know what? Now you have to pay 10X more,” then we would be screwed, because then we’ll have to go back and rewrite everything. That’s why. That’s the only reason people are afraid of committing too much on TPUs. The same reason is for Amazon’s Trainium and Inferentia.«

These problems are well known at Google, so it is no surprise that internally, the debate over keeping TPUs inside Google or starting to sell them externally is a constant topic. Keeping them internal enhances the GCP moat, but at the same time, many former Google employees believe that at some point Google will start offering TPUs externally as well, maybe through some neoclouds, not necessarily with its two biggest competitors, Microsoft and Amazon. Opening up the ecosystem, providing support, etc., and making it more widely usable are the first steps toward making that possible.

A former Google employee also mentioned that Google formed a more sales-oriented team to push and sell TPUs only last year, so it’s not like they have been pushing hard to sell TPUs for years; it is a fairly new dynamic in the organization.

Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years

The most valuable thing for me about TPUs is their impact on GCP. As we witness the transformation of cloud businesses from the pre-AI era to the AI era, the biggest takeaway is that the industry has gone from an oligopoly of AWS, Azure, and GCP to a more commoditized landscape, with Oracle, CoreWeave, and many other neoclouds competing for AI workloads. The problem with AI workloads is the competition and Nvidia’s 75% gross margin, which also results in low margins for AI workloads. The cloud industry is moving from a 50-70% gross margin industry to a 20-35% gross margin industry.
For cloud investors, this should be concerning, as the future profile of some of these companies is more like that of a utility than an attractive, high-margin business. But there is a solution for avoiding that future and returning to a normal margin: the ASIC.

The cloud providers who can control the hardware and are not beholden to Nvidia and its 75% gross margin will be able to return to the world of 50% gross margins. And it is no surprise that all three (AWS, Azure, and GCP) are developing their own ASICs. The most mature by far is Google’s TPU, followed by Amazon’s Trainium, and lastly Microsoft’s MAIA (although Microsoft owns the full IP of OpenAI’s custom ASICs, which could help them in the future).

While even with ASICs you are not 100% independent, as you still have to work with someone like Broadcom or Marvell, whose margins are lower than Nvidia’s but still not negligible, Google is again in a very good position. Over the years of developing TPUs, Google has managed to bring much of the chip design process in-house. According to a current AMD employee, Broadcom no longer knows everything about the chip. At this point, Google is the front-end designer (the actual RTL of the design), while Broadcom is only the back-end physical design partner. Google, on top of that, of course also owns the entire software optimization stack for the chip, which makes it as performant as it is. According to the AMD employee, based on this work split, Broadcom is lucky if it gets a 50-point gross margin on its part.

Without having to pay Nvidia for the accelerator, a cloud provider can either price its compute similarly to others and maintain a better margin profile, or lower costs and gain market share. Of course, all of this depends on having a very capable ASIC that can compete with Nvidia. Unfortunately, it looks like Google is the only one that has achieved that, as the number-one-performing model, Gemini 3, was trained on TPUs. According to some former Google employees, internally Google is also using TPUs for inference across its entire AI stack, including Gemini and models like Veo. Google buys Nvidia GPUs for GCP, as clients want them because they are familiar with them and the ecosystem, but internally, Google is full-on with TPUs.

As the complexity of each generation of ASICs increases, similar to the complexity and pace of Nvidia, I predict that not all ASIC programs will make it. I believe that outside of TPUs, the only real hyperscaler shot right now is AWS Trainium, but even that faces much bigger uncertainties than the TPU. With that in mind, Google and its cloud business can come out of this AI era as a major beneficiary and market-share gainer.

Recently, we even got comments from the SemiAnalysis team praising the TPU:

»Google’s silicon supremacy among hyperscalers is unmatched, with their TPU 7th Gen arguably on par with Nvidia Blackwell. TPU powers the Gemini family of models which are improving in capability and sit close to the pareto frontier of $ per intelligence in some tasks«

How many TPUs does Google produce today, and how big can that get?

Here are the numbers that I researched:

...

Read the original on www.uncoveralpha.com »

6 221 shares, 9 trendiness

Our response to a recent security incident

What we did in re­sponse

...

Read the original on mixpanel.com »

7 216 shares, 16 trendiness

Quake Engine Indicators

I was working on a bug in Chocolate Quake netcode. The issue was an edge case where starting two clients on the same machine resulted in the second one zombifying the first one. When the bug occurred there was no disconnection, but the client could no longer move. Instead, the screen would show an “indicator” looking like an unplugged Ethernet cable in the upper left corner.

As I dug into the code, I learned there were more of these. Located inside pak0.pak and nested in gfx.wad are the files TURTLE, DISC, RAM, and NET. I could not find anything about these “indicators”, so I documented them here.

The TURTLE indicator shows up on screen when the framerate goes below 10 fps. It is unlikely to have been intended for players but rather for people at id Software during development. Programmers could see where the engine was not fast enough. More importantly, map designers could see if they had too many polygons in specific areas of their map.

The TURTLE indicator can be enabled/disabled with the command showturtle 1/0. The code is all in function SCR_DrawTurtle, where host_frametime is the time in seconds it took to draw the last frame.

There is a scr_showturtle in the Quake 2 source code, but it does not do anything.

The icon doesn’t actually depict a turtle but a tortoise. A turtle swims in the water while a tortoise walks on land.
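For a sense of the logic, here is a minimal sketch of the threshold check, paraphrasing SCR_DrawTurtle from the GPL Quake release (the standalone function form and its name are mine; in the engine the counter guards a Draw_Pic call): the icon is drawn only once three consecutive frames have taken 0.1 s or longer, so a single hitch does not flash the turtle.

```c
#include <stdbool.h>

/* Sketch of the SCR_DrawTurtle gate (paraphrased, not verbatim engine code):
 * host_frametime is the duration of the last frame in seconds. */
static int slow_frames;

bool turtle_visible(double host_frametime)
{
    if (host_frametime < 0.1) {  /* frame faster than 10 fps: reset counter */
        slow_frames = 0;
        return false;
    }
    slow_frames++;
    return slow_frames >= 3;     /* from here on, draw the TURTLE pic */
}
```
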

Quake does not render polygons directly using a texture and a lightmap. Instead, it combines these two into a “surface” which is then fed to the rasterizer. After being used, surfaces are not discarded but cached, because the next frame is likely to need the same surface again.

The RAM indicator is there to warn when the engine evicts from the cache surfaces that were generated and cached on the same frame. This means the geometry of the map forces the engine to operate beyond its surface cache capacity. Under this condition, the renderer enters a catastrophic “death spiral” where it evicts surfaces that will be needed later in the frame. Needless to say, the framerate suffers greatly.

This was likely a feature intended for map designers, to warn them of scenes going beyond the amount of surface cache memory Quake provisioned. See D_SCAlloc, where thrashing is detected, to learn more about it.

Like the turtle one, this indicator can also be enabled/disabled with the command showram 1/0.
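A minimal sketch of how such detection can work, assuming a frame-stamped cache (the structure and names below are invented for illustration, not the actual D_SCAlloc code): evicting a surface that was built during the current frame means one frame’s working set exceeds the cache, which is exactly the thrashing the RAM icon warns about.

```c
#include <stdbool.h>

/* Hypothetical, simplified thrash detection: each cached surface is
 * stamped with the frame counter at the moment it was built. */
typedef struct {
    int built_on_frame;          /* frame counter value at build time */
} surfcache_t;

static bool r_cache_thrash;      /* latched flag driving the RAM icon */

/* Called when the allocator must evict an entry to make room. */
void evict_surface(const surfcache_t *s, int current_frame)
{
    if (s->built_on_frame == current_frame)
        r_cache_thrash = true;   /* evicted a surface made this very frame */
}

bool ram_icon_visible(void) { return r_cache_thrash; }
```
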

The DISC indicator wraps HDD access done via Sys_FileRead. It is unlikely it was used by developers to diagnose anything, since its screen location overlaps with the TURTLE indicator. It is just there to give feedback to players that the game is loading.

Because the icon is hidden when Sys_FileRead returns, it is normal to see it flicker on the screen (and it also looks kinda cool). The code for this indicator is in Draw_BeginDisc.

The NET indicator is displayed when a client has not received any packets from the server in the last 300 ms. This was likely aimed at players, to help them determine how bad their connection was (a distant server would easily have a 500 ms ping in those dial-up-over-PPP modem days) or if they had plainly lost connection to the server.

The code for this indicator is in SCR_DrawNet.

The NET indicator is present and active in Quake 2. The code is still in SCR_DrawNet, but the image is no longer in a wad. It is stored in pak0.pak at pics/net.pcx.

Below, a terrible user experience where the frame made the engine thrash its surface cache, the framerate dropped below 10 fps, and the engine last received packets from the server more than 300 ms ago.

...

Read the original on fabiensanglard.net »

8 207 shares, 9 trendiness

Indian study finds music helps patients heal under anaesthesia

Under the harsh lights of an operating theatre in the Indian capital, Delhi, a woman lies motionless as surgeons prepare to remove her gallbladder. She is under general anaesthesia: unconscious, insensate and rendered completely still by a blend of drugs that induce deep sleep, block memory, blunt pain and temporarily paralyse her muscles.

Yet, amid the hum of monitors and the steady rhythm of the surgical team, a gentle stream of flute music plays through the headphones placed over her ears.

Even as the drugs silence much of her brain, its auditory pathway remains partly active. When she wakes up, she will regain consciousness more quickly and clearly because she required lower doses of anaesthetic drugs such as propofol and opioid painkillers than patients who heard no music.

That, at least, is what a new peer-reviewed study from Delhi’s Maulana Azad Medical College and Lok Nayak Hospital suggests. The research, published in the journal Music and Medicine, offers some of the strongest evidence yet that music played during general anaesthesia can modestly but meaningfully reduce drug requirements and improve recovery.

The study focuses on patients undergoing laparoscopic cholecystectomy, the standard keyhole operation to remove the gallbladder. The procedure is short - usually under an hour - and demands a particularly swift, “clear-headed” recovery.

To understand why the researchers turned to music, it helps to decode the modern practice of anaesthesia.

“Our aim is early discharge after surgery,” says Dr Farah Husain, senior specialist in anaesthesia and certified music therapist for the study. “Patients need to wake up clear-headed, alert and oriented, and ideally pain-free. With better pain management, the stress response is curtailed.”

Achieving that requires a carefully balanced mix of five or six drugs that together keep the patient asleep, block pain, prevent memory of the surgery and relax the muscles.

In procedures like laparoscopic gallbladder removal, anaesthesiologists now often supplement this drug regimen with “regional blocks” - ultrasound-guided injections that numb nerves in the abdominal wall. “General anaesthesia plus blocks is the norm,” says Dr Tanvi Goel, primary investigator and a former senior resident of Maulana Azad Medical College. “We’ve been doing this for decades.”

But the body does not take to surgery easily. Even under anaesthesia, it reacts: heart rate rises, hormones surge, blood pressure spikes. Reducing and managing this cascade is one of the central goals of modern surgical care. Dr Husain explains that the stress response can slow recovery and worsen inflammation, highlighting why careful management is so important. The stress starts even before the first cut, with intubation - the insertion of a breathing tube into the windpipe.

To do this, the anaesthesiologist uses a laryngoscope to lift the tongue and soft tissues at the base of the throat, obtain a clear view of the vocal cords, and guide the tube into the trachea. It’s a routine step in general anaesthesia that keeps the airway open and allows precise control of the patient’s breathing while they are unconscious.

“The laryngoscopy and intubation are considered the most stressful response during general anaesthesia,” says Dr Sonia Wadhawan, director-professor of anaesthesia and intensive care at Maulana Azad Medical College and supervisor of the study. “Although the patient is unconscious and will remember nothing, their body still reacts to the stress with changes in heart rate, blood pressure, and stress hormones.”

To be sure, the drugs have evolved. The old ether masks have vanished. In their place are intravenous agents - most notably propofol, the hypnotic made infamous by Michael Jackson’s death but prized in operating theatres for its rapid onset and clean recovery. “Propofol acts within about 12 seconds,” notes Dr Goel. “We prefer it for short surgeries like laparoscopic cholecystectomy because it avoids the ‘hangover’ caused by inhalational gases.”

The team of researchers wanted to know whether music could reduce how much propofol and fentanyl (an opioid painkiller) patients required. Fewer drugs mean faster awakening, steadier vital signs and reduced side effects.

So they designed a study. A pilot involving eight patients led to a full 11-month trial of 56 adults, aged roughly 20 to 45, randomly assigned to two groups. All received the same five-drug regimen: a drug that prevents nausea and vomiting, a sedative, fentanyl, propofol and a muscle relaxant. Both groups wore noise-cancelling headphones - but only one heard music.

“We asked patients to select from two calming instrumental pieces - soft flute or piano,” says Dr Husain. “The unconscious mind still has areas that remain active. Even if the music isn’t explicitly recalled, implicit awareness can lead to beneficial effects.”

Patients exposed to music required lower doses of propofol and fentanyl. They experienced smoother recoveries, lower cortisol or stress-hormone levels and much better control of blood pressure during the surgery. “Since the ability to hear remains intact under anaesthesia,” the researchers write, “music can still shape the brain’s internal state.”

Clearly, music seemed to quieten the internal storm. “The auditory pathway remains active even when you’re unconscious,” says Dr Wadhawan. “You may not remember the music, but the brain registers it.”

The idea that the mind behind the anaesthetic veil is not entirely silent has long intrigued scientists. Rare cases of “intraoperative awareness” show patients recalling fragments of operating-room conversation. If the brain is capable of picking up and remembering stressful experiences during surgery - even when a patient is unconscious - then it might also be able to register positive or comforting experiences, like music, even without conscious memory.

“We’re only beginning to explore how the unconscious mind responds to non-pharmacological interventions like music,” says Dr Husain. “It’s a way of humanising the operating room.”

Music therapy is not new to medicine; it has long been used in psychiatry, stroke rehabilitation and palliative care. But its entry into the intensely technical, machine-governed world of anaesthesia marks a quiet shift. If such a simple intervention can reduce drug use and speed recovery - even modestly - it could reshape how hospitals think about surgical wellbeing.

As the research team prepares its next study exploring music-aided sedation, building on earlier findings, one truth is already humming through the data: even when the body is still and the mind asleep, it appears a few gentle notes can help the healing begin.

...

Read the original on www.bbc.com »

9 197 shares, 11 trendiness

How Arthur Conan Doyle Explored Men’s Mental Health Through Sherlock Holmes

Arthur Conan Doyle delved into men’s mental health through his Sherlock Holmes stories. (Image: Wikimedia Commons/Canva, Public domain)

Note: This ar­ti­cle is re­pub­lished from The Conversation un­der a Creative Commons li­cense. It in­cludes links to ex­ter­nal sites that may earn a com­mis­sion for pur­chases. We did not add these links and have kept the orig­i­nal con­tent in­tact.

Arthur Conan Doyle was not just one of the world’s best crime fic­tion writ­ers. He was a pro­gres­sive word­smith who brought light to con­tro­ver­sial and taboo sub­jects. One of those taboo sub­jects was male vul­ner­a­bil­ity and men­tal health prob­lems — a topic of per­sonal sig­nif­i­cance to the au­thor.

Doyle was a vul­ner­a­ble child. His fa­ther, Charles, was an al­co­holic, which led to fi­nan­cial trou­bles in the fam­ily. Charles was ad­mit­ted to an asy­lum in 1881 and spent the next 12 years in var­i­ous men­tal care es­tab­lish­ments. So be­gan Doyle’s in­ter­est in male vul­ner­a­bil­ity and men­tal health.

The char­ac­ter of Sherlock Holmes is a true ex­pres­sion of male vul­ner­a­bil­ity that does not equate it with weak­ness. Doyle does not rep­re­sent Holmes as in­fal­li­ble, but as a man oth­ers can re­late to — he bat­tles with drug ad­dic­tion, lone­li­ness and de­pres­sion. His ge­nius thrives in part be­cause of these vul­ner­a­bil­i­ties, not de­spite them.


Many of Doyle’s Sherlock Holmes sto­ries ex­am­ine male char­ac­ters fac­ing emo­tional cat­a­stro­phe, be­trayal or moral dilem­mas. In works such as The Man with the Twisted Lip (1891), The Adventure of the Engineer’s Thumb (1892) and The Stockbroker’s Clerk (1894), Holmes’s male clients ap­proach him with prob­lems lay­ered with emo­tional tur­moil, fear and fail­ure.

In The Man with the Twisted Lip, for ex­am­ple, a man named Neville St Clair hides his dou­ble life. He tells his fam­ily that he is a re­spectable en­tre­pre­neur go­ing to London on busi­ness. In re­al­ity he is beg­ging on the city streets. He lives this dou­ble life due to fear and shame over the in­abil­ity to pay off his debts. It was a long fight be­tween my pride and the money,” he ex­plains, but the dol­lars won at last.”


I would have en­dured im­pris­on­ment, ay, even ex­e­cu­tion, rather than have left my mis­er­able se­cret as a fam­ily blot to my chil­dren,” St Clair says. In hav­ing his char­ac­ter con­sider ex­e­cu­tion to pro­tect his and his fam­i­ly’s rep­u­ta­tion, Doyle ex­plored the so­ci­etal ex­pec­ta­tions of Victorian mas­culin­ity and how men strug­gled with such pres­sures.


The Stockbroker’s Clerk also ex­am­ines male sui­cide, as well as eco­nomic and pro­fes­sional anx­i­eties. When Holmes re­veals the crimes of Harry Pinner, the man at­tempts sui­cide rather than face prison.

In The Engineer’s Thumb, hy­draulic en­gi­neer Victor is treated phys­i­cally by Watson and men­tally by Holmes. As Doyle writes: Round one of his hands he had a hand­ker­chief wrapped, which was mot­tled all over with blood­stains. He was young, not more than five-and-twenty, I should say, with a strong mas­cu­line face; but he was ex­ceed­ingly pale and gave me the im­pres­sion of a man who was suf­fer­ing from some strong ag­i­ta­tion, which it took all his strength of mind to con­trol.”

The phys­i­cal in­jury marks Victor as a vic­tim of phys­i­cal vi­o­lence. Watson sug­gests that Victor is us­ing all his men­tal ca­pa­bil­i­ties to keep calm about his se­vere pain. Holmes treats Victor’s mind as he lis­tens to his story: Pray lie down there and make your­self ab­solutely at home. Tell us what you can, but stop when you are tired, and keep up your strength with a lit­tle stim­u­lant.”



Holmes is a pro­tec­tor, a con­fi­dante and a com­forter in this scene. He pro­vides Victor with break­fast, in­duces him to lie down and of­fers him a stim­u­lant (more than likely brandy).

The extremity of violence that Victor has endured has escalated to mental trauma. In having Holmes treat Victor’s mental trauma while Watson treats his physical pain, Doyle showed the importance of psychological support for men of the age.

Holmes was a highly pop­u­lar char­ac­ter. To con­tem­po­rary read­ers, his drug use and dys­func­tional clients were seen as mark­ers of his ge­nius rather than a re­flec­tion of the sig­nif­i­cant so­cial is­sues that men faced dur­ing this pe­riod. But to­day, they of­fer a win­dow into the men­tal strug­gles of Victorian men, and a point of con­nec­tion be­tween read­ers of the past and pre­sent.




Emma Linford, Honorary re­search as­so­ci­ate, English lit­er­a­ture, University of Hull

This ar­ti­cle is re­pub­lished from The Conversation un­der a Creative Commons li­cense. Read the orig­i­nal ar­ti­cle.


...

Read the original on scienceclock.com »

10 197 shares, 2 trendiness

Arthur Conan Doyle explored men’s mental health through his Sherlock Holmes stories


...

Read the original on theconversation.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.


Visit pancik.com for more.