10 interesting stories served every morning and every evening.
...
Read the original on grapheneos.social »
The Core Project is a highly modular system with community-built extensions.
It starts with a recent Linux kernel, vmlinuz, and our root filesystem and start-up scripts packaged with a basic set of kernel modules in core.gz. Core (11MB) is simply the kernel + core.gz - this is the foundation for user created desktops, servers, or appliances. TinyCore is Core + Xvesa.tcz + Xprogs.tcz + aterm.tcz + fltk-1.3.tcz + flwm.tcz + wbar.tcz
TinyCore becomes simply an example of what the Core Project can produce: a 16MB FLTK/FLWM desktop.
CorePlus offers a simple way to get started with the Core philosophy. Its included community-packaged extensions enable easy embedded, frugal, or pendrive installation of the user’s choice of supported desktop, while maintaining the Core principle of mounted extensions with full package management.
It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop typically with wired internet access.
The user has complete control over which applications and additional hardware to support, be it for a desktop, a netbook, an appliance, or a server, by installing additional applications from online repositories, or by compiling almost anything desired using the tools provided.
Our goal is the creation of a nomadic, ultra-small graphical desktop operating system capable of booting from cdrom, pendrive, or frugally from a hard drive. The desktop boots extremely fast and is able to support additional applications and hardware of the user’s choice. While Tiny Core always resides in RAM, additional application extensions can either reside in RAM, be mounted from a persistent storage device, or be installed onto a persistent storage device.
We invite interested users and developers to explore Tiny Core. Within our forums we have an open development model. We encourage shared knowledge. We promote community involvement and community-built application extensions. Anyone can contribute to our project by packaging their favorite application or hardware support to run in Tiny Core. The Tiny Core Linux Team currently consists of eight members who peruse the forums to assist, from answering questions to helping package new extensions.
Join us here and on IRC Freenode #tinycorelinux.
...
Read the original on www.tinycorelinux.net »
NanoKVM is a hardware KVM switch developed by the Chinese company Sipeed. Released last year, it enables remote control of a computer or server using a virtual keyboard, mouse, and monitor. Thanks to its compact size and low price, it quickly gained attention online, especially when the company promised to release its code as open-source. However, as we’ll see, the device has some serious security issues. But first, let’s start with the basics.
As mentioned, NanoKVM is a KVM switch designed for remotely controlling and managing computers or servers. It features an HDMI port, three USB-C ports, an Ethernet port for network connectivity, and a special serial interface. The package also includes a small accessory for managing the power of an external computer.
Using it is quite simple. First, you connect the device to the internet via an Ethernet cable. Once online, you can access it through a standard web browser (though JavaScript JIT must be enabled). The device supports Tailscale VPN, but with some effort (read: hacking), it can also be configured to work with your own VPN, such as WireGuard or OpenVPN server. Once set up, you can control it from anywhere in the world via your browser.
The device connects to the target computer via an HDMI cable, capturing the video output that would normally be displayed on a monitor. This allows you to view the computer’s screen directly in your browser, essentially acting as a virtual monitor.
Through the USB connection, NanoKVM can also emulate a keyboard, mouse, CD-ROM, USB drive, and even a USB network adapter. This means you can remotely control the computer as if you were physically sitting in front of it - but all through a web interface.
While it functions similarly to remote management tools like RDP or VNC, it has one key difference: there’s no need to install any software on the target computer. Simply plug in the device, and you’re ready to manage it remotely. NanoKVM even allows you to enter the BIOS, and with the additional accessory for power management, you can remotely turn the computer on, off, or reset it.
This makes it incredibly useful - you can power on a machine, access the BIOS, change settings, mount a virtual bootable CD, and install an operating system from scratch, just as if you were physically there. Even if the computer is on the other side of the world.
NanoKVM is also quite affordable. The fully-featured version, which includes all ports, a built-in mini screen, and a case, costs just over €60, while the stripped-down version is around €30. By comparison, a similar Raspberry Pi-based device, PiKVM, costs around €400. However, PiKVM is significantly more powerful and reliable and, with a KVM splitter, can manage multiple devices simultaneously.
As mentioned earlier, the announcement of the device caused quite a stir online - not just because of its low price, but also due to its compact size and minimal power consumption. In fact, it can be powered directly from the target computer via a USB cable, which it also uses to simulate a keyboard, mouse, and other USB devices. So a single USB cable suffices: it powers the NanoKVM in one direction, while in the other it carries the emulated keyboard, mouse, and other USB devices to the computer you want to manage.
The device is built on the open-source RISC-V processor architecture, and the manufacturer eventually did release the device’s software under an open-source license at the end of last year. (To be fair, one part of the code remains closed, but the community has already found a suitable open-source replacement, and the manufacturer has promised to open this portion soon.)
However, the real issue is security.
Understandably, the company was eager to release the device as soon as possible. In fact, an early version had a minor hardware design flaw - due to an incorrect circuit cable, the device sometimes failed to detect incoming HDMI signals. As a result, the company recalled and replaced all affected units free of charge. Software development also progressed rapidly, but in such cases, the primary focus is typically on getting basic functionality working, with security taking a backseat.
So, it’s not surprising that the developers made some serious missteps - rushed development often leads to stupid mistakes. But some of the security flaws I discovered in my quick (and by no means exhaustive) review are genuinely concerning.
One of the first security analyses revealed numerous vulnerabilities - and some rather bizarre discoveries. For instance, a security researcher even found an image of a cat embedded in the firmware. While the Sipeed developers acknowledged these issues and relatively quickly fixed at least some of them, many remain unresolved.
After purchasing the device myself, I ran a quick security audit and found several alarming flaws. The device initially came with a default password, and SSH access was enabled using this preset password. I reported this to the manufacturer, and to their credit, they fixed it relatively quickly. However, many other issues persist.
The user interface is riddled with security flaws - there’s no CSRF protection, no way to invalidate sessions, and more. Worse yet, the encryption key used for password protection (when logging in via a browser) is hardcoded and identical across all devices. This is a major security oversight, as it allows an attacker to easily decrypt passwords. More problematic still, this had to be explained to the developers. Multiple times.
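To see why a device-wide hardcoded key is fatal, here is a toy sketch. The cipher, key, and password below are invented for illustration (the real firmware’s scheme is different); the point is only that whoever extracts the constant from any one unit, or from the published firmware image, can decrypt what every other unit protects with it.

```python
import hashlib

# Hypothetical device-wide constant baked into the firmware.
HARDCODED_KEY = b"same-key-on-every-device"

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from the key (toy construction).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# A user "protects" their password with the firmware's key...
ciphertext = xor_crypt(HARDCODED_KEY, b"hunter2")

# ...and an attacker who pulled the identical key from any other unit
# recovers it immediately.
recovered = xor_crypt(HARDCODED_KEY, ciphertext)
print(recovered)  # b'hunter2'
```

The fix is standard: derive a per-device key at first boot (or during provisioning) so that compromising one unit reveals nothing about the others.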
Another concern is the device’s reliance on Chinese DNS servers, and configuring your own (custom) DNS settings is quite complicated. Additionally, the device communicates with Sipeed’s servers in China - downloading not only updates but also the closed-source component mentioned earlier. To download this closed-source component, the device must verify an identification key, which is stored on the device in plain text. Alarmingly, the device does not verify the integrity of software updates, includes a strange version of the WireGuard VPN application (which does not work on some networks), and runs a heavily stripped-down version of Linux that lacks systemd and apt. And these are just a few of the issues.
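For contrast, here is a minimal sketch of the integrity check that is missing (names and payloads are hypothetical): before applying an update, the device would compare the download’s SHA-256 digest against a value obtained out of band.

```python
import hashlib

# Digest the device would ship with (or fetch over an authenticated
# channel), pinned for the expected update payload. Hypothetical value.
EXPECTED_SHA256 = hashlib.sha256(b"update-payload-v2").hexdigest()

def verify_update(payload: bytes, expected_hex: str) -> bool:
    # Apply the update only if its digest matches the pinned value.
    return hashlib.sha256(payload).hexdigest() == expected_hex

print(verify_update(b"update-payload-v2", EXPECTED_SHA256))  # True: genuine
print(verify_update(b"tampered-payload", EXPECTED_SHA256))   # False: rejected
```

A hash alone only helps if the expected digest comes from somewhere the attacker cannot also modify; a production update mechanism would additionally pin a public key and verify a signature (e.g. Ed25519) over the payload.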
Were these problems simply oversights? Possibly. But what additionally raised red flags was the presence of tcpdump and aircrack - tools commonly used for network packet analysis and wireless security testing. While these are useful for debugging and development, they are also hacking tools that can be dangerously exploited. I can understand why developers might use them during testing, but they have absolutely no place on a production version of the device.
And then I discovered something even more alarming - a tiny built-in microphone that isn’t clearly mentioned in the official documentation. It’s a miniature SMD component, measuring just 2 x 1 mm, yet capable of recording surprisingly high-quality audio.
What’s even more concerning is that all the necessary recording tools are already installed on the device! By simply connecting via SSH (remember, the device initially used default passwords!), I was able to start recording audio using the amixer and arecord tools. Once recorded, the audio file could be easily copied to another computer. With a little extra effort, it would even be possible to stream the audio over a network, allowing an attacker to eavesdrop in real time.
Physically removing the microphone is possible, but it’s not exactly straightforward. As seen in the image, disassembling the device is tricky, and due to the microphone’s tiny size, you’d need a microscope or magnifying glass to properly desolder it.
To summarize: the device is riddled with security flaws, originally shipped with default passwords, communicates with servers in China, comes preinstalled with hacking tools, and even includes a built-in microphone - fully equipped for recording audio - without clear mention of it in the documentation. Could it get any worse?
I am pretty sure these issues stem from extreme negligence and rushed development rather than malicious intent. However, that doesn’t make them any less concerning.
That said, these findings don’t mean the device is entirely unusable.
Since the device is open-source, it’s entirely possible to install custom software on it. In fact, one user has already begun porting his own Linux distribution - starting with Debian and later switching to Ubuntu. With a bit of luck, this work could soon lead to official Ubuntu Linux support for the device.
This custom Linux version already runs the manufacturer’s modified KVM code, and within a few months, we’ll likely have a fully independent and significantly more secure software alternative. The only minor inconvenience is that installing it requires physically opening the device, removing the built-in SD card, and flashing the new software onto it. However, in reality, this process isn’t too complicated.
And while you’re at it, you might also want to remove the microphone… or, if you prefer, connect a speaker. In my test, I used an 8-ohm, 0.5W speaker, which produced surprisingly good sound - essentially turning the NanoKVM into a tiny music player. Actually, the idea is not so bad: PiKVM also added two-way audio support for their devices at the end of last year.
All this of course raises an interesting question: How many similar devices with hidden functionalities might be lurking in your home, just waiting to be discovered? And not just those of Chinese origin. Are you absolutely sure none of them have built-in miniature microphones or cameras?
You can start with your iPhone - last year Apple agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri recorded private conversations, shared the data with third parties, and used it for targeted ads. “Unintentionally”, of course! Yes, the very same Apple that cares so much about your privacy.
And Google is doing the same. They are facing a similar lawsuit over their voice assistant, but the litigation likely won’t be settled until this fall. So no, small Chinese startups are not the only problem. And if you are worried about Chinese companies’ obligations towards the Chinese government, let’s not forget that U.S. companies also have obligations to cooperate with the U.S. government. While Apple publicly claims it does not cooperate with the FBI and other U.S. agencies (because they care about your privacy so much), some media revealed that Apple has been holding a series of secretive Global Police Summits at its Cupertino headquarters, where it taught police how to use its products for surveillance and policing work. As one of the police officers pointed out, he has “never been part of an engagement that was so collaborative.” Yep.
If you want to test the built-in microphone yourself, simply connect to the device via SSH and run the following two commands:
* arecord -Dhw:0,0 -d 3 -r 48000 -f S16_LE -t wav test.wav > /dev/null & (this will capture the sound to a file named test.wav)
Now, speak or sing (perhaps the Chinese national anthem?) near the device, then press Ctrl + C, copy the test.wav file to your computer, and listen to the recording.
...
Read the original on telefoncek.si »
While LLMs are adept at reading and can be terrific at editing, their writing is much more mixed. At best, writing from LLMs is hackneyed and cliché-ridden; at worst, it brims with tells that reveal that the prose is in fact automatically generated.
What’s so bad about this? First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).
Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract is ripped up: a reader cannot assume that the writer understands their ideas, because the writer might not have so much as read the output of the LLM they tasked with writing it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle whose pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?
This can be navigated, of course, but it is truly perilous: our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice. For us at Oxide, there is a more mechanical reason to be jaundiced about using LLMs to write: because our hiring process very much selects for writers, we know that everyone at Oxide can write — and we have the luxury of demanding of ourselves the kind of writing that we know that we are all capable of.
So our guideline is to generally not use LLMs to write, but this shouldn’t be thought of as an absolute — and it doesn’t mean that an LLM can’t be used as part of the writing process. Just please: consider your responsibility to yourself, to your own ideas — and to the reader.
...
Read the original on rfd.shared.oxide.computer »
Z-Image is a powerful and highly efficient image generation model with 6B parameters. Currently there are three variants:
🚀 Z-Image-Turbo — A distilled version of Z-Image that matches or exceeds leading competitors with only 8 NFEs (Number of Function Evaluations). It offers ⚡️sub-second inference latency⚡️ on enterprise-grade H800 GPUs and fits comfortably within 16G VRAM consumer devices. It excels in photorealistic image generation, bilingual text rendering (English & Chinese), and robust instruction adherence.
🧱 Z-Image-Base — The non-distilled foundation model. By releasing this checkpoint, we aim to unlock the full potential for community-driven fine-tuning and custom development.
✍️ Z-Image-Edit — A variant fine-tuned on Z-Image specifically for image editing tasks. It supports creative image-to-image generation with impressive instruction-following capabilities, allowing for precise edits based on natural language prompts.
💡 Prompt Enhancing & Reasoning: Prompt Enhancer empowers the model with reasoning capabilities, enabling it to transcend surface-level descriptions and tap into underlying world knowledge.
We adopt a Scalable Single-Stream DiT (S3-DiT) architecture. In this setup, text, visual semantic tokens, and image VAE tokens are concatenated at the sequence level to serve as a unified input stream, maximizing parameter efficiency compared to dual-stream approaches.
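A conceptual sketch of that single input stream (illustrative only, with made-up token counts and embedding width; this is not the actual Z-Image code): tokens from all three modalities share one embedding dimension and are concatenated along the sequence axis, so a single transformer stack attends jointly over the whole stream.

```python
d_model = 64  # hypothetical embedding width shared by all modalities

def tokens(n: int) -> list[list[float]]:
    # Stand-in for an embedding lookup / encoder producing n tokens.
    return [[0.0] * d_model for _ in range(n)]

text = tokens(77)   # tokenized prompt
sem = tokens(32)    # visual semantic tokens
vae = tokens(256)   # latent image patches from the VAE

# Sequence-level concatenation: one unified input stream for the DiT,
# instead of separate streams with cross-attention between them.
stream = text + sem + vae
print(len(stream), len(stream[0]))  # 365 64
```

The parameter-efficiency argument is that one stack of weights serves all modalities, where a dual-stream design would duplicate much of it.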
According to the Elo-based Human Preference Evaluation (on Alibaba AI Arena), Z-Image-Turbo shows highly competitive performance against other leading models, while achieving state-of-the-art results among open-source models.
Build a virtual environment you like and then install the dependencies:
pip install -e .
Then run the following code to generate an image:
python inference.py
To install the latest version of diffusers, use the following command:
We have submitted two pull requests (#12703 and #12715) to the 🤗 diffusers repository to add support for Z-Image. Both PRs have been merged, but are not yet included in an official diffusers release. Therefore, you need to install diffusers from source for the latest features and Z-Image support.
pip install git+https://github.com/huggingface/diffusers
Then, try the following code to generate an image:
import torch
from diffusers import ZImagePipeline

# 1. Load the pipeline
# Use bfloat16 for optimal performance on supported GPUs
pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=False,
)
pipe.to("cuda")

# [Optional] Attention Backend
# Diffusers uses SDPA by default. Switch to Flash Attention for better efficiency if supported:
# pipe.transformer.set_attention_backend("flash")    # Enable Flash-Attention-2
# pipe.transformer.set_attention_backend("_flash_3") # Enable Flash-Attention-3

# [Optional] Model Compilation
# Compiling the DiT model accelerates inference, but the first run will take longer to compile.
# pipe.transformer.compile()

# [Optional] CPU Offloading
# Enable CPU offloading for memory-constrained devices.
# pipe.enable_model_cpu_offload()

prompt = "Young Chinese woman in red Hanfu, intricate embroidery. Impeccable makeup, red floral forehead pattern. Elaborate high bun, golden phoenix headdress, red flowers, beads. Holds round folding fan with lady, trees, bird. Neon lightning-bolt lamp (⚡️), bright yellow glow, above extended left palm. Soft-lit outdoor night background, silhouetted tiered pagoda (西安大雁塔), blurred colorful distant lights."

# 2. Generate Image
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    num_inference_steps=9,  # This actually results in 8 DiT forwards
    guidance_scale=0.0,     # Guidance should be 0 for the Turbo models
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("example.png")
Decoupled-DMD is the core few-step distillation algorithm that empowers the 8-step Z-Image model.
Our core insight in Decoupled-DMD is that the success of existing DMD (Distribution Matching Distillation) methods is the result of two independent, collaborating mechanisms:
* CFG Augmentation (CA): The primary engine 🚀 driving the distillation process, a factor largely overlooked in previous work.
* Distribution Matching (DM): Acts more as a regularizer ⚖️, ensuring the stability and quality of the generated output.
By recognizing and decoupling these two mechanisms, we were able to study and optimize them in isolation. This ultimately motivated us to develop an improved distillation process that significantly enhances the performance of few-step generation.
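The decoupling can be stated compactly. As a hedged sketch in the notation of the original DMD formulation (not necessarily this paper's exact objective): distribution matching trains the few-step generator $G_\theta$ by following the gap between the student ("fake") and teacher ("real") score estimates,

$$
\nabla_\theta \mathcal{L}_{\mathrm{DM}} \;\approx\; \mathbb{E}_{t,z}\!\left[\big(s_{\mathrm{fake}}(x_t,t)-s_{\mathrm{real}}(x_t,t)\big)\,\frac{\partial G_\theta(z)}{\partial\theta}\right],
$$

while CFG Augmentation replaces the teacher score with its classifier-free-guided combination,

$$
s_{\mathrm{real}}(x_t,t)\;\leftarrow\; s(x_t,t,\varnothing)\;+\;w\,\big(s(x_t,t,c)-s(x_t,t,\varnothing)\big),
$$

for condition $c$ and guidance weight $w$. The claim above is that this substitution, rather than the matching term itself, drives most of the distillation gains, while distribution matching keeps the outputs stable.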
Building upon the strong foundation of Decoupled-DMD, our 8-step Z-Image model has already demonstrated exceptional capabilities. To achieve further improvements in terms of semantic alignment, aesthetic quality, and structural coherence—while producing images with richer high-frequency details—we present DMDR.
Our core insight behind DMDR is that Reinforcement Learning (RL) and Distribution Matching Distillation (DMD) can be synergistically integrated during the post-training of few-step models. We demonstrate that:
If you find our work useful in your research, please consider citing:
@article{team2025zimage,
  title={Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer},
  author={Z-Image Team},
  journal={arXiv preprint arXiv:2511.22699},
  year={2025}
}

@article{liu2025decoupled,
  title={Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield},
  author={Dongyang Liu and Peng Gao and David Liu and Ruoyi Du and Zhen Li and Qilong Wu and Xin Jin and Sihan Cao and Shifeng Zhang and Hongsheng Li and Steven Hoi},
  journal={arXiv preprint arXiv:2511.22677},
  year={2025}
}

@article{jiang2025distribution,
  title={Distribution Matching Distillation Meets Reinforcement Learning},
  author={Jiang, Dengyang and Liu, Dongyang and Wang, Zanyi and Wu, Qilong and Jin, Xin and Liu, David and Li, Zhen and Wang, Mengmeng and Gao, Peng and Yang, Harry},
  journal={arXiv preprint arXiv:2511.13649},
  year={2025}
}
We’re actively looking for Research Scientists, Engineers, and Interns to work on foundational generative models and their applications. Interested candidates please send your resume to: jingpeng.gp@alibaba-inc.com
...
Read the original on github.com »
In 2002 I asked a number of developers/Unix people for screenshots of their desktops. I recently republished them, and, seeing the interest this generated, I thought it’d be fun to ask the same people* again 13 years later. To my delight I managed to reach many of them.
* Sans Dennis Ritchie and itojun, who are no longer with us.
So, without further ado:
Brian Kernighan (Unix legend, the K in K&R and AWK):
my desktop is pretty boring, since it consists of xterm windows to whatever unix system i am using at the moment. the machine itself is likely to be running some x-window server like exceed on some flavor of windows, though for many years i just used an x terminal.
If you thought it was boring last time, check this out!
I don’t know how to make a screenshot, because I normally use my computer in text-mode. I have X and GNOME installed, but I use them only occasionally.
Under X, I use the standard environment of Trisquel, but mostly I type at Emacs in a console.
Well, my desktop is quite boring. I mostly work with four xterms and a few Netscape windows. The KDE bar hides automatically, you can only see a thin grey line at the bottom.
Here is the new one. You’ll see that, like before, I have lots of xterms where I work on Vim, Zimbu and email. Now using the Chrome browser, showing off the Zimbu homepage. But clearly everything has become bigger!
Not that much has changed in 13 years. Still using Linux. Still just a browser window and a ton of terminals hiding behind them. The main change is that I switched from Pine to Thunderbird for email at some point. The OS on my laptop here is Ubuntu with Unity, although there are a lot of Debian packages installed, so it is a bit of a hybrid at this point. Oh, and yes, my son Carl is a lot older now.
Ah, my desktop is pretty boring. I use fvwm 1.24 as my window manager and I try to have no more than 1 or 2 windows open per virtual desktop. I use FreeBSD 4-STABLE as my operating system. I first came across Unix when I got an account on a Pyramid 90x running OSx. This had a dual-universe setup: both AT&T and BSD-style environments, chosen by an environment variable. Initially I was given the AT&T environment, but my friends convinced me to “come over” to BSD. Since then I’ve been a BSD aficionado.
After OSx, SunOS 3.5 and later SunOS releases, until 386BSD 0.1 came out and I started to run BSD at home. Then when 386BSD transmogrified to FreeBSD, I went with FreeBSD.
In terms of desktop, I’m a command-line guy, always will be. My favourite editor is vi, my favourite shell is tcsh (but kudos to rc for elegance). So I don’t really feel the need for GUI things like Gnome or KDE :-)
How things have (and have not changed). I’m still a command-line junkie with at least two xterm windows open. I’m still using a 3x3 virtual desktop. However, instead of fvwm, it is now LXDE. I’ve also switched from FreeBSD to Linux and I’m running Lubuntu as my distribution.
There are a lot of indispensable GUI tools that I use. These include Firefox, lyx, Gimp, KeepassX, Shutter, viking, dia, Wireshark, calibre, audacity, Handbrake and VLC. But where possible I still prefer to script things. My main development languages are still shell, Perl and C.
My shell is now bash. The vi keystrokes are burned into my fingertips and, as long as vim can be ported to new systems, that will be my text editor until I pass on. My mail client is now mutt (definitely not a web client) and my mail is stored locally, not on someone else’s server.
The only issue I have is that, since a job change, I now have to deal with Windoze things. Thus, I have VirtualBox, libreoffice and Wine to help me do that.
I started with Unix on a Pyramid 90x. I now have a smart phone that blows the 90x out of the water on performance, RAM and storage. But I’m so very happy that, somewhere down underneath, there is still a Bourne shell and an operating system that does open(), close(), read(), write(), fork() and exec()!
Jordan Hubbard (FreeBSD co-founder, later Director of UNIX Technology at Apple; now CTO of iXsystems):
You’ll probably be sad (or perhaps not) to hear that my desktop hasn’t really changed much at all - still OS X, though because OS X has virtual desktops now I have multiple “desktops” (6 of them) where Mail.app runs on one, Safari on another, Calendar, Slack, etc - all on separate desktops. This makes it a bit boring, but here’s the one I probably spend the most time in - the terminal window desktop. :)
There we go. Actually, that’s condensed into one workspace, because I usually use about 4. Some of my favourite apps:
not on the shot, but worth noting:
and of course a shot of RTCW
‘screenshot as code’, I maintain my desktop configuration through saltstack: https://github.com/TTimo/linux-salted/commits/master
...
Read the original on anders.unix.se »
“I think that these days what we mean by “autism” is basically “weird person disease.””
“Accurate diagnosis requires consideration of multiple diagnoses. Sometimes, different diagnoses can overlap with one another and can only be differentiated in subtle and nuanced ways, but particular diagnoses vary considerably in levels of public awareness. As such, an individual may meet the diagnostic criteria for one diagnosis but self-diagnoses with a different diagnosis because it is better known.”
Sam Fellowes, Self-Diagnosis in Psychiatry and the Distribution of Social Resources
Unsurprisingly, these days I meet many people in the psychiatric clinic who are convinced that they have autism, or suspect (with various degrees of confidence) that they have autism, or report being diagnosed with autism at some point in their lives by some clinician. And for a fair number of such individuals, I cannot say with reasonable certitude that they have autism. The reasons they give for considering autism vary widely, but tend to be along the lines of…
What’s interesting about many of the items above is that the number one diagnostic possibility in my mind is an anxiety disorder of some sort. I remember seeing a woman who was a classic example of someone with high neuroticism, poor self-esteem, and severe social anxiety, and she had believed for much of her life that she was autistic because some random doctor somewhere at some point (she couldn’t even remember who or what sort of assessment this involved) had told her that she had autism, and she believed it because it fit in with her experience of being awkward-shy-weird.
It is common for me to meet individuals who think they have autism and find myself thinking, “schizoid,” “obsessive compulsive,” “cluster B,” “social anxiety,” “generalized anxiety,” “trauma,” “socially awkward,”… None of these, however, has the memetic virality of autism.
I don’t want to come across as being skeptical of the reality of autism as a diagnosis or as asserting that most people are misdiagnosed. Autism exists, to the extent that any psychiatric disorder exists. Not everyone is misdiagnosed, perhaps even most people. I am not trying to say, “autism is bullshit.” It’s not. I offer the diagnosis of autism as a clinician perhaps as often as I find myself doubting it.
What intrigues me is that people are drawn to autism as a diagnosis because it seems to offer recognition of something they’ve lived with: they may be deeply awkward, terribly shy, or bad with people, they may struggle with social interactions, they may find other people annoying, other people may find them weird, they may have a hard time connecting to others, they may have been bullied, and they may have directed their loneliness or introversion towards peculiar interests or hobbies. Autism seems to them to capture all that. It seems like an apt and appealing narrative. But autism may also be the only relevant diagnosis they’ve heard of or are familiar with. They haven’t seen any cool TikToks about being schizoid. No one’s offering them quizzes about being schizotypal. A random pediatrician or primary care doc is not going to tell them they have an obsessive-compulsive style of personality. So when some professional doubts that they have “autism,” they see it as a dismissal or rejection of their “lived experience.” Of course, I am weird-anxious-awkward. How can you say otherwise? What they don’t know is that the choice is not between autism or nothing, but rather between autism and about a dozen other diagnostic possibilities.
So for the sake of our collective sanity, let’s consider a few of them…
To be diagnosed with autism spectrum disorder according to DSM-5, a person must have ongoing difficulties in social communication and interaction in all three areas: trouble with back-and-forth social connection, problems with nonverbal communication like eye contact and body language, and difficulty making or keeping friendships. They also must show at least two types of repetitive or restricted behaviors, such as repetitive movements or phrases, needing things to stay the same, having very intense focused interests, or being unusually sensitive (or under-sensitive) to things like sounds, textures, or lights. These patterns must have been present since early childhood (even if they weren’t noticed until later when life got more complicated), lead to substantial impairment in functioning, and can’t simply be explained by intellectual disability (or other psychiatric disorders).
To “have” autism is simply to demonstrate this cluster of characteristics at the requisite level of severity and pervasiveness. It doesn’t mean that the person has a specific type of brain attribute or a specific set of genes that differentiates them from non-autistics. No such internal essence exists for the notion as currently conceptualized.
Autism spectrum is wide enough to have very different prototypes within it. On one end we have profound autism, representing someone with severe autistic traits who is completely dependent on others for care and has substantial intellectual disability or very limited language ability. At the other end, we have successful nerdy individuals with autistic traits and superior intelligence, often seen in science or academia, à la Sheldon Cooper. (Holden Thorp, editor-in-chief of the Science journals and former UNC chancellor, for example, has publicly disclosed his own autism diagnosis.) This wide range is confusing enough on its own, even without considering other conditions that can present with autism-like features.
Autism cannot be identified via medical “tests.” It is identified via clinical information in the form of history, observation, and interaction, and the less information available or the more unreliable the information provided is, the more uncertain we’ll be. To have autism is basically a judgment call that one is a good match to a descriptive prototype. We can get this judgment wrong, and we sometimes do get it wrong. (There is nothing wrong with this fallibility as such, as long as we recognize it. Lives have been built on foundations less sturdy.)
Autism as a category or identity has taken on a life of its own. I am aware that not everyone in the neurodiversity crowd accepts the legitimacy of clinician judgments or clinical criteria as outlined in the diagnostic manuals, such as the DSM and ICD. There are other ways to ground the legitimacy of self-diagnoses, in theoretically virtuous accounts or pragmatic uses, which require distinct considerations of their own; I don’t reject that. But here, I am concerned with autism as a clinical diagnosis and the accuracy of autism understood in terms of alignment with clinical diagnosis. Would competent and knowledgeable clinicians with access to all relevant clinical information concur that the person’s presentation meets diagnostic criteria for autism? If you don’t really care about that, this post is not for you.
Schizoid personality describes people who have little desire for close relationships and prefer solitary activities. Unlike people who are simply shy or socially anxious, individuals with schizoid personality style genuinely don’t find relationships rewarding or necessary. They typically appear emotionally detached or cold, show restricted emotional expression, seem indifferent to praise or criticism, and have few if any close friends or confidants. They often live quietly on the margins of society, pursuing solitary interests or jobs. They keep their inner worlds (which can be quite rich) private and don’t seek emotional intimacy with others.
In autism, social difficulties stem from genuine challenges with processing social information: difficulty reading facial expressions, understanding implied meanings, picking up on social cues, knowing unwritten social rules, etc. In schizoid personality, the person typically understands social conventions but simply isn’t motivated to engage with them. They withdraw from genuine disinterest. Schizoid personality also lacks the additional features of autism (repetitive or restricted behaviors, various sensory sensitivities).
Schizotypal personality describes people who have odd or eccentric beliefs, unusual perceptual experiences, and difficulties with close relationships. Unlike schizoid personality (which involves simple disinterest in relationships), schizotypal includes strange ways of thinking and perceiving the world. People with schizotypal personality might believe in telepathy, feel they have special powers, think random events have special meaning for them personally, or have unusual perceptual experiences (like feeling a presence in the room or hearing whispers). They typically have few close friends, experience social anxiety that doesn’t improve with familiarity, and may appear paranoid or suspicious of others’ motives. Both schizotypal personality and autism can involve social difficulties and odd or eccentric behavior, but in schizotypal personality, the peculiarity comes from magical thinking, paranoid ideas, and perceptual distortions.
Obsessive-compulsive personality describes people who are preoccupied with orderliness, perfectionism, and control. These individuals are rigid rule-followers who want things to be done “the right way,” have difficulty delegating tasks, and get caught up in details and lists to the point where they lose sight of the main goal. They tend to be workaholics who neglect leisure and friendships, are inflexible about matters of morality or ethics, and are often stubborn and controlling. Both obsessive-compulsive personality and autism can involve rigid adherence to routines, rules, and specific ways of doing things. In obsessive-compulsive personality, the inflexibility comes from anxiety about loss of control. The person is trying, consciously or unconsciously, to manage anxiety through control and perfectionism. In autism, the need for sameness and routine serves different functions: it provides predictability in a world that feels confusing, or it helps with sensory regulation, rather than reflecting anxiety-driven perfectionism.
Severe social anxiety is an intense, persistent fear of social situations where a person might be judged, embarrassed, or humiliated. Social anxiety disorder involves overwhelming fear that interferes with daily life. People with this condition worry excessively about saying something stupid, looking foolish, or being rejected. They often avoid social situations entirely, which can lead to isolation, difficulty maintaining employment, and problems forming relationships. Both social anxiety and autism involve social difficulties and withdrawal. Social anxiety usually improves significantly in comfortable, safe environments (like with close family or friends), while autistic social differences tend to be more consistent across all contexts.
Borderline personality disorder involves intense emotional instability, unstable relationships, fear of abandonment, and a shifting sense of self, with people experiencing rapid mood swings and chaotic relationships that alternate between idealization and devaluation of others. While it can resemble autism through social difficulties, emotional dysregulation, rigid thinking, and feeling different from others, the key distinctions are that borderline centers on intense relationship preoccupations and emotional chaos, whereas autism involves genuine difficulty understanding social cues and communication; borderline features rapidly shifting identity and relationship-triggered mood swings, while autism includes stable self-concept, sensory sensitivities, restricted interests, and literal communication that aren’t present in borderline; and borderline symptoms fluctuate dramatically with relationship stability while autistic traits remain consistent across contexts.
Social communication disorder is a condition in DSM-5 where someone has significant, ongoing difficulty using verbal and nonverbal communication appropriately in social contexts. People with social communication disorder struggle with the “pragmatic” aspects of language, that is, knowing how to use language effectively in social situations. They may have trouble understanding when to take turns in conversation, knowing how much detail to give, adjusting their speaking style for different situations, understanding implied meanings or hints, picking up on nonverbal cues like body language and facial expressions, and knowing how to start, maintain, or end conversations naturally. This makes forming friendships and relationships difficult and affects life functioning. The social communication problems in social communication disorder look nearly identical to the “Criterion A” features of autism. However, unlike autism, people with social communication disorder don’t show repetitive behaviors, restricted interests, sensory sensitivities, or the need for sameness and routine.
Social communication disorder is rarely diagnosed in favor of autism primarily because autism provides access to critical services, insurance coverage, educational support, and legal protections that social communication disorder does not reliably offer, creating strong practical incentives for families and clinicians to prefer the autism diagnosis. Additionally, autism has an established evidence base, validated assessment tools, clear intervention protocols, and a large supportive community with a neurodiversity-affirming culture, while social communication disorder has none of these. It has no community, minimal research, no specific treatments, and little professional awareness since it was only introduced in the DSM in 2013. Service delivery, insurance, and educational systems are built entirely around autism rather than social communication disorder, and since both conditions require similar interventions for social-communication difficulties, there’s little practical incentive to make the diagnostic distinction, especially when the boundary between them (whether restricted/repetitive behaviors are truly absent or just subtle) is often unclear and clinicians are often unsure the distinction really matters.
Trauma-related disorders, particularly from early developmental trauma, severe neglect, or disrupted attachment, can mimic autism through social withdrawal and avoidance of eye contact (defensive protection rather than social processing difficulties), communication delays and difficulties (from lack of language exposure or trauma’s impact on brain development), emotional dysregulation and meltdowns (driven by trauma rather than sensory overload), repetitive self-soothing behaviors (anxiety management rather than stimming), sensory sensitivities (hypervigilance rather than sensory processing differences), and rigid need for routine (anxiety-driven safety-seeking rather than cognitive processing style).
Severe early deprivation can create “quasi-autistic” patterns that can be genuinely difficult to distinguish. The critical distinctions are that trauma-related difficulties typically improve significantly in safe, nurturing environments and with adequate psychological treatment, show more variability across contexts (worse with triggers), are tied to identifiable adverse experiences rather than present from earliest infancy, and lack the restricted interests and genuine social communication processing deficits of autism.
Social awkwardness refers to social ineptness without meaningful impairment that falls within what is considered normal or typical human variation. This can be mistaken for autism because both may involve limited friendships, preference for solitude, conversation difficulties, reduced eye contact, and intense interests, particularly fueled by online self-diagnosis culture and broad autism awareness. The key distinctions are that socially awkward individuals understand what they should do socially but find it difficult or uninteresting (versus genuinely not understanding unwritten rules), show significant improvement with practice and maturity, are more comfortable in specific contexts, lack the sensory sensitivities and restricted/repetitive behaviors required for autism diagnosis, and generally achieve life goals despite awkwardness rather than experiencing clinically significant impairment.
Selective Mutism, Intellectual Disability (without autism), Stereotypic Movement Disorder, Attention-Deficit/Hyperactivity Disorder (ADHD), Schizophrenia Spectrum Disorders, Avoidant Personality Disorder, Attachment Disorders, Generalized Anxiety Disorder, Obsessive-Compulsive Disorder, and Rett Syndrome (a characteristic pattern of developmental regression after initial normal development, typically 6-18 months).
Comorbidity is possible and expected. Someone can be autistic and have maladaptive personality patterns, trauma histories, or anxiety disorders that complicate the presentation. Developmental context, response to relationships, and subjective experiences are all very important in looking beyond the surface presentation to understanding the meaning and functions of behaviors.
...
Read the original on www.psychiatrymargins.com »
According to the Discourse, somebody killed perl

There’s been a flurry of discussion on Hacker News and other tech forums about what killed Perl. I wrote a lot of Perl in the mid 90s and subsequently worked on some of the most trafficked sites on the web in mod_perl in the early 2000s, so I have some thoughts. My take: it was mostly baked into the culture. Perl grew amongst a reactionary community with conservative values, which prevented it from evolving into a mature general-purpose language ecosystem. Everything else filled the gap.

Something to keep in mind is that although this is my personal take, and therefore entirely an opinion piece, I was there at the time. I stopped doing Perl properly when I left Amazon, which I think would have been around 2005. This is based on the first-hand impressions of somebody who was very deeply involved in Perl in its heyday, and moved on. I have a lot of experience, from both inside and outside the tent.

What culture? Perl always had a significant amount of what you might call “BOFH” culture, which came from its old UNIX sysadmin roots. All of those passive-aggressive idioms and in-jokes like “RTFM”, “lusers”, “wizards”, “asking for help the wrong way”, etc. None of this is literally serious, but it does encode and inform social norms that are essentially tribal and introverted. There is implicitly a privileged population, with a cost of entry to join. Dues must be paid.

Cultural conservatism as a first principle. This stems from the old locked-down data centre command culture. When computing resources were expensive, centralised, fragile, and manually operated, they were rigidly maintained by gatekeepers defending against inappropriate use. I started my career as an apprentice programmer at the very end of this era (late 80s), pre-web, and before microcomputers had made many inroads, and this really was the prevailing view from inside the fort. (This is a drawback about fort-building. 
Once you live in a fort, it’s slightly too easy to develop a siege mentality). Computers are special, users are inconvenient, disruption is the main enemy. An unfortunate feedback loop in this kind of “perilous” environment is that it easily turns prideful. It’s difficult to thrive here; if you survive and do well, you are skilled; you’ve performed feats; you should mark your rites of passage. This can become a dangerous culture trap. If you’re not careful about it, you may start to think of the hazards and difficulties, the “foot guns”, as necessary features - they teach you those essential survival skills that mark you out. More unkindly, they keep the stupid folk out, and help preserve the high status of those who survived long enough to be assimilated. Uh-oh, now you’ve invented class politics. The problem with this thinking is that it’s self-reinforcing. Working hard to master system complexities was genuinely rewarding - you really were doing difficult things and doing them well. This is actually the same mechanism behind what eventually became known as ‘meritocracy’1, but the core point is simpler - if difficulty itself becomes a badge of honour, you’ve created a trap: anything that makes the system more approachable starts to feel like it’s cheapening what you achieved. You become invested in preserving the barriers you overcame. So the UNIX operator culture tended to operate as a tribal meritocracy (as opposed to the UNIX implementer culture, which fell out of a different set of cultural norms, quite an interesting side bar itself2), a cultural priesthood, somewhat self-regarding, rewarding of cleverness and knowledge hoarding, prone to feats of bravado, full of lore, with a defensive mentality of keeping the flame aloft, keeping the plebs happy and fed, and warding off the barbarians. 
As we entered the 90s it was already gently in decline, because centralised computing was giving way to the rise of the microcomputer, but the sudden explosive growth of the WWW pulled internet / UNIX culture back into the mainstream with an enormous and public opportunity vacuum. Everyone suddenly has an urgent need to write programs that push text off UNIX file-systems (and databases) and into web pages, and Perl is uniquely positioned to have a strong first-mover advantage in this suddenly vital, novel ecosystem. But its culture and values are very much pulled across from this previous era. I remember this tension as always being tangibly there. Perl IRC and mailing lists were quite cliquey and full of venerated experts and in-jokes, rough on naivety, keen on robust, verbose debate, and a little suspicious of newcomers. And very cult-like. The “TIMTOWTDI” rule, although ostensibly liberal, literally means ‘there is more than one way to do it’ in Perl - and you can perhaps infer from that that there’s little to no reason to do it using anything else. Elevating extreme flexibility like this is paradoxically also an engine of conservatism. If Perl can already do anything, flexibly, in multiple ways, then the language itself doesn’t need to change - ‘we already have one of those here, we don’t need new things’. This attitude determined how Perl intended to handle evolution: the core language would remain stable (a fort inside a fort, only accessible to high-level wizards), while innovation was pushed outward to CPAN. You could add features outside of core by writing and consuming third-party libraries, and you could bend language behaviour with pragmas without modifying Perl itself. The very best CPAN modules could theoretically be promoted into core, allowing the language to evolve conservatively from proven, widely-used features. On paper, this sounds reasonable. 
In practice, I think it encoded a fundamental conflict of interest into the community early on, and set the stage for many of the later growth problems. I’m not going to pretend that Perl invented dependency hell, but I think it turned out to be another one of those profound misfeatures that their cultural philosophy led them to mistake for virtue, and embrace. An interesting thing I think has been missed in discussing the context of the original blog piece, about whether Perl 6 significantly impacted Perl growth, is the fact that Perl 6 itself manifested out of ongoing arguments. Perl 6 is a schism. Here’s an oft-cited note from Larry Wall himself about the incident that sparked Perl 6, at OSCON 2000: “We spent the first hour gabbing about all sorts of political and organizational issues of a fairly boring and mundane nature. Partway through, Jon Orwant comes in, and stands there for a few minutes listening, and then he very calmly walks over to the coffee service table in the corner, and there were about 20 of us in the room, and he picks up a coffee mug and throws it against the other wall and he keeps throwing coffee mugs against the other wall, and he says ‘we are f-ed unless we can come up with something that will excite the community, because everyone’s getting bored and going off and doing other things’.” Perl 6 was really a schism. Perl was already under a great amount of strain trying to accommodate the modernising influx of post dot-com mainstream web application building, alongside the entrenched conservatism of the core maintainers, and the maintenance burden of a few years’ exponential growth of third-party libraries, which were starting to build a fractal mess of slightly differentiated, incompatible approaches - those multiple ways to do things that were effectively now table-stakes language features - as the deployment landscape started to tiptoe towards a more modern, ubiquitous WWW3. 
So, while I agree that it’s wrong to generalise that ‘Perl 6 killed Perl’, I would say that Perl 6 was a symptom of the irreconcilable internal forces that killed Perl. Although, I also intend to go on to point out that Perl isn’t dead; nothing has actually killed Perl. ‘Killed Perl’ is a very stupid way to frame the discussion, but here we are. So… Perl 6 is created as a valve to offset that pressure, and it kind of works. Up to a point. Unfortunately, I think the side effect really is that the two branches of the culture, in the process of forking, double down on their encoded norms. Perl 5.x beds down as the practical, already-solved way to do all the same things, with little need to change. Any requirements for more modern application patterns that are emerging in the broader web development environment - like, idk, Unicode, REST clients, strict data structures, asynchronous I/O, whatever? That can either wait for Perl 6, or you can pull things together using the CPAN if you want to move right now. Perl 6 leans the other way - they don’t need to ship immediately, we have Perl 5 already here for doing things, Perl 6 is going to innovate on everything, and spend its time getting there, designing up-front.4 They spend at least two years writing high-level requirement specs. They even spin out a side-project trying to build a universal virtual machine to run all dynamic programming languages, which never delivers5. This is the landscape where Perl’s central dominance of ‘back end’ web programming continues to slip. Unfortunately, alongside the now-principled bias toward cultural conservatism, Perl 5 has an explicit excuse for it. The future is over there, and exciting, and meanwhile we’re working usefully, and getting paid, and getting stuff done. Kind of OK from inside the fort. Some day we’ll move to the newer fort, but right now this is fine. Not very attractive to newcomers though, really. 
And this is also sort of OK, because Perl doesn’t really want those sorts of newcomers, does it? The kind that turns up on IRC or forums and asks basic questions about Perl 6 and sadly often gets treated with open contempt.

Meanwhile, over there

Ruby has sprouted “Ruby on Rails”, and it’s taken the dynamic web building world by storm. Rails is a second-generation web framework that’s proudly an ‘opinionated web framework’. Given that web application architecture is starting to stabilise into a kind of three-tier system, with a client as a web browser, a middle tier as a monolithic application server, and a persistence layer as a relational database, and a split server architecture serving static and dynamic content from different routes, here is just one way to do that, with hugely developer-friendly tooling turning this into a cookie-cutter solution for the 80% core, and a plugin and client-side decoration approach that allows for the necessary per-site customisation. Ruby is interesting as well. Ruby is kind of a Perl 6, really. More accurately, it’s a parallel-universe Perl 5. Ruby comes from Japan, and has developed as an attempt to build something similar to Perl, but it’s developed much later, by programming language enthusiasts, and for the first ten years or so, it’s mostly only used in Japan. To my line of thinking this is probably important. Ruby does not spring from decades of sysadmin or sysop culture. Ruby is a language for programmers, and is at this point a sensible candidate for building something like Rails with - a relatively blank canvas for dynamic programming, with many of the same qualities as Perl, with less legacy cruft, and more modern niceties, like an integrated object system, exceptions, and straightforward data structures. 
Ruby also has adopted ‘friendliness’ as a core value, and the culture over there takes a principled approach to aggressively welcoming newcomers, promoting easiness, and treating programmer happiness and convenience as strong first-class principles. Rails is a huge hit. At this point, which is around about the time I stopped significantly using Perl (2004-2005) (because I quit my job, not out of any core animosity toward it; in fact, in my day, I was really quite a Perl fan), Rails is the most appealing place to start as a new web programmer. Adoption rate is high, community is great, velocity of development is well-paced, and there’s a lovely, well-lit onboarding pipeline for how to start. You don’t even really need to know ruby. It has a one-shot install tool, and generates working websites from templates, almost out of the box. It’s an obvious starting point. Perl, being Perl, develops several analogue frameworks to Rails, all of them interdependently compatible and incompatible with each other and each other’s dependencies, all of them designed to be as customisable and as user-configurable as they possibly can be6

There are also the other obvious contenders. PHP has been there all along, and it’s coming up from almost entirely the opposite cultural background to Perl. PHP is a users’ language. It’s built to be deployed by copying script files to your home directory, with minimal server-side impact or privileges. It’s barely designed at all, but it encounters explosive growth all the way through the first (and into the second) web era, almost entirely because it makes the barrier to onboarding so low as to be non-existent. PHP gets a couple of extra free shots in the arm. Because its architecture is so amenable to shared-server hosting, it is adopted as the primary implementation language of the blogging boom. An entire generation of web developers is born of installing and customising WordPress, Textpattern et al. directly into your home directory on a rented CPanel host account. It’s the go-to answer for ‘I’m not a programmer really, but how do I get a personal web site’7 This zero-gatekeeping approach keeps the PHP stack firmly on the table for ‘basic’ web programmers all through the history of the web up to the current day. Because of these initially lightweight deployment targets, PHP scales like little else, mostly because its execution model leans strongly towards idempotent execution, with each web request spinning up and tearing down the whole environment. In a sense, this is slower than keeping hot state around, but it lends itself extremely well to shared-nothing horizontal scaling, which, as the web user base increases gigantically throughout the 2000s era, is the simplest route to scaling out. Facebook, famously, is built in PHP at this point in time. There is of course one other big horse in the race in this era, and it’s a particularly interesting one in many ways, certainly when contrasted with Perl. This is, of course, Python. Python is a close contemporary of Perl’s, but once again, its roots are somewhere very different. Python doesn’t come from UNIX culture either. Python comes from academia, and programming language culture. It’s kind of a forgotten footnote, but Python was originally built for the Amoeba operating system, and its intention was to be a straightforward programming language for scripting this8. The idea was to build a language that could be the ‘second programming language’ for programmers. Given that this is the 1980s and early 1990s, the programmers would be expected to be mostly using C / C++, perhaps Pascal. Python was intended to allow faster development for lighter-weight programs or scripting tasks. 
I suppose the idea was to take something that you might want to build in a shell script, but provide enough high-level structured support that you could cleanly build the kinds of things that quickly become a problem in shell scripts. So, it emphasises data structures, and scoped variables, and modules, and prioritises making it possible to extend the language with modules. Typical things that experienced programmers would want to use. The language was also designed to be portable between the different platforms programmers would use, running on the desktops of the day, but also on the server. As a consequence, it had a broad standard library of common portable abstractions around standard system features - file-systems, concurrency, time, FFI. For quite a long time, one of Python’s standard mottoes was ‘batteries included’. Python never set the world on fire at any particular moment, but it remained committed to clear evolutionary incremental development, and clean engineering principles. Again, I think the key element here is cultural tone. Python is kind of boring, not trying to be anyone’s best language, or even a universal language. Python was always a little fussy, maybe snobby, slightly abstracted away from the real world. It’s almost as old as Perl and it just kept incrementally evolving, picking up users, picking up features, slowly broadening the standard library. The first time I saw Python pick up an undeniable mainstream advantage would also have been around the early 2000s, when Google publicly adopted it as one of their house standard languages. Never radical, just calmly evolving in its environs. When I sketch out this landscape, I remain firmly convinced that most of Perl’s impediments to continued growth were cultural. Perl’s huge moment of relevance in the 90s was because it cross-pollinated two diverging user cultures: traditional UNIX / database / data-centre maintenance and admin users, and enthusiastic early web builders and scalers. 
It had a culture shock phase from extremely rapid growth, the centre couldn’t hold, and things slowly fell apart. Circling back though, it’s time to address the real elephant in the room. Perl manifestly did not die. It’s here right now. It’s installed, I think by default, on almost every single computer I own and operate, without me doing a single thing to make that happen. It’s still used every day by millions of people on millions of systems (even if that isn’t deliberate). It’s still used by many people entirely deliberately for building software, whether that’s because they know it and like it and it works, or because they’re interfacing with or working on legacy Perl systems (of which there are still many), or maybe they’re still using it in its original intended role - a capable POSIX-native scripting language, with much better performance and a broader feature-set than any shell or awk. I still occasionally break it out myself, for small scripts I would like to use more than once, or as parts of CLI pipelines. What I don’t do any more is reach for Perl first to make anything new. In my case, it’s just because I am typically spoilt for options that are a better fit for most tasks, depending on whatever it is I’m trying to achieve. By the time I came to Perl (1998-ish), I was already in my third career phase; I had a strong UNIX background, and had already built real things in lisp, java, pascal, visual basic and C++. My attitude to languages was already informed by picking a tool to fit the task at hand. Boy, did I love Perl for a few years. The product/market fit for those early web days was just beautiful. The culture did have too many of the negative tropes I’ve been pointing at, but that wasn’t really a problem personally for me; I’d grown up amongst the BOFHs inside the data centres already, so it wasn’t too hard for me to assimilate, nor to pick up the core principles. 
I did occasionally bounce off a couple of abrasive characters in the community, but mostly this just kept me loosely coupled. I enjoyed how the language solved the problems I needed solving quickly, I enjoyed the flexibility, and I also enjoyed the way that it made me feel smart, and en route to my wizard’s robes and hat, when I used it to solve harder problems in creative ways, or designed ways around bugs and gremlins. For a good 3-4 years I would have immediately picked it as my favourite language. So as I say, I didn’t fall out of it with any sense of pique; I just naturally moved to different domains, and picked up the tools that best fit. After Amazon, I spent a lot of time concentrating on OS X and audio programming, and that involved a lot of Objective C and C++. The scripting tools in that domain were often in ruby, sometimes python. For personal hacking, I picked up lisp again9 (which I’d always enjoyed in school). I dipped in and out of Perl here and there for occasional contract work, but I tended to gravitate more towards larger database stuff, where I typically found C, java and python. The next time I was building web things, it was all Rails and ruby, and then, moving towards the web services / REST / cloud era, the natural fits were go, and of course node and JavaScript or TypeScript. I’ve always been a polyglot, and I’ve always been pretty comfortable moving between programming languages. The truth of the matter is that the majority of programming work is broadly similar, and the specific implementation details of the language you use don’t matter all that much, if it’s a good fit for the circumstances. I can’t imagine Perl disappearing entirely in my lifetime. I can remember entire programming environments and languages that are much, much deader than I can ever see Perl becoming. 
* Pascal used to be huge for teaching and also for desktop development in the 8/16-bit era
* Objective C - only really useful inside the Apple ecosystem, and they’re hell-bent on phasing it out.
* Before I got into the Internet, I used to build application software for 16-bit Windows (3.11), which was a vast market, in a mixture of database 4GLs (like PowerBuilder, Gupta/Centura SQLWindows) and Win16 C APIs. This entire universe basically no longer exists, and is fully obsolete. There must be many similar cases.
* I mean, who the hell realistically uses common lisp any more outside of legacy or enthusiast markets? Fewer people than use Perl, I’m sure.

Perl also got to be, if not first, then certainly early to dominate a new market paradigm. Plenty of things never manage that. It’s hard to see Perl as anything other than an enormous success on these terms.

Perl innovated and influenced the languages that came after in some truly significant ways:

* Tightly embedding regular expressions and extending regular expressions (the most commonly used regular expression dialect in other tools is Perl’s)
* CPAN, for package/library distribution via the internet, with dependency resolution - and including important concepts like supply chain verification with strong package signatures
* A huge emphasis on testing, automated test harnesses, and CI. The Perl test format (TAP) is also widely found in other CI/harness systems
* Blending the gap between shell / scripting / and system programming in a single tool. I suppose this is debatable, but the way Perl basically integrated all the fundamental POSIX/libc calls as native built-ins with broadly the same semantics, but with managed memory and shell conventions, was really revolutionary. Before this, most languages I had ever seen broadly tended to sit in one box; afterwards, most languages tended to span across several.
* Amazing integrated documentation, online, in-tool and also man pages. POD is maybe the most successful ever implementation of literate programming ideas (although most of the real docs don’t intertwingle the documentation very much iirc)

Just these points, and I’m sure there are many others that could be made, are enough of a legacy to be proud of.

Counterfactuals are stupid (but also fun). If I squint, I can imagine that a Perl with a less reactionary culture, and a healthier acceptance of other ideas and environmental change, might have been able to evolve alongside the other tools in the web paradigm shift, and still occupy a more central position in today’s development landscape. That’s not the Perl we have, though, and that didn’t happen. And I’m very confident that without the Perl we did have, the whole of modern software practice would be differently shaped.

I do think Perl now lives in a legacy role, with a declining influence, but that’s really nothing to feel shame or regret for. Nobody is going to forcibly take Perl away as long as POSIX exists, and so far as I can see, that means forever.

In 2025, too, I can see the invisible hand creeping up on some of these other systems I’ve mentioned. Rust is slowly absorbing C and C++. Ruby (and of course Rails) is clearly in decline, in a way that probably consigns it to a similar legacy state. From a certain angle, it looks a lot like TypeScript is slowly supplanting Python. I won’t be entirely surprised if that happens, although at my age I kind of doubt I’ll live to see the day.

1 : Meritocracy is a fun word. It was originally coined as a pejorative term to describe a dystopian mechanism by which modern (i.e. Western / British) society entrenches and justifies an unfair and unequal distribution of privilege

2 : The UNIX implementer culture is scientific/academic and fell out of Bell Labs.
I guess you could extend this school of thought as a cultural sweep towards building abstracted cloud operations, toward Plan 9 / Inferno / go.

3 : Web 2.0 was first defined in 1999 by Darcy DiNucci in a print article; the term didn’t become mainstream until it was picked up and promoted by Tim O’Reilly (then owner/operator of perl.com, trivia fans), an astute inside observer of the forces driving web development

4 : Another unfortunate bit of luck here. Right at the point in time that ‘agile’ starts getting some traction as a more natural way to embrace software development - i.e. iterating in small increments against a changing environment and requirements - Perl 6 decides to do perhaps the most waterfall open source development process ever attempted. It is fifteen years before Perl 6 ships something resembling a usable programming language.
5 : The Parrot VM, a lovely quixotic idea, which sadly fizzled out after even Perl 6 stopped trying to target it. Interestingly enough, python and ruby both made relatively high-profile ports to the JVM that were useful enough to be used for production deploys in certain niches.

6 : A side effect of this degree of abstraction is that, as well as being very hard to get started, it’s easy to fall foul of performance overhead.

7 : This ubiquitous ecosystem of small-footprint wordpress custom installs gives birth to the web agency model of commercial website building / small ecommerce sites, which thrives and is surprisingly healthy today. Recent, and slightly optimistic, surveys have pitched WordPress as powering over 40% of all websites today. Now this is certainly inflated, but even if the realistic number is half of that, that’s still pretty damn healthy.

8 : It’s often repeated that Python was designed as a teaching language, but as far as I know, that’s not actually the case. The designer of Python, Guido van Rossum, was previously working on a project intended as a training language, called ABC, and many of ABC’s syntax and structural features influenced or made their way into Python.

9 : Common lisp is a better answer to an infinitely flexible ‘everything’ chainsaw language than perl, IMHO
...
Read the original on www.beatworm.co.uk »
Accessibility barriers in research are not new, but they are urgent. The message we have heard from our community is that arXiv can have the most impact in the shortest time by offering HTML papers alongside the existing PDF.
arXiv has successfully launched papers in HTML format. We are gradually backfilling HTML for arXiv’s corpus of over 2 million papers. Not every paper can be successfully converted, so a small percentage of papers will not have an HTML version. We will work to improve conversion over time.
The link to the HTML format will appear on abstract pages below the existing PDF download link. Authors will have the opportunity to preview their paper’s HTML as a part of the submission process.
The beta rollout is just the beginning. We have a long way to go to improve HTML papers and will continue to solicit feedback from authors, readers, and the entire arXiv community to improve conversions from LaTeX.
Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices. In addition to the technical challenges, the conversion must be both rapid and automated in order to maintain arXiv’s core service of free and fast dissemination.
Because of these challenges we know there will be some conversion and rendering issues. We have decided to launch in beta with “experimental” HTML because:
Accessible papers are needed now. We have talked to the arXiv community, especially researchers with accessibility needs, and they overwhelmingly asked us not to wait.
We need your help. The obvious work is done. Reports from the community will help us identify issues we can track back to specific LaTeX packages that are not converting correctly.
HTML papers on arXiv.org are a work in progress and will sometimes display errors. As we work to improve accessibility, we share with you the causes of these errors and what authors can do to help minimize them. Learn more about error messages you may see in HTML papers.
We encourage the community to try out HTML papers in your field:
* Go to the abstract page for a paper you are interested in reading.
* Look in the section where you find the link to the PDF download, and click the new link for HTML.
* Report issues by either a) clicking on the Open Issue button b) selecting text and clicking on the Open Issue for Selection button or c) use Ctrl+? on your keyboard. If you are using a screen reader, use Alt+y to toggle accessible reporting buttons per paragraph.
Please do not open reports simply because the HTML paper doesn’t look exactly like the PDF paper.
Our primary goal for this project is to make papers more accessible, so during the beta phase we will value function over form. HTML layouts that are incorrect or illegible are important to report, but we do expect HTML papers to present differently than the same paper rendered in PDF. Line breaks will occur in different places, and there is likely to be more white space. In general, the HTML paper won’t present as compactly, and intricate typographic layouts will not be rendered so intricately. This is by design.
HTML is a different medium and brings its own advantages versus PDF. In addition to being much more compatible with assistive technologies, HTML does a far better job adapting to the characteristics of the device you are reading on, including mobile devices.
If you are an author you can help us improve conversions to HTML by following our guide to LaTeX Markup Best Practices for Successful HTML Papers.
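To give a flavour of what such best practices tend to mean in this context - the following is an illustrative sketch, not an excerpt from arXiv’s guide - automated TeX-to-HTML converters like LaTeXML generally handle standard, semantic LaTeX far better than low-level TeX primitives:

```latex
% Hypothetical example of conversion-friendly markup; consult the
% linked guide for arXiv's actual list of supported packages.
\documentclass{article}
\usepackage{amsmath}   % standard, widely supported math package
\begin{document}
\section{Results}      % semantic sectioning becomes a real HTML heading
The energy--mass relation is
\begin{equation}
  E = mc^{2} .
\end{equation}
% Prefer standard environments over hand-rolled \def/\halign layouts:
% structure the converter can recognise is structure it can translate.
\end{document}
```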
If you are a developer and have free development cycles, help us improve conversions! Our collaborators at LaTeXML maintain a list of issues and welcome feedback and developer contributions.
If you are a publisher, member of a society, or conference organizer you can help us improve conversions to HTML by reviewing the .cls files your organization recommends to authors for unsupported packages. Providing .cls files that use supported packages is an easy way to support and sow accessibility in the scientific community.
First, we want to share a special thank you to all the scientists with disabilities who have generously shared their insights, expertise, and guidance throughout this project.
We want to thank two organizations without which HTML papers on arXiv would not be possible: The LaTeX Project, and the LaTeXML team from NIST. We deeply thank each member of these teams for their knowledge, incredible work, and commitment to accessibility.
...
Read the original on info.arxiv.org »