10 interesting stories served every morning and every evening.
The students at America’s elite universities are supposed to be the smartest, most promising young people in the country. And yet, shocking percentages of them are claiming academic accommodations designed for students with learning disabilities.
In an article published this week in The Atlantic, education reporter Rose Horowitch lays out some startling numbers. At Brown and Harvard, 20 percent of undergraduate students are disabled. At Amherst College, that’s 34 percent. At Stanford University, it’s a galling 38 percent. Most of these students are claiming mental health conditions and learning disabilities, like anxiety, depression, and ADHD.
Obviously, something is off here. The idea that some of the most elite, selective universities in America—schools that require 99th percentile SATs and sterling essays—would be educating large numbers of genuinely learning disabled students is clearly bogus. A student with real cognitive struggles is much more likely to end up in community college, or not in higher education at all, right?
The professors Horowitch interviewed largely back up this theory. “You hear ‘students with disabilities’ and it’s not kids in wheelchairs,” one professor told Horowitch. “It’s just not. It’s rich kids getting extra time on tests.” Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of students receiving accommodations.
To be fair, some of the students receiving these accommodations do need them. But the current language of the Americans with Disabilities Act (ADA) allows students to get expansive accommodations with little more than a doctor’s note.
While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they’re using to get extra time on tests. Over the past few years, there’s been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents’ Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. “It’s almost like it’s part of their identity,” Lindstrom told her. “By the time we see them, they’re convinced they have a neurodevelopmental disorder.”
What’s driving this trend? Well, the way conditions like ADHD, autism, and anxiety get talked about online—the place where most young people first learn about these conditions—is probably a contributing factor. Online creators tend to paint a very broad picture of the conditions they describe. A quick scroll of TikTok reveals creators labeling everything from always wearing headphones, to being bad at managing your time, to doodling in class as a sign that someone may have a diagnosable condition. According to these videos, who isn’t disabled?
The result is a deeply distorted view of “normal.” If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a “normal,” nondisabled person has essentially no problems. A “neurotypical” person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buttressed by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest edition of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it’s easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they’re learning disabled, and any problems making friends are a sign they have autism.
Risk-aversion, too, seems like a compelling factor driving bright students to claim learning disabilities. Our nation’s most promising students are also its least assured. So afraid are they of failure—of bad grades, of a poorly received essay—that they take any sign of struggle as a diagnosable condition. A few decades ago, a student who entered college and found the material harder to master and their time less easily managed than in high school would have been seen as relatively normal. Now, every time she picks up her phone, a barrage of influencers is clamoring to tell her this is a sign she has ADHD. Discomfort and difficulty are no longer perceived as typical parts of growing up.
In this context, it’s easy to read the rise of academic accommodations among the nation’s most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you’ll also find yourself less capable of tackling the challenges of adult life.
...
Read the original on reason.com »
Parenting and leadership are similar. Teach a man to fish, etc.
I spent a couple of years managing a team, and I entered that role — like many — without knowing anything about how to do it. I tried to figure out how to be a good manager, and in doing so I ended up reading a lot about servant leadership. It never quite sat right with me, though. Servant leadership seems to me a lot like curling parenting: the leader/parent anticipates problems and sweeps the way clear for their direct reports/children.
To be clear, this probably feels very good (initially, anyway) for the direct reports/children. But the servant leader/curling parent quickly becomes an overworked single point of failure, and once they leave there is nobody else who knows how to handle the obstacles the leader moved out of the way for everyone. In the worst cases, they leave behind a group of people who have been completely isolated from the rest of the organisation, and have no idea what their purpose is or how to fit in with the rest of the world.
I would like to invent my own buzzword: transparent leadership. In my book, a good leader

* explains the values and principles embraced by the organisation, to aid their direct reports in making aligned decisions on their own,
* creates direct links between supply and demand (instead of deliberately making themselves a middle man), and
* enables career growth for their direct reports by letting them gradually take over leadership responsibilities.
The middle manager that doesn’t perform any useful work is a fun stereotype, but I also think it’s a good target to aim for. The difference lies in what to do once one has rendered oneself redundant. A common response is to invent new work, ask for status reports, and add bureaucracy.
A better response is to go back to working on technical problems. This keeps the manager’s skills fresh and gets them more respect from their reports. The manager should turn into a high-powered spare worker, rather than a paper-shuffler.
...
Read the original on entropicthoughts.com »
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”
The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.
According to The Information, one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry, which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year.
...
Read the original on arstechnica.com »
RAM is so expensive, Samsung won’t even sell it to Samsung
Due to rising prices from the “AI” bubble, Samsung Semiconductor reportedly refused a RAM order for new Galaxy phones from Samsung Electronics.
The price of eggs has nothing on the price of computer memory right now. Thanks to a supply crunch from the “AI” bubble, RAM chips are the new gold, with prices on consumer PC memory kits ballooning out of control. In an object lesson in the ridiculousness of an economic bubble, Samsung won’t even sell its memory to… Samsung.
Here’s the situation. Samsung makes everything from refrigerators to supermassive oil tankers. Getting all that stuff made requires an organization that’s literally dozens of affiliated companies and subsidiaries, which don’t necessarily work as closely or harmoniously as you might assume. For this story, we’re talking about Samsung Electronics, which makes Galaxy phones, tablets, laptops, watches, etc., and Samsung Semiconductor Global, which manufactures memory and other chips and supplies the global market. That global market includes both Samsung subsidiaries and their competitors—laptops from Samsung, Dell, and Lenovo sitting on a Best Buy store shelf might all have Samsung-manufactured memory sitting in their RAM slots.
Samsung subsidiaries are, naturally, going to look to Samsung Semiconductor first when they need parts. Such was reportedly the case for Samsung Electronics, in search of memory supplies for its newest smartphones as the company ramps up production for 2026 flagship designs. But with so much RAM hardware going into new “AI” data centers—and those companies willing to pay top dollar for their hardware—memory manufacturers like Samsung, SK Hynix, and Micron are prioritizing data center suppliers to maximize profits.
The end result, according to a report from SE Daily spotted by SamMobile, is that Samsung Semiconductor rejected the original order for smartphone DRAM chips from Samsung Electronics’ Mobile Experience division. The smartphone manufacturing arm of the company had hoped to nail down pricing and supply for another year. But reports say that due to “chipflation,” the phone-making division must renegotiate quarterly, with a long-term supply deal rejected by its corporate sibling. A short-term deal, with higher prices, was reportedly hammered out.
Assuming that this information is accurate—and to be clear, we can’t independently confirm it—consumers will see prices rise for Samsung phones and other mobile hardware. But that’s hardly a surprise. Finished electronics probably won’t see the same meteoric rise in prices as consumer-grade RAM modules, but this rising tide is flooding all the boats. Raspberry Pi, which strives to keep its mod-friendly electronics as cheap as possible, has recently had to raise prices and called out memory costs as the culprit. Lenovo, the world’s largest PC manufacturer, is stockpiling memory supplies as a bulwark against the market.
But if you’re hoping to see prices come down in 2026, don’t hold your breath. According to a forecast from memory supplier TeamGroup, component prices have tripled recently, causing finished-module prices to jump by as much as 100 percent in a month. Absent some kind of disastrous market collapse, prices are expected to continue rising into next year, and supply could remain constrained well into 2027 or later.
...
Read the original on www.pcworld.com »
...
Read the original on sinclairtarget.com »
Anthropic has tapped law firm Wilson Sonsini to begin work on one of the largest initial public offerings ever, which could come as soon as 2026, as the artificial intelligence start-up races OpenAI to the public market.
The maker of the Claude chatbot, which is in talks for a private funding round that would value it at more than $300bn, chose the US west coast law firm in recent days, according to two people with knowledge of the decision.
The start-up, led by chief executive Dario Amodei, had also discussed a potential IPO with big investment banks, according to multiple people with knowledge of those talks. The people characterised the discussions as preliminary and informal, suggesting that the company was not close to picking its IPO underwriters.
Nonetheless, these moves represent a significant step up in Anthropic’s preparations for an IPO that would test the appetite of public markets to back the massive, lossmaking research labs at the heart of the AI boom.
Wilson Sonsini has advised Anthropic since 2022, including on commercial aspects of multibillion-dollar investments from Amazon, and has worked on high-profile tech IPOs such as Google, LinkedIn and Lyft.
Its investors are enthusiastic about an IPO, arguing that Anthropic can seize the initiative from its larger rival OpenAI by listing first.
Anthropic could be prepared to list in 2026, according to one person with knowledge of its plans. Another person close to the company cautioned that an IPO so soon was unlikely.
“It’s fairly standard practice for companies operating at our scale and revenue level to effectively operate as if they are publicly traded companies,” said an Anthropic spokesperson. “We haven’t made any decisions about when or even whether to go public, and don’t have any news to share at this time.”
OpenAI was also undertaking preliminary work to ready itself for a public offering, according to people with knowledge of its plans, though they cautioned it was too soon to set even an approximate date for a listing.
But both companies may also be hampered by the fact that their rapid growth and the astronomical costs of training AI models make their financial performance difficult to forecast.
The pair will also be attempting IPOs at valuations that are unprecedented for US tech start-ups. OpenAI was valued at $500bn in October. Anthropic received a $15bn commitment from Microsoft and Nvidia last month, which will form part of a funding round expected to value the group at between $300bn and $350bn.
Anthropic had been working through an internal checklist of changes required to go public, according to one person familiar with the process.
The San Francisco-headquartered start-up hired Krishna Rao, who worked at Airbnb for six years and was instrumental in that company’s IPO, as chief financial officer last year.
Wilson Sonsini did not respond to a request for comment.
...
Read the original on www.ft.com »
Memory price inflation comes for us all, and if you’re not affected yet, just wait.
I was building a new PC last month using some parts I had bought earlier this year. The 64 Gigabyte T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650!
Just in the past week, we found out Raspberry Pi is increasing its single board computer prices. Micron’s killing the Crucial brand of RAM and storage devices completely, meaning there’s gonna be one fewer consumer memory manufacturer. Samsung can’t even buy RAM from itself to build its own smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse, and they’re not even buying the latest RAM tech!
I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PCPartPicker, showing RAM prices going from like $30 to $120 for DDR4, or from like $150 to $500 for 64 gigs of DDR5.
But the impacts are only just starting to hit other markets.
Libre Computer mentioned on Twitter a single 4 gigabyte module of LPDDR4 memory costs $35. That’s more expensive than every other component on one of their single board computers combined! You can’t survive selling products at a loss, so once the current production batches are sold through, either prices will be increased, or certain product lines will go out of stock.
The smaller the company, the worse the price hit will be. Even Raspberry Pi, who I’m sure has a little more margin built in, already raised SBC prices (and introduced a 1 GB Pi 5—maybe a good excuse for developers to drop JavaScript frameworks and program for lower memory requirements again?).
Cameras, gaming consoles, tablets, almost anything that has memory will get hit sooner or later.
I can’t believe I’m saying this, but compared to the current market, Apple’s insane memory upgrade pricing is… actually in line with the rest of the industry.
The reason for all this, of course, is AI datacenter buildouts. I have no clue if there’s any price fixing going on like there was a few decades ago—that’s something conspiracy theorists can debate—but the problem is there’s only a few companies producing all the world’s memory supplies.
And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
So they’re shutting down their consumer memory lines, and devoting all production to AI.
Even companies like GPU board manufacturers are getting shafted; Nvidia’s not giving memory to them along with their chips like they used to, basically telling them “good luck, you’re on your own for VRAM now!”
Which is especially rich, because Nvidia’s profiting obscenely off of all this stuff.
That’s all bad enough, but some people see a silver lining. I’ve seen some people say “well, once the AI bubble bursts, at least we’ll have a ton of cheap hardware flooding the market!”
And yes, in past decades, that might be one outcome.
But the problem here is the RAM they’re making: a ton of it is either integrated into specialized GPUs that won’t run on normal computers, or being fitted into special types of memory modules that don’t work on consumer PCs, either. (See: HBM.)
That, and the GPUs and servers being deployed now don’t even run on normal power and cooling, they’re part of massive systems that would take a ton of effort to get running in even the most well-equipped homelabs. It’s not like the classic Dell R720 that just needs some air and a wall outlet to run.
That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn’t stockpile parts earlier this year is, pretty much, in a lurch.
Even Lenovo admits to stockpiling RAM, making this like the toilet paper situation back in 2020, except for massive corporations. Not enough supply, so companies who can afford to get some will buy it all up, hoping to stave off the shortages that will probably last longer, partly because of that stockpiling.
I don’t think it’s completely outlandish to think some companies will start scavenging memory chips (ala dosdude1) off other systems for stock, especially if RAM prices keep going up.
It’s either that, or just stop making products. There are some echoes to the global chip shortages that hit in 2021-2022, and that really shook up the market for smaller companies.
I hate to see it happening again, but somehow, here we are a few years later, except this time, the AI bubble is to blame.
Sorry for not having a positive note to end this on, but I guess… maybe it’s a good time to dig into that pile of old projects you never finished instead of buying something new this year.
How long will this last? That’s anybody’s guess. But I’ve already put off some projects I was gonna do for 2026, and I’m sure I’m not the only one.
...
Read the original on www.jeffgeerling.com »
These release notes cover the new features, as well as some backwards incompatible changes you should be aware of when upgrading from Django 5.2 or earlier. We’ve begun the deprecation process for some features.
See the How to upgrade Django to a newer version guide if you’re updating an existing project.
The Django 5.2.x series is the last to support Python 3.10 and 3.11.
Django 6.0 supports Python 3.12, 3.13, and 3.14. We highly recommend, and only officially support, the latest release of each series.
Following the release of Django 6.0, we suggest that third-party app authors drop support for all versions of Django prior to 5.2. At that time, you should be able to run your package’s tests using python -Wd so that deprecation warnings appear. After making the deprecation warning fixes, your app should be compatible with Django 6.0.
Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.
CSP policies can now be enforced or monitored directly using built-in tools: headers are added via the ContentSecurityPolicyMiddleware, nonces are supported through the csp() context processor, and policies are configured using the SECURE_CSP and SECURE_CSP_REPORT_ONLY settings.
These settings accept Python dictionaries and support Django-provided constants for clarity and safety. For example:
from django.utils.csp import CSP

SECURE_CSP = {
    "default-src": [CSP.SELF],
    "script-src": [CSP.SELF, CSP.NONCE],
    "img-src": [CSP.SELF, "https:"],
}
The resulting Content-Security-Policy header would be set to:
default-src 'self'; script-src 'self' 'nonce-SECRET'; img-src 'self' https:
To get started, follow the CSP how-to guide. For in-depth guidance, see the CSP security overview and the reference docs, which include details about decorators to override or disable policies on a per-view basis.
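For the header to be emitted at all, the new middleware needs to be enabled. A minimal sketch, assuming the middleware lives in django.middleware.csp (the class name is from the release notes; the exact module path is my reading of the 6.0 docs, so check the how-to guide):

# settings.py: a sketch; the csp module path is an assumption from the 6.0 docs.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.csp.ContentSecurityPolicyMiddleware",
    # ... the rest of your middleware ...
]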
Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.
The framework provides task definition, validation, queuing, and result handling. Django guarantees consistent behavior for creating and managing tasks, while the responsibility for running them continues to belong to external worker processes.
Tasks are defined using the task() decorator:
from django.core.mail import send_mail
from django.tasks import task
@task
def email_users(emails, subject, message):
return send_mail(subject, message, None, emails)
Once defined, tasks can be enqueued through a configured backend:
email_users.enqueue(
emails=[“user@example.com”],
subject=“You have a message”,
message=“Hello there!”,
Backends are configured via the TASKS setting. The two built-in backends included in this release are primarily intended for development and testing.
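As a minimal sketch of that configuration, using the immediate backend, which runs each task synchronously at enqueue time (the dotted path is my reading of the Django 6.0 reference, so verify it against the docs):

# settings.py: the ImmediateBackend executes tasks inline, so it is only
# suitable for development and testing, not production workloads.
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}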
Django handles task creation and queuing, but does not provide a worker mechanism to run tasks. Execution must be managed by external infrastructure, such as a separate process or service.
See Django’s Tasks framework for an overview and the Tasks reference for API details.
Email handling in Django now uses Python’s modern email API, introduced in Python 3.6. This API, centered around the email.message.EmailMessage class, offers a cleaner and Unicode-friendly interface for composing and sending emails. It replaces use of Python’s older legacy (Compat32) API, which relied on lower-level MIME classes (from email.mime) and required more manual handling of message structure and encoding.
Notably, the return type of the EmailMessage.message() method is now an instance of Python’s email.message.EmailMessage. This supports the same API as the previous SafeMIMEText and SafeMIMEMultipart return types, but is not an instance of those now-deprecated classes.
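A short sketch of what that change looks like in practice (addresses here are placeholders):

from django.core.mail import EmailMessage

msg = EmailMessage(
    subject="Hello",
    body="Plain text body",
    from_email="from@example.com",
    to=["to@example.com"],
)
m = msg.message()
# m is now a Python email.message.EmailMessage rather than SafeMIMEText,
# but the familiar header and content accessors still work.
print(m["Subject"])          # "Hello"
print(m.get_content_type())  # "text/plain"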
...
Read the original on docs.djangoproject.com »
2021 PHEV BMW iBMUCP 21F37E Post-Crash Recovery — When EU engineering becomes a synonym for “unrepairable” + “generating waste”.
If you own a BMW PHEV — or if you’re an insurance company — every pothole, every curb impact, every incident small or large, and even a rabbit jumping out of a bush represents a potential €5,000 cost, just for a single blown fuse inside the high-voltage battery system.
This “safety fuse” is designed to shut the system down the moment any crash event is detected. Sounds safe — but extremely expensive. Theoretically, insurance for a BMW PHEV should be 3x higher than for an ICE or EV.
Unfortunately, that’s not the only issue.
BMW has over-engineered the diagnostic procedure to such a level that even their own technicians often do not know the correct replacement process. And it gets worse: the original iBMUCP module, which integrates the pyrofuse, contactors, BMS and internal copper-bonded circuitry, is fully welded shut. There are no screws, no service openings, and it is not designed to be opened, even though the pyrofuse and contactors are technically replaceable components. Additionally, the procedure requires flashing the entire vehicle both before and after the replacement, which adds several hours to the process and increases the risk of bricked components, which can raise the recovery cost by a factor of 10.
But that is still not the only problem.
Even after we managed to open the unit and access everything inside, we discovered that the Infineon TC375 MCU is fully locked. Both the D-Flash sectors and crash-flag areas are unreadable via DAP or via serial access.
Meaning: even if you replace the pyrofuse, you still cannot clear the crash flag, because the TC375 is cryptographically locked.
This leaves only one method:
➡️ Replace the entire iBMUCP module with a brand-new one. (1100€ + tax for faulty fuse)
And the registration of the new component is easily one of the worst procedures we have ever seen. You need an ICOM, IMIB, and AOS subscription — totalling over €25,000 in tools — just to replace a fuse. (Even though we managed to activate this one with the IMIB alone, the full toolset will be necessary in some situations.)
Yes, you read that correctly: 25,000€.
A lot of vehicles designed and produced in Europe — ICE, PHEV, and EV — have effectively become a misleading ECO exercise. Vehicles marketed as “CO₂-friendly” end up producing massive CO₂ footprints through forced services, throw-away components, high failure rates, unnecessary parts manufacturing cycles, and overcomplicated service procedures — far larger than what the public is told. If we are destroying our ICE automotive industry based on EURO norms, who is calculating the real ECO footprint of replacement-part manufacturing, unnecessary servicing, and real waste costs?
We saw this years ago on diesel and petrol cars:
DPF failures, EGR valves, high-pressure pumps, timing belts running in oil, low quality automatic transmissions, and lubrication system defects. Everyone calculates the CO₂ footprint of a moving vehicle — nobody calculates the CO₂ footprint of a vehicle that is constantly broken and creating waste.
ISTA’s official iBMUCP replacement procedure is so risky that if you miss one single step — poorly explained within ISTA — the system triggers ANTITHEFT LOCK.
This causes the balancing controller to wipe and lock modules.
Meaning: even in an authorised service centre, the system can accidentally delete the configuration, leaving the car needing not only a new iBMUCP, but also all new battery modules.
Yes — replacing a fuse can accidentally trigger the replacement of all healthy HV modules, costing €6,000+ VAT per module, plus a massive unknown CO₂ footprint.
This has already happened to several workshops in the region.
The next problem: BMW refuses to provide training access for ISTA usage. We submitted two official certification requests — both were rejected by the central office in Austria, which is borderline discriminatory.
One more problem: battery erasure can happen in an OEM workshop just as it can in ours or any other third-party workshop, but if the procedure was started in workshop 1, it can’t be continued in workshop 2. If battery damage happens in our workshop during a fuse change and a battery swap then becomes necessary, neither we nor even an OEM workshop cover the cost of a completely new battery pack. This heavily increases ownership costs.
All of this represents unnecessary complexity with no meaningful purpose.
While Tesla’s pyrofuse costs €11 and the BMS reset is around 50€, allowing the car to be safely restored, BMW’s approach borders on illogical engineering, with no benefit to safety, no benefit to anti-theft protection — the only outcome is the generation of billable labour hours and massive amounts of needless electronic/lithium waste.
Beyond that, we are actively working on breaking the JTAG/DAP protection to gain direct access to the D-Flash data and decrypt its contents together with our colleagues from Hungary. The goal is to simplify the entire battery-recovery procedure, reduce costs, and actually deliver the CO₂ reduction that the EU keeps misleading the public about — since the manufacturers clearly won’t.
Fault code: 21F35B high voltage battery unit, voltage and electric current sensor, current: Counter for the reuse of cell modules exceeded (safety function)

OEM service cost: 4000€ + tax (approx. — if you have a BMW quote, send it)
OEM iBMUCP: 1100€ + tax
Labor hours: 24h–50h
EVC: 2500€ + tax (full service)
It is cheaper to change the LG battery on a Tesla than to change a fuse on a BMW PHEV — and probably with a smaller CO₂ footprint.
If you want to book your service with EV CLINIC:
Zagreb 1: www.evclinic.hr
Berlin: www.evclinic.de
Slovenia: www.evclinic.si
Serbia: www.evclinic.rs
...
Read the original on evclinic.eu »
This is the code I currently use to drive my volumetric displays.
It supports two closely related devices which are configured in the src/driver/gadgets directory:
* Rotovox is a 400mm Orb featuring two 128x64 panels arranged vertically side by side.
* Vortex is a 300mm Orb featuring two 128x64 panels arranged horizontally, back to back.
Rotovox has a higher vertical resolution and better horizontal density; Vortex is brighter and has a higher refresh rate.
The 3D printable parts for Vortex are available here.
This code was originally written for a single display, and the device specific code was later somewhat abstracted out to support a second similar gadget. There are assumptions about the hardware that are pretty well baked in:
* It consists of two HUB75 LED panels spinning around a vertical axis.
* The panels use either ABCDE addressing or ABC shift register addressing.
* It uses a single GPIO (a photodiode or similar) to sync to rotation - high for 180°, low for 180°.
The GPIO mappings and panel layout are defined in src/driver/gadgets/gadget_. GPIO is via memory mapped access - if you’re using a different model of Pi you’ll need to change BCM_BASE in the GPIO code. I haven’t tested this, and you should probably assume it doesn’t work.
Input is via a bluetooth gamepad - I’ve been using an Xbox controller, and the input system is based on the default mapping for that.
Audio out is also via bluetooth. I haven’t had success with the higher quality codecs, but the headset protocol works.
There are two parts to this code - the driver, which creates a voxel buffer in shared memory and scans its contents out in sync with rotation, and the client code which generates content and writes it into the voxel buffer. Both driver and client code are designed to run on the same device, a Raspberry Pi embedded in the hardware and spinning at several hundred RPM. There is a demo included in the Python directory which streams point clouds from a PC over wifi to the device, but fundamentally it’s designed as a self contained gadget, like an alternate timeline Vectrex. A bluetooth gamepad is used to control the demos.
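As a rough illustration of the client side of that split; note that the shared-memory path, voxel layout, and dimensions below are hypothetical placeholders, not the project’s actual API:

# Hypothetical sketch of a Multivox-style client: map the driver's shared
# voxel buffer and light one voxel. Path, layout, and sizes are assumptions.
import mmap

W, H, D = 128, 128, 64  # assumed voxel grid dimensions

with open("/dev/shm/voxels", "r+b") as f:   # hypothetical shm file name
    buf = mmap.mmap(f.fileno(), W * H * D)  # assume one byte per voxel
    x, y, z = 10, 20, 30
    buf[(z * H + y) * W + x] = 0xFF  # set a single voxel to full brightness
    buf.flush()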
On the Raspberry Pi, clone the repository:
Configure the project for your hardware:
First, the driver has to be running:
When invoked from the command line it periodically outputs profiling information (frame rate, rotation rate), and accepts keyboard input for various diagnostics:
While that’s running, try one of the toys:
The viewer takes a list of .obj and .png files as arguments. You can scale, rotate and so on using the gamepad, and it also accepts keyboard input when run remotely from the command line.
If you don’t have a physical volumetric display, there’s a simulator, virtex, which you can run in place of vortex. It exposes the same voxel buffer in shared memory, but renders the contents using OpenGL in an X11 window.
Run without command line arguments, it creates a display compatible with the currently configured gadget, but there are some options to let you experiment with different geometries:
An idealised device with linear scanning and 3 bits per channel can be invoked like this:
The simulator is fill rate intensive; if you’re running it on a Raspberry Pi you’ll probably want to reduce the slice count.
If you want it to start up automatically on boot, you can install vortex as a service, and set multivox to run on startup.
First install everything to its default location ~/Multivox:
This will build the executable files and copy them into the destination directory, as well as creating .mct files in ~/Multivox/carts for the built in toys.
and fill in the following information:
Then start it up:
The driver assigns itself to core 3 - you can add isolcpus=3 to the end of /boot/cmdline.txt to ensure it’s the only thing running on that core.
You’ll also want the launcher to start up on boot:
If everything goes smoothly, when you turn on the device it will boot up into Multivox. This is a fantasy console which acts as a launcher for all the games and demos you run on the hardware. The bundled toys are automatically installed in the ~/Multivox/carts/ directory as .mct files, and external apps can be launched by adding a .mct file containing its command, path and arguments.
Each .mct file appears as a cartridge in the Multivox front end. They should each have a label on the side; at the moment all you can do to distinguish between them is change their colour in the .mct.
When you exit an app back to the launcher, it saves a snapshot of the voxel volume, and this gives a preview of what you’ll see when you launch a cart. This means there are two competing representations of the same information, and any future work on the front end will probably start with overhauling the entire approach.
Some basic UI for controls such as changing bit depth, rebooting and so on would also be a boon.
...
Read the original on github.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.