10 interesting stories served every morning and every evening.
Building a fake battery, adding a USB-C port, booting from SD card, and giving a new life to a classic Linux smartphone.
My friend Dima sent me his old-school classic Nokia N900. The battery is very old, and it does not boot as-is. So naturally, I wanted to see if I could resurrect it.
Step 0: Is such a thing even possible?
Yes it is! (Unless there are other hardware issues)
I did look at BL-5J batteries for sale. Many listings say “Genuine” and “OEM”, but what does that even mean for a battery for a device that is at least 10 years old? (Some later Nokia Lumia phones used the same battery) And listings with newer-looking batteries do not list the manufacture date. I don’t need another old battery or a “spicy pillow”.
Also, where’s the fun in that?
Cut and soldered a quick prototype to connect instead of the battery. Resistors are to emulate the “normal” temperature by providing expected resistance between the third pin and ground. See link above for details.
Hooked up a large supercapacitor to the battery pins and to a +5V source. If I recall correctly, using a capacitor without additional power did not work.
Now, let’s make something that can fit into the battery compartment.
These supercapacitors are nice, but way too large. After searching on Mouser, I found FM0H473ZF, 47,000 µF (0.047 F) capacitors in a rectangular case that is only 5mm thick.
Ten of these in parallel (~0.5 F) are enough to run the smartphone without it dying.
Capacitor contraption (TM) arranged (using a 3D-printed template) and soldered together.
And they all fit nicely into the battery compartment. The power is provided by a wire routed through the hole for the carry loop.
Running fine! One noticeable issue is that capacitors are getting pretty warm. Probably my sloppy soldering, but no shorts that I could find.
This is where I should have stopped. At some point while messing with the “battery” and power, I managed to corrupt the internal partition and the installed OS. Not sure if this was from the sudden battery pull or from supplying +5V instead of the expected +4.2V to the battery pins. Luckily, newer Maemo Leste is intended to run from the SD card anyway, and internal storage still works, so I was able to overwrite it with the bootloader.
I thought it might be practical to power the “battery” through the existing USB port. Just run the +5V wire from USB to the “battery”, and avoid additional wires. (If you think this is kinda stupid, you are right)
Yooo… What is happening here? Dima says “oh yeah, the USB port was re-soldered. Twice”. A quick glance at the forums also confirms that the USB port was poorly designed and is prone to breaking.
Just one wire from the +5V pad to the “battery”. The ground is the same as the battery pin.
Assembled everything back, routed and soldered the +5V wire, and added a diode to prevent the battery from feeding the USB port, and to drop the voltage to a more acceptable ~4.3V.
The setup works, but the smartphone constantly shows either “Charging”, or “Device using more power than it is receiving from the PC. Charging with a compatible charger is recommended”, with the battery gauge going crazy.
And then, the power just cut out.
Yeah, this was not a great idea. Let’s see what happened.
USB +5V wire detached itself from the port. I presume this is from either the high current, age, stress, or corrosion.
However, when I opened the smartphone up, I… ripped off the +5V pad. (dark circle in lower right on the photo)
After reading some N900 forums, that +5V pad is a common place to connect the replacement USB port to (which was done here), but… that is the ONLY +5V connection on the board besides the pads under the USB port itself.
RIP Nokia N900. I tried to resurrect you, but instead, I killed your OS and ripped out the USB port wires.
To be fair, N900 is far from dead. I already flashed u-boot, was able to boot from SD card, and do not plan to use internal storage otherwise. Power can be supplied entirely through the new “battery”. So technically, I do not need the USB functionality for the smartphone itself, just to power the “battery”. At this point, I might as well replace the port with USB-C. Because why not.
Approximate placement of the new USB port.
The location of the original port is not very convenient. It is sandwiched between the main board and the SD card reader (lower left on the photo). The SD card reader is also connected by a permanently attached ribbon (i.e. nearly irreplaceable).
First, I used a small file to make the micro-USB-shaped hole on the smartphone body fit the USB-C shape. Then, I took a small 6-pin USB-C port, cut and sanded down its plastic parts to make it fit in the original spot. It is still slightly (~0.25mm) taller than the original, but I cannot make it any slimmer.
I tried to attach the USB-C port to the board in the correct place by carefully assembling the board, port, and SD card reader into the body, then using small drops of glue to lightly affix the reachable edge of the USB port to the main board. The intent was to wait for the glue to cure, take everything back apart, and glue the port in its now-correct position for good. This took several tries but did not really work: the port got detached while removing the main board every time, and the superglue I used left lots of residue but did not adhere. Luckily, the tight fit and the shape of the USB-C port hold it in place mechanically quite well.
Originally, I planned to solder all 6 pins and add 5.1 kΩ pull-down resistors to the CC1 and CC2 pins (for full power delivery functionality). But there is simply not enough space to route the wires: the narrow valley between the chips (in the lower right of the photo) barely fits 3 wires, and I did not have anything thinner on hand.
Since I did not solder the pull-down resistors, this USB-C port could only be powered by a “dumb” USB-A-to-USB-C cable, at default 0.5A. Chargers with power delivery functionality cannot identify such USB-C ports, and will not provide power at all. (This is also an issue with some handheld consoles such as RGB30)
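An aside on why those missing resistors matter: a USB-C source advertises its current capability by pulling the CC line up through a resistor (Rp), and a sink announces its presence with a 5.1 kΩ pull-down (Rd); the resulting divider voltage is what the charger reads. The sketch below uses the 5 V pull-up Rp values from the Type-C spec as I recall them; treat the exact numbers as illustrative:

```python
# Voltage a USB-C source sees on the CC line when a sink's 5.1 kOhm
# pull-down (Rd) is present. Rp values are per advertised current level.
RD = 5_100  # sink pull-down, ohms

def cc_voltage(rp_ohms: float, vpullup: float = 5.0) -> float:
    """Divider voltage on CC: Vpullup * Rd / (Rp + Rd)."""
    return vpullup * RD / (rp_ohms + RD)

# Advertised source current -> Rp pull-up to 5 V (per USB-C spec)
rp_by_current = {"default (0.5 A)": 56_000, "1.5 A": 22_000, "3.0 A": 10_000}

for label, rp in rp_by_current.items():
    print(f"{label}: CC ~= {cc_voltage(rp):.2f} V")

# With no Rd soldered (as in this mod), CC floats near the pull-up
# voltage, the source decides no sink is attached, and a smart
# charger supplies no power at all -- hence the "dumb" A-to-C cable.
```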
The two wires are routed to the battery compartment through a very convenient opening in the metal frame, crimped and inserted into a DuPont connector.
Back to the battery. The capacitor contraption I built before works, but it is kind of flimsy and has no room left for a DuPont connector. I would also rather use a single capacitor, but it still has to fit. And since the original battery is unusable, I might as well try to salvage it, too.
Take off the sticker (that tells you not to do so :). The top BCM piece is held to the main battery body by two tiny screws (hidden under some crumbly compound) on each end, double-sided sticker, and a single lead in the middle.
Battery Control Module. Interestingly, for this battery, the body is the positive terminal. So the positive lead connects the battery body and the positive pin directly, while the negative lead goes through some control circuitry. Attaching a capacitor to these battery terminals should be sufficient.
Since I have a 3D printer, and once you have one, every problem can be solved by printing stuff, I printed the new “battery” to accommodate a large capacitor, diode (for voltage drop), wires, DuPont connectors, and the original battery’s BCM.
N900 with a new “battery”. Fits really tight, and only 0.25-0.5mm too tall, so the cover still snaps closed.
Boots without problems. Since the attached capacitor is pretty large, it can take a minute or two to charge it to an acceptable level (~4.0V) with a 0.5A current.
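As a sanity check on that charge time: a capacitor charged at constant current ramps up linearly, so t = C·ΔV/I. The post does not state the new capacitor's value, so the capacitance below is a guess chosen to match the description, and the diode drop is ignored:

```python
# Rough charge-time estimate for the supercap "battery".
# C is an assumed value -- the post does not give the actual capacitance.
C = 15.0        # farads (hypothetical supercap size)
I = 0.5         # amps, default USB current limit
V_TARGET = 4.0  # volts, the "acceptable level" from the post

t_seconds = C * V_TARGET / I  # constant-current ramp from 0 V
print(f"~{t_seconds / 60:.1f} minutes to reach {V_TARGET} V")  # ~2.0 minutes
```

A 15 F part would indeed take about two minutes at 0.5 A, which lines up with the observed behavior.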
Nokia N900 enjoying its new life as an online radio device using Open Media Player.
...
Read the original on yaky.dev »
Preamble: The whole affair is Google’s fault and not Bear Blog’s. Huge thanks to Herman—Bear Blog’s founder and dev—for his patience and help.
A month after I started my first Bear blog at blog.james-zhan.com, my blog was entirely de-indexed by Google for no apparent reason:
I have since migrated to journal.james-zhan.com (you are on it right now) and redirected all links from blog.james-zhan.com accordingly, but to this day, I don’t understand what happened, and so I’m putting this post out there to see if perhaps anyone could shed some light—you are welcome to email me or leave a comment at the bottom of the post.
Let me backtrack and show you how it all went down.
My blog went live on Oct 4 and I published a lengthy, well-researched opinion piece commenting on a recent event.
Because of that, I wanted the article to show up on Google ASAP so that when people searched about the event, maybe my article would come up. I knew that it could take a while for Google to naturally crawl and index a new site, so to accelerate the process, I went on Google Search Console (GSC), submitted the sitemap and requested indexing on my article.
And it worked—the next day, my blog and articles were indexed and showed up on Google if you put in the right search terms.
In GSC, you can even see that I was getting some impressions and clicks at the time from the exact topic that my opinion piece was about. Great!
From then on, every time I published a new post, I would go to GSC and request indexing for the post URL, and my post would be on Google search results shortly after, as expected.
On Oct 14, as I was digging around GSC, I noticed that it was telling me that one of the URLs wasn’t indexed. I thought that was weird, and not being very familiar with GSC, I went ahead and clicked the “Validate” button.
Only afterwards did I realize that the URL was the RSS feed subscribe link, https://blog.james-zhan.com/feed/?type=rss, which wasn’t even a page, so it made sense that it hadn’t been indexed. But it was too late, and there was no way for me to stop the validation.
I received an email from GSC telling me it was validating that 1 page with indexing issues:
Four days later, on Oct 20, I received an email from GSC saying “Some fixes failed for Page indexing issues on site https://blog.james-zhan.com/” and when I searched “site:blog.james-zhan.com,” I saw that all but one of my blog posts had been de-indexed:
All of them showed the same reason:
“Page is not indexed: Crawled — currently not indexed”
Confused, I poked around GSC to see if it showed me why, and I couldn’t find anything useful, so I resubmitted the sitemap for good measure, and clicked “Validate” again.
I even requested indexing for all the individual blog post URLs and that didn’t do anything.
As of the publishing of this post, the validation status is still “Started” (it’s been nearly 20 days).
As I was troubleshooting, I noticed that the day that I initiated the validation for the first time (Oct 14) was the same day that all but one of my blog posts got de-indexed:
Did my accidental attempt to make GSC index https://blog.james-zhan.com/feed/?type=rss cause some kind of glitch, thereby de-indexing the rest of the blog?
I don’t get why it would, but it’s weird that the two events happened on the same day.
While this was going on, I continued to post a few articles, and you can see that all the new posts faced the “Page is not indexed: Crawled — currently not indexed” error:
And then on Nov 3, I discovered that the remaining, single blog post that had been indexed just got de-indexed as well:
So basically, no one could find my blog on Google.
I’m not a web dev or programmer, but I tried my best to cover as much ground as possible in my troubleshooting to narrow down the cause.
The root domain, james-zhan.com, was from GoDaddy. I’ve had this domain for many years and I’ve used it on different sites and never had an issue with Google’s indexing.
For example, just this year, I created a new subdomain with it and that’s been indexed by Google.
I also don’t touch any advanced configuration with DNS records or what have you—I don’t have knowledge in that stuff, so it’s unlikely I somehow screwed up something in GoDaddy.
But just to be sure it wasn’t some wonky thing going on specifically with the Bear blog + GoDaddy combo, I created another Bear blog with the subdomain www.james-zhan.com.
This one shows up on Google no problem.
Whenever people discuss the indexing of their website in online forums, they always talk about the quality of the content being a huge factor. They say that your site isn’t indexed or isn’t ranked highly because your site doesn’t have much content, your content is low effort, or something like that.
First, I’m not worried about ranking—I just want my blog to be properly indexed.
Second, the issue couldn’t be the quality or the quantity of the content. I came across some other pretty barebones Bear blogs that don’t have much content, and looked them up on Google, and they showed up in the results just fine.
An example: Phong’s blog. It’s a very minimalist blog with only 6 posts (of great quality) and it shows up on Google search.
Conclusion: Quality or quantity of content wasn’t the cause.
I read about how the structure of a site can play a role in Google’s indexing.
Some say that if your blog posts’ URLs are all “orphaned,” like:
Allegedly, that might cause Google to not index your posts. By default, when you publish a post on Bear Blog, the blog post’s path isn’t preceded by “blog/.”
So I went around and checked the post URLs of other Bear blogs and saw that none of them had “/blog/” in them, and those blogs were indexed just fine. I also highly doubt it’s a real issue; otherwise, it wouldn’t be the default behaviour on Bear Blog.
Conclusion: Lack of internal linking wasn’t the cause.
I reached out to Herman with all the details and asked him for help. Of course, he responded promptly and helped me troubleshoot to identify the cause.
He was able to confirm the following:
* GoDaddy and DNS weren’t the cause
* My bear blog had nothing that would prevent Google from indexing
* HTML/CSS doesn’t affect SEO/indexing
I had the following CSS code to put the tags above the blog post title, but Herman said this was fine
/* Move tags above the title */
main {
    display: flex;
    flex-direction: column;
}

/* Style and reposition the tags */
main > p.tags {
    order: -1; /* Moves tags above the title */
    margin: 0 0 0.6rem 0;
    font-size: 0.9em;
    letter-spacing: 0.02em;
    color: var(--heading-color);
    opacity: 0.8;
}

/* Keep the title below tags */
main > h1 {
    order: 0;
}
I just wanted to take a moment to express my gratitude to Herman for investigating this with me. My emails to him were pretty elaborate with troubleshooting steps I had taken along with many screenshots. He took the time to fully understand the whole issue and even triple-checked my site to make sure everything was sound.
It was a refreshing tech support experience, and made me love Bear Blog as a platform just that much more.
I don’t even have to use “site:”—just by searching “James Zhan blog,” both my blog and my www.james-zhan.com site show up in other search engines:
So there’s definitely nothing wrong on a technical level with my blog that would prevent Google from indexing it.
I copied my blog over to a different subdomain (you are on it right now), moved my domain from GoDaddy to Porkbun for URL forwarding, and set up URL forwarding with paths so any blog post URLs I posted online will automatically be redirected to the corresponding blog post on this new blog.
I also avoided submitting the sitemap of the new blog to GSC. I’m just gonna let Google naturally index the blog this time. Hopefully, this new blog won’t run into the same issue.
At this point, I’m no longer trying to resolve the issue, but just out of curiosity, I do want to know what the hell happened there. I’d had a previous site on GSC to track traffic for many years and never had such an issue.
If any of you have any guesses, I’d love to hear them (email me or leave a comment below)!
Subscribe to my blog via email or RSS feed.
...
Read the original on journal.james-zhan.com »
The Tor Project has been busy with the rustification of their offering for quite some time now.
If you have used Tor Browser, you know what it does. Anonymous browsing through encrypted relay chains. The network itself has been running since the early 2000s. All of it is built on C.
But that C codebase is a liability. It has a history of buffer overflows, use-after-free bugs, and memory-corruption vulnerabilities. That is why they introduced Arti, a Rust rewrite of Tor that tackles these flaws by leveraging the language’s memory safety.
A new release of Arti just dropped last week, so let’s check it out!
We begin with the main highlight of this release, the rollout of the circuit timeout rework that was laid out in proposal 368. Tor currently uses something called Circuit Dirty Timeout (CDT). It is a single timer that controls when your connection circuits become unavailable and when they close down.
Unfortunately, it is predictable. Someone monitoring traffic can spot these patterns and potentially track your activity. Arti 1.8.0 fixes this by implementing usage-based timeouts with separate timers. One handles when circuits accept new connections. Another closes idle circuits at random times instead of fixed intervals.
This should reduce the risk of fingerprinting from predictable timeout behavior.
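To make the idea concrete, here is a toy model of the two-timer scheme. The constants and structure are my own illustration of the proposal's shape, not Arti's actual implementation:

```python
import random
import time

# Illustrative model of proposal-368-style circuit timeouts.
# All constants are made up for demonstration, not taken from Arti.
MAX_DIRTINESS = 600                        # s: stop attaching new streams
IDLE_CLOSE_MIN, IDLE_CLOSE_MAX = 300, 900  # s: randomized idle-close window

class Circuit:
    def __init__(self, now: float):
        self.created = now
        self.last_used = now
        # Each circuit draws its own idle-close deadline at random, so
        # closes no longer happen at a single predictable interval.
        self.idle_close_after = random.uniform(IDLE_CLOSE_MIN, IDLE_CLOSE_MAX)

    def accepts_new_streams(self, now: float) -> bool:
        # Timer 1: a deterministic cutoff for accepting new connections.
        return now - self.created < MAX_DIRTINESS

    def should_close(self, now: float) -> bool:
        # Timer 2: close only after the circuit's own random idle deadline.
        return now - self.last_used > self.idle_close_after

now = time.monotonic()
c = Circuit(now)
print(c.accepts_new_streams(now))   # True: brand-new circuit
print(c.should_close(now + 1000))   # True: idle past any possible deadline
```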
Next up is the new experimental arti hsc ctor-migrate command that lets onion service operators migrate their restricted discovery keys from the C-based Tor to Arti’s keystore.
These keys handle client authorization for onion services. The command transfers them over without requiring operators to do the manual legwork. The release also delivers improvements for routing architecture, protocol implementation, directory cache support, and OR port listener configuration.
You can go through the changelog to learn more about the Arti 1.8.0 release.
...
Read the original on itsfoss.com »
Apple loses its appeal of a scathing contempt ruling in iOS payments case
But Sweeney warns iOS devs are still afraid of “totally illegal” retaliation by Apple.
A rare “total eclipse” of Apple by Epic Games, captured on camera.
Back in April, District Court Judge Yvonne Gonzalez Rogers delivered a scathing judgment finding that Apple was in “willful violation” of her 2021 injunction intended to open up iOS App Store payments. That contempt of court finding has now been almost entirely upheld by the Ninth Circuit Court of Appeals, a development that Epic Games’ Tim Sweeney tells Ars he hopes will “do a lot of good for developers and start to really change the App Store situation worldwide, I think.”
The ruling, signed by a panel of three appellate court judges, affirmed that Apple’s initial attempts to charge a 27 percent fee to iOS developers using outside payment options “had a prohibitive effect, in violation of the injunction.” Similarly, Apple’s restrictions on how those outside links had to be designed were overly broad; the appeals court suggests that Apple can only ensure that internal and external payment options are presented in a similar fashion.
The appeals court also agreed that Apple acted in “bad faith” by refusing to comply with the injunction, rejecting viable, compliant alternatives in internal discussions. And the appeals court was also not convinced by Apple’s process-focused arguments, saying the district court properly evaluated materials Apple argued were protected by attorney-client privilege.
While the district court barred Apple from charging any fees for payments made outside of its App Store, the appeals court now suggests that Apple should still be able to charge a “reasonable fee” based on its “actual costs to ensure user security and privacy.” It will be up to Apple and the district court to determine what that kind of “reasonable fee” should look like going forward.
Speaking to reporters Thursday night, though, Epic founder and CEO Tim Sweeney said he believes those should be “super super minor fees,” on the order of “tens or hundreds of dollars” every time an iOS app update goes through Apple for review. That should be more than enough to compensate the employees reviewing the apps to make sure outside payment links are not scams and lead to a system of “normal fees for normal businesses that sell normal things to normal customers,” Sweeney said.
“The 9th Circuit Court has confirmed: The Apple Tax is dead in the USA,” Sweeney wrote on social media. “This is the beginning of true, untaxed competition in payments worldwide on iOS.”
An Apple spokesperson has not yet responded to a request for comment from Ars Technica.
While some app developers have made the move to their own payment processors in the wake of April’s ruling, Sweeney said he thinks many were waiting to see if the decision would be overturned on appeal. With that fear now mooted, he said we’ll probably see “rapid adoption” of outside payment processors, including the Epic Web Shops that the company rolled out in October. Sweeney predicted that these kinds of web payments for iOS apps “will just become the norm” by the end of next year and that “after years of Apple obstruction, we’re finally going to see large-scale change happening.”
Sweeney also pointed to an alleged widespread “fear of retaliation” that has led many iOS developers to keep paying 30 percent fees to use Apple’s default in-app payments. Sweeney said that Apple has “infinite power to retaliate” against apps that add outside payment options by indefinitely delaying their app reviews or burying their products in App Store search results. Sweeney called this kind of “ghosting” a “totally illegal” exercise of “soft power” by Apple that regulators will need to look at if it keeps up.
When pitching Epic’s own outside payment options to major iOS developers, Sweeney said he frequently has to address fears that lower payment fees will be overwhelmed by the drop in users caused by this kind of Apple retaliation. “We’re just too afraid of Apple hurting our business,” Sweeney said in summary of the common response from other developers. “The sad truth is everybody’s afraid of Apple.”
Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.
...
Read the original on arstechnica.com »
Crossing the Koralpe massif more quickly and with more comfort. That’s what the future of train travel from Graz to Klagenfurt looks like. With the Koralm Railway, you will arrive at your destination even quicker. The fastest connection will shrink from three hours to just 45 minutes. Western Styria and southern Carinthia can be reached even more easily — as can our neighbouring countries Hungary and Italy.
The economy is also benefiting from the construction of the new Koralm Railway. As part of the new Southern Line, it strengthens the Baltic-Adriatic Corridor in Europe. Transporting goods in Austria by train is becoming more attractive, which in turn allows our operations to remain competitive internationally, and the environment to breathe: each tonne of freight moved by rail generates around 15 times less CO2 emissions than transporting it by lorry.
...
Read the original on infrastruktur.oebb.at »
Researchers have successfully used CRISPR gene editing technology to create a fungi strain that is highly efficient, more nutritious, and significantly more sustainable than its natural counterpart. The fungus Fusarium venenatum already stands out for its meat-like flavor and texture, leading to its approval for food use in several countries. This breakthrough, published in the journal Trends in Biotechnology, addresses the need for better, more environmentally friendly alternatives to conventional animal agriculture, which accounts for about 14% of global greenhouse gas emissions.
The scientists, led by corresponding author Xiao Liu of Jiangnan University, used CRISPR to remove two specific genes. The first modification, eliminating a gene for chitin synthase, resulted in thinner fungal cell walls. This change is crucial as it makes the fungal protein easier for humans to digest and increases its bioavailability. The second change involved removing the pyruvate decarboxylase gene, which optimized the fungus’s metabolism. This fine-tuning made the new strain, called FCPD, more productive, requiring 44% less sugar to produce the same amount of protein and doing so 88% faster than the original strain.
When scaled up, FCPD production showed a lower environmental footprint regardless of the manufacturing location, reducing greenhouse gas emissions by up to 60% over its life cycle compared to traditional fungal protein production. Furthermore, compared to chicken production in China, the new mycoprotein requires 70% less land and reduces the risk of freshwater pollution by 78%. According to the researchers, this type of gene-edited food can help meet global food demands without the substantial environmental costs associated with conventional farming, representing a major advancement in the field of sustainable food technology.
For more details, read this article or download the open-access paper.
...
Read the original on www.isaaa.org »
We absolutely love SQLite here at DB Pro. You’d be hard-pressed to find anyone who actively dislikes it. Sure, it has limitations, and I do mean limitations, not weaknesses. SQLite can absolutely be used in production when it’s deployed properly and tuned with care.
SQLite has also seen something of a resurgence over the past few years. From being forked into projects like libSQL and Turso, to powering popular backend frameworks such as PocketBase, it’s clearly having a moment again.
As I said though, we love it. It even powers the local database inside DB Pro itself. For our use case, there really isn’t a better alternative.
Because we’ve been using SQLite in anger over the past three months, we’ve learnt a huge amount about it, including plenty of things we didn’t know before.
So I’m planning to write a short series of blog posts covering some of the cooler, more interesting features and nuances of SQLite that we’ve discovered along the way. This is the first of those posts.
First of all: did you know SQLite has JSON functions and operators? I didn’t until recently! I came across this Hacker News comment while researching SQLite’s JSON operators.
I read that, then read it again, then again until I understood what bambax was saying. I was sort of in disbelief.
So I had to give it a try to see if it works. We’ve got an embedded SQLite-in-the-browser component on our blog and so I wanted to throw together some working examples for you guys (but mostly for me).
Let’s break down what bambax is saying:
This means you never have to choose your indexing strategy up front. If you later realise you need to query on a new JSON field, you simply add another generated column, add an index, and you’re done.
No data migration. No schema rewrite. No ETL. Just pure flexibility.
Your JSON documents are stored naturally, exactly as they arrive. No schema gymnastics required either!
Here’s where bambax is saying the magic happens. Let’s add virtual generated columns. Generated columns compute values on demand. They don’t actually store data:
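Here’s a minimal, self-contained sketch of that step using Python’s built-in sqlite3 (the table and field names are my own invention; virtual generated columns need SQLite 3.31+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
con.execute(
    "INSERT INTO events (body) VALUES (?)",
    ('{"type": "signup", "country": "NZ"}',),
)

# Virtual generated columns: computed from the JSON on demand.
# Nothing is written to the existing rows, nothing is backfilled.
con.execute(
    "ALTER TABLE events ADD COLUMN type TEXT "
    "GENERATED ALWAYS AS (json_extract(body, '$.type')) VIRTUAL"
)
con.execute(
    "ALTER TABLE events ADD COLUMN country TEXT "
    "GENERATED ALWAYS AS (json_extract(body, '$.country')) VIRTUAL"
)

# The JSON fields now read like ordinary columns.
print(con.execute("SELECT type, country FROM events").fetchone())
# ('signup', 'NZ')
```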
I believe that no writes occur. There’s no backfilling. It’s instant. The virtual columns are computed on-the-fly from your JSON data whenever you query them.
Now is the icing on the cake. We add indexes to make these virtual columns blazing fast:
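In SQLite that’s one CREATE INDEX per column. A self-contained sketch with a hypothetical `events` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
con.execute(
    "ALTER TABLE events ADD COLUMN type TEXT "
    "GENERATED ALWAYS AS (json_extract(body, '$.type')) VIRTUAL"
)

# Indexing a virtual column materialises the extracted values in the
# index b-tree only -- the table rows themselves stay untouched.
con.execute("CREATE INDEX idx_events_type ON events(type)")

index_names = [row[1] for row in con.execute("PRAGMA index_list('events')")]
print(index_names)  # ['idx_events_type']
```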
Suddenly your JSON behaves like normal relational columns with full index support.
Now your queries are blazing fast. So let’s give it a go with some examples:
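A sketch of such a query (again with invented names), with EXPLAIN QUERY PLAN confirming the index is actually used instead of a full scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
con.execute(
    "ALTER TABLE events ADD COLUMN type TEXT "
    "GENERATED ALWAYS AS (json_extract(body, '$.type')) VIRTUAL"
)
con.execute("CREATE INDEX idx_events_type ON events(type)")
con.executemany(
    "INSERT INTO events (body) VALUES (?)",
    [('{"type": "signup"}',), ('{"type": "login"}',)],
)

# Filter on the virtual column like any relational column; SQLite
# answers from the index without re-parsing JSON per row.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE type = 'login'"
).fetchall()
print(plan[0][-1])  # e.g. SEARCH events USING INDEX idx_events_type (type=?)
```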
I believe this is one of the strongest points to this pattern. If at a later date your JSON shape changes (expected), you can just add another column and create another index.
For example, you realise you need to query by user_id:
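A sketch of that later “migration” (hypothetical table and field names): one ADD COLUMN plus one CREATE INDEX, and rows written before the field existed simply extract to NULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
con.executemany(
    "INSERT INTO events (body) VALUES (?)",
    [('{"type": "signup"}',),                 # old shape: no user_id
     ('{"type": "login", "user_id": 42}',)],  # new shape
)

# The entire "migration": no rewrite of existing rows, no backfill.
con.execute(
    "ALTER TABLE events ADD COLUMN user_id INTEGER "
    "GENERATED ALWAYS AS (json_extract(body, '$.user_id')) VIRTUAL"
)
con.execute("CREATE INDEX idx_events_user_id ON events(user_id)")

print(con.execute("SELECT id FROM events WHERE user_id = 42").fetchall())
# [(2,)] -- only the new-shape row matches; the old row is NULL here
```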
This pattern completely changed how I think about working with JSON in SQLite. You get the flexibility of schemaless data, combined with the performance and ergonomics of a relational database, without committing yourself too early or painting yourself into a corner.
There are plenty more of these little SQLite superpowers hiding in plain sight. This is just the first one I wanted to share.
...
Read the original on www.dbpro.app »
Doom and Quake studio id Software are now home to a “wall-to-wall” union according to the Communications Workers of America (CWA). The organisation have announced that a group of 165 id workers have just voted to unionise, adding to the ranks of the 300 ZeniMax quality assurance staff who unionised back in 2023.
According to the CWA’s press release, Microsoft have already recognised this latest union - which is made up of “developers, artists, programmers, and more” - in accordance with the labour neutrality agreement the two parties agreed in 2022.
“The wall-to-wall organizing effort at id Software was much needed; it’s incredibly important that developers across the industry unite to push back on all the unilateral workplace changes that are being handed down from industry executives,” said id Software producer and CWA organising committee member Andrew Willis.
Meanwhile, id lead services programmer and CWA committee member Chris Hays specifically cited remote staff not being dragged into the office as a reason behind the push for representation. “Remote work isn’t a perk,” he said. “It’s a necessity for our health, our families, and our access needs. RTO policies should not be handed down from executives with no consideration for accessibility or our well-being.”
The CWA release also cited “mass industry layoffs, sudden periods of crunch time, and unfair pay” as part of the impetus behind a wider push towards unionisation among devs across the industry this year, adding that the total of unionised workers across Microsoft’s fiefdom is now “nearly 4,000” strong.
CWA president Ron Swaggerty added that the union “look forward to sitting across the table from Microsoft to negotiate a contract that reflects the skill, creativity, and dedication these workers bring to every project.”
If you want to learn more about the CWA’s unionisation efforts as the games industry’s suits and moneyfolk continue to lob developers out of windows with depressing regularity, give this interview Nic did a read.
Meanwhile, members of the “industry-wide union” the CWA announced earlier this year held a protest outside of The Game Awards yesterday, with their aim being “to acknowledge the video games and studios that have been closed and to also condemn the creativity that’s been crushed by corporate greed and studio executives”.
...
Read the original on www.rockpapershotgun.com »
A “relaxation” project, mostly drawn on planes to and from Norway this month, where I had to travel to set up a digital art installation in Kristiansand with friends from the digital art collective Lab212. It has been drawn with one major constraint: it must fit in the inner pocket of my jacket (well, one specific jacket), except for the rods.
This is a 3D-printed dobsonian telescope built around a 76mm/300mm parabolic mirror kit. While there are plenty of mini-scope models on the internet, I wanted something that looked like a dobson that went a bit too hard through the clothes dryer, but without compromise on what matters:
* Nylon screws to collimate both the primary and secondary mirrors
* A bit of paraffin to lubricate the focuser
* A lycra light shroud that also helps with delaying dew forming on the mirrors
The focuser follows Analog Sky’s recipe: the tube that receives the eyepiece is also the movement itself, with a rounded thread that prints extremely smoothly with very little play. No additional hardware needed - the eyepiece is self-held by the flexion of plastic fins.
All the holes for the rods are straight, which forces them to arch, which “locks” the structure in place.
The alt/az movements use “teflon pads” (actually gray HDPE or UHMW for furniture feet) with rubber backing, scalped and glued.
Download the 3D files on Printables • Discussion on Astrosurf
If you build it, the real trick for ease of mounting is to chamfer the carbon rods with a 1mm chamfer at both ends and seal it with CA glue. See the chamfer pic in the gallery.
Sadly, the results aren’t great. Recent AliExpress buys had us used to very good λ/6-or-better mirrors, but this one is badly overcorrected. It looked very smooth, with a rather good edge under the Foucault test, but is overcorrected by 70%. With the eyepiece I selected, putting it at 30x power, this does not show too much, and it retains its “real telescope” status. But this mirror is so small that I will not refigure it: the re-aluminizing cost would outweigh the entire project.
Edit dec. 12th The λ/6 aliexpress mirrors mentioned were spherical. So, great starting points to figure them, but unusable as-is. I did not yet stumble on a great parabola at a low price, and this is to be expected.
Edit as of dec. 11th : of course I did not resist re-figuring it. It now hovers around 0.9 strehl. The star test with the selected eyepiece shows nice symmetric defocused stars and I can now count individual spider web strands and distinguish the dew droplets it carries, on a nearby electrical pole, whereas I did not even see the spider web with the mirror as it was from the factory. I still need to do a proper “showable” Bath report with enough interferograms, my last test was 4 interferograms and carries a ton of noise. So it is great but I now have to get it coated, and working a mirror this small did raise a few challenges in handling it.
All test pictures below are before refiguring
...
Read the original on lucassifoni.info »
Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter, LinkedIn, or contacted via MichaelLarabel.com.
...
Read the original on www.phoronix.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.