10 interesting stories served every morning and every evening.
TL;DR: Over the past decade, I’ve worked to build the perfect family dashboard system for our home, called Timeframe. Combining calendar, weather, and smart home data, it’s become an important part of our daily lives.
When Caitlin and I got married a decade ago, we set an intention to have a healthy relationship with technology in our home. We kept our bedroom free of any screens, charging our devices elsewhere overnight. But we missed our calendar and weather apps.
So I set out to build a solution to our problem. First, I constructed a Magic Mirror using an off-the-shelf medicine cabinet and LCD display with its frame removed. It showed the calendar and weather data we needed:
But the text was hard to read, especially during the day, as we get significant natural light in Colorado. At night, it glowed like any backlit display, sticking out like a sore thumb in our living space.
I then spent about a year experimenting with various jailbroken Kindle devices, eventually landing on a design with calendar and weather data on a pair of screens. The Kindles took a few seconds to refresh and flash the screen to reset the ink pixels, so they only updated every half hour. I designed wood enclosures and laser-cut them at the local library makerspace:
Software-wise, I built a Ruby on Rails app for fetching the necessary data from Google Calendar and Dark Sky. The Kindles woke up on a schedule, loading a URL in the app that rendered a PNG using IMGKit. The prototype proved e-paper was the right solution: it was unobtrusive regardless of lighting:
The Kindles were a hack, requiring constant tinkering to keep them working. It was time for a more reliable solution. I tried an OLED screen to see if the lack of a global backlight would be less distracting, but it wasn’t much better than the Magic Mirror:
So it was back to e-paper. I found a system of displays from Visionect, which came in 6”/10”/13”/32” sizes and could update every ten minutes for 2-3 months on a single charge:
The 32” screen used an outdated lower-contrast panel and its resolution was too low to render text smoothly. The smaller sizes used a contrasty, high-PPI panel. I ended up using a combination of them around the house: a 6” in the mudroom for the weather, a 13” (with its built-in magnetic backing) in the kitchen attached to the side of the fridge, and a 10” in the bedroom.
The Visionect displays required running custom closed-source software, either as a SaaS or locally with Docker. I opted for a local installation on the Raspberry Pi already running the Rails backend. I had my best results pushing images to the Visionect displays every five minutes in a recurring background job. It used IMGKit to generate a PNG and send it to the Visionect API, logic I extracted into visionect-ruby. This setup proved to be incredibly reliable, without a single failure for months at a time.
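The recurring job reduces to: render the dashboard once, then push the resulting image to each display. Here is a Python sketch of that shape (the real code is Ruby, using IMGKit for rendering and visionect-ruby for the API; the renderer and transport below are stand-ins, not real APIs):

```python
# Illustrative sketch of the five-minute push loop. The real app rasterizes
# HTML to PNG with IMGKit and posts it to the Visionect API; here the
# renderer is stubbed and the transport is injected so the flow is clear.

def render_dashboard_png(html: str) -> bytes:
    # Stand-in for IMGKit: pretend the markup bytes are the rendered image.
    return html.encode("utf-8")

def refresh_all(devices, html, send):
    """Render once, then push the same image to every display via `send`."""
    png = render_dashboard_png(html)
    return {device_id: send(device_id, png) for device_id in devices}
```

Injecting the transport keeps the job trivially testable and makes the render-once, push-many structure explicit.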
Visiting friends often asked how they could have a similar system in their home. Three years after the initial prototype, I did my first market test with a potential customer. At their request, I experimented with different formats, including a month view on the 13” screen:
Unfortunately, the customer didn’t see enough value to justify the $1000 price tag (in 2019!) for the 13” device, let alone anything I’d charge for a subscription service. At around the same time, Visionect started charging a $7/mo per-device fee to run their backend software on premises with Docker, after years of it being free to use. I’d have needed to charge $10/month, if not more, for a single screen!
In late 2021, the Marshall Fire destroyed our home along with ~1,000 others. Our homeowner’s insurance gave us two years to rebuild, so we set off to redesign our home from the ground up.
Around the same time, Boox released the 25.3” Mira Pro, the first high-resolution option for large e-paper screens. Best of all, it could update in realtime! Unlike the Visionect devices, it was just a display with an HDMI port and needed to be plugged into power. A quick prototype powered by an old Mac Mini made it immediately obvious that it was a huge step forward in capability. The larger screen allowed for significantly more information to be displayed:
But the most compelling innovation was having the screen update in realtime. I added a clock, the current song playing on our Sonos system (using jishi/node-sonos-http-api) and the next-hour precipitation forecast from Dark Sky:
The working prototype was enough to convince me to build a place for it in the new house. We designed a “phone nook” on our main floor with an art light for the display:
We also ran power to two more locations for 13” Visionect displays, one in our bedroom and one by the door to our garage:
The real-time requirements of the Mira Pro immediately surfaced performance and complexity issues in the backend, prompting an almost complete rewrite.
While the Visionect system worked just fine with multiple-second response times, switching to long-polling every two seconds put a ceiling on how slow response times could be. To start, I moved away from generating images. The Visionect folks added the ability to render a URL directly in the backend, freeing up resources to serve the long-polling requests.
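The constraint is easiest to see from the client's side: each held-open request must be answered or allowed to expire within the two-second window, so anything slower has to happen outside the request path. A toy model of the poll loop (illustrative only, not the Visionect protocol):

```python
def long_poll(fetch, max_cycles=3):
    """Drive a display with repeated long-poll cycles.

    `fetch` is an injected callable modeling one held-open HTTP request;
    it returns ("update", frame) when a new frame is ready, or
    ("timeout", None) when the server let the poll expire. A real display
    would loop forever; max_cycles keeps the sketch finite.
    """
    frames = []
    for _ in range(max_cycles):
        status, frame = fetch()
        if status == "update":
            frames.append(frame)  # draw the new frame
        # on timeout: immediately open the next poll
    return frames
```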
Most significantly, I started migrating towards Home Assistant (HA) as the primary data source. HA already had integrations for Google Calendar, Dark Sky (now Apple Weather), and Sonos, enabling me to remove over half of the code in the Timeframe codebase! I ended up landing a PR to Home Assistant to allow for the calendar behavior I needed, and will probably need to write a couple more before HA can be the sole data source.
With less data-fetching logic, I was able to remove both the database and Redis from the Rails application, a massive reduction in complexity. I now run the background tasks with Rufus Scheduler and save data fetching results with the Rails file store cache backend.
In addition to data retrieval, I’ve also worked to move as much of the application logic into Home Assistant. I now automatically display the status of any sensor that begins with sensor.timeframe, using a simple ICON,Label CSV format.
For example, the other day I wanted to have a reminder to start or schedule our dishwasher after 8pm if it wasn’t set to run. It took me about a minute to write a template sensor using the power level from the outlet:
{% if states('sensor.kitchen_dishwasher_switched_outlet_power')|float < 2 and now().hour > 19 %}
utensils,Run the dishwasher!
{% endif %}
In the month since adding the helper, it reminded me twice when I’d have otherwise forgotten. And I didn’t have to commit or deploy any code!
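Consuming that convention on the display side reduces to filtering entities by prefix and splitting each state on the first comma. A minimal Python sketch (the prefix match and skip-blank behavior are assumptions based on the description above, not the actual Timeframe code):

```python
def parse_status_sensors(states, prefix="sensor.timeframe"):
    """Turn matching Home Assistant sensor states into (icon, label) pairs.

    `states` maps entity_id -> state string in the "ICON,Label" format;
    blank or non-matching entities are skipped.
    """
    items = []
    for entity_id, state in states.items():
        if not entity_id.startswith(prefix) or not state.strip():
            continue
        icon, _, label = state.partition(",")
        items.append((icon.strip(), label.strip()))
    return items
```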
Since moving into our new home, we’ve come to rely on the real-time functionality much more significantly. Effectively, we’ve turned the top-left corner of the displays into a status indicator for the house. For example, it shows what doors are open/unlocked:
Or whether the laundry is done:
It has a powerful function: if the status on the display is blank, the house is in a “healthy” state and does not need any attention. This approach of showing only the information that is relevant in a given moment flies in the face of how most smart homes communicate their status:
The single status indicator removes the need to scan an entire screen. This change in approach is possible because of one key difference: we have separated the control of our devices from the display of their status.
I continue to receive significant interest in the project and remain focused on bringing it to market. A few key issues remain:
While I have made significant progress in handling runtime errors gracefully, I have plenty to learn about creating embedded systems that do not need maintenance.
There are still several data sources I fetch directly outside of Home Assistant. Once HA is the sole source of data, I’ll be able to have Timeframe be a Home Assistant App, making it significantly easier to distribute.
The current hardware setup is not ready for adoption by the average consumer. The 25” Boox display is excellent but costs about $2000! It also doesn’t include the hardware needed to drive the display. There are a couple of potential options to consider, such as Android-powered devices from Boox and Philips or low-cost options from TRMNL.
Building Timeframe continues to be a passion of mine. While my day job has me building software for over a hundred million people, it’s refreshing to work on a project that improves my family’s daily life.
...
Read the original on hawksley.org »
I’m seeking assistance regarding a sudden restriction on my Google AI Ultra account that has persisted for three days. I received no prior warnings or notifications regarding a potential violation.
The only recent change in my workflow was connecting Gemini models via OpenClaw OAuth. If third-party integrations are the issue, I would expect the platform to block the integration rather than restrict a paid account ($249/mo) without communication.
I have already emailed support but haven’t received a response. Additionally, I found that accessing GCC support requires an additional fee, which seems unreasonable given the existing subscription cost. I WOULD LOVE TO GET THIS RESOLVED!!
Thank you for bringing this to our attention. We have shared the issue to our internal teams for a thorough investigation.
To ensure our engineering team can investigate and resolve these issues effectively, we highly recommend filing bug reports directly through the Antigravity in-app feedback tool. You can do this by navigating to the top-right corner of the interface, clicking the Feedback icon, and selecting Report Issue.
Sir, I am logged out of my account and I can’t even get into the app!! This is so frustrating..
[UPDATE] Day 4, and still total silence from support. I’ve received zero acknowledgement through official channels or the feedback center. I am now in the process of moving all my data and subscriptions off Google. It’s staggering that an organization of this scale can be this unresponsive to a widespread issue.
I contacted the Google Cloud Support via “GCP Account Suspension Inquiry”. They told me to contact Google One Support, because the error is tied to the personal subscription, not to a “Google Cloud project billing account”. Google One support told me to contact Google Cloud support.
From emails “gemini-code-assist-user-feedback” or “antigravity-support” still no answer.
And it happened just a few days after I bought the subscription for a year…
Any update? Please tell us how you solved it!
Nope, still restricted, tried to escalate by Google One, But they can’t help with the problem either…
Same issue and same sentiment and I cancelled and removed billing for all Google products. Absolutely shameful treatment of paying customers. I emailed each of the contact emails for Antigravity and gemini-code-assist without reply. Unfortunately I prepaid for a year so it looks like I’ll have to sue a trillion-dollar company just to get the measly fee?
I have tried to contact everyone I could. And you all know how dismal their support is. I am totally disappointed with their customer service. After 3 weeks of waiting, the result is that they cannot restore my account. I guess it is time to move on to Codex or Claude Code. Below is their reply after a “full investigation by the internal team”:
”Thank you for your continued patience as we have thoroughly investigated your account access issue. Please be assured that we conducted a comprehensive investigation, exploring every possible avenue to restore your access.
Our product engineering team has confirmed that your account was suspended from using our Antigravity service. This suspension affects your access to the Gemini CLI and any other service that uses the Cloud Code Private API.
Our investigation specifically confirmed that the use of your credentials within the third-party tool “open claw” for testing purposes constitutes a violation of the Google Terms of Service [1]. This is due to the use of Antigravity servers to power a non-Antigravity product.
I must be transparent and inform you that, in accordance with Google’s policy, this situation falls under a zero tolerance policy, and we are unable to reverse the suspension. I am truly sorry to share this difficult news with you.”
Ok so basically, there’s no way we can restore our accounts to use Antigravity anymore, yeah? This is unexpected, but until we can figure out how to resolve this issue, I’ll just subscribe using a different account
I’m in the same situation…
Hi @Abhijit_Pramanik , could you please provide some help? This silence is unbearable.
Gemini Disabled on Antigravity IDE, How to Restore Access?
I’m in contact with Google One but their actions are no help at all, for almost a week they haven’t done anything, they only asked for screenshots/recordings of the login attempt.
Why is there silence from Google? What is the user supposed to do? Create a new account and buy a new PRO/ULTRA, or what? Any information at all?!
I got banned, and the only difference from the vanilla IDE experience was the antigravity-cockpit extension. No reply to my appeal email in the last 12 hours.
I subscribe to AI Pro and just integrated Gemini into OpenCode yesterday. After just a day of use, my account was suspended without any warning. The API simply returns a 403 error to my OpenCode and Gemini CLI like this:
Failed to login. Message: This service has been disabled in this account for violation of Terms of Service. If you believe this is an error, contact gemini-code-assist-user-feedback@google.com.
I emailed the contact this morning but haven’t received any response yet.
If this is indeed the case, I find it utterly absurd. It seems Google’s response is woefully inadequate; I should explore Claude or other alternatives.
Quick update for everyone stuck in this 403 loop: I just spent the last 8 days fighting through Tier 1 support. Google One support finally admitted on record it’s a ‘known WAF bug’, but then literally routed me to Android App Developer support because they have no backend access to fix it.
The entire support flowchart is completely broken, and they are still billing us $250/mo for bricked accounts. I just documented the entire Kafkaesque support loop over on the google_antigravity subreddit. If you are stuck in this same Catch-22, go search for that post over there and share your Trajectory IDs in the comments so we can get some actual engineering eyes on this mass ban wave.
Hi @K8L, just wanted to share some context regarding this situation as I see you are waiting for a response.
Yesterday, Abhijit actually posted a brief statement acknowledging these 403 ToS issues, noting that the internal team was ‘prioritizing a resolution.’ However, the message was deleted just a few minutes later.
Hoping for some transparency, I left a single, polite comment asking for clarification on why the update was removed. Surprisingly, my forum account was banned shortly after posting that question.
Currently, there seems to be no official communication regarding these 403 errors, although we can see active replies being made to other unrelated threads on the forum.
This situation is quite concerning for us as developers. The automated system is still triggering these mass bans daily during fixed time windows, without any warning and seemingly without a review of the current process.
Fingers crossed this message doesn’t get taken down and my account survives long enough for you guys to read it, haha.
Facing this issue too, I wrote an email to gemini-code-assist-user-feedback@google.com “eight days ago”, and still got no response today. So disappointed
My account (pro) was also bricked for calling Gemini model from pi harness two times. No response from support and it’s been four days.
...
Read the original on discuss.ai.google.dev »
All the fun of short-form video, none of the corporate control.
Loops is federated, open-source, and designed to give power back to creators and communities across the social web. Build your community on a platform that can’t lock you in.
...
Read the original on joinloops.org »
We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now. We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story. The ecosystem is far more mature for systems programming, and many of our contributors already know the language. Going forward, we are rewriting parts of Ladybird in Rust.
When we originally evaluated Rust back in 2024, we rejected it because it’s not great at C++ style OOP. The web platform object model inherits a lot of 1990s OOP flavor, with garbage collection, deep inheritance hierarchies, and so on. Rust’s ownership model is not a natural fit for that.
But after another year of treading water, it’s time to make the pragmatic choice. Rust has the ecosystem and the safety guarantees we need. Both Firefox and Chromium have already begun introducing Rust into their codebases, and we think it’s the right choice for Ladybird too.
Our first target was LibJS, Ladybird’s JavaScript engine. The lexer, parser, AST, and bytecode generator are relatively self-contained and have extensive test coverage through test262, which made them a natural starting point.
I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output. Zero regressions across the board:
No performance regressions on any of the JS benchmarks we track either.
Beyond the test suites, I’ve done extensive testing by browsing the web in a lockstep mode where both the C++ and Rust pipelines run simultaneously, verifying that output is identical for every piece of JavaScript that flows through them.
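Conceptually, the lockstep check is a byte-for-byte diff of the two pipelines’ outputs, flagging the first divergence. A simplified Python illustration of the idea (the actual harness lives in the C++/Rust codebase, and these names are ours, not Ladybird’s):

```python
def first_divergence(cpp_out: bytes, rust_out: bytes):
    """Return the index of the first differing byte, or None if identical.

    A length mismatch counts as a divergence at the shorter length.
    """
    n = min(len(cpp_out), len(rust_out))
    for i in range(n):
        if cpp_out[i] != rust_out[i]:
            return i
    if len(cpp_out) != len(rust_out):
        return n
    return None
```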
If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time. New Rust code will coexist with existing C++ through well-defined interop boundaries.
We want to be deliberate about which parts get ported and in what order, so the porting effort is managed by the core team. Please coordinate with us before starting any porting work so nobody wastes their time on something we can’t merge.
I know this will be a controversial move, but I believe it’s the right decision for Ladybird’s future. :^)
...
Read the original on ladybird.org »
On Christmas Eve, 9 “peer-reviewed” economics papers were quietly retracted by Elsevier, the world’s largest academic publisher.
This includes 7 papers in the International Review of Financial Analysis (a good journal—it has an 18% acceptance rate):
Plus two more retractions in Finance Research Letters (29% acceptance rate):
Two days later, three more papers were retracted at the International Review of Economics & Finance (30% acceptance rate):
All 12 papers had one thing in common: Brian M Lucey, Professor of International Finance and Commodities, Trinity College Dublin — the #1 ranked economics and business school in Ireland — as a co-author.
Lucey published 56 papers in 2025, one paper every 6.5 days. Lmao.
Lucey has published 44 papers in Finance Research Letters alone, an Elsevier journal he edited.
I emailed Lucey for comment, but he did not respond.
Brian Lucey… where have I heard that name before?
Oh yeah, he bullied me on Twitter in 2023.
The stated reason for the retractions was that: “review of this submission was overseen, and the final decision was made, by the Editor Brian Lucey, despite his role as a co-author of the manuscript. This compromised the editorial process and breached the journal’s policies.”
In plain terms, Lucey was serving as editor while approving his own papers. The result was a complete bypass of peer review—an abuse of editorial authority that functioned as a citation-cartel scheme.
Apparently this was an open secret in the profession for many years, with EJMR comments going back 5+ years explicitly calling him out as a cheater:
Along with the 12 retractions, Lucey was removed as an editor at 5 journals: International Review of Financial Analysis, the International Review of Economics & Finance, Finance Research Letters, Financial Management, & Energy Finance.
Lucey remains as editor-in-chief at Wiley’s Journal of Economic Surveys.
I emailed Wiley, and they provided me with this statement:
We are aware of these concerns and have investigated Prof. Lucey’s activity on Journal of Economic Surveys. Our research integrity team did not find any concerns regarding conflict of interest or mishandling of papers, nor has Prof. Lucey published any papers in the journal since he joined the editorial team as a co-editor in 2024. We expect full commitment and adherence to our editorial practices and standards, and we will be monitoring the situation to ensure that there is no improper handling of papers at the journal.
In response to Wiley’s statement, one EJMR user wrote: “I am baffled how they could possibly still have confidence in him, given his serious and systematic ethical lapses in editorial positions. Sounds somewhat naive to expect ‘full adherence to our editorial practices and standards’!”
Until being purged from the leadership of these 5 journals, Lucey played a central role in coordinating Elsevier’s Finance Journals Ecosystem, which allows “participating journals to suggest transferring a rejected manuscript to another journal in the system without the need for resubmission and the associated cost.”
That system, and the editors involved, “came under fire last year when a preprint suggested it might facilitate citation stacking as a way to boost journal impact factors. The analysis in the preprint also suggested a citation ring involving Elsevier editors could be at work.”
I emailed the anonymous “Theophilos Nomos” who wrote this paper, but they did not respond to my email.
That pre-print names Samuel Vigne, a finance professor at Luiss Business School, former PhD student of Lucey, and prolific Lucey co-author (they have published at least 33 papers together) as a core node of Lucey’s citation cartel.
Multiple publications by Vigne and Lucey are flagged on PubPeer.
This example neatly illustrates how their co-authorship trading scheme operated:
It describes a draft uploaded to SSRN with three authors:
After submitting that draft to the Elsevier finance ecosystem, that draft was scrubbed from SSRN, and in the final published version, an additional author (Samuel Vigne) was added as a new author, with an “equal contribution” statement. The two versions are otherwise identical, containing the same figures, sections, and text.
Co-authorship trading is only one part of the operation. The other is citation stacking. In this model, a small, tightly linked group funnels an enormous volume of papers into the same handful of journals, then systematically stuffs those papers with citations to one another. The result is a rapid, artificial explosion in citation counts that makes them look like influential geniuses.
Take John Gooddell, a professor at the University of Akron and a Lucey co-author. Gooddell has published 68 papers in Finance Research Letters alone, a journal edited by Lucey. If each paper contains even a modest 50 references, that amounts to roughly 3,400 citations recycled through a single outlet. In 2024 alone, Gooddell published 61 papers. He’s not doing research. He’s farming citations.
Following Lucey’s retractions, Samuel Vigne was removed as the editor-in-chief of International Review of Financial Analysis and Finance Research Letters.
In addition to that anonymous pre-print, there is also a 2025 paper written by actual professors with sophisticated econometric analysis & graph theory which describes the citation cartel in much more detail. The conclusion of that paper is: ”Elsevier ecosystem journals benefited from the creation of the ecosystem … Elsevier journals in the ecosystem have overlapping editors and Elsevier appoints these editors in coordination with a single academic [Brian Lucey] that manages the fleet of ecosystem journals.”
Brian Lucey posted a reply to this paper, which was extremely weak and does not contain any tables or figures. It mostly ignores the data and structural model of the citation ring and instead leans on Lucey’s “lived experience” as an editor (“we have experience shepherding…”), while also nitpicking semantics and phrasing, such as Lucey complaining that they called him a “professor of finance” instead of his full honorific, “professor of international finance and commodities.”
The Elsevier ecosystem web page went live on 4 November 2020, according to Lucey’s rebuttal. Below is a visualization of the network before and after this transition date, which shows a clear distortion of the citation network. During 2021-2025, Ecosystem citations per article are 103% higher.
2020 is also the year where Brian Lucey’s citation profile exhibits an exponential “J-curve”, a hallmark of citation rings. Did he suddenly become a well-respected genius in 2020? Or did he figure out how to cheat the system?
In a comment to Retraction Watch, Lucey further argued that citation cartels are not a crime, because everyone does it.
”Because here’s the thing: Elsevier are aware of [editors publishing in their own journals] as a pretty common practice in finance and economics. We’ve given them evidence of hundreds of instances of this. And nothing has happened, which does raise the question, you know, maybe they’re going to go back and go look at all these. Presumably, they will treat everything the same.” Lucey shared his list of such instances. It includes 240 articles, 133 of which are in Science of the Total Environment, which was delisted from Clarivate’s Web of Science in November.
Dr. Thorsten Beck, in a blog post, confirmed that no, not everyone does it, and yes, it is a crime.
This incident raises an important question: is this common practice across academic journals? And are there rules for editors publishing in ‘their’ journals? As I was editor across three journals for a total of 11 years, I can certainly speak to this (and clearly say NO). I don’t have formal confirmation but I have been told by several independent sources that ultimately even Elsevier realised that this editor was seriously damaging the reputation of the journal, appointing a second editor and then easing out the ‘doubtful’ editor from his responsibilities.
The fallout from the Lucey–Vigne era extends far beyond a handful of retracted PDFs. What it exposes is a structural weakness in how academic “excellence” is manufactured, measured, and monetized. By presiding over a coordinated cluster of journals, a small group of editors effectively gained the ability to print their own academic currency.
However, blaming Lucey and Vigne alone ignores the hand that fed them. Elsevier did not just “allow” this to happen; they engineered the environment for it to flourish, because of incentives: Elsevier’s internal metrics (Impact Factors) directly benefitted from this behavior. It was a symbiotic corruption: the editors received a fast-track to academic stardom, and Elsevier received a high-margin, high-volume production line of citable content.
This is the “paper mill” reimagined for the elite: not a basement operation in a third-world nation, but a polished, corporate-mandated factory within the halls of the world’s most powerful publisher. This is the natural result of a corporate mandate to maximize profits by bundling journals into monopoly-priced packages, forcing universities to pay for the very “prestige” that Elsevier’s own staff helped to dilute. As one EJMR commenter noted, “The tragedy isn’t that they cheated; it’s that the system was designed to let them thrive for a decade before anyone bothered to look at the data.”
The question now is whether Trinity College Dublin will fire Lucey.
They did not respond to my inquiry.
An editor of a psychology journal was offered $1,500 per accepted paper.
Richard Tol, a professor of economics at the University of Sussex, wrote that he was offered $5,000 per paper.
Muhammad Ali Nasir, a professor of Macroeconomics at Leeds University, wrote about how common selling papers is in European finance journals: “I had been made such offers from anonymous emails but I choose not to engage and in one case forwarded the email to EiC. I will be surprised if any editor is not approached by these people.”
This raises a multi-million-euro question: given their documented corruption, are the various “educational consultancies” and special-purpose vehicles operated by Brian Lucey and Samuel Vigne used to circulate ecosystem funds, conference fees, or “consultancy” payouts from authors seeking a shortcut to publication?
Here is a hypothetical outline of how such a cash-flow scheme could function: “Hello [unknown, distant institutions], we offer consulting services: €€€ for excellent advice on how to publish in top-tier finance journals. Our advice yields results.”
I’m not going to provide details on how to corruptly have a paper published. I’m just going to speculate on what could be going on in a situation like this. It could be based on “consultancy fees” for advice on publishing that you or your institution pay to one of those companies. They give some advice, including what papers to cite, etc, and if you follow their advice you are likely to be published in one of their journals. This could be attractive for researchers and institutions in, e.g., China and the Middle East.
Another anonymous economics professor I spoke to told me:
Universities in East and West Asia pay cash bonuses for publications. Some authors hire a broker (many advertise openly on Facebook), other authors contact the editor directly. The cash bonus is shared between the author, broker, and editor. Besides selling papers, they also sell special issues, which allow the guest editors to do what they want. And they sell positions on the editorial board, which are important for promotion to the next academic rank. Some payments are in cash, others in kind. Finally, they organize conferences. Registration fees more than cover the costs of putting on a conference. The conference name suggests it is organized by a society, but it really is Lucey who pockets the profits.
Brian Lucey and Samuel Vigne operate four private companies in Ireland and the UK classified under “other education,” likely functioning as consultancies or special-purpose vehicles for academic or policy work.
The existence of these consultancies warrants investigation into potential conflicts of interest and financial misconduct.
...
Read the original on www.chrisbrunet.com »
...
Read the original on www.lesswrong.com »
Just a brief announcement that I have been working with Quanta Books to publish a short book in popular mathematics entitled “Six Math Essentials“, which will cover six of the fundamental concepts in mathematics — numbers, algebra, geometry, probability, analysis, and dynamics — and how they connect with our real-world intuition, the history of math and science, and to modern practice of mathematics, both in theory and in applications. The scheduled publication date is Oct 27, but it is currently available for preorder.
...
Read the original on terrytao.wordpress.com »
Last week I had to diagnose a bug in an open source library I maintain. The issue was gnarly enough that I couldn’t find it right away, but then I thought: if I set a breakpoint here and fire up the debugger, I will likely find the root cause very soon… and then proceed to mercilessly destroy it!
So I rolled up my sleeves, set the breakpoint, fired up the debugger, and… saw the program run to completion without interruptions whatsoever. My breakpoint had been ignored, even though I knew for certain that the line of code in question must have been executed (I double-checked just to be sure).
Since I was in “problem solving mode”, I ignored the debugger issue and started thinking of other approaches to diagnosing it. Prey to my tunnel vision, I modified the code to log potentially interesting data, but it didn’t yield the insights I was hoping for. How frustrating!
My fingertips itched to write even more troubleshooting code when it suddenly dawned on me: just fix the darn debugger already! Sure, it might feel slower, but it will give you the ability to see what you need to see, and then actually solve the problem.
So I fixed the debugger (it turned out to be a one-line configuration change), observed the program’s behavior in more detail, and used that knowledge to solve the issue.
What a paradox, I realized afterwards. The very desire to fix the bug prevented me from seeing I had to fix the tool first, and made me less effective in my bug hunt. This blog post is a reminder to myself, and to every bug-hungry programmer out there: fix your tools! They will do wonders for you.
...
Read the original on ochagavia.nl »
The link you clicked leads to a Base64 encoded string.
To decode it, you can use:
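For example, with Python’s standard library:

```python
import base64

def decode_link(encoded: str) -> str:
    """Decode a Base64-encoded string back to UTF-8 text."""
    return base64.b64decode(encoded).decode("utf-8")

# decode_link("aHR0cHM6Ly9leGFtcGxlLmNvbQ==") -> "https://example.com"
```

On many systems the same result is available on the command line via `base64 -d`.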
February Updates 🌸: The largest collection of free stuff on the internet! Guides for blocking ads and trackers, exploring AI and machine learning, and streaming, downloading, or torrenting movies, shows, music, games, books, software, mobile apps, non-English content, and more.
...
Read the original on fmhy.net »
Catch bugs before they make it to production
...
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.