10 interesting stories served every morning and every evening.
In 2024, Apple signed a deal with Taboola to serve ads in its apps, notably Apple News. John Gruber, writing in Daring Fireball, said at the time:
If you told me that the ads in Apple News have been sold by Taboola for the last few years, I’d have said, “Oh, that makes sense.” Because the ads in Apple News — at least the ones I see — already look like chumbox Taboola ads. Even worse, they’re incredibly repetitious.
I use Apple News to keep up on topics that I don’t find in sources I pay for (The Guardian and The New York Times). But there’s no way I’m going to pay the exorbitant price Apple wants for Apple News+ — £13 — because, while you get more publications, you still get ads.
And those ads have gotten worse recently. Many if not most of them look like and probably are scams. Here are a few examples from Apple News today.
Here are three ads that are scammy; the first two were clearly generated by AI, and the third may have been.
Why do I call them scams? When I looked up registration information for their domains, I found that they were registered very recently.
This recent registration doesn’t necessarily mean they are scams, but they don’t inspire much confidence.
Here’s one example: an ad from Tidenox, whose website says “I am retiring,” alongside a photo of an elderly woman who says, “For 26 years, Tidenox has been part of your journey in creating warmth and comfort at home.” The image of the retiring owner was probably made by AI. (Update: someone on Hacker News pointed out the partly masked Google Gemini logo on the bottom right. I hadn’t spotted that, in part because I don’t use any AI image generation tools.)
These fake “going out of business” ads have been around for a few years, and even the US Better Business Bureau warns about them: they take people’s money, then shut down. Does Apple care? Does Taboola care? Does Apple care that Taboola serves ads like this? My guess: no, no, and no.
Note the registration date for the tidenox.com domain. It’s nowhere near 26 years old, and it’s registered in China:
Shame on Apple for creating a honeypot for scam ads in what they consider to be a premium news service. This company cannot be trusted with ads in its products any more.
...
Read the original on kirkville.com »
The Waymo World Model: A New Frontier For Autonomous Driving Simulation

The Waymo Driver has traveled nearly 200 million fully autonomous miles, becoming a vital part of the urban fabric in major U.S. cities and improving road safety. What riders and local communities don’t see is our Driver navigating billions of miles in virtual worlds, mastering complex scenarios long before it encounters them on public roads. Today, we are excited to introduce the Waymo World Model, a frontier generative model that sets a new bar for large-scale, hyper-realistic autonomous driving simulation.

Simulation of the Waymo Driver evading a vehicle going in the wrong direction. The simulation initially follows a real event, and seamlessly transitions to using camera and lidar images automatically generated by an efficient real-time Waymo World Model.
Simulation is a critical component of Waymo’s AI ecosystem and one of the three key pillars of our approach to demonstrably safe AI. The Waymo World Model, which we detail below, is the component that is responsible for generating hyper-realistic simulated environments.

The Waymo World Model is built upon Genie 3—Google DeepMind’s most advanced general-purpose world model that generates photorealistic and interactive 3D environments—and is adapted for the rigors of the driving domain. By leveraging Genie’s immense world knowledge, it can simulate exceedingly rare events—from a tornado to a casual encounter with an elephant—that are almost impossible to capture at scale in reality. The model’s architecture offers high controllability, allowing our engineers to modify simulations with simple language prompts, driving inputs, and scene layouts. Notably, the Waymo World Model generates high-fidelity, multi-sensor outputs that include both camera and lidar data.

This combination of broad world knowledge, fine-grained controllability, and multi-modal realism enhances Waymo’s ability to safely scale our service across more places and new driving environments. In the following sections we showcase the Waymo World Model in action, featuring simulations of the Waymo Driver navigating diverse rare edge-case scenarios.

Most simulation models in the autonomous driving industry are trained from scratch based on only the on-road data they collect. That approach means the system only learns from limited experience. Genie 3’s strong world knowledge, gained from its pre-training on an extremely large and diverse set of videos, allows us to explore situations that were never directly observed by our fleet.

Through our specialized post-training, we are transferring that vast world knowledge from 2D video into 3D lidar outputs unique to Waymo’s hardware suite. While cameras excel at depicting visual details, lidar sensors provide valuable complementary signals like precise depth. The Waymo World Model can generate virtually any scene—from regular, day-to-day driving to rare, long-tail scenarios—across multiple sensor modalities.

Simulation: Driving on the Golden Gate Bridge, covered in light snow. Waymo’s shadow is visible in the front camera footage.
Simulation: Driving on a street with lots of palm trees in a tropical city, strangely covered in snow.

Simulation: The leading vehicle driving into the tree branches.

Simulation: Driving behind a vehicle with precariously positioned furniture on top.

Simulation: A malfunctioning truck facing the wrong way, blocking the road.

In the interactive viewers below, you can immersively view the realistic 4D point clouds generated by the Waymo World Model.

Interactive 3D visualization of an encounter with an elephant.
The Waymo World Model offers strong simulation controllability through three main mechanisms: driving action control, scene layout control, and language control.

Driving action control allows us to have a responsive simulator that adheres to specific driving inputs. This enables us to simulate “what if” counterfactual events, such as whether the Waymo Driver could have safely driven more confidently instead of yielding in a particular situation.

Counterfactual driving. We demonstrate simulations under either the original route from a past recorded drive or a completely new route. While purely reconstructive simulation methods (e.g., 3D Gaussian Splats, or 3DGS) suffer from visual breakdowns due to missing observations when the simulated route is too different from the original driving, the fully learned Waymo World Model maintains good realism and consistency thanks to its strong generative capabilities.

Scene layout control allows for customization of road layouts, traffic signal states, and the behavior of other road users. This way, we can create custom scenarios via selective placement of other road users, or by applying custom mutations to road layouts.

Language control is our most flexible tool, allowing us to adjust time of day, weather conditions, or even generate an entirely synthetic scene (such as the long-tail scenarios shown previously).

During a scenic drive, it is common to record videos of the journey on mobile devices or dashcams, perhaps capturing piled-up snow banks or a highway at sunset. The Waymo World Model can convert those kinds of videos, or any taken with a regular camera, into a multimodal simulation—showing how the Waymo Driver would see that exact scene. This process enables the highest degree of realism and factuality, since simulations are derived from actual footage.

Some scenes we want to simulate may take longer to play out, for example, negotiating passage in a narrow lane. That’s harder to do because the longer the simulation, the tougher it is to compute and maintain stable quality.
However, through a more efficient variant of the Waymo World Model, we can simulate longer scenes with a dramatic reduction in compute while maintaining high realism and fidelity, enabling large-scale simulations.

🚀 Long rollout (4x speed playback) on an efficient variant of the Waymo World Model: Navigating around an in-lane stopper and fast traffic on the freeway.
Driving up a steep street and safely navigating around motorcyclists.
By simulating the “impossible”, we proactively prepare the Waymo Driver for some of the rarest and most complex scenarios. This creates a more rigorous safety benchmark, ensuring the Waymo Driver can navigate long-tail challenges long before it encounters them in the real world.
The Waymo World Model is enabled by the key research, engineering and evaluation contributions from James Gunn, Kanaad Parvate, Lu Liu, Lucas Deecke, Luca Bergamini, Zehao Zhu, Raajay Viswanathan, Jiahao Wang, Sakshum Kulshrestha, Titas Anciukevičius, Luna Yue Huang, Yury Bychenkov, Yijing Bai, Yichen Shen, Stefanos Nikolaidis, Tiancheng Ge, Shih-Yang Su and Vincent Casser.

We thank Chulong Chen, Mingxing Tan, Tom Walters, Harish Chandran, David Wong, Jieying Chen, Smitha Shyam, Vincent Vanhoucke and Drago Anguelov for their support in defining the vision for this project, and for their strong leadership and guidance throughout.

We would like to additionally thank Jon Pedersen, Michael Dreibelbis, Larry Lansing, Sasho Gabrovski, Alan Kimball, Dave Richardson, Evan Birenbaum, Harrison McKenzie Chapter and Pratyush Chakraborty, Khoa Vo, Todd Hester, Yuliang Zou, Artur Filipowicz, Sophie Wang and Linn Bieske for their invaluable partnership in facilitating and enabling this project.

We thank our partners from Google DeepMind: Jack Parker-Holder, Shlomi Fruchter, Philip Ball, Ruiqi Gao, Songyou Peng, Ben Poole, Fei Xia, Allan Zhou, Sean Kirmani, Christos Kaplanis, Matt McGill, Tim Salimans, Ruben Villegas, Xinchen Yan, Emma Wang, Woohyun Han, Shan Han, Rundi Wu, Shuang Li, Philipp Henzler, Yulia Rubanova, and Thomas Kipf for helpful discussions and for sharing invaluable insights for this project.
...
Read the original on waymo.com »
OpenCiv3 (formerly known by the codename “C7”) is an open-source, cross-platform, mod-oriented, modernized reimagining of Civilization III by the fan community, built with the Godot Engine and C#, with capabilities inspired by the best of the 4X genre and lessons learned from modding Civ3. Our vision is to make Civ3 as it could have been, rebuilt for today’s modders and players: removing arbitrary limits, fixing broken features, expanding mod capabilities, and supporting modern graphics and platforms. A game that can go beyond C3C but retain all of its gameplay and content.
OpenCiv3 is under active development and currently in an early pre-alpha state. It is a rudimentary but playable game, still lacking many mechanics and late-game content, and errors are likely. Keep up with our development for the latest updates and opportunities to contribute!
New Players Start Here: An Introduction to OpenCiv3 at CivFanatics
NOTE: OpenCiv3 is not affiliated with civfanatics.com, Firaxis Games, BreakAway Games, Hasbro Interactive, Infogrames Interactive, Atari Interactive, or Take-Two Interactive Software. All trademarks are property of their respective owners.
The OpenCiv3 team is pleased to announce the first preview release of the v0.3 “Dutch” milestone. This is a major enhancement over the “Carthage” release, and our debut of a standalone mode with placeholder graphics that does not require Civ3 media files. A local installation of Civ3 is still recommended for a more polished experience. See the release notes for a full list of new features in each version.
OpenCiv3 Dutch Preview 1 with the same game in Standalone mode (top) and with imported Civ3 graphics (bottom)
Download the appropriate zip file for your OS from the Dutch Preview 1 release
All official releases of OpenCiv3 along with more detailed release notes can be found on the GitHub releases page.
64-bit Windows, Linux, or Mac OS. Other platforms may be supported in future releases.
Minimum hardware requirements have not yet been identified. Please let us know if OpenCiv3 does not perform well on your system.
Recommended: A local copy of Civilization III files (the game itself does NOT have to run) from Conquests or the Complete edition. Standalone mode is available with placeholder graphics for those who do not have a copy.
Civilization III Complete is available for a pittance from Steam or GOG
This is a Windows 64-bit executable. OpenCiv3 will look for a local installation of Civilization III in the Windows registry automatically, or you may use an environment variable to point to the files.
If the download is blocked, you may need to unblock it: right-click the zip file, select Properties, and check the “Unblock” checkbox near the bottom buttons, in the “Security” section
If your Civilization III installation is not detected, you can set the environment variable CIV3_HOME pointing to it and restart OpenCiv3
This is an x86-64 Linux executable. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” (Sans “Complete” if your install was from pre-Complete CDs) folder and its contents to your Linux system, or install the game via Steam or GOG.
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.x86_64
To make this variable permanent, add it to your .profile or equivalent.
This is a universal 64-bit executable, so it should run on both Intel and M1 Macs. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” (Sans “Complete” if your install was from pre-Complete CDs) folder and its contents to your Mac system, or install the game via Steam or GOG.
Download the zip; your browser may complain bitterly, and you may have to tell it to keep the download instead of trashing it
Double click the zip file, and a folder with OpenCiv3.app and a json file will appear
If you try to open OpenCiv3.app it will tell you it’s damaged and try to trash it; it is not damaged
To unblock the downloaded app, from a terminal run xattr -cr /path/to/OpenCiv3.app; you can avoid typing the path out by typing xattr -cr and then dragging the OpenCiv3.app icon onto the terminal window
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.app with open /path/to/OpenCiv3.app, or again just type open and drag the OpenCiv3 icon onto the terminal window and press enter
OpenCiv3 uses many primitive placeholder assets; loading files from a local Civilization III install is recommended (see platform specific setup instructions above)
Support for playing Civ3 BIQ or SAV files is incomplete; some files will not load correctly and crashes may occur
For Mac:
Mac will try hard not to let you run this; it will tell you the app is damaged and can’t be opened and helpfully offer to trash it for you. From a terminal you can xattr -cr /path/to/OpenCiv3.app to enable running it.
The game will crash on Mac if you hit a button to start a new game (New Game, Quick Start, Tutorial, or Load Scenario) because it can’t find the ‘new game’ save file we’re using as a stand-in for map generation. But you can use Load Game and load c7-static-map-save.json, or open a Civ3 SAV file, to open that map
Other specific bugs will be tracked on the GitHub issues page.
© OpenCiv3 contributors. OpenCiv3 is free and open source software released under the MIT License.
...
Read the original on openciv3.org »
Today, Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support. Heroku remains an actively supported, production-ready platform, with an emphasis on maintaining quality and operational excellence rather than introducing new features. We know changes like this can raise questions, and we want to be clear about what this means for customers.
There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku dashboard—both existing and new—can continue to use Heroku with no changes to pricing, billing, service, or day-to-day usage. Core platform functionality, including applications, pipelines, teams, and add-ons, is unaffected, and customers can continue to rely on Heroku for their production, business-critical workloads.
Enterprise Account contracts will no longer be offered to new customers. Existing Enterprise subscriptions and support contracts will continue to be fully honored and may renew as usual.
We’re focusing our product and engineering investments on areas where we can deliver the greatest long-term customer value, including helping organizations build and deploy enterprise-grade AI in a secure and trusted way.
...
Read the original on www.heroku.com »
This project is currently actively evolving and improving. While we are working toward a stable release, some APIs and interfaces may change as the design continues to mature. You are welcome to explore and experiment, but if you need long-term stability, it may be best to wait for a stable release, or be prepared to adapt to updates along the way.
LiteBox is a sandboxing library OS that drastically cuts down the interface to the host, thereby reducing attack surface. It focuses on easy interop of various “North” shims and “South” platforms. LiteBox is designed for usage in both kernel and non-kernel scenarios.
LiteBox exposes a Rust-y, nix/rustix-inspired “North” interface when it is provided a Platform interface at its “South”. These interfaces support a wide variety of use cases, making it easy to connect any North shim to any South platform.
See the following files for details:
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow
Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
...
Read the original on github.com »
...
Read the original on vecti.com »
An interactive visualization to understand how neural networks work
I’ve always been curious about how AI works.
But with the constant news and updates, I often feel overwhelmed trying to keep up with it all.
So I decided to go back to the basics and start learning from the beginning, with neural networks.
When I’m learning, I find it easier to understand how things work when I can visualize them in my mind.
So I made this visualization, and I’m sharing it now.
I’m just hoping it can also be useful for those of you who are curious about AI and want to learn from the basics.
But a quick disclaimer: I’m not an expert, and I might get things wrong here and there.
If you spot anything off, just let me know. I’d love to learn from you too!
So, what exactly is a neural network?
A neural network is inspired by the structure and functions of biological neural networks.
It works by taking some data as input and processing it through a network of neurons.
Inside each neuron, there’s a rule that decides whether it should be activated.
When that happens, it means the neuron found a pattern in the data that it has learned to recognize.
This process repeats as the data moves through the layers of the network.
The pattern of activation in the final layer represents the output of the task.
Let’s start with a simple use case for a neural network: recognizing a handwritten number.
In this case, the input is an image of a number, and we want the neural network to tell us what number it is.
The output is determined by which neurons in the last layer get activated.
Each one corresponds to a number, and the one with the highest activation tells us the network’s prediction.
To do this, first we need to turn the image into data that the neural network can understand.
In this example, the data will be the brightness value of each pixel in the image.
The neuron will receive a value depending on how bright or dark that part of the image is.
The darker an area (which means something is written there), the higher the value that neuron receives.
Once this process is finished, the input neurons will now have values that resemble the input image.
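As a rough sketch of that step (not the visualization's actual code), here is one way a tiny grayscale image could be turned into input values, assuming brightness runs from 0 (black ink) to 255 (blank paper):

```python
# A minimal sketch: flatten a grayscale image into one input value per pixel,
# where darker pixels (ink) give values near 1 and blank paper gives values near 0.
def image_to_inputs(image):
    return [1.0 - pixel / 255.0 for row in image for pixel in row]

# A tiny 2x2 "image": one dark pixel in the top-left corner.
tiny_image = [[0, 255],
              [255, 255]]
print(image_to_inputs(tiny_image))  # [1.0, 0.0, 0.0, 0.0]
```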
These input values are then passed on to the next layer of neurons to process.
But here’s the key part: before being passed, each value is multiplied by a certain weight.
These weights vary from connection to connection.
A weight might be positive, negative, less than 1, or more than 1.
The receiving neuron will then sum up all the weighted values it gets.
Then comes the rule we mentioned earlier — usually called an activation function.
There are actually different types of activation functions, but let’s use a simple rule for now:
If the total value is greater than a threshold, the neuron activates. Otherwise, it stays inactive.
If the neuron gets activated, it means it recognized something in the image. Maybe a line, a curve, or a part of a number.
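Putting the weighted sum and the threshold rule together, a single neuron's computation can be sketched like this (the weights and threshold are made up for illustration):

```python
# One neuron with the simple step rule described above: sum the weighted inputs,
# then activate only if the total exceeds the threshold.
def neuron_activates(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return total > threshold

inputs = [1.0, 0.0, 0.0, 0.0]       # the tiny image from the earlier sketch
weights = [0.9, -0.2, -0.2, 0.1]    # this neuron "looks for" a dark top-left pixel
print(neuron_activates(inputs, weights, threshold=0.5))  # True
```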
Now imagine we have to do these same operations for every neuron in the next layer.
Each neuron has its own weights and threshold, so it will react differently to the same input image.
To put it differently, each neuron is looking for a different pattern in the image.
This process repeats layer by layer until we reach the final layer.
At each layer, the neurons process the patterns detected by the previous layer, building on them to recognize more complex patterns.
Until finally, in the last layer, the network has enough information to deduce what number is in the image.
So that’s basically how a neural network works in a nutshell.
It’s a series of simple math operations that process input data to produce an output.
With the right combination of weights and thresholds, the network can learn to map inputs to the right outputs.
In this case, it’s used to map an image of a handwritten number to the correct number.
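To make the whole pipeline concrete, here is a toy forward pass through a small network. The weights are random, so the "prediction" is meaningless; it only shows the mechanics, not a trained digit recognizer:

```python
import random

# Each neuron's value is a weighted sum of the previous layer's values; hidden
# neurons apply the threshold rule, and the final layer keeps its raw activations
# so we can pick the highest one as the prediction.
def weighted_sums(values, weights):
    return [sum(w * v for w, v in zip(neuron_weights, values)) for neuron_weights in weights]

def step(values, thresholds):
    return [1.0 if v > t else 0.0 for v, t in zip(values, thresholds)]

random.seed(0)
inputs = [random.random() for _ in range(784)]  # stand-in for a 28x28 image

# Two layers: 784 inputs -> 16 hidden neurons -> 10 outputs (one per digit 0-9).
w1 = [[random.uniform(-1, 1) for _ in range(784)] for _ in range(16)]
w2 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(10)]

hidden = step(weighted_sums(inputs, w1), thresholds=[0.0] * 16)
scores = weighted_sums(hidden, w2)
prediction = max(range(10), key=lambda digit: scores[digit])  # highest activation wins
print(prediction)
```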
I’ll stop here for now.
So far, we’ve looked at what a neural network is, how it reads input, performs calculations, and gives an output.
But we haven’t answered the important question:
How do we find the right weights and right thresholds, so that the correct neuron is activated?
That part’s a little tricky — I’m still trying to wrap my head around it and find a good way to visualize it.
So I won’t go into it just yet.
But for now, I hope this gives you a basic understanding of how neural networks work.
See you in the next one 👋
visualrambling.space is a personal project by Damar, someone who loves to learn about different topics and ramble about them visually.
If you also love this kind of stuff, feel free to follow me. I’ll try to post more content like this in the future!
...
Read the original on visualrambling.space »
This is a tool that encrypts files and splits the decryption key among trusted friends using Shamir’s Secret Sharing. For example, you can give pieces to 5 friends and require any 3 of them to cooperate to recover the key. No single friend can access your data alone.
Each friend receives a self-contained bundle with recover.html—a browser-based tool that works offline, with no servers or internet required. If this website disappears, recovery still works.
Your file is encrypted, the key is split into shares, and friends combine shares to recover it.
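The tool's actual code is in the linked repository; as a rough illustration of the underlying idea, here is a minimal Shamir split-and-combine over a prime field (a sketch, not the project's implementation):

```python
import secrets

# The key becomes the constant term of a random polynomial over a prime field.
# Each share is one point on that polynomial; any `threshold` points recover
# the key by Lagrange interpolation at x = 0.
PRIME = 2**127 - 1  # a prime large enough for this demo secret

def split(secret, shares, threshold):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, power, PRIME) for power, c in enumerate(coeffs)) % PRIME)
        for x in range(1, shares + 1)
    ]

def combine(points):
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)            # stand-in for a file-encryption key
shares = split(key, shares=5, threshold=3)
assert combine(shares[:3]) == key         # any 3 of the 5 shares recover the key
assert combine(shares[2:]) == key
```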
Different friend combinations can recover the file (any 3 of 5)
Add Bob’s and Carol’s shares (drag their README.txt files onto the page)
Watch the automatic decryption when threshold is met
This is the best way to understand what your friends would experience during a real recovery.
* The code is open source—you can read it on GitHub
* Everything runs locally in your browser; your files don’t leave your device
* Try the demo bundles first to see exactly how it works before using it with real secrets
I wanted a way to ensure trusted friends could access important files if something happened to me—without trusting any single person or service with everything. Shamir’s Secret Sharing seemed like the right approach, but I couldn’t find a tool that gave friends a simple, self-contained way to recover files together. So I built one. I’m sharing it in case it’s useful to others.
...
Read the original on eljojo.github.io »
You are a human: you know how this world behaves, how your team and colleagues behave, and what your users expect. You have experienced the world, and you want to work together with a system that has no experience of the world you live in. Every decision in your project that you don’t take and document will be taken for you by the AI.
You cannot meet your responsibility to deliver quality code if even you don’t know where long-lasting, difficult-to-change decisions are being made.
You must know what parts of your code need to be thought through and what must be vigorously tested.
Think about and discuss the architecture, interfaces, data structures, and algorithms you want to use. Think about how to test and validate your code to these specifications.
You need to communicate to the AI in detail what you want to achieve; otherwise, the result will be code that is unusable for your purpose.
Other developers also need to communicate this information to the AI. That makes it efficient to write as much documentation as practical, in a standardized format, directly into the code repository.
Document the requirements, specifications, constraints, and architecture of your project in detail.
Document your coding standards, best practices, and design patterns.
Use flowcharts, UML diagrams, and other visual aids to communicate complex structures and workflows.
Write pseudocode for complex algorithms and logic to guide the AI in understanding your intentions.
Develop efficient debug systems for the AI to use, reducing the need for multiple expensive CLI commands or browsers to verify code functionality. This will save time and resources while simplifying the process for the AI to identify and resolve code issues.
For example: Build a system that collects logs from all nodes in a distributed system and provides abstracted information like “The data was sent to all nodes” or “Data X is saved on Node 1 but not on Node 2”.
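One possible shape for such a helper (a sketch with hypothetical node names and data structures, not a prescription) is to turn raw per-node state into short statements like the ones above:

```python
# Collect which data items each node reports holding, then emit abstracted
# statements instead of raw logs. Node names and structure are hypothetical.
def replication_report(node_data):
    nodes = sorted(node_data)
    all_items = set().union(*node_data.values())
    report = []
    for item in sorted(all_items):
        holders = [n for n in nodes if item in node_data[n]]
        missing = [n for n in nodes if item not in node_data[n]]
        if not missing:
            report.append(f"Data {item} was sent to all nodes")
        else:
            report.append(f"Data {item} is saved on {', '.join(holders)} but not on {', '.join(missing)}")
    return report

print("\n".join(replication_report({"Node 1": {"X", "Y"}, "Node 2": {"Y"}})))
# Data X is saved on Node 1 but not on Node 2
# Data Y was sent to all nodes
```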
Not all code is equally important. Some parts of your codebase are critical and need to be reviewed with extra care. Other parts are less important and can be generated with less oversight.
Use a system that allows you to mark how thoroughly each function has been reviewed.
For example, you can use a prompt that makes the AI put the comment //A behind functions it wrote, to indicate that the function was written by an AI and has not yet been reviewed by a human.
AIs will eventually cheat and use shortcuts. They will write mocks, stubs, and hard-coded values to make the tests succeed while the code itself does not work and is, much of the time, dangerous. Often AIs will adapt or outright delete test code to let the code pass tests.
You must discourage this behavior by writing property-based, high-level specification tests yourself. Build them in a way that makes it hard for the AI to cheat without dedicating big code segments to it.
For example, use property-based testing: restart the server and check in between whether the database has the correct values.
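A sketch of what such a specification-level test could look like, using Python's hypothesis library; start_server, stop_server, and client_for are hypothetical stand-ins for your own system's API:

```python
from hypothesis import given, settings, strategies as st

# The property: whatever we write must still be readable after a full restart.
# `start_server`, `stop_server`, and `client_for` are hypothetical placeholders.
@settings(max_examples=50, deadline=None)
@given(records=st.dictionaries(st.text(min_size=1), st.integers(), min_size=1))
def test_values_survive_restart(records):
    server = start_server()
    try:
        client = client_for(server)
        for key, value in records.items():
            client.put(key, value)
    finally:
        stop_server(server)

    server = start_server()  # restart, then verify the database still has the values
    try:
        client = client_for(server)
        for key, value in records.items():
            assert client.get(key) == value
    finally:
        stop_server(server)
```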
Separate these tests so the AI cannot edit them, and prompt the AI not to change them.
Let an AI write property-based interface tests for the expected behavior, with as little context of the rest of the code as possible.
This will generate tests that are uninfluenced by the “implementation AI,” which prevents the tests from being adapted to the implementation in a way that makes them useless or less effective.
Separate these tests so the AI cannot edit them without approval, and prompt the AI not to change them.
Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.
Save time and money by utilizing path-specific coding agent prompts like CLAUDE.md.
You can generate them automatically, which will give your AI information it would otherwise have to create from scratch every time.
Try to provide as much high level information as practical, such as coding standards, best practices, design patterns, and specific requirements for the project. This will help the AI to generate code that is more aligned with your expectations and will reduce lookup time and cost.
Identify and mark functions that have a high security risk, such as authentication, authorization, and data handling. These functions should be reviewed and tested with extra care and in such a way that a human has comprehended the logic of the function in all its dimensions and is confident about its correctness and safety.
Make this explicit with a comment like //HIGH-RISK-UNREVIEWED and //HIGH-RISK-REVIEWED to make sure that other developers are aware of the importance of these functions and will review them with extra care.
Make sure that the AI is instructed to change the review state of these functions as soon as it changes a single character in the function.
Developers must make sure that the status of these functions is always correct.
Aim to reduce the complexity of the generated code where possible. Each line of code eats up your context window and makes it harder for both the AI and you to keep track of the overall logic of your code.
Each avoidable line of code costs energy and money, and raises the probability that future AI tasks will fail.
AI-written code is cheap; use this to your advantage by exploring different solutions to a problem with experiments and prototypes built from minimal specifications. This will allow you to find the best solution to a problem without investing too much time and resources in a single solution.
Break down complex tasks into smaller, manageable tasks for the AI. Instead of asking the AI to generate the complete project or component at once, break it down into smaller tasks, such as generating individual functions or classes. This will help you to maintain control over the code and its logic.
You have to check each component or module for its adherence to the specifications and requirements.
If you have lost the overview of the complexity and inner workings of the code, you have lost control over your code and must restart from a state where you were in control of your code.
...
Read the original on heidenstedt.org »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.