10 interesting stories served every morning and every evening.




1 763 shares, 34 trendiness

OpenCiv3 Home

OpenCiv3 (formerly known by the codename C7) is an open-source, cross-platform, mod-oriented, modernized reimagining of Civilization III by the fan community, built with the Godot Engine and C#, with capabilities inspired by the best of the 4X genre and lessons learned from modding Civ3. Our vision is to make Civ3 as it could have been, rebuilt for today's modders and players: removing arbitrary limits, fixing broken features, expanding mod capabilities, and supporting modern graphics and platforms. A game that can go beyond C3C while retaining all of its gameplay and content.

OpenCiv3 is under active development and currently in an early pre-alpha state. It is a rudimentary but playable game, still lacking many mechanics and late-game content, and errors are likely. Keep up with our development for the latest updates and opportunities to contribute!

New Players Start Here: An Introduction to OpenCiv3 at CivFanatics

NOTE: OpenCiv3 is not af­fil­i­ated with civ­fa­nat­ics.com, Firaxis Games, BreakAway Games, Hasbro Interactive, Infogrames Interactive, Atari Interactive, or Take-Two Interactive Software. All trade­marks are prop­erty of their re­spec­tive own­ers.

The OpenCiv3 team is pleased to announce the first preview release of the v0.3 "Dutch" milestone. This is a major enhancement over the "Carthage" release, and our debut with standalone mode, featuring placeholder graphics without the need for Civ3 media files. A local installation of Civ3 is still recommended for a more polished experience. See the release notes for a full list of new features in each version.

OpenCiv3 Dutch Preview 1 with the same game in Standalone mode (top) and with im­ported Civ3 graph­ics (bottom)

Download the ap­pro­pri­ate zip file for your OS from the Dutch Preview 1 re­lease

All of­fi­cial re­leases of OpenCiv3 along with more de­tailed re­lease notes can be found on the GitHub re­leases page.

64-bit Windows, Linux, or Mac OS. Other plat­forms may be sup­ported in fu­ture re­leases.

Minimum hard­ware re­quire­ments have not yet been iden­ti­fied. Please let us know if OpenCiv3 does not per­form well on your sys­tem.

Recommended: A lo­cal copy of Civilization III files (the game it­self does NOT have to run) from Conquests or the Complete edi­tion. Standalone mode is avail­able with place­holder graph­ics for those who do not have a copy.

Civilization III Complete is avail­able for a pit­tance from Steam or GOG

This is a Windows 64-bit ex­e­cutable. OpenCiv3 will look for a lo­cal in­stal­la­tion of Civilization III in the Windows reg­istry au­to­mat­i­cally, or you may use an en­vi­ron­ment vari­able to point to the files.

If the downloaded zip file is blocked, you may need to unblock it: right-click the file and select Properties

Check the "Unblock" checkbox near the bottom buttons in the "Security" section

If your Civilization III in­stal­la­tion is not de­tected, you can set the en­vi­ron­ment vari­able CIV3_HOME point­ing to it and restart OpenCiv3

This is an x86-64 Linux executable. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level "Sid Meier's Civilization III Complete" folder (sans "Complete" if your install was from pre-Complete CDs) and its contents to your Linux system, or install the game via Steam or GOG.

Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"

From that same ter­mi­nal where you set CIV3_HOME, run OpenCiv3.x86_64

To make this vari­able per­ma­nent, add it to your .profile or equiv­a­lent.
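As a concrete sketch, appending the export line to ~/.profile could look like this (the path is illustrative; point it at your actual Civ3 folder):

```shell
# Persist CIV3_HOME across sessions; /path/to/civ3 is an example path.
profile="$HOME/.profile"
line='export CIV3_HOME="/path/to/civ3"'
# Append only if not already present, so repeated runs stay idempotent.
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```

New login shells will then pick the variable up automatically.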

This is a universal 64-bit executable, so it should run on both Intel and M1 Macs. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level "Sid Meier's Civilization III Complete" folder (sans "Complete" if your install was from pre-Complete CDs) and its contents to your Mac system, or install the game via Steam or GOG.

Download the zip; your browser may complain bitterly, and you may have to tell it to keep the download instead of trashing it

Double click the zip file, and a folder with OpenCiv3.app and a json file will ap­pear

If you try to open OpenCiv3.app it will tell you it’s dam­aged and try to trash it; it is not dam­aged

To un­block the down­loaded app, from a ter­mi­nal run xattr -cr /path/to/OpenCiv3.app; you can avoid typ­ing the path out by typ­ing xattr -cr and then drag­ging the OpenCiv3.app icon onto the ter­mi­nal win­dow

Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"

From that same ter­mi­nal where you set CIV3_HOME, run OpenCiv3.app with open /path/to/OpenCiv3.app, or again just type open and drag the OpenCiv3 icon onto the ter­mi­nal win­dow and press en­ter

OpenCiv3 uses many prim­i­tive place­holder as­sets; load­ing files from a lo­cal Civilization III in­stall is rec­om­mended (see plat­form spe­cific setup in­struc­tions above)

Support for play­ing Civ3 BIQ or SAV files is in­com­plete; some files will not load cor­rectly and crashes may oc­cur

For Mac:

Mac will try hard not to let you run this; it will tell you the app is dam­aged and can’t be opened and help­fully of­fer to trash it for you. From a ter­mi­nal you can xattr -cr /path/to/OpenCiv3.app to en­able run­ning it.

The Mac build will crash if you hit buttons to start a new game (New Game, Quick Start, Tutorial, or Load Scenario) because it can't find the 'new game' save file we're using as a stand-in for map generation. But you can use Load Game and load c7-static-map-save.json, or open a Civ3 SAV file to open that map

Other spe­cific bugs will be tracked on the GitHub is­sues page.

© OpenCiv3 con­trib­u­tors. OpenCiv3 is free and open source soft­ware re­leased un­der the MIT License.

...

Read the original on openciv3.org »

2 360 shares, 14 trendiness

Design at the speed of light


...

Read the original on vecti.com »

3 331 shares, 124 trendiness

La Suite numérique

Full list of pro­jects avail­able here.

La Suite numérique (La Suite for short) is a full blown open-source dig­i­tal work­space for on­line col­lab­o­ra­tion and team­work.

La Suite is built by the French government agencies DINUM and ANCT. It is also the product of close European collaboration with the Dutch and German states.

Our code base is 100% open source and MIT-licensed.

Come say hello on Matrix

...

Read the original on github.com »

4 330 shares, 16 trendiness

Split a recovery key among friends

This is a tool that en­crypts files and splits the de­cryp­tion key among trusted friends us­ing Shamir’s Secret Sharing. For ex­am­ple, you can give pieces to 5 friends and re­quire any 3 of them to co­op­er­ate to re­cover the key. No sin­gle friend can ac­cess your data alone.

Each friend re­ceives a self-con­tained bun­dle with re­cover.html—a browser-based tool that works of­fline, with no servers or in­ter­net re­quired. If this web­site dis­ap­pears, re­cov­ery still works.

Your file is en­crypted, the key is split into shares, and friends com­bine shares to re­cover it.
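To make the split-and-combine step concrete, here is a minimal, illustrative sketch of Shamir's Secret Sharing over a prime field in Python. This is not the tool's actual code (the real tool also encrypts the file and packages shares into self-contained bundles); it only shows the k-of-n math.

```python
import random

PRIME = 2**127 - 1  # field modulus; large enough for a 126-bit secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them can recover it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # evaluate with Horner's rule
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def combine(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With k=3 and n=5, any three shares reconstruct the secret exactly, while any two reveal nothing about it.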

Different friend com­bi­na­tions can re­cover the file (any 3 of 5)

Add Bob’s and Carol’s shares (drag their README.txt files onto the page)

Watch the au­to­matic de­cryp­tion when thresh­old is met

This is the best way to un­der­stand what your friends would ex­pe­ri­ence dur­ing a real re­cov­ery.

* The code is open source—you can read it on GitHub

* Everything runs lo­cally in your browser; your files don’t leave your de­vice

* Try the demo bun­dles first to see ex­actly how it works be­fore us­ing it with real se­crets

I wanted a way to en­sure trusted friends could ac­cess im­por­tant files if some­thing hap­pened to me—with­out trust­ing any sin­gle per­son or ser­vice with every­thing. Shamir’s Secret Sharing seemed like the right ap­proach, but I could­n’t find a tool that gave friends a sim­ple, self-con­tained way to re­cover files to­gether. So I built one. I’m shar­ing it in case it’s use­ful to oth­ers.

...

Read the original on eljojo.github.io »

5 300 shares, 14 trendiness

How to effectively write quality code with AI

Write high-level specifications and test them yourself

Find and mark functions that have a high security risk

Do not generate blindly or too much complexity at once


You are a hu­man, you know how this world be­haves, how your team and col­leagues be­have, and what your users ex­pect. You have ex­pe­ri­enced the world, and you want to work to­gether with a sys­tem that has no ex­pe­ri­ence in this world you live in. Every de­ci­sion in your pro­ject that you don’t take and doc­u­ment will be taken for you by the AI.

Your responsibility for delivering quality code cannot be met if even you do not know where long-lasting and difficult-to-change decisions are being made.

You must know which parts of your code need to be thought through and which must be rigorously tested.

Think about and dis­cuss the ar­chi­tec­ture, in­ter­faces, data struc­tures, and al­go­rithms you want to use. Think about how to test and val­i­date your code to these spec­i­fi­ca­tions.

You need to com­mu­ni­cate to the AI in de­tail what you want to achieve, oth­er­wise it will re­sult in code that is un­us­able for your pur­pose.

Other de­vel­op­ers also need to com­mu­ni­cate this in­for­ma­tion to the AI. That makes it ef­fi­cient to write as much doc­u­men­ta­tion as prac­ti­cal in a stan­dard­ized for­mat and into the code repos­i­tory it­self.

Document the re­quire­ments, spec­i­fi­ca­tions, con­straints, and ar­chi­tec­ture of your pro­ject in de­tail.

Document your cod­ing stan­dards, best prac­tices, and de­sign pat­terns.

Use flow­charts, UML di­a­grams, and other vi­sual aids to com­mu­ni­cate com­plex struc­tures and work­flows.

Write pseudocode for com­plex al­go­rithms and logic to guide the AI in un­der­stand­ing your in­ten­tions.

Develop ef­fi­cient de­bug sys­tems for the AI to use, re­duc­ing the need for mul­ti­ple ex­pen­sive CLI com­mands or browsers to ver­ify code func­tion­al­ity. This will save time and re­sources while sim­pli­fy­ing the process for the AI to iden­tify and re­solve code is­sues.

For example: build a system that collects logs from all nodes in a distributed system and provides abstracted information like "The data was sent to all nodes" or "The data X is saved on Node 1 but not on Node 2".
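A toy sketch of such a debug system in Python (node names, keys, and message strings are invented for illustration):

```python
from collections import defaultdict

class LogAggregator:
    """Collects per-node storage events and answers high-level questions."""

    def __init__(self):
        self.saved = defaultdict(set)  # data key -> nodes that stored it

    def record(self, node, key):
        """Register that `node` reported storing data item `key`."""
        self.saved[key].add(node)

    def report(self, key, all_nodes):
        """Summarize where `key` landed, abstracting away raw log lines."""
        have = self.saved.get(key, set())
        missing = sorted(set(all_nodes) - have)
        if not missing:
            return f"The data {key} was sent to all nodes"
        return (f"The data {key} is saved on {', '.join(sorted(have))} "
                f"but not on {', '.join(missing)}")
```

The AI (or a human) can then ask one cheap question instead of grepping logs on every node.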

Not all code is equally im­por­tant. Some parts of your code­base are crit­i­cal and need to be re­viewed with ex­tra care. Other parts are less im­por­tant and can be gen­er­ated with less over­sight.

Use a sys­tem that al­lows you to mark how thor­oughly each func­tion has been re­viewed.

For example, you can use a prompt that will let the AI put the comment //A behind functions it wrote, to indicate that the function was written by an AI and has not yet been reviewed by a human.

AIs will cheat and use shortcuts eventually. They will write mocks, stubs, and hard-coded values to make the tests succeed while the code itself does not work and is often dangerous. Often AIs will adapt or outright delete test code to let the code pass tests.

You must dis­cour­age this be­hav­ior by writ­ing prop­erty based high level spec­i­fi­ca­tion tests your­self. Build them in a way that makes it hard for the AI to cheat with­out hav­ing big code seg­ments ded­i­cated to it.

For ex­am­ple, use prop­erty based test­ing, restart the server and check in be­tween if the data­base has the cor­rect val­ues.
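As an illustrative sketch, here is a hand-rolled property-based round-trip test. The serialize/deserialize pair is a hypothetical function under test; dedicated libraries like Hypothesis automate the random input generation and shrink failing cases for you.

```python
import random

def serialize(items):
    """Hypothetical code under test: join integers into a CSV string."""
    return ",".join(str(i) for i in items)

def deserialize(s):
    """Inverse of serialize; an empty string maps back to an empty list."""
    return [int(x) for x in s.split(",")] if s else []

def test_roundtrip_property(trials=200):
    """Property: deserializing a serialized list yields the original list,
    checked on many randomly generated inputs rather than hand-picked ones."""
    for _ in range(trials):
        items = [random.randint(-10**6, 10**6)
                 for _ in range(random.randint(0, 20))]
        assert deserialize(serialize(items)) == items, items
    return True
```

Because the inputs are generated, the AI cannot satisfy the test by hard-coding the handful of examples it has seen.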

Separate these tests so the AI cannot edit them, and prompt the AI not to change them.

Let an AI write prop­erty based in­ter­face tests for the ex­pected be­hav­ior with as lit­tle con­text of the rest of the code as pos­si­ble.

This will generate tests that are uninfluenced by the "implementation AI", which will prevent the tests from being adapted to the implementation in a way that makes them useless or less effective.

Separate these tests so the AI can­not edit them with­out ap­proval and prompt the AI not to change them.

Use strict lint­ing and for­mat­ting rules to en­sure code qual­ity and con­sis­tency. This will help you and your AI to find is­sues early.

Save time and money by uti­liz­ing path spe­cific cod­ing agent prompts like CLAUDE.md.

You can generate them automatically, which will give your AI information it would otherwise have to create from scratch every time.

Try to pro­vide as much high level in­for­ma­tion as prac­ti­cal, such as cod­ing stan­dards, best prac­tices, de­sign pat­terns, and spe­cific re­quire­ments for the pro­ject. This will help the AI to gen­er­ate code that is more aligned with your ex­pec­ta­tions and will re­duce lookup time and cost.

Identify and mark func­tions that have a high se­cu­rity risk, such as au­then­ti­ca­tion, au­tho­riza­tion, and data han­dling. These func­tions should be re­viewed and tested with ex­tra care and in such a way that a hu­man has com­pre­hended the logic of the func­tion in all its di­men­sions and is con­fi­dent about its cor­rect­ness and safety.

Make this ex­plicit with a com­ment like //HIGH-RISK-UNREVIEWED and //HIGH-RISK-REVIEWED to make sure that other de­vel­op­ers are aware of the im­por­tance of these func­tions and will re­view them with ex­tra care.

Make sure that the AI is in­structed to change the re­view state of these func­tions as soon as it changes a sin­gle char­ac­ter in the func­tion.

Developers must make sure that the sta­tus of these func­tions is al­ways cor­rect.
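The marker workflow above could also be enforced mechanically. A minimal sketch of a repo scanner (marker names follow this article; the file globs are illustrative):

```python
import re
from pathlib import Path

# Matches the review-state markers described above.
MARKER = re.compile(r"//HIGH-RISK-(UNREVIEWED|REVIEWED)")

def scan(root, patterns=("*.c", "*.js", "*.ts")):
    """Return {file: [(line_no, state), ...]} for every marked line,
    so CI can fail when UNREVIEWED functions remain."""
    found = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            hits = [(no, m.group(1))
                    for no, line in enumerate(path.read_text().splitlines(), 1)
                    if (m := MARKER.search(line))]
            if hits:
                found[str(path)] = hits
    return found
```

Run it in CI to surface every function whose review state is still UNREVIEWED.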

Aim to re­duce the com­plex­ity of the gen­er­ated code where pos­si­ble. Each sin­gle line of code will eat up your con­text win­dow and make it harder for the AI and You to keep track of the over­all logic of your code.

Each avoid­able line of code is cost­ing en­ergy, money and prob­a­bil­ity of fu­ture un­suc­cess­ful AI tasks.

AI writ­ten code is cheap, use this to your ad­van­tage by ex­plor­ing dif­fer­ent so­lu­tions to a prob­lem with ex­per­i­ments and pro­to­types with min­i­mal spec­i­fi­ca­tions. This will al­low you to find the best so­lu­tion to a prob­lem with­out in­vest­ing too much time and re­sources in a sin­gle so­lu­tion.

Break down complex tasks into smaller, manageable tasks for the AI. Instead of asking the AI to generate the complete project or component at once, break it down into smaller tasks, such as generating individual functions or classes. This will help you to maintain control over the code and its logic.

You have to check each com­po­nent or mod­ule for its ad­her­ence to the spec­i­fi­ca­tions and re­quire­ments.

If you have lost the overview of the com­plex­ity and in­ner work­ings of the code, you have lost con­trol over your code and must restart from a state where you were in con­trol of your code.

...

Read the original on heidenstedt.org »

6 273 shares, 14 trendiness

pydantic/monty: A minimal, secure Python interpreter written in Rust for use by AI

Experimental - This project is still in development, and not ready for prime time.

A min­i­mal, se­cure Python in­ter­preter writ­ten in Rust for use by AI.

Monty avoids the cost, la­tency, com­plex­ity and gen­eral faff of us­ing a full con­tainer based sand­box for run­ning LLM gen­er­ated code.

Instead, it lets you safely run Python code writ­ten by an LLM em­bed­ded in your agent, with startup times mea­sured in sin­gle digit mi­crosec­onds not hun­dreds of mil­lisec­onds.

What Monty can do:

* Run a rea­son­able sub­set of Python code - enough for your agent to ex­press what it wants to do

* Completely block ac­cess to the host en­vi­ron­ment: filesys­tem, env vari­ables and net­work ac­cess are all im­ple­mented via ex­ter­nal func­tion calls the de­vel­oper can con­trol

* Call func­tions on the host - only func­tions you give it ac­cess to

* Run type­check­ing - monty sup­ports full mod­ern python type hints and comes with ty in­cluded in a sin­gle bi­nary to run type­check­ing

* Be snap­shot­ted to bytes at ex­ter­nal func­tion calls, mean­ing you can store the in­ter­preter state in a file or data­base, and re­sume later

* Start up extremely fast (<1μs to go from code to execution result), with runtime performance similar to CPython (generally between 5x faster and 5x slower)

* Be called from Rust, Python, or JavaScript - because Monty has no dependencies on CPython, you can use it anywhere you can run Rust

* Control re­source us­age - Monty can track mem­ory us­age, al­lo­ca­tions, stack depth, and ex­e­cu­tion time and can­cel ex­e­cu­tion if it ex­ceeds pre­set lim­its

* Collect std­out and stderr and re­turn it to the caller

* Run async or sync code, calling async or sync functions on the host

What Monty can­not do:

* Use the stan­dard li­brary (except a few se­lect mod­ules: sys, typ­ing, asyn­cio, dat­a­classes (soon), json (soon))

* Use third-party libraries (like Pydantic); support for external Python libraries is not a goal

* Define classes (support should come soon)

* Use match statements (again, support should come soon)

In short, Monty is ex­tremely lim­ited and de­signed for one use case:

For mo­ti­va­tion on why you might want to do this, see:

In very sim­ple terms, the idea of all the above is that LLMs can work faster, cheaper and more re­li­ably if they’re asked to write Python (or Javascript) code, in­stead of re­ly­ing on tra­di­tional tool call­ing. Monty makes that pos­si­ble with­out the com­plex­ity of a sand­box or risk of run­ning code di­rectly on the host.

Note: Monty will (soon) be used to im­ple­ment code­mode in Pydantic AI

Monty can be called from Python, JavaScript/TypeScript or Rust.

uv add pydantic-monty

```python
from typing import Any

import pydantic_monty

code = """
async def agent(prompt: str, messages: Messages):
    while True:
        print(f'messages so far: {messages}')
        output = await call_llm(prompt, messages)
        if isinstance(output, str):
            return output
        messages.extend(output)

await agent(prompt, [])
"""

type_definitions = """
from typing import Any

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    raise NotImplementedError()

prompt: str = ...
"""

m = pydantic_monty.Monty(
    code,
    inputs=['prompt'],
    external_functions=['call_llm'],
    script_name='agent.py',
    type_check=True,
    type_check_stubs=type_definitions,
)

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    if len(messages) < 2:
        return [{'role': 'system', 'content': 'example response'}]
    else:
        return f'example output, message count {len(messages)}'

async def main():
    output = await pydantic_monty.run_monty_async(
        m,
        inputs={'prompt': 'testing'},
        external_functions={'call_llm': call_llm},
    )
    print(output)
    #> example output, message count 2

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())
```

Use start() and re­sume() to han­dle ex­ter­nal func­tion calls it­er­a­tively, giv­ing you con­trol over each call:

```python
import pydantic_monty

code = """
data = fetch(url)
len(data)
"""

m = pydantic_monty.Monty(code, inputs=['url'], external_functions=['fetch'])

# Start execution - pauses when fetch() is called
result = m.start(inputs={'url': 'https://example.com'})
print(type(result))
```

Both Monty and MontySnapshot can be se­ri­al­ized to bytes and re­stored later. This al­lows caching parsed code or sus­pend­ing ex­e­cu­tion across process bound­aries:

```python
import pydantic_monty

# Serialize parsed code to avoid re-parsing
m = pydantic_monty.Monty('x + 1', inputs=['x'])
data = m.dump()

# Later, restore and run
m2 = pydantic_monty.Monty.load(data)
print(m2.run(inputs={'x': 41}))
#> 42

# Serialize execution state mid-flight
m = pydantic_monty.Monty('fetch(url)', inputs=['url'], external_functions=['fetch'])
progress = m.start(inputs={'url': 'https://example.com'})
state = progress.dump()

# Later, restore and resume (e.g., in a different process)
progress2 = pydantic_monty.MontySnapshot.load(state)
result = progress2.resume(return_value='response data')
print(result.output)
#> response data
```

use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};

let code = r#"
def fib(n):
    if n

MontyRun and RunProgress can be se­ri­al­ized us­ing the dump() and load() meth­ods:

use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};

// Serialize parsed code

...

Read the original on github.com »

7 260 shares, 13 trendiness

valdanylchuk/breezydemo: BreezyBox shell demo for esp32s3

This is a demo for how you can turn an ESP32-S3 microcontroller into a tiny instant-on PC with its own shell, editor, compiler, and online apps installer. Something like a Raspberry Pi, minus the overhead of a full server/desktop-grade OS. I think the ESP32 is underrated in the hobby maker community for this PC-like use case. This demo uses BreezyBox, my mini-shell ESP-IDF component.

First of all, see­ing is be­liev­ing (click to watch the video):

It started as a cyberdeck” style craft­ing pro­ject. Then I got car­ried away with the soft­ware part. I chose ESP32-S3 for the base plat­form. It has the nos­tal­gic ap­peal of the DOS era PCs, with sim­i­lar re­sources, and el­bow-deep-in-bytes cod­ing ex­pe­ri­ence, plus mod­ern wire­less comms.

ESP32-S3 can do every­thing those PCs did and more, but that is in­con­ve­nient out of the box, be­cause that is not the com­mer­cial use case it is po­si­tioned for. It also forces away the code bloat. If you are like me, and love small el­e­gant things, and tech­nol­ogy that punches way above its weight, you ought to try it!

So any­way, I de­cided to try and pack­age some key miss­ing parts: a ba­sic vterm, the cur­rent work­ing di­rec­tory (CWD) track­ing, a few fa­mil­iar UNIX-like com­mands, and an app in­staller. Believe it or not, the rest is al­ready there in ESP-IDF com­po­nents, in­clud­ing the elf_loader with dy­namic link­ing.

The result is called "BreezyBox", by analogy with the BusyBox commands suite. The name is just a light joke; it is not meant to be a full clone. You can import it with one command in your ESP-IDF project, and if you have some stdio going, even at "Hello World" level, it should mostly just work. I call it a "mini shell", a naïve user might call it an OS (it is not; it runs on FreeRTOS), and you can also call it the userland layer.

The BreezyBox com­po­nent leaves the dis­play and other board con­fig­u­ra­tion de­tails to the user’s firmware pro­ject, pro­vid­ing mainly the vterm/​vfs fea­tures, and some shell com­mands. This par­tic­u­lar ex­am­ple/​demo pro­ject sup­ports only one spe­cific dev board: Waveshare ESP32-S3-Touch-LCD-7B (no af­fil­i­a­tion). But you can see how all the parts con­nect, and adapt it to your dis­play/​board, or just copy some code snip­pets from here.

I sug­gest just fork it, clone it, and try to make it work on your board. Mine was about 40€; you can start with some ran­dom $10 two inch LCD S3 dev board if you like. Hint: LVGL text la­bel con­trol is the eas­i­est path to std­out on LCD that works al­most every­where. You can also start with a head­less board over USB con­sole, that takes zero code, and gives you free ANSI codes in stan­dard IDF Monitor in VSCode (or in Tabby).

You do not have to write your own font ren­derer like I did here; that was just to push past 30 FPS on a dis­play slightly too large for this chip.

This is free soft­ware un­der MIT License.

The best help is currently more testing beyond "works on my computer", more shared examples, and fun use cases:

More ELF apps — see the ex­am­ples at my breezyapps repo, they are su­per easy to fol­low. Even a care­fully writ­ten stdlib C pro­gram with no plat­form-spe­cific bits may work some­times, also with some ANSI codes. But be sure to ver­ify on the ac­tual ESP32-S3: the mem­ory is tight, the larger PSRAM re­quires align­ment, and there are other lim­its and quirks. You can pub­lish and in­stall the apps us­ing your own repo.

More full ex­am­ple firmware repos­i­to­ries: for dif­fer­ent boards, with dif­fer­ent styles. Maybe you pro­vide the ba­sic LVGL text la­bel ex­am­ple on some pop­u­lar board. Maybe you pre­fer C++ to plain C. Maybe you em­brace the GUI. Maybe you port some retro games. Maybe you even make it work on P4, or C6 (RISC-V, a com­pletely dif­fer­ent CPU). Maybe you at­tach some cool gad­gets to it. Maybe you build an ex­tra cool cy­berdeck case. Or maybe you re­pro­duce the ex­act same thing, and just share your setup ex­pe­ri­ence and hands-on im­pres­sions.

It would be so cool to see more peo­ple us­ing BreezyBox, and to have more ready-to-clone ex­am­ples for every­one!

...

Read the original on github.com »

8 193 shares, 7 trendiness

Tencent HY Research

...

Read the original on hy.tencent.com »

9 185 shares, 12 trendiness

Brendan Gregg's Blog

Recent posts:

04 Aug 2025 »

When to Hire a Computer Performance Engineering Team (2025) part 1 of 2

17 Mar 2024 »

The Return of the Frame Pointers

19 Mar 2022 »

Why Don’t You Use …

Blog in­dex

About

RSS


The stag­ger­ing and fast-grow­ing cost of AI dat­a­cen­ters is a call for per­for­mance en­gi­neer­ing like no other in his­tory; it’s not just about sav­ing costs — it’s about sav­ing the planet. I have joined OpenAI to work on this chal­lenge di­rectly, with an ini­tial fo­cus on ChatGPT per­for­mance. The scale is ex­treme and the growth is mind-bog­gling. As a leader in dat­a­cen­ter per­for­mance, I’ve re­al­ized that per­for­mance en­gi­neer­ing as we know it may not be enough — I’m think­ing of new en­gi­neer­ing meth­ods so that we can find big­ger op­ti­miza­tions than we have be­fore, and find them faster. It’s the op­por­tu­nity of a life­time and, un­like in ma­ture en­vi­ron­ments of scale, it feels as if there are no ob­sta­cles — no ar­eas con­sid­ered too dif­fi­cult to change. Do any­thing, do it at scale, and do it to­day.

Why OpenAI ex­actly? I had talked to in­dus­try ex­perts and friends who rec­om­mended sev­eral com­pa­nies, es­pe­cially OpenAI. However, I was still a bit cyn­i­cal about AI adop­tion. Like every­one, I was be­ing bom­barded with ads by var­i­ous com­pa­nies to use AI, but I won­dered: was any­one ac­tu­ally us­ing it? Everyday peo­ple with every­day uses? One day dur­ing a busy pe­riod of in­ter­view­ing, I re­al­ized I needed a hair­cut (as it hap­pened, it was the day be­fore I was due to speak with Sam Altman).

Mia the hairstylist got to work, and casually asked what I do for a living. "I'm an Intel fellow, I work on datacenter performance." Silence. Maybe she didn't know what datacenters were or who Intel was. I followed up: "I'm interviewing for a new job to work on AI datacenters." Mia lit up: "Oh, I use ChatGPT all the time!" While she was cutting my hair — which takes a while — she told me about her many uses of ChatGPT. (I, of course, was a captive audience.) She described uses I hadn't thought of, and I realized how ChatGPT was becoming an essential tool for everyone. Just one example: she was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.

I had pre­vi­ously chat­ted to other ran­dom peo­ple about AI, in­clud­ing a re­al­tor, a tax ac­coun­tant, and a part-time bee­keeper. All told me en­thu­si­as­ti­cally about their uses of ChatGPT; the bee­keeper, for ex­am­ple, uses it to help with small busi­ness pa­per­work. My wife was al­ready a big user, and I was us­ing it more and more, e.g. to san­ity-check quotes from trades­peo­ple. Now my hair­styl­ist, who rec­og­nized ChatGPT as a brand more read­ily than she did Intel, was prais­ing the tech­nol­ogy and teach­ing me about it. I stood on the street af­ter my hair­cut and let sink in how big this was, how this tech­nol­ogy has be­come an es­sen­tial aide for so many, how I could lead per­for­mance ef­forts and help save the planet. Joining OpenAI might be the biggest op­por­tu­nity of my life­time.

It’s nice to work on some­thing big that many peo­ple rec­og­nize and ap­pre­ci­ate. I felt this when work­ing at Netflix, and I’d been miss­ing that hu­man con­nec­tion when I changed jobs. But there are other fac­tors to con­sider be­yond a well-known prod­uct: what’s my role, who am I do­ing it with, and what is the com­pen­sa­tion?

I ended up hav­ing 26 in­ter­views and meet­ings (of course I kept a log) with var­i­ous AI tech gi­ants, so I learned a lot about the en­gi­neer­ing work they are do­ing and the en­gi­neers who do it. The work it­self re­minds me of Netflix cloud en­gi­neer­ing: huge scale, cloud com­put­ing chal­lenges, fast-paced code changes, and free­dom for en­gi­neers to make an im­pact. Lots of very in­ter­est­ing en­gi­neer­ing prob­lems across the stack. It’s not just GPUs, it’s every­thing.

The en­gi­neers I met were im­pres­sive: the AI gi­ants have been very se­lec­tive, to the point that I was­n’t to­tally sure I’d pass the in­ter­views my­self. Of the com­pa­nies I talked to, OpenAI had the largest num­ber of tal­ented en­gi­neers I al­ready knew, in­clud­ing for­mer Netflix col­leagues such as Vadim who was en­cour­ag­ing me to join. At Netflix, Vadim would bring me per­for­mance is­sues and watch over my shoul­der as I de­bugged and fixed them. It’s a big plus to have some­one at a com­pany who knows you well, knows the work, and thinks you’ll be good at the work.

Some peo­ple may be ex­cited by what it means for OpenAI to hire me, a well known fig­ure in com­puter per­for­mance, and of course I’d like to do great things. But to be fair on my fel­low staff, there are many per­for­mance en­gi­neers al­ready at OpenAI, in­clud­ing vet­er­ans I know from the in­dus­try, and they have been busy find­ing im­por­tant wins. I’m not the first, I’m just the lat­est.

AI was also an early dream of mine. As a child I was a fan of British SciFi, in­clud­ing Blake’s 7 (1978-1981) which fea­tured a sar­cas­tic, opin­ion­ated su­per­com­puter named Orac. Characters could talk to Orac and ask it to do re­search tasks. Orac could com­mu­ni­cate with all other com­put­ers in the uni­verse, del­e­gate work to them, and con­trol them (this was very fu­tur­is­tic in 1978, pre-In­ter­net as we know it).

Orac was con­sid­ered the most valu­able thing in the Blake’s 7 uni­verse, and by the time I was a uni­ver­sity en­gi­neer­ing stu­dent I wanted to build Orac. So I started de­vel­op­ing my own nat­ural lan­guage pro­cess­ing soft­ware. I did­n’t get very far, though: main mem­ory at the time was­n’t large enough to store an en­tire dic­tio­nary plus meta­data. I vis­ited a PC ven­dor with my re­quire­ments and they laughed, telling me to buy a main­frame in­stead. I re­al­ized I needed it to dis­tin­guish hot ver­sus cold data and leave cold data on disk, and maybe I should be us­ing a data­base… and that was about where I left that pro­ject.

Last year I started us­ing ChatGPT, and won­dered if it knew about Blake’s 7 and Orac. So I asked:

ChatGPT’s re­sponse nails the char­ac­ter. I added it to Settings->Personalization->Custom Instructions, and now it al­ways an­swers as Orac. I love it. (There’s also sur­pris­ing news for Blake’s 7 fans: A re­boot was just an­nounced!)

I am now a Member of Technical Staff for OpenAI, work­ing re­motely from Sydney, Australia, and re­port­ing to Justin Becker. The team I’ve joined is ChatGPT per­for­mance en­gi­neer­ing, and I’ll be work­ing with the other per­for­mance en­gi­neer­ing teams at the com­pany. One of my first pro­jects is a multi-org strat­egy for im­prov­ing per­for­mance and re­duc­ing costs.

There’s so many in­ter­est­ing things to work on, things I have done be­fore and things I haven’t. I’m al­ready us­ing Codex for more than just cod­ing. Will I be do­ing more eBPF, Ftrace, PMCs? I’m start­ing with OpenAI’s needs and see­ing where that takes me; but given those tech­nolo­gies are proven for find­ing dat­a­cen­ter per­for­mance wins, it seems likely — I can lead the way. (And if every­thing I’ve de­scribed here sounds in­ter­est­ing to you, OpenAI is hir­ing.)

I was at Linux Plumbers Conference in Tokyo in December, just after I announced leaving Intel, and dozens of people wanted to know where I was going next and why. I thought I'd write this blog post to answer everyone at once. I also need to finish part 2 of hiring a performance engineering team (it was already drafted before I joined OpenAI). I haven't forgotten.

It took months to wrap up my prior job and start at OpenAI, so I was due for another haircut. I thought it'd be neat to ask Mia about ChatGPT now that I work on it, then realized it had been months and she could have changed her mind. I asked nervously: "Still using ChatGPT?" Mia responded confidently: "Twenty-four seven!"

I checked with Mia, she was thrilled to be men­tioned in my post. This is also a per­sonal post: no one asked me to write this.

...

Read the original on www.brendangregg.com »

10 176 shares, 6 trendiness

New Testament, Apocrypha, Gnostics, Church Fathers

Browse by range of dat­ing or by cat­e­gory, New Testament — Apocrypha — Gnostics — Church Fathers — Other, or use the search box.


Early Christian Writings is the most complete collection of Christian texts before the Council of Nicaea in 325 AD. The site provides translations and commentary for these sources, including the New Testament, Apocrypha, Gnostics, Church Fathers, and some non-Christian references. The "Early Christian Writings: New Testament, Apocrypha, Gnostics, Church Fathers" site is copyright © Peter Kirby. Permission is given to link to any HTML file on the Early Christian Writings site.

Please buy the CD to sup­port the site, view it with­out ads, and get bonus stuff!


...

Read the original on earlychristianwritings.com »
