10 interesting stories served every morning and every evening.
OpenCiv3 (formerly known by the codename “C7”) is an open-source, cross-platform, mod-oriented, modernized reimagining of Civilization III, built by the fan community with the Godot Engine and C#, with capabilities inspired by the best of the 4X genre and lessons learned from modding Civ3. Our vision is to make Civ3 as it could have been, rebuilt for today’s modders and players: removing arbitrary limits, fixing broken features, expanding mod capabilities, and supporting modern graphics and platforms. A game that can go beyond C3C while retaining all of its gameplay and content.
OpenCiv3 is under active development and currently in an early pre-alpha state. It is a rudimentary but playable game that still lacks many mechanics and late-game content, and errors are likely. Keep up with our development for the latest updates and opportunities to contribute!
New Players Start Here: An Introduction to OpenCiv3 at CivFanatics
NOTE: OpenCiv3 is not affiliated with civfanatics.com, Firaxis Games, BreakAway Games, Hasbro Interactive, Infogrames Interactive, Atari Interactive, or Take-Two Interactive Software. All trademarks are property of their respective owners.
The OpenCiv3 team is pleased to announce the first preview release of the v0.3 “Dutch” milestone. This is a major enhancement over the “Carthage” release, and marks our debut of standalone mode, which uses placeholder graphics and removes the need for Civ3 media files. A local installation of Civ3 is still recommended for a more polished experience. See the release notes for a full list of new features in each version.
OpenCiv3 Dutch Preview 1 with the same game in Standalone mode (top) and with imported Civ3 graphics (bottom)
Download the appropriate zip file for your OS from the Dutch Preview 1 release
All official releases of OpenCiv3 along with more detailed release notes can be found on the GitHub releases page.
64-bit Windows, Linux, or macOS. Other platforms may be supported in future releases.
Minimum hardware requirements have not yet been identified. Please let us know if OpenCiv3 does not perform well on your system.
Recommended: A local copy of Civilization III files (the game itself does NOT have to run) from Conquests or the Complete edition. Standalone mode is available with placeholder graphics for those who do not have a copy.
Civilization III Complete is available for a pittance from Steam or GOG
This is a Windows 64-bit executable. OpenCiv3 will look for a local installation of Civilization III in the Windows registry automatically, or you may use an environment variable to point to the files.
If the downloaded file is blocked, you may need to unblock it: right-click it, choose Properties, and check the “Unblock” checkbox near the bottom, in the “Security” section
If your Civilization III installation is not detected, you can set the environment variable CIV3_HOME pointing to it and restart OpenCiv3
This is an x86-64 Linux executable. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (without “Complete” if your install was from pre-Complete CDs) and its contents to your Linux system, or install the game via Steam or GOG.
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.x86_64
To make this variable permanent, add it to your .profile or equivalent.
This is a universal 64-bit executable, so it should run on both Intel and M1 Macs. You may use an environment variable to point to the files from a Civilization III installation. You can just copy or mount the top-level “Sid Meier’s Civilization III Complete” folder (without “Complete” if your install was from pre-Complete CDs) and its contents to your Mac system, or install the game via Steam or GOG.
Download the zip; it may complain bitterly, and you may have to tell it to keep the download instead of trashing it
Double click the zip file, and a folder with OpenCiv3.app and a json file will appear
If you try to open OpenCiv3.app it will tell you it’s damaged and try to trash it; it is not damaged
To unblock the downloaded app, from a terminal run xattr -cr /path/to/OpenCiv3.app; you can avoid typing the path out by typing xattr -cr and then dragging the OpenCiv3.app icon onto the terminal window
Set the CIV3_HOME environment variable to point to the Civ3 files, e.g. export CIV3_HOME="/path/to/civ3"
From that same terminal where you set CIV3_HOME, run OpenCiv3.app with open /path/to/OpenCiv3.app, or again just type open and drag the OpenCiv3 icon onto the terminal window and press enter
OpenCiv3 uses many primitive placeholder assets; loading files from a local Civilization III install is recommended (see platform specific setup instructions above)
Support for playing Civ3 BIQ or SAV files is incomplete; some files will not load correctly and crashes may occur
For Mac:
Mac will try hard not to let you run this; it will tell you the app is damaged and can’t be opened and helpfully offer to trash it for you. From a terminal you can run xattr -cr /path/to/OpenCiv3.app to enable running it.
Mac will crash if you hit buttons to start a new game (New Game, Quick Start, Tutorial, or Load Scenario) because it can’t find the ‘new game’ save file we’re using as a stand-in for map generation. But you can use Load Game to load c7-static-map-save.json, or open a Civ3 SAV file, to open that map
Other specific bugs will be tracked on the GitHub issues page.
© OpenCiv3 contributors. OpenCiv3 is free and open source software released under the MIT License.
...
Read the original on openciv3.org »
...
Read the original on vecti.com »
Full list of projects available here.
La Suite numérique (La Suite for short) is a full-blown open-source digital workspace for online collaboration and teamwork.
La Suite is built by the French government agencies DINUM and ANCT. It is also the product of a close European collaboration with the Netherlands and Germany.
Our code base is 100% open source and MIT-licensed.
Come say hello on Matrix
...
Read the original on github.com »
This is a tool that encrypts files and splits the decryption key among trusted friends using Shamir’s Secret Sharing. For example, you can give pieces to 5 friends and require any 3 of them to cooperate to recover the key. No single friend can access your data alone.
Each friend receives a self-contained bundle with recover.html—a browser-based tool that works offline, with no servers or internet required. If this website disappears, recovery still works.
Your file is encrypted, the key is split into shares, and friends combine shares to recover it.
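Under the hood this is standard Shamir: the key becomes the constant term of a random polynomial over a prime field, and each share is one point on that polynomial. A minimal Python sketch of the 3-of-5 math (illustration only; the tool’s actual share encoding and encryption layer are not shown here):

import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough to hold a 16-byte key

def split(secret: int, threshold: int = 3, shares: int = 5):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, shares + 1)]

def combine(points):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xj, yj in points:
        num = den = 1
        for xm, _ in points:
            if xm != xj:
                num = num * -xm % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
assert combine(split(key)[:3]) == key  # any 3 of the 5 shares suffice

Any three points determine the degree-two polynomial uniquely, while any two leave the constant term completely undetermined, which is exactly why no single friend can recover anything on their own.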
Different friend combinations can recover the file (any 3 of 5)
Add Bob’s and Carol’s shares (drag their README.txt files onto the page)
Watch the automatic decryption when threshold is met
This is the best way to understand what your friends would experience during a real recovery.
* The code is open source—you can read it on GitHub
* Everything runs locally in your browser; your files don’t leave your device
* Try the demo bundles first to see exactly how it works before using it with real secrets
I wanted a way to ensure trusted friends could access important files if something happened to me—without trusting any single person or service with everything. Shamir’s Secret Sharing seemed like the right approach, but I couldn’t find a tool that gave friends a simple, self-contained way to recover files together. So I built one. I’m sharing it in case it’s useful to others.
...
Read the original on eljojo.github.io »
You are a human: you know how this world behaves, how your team and colleagues behave, and what your users expect. You have experienced the world, and you want to work together with a system that has no experience of the world you live in. Every decision in your project that you don’t take and document will be taken for you by the AI.
Your responsibility to deliver quality code cannot be met if not even you know where long-lasting, difficult-to-change decisions are being made.
You must know which parts of your code need to be thought through and which must be rigorously tested.
Think about and discuss the architecture, interfaces, data structures, and algorithms you want to use. Think about how to test and validate your code to these specifications.
You need to communicate to the AI in detail what you want to achieve, otherwise it will result in code that is unusable for your purpose.
Other developers also need to communicate this information to the AI. That makes it efficient to write as much documentation as practical, in a standardized format, stored in the code repository itself.
Document the requirements, specifications, constraints, and architecture of your project in detail.
Document your coding standards, best practices, and design patterns.
Use flowcharts, UML diagrams, and other visual aids to communicate complex structures and workflows.
Write pseudocode for complex algorithms and logic to guide the AI in understanding your intentions.
Develop efficient debug systems for the AI to use, reducing the need for multiple expensive CLI commands or browsers to verify code functionality. This will save time and resources while simplifying the process for the AI to identify and resolve code issues.
For example: build a system that collects logs from all nodes in a distributed system and provides abstracted information like “The data was sent to all nodes” or “Data X is saved on node 1 but not on node 2”.
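As a toy Python sketch of that idea (the per-node data structure here is hypothetical), a helper that condenses raw node state into exactly the kind of abstracted statements quoted above:

def replication_report(node_data: dict[str, set[str]]) -> list[str]:
    # Condense per-node key sets into short facts an AI can read in one call.
    all_keys = set().union(*node_data.values())
    facts = []
    for key in sorted(all_keys):
        missing = [n for n in node_data if key not in node_data[n]]
        if not missing:
            facts.append(f'Data {key} was sent to all nodes')
        else:
            holders = [n for n in node_data if key in node_data[n]]
            facts.append(f'Data {key} is saved on {", ".join(holders)} '
                         f'but not on {", ".join(missing)}')
    return facts

print(replication_report({'Node 1': {'X', 'Y'}, 'Node 2': {'Y'}}))
#> ['Data X is saved on Node 1 but not on Node 2', 'Data Y was sent to all nodes']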
Not all code is equally important. Some parts of your codebase are critical and need to be reviewed with extra care. Other parts are less important and can be generated with less oversight.
Use a system that allows you to mark how thoroughly each function has been reviewed.
For example, you can use a prompt that has the AI put the comment //A behind functions it wrote, indicating that the function was written by an AI and has not yet been reviewed by a human.
AIs will eventually cheat and use shortcuts. They will write mocks, stubs, and hard-coded values to make the tests succeed while the code itself does not work and is often dangerous. AIs will even adapt or outright delete test code to let the code pass tests.
You must discourage this behavior by writing property-based, high-level specification tests yourself. Build them in a way that makes it hard for the AI to cheat without dedicating large, conspicuous code segments to the attempt.
For example, use property-based testing, restart the server, and check in between whether the database has the correct values.
Separate these tests so the AI cannot edit them, and prompt the AI not to change them.
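A sketch of such a test using the hypothesis library (the Inventory class is hypothetical; the point is that the property quantifies over arbitrary inputs, so a stubbed or hard-coded implementation cannot satisfy it):

from hypothesis import given, strategies as st

from myapp.inventory import Inventory  # hypothetical system under test

@given(st.lists(st.integers(min_value=1, max_value=10_000)))
def test_total_equals_sum_of_deposits(amounts):
    # Property: after any sequence of deposits, the persisted total
    # equals the sum of all deposits. There is no single return value
    # the AI could hard-code to make this pass for every generated list.
    inv = Inventory()
    for amount in amounts:
        inv.deposit(amount)
    assert inv.total() == sum(amounts)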
Let an AI write property-based interface tests for the expected behavior with as little context of the rest of the code as possible.
This will generate tests that are uninfluenced by the “implementation AI”, which prevents them from being adapted to the implementation in a way that makes them useless or less effective.
Separate these tests so the AI cannot edit them without approval and prompt the AI not to change them.
Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.
Save time and money by utilizing path specific coding agent prompts like CLAUDE.md.
You can generate them automatically, which will give your AI information it would otherwise have to gather from scratch every time.
Try to provide as much high level information as practical, such as coding standards, best practices, design patterns, and specific requirements for the project. This will help the AI to generate code that is more aligned with your expectations and will reduce lookup time and cost.
Identify and mark functions that have a high security risk, such as authentication, authorization, and data handling. These functions should be reviewed and tested with extra care and in such a way that a human has comprehended the logic of the function in all its dimensions and is confident about its correctness and safety.
Make this explicit with a comment like //HIGH-RISK-UNREVIEWED and //HIGH-RISK-REVIEWED to make sure that other developers are aware of the importance of these functions and will review them with extra care.
Make sure that the AI is instructed to change the review state of these functions as soon as it changes a single character in the function.
Developers must make sure that the status of these functions is always correct.
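A minimal sketch of what the convention can look like (shown here with Python-style # comments; the date and reviewer name are an illustrative format, not a prescribed one):

# HIGH-RISK-REVIEWED 2025-03-01 alice
def verify_session_token(token: str) -> bool:
    ...  # authentication logic, fully comprehended and signed off by a human

# HIGH-RISK-UNREVIEWED (AI modified this function after the last review)
def check_resource_permissions(user_id: str, resource_id: str) -> bool:
    ...  # must be re-reviewed before the next release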
Aim to reduce the complexity of the generated code where possible. Every line of code eats into your context window and makes it harder for both the AI and you to keep track of the overall logic of your code.
Each avoidable line of code costs energy and money and raises the probability that future AI tasks fail.
AI-written code is cheap; use this to your advantage by exploring different solutions to a problem through experiments and prototypes with minimal specifications. This lets you find the best solution without investing too much time and resources in any single approach.
Break down complex tasks into smaller, manageable pieces for the AI. Instead of asking the AI to generate the complete project or component at once, break it into smaller tasks, such as generating individual functions or classes. This will help you maintain control over the code and its logic.
You have to check each component or module for its adherence to the specifications and requirements.
If you have lost the overview of the complexity and inner workings of the code, you have lost control over your code and must restart from a state where you were in control of your code.
...
Read the original on heidenstedt.org »
Experimental - This project is still in development, and not ready for prime time.
A minimal, secure Python interpreter written in Rust for use by AI.
Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code.
Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds.
What Monty can do:
* Run a reasonable subset of Python code - enough for your agent to express what it wants to do
* Completely block access to the host environment: filesystem, env variables and network access are all implemented via external function calls the developer can control
* Call functions on the host - only functions you give it access to
* Run typechecking - Monty supports full modern Python type hints and comes with ty included in a single binary to run typechecking
* Be snapshotted to bytes at external function calls, meaning you can store the interpreter state in a file or database, and resume later
* Startup extremely fast (<1μs to go from code to execution result), and has runtime performance that is similar to CPython (generally between 5x faster and 5x slower)
* Be called from Rust, Python, or JavaScript - because Monty has no dependencies on CPython, you can use it anywhere you can run Rust
* Control resource usage - Monty can track memory usage, allocations, stack depth, and execution time and cancel execution if it exceeds preset limits
* Collect stdout and stderr and return it to the caller
* Run async or sync code, calling async or sync functions on the host
What Monty cannot do:
* Use the standard library (except a few select modules: sys, typing, asyncio, dataclasses (soon), json (soon))
* Use third-party libraries (like Pydantic); support for external Python libraries is not a goal
* Define classes (support should come soon)
* Use match statements (again, support should come soon)
In short, Monty is extremely limited and designed for one use case:
For motivation on why you might want to do this, see:
In very simple terms, the idea of all the above is that LLMs can work faster, cheaper and more reliably if they’re asked to write Python (or Javascript) code, instead of relying on traditional tool calling. Monty makes that possible without the complexity of a sandbox or risk of running code directly on the host.
Note: Monty will (soon) be used to implement codemode in Pydantic AI
Monty can be called from Python, JavaScript/TypeScript or Rust.
uv add pydantic-monty
from typing import Any

import pydantic_monty

code = """
async def agent(prompt: str, messages: Messages):
    while True:
        print(f'messages so far: {messages}')
        output = await call_llm(prompt, messages)
        if isinstance(output, str):
            return output
        messages.extend(output)

await agent(prompt, [])
"""

type_definitions = """
from typing import Any

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    raise NotImplementedError()

prompt: str = ''
"""

m = pydantic_monty.Monty(
    code,
    inputs=['prompt'],
    external_functions=['call_llm'],
    script_name='agent.py',
    type_check=True,
    type_check_stubs=type_definitions,
)

Messages = list[dict[str, Any]]

async def call_llm(prompt: str, messages: Messages) -> str | Messages:
    if len(messages) < 2:
        return [{'role': 'system', 'content': 'example response'}]
    else:
        return f'example output, message count {len(messages)}'

async def main():
    output = await pydantic_monty.run_monty_async(
        m,
        inputs={'prompt': 'testing'},
        external_functions={'call_llm': call_llm},
    )
    print(output)
    #> example output, message count 2

if __name__ == '__main__':
    import asyncio

    asyncio.run(main())
Use start() and resume() to handle external function calls iteratively, giving you control over each call:
import pydantic_monty

code = """
data = fetch(url)
len(data)
"""

m = pydantic_monty.Monty(code, inputs=['url'], external_functions=['fetch'])

# Start execution - pauses when fetch() is called
result = m.start(inputs={'url': 'https://example.com'})
print(type(result))
Both Monty and MontySnapshot can be serialized to bytes and restored later. This allows caching parsed code or suspending execution across process boundaries:
import pydantic_monty

# Serialize parsed code to avoid re-parsing
m = pydantic_monty.Monty('x + 1', inputs=['x'])
data = m.dump()

# Later, restore and run
m2 = pydantic_monty.Monty.load(data)
print(m2.run(inputs={'x': 41}))
#> 42

# Serialize execution state mid-flight
m = pydantic_monty.Monty('fetch(url)', inputs=['url'], external_functions=['fetch'])
progress = m.start(inputs={'url': 'https://example.com'})
state = progress.dump()

# Later, restore and resume (e.g., in a different process)
progress2 = pydantic_monty.MontySnapshot.load(state)
result = progress2.resume(return_value='response data')
print(result.output)
#> response data
use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};

let code = r#"
def fib(n):
    if n
MontyRun and RunProgress can be serialized using the dump() and load() methods:
use monty::{MontyRun, MontyObject, NoLimitTracker, StdPrint};
// Serialize parsed code
...
Read the original on github.com »
This is a demo of how you can turn an ESP32-S3 microcontroller into a tiny instant-on PC with its own shell, editor, compiler, and online apps installer. Something like a Raspberry Pi, minus the overhead of a full server/desktop-grade OS. I think the ESP32 is underrated in the hobby maker community for this PC-like use case. This demo uses BreezyBox, my mini-shell ESP-IDF component.
First of all, seeing is believing (click to watch the video):
It started as a “cyberdeck” style crafting project. Then I got carried away with the software part. I chose the ESP32-S3 for the base platform. It has the nostalgic appeal of DOS-era PCs, with similar resources and an elbow-deep-in-bytes coding experience, plus modern wireless comms.
ESP32-S3 can do everything those PCs did and more, but that is inconvenient out of the box, because that is not the commercial use case it is positioned for. It also forces away the code bloat. If you are like me, and love small elegant things, and technology that punches way above its weight, you ought to try it!
So anyway, I decided to try and package some key missing parts: a basic vterm, the current working directory (CWD) tracking, a few familiar UNIX-like commands, and an app installer. Believe it or not, the rest is already there in ESP-IDF components, including the elf_loader with dynamic linking.
The result is called “BreezyBox”, by analogy with the BusyBox commands suite. The name is just a light joke, it is not meant to be a full clone. You can import it with one command in your ESP-IDF project, and if you have some stdio going, even at “Hello World” level, it should mostly just work. I call it a “mini shell”, a naïve user might call it an OS (it is not, it runs on FreeRTOS), and you can also call it the userland layer.
The BreezyBox component leaves the display and other board configuration details to the user’s firmware project, providing mainly the vterm/vfs features, and some shell commands. This particular example/demo project supports only one specific dev board: Waveshare ESP32-S3-Touch-LCD-7B (no affiliation). But you can see how all the parts connect, and adapt it to your display/board, or just copy some code snippets from here.
I suggest you just fork it, clone it, and try to make it work on your board. Mine was about 40€; you can start with some random $10 two-inch LCD S3 dev board if you like. Hint: the LVGL text label control is the easiest path to stdout on an LCD and works almost everywhere. You can also start with a headless board over USB console; that takes zero code and gives you free ANSI codes in the standard IDF Monitor in VSCode (or in Tabby).
You do not have to write your own font renderer like I did here; that was just to push past 30 FPS on a display slightly too large for this chip.
This is free software under MIT License.
The best help is currently more testing beyond “works on my computer”, more shared examples and fun use cases:
More ELF apps — see the examples at my breezyapps repo, they are super easy to follow. Even a carefully written stdlib C program with no platform-specific bits may work sometimes, also with some ANSI codes. But be sure to verify on the actual ESP32-S3: the memory is tight, the larger PSRAM requires alignment, and there are other limits and quirks. You can publish and install the apps using your own repo.
More full example firmware repositories: for different boards, with different styles. Maybe you provide the basic LVGL text label example on some popular board. Maybe you prefer C++ to plain C. Maybe you embrace the GUI. Maybe you port some retro games. Maybe you even make it work on P4, or C6 (RISC-V, a completely different CPU). Maybe you attach some cool gadgets to it. Maybe you build an extra cool cyberdeck case. Or maybe you reproduce the exact same thing, and just share your setup experience and hands-on impressions.
It would be so cool to see more people using BreezyBox, and to have more ready-to-clone examples for everyone!
...
Read the original on github.com »
The staggering and fast-growing cost of AI datacenters is a call for performance engineering like no other in history; it’s not just about saving costs — it’s about saving the planet. I have joined OpenAI to work on this challenge directly, with an initial focus on ChatGPT performance. The scale is extreme and the growth is mind-boggling. As a leader in datacenter performance, I’ve realized that performance engineering as we know it may not be enough — I’m thinking of new engineering methods so that we can find bigger optimizations than we have before, and find them faster. It’s the opportunity of a lifetime and, unlike in mature environments of scale, it feels as if there are no obstacles — no areas considered too difficult to change. Do anything, do it at scale, and do it today.
Why OpenAI exactly? I had talked to industry experts and friends who recommended several companies, especially OpenAI. However, I was still a bit cynical about AI adoption. Like everyone, I was being bombarded with ads by various companies to use AI, but I wondered: was anyone actually using it? Everyday people with everyday uses? One day during a busy period of interviewing, I realized I needed a haircut (as it happened, it was the day before I was due to speak with Sam Altman).
Mia the hairstylist got to work, and casually asked what I do for a living. “I’m an Intel fellow, I work on datacenter performance.” Silence. Maybe she didn’t know what datacenters were or who Intel was. I followed up: “I’m interviewing for a new job to work on AI datacenters.” Mia lit up: “Oh, I use ChatGPT all the time!” While she was cutting my hair — which takes a while — she told me about her many uses of ChatGPT. (I, of course, was a captive audience.) She described uses I hadn’t thought of, and I realized how ChatGPT was becoming an essential tool for everyone. Just one example: She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
I had previously chatted to other random people about AI, including a realtor, a tax accountant, and a part-time beekeeper. All told me enthusiastically about their uses of ChatGPT; the beekeeper, for example, uses it to help with small business paperwork. My wife was already a big user, and I was using it more and more, e.g. to sanity-check quotes from tradespeople. Now my hairstylist, who recognized ChatGPT as a brand more readily than she did Intel, was praising the technology and teaching me about it. I stood on the street after my haircut and let sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet. Joining OpenAI might be the biggest opportunity of my lifetime.
It’s nice to work on something big that many people recognize and appreciate. I felt this when working at Netflix, and I’d been missing that human connection when I changed jobs. But there are other factors to consider beyond a well-known product: what’s my role, who am I doing it with, and what is the compensation?
I ended up having 26 interviews and meetings (of course I kept a log) with various AI tech giants, so I learned a lot about the engineering work they are doing and the engineers who do it. The work itself reminds me of Netflix cloud engineering: huge scale, cloud computing challenges, fast-paced code changes, and freedom for engineers to make an impact. Lots of very interesting engineering problems across the stack. It’s not just GPUs, it’s everything.
The engineers I met were impressive: the AI giants have been very selective, to the point that I wasn’t totally sure I’d pass the interviews myself. Of the companies I talked to, OpenAI had the largest number of talented engineers I already knew, including former Netflix colleagues such as Vadim who was encouraging me to join. At Netflix, Vadim would bring me performance issues and watch over my shoulder as I debugged and fixed them. It’s a big plus to have someone at a company who knows you well, knows the work, and thinks you’ll be good at the work.
Some people may be excited by what it means for OpenAI to hire me, a well known figure in computer performance, and of course I’d like to do great things. But to be fair on my fellow staff, there are many performance engineers already at OpenAI, including veterans I know from the industry, and they have been busy finding important wins. I’m not the first, I’m just the latest.
AI was also an early dream of mine. As a child I was a fan of British SciFi, including Blake’s 7 (1978-1981) which featured a sarcastic, opinionated supercomputer named Orac. Characters could talk to Orac and ask it to do research tasks. Orac could communicate with all other computers in the universe, delegate work to them, and control them (this was very futuristic in 1978, pre-Internet as we know it).
Orac was considered the most valuable thing in the Blake’s 7 universe, and by the time I was a university engineering student I wanted to build Orac. So I started developing my own natural language processing software. I didn’t get very far, though: main memory at the time wasn’t large enough to store an entire dictionary plus metadata. I visited a PC vendor with my requirements and they laughed, telling me to buy a mainframe instead. I realized I needed it to distinguish hot versus cold data and leave cold data on disk, and maybe I should be using a database… and that was about where I left that project.
Last year I started using ChatGPT, and wondered if it knew about Blake’s 7 and Orac. So I asked:
ChatGPT’s response nails the character. I added it to Settings->Personalization->Custom Instructions, and now it always answers as Orac. I love it. (There’s also surprising news for Blake’s 7 fans: A reboot was just announced!)
I am now a Member of Technical Staff for OpenAI, working remotely from Sydney, Australia, and reporting to Justin Becker. The team I’ve joined is ChatGPT performance engineering, and I’ll be working with the other performance engineering teams at the company. One of my first projects is a multi-org strategy for improving performance and reducing costs.
There’s so many interesting things to work on, things I have done before and things I haven’t. I’m already using Codex for more than just coding. Will I be doing more eBPF, Ftrace, PMCs? I’m starting with OpenAI’s needs and seeing where that takes me; but given those technologies are proven for finding datacenter performance wins, it seems likely — I can lead the way. (And if everything I’ve described here sounds interesting to you, OpenAI is hiring.)
I was at Linux Plumbers Conference in Tokyo in December, just after I announced leaving Intel, and dozens of people wanted to know where I was going next and why. I thought I’d write this blog post to answer everyone at once. I also need to finish part 2 of hiring a performance engineering team (it was already drafted before I joined OpenAI). I haven’t forgotten.
It took months to wrap up my prior job and start at OpenAI, so I was due for another haircut. I thought it’d be neat to ask Mia about ChatGPT now that I work on it, then realized it had been months and she could have changed her mind. I asked nervously: “Still using ChatGPT?”. Mia responded confidently: “twenty-four seven!”
I checked with Mia, she was thrilled to be mentioned in my post. This is also a personal post: no one asked me to write this.
...
Read the original on www.brendangregg.com »
Browse by range of dating or by category, New Testament — Apocrypha — Gnostics — Church Fathers — Other, or use the search box.
Early Christian Writings is the most complete collection of Christian texts before the Council of Nicaea in 325 AD. The site provides translations and commentary for these sources, including the New Testament, Apocrypha, Gnostics, Church Fathers, and some non-Christian references. The “Early Christian Writings: New Testament, Apocrypha, Gnostics, Church Fathers” site is copyright © Peter Kirby. Permission is given to link to any HTML file on the Early Christian Writings site.
...
Read the original on earlychristianwritings.com »