Meet the ZedRipper — a 16-core, 83 MHz Z80 powerhouse as portable as it is impractical. The ZedRipper is my latest attempt to build a fun ‘project’ machine, with a couple of goals in mind:
* Finally use one of the giant FPGA boards I had lying around
* Play a little ‘alternate-history computer engineering’ with a hardware-focused approach to multitasking
* Build a machine that I could write fun, small programs for on my daily train ride
* Build a platform that would allow for relatively easy computer-architecture experiments
For those that don’t have time for a wall of text about impractical computer architecture…
What is this beast?
The ZedRipper is basically my attempt to build the ultimate CP/M 2.2 computer.
* 64KB of dedicated RAM for each Z80
* All CPUs and devices connected with a fully-synchronous, uni-directional ring network operating at 83 MHz
* 128MB of storage on SD Card (available via 16 x 8MB disk drives in CP/M)
* A ‘server’ core that boots into CP/M 2.2 and runs a CP/NET file server (written in Turbo Pascal 3 on the machine!) allowing shared access to the SD Card
* 15 ‘client’ cores running CP/NOS from ROM. Each client can access the shared storage and run any CP/M 2.2 programs without resource contention with the other cores.
The Road Not Taken
Is that a game of Chess and Planetfall to distract me from my Turbo Pascal editor?
My adventures with porting a game to my Kaypro left me with surprisingly warm feelings towards this primitive, 40-year-old operating system, and I had an idea that I wanted to explore — what if history had taken a different turn, and personal computers had gone down the multi-CPU path right from the start? Even in the 1980s the CPUs themselves (and pretty quickly, the RAM, too) were fairly cheap, but multi-tasking for personal computers was exclusively focused on a ‘time-slicing’ approach whereby one big resource (the RAM or the CPU) got split between competing programs. The hardware just wasn’t really up to the task (and it was extremely difficult to make programs for OSes like DOS play nicely with one another) until we got well into the 386 era and computers with 4MB+ of RAM.
In the course of my historical computing hobbies, I stumbled upon something that I thought was very fascinating — relatively early in its history, CP/M supported a ‘networked’ version called CP/NET. The idea behind it was one that will still feel pretty familiar to most people — that an office might have one or two ‘real’ machines with large disk drives and printers, shared with ‘thin-client’ style machines that were basically just terminals with CPUs and RAM attached. Each user could act as if they had their own private CP/M machine with access to large disks and printers.
As I mentioned, the CPU and RAM (typically a Z80 with 64KB of DRAM) weren’t terribly expensive, but all of the trappings required to make something a useful computer (disks, printers, monitors, etc.) really added up. Giving a single user multiple CPU+RAM pairs just felt too decadent at the time for anyone to seriously consider it. Even CP/M went the time-sliced multi-tasking route with the MP/M OS.
I found a company called Exidy that came the closest — in 1981 they released their “Multi-NET 80” machine, which allowed up to 16 Z80+RAM cards to be added to it, but it was once again designed to serve 16 individual users rather than a power user with 16 simultaneously running programs.
Fast-forward 40 years, and transistors are very cheap indeed. I inherited some pretty monster FPGA boards (Stratix IV 530GX parts) following a lab cleanup, and was looking for something fun to do with one of them. I had stumbled upon Grant Searle’s extremely fun “Multi-Comp” project at some point, and it was pretty easy to get a single-CPU CP/M machine up and running. But I wanted more. I had 530,000 LUTs and megabytes of on-die block RAM just waiting for a cool idea. I decided to go big and see if I could build my own multi-core CP/M machine with true multitasking — nothing clever, just brute force.
Getting the software up and running
I took a pretty hardware-centric approach to this project, and I didn’t actually write a single line of assembly. CPU 0 boots straight from the ROM Grant provided for his multi-comp project, and the other nodes actually boot from a 4KB CP/NOS ROM I found from an Altair simulator.
Both ROMs expect to interface with a serial terminal with a pretty standard interface, and the CP/NOS clients expect another serial port connected to a server. As custom logic is basically free on such a large FPGA, I designed some custom address-decoding logic that makes each CPU’s Z-Ring interface appear where it’s expected in the I/O address map.
The heart of the ZedRipper is one of these monsters sporting a Stratix IV 530GX FPGA. An HSMC breakout card is used to drive the display, receive data from the keyboard controller and connect to the SD Card. A new firmware image is actually uploaded over Ethernet, so the Ethernet port is routed to the side of the case, along with the SD Card adapter and a (currently unused) slot for an external serial port.
The keyboard and conspicuous hole where a future pointing device will go
I had a compact PS/2 keyboard lying around (salvaged from one of my old laptop projects, actually) that I wanted to interface with the 2.5V I/O on my FPGA. I decided to go the ‘easy’ route, and toss in a Teensy 2.0 microcontroller.
The keyboard controller hot-glued to the underside of the keyboard
This does the PS/2-to-ASCII translation, and also allows easy mapping of some of the weirder keys (like F1-F12) to ‘magic’ terminal sequences for convenience. The Teensy then outputs bytes to the Z80 over a 9600 baud UART (with a simple resistor voltage divider to change the 5V output into 2.5V for the FPGA). Given that this whole project is basically cobbled together from things lying around my workshop, this was a convenient solution that worked out quite well.
The boot screen with the server running in the upper left and three user programs running on separate CPU cores
The display is a 1280×800 10.1″ display that accepts VGA input. The FPGA uses a simple resistor network to generate up to 64 colors (R2G2B2). The screen requires an 83.33 MHz pixel clock (1280×800@60Hz), so for simplicity’s sake, the entire design runs synchronously at that frequency.
Grant’s Multicomp project included VHDL code for a basic ANSI-compatible terminal. I re-wrote the terminal logic in Verilog (just for my own sanity), and then designed a video controller that supports 16 fully independent terminals, all connected via a single Z-Ring node. The 1280×800 display is effectively treated as a 160×50 character-based display (using an 8×16 font), and each terminal acts like an 80×25 ‘sprite’ that can be re-positioned anywhere on the screen (with a priority list to configure the order of precedence for the terminals being drawn). As each terminal is fully independent, it contains its own state machine, along with a 2KB character RAM and 2KB ‘attribute’ RAM (to hold the color information). Each character supports a 4-bit foreground and background color. Since all of the terminals must maintain the same character alignment, any given 8×16 ‘cell’ on the screen can only contain a single character, and all 16 terminals can share a 2KB ROM containing the font. In total then, the display logic uses up around 66KB of Block RAM.
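As a quick sanity check on that 66KB figure, here is a back-of-the-envelope tally (a sketch in Python; the sizes are simply the 2KB blocks described above):

    # Per-terminal storage for the 16 windowed terminals described above.
    TERMINALS = 16
    CHAR_RAM = 2 * 1024      # 80x25 characters, rounded up to a 2 KB block
    ATTR_RAM = 2 * 1024      # 4-bit foreground + 4-bit background per character
    FONT_ROM = 2 * 1024      # one shared 8x16 font ROM for all terminals

    total_bytes = TERMINALS * (CHAR_RAM + ATTR_RAM) + FONT_ROM
    print(total_bytes, total_bytes / 1024)   # 67584 bytes, i.e. 66 KB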
The general effect of this is that I have an extremely simple window manager for my CP/M terminals, almost entirely in hardware. This is one of the areas that’s most fertile for exploring — at the moment only the server CPU is capable of re-positioning the terminals, but I have longer term plans to add in a mouse-like positioning device to allow a hardware-only mechanism for dragging windows around and changing the display priority.
As the terminal controller is just another node on the Z-Ring (and the Z-Ring interface for each Z80 is straightforward to re-target), future plans include possibly adding a ‘full-screen’ 160×50 terminal (possibly as a ‘background’) and an actual 1280x800x64-color bitmapped display using some of the fast external SRAM on the board.
Conjuring a pile of Z80s into existence is as easy as writing a generate loop in Verilog, but how to connect them up in a sane way? One thing I’ve learned from my day job is that designing a network can be hard, so I had a few general goals in mind for this one.
As I mentioned earlier, my Z80s were expecting to interface with some serial ports, so the interface was fairly simple — make it look like a serial port! At its core, the Z-Ring is a synchronous, uni-directional ring network that uses credits for flow control. Each node contains a 1-byte receive buffer for every other node on the network. Coming out of reset then, each node has 1 ‘credit’ for every other node on the network. The design is parameterized, so it could easily scale up to hundreds of nodes with only a bit more logic, but as it’s currently implemented the Z-Ring supports up to 32 nodes (so each node requires a 32-byte buffer).
The actual ‘bus’ consists of a valid bit, a ‘source’ ID, a ‘destination’ ID and a 1-byte payload (so 19 bits wide). I think it would be pretty straightforward to implement this using TTL logic (if one found oneself transported back to 1981 and couldn’t use FPGAs). Each ‘node’ has 2 pipelined sets of flops on the bus — stage 0 and stage 1 — and when you inject a message, it waits until stage 0 is empty before muxing it into stage 1. Messages are injected at the ‘source’ node and travel around the ring until they reach their destination node, at which point they land in the corresponding buffer and update a ‘data ready’ flag. When the receiving node reads from the buffer, it ‘re-injects’ the original message, which continues around the ring until it reaches the source again, thus returning the credit. A ‘feature’ of this scheme is that if you do send a packet to a non-existent address, the credit will be automatically returned to you when it loops back around.
As each stop on the ring consists of 2 pipeline stages, and there is no backpressuring, each message takes no more than 2*(number of nodes) cycles to be delivered. The current implementation has 17 nodes (16 CPUs + the display/keyboard controller) and runs with a 12 ns clock, so to deliver a message and receive the credit back you are looking at a minimum of ~400 ns. The display controller can basically sink traffic as quickly as it arrives, so each CPU has ~2-2.5 MB/s of bandwidth to its own terminal (with enough shared bandwidth on the bus to accommodate all 16 CPUs), which is quite a bit as far as terminals go.
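To make those numbers concrete, here is a small Python sketch of the arithmetic, assuming 5-bit source and destination IDs (enough for the 32-node limit mentioned above):

    VALID_BITS, ID_BITS, PAYLOAD_BITS = 1, 5, 8
    bus_width = VALID_BITS + 2 * ID_BITS + PAYLOAD_BITS   # 19 bits

    NODES = 17                 # 16 CPUs + the display/keyboard controller
    STAGES_PER_NODE = 2
    CLOCK_NS = 12              # the 83.33 MHz pixel clock

    loop_cycles = NODES * STAGES_PER_NODE                 # one full trip around the ring
    credit_roundtrip_ns = loop_cycles * CLOCK_NS          # ~408 ns to deliver and get the credit back

    # With a single credit per destination, a CPU can have one byte in flight at a time:
    bytes_per_sec = 1e9 / credit_roundtrip_ns             # ~2.45 MB/s per CPU-to-terminal stream
    print(bus_width, credit_roundtrip_ns, bytes_per_sec / 1e6)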
The current implementation is perfectly adequate to get things up and running, but there are a number of pretty straightforward improvements that could be made:
* Adding deeper receive buffers would potentially allow much higher bandwidth from a given node — there are plenty of free 1KB block RAMs on the FPGA, which would allow 32 credits x 32 nodes, so each CPU would in theory be capable of saturating the bus.
* Add support for an ‘address’ mode — adding a 16-bit (or more!) address would allow DMA operations between nodes (and adding a simple DMA engine to each node would be pretty easy). The FPGA board has a ton of extra hardware (several megabytes of varying static RAMs, and a gigabyte or so of DDR3) that could be potentially fun to interface with.
* Add some sort of flow-control (and buffering) between nodes to allow more flexible decoupling.
But I’m perfectly content to leave those for a future rainy day for now.
The FPGA dev board requires a 14V-20V input, while the display requires a 12V input, and the Teensy and PS/2 keyboard require a 5V input. Conveniently, the FPGA board has 3.3V, 5V and 12V regulators that are relatively easy to tap into, so the FPGA board accepts power directly from a beefy 5000 mAh / 14.4V LiPo battery pack and then supplies power to all of the other devices. One of the trickier bits of this project was that I didn’t want to have to disassemble the laptop to re-charge it, but the battery has both the normal +/- power connector, as well as a ‘balance’ connector that connects to each individual cell for recharging purposes. My somewhat ‘meh’ solution to this was to have the power switch toggle between connecting the main supply to the FPGA and to a charging plug (along with the balance connector) in a little internal compartment exposed by a sliding door. It’s kind of awkward, but you can just slide the door open and fish out the connectors to plug into the charger without needing to break out an M3 hex key.
I haven’t actually tested it properly, but the battery lasts for 3+ hours (which is more than adequate to cover my daily train ride). If I had to guess it’s probably closer to the ~6 hour range without any power optimization effort on my part. It doesn’t support simultaneous charging / usage, but the battery life is sufficiently good that it hasn’t been a problem.
The case is fairly standard ‘hackerspace’ construction — a combination of laser-cut 3mm plywood and 3D printed plastic for everything else. I sprung for proper position-control hinges for the screen, so it feels like a relatively normal (if somewhat less svelte) laptop when you’re using it. I wanted to give it some 1980s flair, so the screen actually has some “Cray”-ish angles at the top, and there is a pleather wrist-rest. The actual edge of the laser-cut plywood is pretty uncomfortable against your wrists while typing, so the wrist-rest is surprisingly functional.
I haven’t tried any actual CP/M benchmarking programs (I assume there are some out there, but I’ve never looked very hard), but, as this machine was mostly built with writing Turbo Pascal in mind, I did at least try some micro benchmarks. I can do between 15k-35k floating point operations/sec (using the 48-bit Real type in TP), and ~1 million integer operations/sec (using the 16-bit Integer type in TP), so all-in-all not too bad for an 8-bit CPU and a fairly nice programming environment.
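For a rough sense of scale, converting those rates to cycles per operation at the 83.33 MHz core clock (a back-of-envelope sketch; the resulting figures naturally include the benchmark’s loop overhead):

    CLOCK_HZ = 83.33e6
    int_ops_per_sec = 1.0e6       # ~1 million 16-bit Integer operations/sec
    real_ops_per_sec = 25_000     # midpoint of the 15k-35k Real operations/sec range

    print(CLOCK_HZ / int_ops_per_sec)    # ~83 cycles per 16-bit Integer operation
    print(CLOCK_HZ / real_ops_per_sec)   # ~3300 cycles per software 48-bit Real operation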
Designing a floating point accelerator might be a fun project some day, and there are plenty of logic resources to support it.
As I’ve mentioned before, all of the logic so far is pretty lightweight, occupying a mere 7% of on-chip logic resources (although ~40% of the total on-chip block RAM and 100% of the big M144K block RAMs).
There is plenty of room for fun experimentation going forward (and remarkably, compiling this project only takes ~10 minutes).
I have immediate plans (as in, I have the hardware lying around, I just haven’t had time to solder it yet) for the following:
* Stain and seal things! It’s made of thin plywood. It really wants to be coated in something.
* Joystick-like pointing device — to be connected to the Teensy that acts as a keyboard controller and fill that conspicuous hole.
* Battery monitoring — once again, the ADC on the Teensy is going to provide some lightweight battery monitoring so that I have some idea how charged things are.
* WiFi — I have an ESP32 lying around waiting to run Zimodem! Coupled with my phone in wifi hotspot mode, it should allow me to have ’net access on the go =) There are good terminal apps available for CP/M, but it would be fun to try to write things like an IRC client or a very simple web browser. It also allows convenient use of kermit for file transfers to a modern computer running Linux.
* Add an externally-accessible serial port for communicating with another machine (there is already a 3D-printed slot for the connector, I just need to wire it in).
* Status LED! There’s already a mounting hole in the front — current plan is to connect it to the SD Card’s drive access signal.
Longer term, there are lots of neat hardware ideas that might be fun to experiment with:
* How fast can you make a Z80 go? The first step would be to decouple the CPU speed from the pixel clock, but it would also be fun to try applying some modern computer architecture techniques to a Z80 (pipelining, register re-naming, branch prediction, wider memory for pre-fetching, etc.).
* Similarly, adding custom accelerators for things like floating point might be fun. There are 1024 completely unused DSP blocks on this chip, and I bet no one has tried to build an accelerator for the 48-bit Real format that Turbo Pascal uses.
* Use the existing hardware! This development board is brimming with unused memory (several megabytes of varying static RAMs, and a gigabyte or so of DDR3).
* Better video hardware! The first step would probably be to add support for a ‘full-screen’ 160×50 terminal and the ability to scale a regular 80×25 terminal up by 2x. The aforementioned external SSRAM would also make it quite straightforward to add a full 1280×800, 6-bit, fully bit-mapped display.
* Expand the capabilities of the current terminal — I think I could add compatibility with the ADM-3A-ish terminal (plus graphics support) used by the Kaypro/84 series, so that way I would have access to a slightly larger set of software (and not have to port DD9!). I could also probably think of custom escape sequences that might be convenient to add.
I’ve only had the machine up and running for a few days, but I’ve got to say, it’s pretty great. The screen is nice and clear, the keyboard is spacious and comfortable, and while it’s bulky, it doesn’t actually weigh all that much (and still easily fits in my backpack). It’s even surprisingly ergonomic to use on the train.
Usage-wise, I also think I’m really on to something. Just the ability to have a text editor open for taking notes in one window while I’m debugging some Turbo Pascal code in another window is extremely convenient (or taking notes while playing Zork!). It feels like this could have been a genuinely viable approach towards building a low-cost, multi-tasking CP/M environment.
Itching to build your own?
I don’t actually have an easy way to get files *off* of the machine yet, so for now the most useful part (the CP/NET file server written in Turbo Pascal) is kind of trapped on the machine. Stay tuned for a future update with all of the Verilog and TP code though (and shoot me an e-mail if you really can’t wait). At some point I should probably join the 21st century and get a GitHub account, too. Alas, that whole ‘free time’ thing…
Record and share your terminal sessions, the right way.
Forget screen recording apps and blurry video.
Enjoy a lightweight, purely text-based approach to terminal recording.
Supports Linux, macOS and *BSD
asciinema [as-kee-nuh-muh] is a free and open source solution for recording
terminal sessions and sharing them on the web. Read about how it works.
Record right where you work - in a terminal. To start just run asciinema rec, to finish hit Ctrl-D or type exit.
Any time you see a command you’d like to try in your own terminal, just pause the player and copy-paste the content you want. It’s just text after all!
Easily embed an asciicast player in your blog post, project documentation page or in your conference talk slides.
Things that are illegal to build in most American cities now, a thread:
Lawmakers of both parties echoed those worries on Tuesday, threatening to take action if the companies didn’t satisfy their concerns.
“You’re going to find a way to do this, or we’re going to do this for you,” said Senator Lindsey Graham, Republican of South Carolina and the chairman of the Judiciary Committee. “You’re either the solution or you’re the problem.”
If Mr. Barr wants to push the issue with Facebook or another tech company, he could take the issue to court, as the government did during the fight over encryption with Apple in 2016. In that case, the Justice Department had secured a search warrant for the phone of an attacker in the San Bernardino shooting. Prosecutors successfully pursued a court order compelling Apple’s assistance. Apple opposed the order. But when the agency found another way to unlock the phone, it dropped the case.
Throughout the hearing on Tuesday, Facebook and Apple representatives said the companies were committed to working with law enforcement. The witness from Facebook detailed how the company could detect malicious content despite encryption.
Encrypting its messaging products is the central aspect of Facebook’s plan to rebrand itself as privacy focused, after being battered for years by revelations that it mishandled user data. But it has also put the company, which is already the subject of consumer privacy and antitrust investigations, on another collision course with governments around the world.
A new seat design comes with an innovative solution to this inflight issue, using “padded wings” that fold out from behind both sides of the seat back — allowing both for additional privacy and a cushioned spot to rest heads for some shut-eye.
CNN Travel went along to find out more about what makes this seat different, and to test out just how comfy this concept really is.
Interspace is the brainchild of Luke Miles, New Territory’s founder and chief creative officer. He spent three years working as Head of Design at Virgin Atlantic, so he knows his aircraft interiors inside out.
The designer tells CNN Travel he’d noticed how innovative airplane cabin designs usually focus on business or first class experiences and he wanted to come up with a way to make the cheap seats comfier.
“We’re really keen as a business on trying to — it sounds a bit cliche — but trying to push some innovation back into the majority,” says Miles.
The wings on Interspace fold manually in and out of the chair. This allows for a streamlined look, and easy access to move up and down the row.
At the London launch, there were two seats on show, one depicting what it’s like to recline and the other in an upright position. Both seats allow guests to play around with the wings’ settings.
The flexibility is intriguing. While it does seem that a whole cabin of passengers unfurling the wings at once might be a bit chaotic, it’d be pretty great to have that ability to change things up during a long flight.
Seat designs already in circulation have experimented with built-in neck cushioning. Cathay Pacific’s economy seats on its A350 aircraft, for example, allow fliers to move their headrest into six different positions.
But Miles took the radical move of eradicating the headrest altogether, pointing out that chairs at home or in the ofﬁce don’t typically include them.
The seat prototype on show at the launch was a carbon fiber, lightweight design — but the designer insists the wings could be fitted to most existing seats, whatever their material.
“It’s just about very subtle, technological enablers, to just make the whole thing feel a bit more empathetic to you,” says Miles.
At the 2019 Aircraft Interiors Expo (AIX), CNN Travel tested out Airbus’ couch-style airplane seating idea, which combines three economy seats into one, allowing traveling partners to lounge together like they would in their living room, or solo flyers to stretch out and get comfy.
Biologist Aristid Lindenmayer created L-systems, or Lindenmayer systems, in 1968 as a way of formalizing patterns of bacteria growth. L-systems are a recursive, string-rewriting framework, commonly used today in computer graphics to visualize and simulate organic growth, with applications in plant development, procedural content generation, and fractal-like art.
The following describes L-system fundamentals, how they can be visually represented, and several classes of L-systems, like context-sensitive L-systems and stochastic L-systems. Much of the following has been derived from Przemyslaw Prusinkiewicz and Lindenmayer’s seminal work, The Algorithmic Beauty of Plants.
Fundamentally, an L-system is a set of rules that describe how to iteratively transform a string of symbols. A string, in this context, is a series of symbols that can be thought of as a word comprised of characters. Each rule, known as a production, describes the transformation of one symbol to another symbol, a series of symbols, or no symbol at all. On each iteration, the productions are applied to each character simultaneously, resulting in a new series of symbols.
Productions in this rewriting system can be described with “before” and “after” states, often described as the predecessor and successor; for example, a production a → ab states that the symbol a is rewritten as the symbols ab on every iteration. The length of derivation, or the number of iterations, is typically written n. Given a starting word and a set of productions, each derivation step rewrites every symbol of the word simultaneously, producing a new word.
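The inline symbols in the paragraph above were lost in formatting, so as a stand-in, here is a minimal Python sketch of such a rewriting system using the classic example from The Algorithmic Beauty of Plants (axiom b, productions a → ab and b → a):

    def derive(axiom, productions, iterations):
        """Apply every matching production to each symbol simultaneously."""
        word = axiom
        for _ in range(iterations):
            word = "".join(productions.get(symbol, symbol) for symbol in word)
        return word

    productions = {"a": "ab", "b": "a"}
    for n in range(6):
        print(n, derive("b", productions, n))
    # b, a, ab, aba, abaab, abaababa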
L-systems are formalized as a tuple with the following definition:
G = (V, ω, P)
Where the components are:
* V, the alphabet, or all potential symbols in the string.
* ω, the starting word, also known as the axiom, comprised of symbols from V.
* P, a series of productions describing the transformations or rules.
The L-system in Figure 1 can be formalized by defining its axiom (ω) and a series of productions (P).
The alphabet of all valid symbols can be inferred. It is implied that a symbol without a matching production has an identity production, e.g. F → F.
This fundamental form of an L-system is described as a deterministic, context-free L-system, or D0L-system (sometimes written DOL-system). D0L-systems are context-free, meaning that each predecessor is transformed regardless of its position in the string and its neighbors. Deterministic L-systems always produce the same result given the same configuration, as there is only one matching production for each predecessor.
L-systems can be represented visually via turtle graphics, of Logo fame. While L-systems are string rewriting systems, these strings are comprised of symbols, each of which can represent some command. A turtle in computer graphics is similar to a pen plotter drawing lines in a 2D space. Imagine giving instructions to a pen plotter to draw a square: “draw 1cm. turn right. draw 1cm. turn right. draw 1cm. turn right. draw 1cm”. Though plotters don’t really have an orientation, an L-system’s turtle can be represented by Cartesian coordinates x and y, and an angle that describes its forward direction. From there, symbols in a string can represent commands to change the state of the turtle.
To move a turtle around in 2D, symbols must be chosen to represent movement and rotation. The symbols F, + and − will be used here, as they are commonly selected for these commands in L-system interpreters: F moves the turtle forward while drawing a line, and + and − rotate it in place. After deriving the result of an L-system using its production rules, the string can then be parsed from left to right, with these symbols modifying the turtle state.
The variables d and δ are global values indicating the magnitude of each symbol’s movement or rotation. In non-parametric L-systems, each symbol’s rotation and movement magnitude is a constant in the system.
Following the line in Figure 2 from the bottom left corner, the string can be read as “forward, forward, forward, right, forward, forward, right…” and so on.
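As a concrete illustration, here is a minimal Python turtle interpreter for the F, + and − symbols described above (the step and angle parameters stand in for the global d and δ values):

    import math

    def run_turtle(word, step=1.0, angle_deg=90.0):
        x, y, heading = 0.0, 0.0, 0.0          # turtle state: position and direction
        points = [(x, y)]
        for symbol in word:
            if symbol == "F":                  # move forward, drawing a line
                x += step * math.cos(math.radians(heading))
                y += step * math.sin(math.radians(heading))
                points.append((x, y))
            elif symbol == "+":                # turn one way by the fixed angle
                heading += angle_deg
            elif symbol == "-":                # turn the other way
                heading -= angle_deg
        return points

    print(run_turtle("FFF+FF+"))               # trace a simple path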
A turtle may be decoupled from an L-system. The L-system has a starting string and a set of productions and outputs the resulting string. A turtle may take that final string as an input, and output some visual representation. For example, many of the illustrations shown here use the same L-system solvers, while using different turtles where appropriate, like one turtle built using CanvasRenderingContext2D and another using WebGL.
Space-filling curves can be formalized via L-systems, resulting in a recursive, fractal-like pattern. More specifically, FASS curves, defined as space-filling, self-avoiding, simple, and self-similar. That is, a single, non-overlapping, recursive, continuous curve.
The Hilbert Curve (Figure 3) is an example of a FASS curve that can be represented as an L-system. Considered a node-rewriting technique, this L-system’s productions declare that on each iteration, the placeholder symbols are replaced by longer strings of drawing, turning, and placeholder symbols. With the angle defined as 90°, this results in a recursively generated square-wave shape along a curve. While the drawing and turning symbols are interpreted by the turtle, other symbols can be used in productions; here, the placeholder symbols are ignored when rendering, and are only relevant when rewriting the string and matching productions.
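The specific production symbols did not survive formatting here; for reference, one standard formulation of the Hilbert curve as an L-system (with A and B as the non-drawing placeholder symbols) looks like this:

    hilbert_curve = {
        "axiom": "A",
        "angle": 90,
        "productions": {
            "A": "+BF-AFA-FB+",   # A and B never draw anything themselves;
            "B": "-AF+BFB+FA-",   # they exist only to drive the rewriting
        },
    }

Feeding a few derivations of this system to the F/+/− turtle sketched above traces successively finer approximations of the Hilbert curve.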
In addition to a turtle traversing a 2D plane, symbols may be introduced that instruct the turtle to draw in 3D. The Algorithmic Beauty of Plants introduces additional symbols to control rendering in three dimensions, rotating the turtle about its pitch, roll, and yaw axes.
Like the 2D Hilbert Curve (Figure 3), a three-dimensional version can also be created (Figure 4) using these additional symbols, resulting in a 3D FASS curve.
The space-filling Hilbert curve can be represented as a single, continuous line. For organic, tree-like structures, branching is used to represent a diverging fork. Two new symbols, the square brackets [ and ], are introduced to represent a tree in an L-system’s string, with an opening bracket indicating the start of a new branch, and the remaining symbols between the brackets being members of that branch. Symbols after the closing bracket indicate returning to the point of the branch’s origin. A stack is used to implement branching, storing the state of the turtle.
* [ : push the current turtle state onto the stack.
* ] : pop the top state from the stack; this becomes the current turtle state.
Symbols in a branch are transformed and replaced just as they were outside of a branch. This allows recursive, fractal-like behavior, with each branch forking into more branches, and so on.
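Extending the earlier turtle sketch with the bracket symbols is straightforward; a Python list works as the stack (a minimal sketch, with an illustrative branching word):

    import math

    def run_branching_turtle(word, step=1.0, angle_deg=25.0):
        x, y, heading = 0.0, 0.0, 90.0         # start pointing "up" for a plant-like shape
        stack, segments = [], []
        for symbol in word:
            if symbol == "F":
                nx = x + step * math.cos(math.radians(heading))
                ny = y + step * math.sin(math.radians(heading))
                segments.append(((x, y), (nx, ny)))
                x, y = nx, ny
            elif symbol == "+":
                heading += angle_deg
            elif symbol == "-":
                heading -= angle_deg
            elif symbol == "[":                 # remember where the branch started
                stack.append((x, y, heading))
            elif symbol == "]":                 # return to the branch's origin
                x, y, heading = stack.pop()
        return segments

    print(len(run_branching_turtle("F[+F]F[-F]F")))   # 5 line segments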
Rather than productions evaluating symbols in isolation (context-free), rules may be defined that only match a symbol when it is preceded or followed by another specific symbol.
Context-sensitive L-systems contain production rules that specify symbols that must come before or after the predecessor in order to match, as opposed to context-free systems that evaluate predecessors in isolation.
These context rules are defined using < and > in the production rule, adjacent to the predecessor. In Figure 5, the first production rule only matches a predecessor when a particular symbol is immediately before it, replacing the predecessor with its successor; the effect is that the symbol appears to move towards the right on each iteration.
A similar system could be defined that propagates from right to left, via a production whose context symbol must appear after the predecessor in the string.
L-systems with these one-sided context productions may be considered 1L-systems. Productions may also have both a before-context and an after-context, in systems considered 2L-systems. Such a production indicates that the predecessor will be replaced by its successor only when it appears between the two specified context symbols.
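A small Python sketch of the one-sided (1L) case: a rule of the form b < a → b, where an a is rewritten only when the symbol to its left is b, makes the lone b appear to walk to the right (the symbol names are illustrative, since the article’s own symbols were lost in formatting):

    def derive_1l(word, context_rules, plain_rules, iterations):
        """Left-context rules map (left_neighbor, symbol) -> successor; context is
        checked against the word as it was at the start of the step."""
        for _ in range(iterations):
            out = []
            for i, symbol in enumerate(word):
                left = word[i - 1] if i > 0 else ""
                out.append(context_rules.get((left, symbol)) or plain_rules.get(symbol, symbol))
            word = "".join(out)
        return word

    context_rules = {("b", "a"): "b"}   # "b < a -> b": a becomes b when preceded by b
    plain_rules = {"b": "a"}            # every b decays back to a
    word = "baaaaa"
    for n in range(5):
        print(n, word)                  # the b marker steps one position right per iteration
        word = derive_1l(word, context_rules, plain_rules, 1)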
The previously described systems are all deterministic; the same system with the same input will always generate the same result. Stochastic L-systems are non-deterministic, defined by several productions that match the same predecessor, with one chosen randomly according to its weight on each iteration. For example, a pair of production rules might define that on each iteration a predecessor has a 50% chance of being rewritten by one successor and a 50% chance of being rewritten by another.
This non-determinism is useful for procedurally creating variety and the seemingly random results of nature.
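A minimal Python sketch of such a stochastic rule table; the two successors shown are illustrative placeholders rather than the article’s own:

    import random

    # Each predecessor maps to a list of (successor, weight) choices.
    stochastic_rules = {"F": [("F[+F]F", 0.5), ("F[-F]F", 0.5)]}

    def derive_stochastic(word, rules, iterations, seed=None):
        rng = random.Random(seed)
        for _ in range(iterations):
            out = []
            for symbol in word:
                if symbol in rules:
                    successors, weights = zip(*rules[symbol])
                    out.append(rng.choices(successors, weights=weights)[0])
                else:
                    out.append(symbol)
            word = "".join(out)
        return word

    print(derive_stochastic("F", stochastic_rules, 3, seed=1))  # a different "tree" for each seed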
A high-performance, feature-packed library for all your mapping needs.
OpenLayers v6.1.1 is here! Check out the docs and the examples to get started. The full distribution can be downloaded from the release page.
Pull tiles from OSM, Bing, MapBox, Stamen, and any other XYZ source you can find. OGC mapping services and untiled layers also supported.
Render vector data from GeoJSON, TopoJSON, KML, GML, Mapbox vector tiles, and other formats.
Leverages Canvas 2D, WebGL, and all the latest greatness from HTML5. Mobile support out of the box. Build lightweight custom profiles with just the components you need.
Style your map controls with straightforward CSS. Hook into different levels of the API or use 3rd party libraries to customize and extend functionality.
Seen enough already? Go here to get started.
Get the latest release or dig through the archives.
Spend time learning the basics and graduate up to advanced mapping techniques.
Want to learn OpenLayers hands-on? Get started with the workshop.
Browse through the API docs for details on code usage.
In case you are not ready (yet) for the latest version of OpenLayers, we provide links to selected resources of older major versions of the software.
Latest v2: v2.13.1 (July 2013 i.e. really old) — you’ll ﬁnd everything you need on the 2.x page
Please consider upgrading to benefit from the latest features and bug fixes. Get the best performance and usability for free by using recent versions of OpenLayers.
Don’t change anything in your Docker container image and minify it by up to 30x making it secure too!
Keep doing what you are doing. No need to change anything. Use the base image you want. Use the package manager you want. Don’t worry about hand optimizing your Dockerfile. You shouldn’t have to throw away your tools and your workflow to have small container images.
Don’t worry about manually creating Seccomp and AppArmor security profiles. You shouldn’t have to become an expert in Linux syscalls, Seccomp and AppArmor to have secure containers. Even if you do know enough about them, reverse engineering your application’s behavior by hand is time consuming.
docker-slim will optimize and secure your containers by understanding your application and what it needs using various analysis techniques. It will throw away what you don’t need, reducing the attack surface for your container. What if you need some of those extra things to debug your container? You can use dedicated debugging side-car containers for that (more details below).
docker-slim has been used with Node.js, Python, Ruby, Java, Golang, Rust, Elixir and PHP (some app types) running on Ubuntu, Debian, CentOS, Alpine and even Distroless.
Watch this screencast to see how an application image is minified by more than 30x.
When docker-slim runs it gives you an opportunity to interact with the temporary container it creates. By default, it will pause and wait for your input before it continues its execution. You can change this behavior using the --continue-after flag.
If your application exposes any web interfaces (e.g., when you have a web server or an HTTP API), you’ll see the port numbers on the host machine you will need to use to interact with your application (look for the port.list and target.port.info messages on the screen). For example, in the screencast above you’ll see that the internal application port 8000 is mapped to port 32911 on your host.
Note that docker-slim will interact with your application for you if you enable HTTP probing with the --http-probe flag or other related HTTP probe flags. Some web applications built with scripting languages like Python or Ruby require service interactions to load everything in the application. Enable HTTP probing unless it gets in your way.
Note: The examples are in a separate repository: https://github.com/docker-slim/examples
Now you can run docker-slim in containers and you get more convenient reporting defaults. For more info about the latest release see the CHANGELOG.
If the directory where you extracted the binaries is not in your PATH then you’ll need to run your docker-slim commands from that directory.
To use the Docker image distribution just start using the dslim/docker-slim container image.
See the USAGE DETAILS section for more details. You can also get additional information about the parameters running docker-slim. Run docker-slim without any parameters and you’ll get a high level overview of the available commands. Run a docker-slim command without any parameters and you’ll get more information about that command (e.g., docker-slim build).
If you want to auto-generate a Seccomp profile AND minify your image use the build command. If you only want to auto-generate a Seccomp profile (along with other interesting image metadata) use the profile command.
Step two: use the generated Seccomp profile
You can use the generated Seccomp profile with your original image or with the minified image.
You can use the generated profile with your original image or with the minified image DockerSlim created.
The demo runs on Mac OS X, but you can build a Linux version. Note that these steps are different from the steps in the demo video.
Get the docker-slim Mac, Linux or Linux ARM binaries. Unzip them and optionally add their directory to your PATH environment variable if you want to use the app from other locations.
The extracted directory contains two binaries:
* docker-slim <- the main docker-slim application
* docker-slim-sensor <- the sensor application used to collect information from running containers
Clone the examples repo to use the sample apps (note: the examples have been moved to a separate repo). You can skip this step if you have your own app.
Create a Docker image for the sample node.js app in examples/node_ubuntu. You can skip this step if you have your own app.
If you use Docker Machine, make sure its environment is set up first (docker-machine start default; eval “$(docker-machine env default)”); see the Docker connect options section for more details.
DockerSlim creates a special container based on the target image you provided. It also creates a resource directory where it stores the information it discovers about your image.
By default, docker-slim will run its http probe against the temporary container. If you are minifying a command line tool that doesn’t expose any web service interface you’ll need to explicitly disable http probing (by setting --http-probe=false).
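For example, minifying a hypothetical command-line tool image with probing turned off would look something like this (the image name is a placeholder):

    docker-slim build --http-probe=false my/sample-cli-tool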
Use curl (or other tools) to call the sample app (optional)
This is an optional step to make sure the target app container is doing something. For some applications it’s required, if the app loads new application resources dynamically based on the requests it’s processing (e.g., Ruby or Python).
You’ll see the mapped ports printed to the console when docker-slim starts the target container. You can also get the port number either from the docker ps or docker port commands. The current version of DockerSlim doesn’t allow you to map exposed network ports (it works like docker run … -P).
Press Enter and wait until docker-slim says it’s done.
By default, or when http probing is enabled explicitly, docker-slim will continue its execution once the http probe is done running. If you explicitly picked a different continue-after option, follow the expected steps. For example, for the enter continue-after option you must press the enter button on your keyboard.
If http probing is enabled (when http-probe is set), continue-after is set to enter, and you press the enter key before the built-in HTTP probe is done, the probe might produce an EOF error because docker-slim will shut down the target container before all probe commands are done executing. It’s ok to ignore it unless you really need the probe to finish.
Once DockerSlim is done, check that the new minified image is there.
You should see my/sample-node-app.slim in the list of images. Right now all generated images have .slim at the end of their names.
* build - Collect fat image information and build a slim image from it
* info - Collect fat image information and reverse engineer its Dockerfile (no runtime container analysis)
* --report - command report location (target location where to save the executed command results; slim.report.json by default; set it to off to disable)
* --check-version - check if the current version is outdated
* --log-format - set the format used by logs (‘text’ (default), or ‘json’)
* --state-path value - DockerSlim state base path (must set it if the DockerSlim binaries are not in a writable directory!)
* --archive-state - Archives DockerSlim state to the selected Docker volume (default volume - docker-slim-state). By default, enabled when DockerSlim is running in a container (disabled otherwise). Set it to off to disable explicitly.
* --in-container - Set it to true to explicitly indicate that DockerSlim is running in a container (if it’s not set DockerSlim will try to analyze the environment where it’s running to determine if it’s containerized)
To get more command line option information run docker-slim without any parameters or select one of the top level commands to get the command-specific information.
To disable the version checks set the global --check-version flag to false (e.g., --check-version=false) or you can use the DSLIM_CHECK_VERSION environment variable.
* --http-probe - enables HTTP probing (ENABLED by default; you have to disable the probe if you don’t need it by setting the flag to false)
* --http-probe-cmd - additional HTTP probe command [zero or more]
* --http-probe-retry-count - number of retries for each HTTP probe (default: 5)
* --http-probe-retry-wait - number of seconds to wait before retrying HTTP probe (doubles when target is not ready; default: 8)
* --http-probe-ports - explicit list of ports to probe (in the order you want them to be probed; excluded ports are not probed!)
* --http-probe-full - do full HTTP probe for all selected ports (if false, finish after first successful scan; default: false)
* --show-clogs - show container logs (from the container used to perform dynamic inspection)
* --show-blogs - show build logs (when the minified container is built)
* --remove-file-artifacts - remove file artifacts when command is done (note: you’ll lose autogenerated Seccomp and AppArmor profiles unless you copy them with the copy-meta-artifacts flag or if you archive the state)
* --tag - use a custom tag for the generated image (instead of the default)
* --mount - mount volume analyzing image (the mount parameter format is identical to the -v mount command in Docker) [zero or more]
* --include-path - Include directory or file from image [zero or more]
* --include-bin value - Include binary from image (executable or shared object using its absolute path)
* --include-exe value - Include executable from image (by executable name)
* --env - override ENV analyzing image [zero or more]
* --expose - use additional EXPOSE instructions analyzing image [zero or more]
* --link - add link to another container analyzing image [zero or more]
* --etc-hosts-map - add a host to IP mapping to /etc/hosts analyzing image [zero or more]
* --container-dns - add a dns server analyzing image [zero or more]
* --container-dns-search - add a dns search domain for unqualified hostnames analyzing image [zero or more]
* --from-dockerfile - The source Dockerfile name to build the fat image before it’s minified.
* --use-local-mounts - Mount local paths for target container artifact input and output (off, by default).
* --use-sensor-volume - Sensor volume name to use (set it to your Docker volume name if you manage your own docker-slim sensor volume).
* --keep-tmp-artifacts - Keep temporary artifacts when command is done (off, by default).
The --include-path option is useful if you want to customize your minified image adding extra files and directories. The --include-path-file option allows you to load multiple includes from a newline delimited file. Use this option if you have a lot of includes. The includes from --include-path and --include-path-file are combined together. Future versions will also include the --exclude-path option to have even more control.
The --continue-after option is useful if you need to script docker-slim. If you pick the probe option then docker-slim will continue executing the build command after the HTTP probe is done executing. If you pick the timeout option docker-slim will allow the target container to run for 60 seconds before it will attempt to collect the artifacts. You can specify a custom timeout value by passing a number of seconds you need instead of the timeout string. If you pick the signal option you’ll need to send a USR1 signal to the docker-slim process.
The --include-shell option provides a simple way to keep a basic shell in the minified container. Not all shell commands are included. To get additional shell commands or other command line utilities use the --include-exe and/or --include-bin options. Note that the extra apps and binaries might be missing some of their non-binary dependencies (which don’t get picked up during static analysis). For those additional dependencies use the --include-path and --include-path-file options.
The --from-dockerfile option makes it possible to build a new minified image directly from a source Dockerfile. Pass the Dockerfile name as the value for this flag and pass the build context directory or URL instead of the docker image name as the last parameter for the docker-slim build command: docker-slim build --from-dockerfile Dockerfile --tag my/custom_minified_image_name . If you want to see the console output from the build stages (when the fat and slim images are built) add the --show-blogs build flag. Note that the build console output is not interactive and it’s printed only after the corresponding build step is done. The fat image created during the build process has the .fat suffix in its name. If you specify a custom image tag (with the --tag flag) the .fat suffix is added to the name part of the tag. If you don’t provide a custom tag the generated fat image name will have the following format: docker-slim-tmp-fat-image.. The minified image name will have the .slim suffix added to that auto-generated container image name (docker-slim-tmp-fat-image.). Take a look at these Python examples to see how they use the --from-dockerfile flag.
The --use-local-mounts option is used to choose how the docker-slim sensor is added to the target container and how the sensor artifacts are delivered back to the master. If you enable this option you’ll get the original docker-slim behavior where it uses local file system volume mounts to add the sensor executable and to extract the artifacts from the target container. This option doesn’t always work as expected in the dockerized environment where docker-slim itself is running in a Docker container. When this option is disabled (default behavior) then a separate Docker volume is used to mount the sensor and the sensor artifacts are explicitly copied from the target container.
The current version of docker-slim is able to run in containers. It will try to detect if it’s running in a containerized environment, but you can also tell docker-slim explicitly using the --in-container global flag.
You can run docker-slim in your container directly or you can use the docker-slim container in your containerized environment. If you are using the docker-slim container make sure you run it configured with the Docker IPC information, so it can communicate with the Docker daemon. The most common way to do it is by mounting the Docker unix socket to the docker-slim container. Some containerized environments (like Gitlab and their dind service) might not expose the Docker unix socket to you, so you’ll need to make sure the environment variables used to communicate with Docker (e.g., DOCKER_HOST) are passed to the docker-slim container. Note that if those environment variables reference any kind of local host names those names need to be replaced or you need to tell docker-slim about them using the --etc-hosts-map flag. If those environment variables reference local files those local files (e.g., files for TLS cert validation) will need to be copied to a temporary container, so that temporary container can be used as a data container to make those files accessible by the docker-slim container.
When docker-slim runs in a container it will attempt to save its execution state in a separate Docker volume. If the volume doesn’t exist it will try to create it (docker-slim-state, by default). You can pick a different state volume or disable this behavior completely by using the global --archive-state flag. If you do want to persist the docker-slim execution state (which includes the seccomp and AppArmor profiles) without using the state archiving feature you can mount your own volume that maps to the /bin/.docker-slim-state directory in the docker-slim container.
By default, docker-slim will try to create a Docker volume for its sensor unless one already exists. If this behavior is not supported by your containerized environment you can create a volume separately and pass its name to docker-slim using the --use-sensor-volume flag.
Here’s a basic example of how to use the containerized version of docker-slim:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock dslim/docker-slim build your-docker-image-name
Here’s a GitLab example for their dind .gitlab-ci.yml config file:
docker run -e DOCKER_HOST=tcp://$(grep docker /etc/hosts | cut -f1):2375 dslim/docker-slim build your-docker-image-name
Here’s a CircleCI example for their remote docker .circleci/config.yml config file (used after the setup_remote_docker step):
docker create -v /dcert_path --name dcert alpine:latest /bin/true
docker cp $DOCKER_CERT_PATH/. dcert:/dcert_path
docker run --volumes-from dcert -e DOCKER_HOST=$DOCKER_HOST -e DOCKER_TLS_VERIFY=$DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH=/dcert_path dslim/docker-slim build your-docker-image-name
If you don’t specify any Docker connect options docker-slim expects to find the following environment variables: DOCKER_HOST, DOCKER_TLS_VERIFY (optional), DOCKER_CERT_PATH (required if DOCKER_TLS_VERIFY is set to “1”)
On Mac OS X you get them when you run eval “$(docker-machine env default)” or when you use the Docker Quickstart Terminal.
If the Docker environment variables are configured to use TLS and to verify the Docker cert (default behavior), but you want to disable the TLS verification you can override the TLS verification behavior by setting --tls-verify to false:
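For example (a sketch; the image name is a placeholder):

    docker-slim --tls-verify=false build my/sample-node-app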
You can override all Docker connection options using these flags: --host, --tls, --tls-verify, --tls-cert-path. These flags correspond to the standard Docker options (and the environment variables).
If you want to use TLS with veriﬁcation:
If you want to use TLS without veriﬁcation:
If the Docker environment variables are not set and if you don’t specify any Docker connect options docker-slim will try to use the default unix socket.
If the HTTP probe is enabled (note: it is enabled by default) it will default to running GET / with HTTP and then HTTPS on every exposed port. You can add additional commands using the --http-probe-cmd and --http-probe-cmd-file options.
The --http-probe-cmd option is good when you want to specify a small number of simple commands where you select some or all of these HTTP command options: protocol, method (defaults to GET), resource (path and query string).
If you only want to use your custom HTTP probe commands, and you don’t want the default GET / command added to the command list you explicitly provided, you’ll need to set --http-probe to false when you specify your custom HTTP probe command. Note that this inconsistency will be addressed in future releases to make it less confusing.
Here are a couple of examples:
Adds two extra probe commands: GET /api/info and POST /submit (tries http first, then tries https):
docker-slim build --show-clogs --http-probe-cmd /api/info --http-probe-cmd POST:/submit my/sample-node-app-multi
Adds one extra probe command: POST /submit (using only http):