10 interesting stories served every morning and every evening.




1 420 shares, 39 trendiness, 0 words and 0 minutes reading time

Made in America

...

Read the original on www.reuters.com »

2 388 shares, 60 trendiness, 3319 words and 29 minutes reading time

The ZedRipper

Meet the ZedRipper — a 16-core, 83 MHz Z80 powerhouse as portable as it is impractical. The ZedRipper is my latest attempt to build a fun ‘project’ machine, with a couple of goals in mind:

* Finally use one of the giant FPGA boards I had lying around
* Play a little ‘alternate-history computer engineering’ with a hardware-focused approach to multitasking
* Build a machine that I could write fun, small programs for on my daily train ride
* Build a platform that would allow for relatively easy computer-architecture experiments

For those that don’t have time for a wall of text about im­prac­ti­cal com­puter ar­chi­tec­ture…

What is this beast?

The ZedRipper is ba­si­cally my at­tempt to build the ul­ti­mate CP/M 2.2 com­puter.

* 64KB of dedicated RAM for each Z80
* All CPUs and devices connected with a fully-synchronous, uni-directional ring network operating at 83 MHz
* 128MB of storage on SD card (available via 16 x 8MB disk drives in CP/M)
* A ‘server’ core that boots into CP/M 2.2 and runs a CP/NET file server (written in Turbo Pascal 3 on the machine!) allowing shared access to the SD card
* 15 ‘client’ cores running CP/NOS from ROM. Each client can access the shared storage and run any CP/M 2.2 programs without resource contention with the other cores.

The Road Not Taken

Is that a game of Chess and Planetfall to dis­tract me from my Turbo Pascal ed­i­tor?

My adventures with porting a game to my Kaypro left me with surprisingly warm feelings towards this primitive, 40-year-old operating system, and I had an idea that I wanted to explore — what if history had taken a different turn, and personal computers had gone down the multi-CPU path right from the start? Even in the 1980s the CPUs themselves (and pretty quickly, the RAM, too) were fairly cheap, but multi-tasking for personal computers was exclusively focused on a ‘time-slicing’ approach whereby one big resource (the RAM or the CPU) got split between competing programs. The hardware just wasn’t really up to the task (and it was extremely difficult to make programs for OSes like DOS play nicely with one another) until we got well into the 386 era and computers with 4MB+ of RAM.

In the course of my historical computing hobbies, I stumbled upon something that I thought was very fascinating — relatively early in its history, CP/M supported a ‘networked’ version called CP/NET. The idea behind it was one that will still feel pretty familiar to most people — that an office might have one or two ‘real’ machines with large disk drives and printers, shared with ‘thin-client’ style machines that were basically just terminals with CPUs and RAM attached. Each user could basically act as if they had their own private CP/M machine with access to large disks and printers.

As I mentioned, the CPU and RAM (typically a Z80 with 64KB of DRAM) weren’t terribly expensive, but all of the trappings required to make something a useful computer (disks, printers, monitors, etc.) really added up. It somehow just felt too decadent at the time for someone to consider providing a single user with multiple CPUs and RAM. Even CP/M went the time-sliced multi-tasking route with the MP/M OS.

I found a company called Exidy that came the closest — in 1981 they released their “Multi-NET 80” machine, which allowed up to 16 Z80+RAM cards to be added to it, but it was once again designed to serve 16 individual users rather than a power user with 16 simultaneously running programs.

Fast-forward 40 years, and transistors are very cheap indeed. I inherited some pretty monster FPGA boards (Stratix IV 530GX parts) following a lab cleanup, and was looking for something fun to do with one of them. I had stumbled upon Grant Searle’s extremely fun “Multicomp” project at some point, and it was pretty easy to get a single-CPU CP/M machine up and running. But I wanted more. I had 530,000 LUTs and megabytes of on-die block RAM just waiting for a cool idea. I decided to go big and see if I could build my own multi-core CP/M machine with true multitasking — nothing clever, just brute force.

Getting the soft­ware up and run­ning

I took a pretty hardware-centric approach to this project, and I didn’t actually write a single line of assembly. CPU 0 boots straight from the ROM Grant provided for his Multicomp project, and the other nodes actually boot from a 4KB CP/NOS ROM I found in an Altair simulator.

Both ROMs expect to interface with a serial terminal with a pretty standard interface, and the CP/NOS clients expect another serial port connected to a server. As custom logic is basically free on such a large FPGA, I designed some custom address-decoding logic that makes each CPU’s Z-Ring interface appear where it’s expected in the I/O address map.

The heart of the ZedRipper is one of these mon­sters sport­ing a Stratix IV 530GX FPGA. An HSMC break­out card is used to drive the dis­play, re­ceive data from the key­board con­troller and con­nect to the SD Card. You ac­tu­ally use eth­er­net to up­load a new firmware im­age, so the eth­er­net port is routed to the side of the case, along with the SD Card adapter and a (currently un­used) slot for an ex­ter­nal se­r­ial port.

The key­board and con­spic­u­ous hole where a fu­ture point­ing de­vice will go

I had a compact PS/2 keyboard lying around (salvaged from one of my old laptop projects, actually) that I wanted to interface with the 2.5V I/O on my FPGA. I decided to go the ‘easy’ route, and toss in a Teensy 2.0 microcontroller.

The key­board con­troller hot-glued to the un­der­side of the key­board

This does the PS/2-to-ASCII translation, and also allows easy mapping of some of the weirder keys (like F1-F12) to ‘magic’ terminal sequences for convenience. The Teensy then outputs bytes to the Z80 over a 9600 baud UART (with a simple resistor voltage divider to change the 5V output into 2.5V for the FPGA). Given that this whole project is basically cobbled together from things lying around my workshop, this was a convenient solution that worked out quite well.
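The article doesn't give the divider's resistor values, so the equal-valued pair below is purely an assumption; any equal pair halves the UART's 5V swing down to the 2.5V the FPGA I/O bank wants:

```python
def divider_vout(vin, r1, r2):
    """Unloaded resistor divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

# Hypothetical values: any equal-valued pair halves 5V to 2.5V.
print(divider_vout(5.0, 10_000, 10_000))  # 2.5
```

At 9600 baud the edges are slow enough that the divider's RC behavior is a non-issue, which is why this crude approach works.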

The boot screen with the server run­ning in the up­per left and three user pro­grams run­ning on sep­a­rate CPU cores

The dis­play is a 1280×800 10.1″ dis­play that ac­cepts VGA in­put. The FPGA uses a sim­ple re­sis­tor net­work to gen­er­ate up to 64 col­ors (R2G2B2). The screen re­quires an 83.33 MHz pixel clock (1280×800@60Hz), so for sim­plic­i­ty’s sake, the en­tire de­sign runs syn­chro­nously at that fre­quency.

Grant’s Multicomp project included VHDL code for a basic ANSI-compatible terminal. I re-wrote the terminal logic in Verilog (just for my own sanity), and then designed a video controller that supports 16 fully independent terminals, all connected via a single Z-Ring node. The 1280×800 display is effectively treated as a 160×50 character-based display (using an 8×16 font), and each terminal acts like an 80×25 ‘sprite’ that can be re-positioned anywhere on the screen (with a priority list to configure the order of precedence for the terminals being drawn). As each terminal is fully independent, it contains its own state machine, along with a 2KB character RAM and 2KB ‘attribute’ RAM (to hold the color information). Each character supports a 4-bit foreground and background color. Since all of the terminals must maintain the same character alignment, any given 8×16 ‘cell’ on the screen can only contain a single character, and all 16 terminals can share a 2KB ROM containing the font. In total then, the display logic uses up around 66KB of block RAM.
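The 66KB figure checks out with a little arithmetic, using only the sizes quoted in the paragraph above:

```python
TERMINALS = 16
CHAR_RAM = 2 * 1024   # per-terminal character RAM (holds 80x25 = 2000 cells)
ATTR_RAM = 2 * 1024   # per-terminal attribute RAM (4-bit fg + 4-bit bg per cell)
FONT_ROM = 2 * 1024   # one 8x16 font ROM, shared by all 16 terminals

# An 80x25 terminal needs 2000 bytes, which fits in a 2KB block.
assert 80 * 25 <= CHAR_RAM

total = TERMINALS * (CHAR_RAM + ATTR_RAM) + FONT_ROM
print(total // 1024)  # 66 (KB)
```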

The gen­eral ef­fect of this is that I have an ex­tremely sim­ple win­dow man­ager for my CP/M ter­mi­nals, al­most en­tirely in hard­ware. This is one of the ar­eas that’s most fer­tile for ex­plor­ing — at the mo­ment only the server CPU is ca­pa­ble of re-po­si­tion­ing the ter­mi­nals, but I have longer term plans to add in a mouse-like po­si­tion­ing de­vice to al­low a hard­ware-only mech­a­nism for drag­ging win­dows around and chang­ing the dis­play pri­or­ity.

As the terminal controller is just another node on the Z-Ring (and the Z-Ring interface for each Z80 is straightforward to re-target), future plans include possibly adding a ‘full-screen’ 160×50 terminal (possibly as a ‘background’) and an actual 1280×800×64-color bitmapped display using some of the fast external SRAM on the board.

Conjuring a pile of Z80s into ex­is­tence is as easy as writ­ing a gen­er­ate loop in ver­ilog, but how to con­nect them up in a sane way? One thing I’ve learned from my day job is that de­sign­ing a net­work can be hard. General goals for this net­work:

As I mentioned earlier, my Z80s were expecting to interface with some serial ports, so the interface was fairly simple — make it look like a serial port! At its core, the Z-Ring is a synchronous, uni-directional ring network that uses credits for flow control. Each node contains a 1-byte receive buffer for every other node on the network. Coming out of reset then, each node has 1 ‘credit’ for every other node on the network. The design is parameterized, so it could easily scale up to hundreds of nodes with only a bit more logic, but as it’s currently implemented the Z-Ring supports up to 32 nodes (so each node requires a 32-byte buffer).

The actual ‘bus’ consists of a valid bit, a ‘source’ ID, a ‘destination’ ID and a 1-byte payload (so 19 bits wide). I think it would be pretty straightforward to implement this using TTL logic (if one found themselves transported back to 1981 and couldn’t use FPGAs). Each ‘node’ has 2 pipelined sets of flops on the bus — stage 0 and stage 1 — and when you inject a message, it waits until stage 0 is empty before muxing it into stage 1. Messages are injected at the ‘source’ node and travel around the ring until they reach their destination node, at which point they land in the corresponding buffer and update a ‘data ready’ flag. When the receiving node reads from the buffer, it ‘re-injects’ the original message, which continues around the ring until it reaches the source again, thus returning the credit. A ‘feature’ of this scheme is that if you do send a packet to a non-existent address, the credit will be automatically returned to you when it loops back around.

As each stop on the ring consists of 2 pipeline stages, and there is no backpressuring, each message takes no more than 2×(number of nodes) cycles to be delivered. The current implementation has 17 nodes (16 CPUs + the display/keyboard controller) and runs with a 12 ns clock, so to deliver a message and receive the credit back you are looking at a minimum of ~400 ns. The display controller can basically sink traffic as quickly as it arrives, so each CPU has ~2-2.5 MB/s of bandwidth to its own terminal (with enough shared bandwidth on the bus to accommodate all 16 CPUs), which is quite a bit as far as terminals go.
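Those latency and bandwidth figures follow directly from the ring parameters above, assuming (as described) that a message plus its returning credit make one full lap of the ring and that a single credit means one byte in flight at a time:

```python
NODES = 17           # 16 CPUs + the display/keyboard controller
STAGES_PER_NODE = 2  # pipeline flop stages at each ring stop
CLOCK_NS = 12        # the 83.33 MHz pixel clock period, in nanoseconds

# Message delivery plus credit return = one full traversal of the ring.
round_trip_ns = NODES * STAGES_PER_NODE * CLOCK_NS
print(round_trip_ns)  # 408, i.e. the "~400 ns" quoted above

# With one credit outstanding, one byte completes per round trip.
bytes_per_sec = 1e9 / round_trip_ns
print(round(bytes_per_sec / 1e6, 2))  # ~2.45 MB/s per CPU to its terminal
```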

The cur­rent im­ple­men­ta­tion is per­fectly ad­e­quate to get things up and run­ning, but there are a num­ber of pretty straight­for­ward im­prove­ments that could be made:

* Adding deeper receive buffers would potentially allow much higher bandwidth from a given node — there are plenty of free 1KB block RAMs on the FPGA, which would allow 32 credits x 32 nodes, so each CPU would in theory be capable of saturating the bus.
* Add support for an ‘address’ mode — adding a 16-bit (or more!) address would allow DMA operations between nodes (and adding a simple DMA engine to each node would be pretty easy). The FPGA board has a ton of extra hardware (several megabytes of varying static RAMs, and a gigabyte or so of DDR3) that could be potentially fun to interface with.
* Add some sort of flow-control (and buffering) between nodes to allow more flexible decoupling.

But I’m per­fectly con­tent to leave those for a fu­ture rainy day for now.

The FPGA dev board requires a 14V-20V input, while the display requires a 12V input, and the Teensy and PS/2 keyboard require a 5V input. Conveniently, the FPGA board has 3.3V, 5V and 12V regulators that are relatively easy to tap into, so the FPGA board accepts power directly from a beefy 5000 mAh / 14.4V LiPo battery pack and then supplies power to all of the other devices. One of the trickier bits of this project was that I didn’t want to have to disassemble the laptop to re-charge it, but the battery has both the normal +/- power connector, as well as a ‘balance’ connector that connects to each individual cell for recharging purposes. My somewhat ‘meh’ solution to this was to have the power switch toggle between connecting the main supply to the FPGA and to a charging plug (along with the balance connector) in a little internal compartment exposed by a sliding door. It’s kind of awkward, but you can just slide the door open and fish out the connectors to plug into the charger without needing to break out an M3 hex key.

I haven’t ac­tu­ally tested it prop­erly, but the bat­tery lasts for 3+ hours (which is more than ad­e­quate to cover my daily train ride). If I had to guess it’s prob­a­bly closer to the ~6 hour range with­out any power op­ti­miza­tion ef­fort on my part. It does­n’t sup­port si­mul­ta­ne­ous charg­ing / us­age, but the bat­tery life is suf­fi­ciently good that it has­n’t been a prob­lem.

The case is fairly standard ‘hackerspace’ construction — a combination of laser-cut 3mm plywood and 3D-printed plastic for everything else. I sprung for proper position-control hinges for the screen, so it feels like a relatively normal (if somewhat less svelte) laptop when you’re using it. I wanted to give it some 1980s flair, so the screen actually has some “Cray”-ish angles at the top, and there is a pleather wrist-rest. The actual edge of the laser-cut plywood is pretty uncomfortable against your wrists while typing, so the wrist-rest is surprisingly functional.

I haven’t tried any ac­tual CP/M bench­mark­ing pro­grams (I as­sume there are some out there, but I’ve never looked very hard), but, as this ma­chine was mostly built with writ­ing Turbo Pascal in mind, I did at least try some mi­cro bench­marks. I can do be­tween 15k-35k float­ing point op­er­a­tions/​sec (using the 48-bit Real type in TP), and ~1 mil­lion in­te­ger op­er­a­tions/​sec (using the 16-bit Integer type in TP), so all-in-all not too bad for an 8-bit CPU and a fairly nice pro­gram­ming en­vi­ron­ment.

Designing a float­ing point ac­cel­er­a­tor might be a fun pro­ject some day, and there is plenty of logic re­sources to sup­port it.

As I’ve men­tioned be­fore, all of the logic so far is pretty light­weight, oc­cu­py­ing a mere 7% of on-chip logic re­sources (although ~40% of the to­tal on-chip block ram and 100% of the big M144k block rams).

There is plenty of room for fun ex­per­i­men­ta­tion go­ing for­ward (and re­mark­ably, com­pil­ing this pro­ject only takes ~10 min­utes).

I have im­me­di­ate plans (as in, I have the hard­ware ly­ing around, I just haven’t had time to sol­der it yet) for the fol­low­ing:

* Stain and seal things! It’s made of thin plywood. It really wants to be coated in something.
* Joystick-like pointing device — to be connected to the Teensy that acts as a keyboard controller and fill that conspicuous hole.
* Battery monitoring — once again, the ADC on the Teensy is going to provide some lightweight battery monitoring so that I have some idea how charged things are.
* WiFi — I have an ESP32 lying around waiting to run Zimodem! Coupled with my phone in WiFi hotspot mode, it should allow me to have net access on the go =) There are good terminal apps available for CP/M, but it would be fun to try to write things like an IRC client or a very simple web browser. It also allows convenient use of kermit for file transfers to a modern computer running Linux.
* Add an externally-accessible serial port for communicating with another machine (there is already a 3D-printed slot for the connector, I just need to wire it in).
* Status LED! There’s already a mounting hole in the front — current plan is to connect it to the SD card’s drive access signal.

Longer term, there are lots of neat hard­ware ideas that might be fun to ex­per­i­ment with:

* How fast can you make a Z80 go? The first step would be to decouple the CPU speed from the pixel clock, but it would also be fun to try applying some modern computer architecture techniques to a Z80 (pipelining, register re-naming, branch prediction, wider memory for pre-fetching, etc.)
* Similarly, adding custom accelerators for things like floating point might be fun. There are 1024 completely unused DSP blocks on this chip, and I bet no one has tried to build an accelerator for the 48-bit Real format that Turbo Pascal uses.
* Use the existing hardware! This development board is brimming with unused memory.
* Better video hardware! The first step would probably be to add support for a ‘full-screen’ 160×50 terminal and the ability to scale a regular 80×25 terminal up by 2x. The aforementioned external SSRAM would also make it quite straightforward to add a full 1280×800@6-bit, fully bit-mapped display.
* Expand the capabilities of the current terminal — I think I could add compatibility with the ADM-3A-ish terminal (plus graphics support) used by the Kaypro/84 series, so that way I would have access to a slightly larger set of software (and not have to port DD9!). I could also probably think of custom escape sequences that might be convenient to add.

I’ve only had the ma­chine up and run­ning for a few days, but I’ve got to say, it’s pretty great. The screen is nice and clear, the key­board is spa­cious and com­fort­able, and it’s bulky, but it does­n’t ac­tu­ally weigh all that much (and still eas­ily fits in my back­pack). It’s even sur­pris­ingly er­gonomic to use on the train.

Usage-wise, I also think I’m really on to something. Just the ability to have a text editor open for taking notes in one window while I’m debugging some Turbo Pascal code in another window is extremely convenient (or taking notes while playing Zork!). It feels like this could have been a genuinely viable approach towards building a low-cost, multi-tasking CP/M environment.

Itching to build your own?

I don’t actually have an easy way to get files *off* of the machine yet, so for now the most useful part (the CP/NET file server written in Turbo Pascal) is kind of trapped on the machine. Stay tuned for a future update with all of the Verilog and TP code though (and shoot me an e-mail if you really can’t wait). At some point I should probably join the 21st century and get a GitHub account, too. Alas, that whole ‘free time’ thing…

...

Read the original on www.chrisfenton.com »

3 342 shares, 20 trendiness, 98 words and 1 minutes reading time

Record and share your terminal sessions, the right way

Record and share your ter­mi­nal ses­sions, the right way.

Forget screen record­ing apps and blurry video.

Enjoy a light­weight, purely text-based ap­proach to ter­mi­nal record­ing.

Start Recording

Supports Linux, ma­cOS and *BSD

asci­inema [as-kee-nuh-muh] is a free and open source so­lu­tion for record­ing

ter­mi­nal ses­sions and shar­ing them on the web. Read about how it works.


Record right where you work — in a terminal. To start, just run asciinema rec; to finish, hit Ctrl-D or type exit.

Any time you see a command you’d like to try in your own terminal, just pause the player and copy-paste the content you want. It’s just text, after all!

Easily em­bed an asci­icast player in your blog post, pro­ject doc­u­men­ta­tion page or in your con­fer­ence talk slides.

...

Read the original on asciinema.org »

4 322 shares, 31 trendiness, 312 words and 3 minutes reading time

Wall St. Bailout Stan on Twitter


Things that are il­le­gal to build in most American cities now, a thread:


...

Read the original on twitter.com »

5 306 shares, 24 trendiness, 240 words and 3 minutes reading time

Facebook and Barr Escalate Standoff Over Encrypted Messages

Lawmakers of both par­ties echoed those wor­ries on Tuesday, threat­en­ing to take ac­tion if the com­pa­nies did­n’t sat­isfy their con­cerns.

“You’re going to find a way to do this, or we’re going to do this for you,” said Senator Lindsey Graham, Republican of South Carolina and the chairman of the Judiciary Committee. “You’re either the solution or you’re the problem.”

If Mr. Barr wants to push the is­sue with Facebook or an­other tech com­pany, he could take the is­sue to court, as the gov­ern­ment did dur­ing the fight over en­cryp­tion with Apple in 2016. In that case, the Justice Department had se­cured a search war­rant for the phone of an at­tacker in the San Bernardino shoot­ing. Prosecutors suc­cess­fully pur­sued a court or­der com­pelling Apple’s as­sis­tance. Apple op­posed the or­der. But when the agency found an­other way to un­lock the phone, it dropped the case.

Throughout the hear­ing on Tuesday, Facebook and Apple rep­re­sen­ta­tives said the com­pa­nies were com­mit­ted to work­ing with law en­force­ment. The wit­ness from Facebook de­tailed how the com­pany could de­tect ma­li­cious con­tent de­spite en­cryp­tion.

Encrypting its mes­sag­ing prod­ucts is the cen­tral as­pect of Facebook’s plan to re­brand it­self as pri­vacy fo­cused, af­ter be­ing bat­tered for years by rev­e­la­tions that it mis­han­dled user data. But it has also put the com­pany, which is al­ready the sub­ject of con­sumer pri­vacy and an­titrust in­ves­ti­ga­tions, on an­other col­li­sion course with gov­ern­ments around the world.

...

Read the original on www.nytimes.com »

6 281 shares, 13 trendiness, 381 words and 4 minutes reading time

New airplane seat makes it easier to sleep in economy

A new seat design comes with an innovative solution to this inflight issue, using padded “wings” that fold out from behind both sides of the seat back — allowing both for additional privacy and a cushioned spot to rest heads for some shut-eye.

CNN Travel went along to find out more about what makes this seat dif­fer­ent, and to test out just how comfy this con­cept re­ally is.

Interspace is the brain­child of Luke Miles, New Territory’s founder and chief cre­ative of­fi­cer. He spent three years work­ing as Head of Design at Virgin Atlantic, so he knows his air­craft in­te­ri­ors in­side out.

The de­signer tells CNN Travel he’d no­ticed how in­no­v­a­tive air­plane cabin de­signs usu­ally fo­cus on busi­ness or first class ex­pe­ri­ences and he wanted to come up with a way to make the cheap seats com­fier.

“We’re really keen as a business on trying to — it sounds a bit cliche — but trying to push some innovation back into the majority,” says Miles.

The wings on Interspace fold man­u­ally in and out of the chair. This al­lows for a stream­lined look, and easy ac­cess to move up and down the row.

At the London launch, there were two seats on show, one de­pict­ing what it’s like to re­cline and the other in an up­right po­si­tion. Both seats al­low guests to play around with the wings’ set­tings.

The flex­i­bil­ity is in­trigu­ing. While it does seem that a whole cabin of pas­sen­gers un­furl­ing the wings at once might be a bit chaotic, it’d be pretty great to have that abil­ity to change things up dur­ing a long flight.

Seat designs already in circulation have experimented with built-in neck cushioning. Cathay Pacific’s economy seats on its A350 aircraft, for example, allow fliers to move their headrest into six different positions.

But Miles took the rad­i­cal move of erad­i­cat­ing the head­rest al­to­gether, point­ing out that chairs at home or in the of­fice don’t typ­i­cally in­clude them.

The seat pro­to­type on show at the launch was a car­bon fiber, light­weight de­sign — but the de­signer in­sists the wings could be fit­ted to most ex­ist­ing seats, what­ever their ma­te­r­ial.

“It’s just about very subtle, technological enablers, to just make the whole thing feel a bit more empathetic to you,” says Miles.

At the 2019 Aircraft Interiors Expo (AIX), CNN Travel tested out Airbus’ couch-style airplane seating idea, which combines three economy seats into one, allowing traveling partners to lounge together like they would in their living room, or solo flyers to stretch out and get comfy.

...

Read the original on www.cnn.com »

7 251 shares, 12 trendiness, 1260 words and 12 minutes reading time

L-systems

Biologist Aristid Lindenmayer created L-systems, or Lindenmayer systems, in 1968 as a way of formalizing patterns of bacteria growth. L-systems are a recursive, string-rewriting framework, commonly used today in computer graphics to visualize and simulate organic growth, with applications in plant development, procedural content generation, and fractal-like art.

The fol­low­ing de­scribes L-system fun­da­men­tals, how they can be vi­su­ally rep­re­sented, and sev­eral classes of L-systems, like con­text-sen­si­tive L-systems and sto­chas­tic L-systems. Much of the fol­low­ing has been de­rived from Przemyslaw Prusinkiewicz and Lindenmayer’s sem­i­nal work, The Algorithmic Beauty of Plants.

Fundamentally, an L-system is a set of rules that describe how to iteratively transform a string of symbols. A string, in this context, is a series of symbols, and can be thought of as a word comprised of characters. Each rule, known as a production, describes the transformation of one symbol into another symbol, a series of symbols, or no symbol at all. On each iteration, the productions are applied to each character simultaneously, resulting in a new series of symbols.

Productions in this rewriting system can be described with “before” and “after” states, often called the predecessor and successor; a production specifies that its predecessor symbol transforms into its successor symbols on every iteration. The number of iterations is known as the length of derivation. Given a starting word and a set of productions, repeatedly applying the productions illustrates how the word transforms over several iterations.

L-systems are formalized as a tuple G = (V, ω, P), where the components are:

* V, the alphabet, or all potential symbols in the string.

* ω, the starting word, also known as the axiom, comprised of symbols from V.

* P, a series of productions describing the transformations or rules.

The L-system in Figure 1 can be formalized by defining its axiom (ω) and a series of productions (P) as:

The alphabet of all valid symbols can be inferred. It is implied that a symbol without a matching production has an identity production, i.e. it maps to itself.

This fundamental form of an L-system is described as a deterministic, context-free L-system, or D0L-system (sometimes written DOL-system). D0L-systems are context-free, meaning that each predecessor is transformed regardless of its position in the string and its neighbors. Deterministic L-systems always produce the same result given the same configuration, as there is only one matching production for each predecessor.
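A D0L-system is only a few lines of code. The axiom and productions below are not the ones from the article's (garbled) figure; they are Lindenmayer's classic algae example, used here purely for illustration:

```python
def derive(axiom, productions, iterations):
    """Apply every production to every symbol simultaneously, per iteration."""
    word = axiom
    for _ in range(iterations):
        # A symbol with no matching production maps to itself (identity).
        word = "".join(productions.get(sym, sym) for sym in word)
    return word

# Lindenmayer's algae D0L-system: a -> ab, b -> a
print(derive("a", {"a": "ab", "b": "a"}, 4))  # abaababa
```

Because the system is deterministic and context-free, the same axiom and rules always yield the same word at every derivation length.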

L-systems can be represented visually via turtle graphics, of Logo fame. While L-systems are string rewriting systems, these strings are comprised of symbols, each of which can represent some command. A turtle in computer graphics is similar to a pen plotter drawing lines in a 2D space. Imagine giving instructions to a pen plotter to draw a square: “draw 1cm. turn right. draw 1cm. turn right. draw 1cm. turn right. draw 1cm”. Though plotters don’t really have an orientation, an L-system’s turtle can be represented by Cartesian coordinates x and y, and an angle that describes its forward direction. From there, symbols in a string can represent commands to change the state of the turtle.

To move a turtle around in 2D, symbols must be chosen to represent movement and rotation. The symbols F, + and − will be used here, as they are commonly selected for these commands in L-system interpreters. After deriving the result of an L-system using its production rules, the string can then be parsed from left to right, with F moving the turtle forward (drawing a line) and + and − rotating it by a fixed angle.

Two global variables, the step length and the rotation angle, indicate the magnitude of each symbol’s movement or rotation. In non-parametric L-systems, these magnitudes are constants in the system.

Following the line in Figure 2 from the bot­tom left cor­ner, the string can be read as forward, for­ward, for­ward, right, for­ward, for­ward, right…” and so on.
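A minimal interpreter for those commands might look like the following sketch; the unit step length, and the convention that + turns left while - turns right, are assumptions of this sketch rather than details from the article:

```python
import math

def run_turtle(commands, step=1.0, angle_deg=90.0):
    """Trace F/+/- commands; return the list of visited (x, y) points."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for c in commands:
        if c == "F":  # move forward, drawing a line segment
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            points.append((round(x, 9), round(y, 9)))
        elif c == "+":  # turn left by the global angle
            heading += angle_deg
        elif c == "-":  # turn right by the global angle
            heading -= angle_deg
        # any other symbol leaves the turtle state unchanged
    return points

# Four sides and four right turns trace a unit square back to the origin.
print(run_turtle("F-F-F-F")[-1])
```

With the angle set to 90°, the string "F-F-F-F" is exactly the pen-plotter square from the earlier analogy.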

A tur­tle may be de­cou­pled from an L-system. The L-system has a start­ing string and a set of pro­duc­tions and out­puts the re­sult­ing string. A tur­tle may take that fi­nal string as an in­put, and out­put some vi­sual rep­re­sen­ta­tion. For ex­am­ple, many of the il­lus­tra­tions shown here use the same L-system solvers, while us­ing dif­fer­ent tur­tles where ap­pro­pri­ate, like one tur­tle built us­ing CanvasRenderingContext2D and an­other us­ing WebGL.

Space-filling curves can be formalized via L-systems, resulting in a recursive, fractal-like pattern. More specifically, FASS curves, defined as space-filling, self-avoiding, simple, and self-similar. That is, a single, non-overlapping, recursive, continuous curve.

The Hilbert Curve (Figure 3) is an example of a FASS curve that can be represented as an L-system. Considered a node-rewriting technique, this L-system's productions declare that on each iteration, L and R symbols are replaced with more F, +, -, L and R symbols. With the angle δ defined as 90°, this results in a recursively generated square-wave shape along a curve. While F, + and - are interpreted by the turtle, other symbols can be used for productions. In this case, L and R are ignored when rendering, and are only relevant when rewriting the string and matching productions.
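The Hilbert productions can be derived with the same parallel-rewriting machinery. A sketch, using the rules as given in The Algorithmic Beauty of Plants with L and R as the rewriting symbols (some presentations use A and B instead):

```javascript
// Generating the Hilbert curve L-system: axiom L, where L and R
// are rewritten on each iteration while F, + and - are left
// in place for the turtle to interpret.
const productions = {
  L: '+RF-LFL-FR+',
  R: '-LF+RFR+FL-',
};

function derive(axiom, iterations) {
  let current = axiom;
  for (let i = 0; i < iterations; i++) {
    current = [...current]
      .map((symbol) => productions[symbol] ?? symbol)
      .join('');
  }
  return current;
}

// An order-n Hilbert curve contains 4^n - 1 forward moves.
const order2 = derive('L', 2);
const forwardMoves = [...order2].filter((s) => s === 'F').length;
console.log(forwardMoves); // 15
```

Counting the F symbols is a quick sanity check: each iteration quadruples the number of drawn segments, plus the joins, giving 4^n - 1 moves for order n.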

In addition to a turtle traversing a 2D plane, symbols may be introduced that instruct the turtle to draw in 3D. The Algorithmic Beauty of Plants uses the following symbols to control rendering in three dimensions:

* + and - : turn left and right (yaw) by δ.

* & and ^ : pitch down and up by δ.

* \ and / : roll left and right by δ.

* | : turn around (180°).

Like the 2D Hilbert Curve (Figure 3), a three-dimensional version can also be created (Figure 4) using these additional symbols, resulting in a 3D FASS curve.

The space-filling Hilbert curve can be represented as a single, continuous line. For organic, tree-like structures, branching is used to represent a diverging fork. Two new symbols, the square brackets, are introduced to represent a tree in an L-system's string: an opening bracket indicates the start of a new branch, the symbols between the brackets are members of that branch, and symbols after the closing bracket indicate returning to the point of the branch's origin. A stack is used to implement branching, storing the state of the turtle:

* [ : push the current turtle state onto the stack.

* ] : pop the top state from the stack; this becomes the current turtle state.

Symbols in a branch are transformed and replaced just as they were outside of a branch. This allows recursive, fractal-like behavior, with each branch forking into more branches, and so on.
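The bracket handling can be sketched by extending the turtle with a state stack. This is an illustrative sketch (the segment representation and the 45° fork example are assumptions, not taken from the article):

```javascript
// A sketch of bracketed-string handling: [ saves the turtle
// state on a stack and ] restores it, returning the turtle to
// the branch's origin before drawing continues.
function interpretBranching(str, d, delta) {
  let state = { x: 0, y: 0, heading: 90 }; // pointing "up"
  const stack = [];
  const segments = [];
  for (const symbol of str) {
    if (symbol === 'F') {
      const rad = (state.heading * Math.PI) / 180;
      const next = {
        x: state.x + d * Math.cos(rad),
        y: state.y + d * Math.sin(rad),
        heading: state.heading,
      };
      segments.push([[state.x, state.y], [next.x, next.y]]);
      state = next;
    } else if (symbol === '+') {
      state = { ...state, heading: state.heading + delta };
    } else if (symbol === '-') {
      state = { ...state, heading: state.heading - delta };
    } else if (symbol === '[') {
      stack.push(state); // remember the fork point
    } else if (symbol === ']') {
      state = stack.pop(); // rewind to the fork point
    }
  }
  return segments;
}

// A trunk that forks left and right: both branches start at the
// same point because ] rewinds the turtle to the fork.
const tree = interpretBranching('F[+F][-F]F', 1, 45);
console.log(tree.length); // 4 line segments
```

Because states are stored immutably, pushing a reference onto the stack is enough; nothing drawn inside a branch can disturb the saved fork point.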

Rather than productions evaluating symbols in isolation (context-free), rules may be defined that only match a symbol when it precedes or follows another specific symbol.

Context-sensitive L-systems contain production rules that specify symbols that must come before or after the predecessor in order to match, as opposed to context-free systems that evaluate predecessors in isolation.

These context rules are defined using < and > in the production rule, adjacent to the predecessor. In Figure 5, the first production rule only matches a symbol when it immediately follows another specific symbol, replacing the predecessor with its successor. This results in the symbol moving towards the right:

A similar system could be defined that propagates from right to left, using a production that replaces a symbol when a specific symbol appears after it in the string.
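A left-context matcher can be sketched with a small extension to the rewriter. The production objects and the b/a signal-propagation example below are assumptions chosen to illustrate the technique (it is the classic example from the L-systems literature, where a lone b walks rightward through a string of a's):

```javascript
// A sketch of a left-context rewriter: a production may require
// a specific symbol immediately before the predecessor.
// Productions here are { left, predecessor, successor } objects.
function rewriteWithContext(axiom, productions, iterations) {
  let current = axiom;
  for (let i = 0; i < iterations; i++) {
    current = [...current]
      .map((symbol, index) => {
        const match = productions.find(
          (p) =>
            p.predecessor === symbol &&
            (p.left === undefined || current[index - 1] === p.left)
        );
        return match ? match.successor : symbol;
      })
      .join('');
  }
  return current;
}

// Signal propagation: b < a -> b moves the b one step rightward
// each iteration, while b -> a erases it behind itself.
const signal = [
  { left: 'b', predecessor: 'a', successor: 'b' },
  { predecessor: 'b', successor: 'a' },
];
console.log(rewriteWithContext('baaaa', signal, 2)); // "aabaa"
```

Note that context is checked against the string from the previous iteration, not the partially rewritten one; that preserves the parallel-rewriting semantics.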

L-systems with these one-sided context productions may be considered 1L-systems. Productions may also have both a before-context and an after-context in systems considered 2L-systems. They can be represented as:

This production rule indicates that the predecessor will be replaced by its successor only when it appears between the specified left and right context symbols.

The previously described systems are all deterministic; the same system with the same input will always generate the same result. Stochastic L-systems are non-deterministic, defined by several productions that match the same predecessor, one of which is chosen randomly, according to its weight, on each iteration. The following production rules define that on each iteration, the predecessor has a 50% chance of being rewritten by the first production and a 50% chance of being rewritten by the second.

This non-determinism is useful for procedurally creating variety and the seemingly random results of nature.
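A weighted-choice rewriter can be sketched as follows. The production shape and the two grass-like expansions of F are illustrative assumptions; an injectable rng parameter makes runs repeatable for testing:

```javascript
// A sketch of a stochastic rewriter: several weighted
// productions may share a predecessor, and one is picked at
// random for each match. Pass a custom rng for repeatable runs.
function rewriteStochastic(axiom, productions, iterations, rng = Math.random) {
  const pick = (options) => {
    const total = options.reduce((sum, p) => sum + p.weight, 0);
    let roll = rng() * total; // roulette-wheel selection
    for (const p of options) {
      roll -= p.weight;
      if (roll <= 0) return p.successor;
    }
    return options[options.length - 1].successor;
  };
  let current = axiom;
  for (let i = 0; i < iterations; i++) {
    current = [...current]
      .map((symbol) => {
        const options = productions.filter((p) => p.predecessor === symbol);
        return options.length ? pick(options) : symbol;
      })
      .join('');
  }
  return current;
}

// F has a 50/50 chance of two different expansions each step.
const grass = [
  { predecessor: 'F', weight: 0.5, successor: 'F[+F]F' },
  { predecessor: 'F', weight: 0.5, successor: 'F[-F]F' },
];
console.log(rewriteStochastic('F', grass, 1));
// either "F[+F]F" or "F[-F]F"
```

Seeding the rng is how procedural systems get the best of both worlds: organic-looking variety, but the exact same plant every time a scene is reloaded.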

...

Read the original on jsantell.com »

8 235 shares, 25 trendiness, 308 words and 3 minutes reading time

OpenLayers

A high-performance, feature-packed library for all your mapping needs.

OpenLayers v6.1.1 is here! Check out the docs and the examples to get started. The full distribution can be downloaded from the release page.

OpenLayers makes it easy to put a dynamic map in any web page. It can display map tiles, vector data and markers loaded from any source. OpenLayers has been developed to further the use of geographic information of all kinds. It is completely free, Open Source JavaScript, released under the 2-clause BSD License (also known as the FreeBSD license).

Pull tiles from OSM, Bing, MapBox, Stamen, and any other XYZ source you can find. OGC mapping services and untiled layers are also supported.

Render vector data from GeoJSON, TopoJSON, KML, GML, Mapbox vector tiles, and other formats.

Leverages Canvas 2D, WebGL, and all the latest greatness from HTML5. Mobile support out of the box. Build lightweight custom profiles with just the components you need.

Style your map controls with straightforward CSS. Hook into different levels of the API or use 3rd-party libraries to customize and extend functionality.

Seen enough already? Go here to get started.

Get the latest release or dig through the archives.

Spend time learning the basics and graduate up to advanced mapping techniques.

Want to learn OpenLayers hands-on? Get started with the workshop.

Browse through the API docs for details on code usage.

In case you are not ready (yet) for the latest version of OpenLayers, we provide links to selected resources of older major versions of the software.

Latest v2: v2.13.1 (July 2013, i.e. really old). You'll find everything you need on the 2.x page.

Please consider upgrading to benefit from the latest features and bug fixes. Get the best performance and usability for free by using recent versions of OpenLayers.

...

Read the original on openlayers.org »

9 208 shares, 10 trendiness, 0 words and 0 minutes reading time

Machine Learning Crash Course

...

Read the original on developers.google.com »

10 208 shares, 32 trendiness, 3828 words and 32 minutes reading time

docker-slim/docker-slim

Don’t change any­thing in your Docker con­tainer im­age and minify it by up to 30x mak­ing it se­cure too!

Keep doing what you are doing. No need to change anything. Use the base image you want. Use the package manager you want. Don't worry about hand-optimizing your Dockerfile. You shouldn't have to throw away your tools and your workflow to have small container images.

Don't worry about manually creating Seccomp and AppArmor security profiles. You shouldn't have to become an expert in Linux syscalls, Seccomp and AppArmor to have secure containers. Even if you do know enough about them, reverse engineering your application's behavior can be time consuming.

docker-slim will optimize and secure your containers by understanding your application and what it needs, using various analysis techniques. It will throw away what you don't need, reducing the attack surface of your container. What if you need some of those extra things to debug your container? You can use dedicated debugging side-car containers for that (more details below).

docker-slim has been used with Node.js, Python, Ruby, Java, Golang, Rust, Elixir and PHP (some app types) running on Ubuntu, Debian, CentOS, Alpine and even Distroless.

Watch this screencast to see how an application image is minified by more than 30x.

When docker-slim runs it gives you an opportunity to interact with the temporary container it creates. By default, it will pause and wait for your input before it continues its execution. You can change this behavior using the --continue-after flag.

If your application exposes any web interfaces (e.g., when you have a web server or an HTTP API), you'll see the port numbers on the host machine you will need to use to interact with your application (look for the port.list and target.port.info messages on the screen). For example, in the screencast above you'll see that the internal application port 8000 is mapped to port 32911 on your host.

Note that docker-slim will interact with your application for you if you enable HTTP probing with the --http-probe flag or other related HTTP probe flags. Some web applications built with scripting languages like Python or Ruby require service interactions to load everything in the application. Enable HTTP probing unless it gets in your way.

Note: The examples are in a separate repository: https://github.com/docker-slim/examples

Now you can run docker-slim in containers and you get more convenient reporting defaults. For more info about the latest release see the CHANGELOG.

If the directory where you extracted the binaries is not in your PATH then you'll need to run your docker-slim commands from that directory.

To use the Docker image distribution just start using the dslim/docker-slim container image.

See the USAGE DETAILS section for more details. You can also get additional information about the parameters by running docker-slim. Run docker-slim without any parameters and you'll get a high-level overview of the available commands. Run a docker-slim command without any parameters and you'll get more information about that command (e.g., docker-slim build).

If you want to auto-generate a Seccomp profile AND minify your image use the build command. If you only want to auto-generate a Seccomp profile (along with other interesting image metadata) use the profile command.

Step two: use the generated Seccomp profile

You can use the generated Seccomp profile with your original image or with the minified image DockerSlim created.

The demo runs on Mac OS X, but you can build a Linux version. Note that these steps are different from the steps in the demo video.

Get the docker-slim Mac, Linux or Linux ARM binaries. Unzip them and optionally add their directory to your PATH environment variable if you want to use the app from other locations.

The extracted directory contains two binaries:

* docker-slim <- the main docker-slim application

* docker-slim-sensor <- the sensor application used to collect information from running containers

Clone the examples repo to use the sample apps (note: the examples have been moved to a separate repo). You can skip this step if you have your own app.

Create a Docker image for the sample node.js app in examples/node_ubuntu. You can skip this step if you have your own app.

docker-machine start default; eval "$(docker-machine env default)"; see the Docker connect options section for more details.

DockerSlim creates a special container based on the target image you provided. It also creates a resource directory where it stores the information it discovers about your image.

By default, docker-slim will run its HTTP probe against the temporary container. If you are minifying a command line tool that doesn't expose any web service interface you'll need to explicitly disable HTTP probing (by setting --http-probe=false).

Use curl (or other tools) to call the sample app (optional)

This is an optional step to make sure the target app container is doing something. For some applications it's required, as they load new application resources dynamically based on the requests they are processing (e.g., Ruby or Python).

You'll see the mapped ports printed to the console when docker-slim starts the target container. You can also get the port number from either the docker ps or docker port commands. The current version of DockerSlim doesn't allow you to map exposed network ports (it works like docker run … -P).

Press enter and wait until docker-slim says it's done

By default, or when HTTP probing is enabled explicitly, docker-slim will continue its execution once the HTTP probe is done running. If you explicitly picked a different continue-after option, follow the expected steps. For example, for the enter continue-after option you must press the enter key on your keyboard.

If HTTP probing is enabled (when --http-probe is set), continue-after is set to enter, and you press the enter key before the built-in HTTP probe is done, the probe might produce an EOF error because docker-slim will shut down the target container before all probe commands are done executing. It's ok to ignore it unless you really need the probe to finish.

Once DockerSlim is done, check that the new minified image is there

You should see my/sample-node-app.slim in the list of images. Right now all generated images have .slim at the end of their names.

* build - Collect fat image information and build a slim image from it

* info - Collect fat image information and reverse engineer its Dockerfile (no runtime container analysis)

* --report - command report location (target location where to save the executed command results; slim.report.json by default; set it to off to disable)

* --check-version - check if the current version is outdated

* --log-format - set the format used by logs ('text' (default), or 'json')

* --state-path value - DockerSlim state base path (must set it if the DockerSlim binaries are not in a writable directory!)

* --archive-state - Archives DockerSlim state to the selected Docker volume (default volume - docker-slim-state). By default, enabled when DockerSlim is running in a container (disabled otherwise). Set it to off to disable explicitly.

* --in-container - Set it to true to explicitly indicate that DockerSlim is running in a container (if it's not set DockerSlim will try to analyze the environment where it's running to determine if it's containerized)

To get more command line option information run docker-slim without any parameters or select one of the top level commands to get the command-specific information.

To disable the version checks set the global --check-version flag to false (e.g., --check-version=false) or use the DSLIM_CHECK_VERSION environment variable.

* --http-probe - enables HTTP probing (ENABLED by default; you have to disable the probe if you don't need it by setting the flag to false)

* --http-probe-cmd - additional HTTP probe command [zero or more]

* --http-probe-retry-count - number of retries for each HTTP probe (default: 5)

* --http-probe-retry-wait - number of seconds to wait before retrying HTTP probe (doubles when target is not ready; default: 8)

* --http-probe-ports - explicit list of ports to probe (in the order you want them to be probed; excluded ports are not probed!)

* --http-probe-full - do full HTTP probe for all selected ports (if false, finish after first successful scan; default: false)

* --show-clogs - show container logs (from the container used to perform dynamic inspection)

* --show-blogs - show build logs (when the minified container is built)

* --remove-file-artifacts - remove file artifacts when command is done (note: you'll lose autogenerated Seccomp and AppArmor profiles unless you copy them with the copy-meta-artifacts flag or if you archive the state)

* --tag - use a custom tag for the generated image (instead of the default)

* --mount - mount volume analyzing image (the mount parameter format is identical to the -v mount command in Docker) [zero or more]

* --include-path - Include directory or file from image [zero or more]

* --include-bin value - Include binary from image (executable or shared object using its absolute path)

* --include-exe value - Include executable from image (by executable name)

* --env - override ENV analyzing image [zero or more]

* --expose - use additional EXPOSE instructions analyzing image [zero or more]

* --link - add link to another container analyzing image [zero or more]

* --etc-hosts-map - add a host-to-IP mapping to /etc/hosts analyzing image [zero or more]

* --container-dns - add a dns server analyzing image [zero or more]

* --container-dns-search - add a dns search domain for unqualified hostnames analyzing image [zero or more]

* --from-dockerfile - The source Dockerfile name to build the fat image before it's minified.

* --use-local-mounts - Mount local paths for target container artifact input and output (off, by default).

* --use-sensor-volume - Sensor volume name to use (set it to your Docker volume name if you manage your own docker-slim sensor volume).

* --keep-tmp-artifacts - Keep temporary artifacts when command is done (off, by default).

The --include-path option is useful if you want to customize your minified image by adding extra files and directories. The --include-path-file option allows you to load multiple includes from a newline-delimited file. Use this option if you have a lot of includes. The includes from --include-path and --include-path-file are combined together. Future versions will also include the --exclude-path option to give even more control.

The --continue-after option is useful if you need to script docker-slim. If you pick the probe option then docker-slim will continue executing the build command after the HTTP probe is done executing. If you pick the timeout option docker-slim will allow the target container to run for 60 seconds before it attempts to collect the artifacts. You can specify a custom timeout value by passing the number of seconds you need instead of the timeout string. If you pick the signal option you'll need to send a USR1 signal to the docker-slim process.

The --include-shell option provides a simple way to keep a basic shell in the minified container. Not all shell commands are included. To get additional shell commands or other command line utilities use the --include-exe and/or --include-bin options. Note that the extra apps and binaries might miss some of their non-binary dependencies (which don't get picked up during static analysis). For those additional dependencies use the --include-path and --include-path-file options.

The --from-dockerfile option makes it possible to build a new minified image directly from a source Dockerfile. Pass the Dockerfile name as the value for this flag and pass the build context directory or URL instead of the docker image name as the last parameter for the docker-slim build command: docker-slim build --from-dockerfile Dockerfile --tag my/custom_minified_image_name. If you want to see the console output from the build stages (when the fat and slim images are built) add the --show-blogs build flag. Note that the build console output is not interactive and it's printed only after the corresponding build step is done. The fat image created during the build process has the .fat suffix in its name. If you specify a custom image tag (with the --tag flag) the .fat suffix is added to the name part of the tag. If you don't provide a custom tag the generated fat image name will have the following format: docker-slim-tmp-fat-image.. The minified image name will have the .slim suffix added to that auto-generated container image name (docker-slim-tmp-fat-image.). Take a look at the python examples to see how the --from-dockerfile flag is used.

The --use-local-mounts option is used to choose how the docker-slim sensor is added to the target container and how the sensor artifacts are delivered back to the master. If you enable this option you'll get the original docker-slim behavior, where it uses local file system volume mounts to add the sensor executable and to extract the artifacts from the target container. This option doesn't always work as expected in dockerized environments where docker-slim itself is running in a Docker container. When this option is disabled (the default behavior) a separate Docker volume is used to mount the sensor, and the sensor artifacts are explicitly copied from the target container.

The current version of docker-slim is able to run in containers. It will try to detect if it's running in a containerized environment, but you can also tell docker-slim explicitly using the --in-container global flag.

You can run docker-slim in your container directly or you can use the docker-slim container in your containerized environment. If you are using the docker-slim container, make sure it is configured with the Docker IPC information so it can communicate with the Docker daemon. The most common way to do this is by mounting the Docker unix socket into the docker-slim container. Some containerized environments (like Gitlab and their dind service) might not expose the Docker unix socket to you, so you'll need to make sure the environment variables used to communicate with Docker (e.g., DOCKER_HOST) are passed to the docker-slim container. Note that if those environment variables reference any kind of local host names, those names need to be replaced or you need to tell docker-slim about them using the --etc-hosts-map flag. If those environment variables reference local files, those files (e.g., files for TLS cert validation) will need to be copied to a temporary container, so that the temporary container can be used as a data container to make them accessible to the docker-slim container.

When docker-slim runs in a container it will attempt to save its execution state in a separate Docker volume. If the volume doesn't exist it will try to create it (docker-slim-state, by default). You can pick a different state volume or disable this behavior completely by using the global --archive-state flag. If you do want to persist the docker-slim execution state (which includes the seccomp and AppArmor profiles) without using the state archiving feature, you can mount your own volume that maps to the /bin/.docker-slim-state directory in the docker-slim container.

By default, docker-slim will try to create a Docker volume for its sensor unless one already exists. If this behavior is not supported by your containerized environment you can create a volume separately and pass its name to docker-slim using the --use-sensor-volume flag.

Here's a basic example of how to use the containerized version of docker-slim:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock dslim/docker-slim build your-docker-image-name

Here's a GitLab example for their dind .gitlab-ci.yml config file:

docker run -e DOCKER_HOST=tcp://$(grep docker /etc/hosts | cut -f1):2375 dslim/docker-slim build your-docker-image-name

Here's a CircleCI example for their remote docker .circleci/config.yml config file (used after the setup_remote_docker step):

docker create -v /dcert_path --name dcert alpine:latest /bin/true

docker cp $DOCKER_CERT_PATH/. dcert:/dcert_path

docker run --volumes-from dcert -e DOCKER_HOST=$DOCKER_HOST -e DOCKER_TLS_VERIFY=$DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH=/dcert_path dslim/docker-slim build your-docker-image-name

If you don't specify any Docker connect options docker-slim expects to find the following environment variables: DOCKER_HOST, DOCKER_TLS_VERIFY (optional), DOCKER_CERT_PATH (required if DOCKER_TLS_VERIFY is set to "1").

On Mac OS X you get them when you run eval "$(docker-machine env default)" or when you use the Docker Quickstart Terminal.

If the Docker environment variables are configured to use TLS and to verify the Docker cert (default behavior), but you want to disable the TLS verification, you can override the TLS verification behavior by setting --tls-verify to false:

You can override all Docker connection options using these flags: --host, --tls, --tls-verify, --tls-cert-path. These flags correspond to the standard Docker options (and the environment variables).

If you want to use TLS with verification:

If you want to use TLS without verification:

If the Docker environment variables are not set and you don't specify any Docker connect options, docker-slim will try to use the default unix socket.

If the HTTP probe is enabled (note: it is enabled by default) it will default to running GET / with HTTP and then HTTPS on every exposed port. You can add additional commands using the --http-probe-cmd and --http-probe-cmd-file options.

The --http-probe-cmd option is good when you want to specify a small number of simple commands where you select some or all of these HTTP command options: protocol, method (defaults to GET), resource (path and query string).

If you only want to use custom HTTP probe commands, and you don't want the default GET / command added to the command list you explicitly provided, you'll need to set --http-probe to false when you specify your custom HTTP probe command. Note that this inconsistency will be addressed in future releases to make it less confusing.

Here are a couple of examples:

Adds two extra probe commands: GET /api/info and POST /submit (tries http first, then tries https):

docker-slim build --show-clogs --http-probe-cmd /api/info --http-probe-cmd POST:/submit my/sample-node-app-multi

Adds one extra probe command: POST /submit (using only http):

...

Read the original on github.com »
