10 interesting stories served every morning and every evening.




1 1,557 shares, 64 trendiness, words and minutes reading time

Introducing the new and upgraded Framework Laptop

When we launched the Framework Laptop a year ago, we shared a promise for a better kind of Consumer Electronics: one in which you have the power to upgrade, repair, and customize your products to make them last longer and fit your needs better. Today, we’re honored to deliver on that promise with a new generation of the Framework Laptop, bringing a massive performance upgrade with the latest 12th Gen Intel® Core™ processors, available for pre-order now. We spent the last year gathering feedback from early adopters to refine the product as we scale up. We’ve redesigned our lid assembly for significantly improved rigidity and carefully optimized standby battery life, especially for Linux users. Finally, we continue to expand the Expansion Card portfolio, with a new 2.5 Gigabit Ethernet Expansion Card coming soon.

In addition to launching our new Framework Laptops with these upgrades, we’re living up to our mission by making all of them available individually as modules and combined as Upgrade Kits in the Framework Marketplace. This is perhaps the first time ever that generational upgrades are available in a high-performance thin and light laptop, letting you pick the improvements you want without needing to buy a full new machine.

Framework Laptops with 12th Gen Intel® Core™ processors are available for pre-order today in all countries we currently ship to: US, Canada, UK, Germany, France, Netherlands, Austria, and Ireland. We’ll be launching in additional countries throughout the year, and you can help us prioritize by registering your interest. We’re using a batch pre-order system, with only a fully-refundable $100/€100/£100 deposit required at the time of pre-order. Mainboards with 12th Gen Intel® Core™ processors, our revamped Top Cover, and the Upgrade Kit that combines the two are available for waitlisting on the Marketplace today. You can register to get notified as soon as they come in stock. The first batch of new laptops as well as the new Marketplace items start shipping this July.

12th Gen Intel® Core™ processors bring major architectural advancements, adding 8 Efficiency Cores on top of 4 or 6 Performance Cores with Hyper-Threading. This means the top version we offer, the i7-1280P, has a mind-boggling 14 CPU cores and 20 threads. All of this results in an enormous increase in performance. In heavily multi-threaded benchmarks like Cinebench R23, we see results that are double the last generation i7-1185G7 processor. In addition to the top of the line i7-1280P configuration, we have i5-1240P and i7-1260P options available, all supporting up to 30W sustained performance and 60W boost.
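As a sanity check on those figures, the core and thread arithmetic of the hybrid design works out as follows (a quick illustration; the 6 Performance plus 8 Efficiency core split is the i7-1280P configuration):

```python
# Hybrid topology of the i7-1280P as described above:
# Performance cores support Hyper-Threading (2 threads each),
# Efficiency cores run a single thread each.
p_cores = 6
e_cores = 8

total_cores = p_cores + e_cores          # 14 cores
total_threads = p_cores * 2 + e_cores    # 20 threads

print(total_cores, total_threads)  # 14 20
```

The i5-1240P and i7-1260P variants follow the same pattern with 4 Performance cores instead of 6.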

We launched a new product comparison page, letting you compare all of the versions of the Framework Laptop now available. Every model is equally thin and light, and previous-generation configurations remain available until we run out of the limited inventory we have left. If you ever need more performance in the future, you can upgrade to the latest modules whenever you’d like!

We continue to focus on solid Linux support, and we’re happy to share that Fedora 36 works fantastically well out of the box, with full hardware functionality including WiFi and fingerprint reader support. Ubuntu 22.04 also works great after applying a couple of workarounds, and we’re working to eliminate that need. We also studied and carefully optimized the standby power draw of the system in Linux. You can check compatibility with popular distros as we continue to test on our Linux page or in the Framework Community.

In redesigning the Framework Laptop’s lid assembly, we switched from an aluminum forming process to a full CNC process on the Top Cover, substantially improving rigidity. While there is more raw material required when starting from a solid block of 6063 aluminum, we’re working with our supplier Hamagawa to reduce environmental impact. We currently use 75% pre-consumer-recycled alloy and are searching for post-consumer sources. The Top Cover (CNC) is built into all configurations of the Framework Laptop launching today, and is available as a module both as part of the Upgrade Kit or individually.

Support for Ethernet has consistently been one of the most popular requests from the Framework Laptop community. We started development on an Expansion Card shortly after launch last year and are now ready to share a preview of the results. Using a Realtek RTL8156 controller, the Ethernet Expansion Card supports 2.5Gbit along with 10/100/1000Mbit Ethernet. This card will be available later this year, and you can register to get notified in the Framework Marketplace.

We’re incredibly happy to live up to the promise of longevity and upgradeability in the Framework Laptop. We also want to ensure we’re reducing waste and respecting the planet by enabling reuse of modules. If you’re upgrading to a new Mainboard, check out the open source designs we released earlier this year for creative ways to repurpose your original Mainboard. We’re starting to see some incredible projects coming out of creators and developers. To further reduce environmental impact, you can also make your Framework Laptop carbon neutral by picking up carbon capture in the Framework Marketplace.

We’re ramping up into production now with our manufacturing partner Compal at a new site in Taoyuan, Taiwan, a short drive from our main fulfillment center, helping reduce the risk of supply chain and logistics challenges. We recommend getting your pre-order in early to hold your place in line and to give us a better read on production capacity needs. We can’t wait to see what you think of these upgrades, and we’re looking forward to remaking Consumer Electronics with you!

...

Read the original on community.frame.work »

2 1,020 shares, 42 trendiness, words and minutes reading time

This "amateur" programmer fought cancer with 50 Nvidia Geforce 1080Ti

“This is what a programmer should look like!”

“The OP is really cool, technology to save the world.”

“Compared with him, I feel like my code is meaningless.”

These comments are from a thread in the V2EX forum, a gathering place for programmers in China.

When you first read these comments, you may think they are a bit exaggerated, but for the people and families who have been saved by the post, such comments ring true.

Because here’s what the post does: it detects breast cancer.

In 2018, a programmer named “coolwulf” started a thread about a website he had made. Users just need to upload their X-ray images, and an AI carries out a fast diagnosis of breast cancer.

Furthermore, the accuracy of tumor identification has reached 90%. In short, it lets the AI “read the film” for you, with an accuracy almost comparable to professional doctors, and it is completely free.
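For context on what a claim like “90% accuracy” involves, diagnostic models are usually scored from a confusion matrix. The sketch below uses made-up counts purely for illustration; these are not coolwulf’s actual evaluation numbers:

```python
# Illustrative only: how diagnostic accuracy is typically scored.
# The counts below are hypothetical, not results from coolwulf's model.
def diagnostic_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)           # fraction of real tumors caught
    specificity = tn / (tn + fp)           # fraction of healthy cases cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=90, fp=10, tn=90, fn=10)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

For screening, sensitivity (not missing tumors) usually matters more than raw accuracy, which is why a free second-opinion tool can be valuable even when a doctor confirms the final diagnosis.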

As we all know, the cure rate of breast cancer is high if it is found in its early stages. However, because the early symptoms are not obvious, it is easy to miss the best window for treatment, and the disease is often found at an advanced stage.

A reliable AI for tumor detection, though, can let the many patients who cannot get an adequate medical diagnosis in time learn of their condition earlier, or provide a second opinion. Even if a doctor is needed to confirm the diagnosis in the end, that is invaluable in areas where medical resources are tight.

Breast cancer also has the highest incidence of all cancers ▼

This post by coolwulf immediately garnered a rare several hundred responses. In the comments section were people anxiously awaiting their doctor’s test results.

Others had family members with breast cancer and were filled with uncertainty and fear. coolwulf’s project gave them hope.

With this, of course, came curiosity about the project and coolwulf himself. Where did the huge amount of clinical data and hardware computing power come from? More importantly, who was this superpower willing to open it all up for free?

coolwulf did not reply to most of these questions one by one. He soon left, hiding from the spotlight, and rarely appeared again. But in 2022, he returned with an even more important brain cancer project, and the mystery remained.

To clear up the fog around coolwulf, we reached out to him in the Midwest of the United States. After a few rounds of interviews, let’s hear the story of coolwulf, also known as Hao Jiang.

As a student, he pursued his undergraduate degree in the Department of Physics at Nanjing University and his PhD in the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan. He sums up his career concisely: “Although my main career is in medical imaging, I am also an ‘amateur’ programmer doing open source projects in my spare time.”

He told us that his parents are not medical professionals, and his interest in programming was fostered from a young age. Coolwulf spent his free time in school writing code. In the days before GitHub existed, he would often post his side projects on programmer communities like sourceforge.net or on his own personal website.

Around 2001 he contributed to the Mozilla Foundation’s open source work. At the time there were two early efforts to build Mozilla’s Gecko rendering engine into standalone browsers. One was K-Meleon (a browser that was quite popular in China in the early years), to which he contributed code.

The other project, codenamed Phoenix, was the predecessor of the familiar Firefox browser. He was interviewed by the media more than ten years ago because of this.

Starting in 2009, coolwulf also wrote a website that helps people book hotels at low prices; many international students in North America might have used it. All of these were just spare-time projects born of personal interest.

After completing his studies in medical imaging at the University of Michigan, he worked successively as Director of R&D in imaging at Bruker and Siemens, directing product development for imaging detectors. Afterwards, he and Weiguo Lu, now a tenured professor at the University of Texas Southwestern Medical Center, founded two software companies targeting radiotherapy and began product development for cancer radiotherapy and artificial intelligence technologies.

PS: Not only did he have a side career, he was also the starting point guard on the basketball team at Nanjing University back in the day.

Coolwulf led the development of the Bruker Photon III

He might well have gone on to become a remote, successful scientist-entrepreneur had he continued down this path. But the following event was both a turning point in coolwulf’s life and the starting point that brought him closer to thousands of families and lives.

A 34-year-old Nanjing University alumna died after missing the best window for breast cancer treatment, leaving behind only a 4-year-old son. After witnessing this life and death, and the family destroyed by the disease, coolwulf lamented the loss. At the same time, he learned that many breast cancer patients lack access to detection, which easily delays diagnosis.

So coolwulf, who happened to have the right professional experience, hit on the idea of using AI to read X-rays. However, making an AI that could accurately detect tumors was not easy.

Coolwulf first downloaded the DDSM and MIAS datasets from the University of South Florida website. Because the data format was old, not in standard DICOM, and the images were film scans, he wrote a special program to convert all the information into a usable form.
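As an illustration of the kind of conversion step described above (a hedged sketch only: the real program parsed the DDSM/MIAS file formats, which this example does not attempt; the random array simply stands in for one decoded film scan):

```python
import numpy as np

# Older mammography scans often used 12- or 16-bit grayscale in
# non-DICOM containers, which must be rescaled before feeding a model.
rng = np.random.default_rng(0)
raw_scan = rng.integers(0, 4096, size=(64, 64), dtype=np.uint16)  # 12-bit range

def to_uint8(img):
    """Linearly rescale an integer grayscale image to the full 0-255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return np.round(255 * (img - lo) / (hi - lo)).astype(np.uint8)

converted = to_uint8(raw_scan)
print(converted.dtype, converted.min(), converted.max())
```

The same normalization is needed regardless of source format, which is why a one-off converter made several incompatible datasets usable together.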

Then he wrote an email asking for permission to use INbreast, a non-public breast cancer dataset from the University of Porto. At the same time, he had to keep reading a great deal of literature and writing the corresponding model code.

The request email sent by coolwulf at the time ▼

However, this was not enough; formally and efficiently training the model requires serious hardware. So he built it out of his own pocket: a local GPU cluster of 50 Nvidia GTX 1080 Ti cards.

At the time, 50 graphics cards were not easy to come by. Due to crypto mining, GPUs were in severe shortage and heavily overpriced on eBay, so coolwulf had to ask many friends to help him watch online vendors such as Newegg, Amazon, and Dell, and grab GPUs whenever they came in stock. After much effort, he finally completed the site’s preparation.
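For a rough sense of the cluster’s scale, here is a back-of-the-envelope estimate using the commonly quoted peak FP32 rate of a GTX 1080 Ti (about 11.3 TFLOPS; real training throughput is well below peak):

```python
# Back-of-the-envelope scale of a 50-card GTX 1080 Ti cluster.
cards = 50
tflops_per_card = 11.3  # approximate peak FP32 per card

total_tflops = cards * tflops_per_card
print(f"~{total_tflops:.0f} TFLOPS peak FP32 across the cluster")
```

Even at a fraction of peak, that is a substantial amount of training compute for a self-funded side project in 2018.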

Yes, in addition to gaming and mining, graphics cards have more uses ▼

The free AI breast cancer detection website took coolwulf about three months of spare time; sometimes he had to sleep in his office to get things done before the site finally went live in 2018.

He said he’s not sure how many people have actually used it, because the data is not saved on the server out of patient privacy concerns. But during that time he received many thank-you emails from patients, a large share of them from China. Users really did use the website to catch tumors, especially people in remote areas with limited medical resources, which amounts to snatching time from the hands of death.

“The first one had the wrong photo. The tumor was found after retesting” (from coolwulf) ▼

A few years ago, this technology was not as widespread as it is now, so coolwulf’s project was something of a trailblazer. The website also gained a lot of attention from the industry; many medical institutions at home and abroad, such as Fudan University Hospital, expressed their gratitude to him by email and offered financial and technical support.

After all, the whole project was self-funded by coolwulf, and that is not a small amount of money.

As for why he doesn’t commercialize the website and charge for it: we asked him that too.

coolwulf’s answer was modest but firm: “Cancer patients, as well as their families, have endured too much. I believe everyone wants to help them, and I happen to have the ability to do so.” In this way, he thanked many people but didn’t take any financial assistance, and wrapped up everything by himself.

In addition to the website, there was a desktop version of the testing software at the time ▼

By 2021, coolwulf had reached a second critical turning point in his life. His colleague’s cousin had a brain tumor with a poor outlook and was treated with whole brain radiation therapy. Unfortunately, a few months after the whole brain radiotherapy, the tumor returned, and there was no treatment left but to wait for death to come.

Whole brain radiotherapy eliminates tumors on a large scale through radiation; it kills the cancer cells but also damages normal brain tissue, while reducing the occurrence of new lesions.

In less clinical terms, whole brain radiotherapy is more like an indiscriminate attack. Because critical brain structures such as the brainstem and optic nerves can only tolerate a limited radiation dose, whole brain radiotherapy is usually a once-in-a-lifetime treatment.
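The once-in-a-lifetime limit comes down to cumulative dose tolerance, which clinicians often reason about with the standard linear-quadratic “biologically effective dose” (BED) model. The numbers below are textbook-style illustrations, not clinical guidance:

```python
# Why dose tolerance limits re-treatment: the linear-quadratic BED model.
def bed(fractions, dose_per_fraction, alpha_beta):
    """BED = n * d * (1 + d / (alpha/beta)), all doses in Gy."""
    return fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# A common whole-brain schedule is 10 fractions of 3 Gy; late-responding
# normal brain tissue is often modeled with alpha/beta = 2 Gy.
whole_brain = bed(fractions=10, dose_per_fraction=3.0, alpha_beta=2.0)
print(f"BED = {whole_brain:.0f} Gy")
```

A full course already pushes normal tissue near its modeled tolerance, which is why repeating whole brain radiotherapy is rarely an option.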

This incident completely changed coolwulf’s perspective, and he decided to take on an industry-wide challenge: pushing AI beyond the detection stage and into actual treatment.

It is worth knowing that whole brain radiation therapy is the most common treatment option for brain tumors today; in the United States alone, 200,000 people receive it each year. But is it necessary to accept the risks of whole brain radiotherapy for patients with multiple brain tumors?

Not really, because there is another kind of treatment: stereotactic radiotherapy. Compared with whole brain radiotherapy, stereotactic radiotherapy is far more focused and can precisely target the diseased tissue without hurting normal tissue.

The gamma knife, for example, is a kind of stereotactic radiotherapy machine. This therapy has much milder side effects, is less harmful to patients, and can be used multiple times.

There is also a general consensus in the academic community that stereotactic radiotherapy offers patients a better quality of life while being more effective. The only problem is that stereotactic radiotherapy places far greater demands on already scarce medical resources.

Once this protocol is adopted, the oncologist or neurosurgeon has to precisely outline and label each tumor, and the medical physicist has to make a precise treatment plan for each one; saving a single patient takes a great deal of time.

Therefore, doctors almost always prefer whole brain radiotherapy to stereotactic radiotherapy when the patient has 5 or more brain lesions.

But AI may be able to share the doctors’ workload. So, once again, coolwulf went to work, this time to make stereotactic radiotherapy available to more brain cancer patients.

But this time the problem was significantly more challenging, and he could no longer do it alone. So he approached the University of Texas Southwestern Medical Center and Stanford University for collaboration.

With the help and efforts of many people, three AI models were recently developed; among them:

* a model based on optimized radiation dose maps that quickly segments multiple lesions into different treatment courses.

The three models complement each other and correspond to the physician’s workflow, significantly reducing the workload when using stereotactic radiotherapy.

This project, presented at the 2022 AAPM Spring Clinical Meeting and the 2022 AAPM Annual Meeting, has once again earned widespread industry recognition.

coolwulf, along with his coauthors, is also accelerating efforts to make the entire stereotactic radiotherapy community aware of these achievements, so the technology can be adopted to actually help more patients. In interviews, coolwulf repeatedly stressed that he is in no way alone in achieving the results he has today.

He hopes that we will publish the list of collaborators, because everyone on it is a hero quietly fighting cancer.

In recent years, the cancer mortality rate has dropped by 30% compared to 30 years ago. At this rate, perhaps one day in the future, cancer will no longer be a terminal disease.

But there is no straightforward bridge to that future; countless people like coolwulf are walking it over the abyss. To conclude the article, let’s borrow a comment from a user on Reddit.

...

Read the original on howardchen.substack.com »

3 929 shares, 38 trendiness, words and minutes reading time

Text-to-Image Diffusion Models

We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
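For readers unfamiliar with the FID metric cited here: it is the Fréchet distance between Gaussians fitted to image features. A minimal sketch of the formula, with random arrays standing in for real Inception features:

```python
import numpy as np
from scipy.linalg import sqrtm

# FID = ||mu_a - mu_b||^2 + Tr(Cov_a + Cov_b - 2 * (Cov_a @ Cov_b)^{1/2}),
# computed over feature vectors extracted from the two image sets.
def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b).real  # matrix square root of the product
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))  # stand-in features, not real activations
print(fid(a, a))  # identical sets give (numerically) ~0; lower is better
```

Real FID evaluation uses features from a pretrained Inception network over tens of thousands of images; this sketch only shows the distance computation itself.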

A small cactus wearing a straw hat and neon sunglasses in the Sahara desert.

A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat.

Sprouts in the shape of text ‘Imagen’ coming out of a fairytale book.

A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.

A single beam of light enter the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.

Visualization of Imagen. Imagen uses a large frozen T5-XXL encoder to encode the input text into embeddings. A conditional diffusion model maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the image 64×64→256×256 and 256×256→1024×1024.

* We show that large pretrained frozen text encoders are very effective for the text-to-image task.

* We show that scaling the pretrained text encoder size is more important than scaling the diffusion model size.

* We introduce a new thresholding diffusion sampler, which enables the use of very large classifier-free guidance weights.

* We introduce a new Efficient U-Net architecture, which is more compute efficient, more memory efficient, and converges faster.

* On COCO, we achieve a new state-of-the-art COCO FID of 7.27; and human raters find Imagen samples to be on par with reference images in terms of image-text alignment.

* Human raters strongly prefer Imagen over other methods, in both image-text alignment and image fidelity.
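Two of the bullets above can be sketched concretely. The guidance update is the standard classifier-free guidance combination, and the thresholding function follows the dynamic-thresholding idea (clip the predicted image to a per-sample percentile s, then rescale by s); both are simplified illustrations rather than code from the paper:

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, w):
    # w = 1 recovers the plain conditional prediction; large w improves
    # alignment but oversaturates pixels, which thresholding then corrects.
    return eps_uncond + w * (eps_cond - eps_uncond)

def dynamic_threshold(x0, percentile=99.5):
    # Pick a per-sample scale s from the magnitudes, clip, and rescale,
    # keeping the predicted image inside the valid [-1, 1] range.
    s = np.percentile(np.abs(x0), percentile)
    s = max(s, 1.0)  # never shrink below the usual static [-1, 1] clip
    return np.clip(x0, -s, s) / s

x0 = np.array([-3.0, -0.5, 0.0, 0.5, 4.0])  # oversaturated prediction
print(dynamic_threshold(x0, percentile=100))
```

Static clipping to [-1, 1] would discard the relative structure of the large values; rescaling by s preserves it, which is what makes very large guidance weights usable.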


There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may run into the danger of dropping modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when people are depicted. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards.

A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.

A giant cobra snake on a farm. The snake is made out of corn.

We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!

...

Read the original on gweb-research-imagen.appspot.com »

4 929 shares, 4 trendiness, words and minutes reading time

Text-to-Image Diffusion Models

We pre­sent Imagen, a text-to-im­age dif­fu­sion model with an un­prece­dented de­gree of pho­to­re­al­ism and a deep level of lan­guage un­der­stand­ing. Imagen builds on the power of large trans­former lan­guage mod­els in un­der­stand­ing text and hinges on the strength of dif­fu­sion mod­els in high-fi­delity im­age gen­er­a­tion. Our key dis­cov­ery is that generic large lan­guage mod­els (e.g. T5), pre­trained on text-only cor­pora, are sur­pris­ingly ef­fec­tive at en­cod­ing text for im­age syn­the­sis: in­creas­ing the size of the lan­guage model in Imagen boosts both sam­ple fi­delity and im­age-text align­ment much more than in­creas­ing the size of the im­age dif­fu­sion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, with­out ever train­ing on COCO, and hu­man raters find Imagen sam­ples to be on par with the COCO data it­self in im­age-text align­ment. To as­sess text-to-im­age mod­els in greater depth, we in­tro­duce DrawBench, a com­pre­hen­sive and chal­leng­ing bench­mark for text-to-im­age mod­els. With DrawBench, we com­pare Imagen with re­cent meth­ods in­clud­ing VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that hu­man raters pre­fer Imagen over other mod­els in side-by-side com­par­isons, both in terms of sam­ple qual­ity and im­age-text align­ment.

A small cac­tus wear­ing a straw hat and neon sun­glasses in the Sahara desert.

A small cac­tus wear­ing a straw hat and neon sun­glasses in the Sahara desert.

A photo of a Corgi dog rid­ing a bike in Times Square. It is wear­ing sun­glasses and a beach hat.

A photo of a Corgi dog rid­ing a bike in Times Square. It is wear­ing sun­glasses and a beach hat.

Sprouts in the shape of text Imagen’ com­ing out of a fairy­tale book.

Sprouts in the shape of text Imagen’ com­ing out of a fairy­tale book.

A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.

A single beam of light enters the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.

Visualization of Imagen. Imagen uses a large frozen T5-XXL en­coder to en­code the in­put text into em­bed­dings. A con­di­tional dif­fu­sion model maps the text em­bed­ding into a 64×64 im­age. Imagen fur­ther uti­lizes text-con­di­tional su­per-res­o­lu­tion dif­fu­sion mod­els to up­sam­ple the im­age 64×64→256×256 and 256×256→1024×1024.
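The cascade described above can be summarized with a purely schematic sketch. Every function below is a placeholder standing in for one of the (unreleased) models, so the values are toy stand-ins, not tensors:

```python
# Schematic of the cascade; every function is a placeholder, not real inference.

def encode_text(prompt: str) -> str:
    """Frozen T5-XXL encoder: prompt -> text embedding (toy value here)."""
    return f"emb({prompt})"

def base_diffusion(emb: str) -> tuple:
    """Text-conditional diffusion model producing a 64x64 image."""
    return (64, emb)

def super_resolution(image: tuple, emb: str, size: int) -> tuple:
    """Text-conditional super-resolution diffusion stage."""
    return (size, emb)

def imagen(prompt: str) -> tuple:
    emb = encode_text(prompt)
    img = base_diffusion(emb)               # 64x64
    img = super_resolution(img, emb, 256)   # 64x64 -> 256x256
    img = super_resolution(img, emb, 1024)  # 256x256 -> 1024x1024
    return img

print(imagen("a corgi riding a bike")[0])  # 1024
```

The key structural point is that the same frozen text embedding conditions all three stages.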

* We show that large pre­trained frozen text en­coders are very ef­fec­tive for the text-to-im­age task.

* We show that scal­ing the pre­trained text en­coder size is more im­por­tant than scal­ing the dif­fu­sion model size.

* We in­tro­duce a new thresh­old­ing dif­fu­sion sam­pler, which en­ables the use of very large clas­si­fier-free guid­ance weights.

* We in­tro­duce a new Efficient U-Net ar­chi­tec­ture, which is more com­pute ef­fi­cient, more mem­ory ef­fi­cient, and con­verges faster.

* On COCO, we achieve a new state-of-the-art COCO FID of 7.27; and hu­man raters find Imagen sam­ples to be on-par with ref­er­ence im­ages in terms of im­age-text align­ment.

* Human raters strongly pre­fer Imagen over other meth­ods, in both im­age-text align­ment and im­age fi­delity.


There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may run into the danger of dropping modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when they do. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards.

A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.

A giant cobra snake on a farm. The snake is made out of corn.

We give thanks to Ben Poole for re­view­ing our man­u­script, early dis­cus­sions, and pro­vid­ing many help­ful com­ments and sug­ges­tions through­out the pro­ject. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for help­ing us in­cor­po­rate im­por­tant re­spon­si­ble AI prac­tices around this pro­ject. We ap­pre­ci­ate valu­able feed­back and sup­port from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grate­ful to Tom Small for de­sign­ing the Imagen wa­ter­mark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for ini­tial dis­cus­sions and feed­back. We ac­knowl­edge hard work and sup­port from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giv­ing us ad­vice along the pro­ject and as­sist­ing us with the pub­li­ca­tion process. We thank Victor Gomes and Erica Moreira for their con­sis­tent and crit­i­cal help with TPU re­source al­lo­ca­tion. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for vol­un­teer­ing a con­sid­er­able amount of their time for test­ing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for al­low­ing us to use DALL-E 2 sam­ples and pro­vid­ing us with GLIDE sam­ples. We are thank­ful to Matthew Johnson and Roy Frostig for start­ing the JAX pro­ject and to the whole JAX team for build­ing such a fan­tas­tic sys­tem for high-per­for­mance ma­chine learn­ing re­search. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for help­ful dis­cus­sions and spend­ing time Imagening!

...

Read the original on imagen.research.google »

5 818 shares, 33 trendiness, words and minutes reading time

I Spent 2 years Launching Tiny Projects

Two years ago, frus­trated with a long list of un­ful­filled pro­ject ideas in my phone notes, I de­cided to start try­ing one idea each week in its tini­est form.

I never kept to a weekly sched­ule, but I’ve kept plod­ding along since then and launched 8 things.

Each morn­ing I sit down with a cof­fee and bash out some pro­ject code. It’s a hobby I love, and one that’s start­ing to gen­er­ate some de­cent pas­sive in­come now.

In this post I want to up­date you on every­thing I’ve launched, and share what I’ve learnt about build­ing lots of these tiny in­ter­net pro­jects.

Let’s go back to the start.

The first pro­ject I made is this blog you’re read­ing right now.

The pur­pose of the blog was to sim­ply doc­u­ment all the other pro­jects I’d make.

I launched it the day after I turned 25, and the very first post I wrote, “Tiny Websites are Great”, went semi-viral, which was very lucky and spurred me to keep going.

Not much has changed here since then. I’ve writ­ten 17 blog posts, and, of course, there’s now dark mode.

Objectively, look­ing at page views, this is the most suc­cess­ful thing I’ve ever cre­ated.

One week after launching this blog, I thought it would be a brilliant idea to buy the domain names of several FAANG companies, e.g. google.קום

This wasn’t really a project, but I’d always been interested in domains, and the blog post I wrote, “I bought netflix.soy”, again went semi-viral.

I still own net­flix.soy. With Netflix stock tank­ing, maybe it’ll be worth more one day.

The next pro­ject I made was a tiny 8-bit bat­tle royale game for Android.

This was so fun to build, but ended up be­ing my least suc­cess­ful pro­ject.

I launched it to crick­ets, which was gut­ting af­ter the suc­cess of the last blog posts.

Sadly I lost the code for the game when I switched lap­tops. It’s still live, but very buggy. This did­n’t stop some­one stream­ing it on Twitch last month though.

Next up I built a mi­cro on­line store builder to sell a sin­gle prod­uct on re­peat; just imag­ine a tiny Shopify.

I cob­bled this pro­ject to­gether in 2 weeks, and launched it on Product Hunt.

Amazingly, peo­ple started sell­ing real things on there, net­ting me a mighty £1.63 in 1% trans­ac­tion fees.

Although this was­n’t even enough for a Tesco meal deal, it was my first taste of in­ter­net money, and boy did it taste good.

Amazingly, a few months af­ter launch­ing One Item Store, I was ap­proached by some­one look­ing to buy it.

I ended up sell­ing for $5,300, which blew my tiny mind.

After con­quer­ing the e-com­merce world, build­ing a so­cial net­work was the ob­vi­ous next step.

In a few weeks I launched “Snormal”: a social network for people to post everyday normal things, like “I just ate a baguette”.

In hind­sight, this does not make an ex­cit­ing so­cial net­work that peo­ple want to visit, and the web­site is kind of dead.

I’ve aban­doned this pro­ject, but it still has a few daily users.

I sign up to a lot of new products to snag a “rare” handle, e.g. @ben. My next project turned this into a business.

Each month I’d send out a newslet­ter with 4 new so­cial net­works, and let you know if your user­name was avail­able on them.

For an op­tional $10/month, I’d ac­tu­ally reg­is­ter the user­names for you.

After a 1-month build, I launched “Earlyname” on Product Hunt, and surprisingly got a few paying customers.

For 6 months I sent out newslet­ters, grow­ing Earlyname to $350/month in rev­enue, but ul­ti­mately de­cided I did­n’t en­joy run­ning it.

Having al­ready sold one pro­ject, I con­fi­dently listed Earlyname on MicroAcquire and sold it for $10,500.

One day, I found you could use emoji do­mains in email ad­dresses, e.g. hi@👋.kz

Realising there were many .kz emoji do­mains avail­able, I de­cided it would be a great idea to buy 300 Kazakhstan emoji do­mains and launch an emoji email ad­dress ser­vice.

One month af­ter launch­ing, I had 150 cus­tomers, but I’d ac­tu­ally made a loss from the do­main name costs.

Like all these pro­jects, I wrote up a blog post about it and put it on Hacker News.

The post did noth­ing for 30 min­utes, then ab­solutely sky­rock­eted.

I sold $9,000 in sub­scrip­tions over a week­end; the most I’ve ever made in such a short pe­riod of time.

Mailoji is still go­ing strong, and now has 700 emoji do­mains. I col­lect them like Pokémon. Recently I caught ❤️.gg

You can now also have full emoji email ad­dresses like 🦄🚀@🍉.fm

I re­ally en­joy writ­ing us­ing pen & pa­per, and wanted to start a daily blog.

Over a few weeks, I built a pro­to­type app that let you snap a pic­ture of a hand­writ­ten page and turn it into a web­site.

After enjoying writing a few “paper blog posts”, I decided to turn this prototype into a full-blown service called Paper Website.

I bought 100 note­books to give out to ini­tial cus­tomers, and braced for launch.

Fortunately it went well. Over 200 peo­ple have built a pa­per web­site, and I only have a few note­books left in my kitchen.

My daily blog runs on Paper Website, and I’ve hand­writ­ten well over 100 pa­per blog posts with­out get­ting a pa­per cut.

Rapidly launch­ing lots of tiny pro­jects is so much fun. This is the main rea­son I do it.

However, with each launch, I’m slowly learning what makes a project “successful”. After 8 launches, I’m starting to see some patterns.

I’ve also dis­cov­ered what mi­cro-busi­nesses I like to run: I don’t en­joy newslet­ters, but love quirky tech­ni­cal pro­jects that gen­er­ate pas­sive in­come. I would never have known this launch­ing just one thing.

The best thing about tiny pro­jects is they’re so small, the stakes are in­cred­i­bly low. There’s zero pres­sure if some­thing fails, you just move on guilt-free and try again.

Another ran­dom ben­e­fit is my de­vel­oper skills have 10X’ed over the 2 years of build­ing these pro­jects. I was de­cent be­fore, but I’m on a dif­fer­ent level now.

A big de­bate is whether you should launch lots of things, or fo­cus on just one.

I personally enjoy the “micro-bet” approach, but I often wonder whether, if I gave all my attention to one project, I could see better financial success.

At the moment I have 3 active projects that run on auto-pilot. Time management and context switching have been okay, but with 5+ active projects it might get hard.

One other weird downside is that, as I’ve grown an audience building these projects, I sometimes catch myself thinking, “Should I build something just for the upvotes?”

It’s tempt­ing, be­cause I know I could, and it would prob­a­bly work. But, I think this is a sure-fire way to burn out fast.

When I started this mis­sion, I had a big list of pro­ject ideas that I’d built up in my phone. Maybe you have one of those lists too.

Two years later, I’ve re­alised a lot of these ini­tial ideas were pretty ter­ri­ble.

It’s a para­dox, but I’ve found that my best ideas now come from build­ing other ideas.

I would never have thought of an emoji email ad­dress ser­vice go­ing about my day-to-day, if I had­n’t de­cided to stu­pidly ex­per­i­ment with do­mains and buy net­flix.soy.

If you’re stuck for ideas, I recommend just building something, anything, even if it’s terrible, and I guarantee a better idea will pop into your brain shortly after.

Each pro­ject I build now uses a spark of an idea from the pre­vi­ous. It’s like a mon­key swing­ing vine-to-vine, ex­cept the vines are pro­jects, and I’m just a dumb mon­key.

I want to keep build­ing tiny pro­jects for decades, and I’m ex­cited to see what ideas come next.

Now, onto the next pro­ject!

...

Read the original on tinyprojects.dev »

6 807 shares, 26 trendiness, words and minutes reading time

9-Euro-Ticket

Offers

For 9 eu­ros, you can travel through­out Germany on lo­cal/​re­gional trains for a whole month in June, July or August.

Flat rate: it gives you un­lim­ited travel on lo­cal/​re­gional trans­port ser­vices dur­ing the se­lected month

Travel through­out Germany: on all means of lo­cal/​re­gional pub­lic trans­port (such as RB, RE, U-Bahn, S-Bahn, bus and tram)

No wait­ing times, fully mo­bile: you can also buy it as a mo­bile phone ticket

People with an an­nual or monthly sea­son ticket will au­to­mat­i­cally re­ceive no­ti­fi­ca­tion from their trans­port as­so­ci­a­tion/​op­er­a­tor. They will not have to do any­thing them­selves.

All cus­tomers will ben­e­fit from this spe­cial of­fer and be able to buy the ticket be­fore the three-month pe­riod starts. The ticket will be avail­able via chan­nels such as bahn.de, the DB Navigator app and ticket desks at sta­tions.

When will I be able to buy the 9-Euro-Ticket for lo­cal/​re­gional trans­port?

The offering goes on sale on 23 May 2022. People who use local/regional transport will be able to buy it anywhere in Germany via channels such as bahn.de and DB Navigator. It will also be available from DB Reisezentrum (travel centre) staff, at DB agencies and at ticket machines in stations.

How long is the ticket valid?

The 9-Euro-Ticket is avail­able in the pe­riod from 1 June 2022 to 31 August 2022.

It is valid for one cal­en­dar month, from 00:00 on the 1st un­til 24:00 on the 30th/31st.

Where can I use the 9-Euro-Ticket?

The 9-Euro-Ticket is valid on all pub­lic trans­port ser­vices in Germany. You can use it on any lo­cal/​re­gional route and make as many jour­neys as you like.

The ticket is not valid on long-dis­tance trains (e.g. IC, EC, ICE) or long-dis­tance buses.

What about hold­ers of a DB sea­son ticket for lo­cal/​re­gional trans­port?

Holders of a sea­son ticket for lo­cal/​re­gional trans­port will be cred­ited or re­funded the dif­fer­ence be­tween their sea­son ticket price and the 9-Euro-Ticket price. This does not ap­ply to hold­ers of a sea­son ticket for long-dis­tance trans­port and BahnCard 100 hold­ers.

Do peo­ple with an an­nual or monthly sea­son ticket have to buy a 9-Euro-Ticket sep­a­rately if they want to use this spe­cial of­fer?

These peo­ple will au­to­mat­i­cally re­ceive no­ti­fi­ca­tion from their trans­port as­so­ci­a­tion/​op­er­a­tor. This will in­clude in­for­ma­tion about the in­voic­ing process (refund or re­duc­tion to their stand­ing or­der). They will not have to do any­thing them­selves.

Can I use the ticket for on-de­mand ser­vices, such as shared taxis and taxi buses?

No. The ticket does not cover on-de­mand ser­vices (e.g. taxis at sta­tions). These are sup­ple­men­tary lo­cal trans­port ser­vices that re­quire a sur­charge in ad­di­tion to a nor­mal fare.

Children un­der 6 al­ways travel for free. They do not need a ticket.

Children aged 6-14 do not travel for free, so they need their own nor­mal ticket or 9-Euro-Ticket.

You can­not use the 9-Euro-Ticket as a ticket for dogs.

However, you can bring a dog with you in line with trans­port as­so­ci­a­tions’ reg­u­la­tions. For ex­am­ple, some as­so­ci­a­tions re­quire you to buy a sep­a­rate ticket for dogs. The sit­u­a­tion is sim­i­lar with guide dogs and as­sis­tance dogs: they are cov­ered by trans­port as­so­ci­a­tions’ reg­u­la­tions.

The 9-Euro-Ticket does not nor­mally in­clude free bi­cy­cle trans­port. Bringing bi­cy­cles is sub­ject to the rel­e­vant trans­port as­so­ci­a­tion reg­u­la­tions.

Please note: Trains get very crowded in the June-August pe­riod, so it is not pos­si­ble to guar­an­tee that there will al­ways be enough room on board for bi­cy­cles. We rec­om­mend hir­ing a bike at the lo­ca­tion where you dis­em­bark. If pos­si­ble, avoid bring­ing bi­cy­cles if you are trav­el­ling on pub­lic hol­i­days.

More in­for­ma­tion about trans­port­ing bi­cy­cles is avail­able in German at bahn.de/​fahrrad-nahverkehr

Reservations are un­com­mon on lo­cal/​re­gional trans­port ser­vices be­cause peo­ple of­ten de­cide to use them at short no­tice. It is not pos­si­ble to re­serve a seat when us­ing the 9-Euro-Ticket.

Is there a 9-Euro-Ticket for first class?

No. The 9-Euro-Ticket is only for travel in sec­ond class.

Can I combine the 9-Euro-Ticket with a long-distance journey?

Yes. You can use the 9-Euro-Ticket before starting and after completing a trip on a long-distance train. However, you need a separate ticket for the long-distance part of your journey.

Are BahnCard dis­counts avail­able with the 9-Euro-Ticket?

No. BahnCard dis­counts can­not be used in con­junc­tion with the 9-Euro-Ticket, as it is like a monthly ticket for lo­cal/​re­gional trans­port.

Does the 9-Euro-Ticket re­place the City-Ticket?

No. The City-Ticket is al­ways part of a flex­i­ble or saver fare ticket for the out­bound or re­turn leg of a long-dis­tance jour­ney.

The tick­et’s fi­nal de­tails will not be known un­til 20 May 2022, when the German par­lia­men­t’s up­per house is due to sign off on the sup­port pack­age.

Please check this page fre­quently for up­dates and new in­for­ma­tion.

Last up­dated: 17 May 2022

...

Read the original on www.bahn.com »

7 761 shares, 29 trendiness, words and minutes reading time

Vangelis, Oscar-Winning Composer, Dies at 79

Vangelis—the com­poser who scored Blade Runner, Chariots of Fire, and many other films—has died, Reuters re­ports, cit­ing the Athens News Agency. A cause of death was not re­vealed. According to The Associated Press, the mu­si­cian died at a French hos­pi­tal. Vangelis was 79 years old.

Born Evángelos Odysséas Papathanassíou, Vangelis was largely a self-taught mu­si­cian. He found suc­cess in Greek rock bands such as the Forminx and Aphrodite’s Child—the lat­ter of which sold over 2 mil­lion copies be­fore dis­band­ing in 1972. One of his ear­li­est film scores, writ­ten while he was still in Aphrodite’s Child, was for a French na­ture doc­u­men­tary called L’Apocalypse des an­i­maux.

An in­no­va­tor in elec­tronic mu­sic, Vangelis is ar­guably best known for his work on Chariots of Fire and Ridley Scott’s Blade Runner. It was noted by many upon the re­lease of the Harrison Ford–starring film that Vangelis’ score was as im­por­tant a com­po­nent as Ford’s char­ac­ter Rick Deckard in bring­ing the fu­tur­is­tic noir film to life. Years on, it’s con­sid­ered by many to be a hall­mark in the chronol­ogy of elec­tronic mu­sic.

Vangelis’ work on Chariots of Fire earned him the 1981 Academy Award for Best Original Score. The soundtrack album also reached the top of the Billboard 200 albums chart in April 1982. The film’s opening theme—called “Titles” on the soundtrack album—topped the Billboard Hot 100 the following month. The theme has featured often at the Olympic Games.

In 1973, Vangelis started his solo career with his debut album Fais que ton rêve soit plus long que la nuit (Make Your Dream Last Longer Than the Night). During the ’70s, he was widely rumored to be joining the prog-rock band Yes, following the departure of keyboardist Rick Wakeman. After rehearsing with the band for months, Vangelis declined to join the group. He and Yes lead vocalist Jon Anderson reunited later in the ’80s, and they went on to release several albums together as Jon & Vangelis.

Vangelis re­leased his fi­nal stu­dio al­bum, Juno to Jupiter, in September 2021 via Decca. The record was in­spired by the mis­sion of NASAs Juno space­craft and fea­tured so­prano Angela Gheorghiu.

Kyriakos Mitsotakis, Greece’s prime minister, eulogized Vangelis on Twitter. “Vangelis Papathanassíou is no longer with us. For the whole world, the sad news states that the world music firm has lost the international Vangelis. The protagonist of electronic sound, the Oscars, the Myth and the great hits,” he wrote, according to the site’s translation. “For us Greeks, however, knowing that his second name was Odysseus means that he began his long journey in the Roads of Fire. From there he will always send us his notes.”

...

Read the original on pitchfork.com »

8 721 shares, 28 trendiness, words and minutes reading time

😵‍💫 Why billing systems are a nightmare for engineers

“On my first day, I was told: ‘Payment will come later, shouldn’t be hard, right?’”

I was wor­ried. We were not sell­ing and de­liv­er­ing goods, but SSDs and CPU cores, petabytes and mil­lisec­onds, space and time. Instantly, by an API call. Fungible, at the small­est unit. On all con­ti­nents. That was the vi­sion.

After a week I felt like I was the only one really concerned about the long road ahead. In ambitious enterprise projects, complexity compounds quickly: multi-tenancy, multi-users, multi-roles, multi-currency, multi-tax codes, multi-everything. These systems were no fun, some were ancient, and often ‘spaghetti-like’. What should have been a 1-year R&D project ended up taking 7 years of my professional life, in which I grew the billing team from 0 to 12 people.

So yes, if you have to ask me, billing is hard. Harder than you think. It’s time to solve that once and for all.”

This is a typical conversation we have with engineers on a daily basis. In this case, these are Kevin’s words; he was the VP of Engineering at Scaleway, one of the European leaders in cloud infrastructure. Some of you asked me why billing was that complex, after my latest post about my ‘Pricing Hack’. My co-founder Raffi took on the challenge of explaining why it’s still an unsolved problem for engineers. We also gathered insights from other friends who went through the same painful journey, including Algolia, Segment, and Pleo. Don’t miss them! Passing the mike to Raffi.

When you’re thinking about automating billing, it means your company is getting traction. That’s good news! You might then wonder: should we build it in-house? It does not look complex, and the logic seems specific to your business. Also, you might want to preserve your precious margins and therefore avoid existing billing solutions like Stripe Billing or Chargebee that take a cut of your revenue. Honestly, who likes this rent-seeker approach?

Our team at Lago still has some painful memories of the internal billing system at Qonto, which we had to build, maintain, and deal with. Why was it that painful? In this article, I will provide a high-level view of the technical challenges we faced while implementing hybrid pricing (based on both ‘subscription’ and ‘usage’), and what we learned the hard way in this journey.

TL;DR: Billing is just 100x harder than you will ever think

‘Let’s bill yearly as well, it should be pretty straightforward,’ claims the Revenue team. Great! Everyone is excited to start working on it. Everyone, except the tech team.
When you start building your internal billing system, it’s hard to think of all the complexity that will pop up down the road, unless you’ve experienced it before. It’s common to start a business with a simple pricing. You define one or two price plans, and limit this pricing to a defined number of features. However, when the company is growing, the pricing gets more and more complex, just like your entire codebase.

At Qonto, our first users could only onboard on a €9 plan. We quickly decided to add plans, and ‘pay-as-you-go’ features (such as ATM withdrawals, foreign currency payments, one-shot capital deposits, etc.) to grow revenue. Also, as Qonto is a ‘neobank’, we wanted to charge our customers directly in their wallet, through a ledger connected to our internal billing system. The team started as a duo of full-time engineers building a billing system (which is already a considerable investment), and is currently a dedicated cross-functional team called ‘pricing’.

This is not specific to Qonto, of course. Pleo, another fintech unicorn from Denmark, faced similar hurdles: “I’ve learned to appreciate that billing systems are hard to build, hard to design, and hard to get working for you if you deviate from the ‘standard’ even by a tiny bit.”

This is not even specific to fintechs. The Algolia team ended up creating a whole pricing department, now led by Djay, a pricing and monetization veteran from Twilio, VMWare, and ServiceNow. They pivoted their pricing to a ‘pay-as-you-go’ model based on the number of monthly API searches. “It looks easy on paper — however, it’s a challenge to bring automation and transparency to a customer, so they can easily understand.
There is a lot of behind-the-scenes work that goes into this, and it takes a lot of engineering and investment to do it the right way,” says their CEO, Bernadette Nixon, in VentureBeat, and we could not agree more.

When implementing a billing system, dealing with dates is often the number one complexity. Somehow, all your subscriptions and charges deal with a number of days. Whether you make your customers pay weekly, monthly or yearly, you need to roll things over a period of time called the billing period. Here is a non-exhaustive list of difficulties for engineers:

* How do you deal with leap years?

* Do your subscriptions start at the beginning of the month or at the creation date of the customer?

* How many days/months of trial do you offer? (Wait, the leap-year question matters here too, for February… 🤯)

* How do you calculate a usage-based charge (price per second, hour, day…)?

* Do you resume the consumption, or do you stack it month over month? Year over year?

* Do you apply a pro-rata based on the number of days consumed by your customer?

Although every decision is reversible, billing cycle questions are often the most important source of customer support tickets, and iterating on them is a highly complex and sensitive engineering project. For instance, Qonto migrated the billing cycle start date from the ‘anniversary’ date to the ‘beginning of the month’ date, and the approach was described here. It was not a trivial change.

Then, you need to enable your customers to upgrade or downgrade their subscriptions. Moving from a plan A to a plan B seems pretty easy to implement, but it’s not. Let’s zoom in on the potential edge cases you could face. The user downgrades in the middle of a period.
Do we block features right now, or at the end of the current billing period? The variants multiply quickly:

* The user has paid for the plan in advance (for the next billing period).

* The user has paid for the plan in arrears (for what they have really consumed).

* The user downgrades from a yearly plan to a monthly plan.

* The user downgrades from a plan paid in advance to a plan paid in arrears (and vice versa).

* The user has a discount applied when downgrading.

* The user upgrades in the middle of a period. We probably need to give them access to the new features right now. Do we apply a pro-rata? Do we make them pay the pro-rata right now, or at the end of the billing period?

* The user upgrades from a monthly plan to a yearly plan. Do we apply a pro-rata? Do we make them pay the pro-rata right now, or at the end of the billing period?

* The user upgrades from a plan paid in advance to a plan paid in arrears (and vice versa).

We did not have a ‘free trial’ period at the time at Qonto, but Arnon from Pleo describes the additional scenarios this creates here.

Subscription-based billing is the first step when implementing a billing system. Each customer needs to be affiliated with a plan in order to start charging the right amount at the right moment. But, for a growing number of companies, as was our case at Qonto, other charges come alongside this subscription.
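To make the pro-rata questions above concrete, here is a minimal sketch of one possible policy for a mid-period upgrade where both plans are paid in advance. The names, the rounding, and the “credit the old plan, charge the new one” policy are all assumptions for illustration, not a prescribed implementation:

```python
from datetime import date

def prorated(price_cents: int, days: int, total_days: int) -> int:
    """Share of a price covering `days` out of a `total_days` period."""
    return round(price_cents * days / total_days)

def upgrade_charge(old_price: int, new_price: int,
                   period_start: date, period_end: date,
                   change_date: date) -> int:
    """Mid-period upgrade, both plans paid in advance: credit the unused
    share of the old plan, charge the remaining share of the new plan."""
    total = (period_end - period_start).days
    remaining = (period_end - change_date).days
    credit = prorated(old_price, remaining, total)
    charge = prorated(new_price, remaining, total)
    return charge - credit

# Upgrading from a €9 to a €19 plan on 16 January, for a 1-31 January period
print(upgrade_charge(900, 1900, date(2022, 1, 1), date(2022, 2, 1), date(2022, 1, 16)))  # 516
```

Even this tiny sketch already embeds three product decisions (day granularity, rounding per line item, charging immediately), which is exactly why these questions generate support tickets.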

These charges are based on what customers really consume. This is what we call ‘usage-based billing’. Most companies end up having a hybrid pricing: a subscription charged per month, and ‘add-ons’ or ‘pay-as-you-go’ charges on top of it. These consumption-based charges are tough to track at scale, because they often come with math calculation rules, performed on a high volume of events that need to be tracked.

Take a company that charges per monthly active user: it needs to COUNT the DISTINCT number of users each month, and resume this value at the end of the billing period. In order to get the number of unique visitors, it needs to apply a DISTINCT to deduplicate them.

Algolia tracks the number of api_search events per month. This means they need to SUM the number of monthly searches for a client and resume it at the beginning of each billable period.

It becomes even more complex when you start calculating a charge based on a timeframe. For instance, Snowflake charges the compute usage of a data warehouse per second. This means that they sum the number of gigabytes or terabytes consumed, multiplied by the number of seconds of compute time. An example we can all relate to would be that of an energy company that needs to charge $10 per kilowatt of electricity used per hour. In the example below, you can get an overview of what needs to be modeled and automated by the billing system.

Working with companies’ revenue can be tough. Billing mismatches sometimes happen. Charging a user twice for the same product is obviously bad for customer experience, but failing to charge when it’s needed hurts revenue. That’s partly why Finance and BI teams spend so much time on revenue recognition.
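The two aggregation styles above (a SUM of searches, a COUNT DISTINCT of users) can be sketched as follows; the event shape and metric names are invented for the example:

```python
from collections import defaultdict

# Invented raw usage events: (customer, metric, value)
events = [
    ("acme", "api_search", 1),
    ("acme", "api_search", 1),
    ("acme", "active_user", "u1"),
    ("acme", "active_user", "u1"),  # same user seen twice: must not count double
    ("acme", "active_user", "u2"),
]

def aggregate(events):
    sums = defaultdict(int)      # SUM-style metrics (e.g. monthly searches)
    uniques = defaultdict(set)   # COUNT DISTINCT metrics (e.g. unique users)
    for customer, metric, value in events:
        if metric == "api_search":
            sums[(customer, metric)] += value
        else:
            uniques[(customer, metric)].add(value)
    counts = {key: len(values) for key, values in uniques.items()}
    return dict(sums), counts

sums, counts = aggregate(events)
print(sums[("acme", "api_search")])     # 2 searches billed
print(counts[("acme", "active_user")])  # 2 distinct users, despite 3 events
```

At real scale this runs over millions of events per period, which is where the storage, windowing, and replay questions below come in.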

As a 'pay-as-you-go' company, you will process a high volume of events through your billing system; when an event needs to be replayed, it must happen without billing the user a second time.
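A minimal sketch of that replay-safety property, using an idempotency key to deduplicate charge events (illustrative only; a real billing system would persist the keys in a durable store and handle concurrency):

```python
class BillingLedger:
    """Records chargeable events at most once, keyed by an idempotency key."""

    def __init__(self):
        self.seen_keys = set()   # in production: a durable, transactional store
        self.total_cents = 0

    def record(self, idempotency_key, amount_cents):
        """Apply the charge only the first time this key is seen."""
        if idempotency_key in self.seen_keys:
            return False  # replayed event: ignored, no double billing
        self.seen_keys.add(idempotency_key)
        self.total_cents += amount_cents
        return True

ledger = BillingLedger()
ledger.record("evt_123", 500)   # charged
ledger.record("evt_123", 500)   # replay of the same event: no-op
print(ledger.total_cents)       # 500
```

Replaying `evt_123` any number of times leaves the ledger unchanged, which is exactly the property the next paragraph names.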

Engineers call it 'Idempotency': the ability to apply the same operation multiple times without changing the result beyond the first try. It's a simple design principle; however, maintaining it at all times is hard.

Cash collection is the process of collecting the money customers owe you. And the bête noire of cash collection is 'dunnings': when payments fail to arrive, the merchant needs to persist and make repeated payment requests to their customers without damaging the relationship. These reminders are called 'dunnings'.

At Qonto, we called these 'waiting funds'. A client's status is 'waiting funds' when they have successfully gone through the sign-up, the KYC and KYB processes, yet their account balance is still 0. For a neobank, the impact is twofold: you can't charge for your service fees (a monthly subscription), and your customer does not generate interchange revenues. (A simplistic explanation of interchange revenues: when you make a €100 payment with Qonto, or any card provider, Qonto earns €0.50-€1 of interchange revenue, through the merchant's fees.)

Therefore, your two main revenue streams are null, but you did pay to acquire, onboard and KYC the user, and to produce and send a card to them. We often half-joked about the need to hire a 'chief waiting funds officer': the financial impact of this is just as high as the problem is underestimated. Every company has 'dunning' challenges. For engineers, on top of all the billing architecture, this means they need to design and build:

- A 'retry logic' to ask for a new payment intent
- An invoice reconciliation (if several months of charges are being recovered)
- An app logic to block access in case of payment failure
- An emailing workflow to urge a user to proceed to the payment

Some SaaS companies are even on a mission to fight dunnings and have built full-fledged businesses around cash collection features, such as Upflow, which is used by successful B2B scale-ups including Front and Lattice, the leading HRtech. 'Sending quality and personalized reminders took us a lot of time and, as Lattice was growing fast, it was essential for us to scale our cash collection processes. We use Upflow to personalize how we ask our customers for money, repeatedly, while keeping a good relationship. We now collect 99% of our invoices, effortlessly.'

#6 - The labyrinth of taxes and VAT

Taxes are challenging and depend on multiple dimensions.

What are the dimensions? Applying tax to your customers depends on what you are selling, your home country and your customers' home country. In the simplest cases, your tax decision tree should look like this:

Now, imagine that you sell different types of goods/services to different taxonomies of clients in 100+ countries. If you think the logic on paper looks complex, the engineering needed to automate it is at least tenfold.

What do engineers need to do? Engineers will need to build an entire tax logic within the application.
This logic is pyramidal, based both on customers and on the products sold by your company:

- Taxes at the general settings level. Your company will have a general tax rate that is applied by default in the app.
- Taxes per customer. This general tax rate can be overridden by a specific tax applied for a customer. This per-customer tax rate depends on all the dimensions explained in the image above.
- Taxes per feature. In some cases, tax rates can also be applied by feature. This is mostly the case for the banking industry. For instance, at Qonto, banking fees are not subject to taxes and non-banking fees have a 20% VAT rate for all customers. Engineers created a whole tax logic based on the feature being used by a customer.

With billing, the devil is in the details. That's why I always cringe when I see engineering teams build a home-made system because they think it's 'not that complex'. If you've already tackled the topics listed above and think it's a good investment of your engineering time, go ahead and build it in-house. Make sure to budget for the maintenance work that is always needed. Another option is to rely on existing billing platforms, built by specialized teams. If you're considering choosing one or switching, and you think I can help, please reach out! To solve this problem at scale, we adopted a radical sharing approach: we've started building an open-source alternative to Stripe Billing (and Chargebee, and all the equivalents). Our API and architecture are open, so that you can embed, fork, and customize them as much as your pricing and internal processes need. As you've read, we experienced these pain points first-hand. Request access or sign up for a live demo here, if you're interested!
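To make the pyramidal tax logic described in this article concrete, here is a minimal sketch of the precedence it implies: a per-feature rate overrides a per-customer rate, which overrides the company default. The rates, identifiers, and the exact precedence order are illustrative assumptions, not Qonto's or any platform's actual rules:

```python
DEFAULT_TAX_RATE = 0.20  # general settings level: applied by default

# Per-customer overrides (hypothetical: e.g. a zero-VAT export customer).
CUSTOMER_RATES = {"cust_de": 0.19, "cust_export": 0.0}

# Per-feature overrides, as with banking vs non-banking fees at Qonto.
FEATURE_RATES = {"banking_fee": 0.0}

def resolve_tax_rate(customer_id, feature):
    """Most specific rule wins: feature > customer > general default."""
    if feature in FEATURE_RATES:
        return FEATURE_RATES[feature]
    if customer_id in CUSTOMER_RATES:
        return CUSTOMER_RATES[customer_id]
    return DEFAULT_TAX_RATE

print(resolve_tax_rate("cust_de", "banking_fee"))  # 0.0  (feature override)
print(resolve_tax_rate("cust_de", "card_fee"))     # 0.19 (customer override)
print(resolve_tax_rate("cust_fr", "card_fee"))     # 0.2  (company default)
```

Even this toy version shows why the real thing is hard: each lookup table above is itself a function of product type, seller country, and buyer country.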

Pricing, my only growth hack at Qonto

...

Read the original on www.getlago.com »

9 703 shares, 27 trendiness, words and minutes reading time

OutHorse Your Email

Disconnect from work and let the horses of Iceland reply to your emails while you are on vacation. (Seriously)

...

Read the original on www.visiticeland.com »

10 652 shares, 23 trendiness, words and minutes reading time

Is an unknown, extraordinarily ancient civilisation buried under eastern Turkey?

I am staring at about a dozen stiff, eight-foot high, orange-red penises, carved from living bedrock, and semi-enclosed in an open chamber. A strange carved head (of a man, a demon, a priest, a God?), also hewn from the living rock, gazes at the phallic totems — like a primitivist gargoyle. The expression of the stone head is doleful, to the point of grimacing, as if he, or she, or it, disapproves of all this: of everything being stripped naked under the heavens, and revealed to the world for the first time in 130 centuries.

Yes, 130 centuries. Because these penises, this peculiar chamber, this entire perplexing place, known as Karahan Tepe (pronounced Kah-rah-hann Tepp-ay), which is now emerging from the dusty Plains of Harran, in eastern Turkey, is astoundingly ancient. To put it another way: it is estimated to be 11-13,000 years old.

This number is so large it is hard to take in. For comparison, the Great Pyramid at Giza is 4,500 years old. Stonehenge is 5,000 years old. The Cairn de Barnenez tomb-complex in Brittany, perhaps the oldest standing structure in Europe, could be up to 7,000 years old.

The oldest megalithic ritual monument in the world (until the Turkish discoveries) was always thought to be Ggantija, in Malta. That's maybe 5,500 years old. So Karahan Tepe, and its penis chamber, and everything that inexplicably surrounds the chamber — shrines, cells, altars, megaliths, audience halls et al — is vastly older than anything comparable, and plumbs quite unimaginable depths of time, back before agriculture, probably back before normal pottery, right back to a time when we once thought human 'civilisation' was simply impossible.

After all, hunter gatherers — cavemen with flint arrowheads — without regular supplies of grain, without the regular meat and milk of domesticated animals, do not build temple-towns with water systems.

Virtually all that we can now see of Karahan Tepe has been skilfully unearthed in the last two years, with remarkable ease (for reasons which we will come back to later). And although there is much more to summon from the grave, what it is already teaching us is mind-stretching. Taken together with its age, complexity, sophistication, and its deep, resonant mysteriousness, and its many sister sites now being unearthed across the Harran Plains — collectively known as the Tas Tepeler, or the 'stone hills' — these carved, ochre-red rocks, so silent, brooding, and watchful in the hard whirring breezes of the semi-desert, constitute what might just be the greatest archaeological revelation in the history of humankind.

The unveiling of Karahan Tepe, and nearly all the Tas Tepeler, in the last two years, is not without precedent. As I take my urgent photos of the ominously louring head, Necmi Karul touches my shoulder, and gestures behind, across the sun-burnt and undulant plains.

Necmi, of Istanbul University, is the chief archaeologist in charge of all the local digs — all the Tas Tepeler. He has invited me here to see the latest findings in this region, because I was one of the first western journalists to come here many years ago and write about the origin of the Tas Tepeler. In fact, under the pen-name Tom Knox, I wrote an excitable thriller about the first of the 'stone hills' — a novel called The Genesis Secret, which was translated into quite a few languages — including Turkish. That site, which I visited 16 years back, was Gobekli Tepe.

Necmi points into the distance, now hazed with heat.

'Sean. You see that valley, with the roads, and white buildings?'

I can maybe make out a white-ish dot, in one of the pale, greeny-yellow valleys, which stretch endlessly into the shimmering blur.

'That,' Necmi says, 'is Gobekli Tepe. 46 kilometres away. It has changed since you were last here!'

And so, to Gobekli Tepe. The 'hill of the navel'. Gobekli is pivotally important. Because Karahan Tepe, and the Tas Tepeler, and what they might mean today, cannot be understood without the primary context of Gobekli Tepe. And to comprehend that we must double back in time, at least a few decades.

The modern story of Gobekli Tepe begins in 1994, when a Kurdish shepherd followed his flock over the lonely, infertile hillsides, passing a single mulberry tree, which the locals regarded as 'sacred'. The bells hanging on his sheep tinkled in the stillness. Then he spotted something. Crouching down, he brushed away the dust, and exposed a large, oblong stone. The man looked left and right: there were similar stone outcrops, peeping from the sands.

Calling his dog to heel, the shepherd informed someone of his finds when he got back to the village. Maybe the stones were important. He was not wrong. The solitary Kurdish man, on that summer's day in 1994, had made an irreversibly profound discovery — which would eventually lead to the penis pillars of Karahan Tepe, and an archaeological anomaly which challenges, time and again, everything we know of human prehistory.

A few weeks after that encounter by the mulberry tree, news of the shepherd's find reached museum curators in the ancient city of Sanliurfa, 13km south-west of the stones. They got in touch with the German Archaeological Institute in Istanbul. And in late 1994 the German archaeologist Klaus Schmidt came to the site of Gobekli Tepe to begin his slow, diligent excavations of its multiple, peculiar, enormous T-stones, which are generally arranged in circles — like the standing stones of Avebury or Stonehenge. Unlike European standing stones, however, the older Turkish megaliths are often intricately carved: with images of local fauna. Sometimes the stones depict cranes, boars, or wildfowl: creatures of the hunt. There are also plenty of leopards, foxes, and vultures. Occasionally these animals are depicted next to human heads.

Notably lacking were detailed human representations, except for a few coarse or eerie figurines, and the T-stones themselves, which seem to be stylised invocations of men, their arms 'angled' to protect the groin. The obsession with the penis is obvious — more so, now we have the benefit of hindsight provided by Karahan Tepe and the other sites. Very few representations of women have emerged from the Tas Tepeler so far; there is one obscene caricature of a woman perhaps giving birth. Whatever inspired these temple-towns, it was not a benign matriarchal culture. Quite the opposite, maybe.

The apparent date of Gobekli Tepe — first erected in 10,000 BC, if not earlier — caused a good deal of scepticism. But over time archaeological experts began to accept its significance. Ian Hodder, of Stanford University, declared that 'Gobekli Tepe changes everything.' David Lewis-Williams, the revered professor of archaeology at Witwatersrand University in Johannesburg, said at the time: 'Gobekli Tepe is the most important archaeological site in the world.'

And yet, in the nineties and early noughties, Gobekli Tepe dodged the limelight of general public attention. It's hard to know why. Too remote? Too hard to pronounce? Too eccentric to fit with established theories of prehistory? Whatever the reason, when I flew out on a whim in 2006 (inspired by two brisk minutes of footage on a TV show), even the locals in the nearby big city, Sanliurfa, had no conception of what was out there, in the barrens.

I remember asking a cab driver, the day I arrived, to take me to Gobekli Tepe. He'd never heard of it. Not a clue. Today that feels like asking someone in Paris if they've heard of the Louvre and getting a Non. The driver had to consult several taxi-driving friends until one grasped where I wanted to go — 'that German dig, out of town, by the Arab villages' — and so the driver rattled me out of Sanliurfa and into the dust until we crested one final remote hill and came upon a scene out of the opening titles of The Exorcist: archaeologists toiling away, unnoticed by the world, but furiously intent on their world-changing revelations.

For an hour Klaus (who sadly died in 2014) generously escorted me around the site. I took photos of him and the stones and the workers; this was not a hassle as there were literally no other tourists. A couple of the photos I snatched, that hot afternoon, went on to become mildly iconic, such as my photo of the shepherd who found the site, or Klaus crouching next to one of the most finely-carved T-stones. They were prized simply because no one else had bothered to take them.

After the tour, Klaus and I retired from the heat to his tent, where, over dainty tulip glasses of sweet black Turkish tea, Klaus explained the significance of the site.

As he put it: 'Gobekli Tepe upends our view of human history. We always thought that agriculture came first, then civilisation: farming, pottery, social hierarchies. But here it is reversed: it seems the ritual centre came first, then when enough hunter-gathering people collected to worship — or so I believe — they realised they had to feed people. Which means farming.' He waved at the surrounding hills. 'It is no coincidence that in these same hills in the Fertile Crescent men and women first domesticated the local wild einkorn grass, becoming wheat, and they also first domesticated pigs, cows and sheep. This is the place where Homo sapiens went from plucking the fruit from the tree, to toiling and sowing the ground.'

Klaus had cued me up. People were already speculating that — if you see the Garden of Eden mythos as an allegory of the Neolithic Revolution, i.e. our fall from the relative ease of hunter-gathering to the relative hardships of farming (and life did get harder when we first started farming, as we worked longer hours, and caught diseases from domesticated animals) — then Gobekli Tepe and its environs is probably the place where this happened. Klaus Schmidt did not demur. He said to me, quite deliberately: 'I believe Gobekli Tepe is a temple in Eden.' It's a quote I reused, to some controversy, because people took Klaus literally. But he did not mean it literally. He meant it allegorically.

'We have found no homes, no human remains. Where is everyone? Did they gather for festivals, then disperse? As for their religion, I have no real idea; perhaps Gobekli Tepe was a place of excarnation, for exposing the bones of the dead to be consumed by vultures, so the bodies have all gone. But I do definitely know this: some time in 8000 BC the creators of Gobekli Tepe buried their great structures under tons of rubble. They entombed it. We can speculate why. Did they feel guilt? Did they need to propitiate an angry God? Or just want to hide it?' Klaus was also fairly sure on one other thing: 'Gobekli Tepe is unique.'

I left Gobekli Tepe as bewildered as I was excited. I wrote some articles, and then my thriller, and alongside me, many other writers, academics and film-makers made the sometimes dangerous pilgrimage to this sumptuously puzzling place near the troubled Turkey-Syria border, and slowly its fame grew.

Back here and now, in 2022, Necmi, myself and Aydan Aslan — the director for Sanliurfa Culture and Tourism — jump in a car at Karahan Tepe (Necmi promises me we shall return) and we go to see Gobekli Tepe as it is today.

Necmi is right: all is changed. These days Gobekli Tepe is not just a famous archaeological site, it is a Unesco World-Heritage-listed tourist honeypot which can generate a million visitors a year. It is all enclosed by a futuristic hi-tech steel-and-plastic marquee (no casual wandering around taking photos of the stones and workers). Where Klaus and I once sipped tea in a flapping tent, alone, there is now a big visitor centre — where I bump into the grandson of the shepherd who first found Gobekli. I spy the stone where I took the photo of a crouching Klaus, but I see it from 20 metres away. That's as close as I can get.

After lunch in Sanliurfa — with its Gobekli Tepe-themed restaurants, and its Gobekli Tepe T-stone fridge-magnet souvenir shops — Necmi shows me the gleaming museum built to house the greatest finds from the region: including an 11,000-year-old statue, retrieved from beneath the centre of Sanliurfa itself, and perhaps the world's oldest life-size carved human figure. I recall first seeing this poignant effigy under the stairs next to a fire extinguisher in Sanliurfa's then titchy, neglected municipal museum. Back in 2006 I wrote about 'Urfa man' and how he should be vastly better known, not hidden away in some obscure room in a museum visited by three people a year.

Urfa man now has a silent hall of his own in one of Turkey's greatest archaeological galleries. More importantly, we can now see that Urfa man has the same body stance as the T-shaped man-pillars at Gobekli (and in many of the Tas Tepeler): his arms are in front of him, protecting his penis. His obsidian eyes still stare wistfully at the observer, as lustrous as they were 11,000 years ago.

As we stroll about the museum, Necmi points at more carvings, more leopards, vultures, penises. From several sites archaeologists have found statues of leopards apparently mounting, riding or even 'raping' humans, paws over the human eyes. Meanwhile, Aslan tells me how archaeologists at Gobekli have also, more recently, found tantalising evidence of alcohol: huge troughs with the chemical residue of fermentation, indicating mighty ritual feasts, maybe.

I sense we are getting closer to a momentous new interpretation of Gobekli Tepe and the Tas Tepeler. And it is very different from the perspective Klaus Schmidt gave me in 2006 (and this is no criticism, of course: he could not have known what was to come).

Necmi — as good as promised — whisks me back to Karahan Tepe, and to some of the other Tas Tepeler, so we can jigsaw together this epochal puzzle. As we speed around the arid slopes he explains how scientists at Karahan Tepe, as well as Gobekli Tepe, have now found evidence of homes.

These places, the Tas Tepeler, were not isolated temples where hunter gatherers came, a few times a year, to worship at their standing stones, before returning to the plains for the life of the chase. The builders lived here. They ate their roasted game here. They slept here. And they used, it seems, a primitive but poetic form of pottery, shaped from polished stone. They possibly did elaborate manhood rituals in the Karahan Tepe penis chamber, which was probably half flooded with liquids. And maybe they celebrated afterwards with boozy feasts. Yet still we have no sign at all of contemporary agriculture; they were, it still appears, hunter gatherers, but of unnerving sophistication.

Another unnerving oddity is the curious number of carvings which show people with six fingers. Is this symbolic, or an actual deformity? Perhaps the mark of a strange tribe? Again, there are more questions than answers. Crucially, however, we do now have tentative hints as to the actual religion of these people.

In Gobekli Tepe several skulls have been recovered. They are deliberately defleshed, and carefully pierced with holes so they could — supposedly — be hung and displayed.

Skull cults are not unknown in ancient Anatolia. If there was such a cult in the Tas Tepeler it might explain the graven vultures pictured 'playing' with human heads. As to how the skulls were obtained: they might have come from conflict (though there is no evidence of this yet), and it is quite possible the skulls were obtained via human sacrifice. At a nearby, slightly younger site, the Skull Building of Cayonu, we know of altars drenched with human blood, probably from gory sacrifice.

Necmi has one more point to make about Karahan Tepe, as we tour the penis chamber and its anterooms. Karahan Tepe is stupefyingly big. 'So far,' he says, 'we have dug up maybe 1 per cent of the site' — and it is already impressive. I ask him how many pillars — T-stones — might be buried here. He casually points at a rectangular rock peering above the dry grass. 'That's probably another megalith right there, waiting to be excavated. I reckon there are probably thousands more of them, all around us. We are only at the beginning. And there could be dozens more Tas Tepeler we have not yet found, spread over hundreds of kilometres.'

In one respect Klaus Schmidt has been proved absolutely right. After he first proposed that Gobekli Tepe was deliberately buried with rubble — that is to say, bizarrely entombed by its own creators — a backlash of scepticism grew, with some suggesting that the apparent backfill was merely the result of thousands of years of random erosion, rain and rivers washing debris between the megaliths, gradually hiding them. Why should any religious society bury its own cathedrals, which must have taken decades to construct?

And yet Karahan too was definitely and purposely buried. That is the reason Necmi and his team were able to unearth the penis pillars so quickly: all they had to do was scoop away the backfill, exposing the phallic pillars sculpted from living rock.

I have one more question for Necmi, which has been increasingly nagging at me. Did the people that built the Tas Tepeler have writing? It is almost impossible to believe that you could construct such elaborate sites, in multiple places, over thousands of square kilometres, without careful, articulate plans, that is to say: without writing. You couldn't sing, paint and dream your way to entire inhabited towns of shrines, vaults, water channels and cultic chambers.

Necmi shrugs. He does not know. One of the glories of the Tas Tepeler is that they are so old, no one knows. Your guess is literally as good as the expert's. And yet a very good guess, right now, leads to the most remarkable answer of all, and it is this: archaeologists in southeastern Turkey are, at this moment, digging up a wild, grand, artistically coherent, implausibly strange, hitherto-unknown-to-us religious civilisation, which has been buried in Mesopotamia for ten thousand years. And it was all buried deliberately.

Jumping in the car, we head off to yet another of the Tas Tepeler, but then Necmi has an abrupt change of mind as to our destination.

'No, let's go see Sayburc. It's a little Arab village. A few months ago some of the farmers rang us and said: "Er, we think we have megaliths in our farmyard walls. Do you want to have a look?"'

Our cars pull up in a scruffy village square, scattering sheep and hens. Sure enough, there are classic Gobekli/Karahan-style T-stones being used to buttress agricultural walls; they are probably 11-13,000 years old, just like everywhere else. There are so many of them I spot one of my own, on the outskirts of the village. I point it out to Necmi. He nods, and says: 'Yes, that's probably another.' But he wants to show me something else.

Pulling back a plastic curtain, we step into a kind of stone barn. Along one wall there is a spectacular stone frieze, displaying animal and human figures, carved or in relief. There are leopards, of course, and also aurochs, etched in a Cubist way to make both menacing horns equally visible (you can see an identical representation of the aurochs at Gobekli Tepe, so similar one might wonder if they were carved by the same artist).

At the centre of the frieze is a small figure, in bold relief. He is clutching his penis. Next to him, being threatened by the aurochs, is another human. He has six fingers. For a long while, we stare in silence at the carvings. I realise that, a few farmers apart, we are some of the first people to see this since the end of the Ice Age.

...

Read the original on www.spectator.co.uk »
