10 interesting stories served every morning and every evening.
When we launched the Framework Laptop a year ago, we shared a promise for a better kind of Consumer Electronics: one in which you have the power to upgrade, repair, and customize your products to make them last longer and fit your needs better. Today, we’re honored to deliver on that promise with a new generation of the Framework Laptop, bringing a massive performance upgrade with the latest 12th Gen Intel® Core™ processors, available for pre-order now. We spent the last year gathering feedback from early adopters to refine the product as we scale up. We’ve redesigned our lid assembly for significantly improved rigidity and carefully optimized standby battery life, especially for Linux users. Finally, we continue to expand on the Expansion Card portfolio, with a new 2.5 Gigabit Ethernet Expansion Card coming soon.
In addition to launching our new Framework Laptops with these upgrades, we’re living up to our mission by making all of them available individually as modules and combined as Upgrade Kits in the Framework Marketplace. This is perhaps the first time ever that generational upgrades are available in a high-performance thin and light laptop, letting you pick the improvements you want without needing to buy a full new machine.
Framework Laptops with 12th Gen Intel® Core™ processors are available for pre-order today in all countries we currently ship to: US, Canada, UK, Germany, France, Netherlands, Austria, and Ireland. We’ll be launching in additional countries throughout the year, and you can help us prioritize by registering your interest. We’re using a batch pre-order system, with only a fully-refundable $100/€100/£100 deposit required at the time of pre-order. Mainboards with 12th Gen Intel® Core™ processors, our revamped Top Cover, and the Upgrade Kit that combines the two are available for waitlisting on the Marketplace today. You can register to get notified as soon as they come in stock. The first batch of new laptops as well as the new Marketplace items start shipping this July.
12th Gen Intel® Core™ processors bring major architectural advancements, adding 8 Efficiency Cores on top of 4 or 6 Performance Cores with Hyper-Threading. This means the top version we offer, the i7-1280P, has a mind-boggling 14 CPU cores and 20 threads. All of this results in an enormous increase in performance. In heavily multi-threaded benchmarks like Cinebench R23, we see results that are double the last generation i7-1185G7 processor. In addition to the top of the line i7-1280P configuration, we have i5-1240P and i7-1260P options available, all supporting up to 30W sustained performance and 60W boost.
We launched a new product comparison page, letting you compare all of the versions of the Framework Laptop now available. Earlier-generation configurations remain available until we run out of the limited inventory we have left. If you ever need more performance in the future, you can upgrade to the latest modules whenever you’d like!
We continue to focus on solid Linux support, and we’re happy to share that Fedora 36 works fantastically well out of the box, with full hardware functionality including WiFi and fingerprint reader support. Ubuntu 22.04 also works great after applying a couple of workarounds, and we’re working to eliminate that need. We also studied and carefully optimized the standby power draw of the system in Linux. You can check compatibility with popular distros as we continue to test on our Linux page or in the Framework Community.
In redesigning the Framework Laptop’s lid assembly, we switched from an aluminum forming process to a full CNC process on the Top Cover, substantially improving rigidity. While more raw material is required when starting from a solid block of 6063 aluminum, we’re working with our supplier Hamagawa to reduce environmental impact. We currently use 75% pre-consumer-recycled alloy and are searching for post-consumer sources. The Top Cover (CNC) is built into all configurations of the Framework Laptop launching today, and is available as a module either as part of the Upgrade Kit or individually.
Support for Ethernet has consistently been one of the most popular requests from the Framework Laptop community. We started development on an Expansion Card shortly after launch last year and are now ready to share a preview of the results. Using a Realtek RTL8156 controller, the Ethernet Expansion Card supports 2.5Gbit along with 10/100/1000Mbit Ethernet. This card will be available later this year, and you can register to get notified in the Framework Marketplace.
We’re incredibly happy to live up to the promise of longevity and upgradeability in the Framework Laptop. We also want to ensure we’re reducing waste and respecting the planet by enabling reuse of modules. If you’re upgrading to a new Mainboard, check out the open source designs we released earlier this year for creative ways to repurpose your original Mainboard. We’re starting to see some incredible projects coming out of creators and developers. To further reduce environmental impact, you can also make your Framework Laptop carbon neutral by picking up carbon capture in the Framework Marketplace.
We’re ramping up into production now with our manufacturing partner Compal at a new site in Taoyuan, Taiwan, a short drive from our main fulfillment center, helping reduce the risk of supply chain and logistics challenges. We recommend getting your pre-order in early to hold your place in line and to give us a better read on production capacity needs. We can’t wait to see what you think of these upgrades, and we’re looking forward to remaking Consumer Electronics with you!
...
Read the original on community.frame.work »
“This is what a programmer should look like!”
“The OP is really cool, technology to save the world.”
“Compared with him, I feel like my code is meaningless.”
These comments are from a thread in the V2EX forum, a gathering place for programmers in China.
At first these comments may sound exaggerated, but for the people and families who have been helped by the post, they are simply true.
Because here’s what the post does: it detects breast cancer.
In 2018, a programmer named “coolwulf” started a thread about a website he had made. Users just need to upload their X-ray images, and the site’s AI quickly screens them for signs of breast cancer.
Its accuracy in identifying tumors reached 90%. In short, the AI “reads the film” for you, with accuracy approaching that of professional doctors, and it is completely free.
The cure rate for breast cancer is high when it is found early. But because early symptoms are subtle, it is easy to miss the best window for treatment, and the disease is often discovered only at an advanced stage.
A reliable AI for tumor detection lets the many patients who cannot get a timely medical diagnosis learn of their condition earlier, or obtain a second opinion. Even if a doctor is still needed to confirm the diagnosis in the end, that alone is invaluable in areas where medical resources are stretched thin.
Breast cancer also has the highest incidence of all cancers ▼
coolwulf’s post quickly garnered hundreds of responses, a rarity on the forum. In the comments section were people anxiously awaiting their doctors’ test results.
Others had family members with breast cancer and were filled with uncertainty and fear. coolwulf’s project gave them hope.
With that, naturally, came curiosity about the project and about coolwulf himself. Where did the huge amount of clinical data and the hardware computing power come from? More importantly, who was this superhuman willing to open it all up for free?
coolwulf did not reply to these questions one by one. He soon went quiet, avoiding the spotlight, and rarely appeared again. But in 2022 he returned with an even more ambitious “brain cancer project”, and the mystery remained.
To clear up the fog around coolwulf, we reached out to him in the Midwest of the United States. After a few rounds of interviews, here is the story of coolwulf, also known as Hao Jiang.
As a student, he pursued his undergraduate degree in the Department of Physics at Nanjing University and his PhD in the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan. He sums up his career concisely: “Although my main career is in medical imaging, I am also an ‘amateur’ programmer doing open source projects in my spare time.”
He told us that his parents are not medical professionals, and his interest in programming was fostered from a young age. Coolwulf spent his free time in school writing code. In the days before GitHub existed, he would often post his side projects on programmer communities like sourceforge.net or on his own personal website.
Around 2001, he took part in the Mozilla open source project. At the time there were two early efforts to develop Mozilla’s Gecko rendering engine into standalone browsers. One was K-Meleon (a browser that was quite popular in China in the early years), to which he contributed code.
The other project, codenamed Phoenix, was the predecessor of the familiar Firefox browser. He was interviewed by the media more than ten years ago because of this.
Starting in 2009, coolwulf also built a website that helps people book hotels at low prices; many international students in North America may well have used it. All of these were spare-time projects born of personal interest.
After completing his studies in medical imaging at the University of Michigan, he worked successively as Director of R&D in imaging at Bruker and Siemens, leading product development for imaging detectors. Afterwards, he and Weiguo Lu, now a tenured professor at the University of Texas Southwestern Medical Center, founded two software companies targeting radiotherapy, developing products for cancer radiotherapy and artificial intelligence technologies.
PS: On top of all this, he was also the starting point guard on the basketball team at Nanjing University back in the day.
Coolwulf leads the development of the Bruker Photon III ▼
He might well have continued down this path and become just another scientist-entrepreneur. But the following event was both a turning point in coolwulf’s life and the starting point that brought him closer to thousands of families and lives.
A 34-year-old alumnus of Nanjing University died after missing the best window for breast cancer treatment, leaving behind only a 4-year-old son. After witnessing that death, and the family destroyed by the disease, coolwulf grieved the loss. At the same time, he learned that many breast cancer patients lack access to screening, which makes delayed diagnosis all too common.
Thus the idea of using AI to read X-rays was born, and coolwulf happened to have exactly the right professional experience. Still, building an AI that could accurately detect tumors was not easy.
Coolwulf first downloaded the DDSM and MIAS datasets from the University of Florida website. Because the data was in an outdated, non-standard format rather than DICOM, and the images were film scans, he wrote a dedicated program to convert all of the information into a usable form.
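This conversion step is unglamorous but essential data plumbing: old film digitizations have to be decoded and rescaled into a uniform numeric range before any training can happen. coolwulf’s actual converter is not public; as a purely hypothetical illustration of the kind of normalization involved, rescaling fixed-bit-depth scanner values to floats in [0, 1] might look like this (the function name and interface are ours, not his):

```python
def to_float_image(pixels, bit_depth=8):
    # Hypothetical sketch: rescale scanned-film pixel values (e.g. the
    # grayscale levels in DDSM/MIAS film digitizations) to floats in
    # [0, 1], a common first step before feeding images to a neural
    # network. Real pipelines also handle file decoding, cropping, and
    # DICOM metadata, none of which is shown here.
    max_val = (1 << bit_depth) - 1  # e.g. 255 for 8-bit scans
    return [v / max_val for v in pixels]
```

The same normalization applies regardless of the scanner’s bit depth, which is why it is parameterized rather than hard-coded.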
He also wrote an email requesting permission to use InBreast, a non-public breast cancer dataset from the University of Barcelona. All the while, he kept reading the literature and writing the corresponding model code.
The request email sent by coolwulf at the time ▼
Data alone was not enough, though: training the model properly and efficiently requires serious hardware. So he built it out of his own pocket, a local GPU cluster of 50 Nvidia GTX 1080 Ti cards.
Those 50 graphics cards were not easy to come by. Crypto mining had put GPUs in severe shortage, with heavy markups on eBay, so coolwulf asked many friends to help him watch online vendors such as Newegg, Amazon, and Dell and grab cards whenever they came in stock. After much effort, he finally completed the site’s preparation.
Yes, in addition to gaming and mining, graphics cards have more uses ▼
The free AI breast cancer detection website took coolwulf about three months of spare time; sometimes he had to sleep in his office to get things done. The site finally went live in 2018.
He said he doesn’t actually know how many people have used it, because no data is saved on the server out of patient privacy concerns. But during that time he received many thank-you emails from patients, a large share of them from China. People really did use the website to catch tumors, especially in remote areas with limited medical resources, which amounted to snatching time back from the hands of death.
“The first one had the wrong photo. The tumor was found after retesting” (from coolwulf) ▼
A few years ago this technology was not as widespread as it is now, so coolwulf’s project was something of a pioneering effort. The website also drew a lot of attention from the industry; many medical institutions at home and abroad, such as Fudan University Hospital, emailed to express their gratitude and offered financial and technical support.
After all, coolwulf self-funded the whole thing, and it was not a small amount of money.
We also asked him why he doesn’t commercialize the website and recoup some of the cost.
coolwulf’s answer was matter-of-fact: “Cancer patients, as well as their families, have endured too much. I believe everyone wants to help them, and I happen to have the ability to do so.” And so he thanked the many people who offered, declined all financial assistance, and handled everything himself.
In addition to the website, there was a desktop version of the testing software at the time ▼
By 2021, coolwulf had reached a second critical turning point in his life. A colleague’s cousin had a brain tumor with a poor prognosis and was treated with whole brain radiation therapy. Unfortunately, a few months after the radiotherapy the tumor returned, and there was no treatment left but to wait for death.
Whole brain radiotherapy suppresses lesions broadly by irradiating the entire brain: it eliminates cancer cells, but it also damages normal brain tissue.
Loosely speaking, whole brain radiotherapy is an “indiscriminate attack”. Because critical structures such as the brainstem and the optic nerves can only tolerate a limited radiation dose, it is usually a once-in-a-lifetime treatment.
This incident changed coolwulf’s perspective completely. He decided to take on an industry-wide challenge: pushing AI beyond the detection stage and into actual treatment.
Whole brain radiation therapy is the most common treatment option for brain tumors today; in the United States alone, 200,000 people receive it each year. But is it really necessary to accept its risks for patients with multiple brain tumors?
Not really, because there is another kind of treatment - stereotactic radiotherapy. Compared with whole brain radiotherapy, stereotactic radiotherapy is more focused and can precisely remove the diseased tissues without hurting the normal tissues.
The Gamma Knife, for example, is a kind of stereotactic radiotherapy machine. This therapy has much lower side effects, is less harmful to patients, and can be used multiple times.
There is also a general consensus in the academic community that stereotactic radiotherapy offers patients a better quality of life while being more effective. The only problem is that stereotactic radiotherapy places far greater demands on scarce medical resources.
Once this protocol is adopted, an oncologist or neurosurgeon has to precisely contour and label each tumor, and a medical physicist has to craft a precise treatment plan for each one; saving a single patient takes a great deal of time.
As a result, doctors almost always prefer whole brain radiotherapy to stereotactic radiotherapy when a patient has 5 or more brain lesions.
But AI may be able to share the workload of doctors. So, once again, coolwulf is working to make stereotactic radiotherapy available to more brain cancer patients.
But this time the problem was significantly more challenging, and he could no longer do it alone. So he approached the University of Texas Southwestern Medical Center and Stanford University for collaboration.
With the help and efforts of many people, the following three AI models were recently developed:
* and a model based on optimized radiation dose maps to quickly segment multiple lesions into different treatment courses.
The three models complement each other and correspond to the physician’s workflow, significantly reducing the workload when using stereotactic radiotherapy.
This project, now being presented at the 2022 AAPM Spring Clinical Meeting and 2022 AAPM annual meeting, has once again achieved widespread industry recognition.
coolwulf, along with his coauthors, is also working quickly to make the entire stereotactic radiotherapy community aware of these achievements so the technology can be adopted and actually help more patients. In interviews, coolwulf repeatedly stressed that he is in no way alone in achieving these results.
He hopes we will publish the list of collaborators, because everyone on it is a hero quietly fighting cancer.
In recent years, the cancer mortality rate has dropped by 30% compared to 30 years ago. At this rate, perhaps one day in the future, cancer will no longer be a terminal disease.
But the road there is no ready-made bridge; it is built by countless people like coolwulf, working quietly over the abyss. To conclude this article, let’s borrow a comment from a user on Reddit.
...
Read the original on howardchen.substack.com »
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
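For context on the headline number: FID (Fréchet Inception Distance) compares Gaussian fits of Inception-network features extracted from generated and reference images, and lower is better. The standard formula, with (μ_g, Σ_g) the mean and covariance of generated-image features and (μ_r, Σ_r) those of reference features, is:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)
```

A score of 7.27 on COCO, achieved without training on COCO, means the generated-image feature distribution sits unusually close to the real one.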
A small cactus wearing a straw hat and neon sunglasses in the Sahara desert.
A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat.
Sprouts in the shape of text ‘Imagen’ coming out of a fairytale book.
A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.
A single beam of light enter the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.
Visualization of Imagen. Imagen uses a large frozen T5-XXL encoder to encode the input text into embeddings. A conditional diffusion model maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the image 64×64→256×256 and 256×256→1024×1024.
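The pipeline described above is three generative stages chained behind one frozen text encoder. A schematic sketch of the control flow (the function names are placeholders, not Imagen’s actual API, and each stage in the real system is itself an iterative diffusion sampler):

```python
def generate(prompt, text_encoder, base_model, sr_256, sr_1024):
    """Cascaded text-to-image generation, per the Imagen pipeline."""
    emb = text_encoder(prompt)        # frozen T5-XXL text embeddings
    img_64 = base_model(emb)          # text-conditional 64x64 diffusion sample
    img_256 = sr_256(img_64, emb)     # 64x64 -> 256x256 super-resolution
    img_1024 = sr_1024(img_256, emb)  # 256x256 -> 1024x1024 super-resolution
    return img_1024
```

Note that the text embedding conditions every stage, not just the base model; that is what keeps the super-resolution stages faithful to the prompt.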
* We show that large pretrained frozen text encoders are very effective for the text-to-image task.
* We show that scaling the pretrained text encoder size is more important than scaling the diffusion model size.
* We introduce a new thresholding diffusion sampler, which enables the use of very large classifier-free guidance weights.
* We introduce a new Efficient U-Net architecture, which is more compute efficient, more memory efficient, and converges faster.
* On COCO, we achieve a new state-of-the-art COCO FID of 7.27; and human raters find Imagen samples to be on-par with reference images in terms of image-text alignment.
* Human raters strongly prefer Imagen over other methods, in both image-text alignment and image fidelity.
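The thresholding sampler in the list above addresses a concrete failure mode: large classifier-free guidance weights push predicted pixel values outside the training range [-1, 1], producing saturated, artifact-heavy images. Dynamic thresholding clips each prediction at a per-sample percentile of its absolute values and rescales. A minimal pure-Python sketch of both ideas, operating on flat lists for clarity (real implementations work on per-channel tensors; function names are ours):

```python
def guided_eps(eps_cond, eps_uncond, w):
    # Classifier-free guidance: extrapolate w times along the direction
    # from the unconditional to the conditional noise prediction.
    return [eu + w * (ec - eu) for ec, eu in zip(eps_cond, eps_uncond)]

def dynamic_threshold(x0, p=0.995):
    # x0: predicted-sample values, nominally in [-1, 1] but pushed
    # outside that range when the guidance weight w is large.
    vals = sorted(abs(v) for v in x0)
    idx = min(int(p * len(vals)), len(vals) - 1)
    s = max(vals[idx], 1.0)           # p-th percentile of |x0|, floored at 1
    # Clip to [-s, s], then divide by s to land back in [-1, 1].
    return [max(-s, min(s, v)) / s for v in x0]
```

With w = 1 the guidance reduces to the ordinary conditional prediction; the benefit of dynamic over static thresholding is that saturated values are compressed toward the valid range rather than simply clamped.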
There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.
Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may run into the danger of dropping modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when people are depicted. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards.
A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.
A giant cobra snake on a farm. The snake is made out of corn.
We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!
...
Read the original on gweb-research-imagen.appspot.com »
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
A small cactus wearing a straw hat and neon sunglasses in the Sahara desert.
A small cactus wearing a straw hat and neon sunglasses in the Sahara desert.
A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat.
A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat.
Sprouts in the shape of text ‘Imagen’ coming out of a fairytale book.
Sprouts in the shape of text ‘Imagen’ coming out of a fairytale book.
A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.
A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.
A single beam of light enter the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.
A single beam of light enter the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.
Visualization of Imagen. Imagen uses a large frozen T5-XXL encoder to encode the input text into embeddings. A conditional diffusion model maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the image 64×64→256×256 and 256×256→1024×1024.
* We show that large pretrained frozen text encoders are very effective for the text-to-image task.
* We show that scaling the pretrained text encoder size is more important than scaling the diffusion model size.
* We introduce a new thresholding diffusion sampler, which enables the use of very large classifier-free guidance weights.
* We introduce a new Efficient U-Net architecture, which is more compute efficient, more memory efficient, and converges faster.
* On COCO, we achieve a new state-of-the-art COCO FID of 7.27; and human raters find Imagen samples to be on-par with reference images in terms of image-text alignment.
* Human raters strongly prefer Imagen over other methods, in both image-text alignment and image fidelity.
A photo of a An oil painting of a
in a garden. on a beach. on top of a mountain.
There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.
Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may run the risk of dropping modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when people are depicted. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards.
A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.
A giant cobra snake on a farm. The snake is made out of corn.
We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!
...
Read the original on imagen.research.google »
Two years ago, frustrated with a long list of unfulfilled project ideas in my phone notes, I decided to start trying one idea each week in its tiniest form.
I never kept to a weekly schedule, but I’ve kept plodding along since then and launched 8 things.
Each morning I sit down with a coffee and bash out some project code. It’s a hobby I love, and one that’s starting to generate some decent passive income now.
In this post I want to update you on everything I’ve launched, and share what I’ve learnt about building lots of these tiny internet projects.
Let’s go back to the start.
The first project I made is this blog you’re reading right now.
The purpose of the blog was to simply document all the other projects I’d make.
I launched it the day after I turned 25, and the very first post I wrote, “Tiny Websites are Great”, went semi-viral, which was very lucky and spurred me to keep going.
Not much has changed here since then. I’ve written 17 blog posts, and, of course, there’s now dark mode.
Objectively, looking at page views, this is the most successful thing I’ve ever created.
One week after launching this blog, I thought it would be a brilliant idea to buy lookalike domain names of several FAANG companies, e.g. google.קום
This wasn’t really a project, but I’d always been interested in domains, and the blog post I wrote “I bought netflix.soy” again went semi-viral.
I still own netflix.soy. With Netflix stock tanking, maybe it’ll be worth more one day.
The next project I made was a tiny 8-bit battle royale game for Android.
This was so fun to build, but ended up being my least successful project.
I launched it to crickets, which was gutting after the success of the last blog posts.
Sadly I lost the code for the game when I switched laptops. It’s still live, but very buggy. This didn’t stop someone streaming it on Twitch last month though.
Next up I built a micro online store builder to sell a single product on repeat; just imagine a tiny Shopify.
I cobbled this project together in 2 weeks, and launched it on Product Hunt.
Amazingly, people started selling real things on there, netting me a mighty £1.63 in 1% transaction fees.
Although this wasn’t even enough for a Tesco meal deal, it was my first taste of internet money, and boy did it taste good.
Amazingly, a few months after launching One Item Store, I was approached by someone looking to buy it.
I ended up selling for $5,300, which blew my tiny mind.
After conquering the e-commerce world, building a social network was the obvious next step.
In a few weeks I launched “Snormal”: a social network for people to post everyday normal things, like “I just ate a baguette”.
In hindsight, this does not make an exciting social network that people want to visit, and the website is kind of dead.
I’ve abandoned this project, but it still has a few daily users.
I sign up to a lot of new products to snag a “rare” handle, e.g. @ben. My next project turned this into a business.
Each month I’d send out a newsletter with 4 new social networks, and let you know if your username was available on them.
For an optional $10/month, I’d actually register the usernames for you.
After a 1 month build, I launched “Earlyname” on Product Hunt, and surprisingly got a few paying customers.
For 6 months I sent out newsletters, growing Earlyname to $350/month in revenue, but ultimately decided I didn’t enjoy running it.
Having already sold one project, I confidently listed Earlyname on MicroAcquire and sold it for $10,500.
One day, I found you could use emoji domains in email addresses, e.g. hi@👋.kz
Realising there were many .kz emoji domains available, I decided it would be a great idea to buy 300 Kazakhstan emoji domains and launch an emoji email address service.
One month after launching, I had 150 customers, but I’d actually made a loss from the domain name costs.
Like all these projects, I wrote up a blog post about it and put it on Hacker News.
The post did nothing for 30 minutes, then absolutely skyrocketed.
I sold $9,000 in subscriptions over a weekend; the most I’ve ever made in such a short period of time.
Mailoji is still going strong, and now has 700 emoji domains. I collect them like Pokémon. Recently I caught ❤️.gg
You can now also have full emoji email addresses like 🦄🚀@🍉.fm
I really enjoy writing using pen & paper, and wanted to start a daily blog.
Over a few weeks, I built a prototype app that let you snap a picture of a handwritten page and turn it into a website.
After enjoying writing a few “paper blog posts”, I decided to turn this prototype into a full-blown service called Paper Website.
I bought 100 notebooks to give out to initial customers, and braced for launch.
Fortunately it went well. Over 200 people have built a paper website, and I only have a few notebooks left in my kitchen.
My daily blog runs on Paper Website, and I’ve handwritten well over 100 paper blog posts without getting a paper cut.
Rapidly launching lots of tiny projects is so much fun. This is the main reason I do it.
However, with each launch, I’m slowly learning what makes a project “successful”. After 8 launches, I’m starting to see some patterns.
I’ve also discovered what micro-businesses I like to run: I don’t enjoy newsletters, but love quirky technical projects that generate passive income. I would never have known this launching just one thing.
The best thing about tiny projects is they’re so small, the stakes are incredibly low. There’s zero pressure if something fails, you just move on guilt-free and try again.
Another random benefit is my developer skills have 10X’ed over the 2 years of building these projects. I was decent before, but I’m on a different level now.
A big debate is whether you should launch lots of things, or focus on just one.
I personally enjoy the “micro-bet” approach, but I often wonder whether, if I gave all my attention to one project, I’d see better financial success.
At the moment I have 3 active projects that run on auto-pilot. Time management and context switching have been okay, but with 5+ active projects it might get hard.
One other weird downside is that, as I’ve grown an audience building these projects, I sometimes catch myself thinking “should I build something just for the upvotes?”
It’s tempting, because I know I could, and it would probably work. But, I think this is a sure-fire way to burn out fast.
When I started this mission, I had a big list of project ideas that I’d built up in my phone. Maybe you have one of those lists too.
Two years later, I’ve realised a lot of these initial ideas were pretty terrible.
It’s a paradox, but I’ve found that my best ideas now come from building other ideas.
I would never have thought of an emoji email address service going about my day-to-day, if I hadn’t decided to stupidly experiment with domains and buy netflix.soy.
If you’re stuck for ideas, I recommend just building something, anything, even if it’s terrible, and I guarantee a better idea will pop into your brain shortly after.
Each project I build now uses a spark of an idea from the previous. It’s like a monkey swinging vine-to-vine, except the vines are projects, and I’m just a dumb monkey.
I want to keep building tiny projects for decades, and I’m excited to see what ideas come next.
Now, onto the next project!
...
Read the original on tinyprojects.dev »
Offers
For 9 euros, you can travel throughout Germany on local/regional trains for a whole month in June, July or August.
Flat rate: it gives you unlimited travel on local/regional transport services during the selected month
Travel throughout Germany: on all means of local/regional public transport (such as RB, RE, U-Bahn, S-Bahn, bus and tram)
No waiting times, fully mobile: you can also buy it as a mobile phone ticket
People with an annual or monthly season ticket will automatically receive notification from their transport association/operator. They will not have to do anything themselves.
All customers will benefit from this special offer and be able to buy the ticket before the three-month period starts. The ticket will be available via channels such as bahn.de, the DB Navigator app and ticket desks at stations.
When will I be able to buy the 9-Euro-Ticket for local/regional transport?
The offering goes on sale on 23 May 2022. People who use local/regional transport will be able to buy it anywhere in Germany via channels such as bahn.de and DB Navigator. It will also be available from DB Reisezentrum (travel centre) staff, at a DB agency and ticket machines at stations.
How long is the ticket valid?
The 9-Euro-Ticket is available in the period from 1 June 2022 to 31 August 2022.
It is valid for one calendar month, from 00:00 on the 1st until 24:00 on the 30th/31st.
Where can I use the 9-Euro-Ticket?
The 9-Euro-Ticket is valid on all public transport services in Germany. You can use it on any local/regional route and make as many journeys as you like.
The ticket is not valid on long-distance trains (e.g. IC, EC, ICE) or long-distance buses.
What about holders of a DB season ticket for local/regional transport?
Holders of a season ticket for local/regional transport will be credited or refunded the difference between their season ticket price and the 9-Euro-Ticket price. This does not apply to holders of a season ticket for long-distance transport and BahnCard 100 holders.
Do people with an annual or monthly season ticket have to buy a 9-Euro-Ticket separately if they want to use this special offer?
These people will automatically receive notification from their transport association/operator. This will include information about the invoicing process (refund or reduction to their standing order). They will not have to do anything themselves.
Can I use the ticket for on-demand services, such as shared taxis and taxi buses?
No. The ticket does not cover on-demand services (e.g. taxis at stations). These are supplementary local transport services that require a surcharge in addition to a normal fare.
Children under 6 always travel for free. They do not need a ticket.
Children aged 6-14 do not travel for free, so they need their own normal ticket or 9-Euro-Ticket.
You cannot use the 9-Euro-Ticket as a ticket for dogs.
However, you can bring a dog with you in line with transport associations’ regulations. For example, some associations require you to buy a separate ticket for dogs. The situation is similar with guide dogs and assistance dogs: they are covered by transport associations’ regulations.
The 9-Euro-Ticket does not normally include free bicycle transport. Bringing bicycles is subject to the relevant transport association regulations.
Please note: Trains get very crowded in the June-August period, so it is not possible to guarantee that there will always be enough room on board for bicycles. We recommend hiring a bike at the location where you disembark. If possible, avoid bringing bicycles if you are travelling on public holidays.
More information about transporting bicycles is available in German at bahn.de/fahrrad-nahverkehr
Reservations are uncommon on local/regional transport services because people often decide to use them at short notice. It is not possible to reserve a seat when using the 9-Euro-Ticket.
Is there a 9-Euro-Ticket for first class?
No. The 9-Euro-Ticket is only for travel in second class.
Can I combine the 9-Euro-Ticket with a long-distance journey?
Yes. You can use the 9-Euro-Ticket before starting and after completing a trip on a long-distance train. However, you need a separate ticket for the long-distance part of your journey.
Are BahnCard discounts available with the 9-Euro-Ticket?
No. BahnCard discounts cannot be used in conjunction with the 9-Euro-Ticket, as it is like a monthly ticket for local/regional transport.
Does the 9-Euro-Ticket replace the City-Ticket?
No. The City-Ticket is always part of a flexible or saver fare ticket for the outbound or return leg of a long-distance journey.
The ticket’s final details will not be known until 20 May 2022, when the German parliament’s upper house is due to sign off on the support package.
Please check this page frequently for updates and new information.
Last updated: 17 May 2022
...
Read the original on www.bahn.com »
Vangelis—the composer who scored Blade Runner, Chariots of Fire, and many other films—has died, Reuters reports, citing the Athens News Agency. A cause of death was not revealed. According to The Associated Press, the musician died at a French hospital. Vangelis was 79 years old.
Born Evángelos Odysséas Papathanassíou, Vangelis was largely a self-taught musician. He found success in Greek rock bands such as the Forminx and Aphrodite’s Child—the latter of which sold over 2 million copies before disbanding in 1972. One of his earliest film scores, written while he was still in Aphrodite’s Child, was for a French nature documentary called L’Apocalypse des animaux.
An innovator in electronic music, Vangelis is arguably best known for his work on Chariots of Fire and Ridley Scott’s Blade Runner. It was noted by many upon the release of the Harrison Ford–starring film that Vangelis’ score was as important a component as Ford’s character Rick Deckard in bringing the futuristic noir film to life. Years on, it’s considered by many to be a hallmark in the chronology of electronic music.
Vangelis’ work on Chariots of Fire earned him the 1981 Academy Award for Best Original Score. The soundtrack album also reached the top of the Billboard 200 albums chart in April 1982. The film’s opening theme—called “Titles” on the soundtrack album—topped the Billboard Hot 100 the following month. The theme has featured often at the Olympic Games.
In 1973, Vangelis started his solo career with his debut album Fais que ton rêve soit plus long que la nuit (Make Your Dream Last Longer Than the Night). During the ’70s, he was widely rumored to join the prog-rock band Yes, following the departure of keyboardist Rick Wakeman. After rehearsing with them for months, Vangelis declined to join the group. He and Yes lead vocalist Jon Anderson reunited later in the ’80s, and they went on to release several albums together as Jon & Vangelis.
Vangelis released his final studio album, Juno to Jupiter, in September 2021 via Decca. The record was inspired by the mission of NASA’s Juno spacecraft and featured soprano Angela Gheorghiu.
Kyriakos Mitsotakis, Greece’s prime minister, eulogized Vangelis on Twitter. “Vangelis Papathanassíou is no longer with us. For the whole world, the sad news states that the world music firm has lost the international Vangelis. The protagonist of electronic sound, the Oscars, the Myth and the great hits,” he wrote, according to the site’s translation. “For us Greeks, however, knowing that his second name was Odysseus, means that he began his long journey in the Roads of Fire. From there he will always send us his notes.”
...
Read the original on pitchfork.com »
😵💫 Why billing systems are a nightmare for engineers
“On my first day, I was told: “Payment will come later, shouldn’t be hard, right?”
I was worried. We were not selling and delivering goods, but SSDs and CPU cores, petabytes and milliseconds, space and time. Instantly, by an API call. Fungible, at the smallest unit. On all continents. That was the vision.
After a week I felt like I was the only one really concerned about the long road ahead. In ambitious enterprise projects, complexity compounds quickly: multi-tenancy, multi-users, multi-roles, multi-currency, multi-tax codes, multi-everything. These systems were no fun, some were ancient, and often ‘spaghetti-like’. What should have been a 1 year R&D project ended up taking 7 years of my professional life, in which I grew the billing team from 0 to 12 people.
So yes, if you have to ask me, billing is hard. Harder than you think. It’s time to solve that once and for all.”

This is a typical conversation we have with engineers on a daily basis. In that case, these are the words of Kevin, who was the VP Engineering at Scaleway, one of the European leaders in cloud infrastructure. Some of you asked me why billing was that complex, after my latest post about my ‘Pricing Hack’. My co-founder Raffi took on the challenge of explaining why it’s still an unsolved problem for engineers. We also gathered insights from other friends who went through the same painful journey, including Algolia, Segment, and Pleo. Don’t miss them! Passing the mike to Raffi.

When you’re thinking about automating billing, it means your company is getting traction. That’s good news! You might then wonder: should we build it in-house? It does not look complex, and the logic seems specific to your business. Also, you might want to preserve your precious margins and therefore avoid existing billing solutions like Stripe Billing or Chargebee that take a cut of your revenue. Honestly, who likes this rent-seeker approach? Our team at Lago still has some painful memories of the internal billing system at Qonto, which we had to build, maintain, and deal with. Why was it that painful? In this article, I will provide a high-level view of the technical challenges we faced while implementing hybrid pricing (based on both ‘subscription’ and ‘usage’), and what we learned the hard way in this journey.

TL;DR: Billing is just 100x harder than you will ever think

‘Let’s bill yearly as well, should be pretty straightforward,’ claims the Revenue team. Great! Everyone is excited to start working on it. Everyone, except the tech team. When you start building your internal billing system, it’s hard to think of all the complexity that will pop up down the road, unless you’ve experienced it before. It’s common to start a business with simple pricing.
You define one or two price plans, and limit this pricing to a defined number of features. However, as the company grows, the pricing gets more and more complex, just like your entire codebase.

At Qonto, our first users could only onboard on a €9 plan. We quickly decided to add plans and ‘pay-as-you-go’ features (such as ATM withdrawals, foreign currency payments, one-shot capital deposits, etc.) to grow revenue. Also, as Qonto is a ‘neobank’, we wanted to charge our customers directly in their wallet, through a ledger connected to our internal billing system. The team grew from a duo of full-time engineers building a billing system (which is already a considerable investment) to a dedicated cross-functional team called ‘pricing’.

This is not specific to Qonto, of course. Pleo, another fintech unicorn, from Denmark, faced similar hurdles: “I’ve learned to appreciate that billing systems are hard to build, hard to design, and hard to get working for you if you deviate from ‘the standard’ even by a tiny bit.”

This is not even specific to fintechs. The Algolia team ended up creating a whole pricing department, now led by Djay, a pricing and monetization veteran from Twilio, VMware, and ServiceNow. They pivoted their pricing to a ‘pay-as-you-go’ model based on the number of monthly API searches. “It looks easy on paper — however, it’s a challenge to bring automation and transparency to a customer, so they can easily understand. There is a lot of behind-the-scenes work that goes into this, and it takes a lot of engineering and investment to do it the right way,” says their CEO, Bernadette Nixon, in VentureBeat, and we could not agree more.

When implementing a billing system, dealing with dates is often the number one source of complexity. Somehow, all your subscriptions and charges deal with a number of days.
Whether you make your customers pay weekly, monthly or yearly, you need to roll things over a period of time called the billing period. Here is a non-exhaustive list of difficulties for engineers:

* How do you deal with leap years?
* Do your subscriptions start at the beginning of the month or on the creation date of the customer?
* How many days/months of trial do you offer?
* Wait, bullet 1 is also important for February… 🤯
* How do you calculate a usage-based charge (price per second, hour, day…)?
* Do you resume the consumption or stack it month over month? Year over year?
* Do you apply a pro-rata based on the number of days consumed by the customer?

Although every decision is reversible, billing cycle questions are often the most important source of customer support tickets, and iterating on them is a highly complex and sensitive engineering project. For instance, Qonto migrated the billing cycle start date from the ‘anniversary’ date to the ‘beginning of the month’ date, and the approach was described here. It was not a trivial change.

Then, you need to enable your customers to upgrade or downgrade their subscriptions. Moving from a plan A to a plan B seems pretty easy to implement, but it’s not. Let’s zoom in on potential edge cases you could face:

* The user downgrades in the middle of a period. Do we block features right now or at the end of the current billing period?
* The user has paid for the plan in advance (for the next billing period).
* The user has paid for the plan in arrears (for what they have really consumed).
* The user downgrades from a yearly plan to a monthly plan.
* The user downgrades from a plan paid in advance to a plan paid in arrears (and vice versa).
* The user has a discount applied when downgrading.
* The user upgrades in the middle of a period. We probably need to give them access to the new features right now. Do we apply a pro-rata? Do we make them pay the pro-rata right now? At the end of the billing period?
* The user upgrades from a monthly plan to a yearly plan. Do we apply a pro-rata? Do we make them pay the pro-rata right now? At the end of the billing period?
* The user upgrades from a plan paid in advance to a plan paid in arrears (and vice versa).

We did not have a ‘free trial’ period at the time at Qonto, but Arnon from Pleo describes the additional scenarios this creates here.

Subscription-based billing is the first step when implementing a billing system. Each customer needs to be affiliated with a plan in order to start charging the right amount at the right moment. But for a growing number of companies, as was the case at Qonto, other charges come alongside this subscription.
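Even the "simple" pro-rata question hides real date arithmetic. Here is a minimal sketch of one possible upgrade policy: a hypothetical example, not Qonto's or anyone's actual logic, assuming calendar-month billing paid in advance and charging the prorated price difference immediately:

```python
from datetime import date
import calendar

def prorated_upgrade_charge(old_monthly_price: float,
                            new_monthly_price: float,
                            change_date: date) -> float:
    """Amount owed when upgrading mid-period, assuming calendar-month
    billing paid in advance: the customer pays the price difference,
    prorated over the days remaining in the month (change day included)."""
    days_in_month = calendar.monthrange(change_date.year, change_date.month)[1]
    days_remaining = days_in_month - change_date.day + 1
    return round((new_monthly_price - old_monthly_price)
                 * days_remaining / days_in_month, 2)

# Upgrading from a €9 to a €29 plan on 16 February 2024 (a leap year):
print(prorated_upgrade_charge(9.0, 29.0, date(2024, 2, 16)))  # 9.66
# The same upgrade a year earlier (28-day February) yields a different amount:
print(prorated_upgrade_charge(9.0, 29.0, date(2023, 2, 16)))  # 9.29
```

Note how the leap-year bullet above immediately bites: the same plan change on the same calendar day produces two different charges depending on the year.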
These charges are based on what customers really consume. This is what we call ‘usage-based billing’. Most companies end up with a hybrid pricing: a subscription charged per month, plus ‘add-ons’ or ‘pay-as-you-go’ charges on top of it. These consumption-based charges are tough to track at scale, because they often come with math calculation rules, performed on a high volume of events that need to be tracked.

A company like Segment, which charges per ‘monthly tracked user’, needs to COUNT the DISTINCT number of users each month and resume this value at the end of the billing period. In order to get the number of unique users, it needs to apply a DISTINCT to deduplicate them. Algolia tracks the number of api_search events per month: they need to SUM the number of monthly searches for a client and resume it at the beginning of each billing period.

It becomes even more complex when you start calculating a charge based on a timeframe. For instance, Snowflake charges the compute usage of a data warehouse per second: they sum the number of gigabytes or terabytes consumed, multiplied by the number of seconds of compute time. An example we can all relate to would be an energy company that needs to charge, say, $10 per kilowatt-hour of electricity used. In the example below, you can get an overview of what needs to be modeled and automated by the billing system.

Working with companies’ revenue can be tough. Billing mismatches sometimes happen. Charging a user twice for the same product is obviously bad for customer experience, but failing to charge when it’s needed hurts revenue. That’s partly why Finance and BI teams spend so much time on revenue recognition.
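The two aggregation styles above (COUNT DISTINCT vs. SUM) can be sketched in a few lines. This is an illustrative toy: the event shape, customer IDs, and function names are invented, and real billing systems run these aggregations as SQL over millions of rows rather than in application code:

```python
# Each usage event: (customer_id, event_type, properties)
events = [
    ("acme", "user_tracked", {"user": "u1"}),
    ("acme", "user_tracked", {"user": "u1"}),  # same user seen twice
    ("acme", "user_tracked", {"user": "u2"}),
    ("acme", "api_search", {"count": 3}),
    ("acme", "api_search", {"count": 5}),
]

def count_distinct(events, customer, event_type, key):
    """COUNT(DISTINCT ...): deduplicate values before counting,
    e.g. 'monthly tracked users'."""
    return len({e[2][key] for e in events
                if e[0] == customer and e[1] == event_type})

def sum_metric(events, customer, event_type, key):
    """SUM(...): add up a numeric property, e.g. monthly API searches."""
    return sum(e[2][key] for e in events
               if e[0] == customer and e[1] == event_type)

print(count_distinct(events, "acme", "user_tracked", "user"))  # 2
print(sum_metric(events, "acme", "api_search", "count"))       # 8
```

The duplicate `u1` event is the whole point: without the DISTINCT, the customer would be billed for three tracked users instead of two.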
At a ‘pay-as-you-go’ company, the billing system processes a high volume of events; when an event needs to be replayed, it has to happen without billing the user a second time.
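One common way to guarantee this is to deduplicate on an idempotency key before charging. A toy sketch, with all names invented and an in-memory dict standing in for what would be a database table in production:

```python
processed = {}  # idempotency_key -> result of the first processing

def charge_once(idempotency_key, amount_cents, charge_fn):
    """Run a charge exactly once per event: replaying the same
    idempotency key returns the stored result instead of charging again."""
    if idempotency_key in processed:
        return processed[idempotency_key]
    result = charge_fn(amount_cents)
    processed[idempotency_key] = result
    return result

charges = []
def fake_charge(cents):
    charges.append(cents)  # side effect: money would actually move here
    return f"charged {cents} cents"

charge_once("evt_42", 900, fake_charge)
charge_once("evt_42", 900, fake_charge)  # replayed event: no double billing
print(len(charges))  # 1
```

The principle is simple; what is hard in practice is preserving it across retries, queue redeliveries, and crashes, where the key store and the charge must be updated atomically.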
Engineers call it ‘idempotency’: the ability to apply the same operation multiple times without changing the result beyond the first try. It’s a simple design principle; however, maintaining it at all times is hard.

Cash collection is the process of collecting the money customers owe you. And the bête noire of cash collection is ‘dunning’: when payments fail to arrive, the merchant needs to persist and make repeated payment requests without damaging the customer relationship. These reminders are called ‘dunnings’.

At Qonto, we called these ‘waiting funds’. A client’s status is ‘waiting funds’ when they have successfully gone through sign-up and the KYC and KYB processes, yet their account balance is still 0. For a neobank, the impact is twofold: you can’t charge for your service fees (a monthly subscription), and your customer does not generate interchange revenues. (A simplistic explanation of interchange revenues: when you make a €100 payment with Qonto, or any card provider, Qonto earns €0.5-€1 of interchange revenue through the merchant’s fees.)
Therefore, your two main revenue streams are ‘null’, but you did pay to acquire, onboard, and KYC the user, and to produce and send a card to them. We often half-joked about the need to hire a ‘chief waiting funds officer’: the financial impact is just as high as the problem is underestimated.

Every company has ‘dunning’ challenges. For engineers, on top of all the billing architecture, this means they need to design and build:

* A ‘retry logic’ to ask for a new payment intent
* An invoice reconciliation (if several months of charges are being recovered)
* An app logic to block access in case of payment failure
* An emailing workflow to urge a user to proceed with the payment

Some SaaS companies are even on a mission to fight dunning and have built full-fledged businesses around cash collection features, such as Upflow, which is used by successful B2B scale-ups including Front and Lattice, the leading HRtech: ‘Sending quality and personalized reminders took us a lot of time and, as Lattice was growing fast, it was essential for us to scale our cash collection processes. We use Upflow to personalize how we ask our customers for money, repeatedly, while keeping a good relationship. We now collect 99% of our invoices, effortlessly.’

#6 - The labyrinth of taxes and VAT

Taxes are challenging and depend on multiple dimensions. What are the dimensions? Applying tax to your customers depends on what you are selling, on your home country, and on your customers’ home country. In the simplest cases, your tax decision tree should look like this. Now, imagine that you sell different types of goods/services to different taxonomies of clients in 100+ countries. If the logic on paper looks complex, the engineering needed to automate it is at least tenfold.

What do engineers need to do? They need to think through an entire tax logic within the application. This logic is pyramidal, based both on customers and on the products sold by your company:

* Taxes at the general settings level. Your company will have a general tax rate that is applied by default in the app.
* Taxes per customer. This general tax rate can be overridden by a specific rate applied to a customer. This per-customer rate depends on all the dimensions explained above.
* Taxes per feature. In some cases, tax rates can also be applied by feature. This is mostly the case for the banking industry. For instance, at Qonto, banking fees are not subject to taxes, while non-banking fees carry a 20% VAT rate for all customers. Engineers created a whole tax logic based on the feature being used by a customer.

With billing, the devil is in the details. That’s why I always cringe when I see engineering teams build a home-made system because they think it’s not ‘that complex’. If you’ve already tackled the topics listed above and think it’s a good investment of your engineering time, go ahead and build it in-house. Make sure to budget for the maintenance work that is always needed. Another option is to rely on existing billing platforms, built by specialized teams. If you’re considering choosing one or switching, and you think I can help, please reach out!

To solve this problem at scale, we adopted a radical sharing approach: we’ve started building an Open-Source Alternative to Stripe Billing (and Chargebee, and all the equivalents). Our API and architecture are open, so you can embed, fork, and customize them as much as your pricing and internal processes need. As you’ve read, we experienced these pain points first-hand. Request access or sign up for a live demo here, if you’re interested!
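The pyramidal tax lookup described above (per-feature rate beats per-customer rate, which beats the company default) can be sketched as a simple resolution function. The rates, customer IDs, and feature names here are invented for illustration, loosely echoing the Qonto example of untaxed banking fees:

```python
def resolve_tax_rate(default_rate, customer_rates, feature_rates,
                     customer_id, feature):
    """Pyramid lookup: a per-feature rate overrides a per-customer rate,
    which in turn overrides the company-wide default."""
    if feature in feature_rates:
        return feature_rates[feature]
    return customer_rates.get(customer_id, default_rate)

feature_rates = {"banking_fee": 0.0}   # e.g. banking fees are untaxed
customer_rates = {"cust_de": 0.19}     # a customer taxed at 19% VAT
default_rate = 0.20                    # company-wide default: 20% VAT

print(resolve_tax_rate(default_rate, customer_rates, feature_rates,
                       "cust_de", "banking_fee"))   # 0.0  (feature wins)
print(resolve_tax_rate(default_rate, customer_rates, feature_rates,
                       "cust_de", "subscription"))  # 0.19 (customer wins)
print(resolve_tax_rate(default_rate, customer_rates, feature_rates,
                       "cust_uk", "subscription"))  # 0.2  (default)
```

A real implementation would of course derive `customer_rates` from the selling/home-country dimensions above rather than hard-coding them.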
Pricing, my only growth hack at Qonto
...
Read the original on www.getlago.com »
Disconnect from work and let the horses of Iceland reply to your emails while you are on vacation. (Seriously)
...
Read the original on www.visiticeland.com »
I am staring at about a dozen stiff, eight-foot-high, orange-red penises, carved from living bedrock, and semi-enclosed in an open chamber. A strange carved head (of a man, a demon, a priest, a God?), also hewn from the living rock, gazes at the phallic totems — like a primitivist gargoyle. The expression of the stone head is doleful, to the point of grimacing, as if he, or she, or it, disapproves of all this: of everything being stripped naked under the heavens, and revealed to the world for the first time in 130 centuries.
Yes, 130 centuries. Because these penises, this peculiar chamber, this entire perplexing place, known as Karahan Tepe (pronounced Kah-rah-hann Tepp-ay), which is now emerging from the dusty Plains of Harran, in eastern Turkey, is astoundingly ancient. Put it another way: it is estimated to be 11-13,000 years old.
This number is so large it is hard to take in. For comparison the Great Pyramid at Giza is 4,500 years old. Stonehenge is 5,000 years old. The Cairn de Barnenez tomb-complex in Brittany, perhaps the oldest standing structure in Europe, could be up to 7,000 years old.
The oldest megalithic ritual monument in the world (until the Turkish discoveries) was always thought to be Ggantija, in Malta. That’s maybe 5,500 years old. So Karahan Tepe, and its penis chamber, and everything that inexplicably surrounds the chamber — shrines, cells, altars, megaliths, audience halls et al — is vastly older than anything comparable, and plumbs quite unimaginable depths of time, back before agriculture, probably back before normal pottery, right back to a time when we once thought human ‘civilisation’ was simply impossible.
After all, hunter gatherers — cavemen with flint arrowheads — without regular supplies of grain, without the regular meat and milk of domesticated animals, do not build temple-towns with water systems.
Virtually all that we can now see of Karahan Tepe has been skilfully unearthed in the last two years, with remarkable ease (for reasons which we will come back to later). And although there is much more to summon from the grave, what it is already teaching us is mind-stretching. Taken together with its age, complexity, sophistication, its deep, resonant mysteriousness, and its many sister sites now being unearthed across the Harran Plains — collectively known as the Tas Tepeler, or the ‘stone hills’ — these carved, ochre-red rocks, so silent, brooding, and watchful in the hard whirring breezes of the semi-desert, constitute what might just be the greatest archaeological revelation in the history of humankind.
The unveiling of Karahan Tepe, and nearly all the Tas Tepeler, in the last two years, is not without precedent. As I take my urgent photos of the ominously louring head, Necmi Karul touches my shoulder, and gestures behind, across the sun-burnt and undulant plains.
Necmi, of Istanbul University, is the chief archaeologist in charge of all the local digs — all the Tas Tepeler. He has invited me here to see the latest findings in this region, because I was one of the first western journalists to come here many years ago and write about the origin of the Tas Tepeler. In fact, under the pen-name Tom Knox, I wrote an excitable thriller about the first of the ‘stone hills’ — a novel called The Genesis Secret, which was translated into quite a few languages — including Turkish. That site, which I visited 16 years back, was Gobekli Tepe.
Necmi points into the distance, now hazed with heat.
‘Sean. You see that valley, with the roads, and white buildings?’
I can maybe make out a white-ish dot, in one of the pale, greeny-yellow valleys, which stretch endlessly into the shimmering blur.
‘That,’ Necmi says, ‘is Gobekli Tepe. 46 kilometres away. It has changed since you were last here!’
And so, to Gobekli Tepe. The ‘hill of the navel’. Gobekli is pivotally important. Because Karahan Tepe, and the Tas Tepeler, and what they might mean today, cannot be understood without the primary context of Gobekli Tepe. And to comprehend that we must double back in time, at least a few decades.
The modern story of Gobekli Tepe begins in 1994, when a Kurdish shepherd followed his flock over the lonely, infertile hillsides, passing a single mulberry tree, which the locals regarded as ‘sacred’. The bells hanging on his sheep tinkled in the stillness. Then he spotted something. Crouching down, he brushed away the dust, and exposed a large, oblong stone. The man looked left and right: there were similar stone outcrops, peeping from the sands.
Calling his dog to heel, the shepherd informed someone of his finds when he got back to the village. Maybe the stones were important. He was not wrong. The solitary Kurdish man, on that summer’s day in 1994, had made an irreversibly profound discovery — which would eventually lead to the penis pillars of Karahan Tepe, and an archaeological anomaly which challenges, time and again, everything we know of human prehistory.
A few weeks after that encounter by the mulberry tree, news of the shepherd’s find reached museum curators in the ancient city of Sanliurfa, 13km south-west of the stones. They got in touch with the German Archaeological Institute in Istanbul. And in late 1994 the German archaeologist Klaus Schmidt came to the site of Gobekli Tepe to begin his slow, diligent excavations of its multiple, peculiar, enormous T-stones, which are generally arranged in circles — like the standing stones of Avebury or Stonehenge. Unlike European standing stones, however, the older Turkish megaliths are often intricately carved: with images of local fauna. Sometimes the stones depict cranes, boars, or wildfowl: creatures of the hunt. There are also plenty of leopards, foxes, and vultures. Occasionally these animals are depicted next to human heads.
Notably lacking were detailed human representations, except for a few coarse or eerie figurines, and the T-stones themselves, which seem to be stylised invocations of men, their arms ‘angled’ to protect the groin. The obsession with the penis is obvious — more so, now we have the benefit of hindsight provided by Karahan Tepe and the other sites. Very few representations of women have emerged from the Tas Tepeler so far; there is one obscene caricature of a woman perhaps giving birth. Whatever inspired these temple-towns it was a not a benign matriarchal culture. Quite the opposite, maybe.
The apparent date of Gobekli Tepe — first erected in 10,000 BC, if not earlier — caused a great deal of scepticism. But over time archaeological experts began to accept its significance. Ian Hodder, of Stanford University, declared that ‘Gobekli Tepe changes everything’. David Lewis-Williams, the revered professor of archaeology at Witwatersrand University in Johannesburg, said at the time: ‘Gobekli Tepe is the most important archaeological site in the world.’
And yet, in the nineties and early noughties Gobekli Tepe dodged the limelight of general, public attention. It’s hard to know why. Too remote? Too hard to pronounce? Too eccentric to fit with established theories of prehistory? Whatever the reason, when I flew out on a whim in 2006 (inspired by two brisk minutes of footage on a TV show), even the locals in the nearby big city, Sanliurfa, had no conception of what was out there, in the barrens.
I remember asking a cab driver, the day I arrived, to take me to Gobekli Tepe. He’d never heard of it. Not a clue. Today that feels like asking someone in Paris if they’ve heard of the Louvre and getting a Non. The driver had to consult several taxi-driving friends until one grasped where I wanted to go — ‘that German dig, out of town, by the Arab villages’ — and so the driver rattled me out of Sanliurfa and into the dust until we crested one final remote hill and came upon a scene out of the opening titles of the Exorcist: archaeologists toiling away, unnoticed by the world, but furiously intent on their world-changing revelations.
For an hour Klaus (who sadly died in 2014) generously escorted me around the site. I took photos of him and the stones and the workers; this was not a hassle, as there were literally no other tourists. A couple of the photos I snatched that hot afternoon went on to become mildly iconic, such as my photo of the shepherd who found the site, or of Klaus crouching next to one of the most finely-carved T-stones. They were prized simply because no one else had bothered to take them.
After the tour, Klaus and I retired from the heat to his tent, where, over dainty tulip glasses of sweet black Turkish tea, Klaus explained the significance of the site.
As he put it, ‘Gobekli Tepe upends our view of human history. We always thought that agriculture came first, then civilisation: farming, pottery, social hierarchies. But here it is reversed, it seems the ritual centre came first, then when enough hunter gathering people collected to worship — or so I believe — they realised they had to feed people. Which means farming.’ He waved at the surrounding hills, ‘It is no coincidence that in these same hills in the Fertile Crescent men and women first domesticated the local wild einkorn grass, becoming wheat, and they also first domesticated pigs, cows and sheep. This is the place where Homo sapiens went from plucking the fruit from the tree, to toiling and sowing the ground.’
Klaus had cued me up. People were already speculating that, if you see the Garden of Eden mythos as an allegory of the Neolithic Revolution — that is, our fall from the relative ease of hunter-gathering to the relative hardships of farming (and life did get harder when we first started farming, as we worked longer hours and caught diseases from domesticated animals) — then Gobekli Tepe and its environs are probably the place where this happened. Klaus Schmidt did not demur. He said to me, quite deliberately: ‘I believe Gobekli Tepe is a temple in Eden’. It’s a quote I reused, to some controversy, because people took Klaus literally. But he did not mean it literally. He meant it allegorically.
‘We have found no homes, no human remains. Where is everyone, did they gather for festivals, then disperse? As for their religion, I have no real idea, perhaps Gobekli Tepe was a place of excarnation, for exposing the bones of the dead to be consumed by vultures, so the bodies have all gone. But I do definitely know this: some time in 8000 BC the creators of Gobekli Tepe buried their great structures under tons of rubble. They entombed it. We can speculate why. Did they feel guilt? Did they need to propitiate an angry God? Or just want to hide it?’ Klaus was also fairly sure on one other thing. ‘Gobekli Tepe is unique.’
I left Gobekli Tepe as bewildered as I was excited. I wrote some articles, and then my thriller, and alongside me, many other writers, academics and film-makers, made the sometimes dangerous pilgrimage to this sumptuously puzzling place near the troubled Turkey-Syria border, and slowly its fame grew.
Back here and now, in 2022, Necmi, myself and Aydan Aslan — the director for Sanliurfa Culture and Tourism — jump in a car at Karahan Tepe (Necmi promises me we shall return) and we go see Gobekli Tepe as it is today.
Necmi is right: all is changed. These days Gobekli Tepe is not just a famous archaeological site, it is a Unesco World-Heritage-listed tourist honeypot which can generate a million visitors a year. It is all enclosed by a futuristic hi-tech steel-and-plastic marquee (no casual wandering around taking photos of the stones and workers). Where Klaus and I once sipped tea in a flapping tent, alone, there is now a big visitor centre — where I bump into the grandson of the shepherd who first found Gobekli. I spy the stone where I took the photo of a crouching Klaus, but only from 20 metres away. That’s as close as I can get.
After lunch in Sanliurfa — with its Gobekli Tepe-themed restaurants, and its Gobekli Tepe T-stone fridge-magnet souvenir shops — Necmi shows me the gleaming museum built to house the greatest finds from the region: including an 11,000-year-old statue, retrieved from beneath the centre of Sanliurfa itself, and perhaps the world’s oldest life-size carved human figure. I recall first seeing this poignant effigy under the stairs, next to a fire extinguisher, in Sanliurfa’s then titchy, neglected municipal museum. Back in 2006 I wrote about ‘Urfa man’ and how he should be vastly better known, not hidden away in some obscure room in a museum visited by three people a year.
Urfa man now has a silent hall of his own in one of Turkey’s greatest archaeological galleries. More importantly, we can now see that Urfa man has the same body stance as the T-shaped man-pillars at Gobekli (and in many of the Tas Tepeler): his arms are in front of him, protecting his penis. His obsidian eyes still stare wistfully at the observer, as lustrous as they were 11,000 years ago.
As we stroll about the museum, Necmi points at more carvings, more leopards, vultures, penises. From several sites archaeologists have found statues of leopards apparently mounting, riding or even ‘raping’ humans, paws over the human eyes. Meanwhile, Aslan tells me how archaeologists at Gobekli have also, more recently, found tantalising evidence of alcohol: huge troughs with the chemical residue of fermentation, indicating mighty ritual feasts, maybe.
I sense we are getting closer to a momentous new interpretation of Gobekli Tepe and the Tas Tepeler. And it is very different from that perspective Klaus Schmidt gave me, in 2006 (and this is no criticism, of course: he could not have known what was to come).
Necmi — as good as promised — whisks me back to Karahan Tepe, and to some of the other Tas Tepeler, so we can jigsaw together this epochal puzzle. As we speed around the arid slopes he explains how scientists at Karahan Tepe, as well as Gobekli Tepe, have now found evidence of homes.
These places, the Tas Tepeler, were not isolated temples where hunter gatherers came, a few times a year, to worship at their standing stones, before returning to the plains for the life of the chase. The builders lived here. They ate their roasted game here. They slept here. And they used, it seems, a primitive but poetic form of pottery, shaped from polished stone. They possibly did elaborate manhood rituals in the Karahan Tepe penis chamber, which was probably half flooded with liquids. And maybe they celebrated afterwards with boozy feasts. Yet still we have no sign at all of contemporary agriculture; they were, it still appears, hunter gatherers, but of unnerving sophistication.
Another unnerving oddity is the curious number of carvings which show people with six fingers. Is this symbolic, or an actual deformity? Perhaps the mark of a strange tribe? Again, there are more questions than answers. Crucially, however, we do now have tentative hints as to the actual religion of these people.
In Gobekli Tepe several skulls have been recovered. They are deliberately defleshed, and carefully pierced with holes so they could — supposedly — be hung and displayed.
Skull cults are not unknown in ancient Anatolia. If there was such a cult in the Tas Tepeler it might explain the graven vultures pictured ‘playing’ with human heads. As to how the skulls were obtained: they might have come from conflict (though there is no evidence of this yet), but it is quite possible the skulls were obtained via human sacrifice. At a nearby, slightly younger site, the Skull Building of Cayonu, we know of altars drenched with human blood, probably from gory sacrifice.
Necmi has one more point to make about Karahan Tepe, as we tour the penis chamber and its anterooms. Karahan Tepe is stupefyingly big. ‘So far,’ he says, ‘We have dug up maybe 1 per cent of the site’ — and it is already impressive. I ask him how many pillars — T stones — might be buried here. He casually points at a rectangular rock peering above the dry grass. ‘That’s probably another megalith right there, waiting to be excavated. I reckon there are probably thousands more of them, all around us. We are only at the beginning. And there could be dozens more Tas Tepeler we have not yet found, spread over hundreds of kilometres.’
In one respect Klaus Schmidt has been proved absolutely right. After he first proposed that Gobekli Tepe was deliberately buried with rubble — that is to say, bizarrely entombed by its own creators — a backlash of scepticism grew, with some suggesting that the apparent backfill was merely the result of thousands of years of random erosion, rain and rivers washing debris between the megaliths, gradually hiding them. Why should any religious society bury its own cathedrals, which must have taken decades to construct?
And yet Karahan, too, was definitely and purposely buried. That is the reason Necmi and his team were able to unearth the penis pillars so quickly: all they had to do was scoop away the backfill, exposing the phallic pillars, sculpted from living rock.
I have one more question for Necmi, which has been increasingly nagging at me. Did the people who built the Tas Tepeler have writing? It is almost impossible to believe that you could construct such elaborate sites, in multiple places, over thousands of square kilometres, without careful, articulate plans — that is to say, without writing. You couldn’t sing, paint and dream your way to entire inhabited towns of shrines, vaults, water channels and cultic chambers.
Necmi shrugs. He does not know. One of the glories of the Tas Tepeler is that they are so old, no one knows. Your guess is literally as good as the expert’s. And yet a very good guess, right now, leads to the most remarkable answer of all, and it is this: archaeologists in southeastern Turkey are, at this moment, digging up a wild, grand, artistically coherent, implausibly strange, hitherto-unknown-to-us religious civilisation, which has been buried in Mesopotamia for ten thousand years. And it was all buried deliberately.
Jumping in the car, we head off to yet another of the Tas Tepeler, but then Necmi has an abrupt change of mind, as to our destination.
‘No, let’s go see Sayburc. It’s a little Arab village. A few months ago some of the farmers rang us and said “Er, we think we have megaliths in our farmyard walls. Do you want to have a look?”’
Our cars pull up in a scruffy village square, scattering sheep and hens. Sure enough, there are classic Gobekli/Karahan-style T-stones being used to buttress agricultural walls; they are probably 11-13,000 years old, just like everywhere else. There are so many of them that I spot one myself, on the outskirts of the village. I point it out to Necmi. He nods, and says ‘Yes, that’s probably another.’ But he wants to show me something else.
Pulling back a plastic curtain we step into a kind of stone barn. Along one wall there is a spectacular stone frieze, displaying animal and human figures, carved or in relief. There are leopards, of course, and also aurochs, etched in a Cubist way to make both menacing horns equally visible (you can see an identical representation of the aurochs at Gobekli Tepe, so similar one might wonder if they were carved by the same artist).
At the centre of the frieze is a small figure, in bold relief. He is clutching his penis. Next to him, being threatened by the aurochs, is another human. He has six fingers. For a long while, we stare in silence at the carvings. I realise that, a few farmers apart, we are some of the first people to see this since the end of the Ice Age.
...
Read the original on www.spectator.co.uk »