10 interesting stories served every morning and every evening.
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
A small cactus wearing a straw hat and neon sunglasses in the Sahara desert.
A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat.
Sprouts in the shape of text ‘Imagen’ coming out of a fairytale book.
A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape.
A single beam of light enters the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.
Visualization of Imagen. Imagen uses a large frozen T5-XXL encoder to encode the input text into embeddings. A conditional diffusion model maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the image 64×64→256×256 and 256×256→1024×1024.
* We show that large pretrained frozen text encoders are very effective for the text-to-image task.
* We show that scaling the pretrained text encoder size is more important than scaling the diffusion model size.
* We introduce a new thresholding diffusion sampler, which enables the use of very large classifier-free guidance weights.
* We introduce a new Efficient U-Net architecture, which is more compute efficient, more memory efficient, and converges faster.
* On COCO, we achieve a new state-of-the-art COCO FID of 7.27; and human raters find Imagen samples to be on-par with reference images in terms of image-text alignment.
* Human raters strongly prefer Imagen over other methods, in both image-text alignment and image fidelity.
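The thresholding sampler mentioned above can be illustrated with a minimal sketch of dynamic thresholding applied to predicted pixel values. The percentile, the rescaling step, and the function name here are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def dynamic_threshold(x0, percentile=0.995):
    # Hypothetical sketch: pick s as a high percentile of the absolute
    # predicted pixel values, clip the prediction to [-s, s], then divide
    # by s so the result stays within [-1, 1]. This counteracts the pixel
    # saturation that large classifier-free guidance weights cause.
    s = np.quantile(np.abs(x0), percentile)
    s = max(s, 1.0)  # no-op when the prediction is already in range
    return np.clip(x0, -s, s) / s
```

Without a step like this, guidance-amplified predictions drift outside the [-1, 1] pixel range and samples become over-saturated at high guidance weights.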
There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision not to release Imagen for public use without further safeguards in place.
Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around the potential harms of text-to-image models and established evaluation metrics are essential components of responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may drop modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when they do. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards.
A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.
A giant cobra snake on a farm. The snake is made out of corn.
We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!
...
Read the original on gweb-research-imagen.appspot.com »
Starlink is a division of SpaceX. Visit us at spacex.com.
...
Read the original on www.starlink.com »
The EU Commission’s draft regulation on preventing and combating child abuse is a frontal attack on civil rights. And the EU Commission is pushing for this draft to become law with Trump-like exaggerations.
As citizens we can expect more from the EU Commission. The least we can ask for when the Commission wants to introduce surveillance mechanisms that will immensely weaken Europe’s cybersecurity would be honest communication.
No one denies that child sexual abuse is a big issue that needs to be addressed. But when proposing such drastic measures like CSAM scanning of every private chat message, the arguments must be sound. Otherwise, the EU Commission is not helping anyone — not the children, and not our free, democratic societies.
The EU Commission has managed to push three arguments into the public debate to swing the public opinion in favor of scanning for CSA material on every device. But the arguments are blatantly wrong:
One in Five: The EU Commission claims that One in Five children in the EU would be sexually abused.
AI-based surveillance would not harm our right to privacy, but save the children.
90% of CSAM would be hosted on European servers.
The EU Commission uses the ‘One in Five’ claim to justify the proposed general mass surveillance of all European citizens.
Yes, child abuse is an immense problem. Every expert in the field of child protection will agree that politics need to do more to protect the most vulnerable in our society: children.
Nevertheless, the question of proportion must be looked at very closely when it comes to CSAM scanning on our personal devices:
Is it okay that the EU introduces mass surveillance mechanisms for all EU citizens in an attempt to tackle child sexual abuse?
To find an answer to this question, I would like to ask the EU Commission several questions:
There is no statistic to be found that supports the ‘One in Five’ claim. This figure is prominently put on a website by the Council of Europe, but without giving any source.
According to the World Health Organization (WHO), 9.6% of children worldwide are sexually abused. Contrary to the EU figures, this data is based on a study, an analysis of community surveys.
Nevertheless, let’s set the European Commission’s exaggeration aside: the number published by the WHO is still very high and must be addressed.
The WHO number suggests that more than 6 million children in the EU suffer from sexual abuse.
Consequently, we can agree that the EU must do something to stop child sexual abuse.
Another question that is very important when introducing surveillance measures to tackle child sexual abuse is the one of effectiveness.
If monitoring of our private communication (CSAM scanning) would help save millions of children in Europe from sexual abuse, many people would agree to the measure. But would that actually be the case?
On the same website that the EU Commission claims that ‘1 in 5’ children are affected, they also say that “Between 70% and 85% of children know their abuser. The vast majority of children are victims of people they trust.”
This begs the question: How, just how, is scanning every chat message for CSAM going to help prevent child sexual abuse within the family, the sports club or the church?
To find out whether monitoring of private messages for CSA material may help tackle child sexual abuse, we must take a look at actual monitoring data that is already available.
As an email provider based in Germany we have such data. Our transparency report shows that we are regularly receiving valid telecommunications surveillance orders from German authorities to prosecute potential criminals.
One could think that Tutanota as a privacy-focused, end-to-end encrypted email service would be the go-to place for criminal offenders, for instance for sharing CSAM. In consequence, one would expect the number of court orders issued in regard to “child pornography” to be high.
In 2021 we received ONE telecommunications surveillance order based on the suspicion that the account was used in regard to “child pornography”. This is 1.3% of all orders that we received in 2021. More than two thirds of the orders were issued in regard to “ransomware”; a few individual cases concerned copyright infringement, preparation of serious crimes, blackmail, and terror.
Numbers published by the German Federal Office of Justice paint a similar picture: in Germany, 47.3 percent of the telecommunications surveillance measures ordered according to § 100a StPO in 2019 were aimed at finding suspects of drug-related offenses. Only 0.1 percent of the orders, 21(!) in total, were issued in relation to “child pornography”.
In 2019, there were 13,670 cases of child abuse in Germany according to the statistics of the German Federal Ministry of the Interior.
Taking these numbers together: 13,670 children were abused in Germany in 2019, yet in only 21 of these cases, roughly 0.15 percent, was a telecommunications surveillance order issued.
It becomes obvious that the monitoring of telecommunications (which is already possible) does not play a significant role to track down perpetrators.
The conclusion here is obvious: ‘More surveillance’ will not bring ‘more security’ to the children in Europe.
Similarly to the ‘One in Five’ claim, the EU Commission claims that 90% of child sexual abuse material is hosted on European servers. Again the EU Commission uses this claim to justify its planned CSAM scanning.
However, even experts in this field disagree. The German eco Association, which works together with the authorities to take down CSAM (child sexual abuse material), states that “in their estimation, the numbers are a long way from the claimed 90 percent”. Alexandra Koch-Skiba of the eco Association also says: “In our view, the draft has the potential to create a free pass for government surveillance. This is ineffective and illegal. Sustainable protection of children and young people would instead require more staff for investigations and comprehensive prosecution.”
Even German law enforcement officials are criticizing the EU plans behind closed doors. They argue that there would be other ways to track down more offenders. “If it’s just about having more cases and catching more perpetrators, then you don’t need such an encroachment on fundamental rights,” says one longtime child abuse investigator.
It is unbelievable that the EU Commission uses these exaggerations to swing public opinion in favor of CSAM scanning. It looks like the argument ‘to protect the children’ is being used to introduce Chinese-style surveillance mechanisms. Here in Europe.
...
Read the original on tutanota.com »
The Cat S22 Flip takes the cell phone back to what it should be… a phone. Made for those who want a device as simple to use as it is tough, the Cat S22 Flip features physical buttons and a large touch screen, letting you choose how you interact with it. The Cat S22 Flip’s ‘Snap it to End it’ calling gives you confidence that when it is closed the call is over.
Android™ 11 (Go Edition)
Programmable PTT Button
IP68 & MIL-SPEC 810H
Drop tested up to 6ft on to steel
Waterproof to a depth of 5ft for up to 35 mins
The Cat S22 Flip brings the world’s biggest operating system, Android™ 11 (Go Edition), and its Play Store to the traditional cellphone design, so you no longer have to choose between a conventional cellphone and a smartphone. Powerful speakers help you hear in the loudest of environments, and a larger battery keeps the Cat S22 Flip going. Whether you are a first responder on the front line or a farmer out in the field, the Cat S22 Flip is a phone you can depend on.
Engineered to the highest rugged standards, the Cat S22 Flip is everything you expect from a Cat phone, with the hinge alone tested 150,000 times. The Cat S22 Flip features the same IP68 and MIL-SPEC 810H ratings as our larger phones, meaning it can be dropped, dunked and washed regularly using the harshest of chemicals, bleaches and sanitizers. So you can wash it thoroughly and regularly, helping to keep you and those around you safe from germs.
The Cat S22 Flip is designed to work in the toughest of environments so you don’t have to worry about your device. This is backed up with our 2-year warranty, so you can stay confident no matter what happens.
The Cat S22 Flip is built for American enterprise. With its rugged build and Android Go operating system, the Cat S22 Flip is the perfect phone for a huge range of workers, from those on the front line to those in the field and many more.
Android™ 11 (Go Edition) is the lighter version of Google’s Android™ system, giving you access to key apps and the security benefits of Android without the need for a larger, expensive device, making it the perfect option whether you are looking for a device for yourself or your team.
* Drop tested up to 6ft on to steel
* Handles low to high temperature differences between -13°F and 122°F for up to 30 mins
* Pressurized alcohol abrasion tests at 500gF/cm² over hundreds of cycles
...
Read the original on www.catphones.com »
...
Read the original on www.simplemobiletools.com »
Unprecedented evidence from internal police networks in China’s Xinjiang region proves the prison-like nature of the re-education camps and shows top Chinese leaders’ direct involvement in the mass internment campaign.
The Xinjiang Police Files are a major cache of speeches, images, documents and spreadsheets obtained by a third party from confidential internal police networks. They provide a groundbreaking inside view of the nature and scale of Beijing’s secretive campaign of interning between one and two million Uyghurs and other ethnic minority citizens in China’s northwestern Xinjiang region.
The files have been authenticated through peer-reviewed scholarly research. Investigative research teams from over a dozen global media outlets have also verified portions of the data.
Read our in-depth reports of the Xinjiang Police Files
View PowerPoints and images showing police security drills in Xinjiang’s camps and villages
Watch as Dr. Adrian Zenz, an international expert on internal Chinese government documents and the Xinjiang internment campaign, breaks down the contents of the Xinjiang Police Files, why they are important, and how civil society and governments should respond.
A project of the Victims of Communism Memorial Foundation.
...
Read the original on www.xinjiangpolicefiles.org »
Here’s the summary of the hardware and the software that powers Healthchecks.io.
Since 2017, Healthchecks.io has run on dedicated servers at Hetzner. The current lineup is:
All servers are located in the Falkenstein data center park, scattered across the FSN-DCx data centers so they are not all behind the same core switch. The monthly Hetzner bill is €484.
* Systemd manages services that need to run continuously (haproxy, nginx, postgresql, etc.)
* Wireguard for private networking between the servers. Tiered topology: HAProxy servers cannot talk to PostgreSQL servers.
* Netdata agent for monitoring the machines and the services running on them. Connected to Netdata Cloud for easy overview of all servers.
* HAProxy 2.2 for terminating TLS connections, and load balancing between app servers. Enables easy rolling updates of application servers.
* PostgreSQL 13, streaming replication from primary to standby. No automatic failover: I can trigger failover with a single command, but the decision is manual.
* hchk, a small application written in Go, handles ping API (hc-ping.com) and inbound email.
* NGINX handles rate limiting, static file serving, and reverse proxying to uWSGI and hchk.
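The tiered WireGuard topology described above can be enforced simply by which peers each host lists: a tunnel only exists where both sides have a matching [Peer] entry. A hypothetical sketch of an app server's config follows; the addresses and key placeholders are illustrative, not the real deployment:

```ini
# /etc/wireguard/wg0.conf on an app server (illustrative sketch)
[Interface]
Address = 10.0.1.10/24
PrivateKey = <app-server-private-key>

# HAProxy tier
[Peer]
PublicKey = <haproxy-public-key>
AllowedIPs = 10.0.1.1/32

# PostgreSQL tier. A HAProxy server's config would omit this peer,
# so the load balancers have no tunnel to the database servers.
[Peer]
PublicKey = <postgres-public-key>
AllowedIPs = 10.0.1.20/32
```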
Healthchecks.io, the cron job monitoring service, uses cron jobs itself for the following periodic tasks:
* Once a day, make a full database backup, encrypt it with gpg, and upload it to AWS S3.
* Once a day, send “Your account is inactive and is about to be deleted” notifications to inactive users.
* Once a day, send “Your subscription will renew on …” for annual subscriptions that are due in 1 month.
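The daily tasks above map naturally onto ordinary crontab entries. A hypothetical sketch follows; the paths, management-command names, bucket, and schedule times are assumptions, not the actual setup:

```
# Daily backup: dump the database, encrypt with gpg, upload to S3
0 2 * * * pg_dump hc | gpg --encrypt -r backups@example.com | aws s3 cp - s3://example-backups/hc-$(date +\%F).sql.gpg
# Daily inactivity notifications
0 3 * * * /opt/app/manage.py sendinactivitynotices
# Daily renewal reminders for annual subscriptions due in 1 month
0 4 * * * /opt/app/manage.py sendrenewalnotices
```

Note the `\%` escapes: in a crontab, an unescaped `%` is treated as a newline and the text after it is fed to the command as stdin.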
* My main dev machine is a desktop PC with a single 27″ 1440p display.
* Sublime Text for editing source code. A combination of meld, Sublime Merge and command-line git for working with git.
* Yubikeys for signing git commits and logging into servers.
* Fabric scripts for deploying code and running maintenance tasks on servers.
* A dedicated laptop inside a dedicated backpack, for dealing with emergencies while away from the main PC.
Comments, questions, ideas? Let me know via email or on Twitter!
...
Read the original on blog.healthchecks.io »
CF33-hNIS-antiPDL1 for the Treatment of Metastatic Triple Negative Breast Cancer
This phase I trial tests the safety, side effects, and best dose of CF33-hNIS-antiPDL1 in treating patients with triple negative breast cancer that has spread to other places in the body (metastatic). CF33-hNIS-antiPDL1 is an oncolytic virus. This is a virus that is designed to infect tumor cells and break them down.
Documented informed consent of the participant and/or legally authorized representative
* Assent, when appropriate, will be obtained per institutional guidelines
Agreement to undergo research biopsies on study, once during the study and at end of study; exceptions may be granted with study principal investigator (PI) approval
Histologically confirmed metastatic triple negative breast cancer. Triple negative status will be defined as estrogen receptor (ER) and progesterone receptor (PR) =< 10% by immunohistochemistry (IHC) and HER2 negative, per American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP) guidelines
Patients must have progressed on or been intolerant of at least 2 prior lines of therapy for advanced/metastatic disease. Patients that qualify for immunotherapy and/or PARP inhibitors must have progressed on or been intolerant of these agents
Fully recovered from the acute toxic effects (except alopecia) of prior anti-cancer therapy to =< grade 2
Must have a superficial tumor (cutaneous, subcutaneous), breast lesion or nodal metastases amenable to safe repeated intratumoral injections per treating physician and interventional radiologist review
Absolute neutrophil count (ANC) >= 1,500/mm^3
* NOTE: Growth factor is not permitted within 14 days of ANC assessment unless cytopenia is secondary to disease involvement
Platelets >= 100,000/mm^3
* NOTE: Platelet transfusions are not permitted within 14 days of platelet assessment unless cytopenia is secondary to disease involvement
Serum creatinine =< 1.5 mg/dL or creatinine clearance of >= 50 mL/min per 24 hour urine test or the Cockcroft-Gault formula
Agreement by females and males of childbearing potential* and their partners to use an effective method of birth control (defined as a hormonal or barrier method) or abstain from heterosexual activity for the course of the study through at least 6 months after the last dose of protocol therapy
* Childbearing potential defined as not being surgically sterilized (men and women) or not having been free from menses for > 1 year (women only)
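The Cockcroft-Gault estimate referenced in the renal criterion above is a standard formula; a small Python sketch, assuming weight in kg and serum creatinine in mg/dL, with the conventional 0.85 multiplier for females:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, sex: str) -> float:
    """Estimate creatinine clearance (mL/min) via the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    if sex.lower() == "female":
        crcl *= 0.85  # conventional correction for lower muscle mass
    return crcl
```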
Chemotherapy, biological therapy, immunotherapy or investigational therapy within 14 days prior to day 1 of protocol therapy
Major surgery or radiation therapy within 28 days of study therapy
Has received a vaccination within 30 days of first study injection
History of allergic reactions attributed to compounds of similar chemical or biologic composition to study agent
Patients with a known history of hepatitis B or hepatitis C infection who have active disease as evidenced by hepatitis (Hep) B surface antigen status or Hep C polymerase chain reaction (PCR) status obtained within 14 days of cycle 1, day 1
Another malignancy within 3 years, except non-melanomatous skin cancer
Patients may not have clinically unstable brain metastases. Patients may be enrolled with a history of treated brain metastases that are clinically stable for >= 4 weeks prior to start of study treatment
Any other condition that would, in the Investigator’s judgment, contraindicate the patient’s participation in the clinical study due to safety concerns with clinical study procedures
Prospective participants who, in the opinion of the investigator, may not be able to comply with all study procedures (including compliance issues related to feasibility/logistics)
PRIMARY OBJECTIVE:
I. To determine the safety and tolerability of a novel chimeric oncolytic orthopoxvirus, oncolytic virus CF33-expressing hNIS/Anti-PD-L1 antibody (CF33-hNIS-antiPDL1), by the evaluation of toxicities including: type, frequency, severity, attribution, time course, reversibility and duration according to Common Terminology Criteria for Adverse Events (CTCAE) 5.0 criteria.

SECONDARY OBJECTIVES:
I. To determine the optimal biologic dose (OBD) (defined as a safe dose that induces an immune response in tumors [increase checkpoint target PD-L1 by at least 5% and/or increase T cell infiltration by at least 10%]) and the recommended phase II dose (RP2D) for a future expansion trial.
II. To determine tumor response rates by Response Evaluation Criteria in Solid Tumors (RECIST) version (v)1.1 (primary) and immune-modified (i)RECIST (secondary).
III. To document possible therapeutic efficacy and evaluate progression-free survival, overall survival and response.

EXPLORATORY OBJECTIVE:
I. To determine the immune and genomic profiles of tumors before and after CF33-hNIS-antiPDL1 therapy.

OUTLINE:
Patients receive CF33-hNIS-antiPDL1 intratumorally (IT) on days 1 and 15. Treatment repeats every 28 days for up to 3 cycles in the absence of disease progression or unacceptable toxicity.

After completion of study treatment, patients are followed up at 30 days, then every 3 months for 1 year.
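The dosing described above (days 1 and 15 of each 28-day cycle, up to 3 cycles) implies at most six injections; a quick sketch of the schedule, counting days from the start of cycle 1:

```python
# Injection days for up to 3 cycles of 28 days, with dosing on days 1
# and 15 of each cycle, counted from the start of cycle 1.
CYCLE_LENGTH = 28
DOSING_DAYS = (1, 15)

schedule = [cycle * CYCLE_LENGTH + day
            for cycle in range(3)
            for day in DOSING_DAYS]
# → [1, 15, 29, 43, 57, 71]
```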
...
Read the original on www.cancer.gov »
...
Read the original on github.com »