10 interesting stories served every morning and every evening.
Why procrastination is about managing emotions, not time

Address the real reasons you procrastinate and you’re more likely to start achieving your goals.

Like many writers, I’m a supreme expert at procrastination. When I ought to be working on an assignment, with the clock ticking towards my deadline, I’ll sit there watching pointless political interviews or boxing highlights on YouTube (cat videos aren’t my thing). At its worst I can almost begin to feel a little crazy – you need to be working, I say to myself, so what on Earth are you doing?
According to traditional thinking — still espoused by university counselling centres around the world, such as the University of Manchester in the UK and the University of Rochester in the US — I, along with my fellow procrastinators, have a time management problem. By this view, I haven’t fully appreciated how long my assignment is going to take and I’m not paying enough attention to how much time I’m currently wasting on ‘cyberloafing’. With better scheduling and a better grip on time, so the logic goes, I will stop procrastinating and get on with my work.
Increasingly, however, psychologists are realising this is wrong. Experts like Tim Pychyl at Carleton University in Canada and his collaborator Fuschia Sirois at the University of Sheffield in the UK have proposed that procrastination is an issue with managing our emotions, not our time. The task we’re putting off is making us feel bad – perhaps it’s boring, too difficult or we’re worried about failing – and to make ourselves feel better in the moment, we start doing something else, like watching videos.

Chronic procrastination is linked with mental and physical health costs, from depression and anxiety to cardiovascular disease (Credit: Alamy)

This fresh perspective on procrastination is beginning to open up exciting new approaches to reducing the habit; it could even help you improve your own approach to work. “Self-change of any sort is not a simple thing, and it typically follows the old adage of two steps forward and one step back,” says Pychyl. “All of this said, I am confident that anyone can learn to stop procrastinating.”
One of the first investigations to inspire the emotional view of procrastination was published in the early 2000s by researchers at Case Western Reserve University in Ohio. They first prompted people to feel bad (by asking them to read sad stories) and showed that this increased their inclination to procrastinate by doing puzzles or playing video games instead of preparing for the intelligence test they knew was coming. Subsequent studies by the same team showed low mood only increases procrastination if enjoyable activities are available as a distraction, and only if people believe they can change their moods. One study used ‘mood-freezing candles’ to trick some volunteers into thinking their low mood was frozen and, in this case, they didn’t bother procrastinating.
The emotional regulation theory of procrastination makes intuitive sense. In my case, it’s not that I don’t realise how long my assignment will take (I know I need to be working on it right now) or that I haven’t scheduled enough time for my YouTube viewing – in fact, I don’t really even want to watch those videos, I’m just drawn to them as a way of avoiding the discomfort of knuckling down to work. In the psychologists’ jargon, I’m procrastinating to achieve a short-term positive ‘hedonic shift’, at the cost of my longer-term goals.

Procrastination — while effectively distracting in the short-term — can lead to guilt, which ultimately compounds the initial stress

The emotional regulation view of procrastination also helps explain some strange modern phenomena, like the fad for watching online cat videos, which have attracted billions of views on YouTube. A survey of thousands of people by Jessica Myrick at the Media School at Indiana University confirmed procrastination as a common motive for viewing the cat videos and that watching them led to a boost in positive mood. It’s not that people hadn’t adequately scheduled time for watching the videos; often they were only watching the clips to make themselves feel better when they should be doing something else less fun.
Myrick’s research also highlighted another emotional aspect to procrastination. Many of those surveyed felt guilty after watching the cat videos. This speaks to how procrastination is a misguided emotional regulation strategy. While it might bring short-term relief, it only stores up problems for later. In my own case, by delaying my work I just end up feeling even more stressed, not to mention the gathering clouds of guilt and frustration.
It’s perhaps little wonder that research by Fuschia Sirois has shown chronic procrastination — that is, being inclined to procrastinate on a regular, long-term basis — is associated with a host of adverse mental and physical health consequences, including anxiety and depression, poor health such as colds and flu, and even more serious conditions like cardiovascular disease.

Researchers say procrastinating helps us feel better when certain tasks fill us with negative emotions — if they are too difficult or boring, say (Credit: Getty Images)

Sirois believes procrastination has these adverse consequences through two routes – first, it’s stressful to keep putting off important tasks and failing to fulfil your goals, and second, the procrastination often involves delaying important health behaviours, such as taking up exercise or visiting the doctor. “Over time high stress and poor health behaviours are well known to have a synergistic and cumulative effect on health that can increase risk for a number of serious and chronic health conditions such as heart disease, diabetes, arthritis, and even cancer,” she says.
All of this means that overcoming procrastination could have a major positive impact on your life. Sirois says her research suggests that “decreasing a tendency to chronically procrastinate by one point [on a five-point procrastination scale] would also potentially mean that your risk for having poor heart health would reduce by 63%”.
On a positive note, if procrastination is an emotional regulation issue, this offers important clues for how to address it most effectively. An approach based on Acceptance and Commitment Therapy or ‘ACT’, an off-shoot of Cognitive Behavioural Therapy, seems especially apt. ACT teaches the benefits of ‘psychological flexibility’ – that is, being able to tolerate uncomfortable thoughts and feelings, staying in the present moment in spite of them, and prioritising choices and actions that help you get closer to what you most value in life.
Relevant here is cutting-edge research that’s shown students who procrastinate more tend to score higher on psychological inflexibility. That is, they’re dominated by their psychological reactions, like frustration and worry, at the expense of their life values; high scorers agree with statements like ‘I’m afraid of my feelings’ and ‘My painful experiences and memories make it difficult for me to live a life that I would value’. Those who procrastinate more also score lower on ‘committed action’, which describes how much a person persists with actions and behaviours in pursuit of their goals. Low scorers tend to agree with statements like ‘If I feel distressed or discouraged, I let my commitments slide’.

Research shows that once the first step is made towards a task, following through becomes easier

ACT trains people both to increase their psychological flexibility (for example, through mindfulness) and their committed action (for example, by finding creative ways to pursue goals that serve their values — what matters most to them in life), and preliminary research involving students has been promising, with ACT proving more effective than CBT in one trial over the longer term.
Of course, most of us probably won’t have the option of signing up to an ACT course any time soon — and in any case we’re bound to keep putting off looking for one — so how can we go about applying these principles today? “When someone finally recognises that procrastination isn’t a time management problem but is instead an emotion regulation problem, then they are ready to embrace my favourite tip,” says Pychyl.
The next time you’re tempted to procrastinate, “make your focus as simple as ‘What’s the next action – a simple next step – I would take on this task if I were to get started on it now?’”. Doing this, he says, takes your mind off your feelings and onto easily achievable action. “Our research and lived experience show very clearly that once we get started, we’re typically able to keep going. Getting started is everything.”
Dr Christian Jarrett is a senior editor at Aeon magazine. His next book, about personality change, will be published in 2021.
After ~12.5 years at Google and ~10 years working on Go (#golang), it’s time for me to do something new. Tomorrow is my last day at Google.
Working at Google and on Go has been a highlight of my career. Go really made programming fun for me again, and I’ve had fun helping make it. I want to thank Rob Pike for letting me work on Go full time (instead of just as a distraction on painfully long gBus rides) as well as Russ Cox and Ian Lance Taylor and Robert Griesemer and others for all the patience while I learned my way around. I’ve loved hacking on various packages and systems with the team and members of the community, giving a bunch of talks, hanging out in Denver, Sydney, MTV, NYC, at FOSDEM and other meet-ups, etc. While I’ve learned a bunch while working on Go, more excitingly I discovered many things that I didn’t know I didn’t know, and it was a joy watching the whole team and community work their (to me) magic.
I’ll still be around the Go community, but less, and differently. My @golang.org email will continue to work and please continue to mail me or copy me on GitHub (@bradfitz), especially for something broken that might be my fault.
In somewhat chronological order, but not entirely:
Worked on the Social Graph API, continually extracting FOAF and other such semantic social links between pages, indexing it all, and exporting it over a public API. It’s since been shut down, but I learned a lot about Google indexing and production infrastructure (Borg, BigTable, MapReduce), and got up to speed on Google-used languages & style (C++, Java, Python, Sawzall).
Web-ified the Google open source CLA process; it previously involved faxing and stuff, which nobody wanted to do.
Worked on Gmail’s backend for a bit, specifically its address book backend.
Integrated personal address book search into Google’s main search. (So searching google.com for somebody in your contacts would show your address book entry for them.) It dark launched as an experiment, but never went live because at the time we didn’t have enough SSDs in enough data centers to meet latency budgets. (I remember meetings with various teams drawing Gantt charts a few milliseconds wide, showing RPCs and latency budgets… blew my mind at the time.)
Rewrote memcached (which I wrote pre-Google) in Google C++ using Google’s RPC system, and then added memcache support to App Engine. The Google memcache server continues to be used by many teams.
Fixed a bunch of performance bugs in Android when it first came out, got invited to join the team.
Worked on the Android framework team doing performance analysis and tooling and fixes (some of the commits).
Did a bunch of logs processing on nightly uploaded performance samples from Android team & Googler dogfood phones and identified problematic code & stack traces & RPCs causing UI jank.
Worked on a distributed build system for Android that let you farm a make -j9999 of arbitrary code out over a cluster of machines, building in a custom FUSE filesystem that watched what got written and communicated back to the coordinator server. It was internal code, but its spirit lives on in hanwen/termite, a Go implementation.
Started writing in Go for various Android analysis tasks.
Started writing in Go for personal projects (Perkeep, then named Camlistore).
Started sending changes to Go to fix/add to the Go standard library (starting May 5, 2010, with some os.Chtimes changes).
Invited to join the Go team.
Worked on most of the Go standard library. Primary author of net/http, database/sql, os/exec, Go’s build/test CI system, etc.
20% project: co-author of PubSubHubbub, which became WebSub.
Rewrote the Google download server from C++ to Go. This was one of the first production Go services at Google (possibly the first), and involved plenty of dependency work to make it happen. It also showed various shortcomings in the Go runtime which are long since fixed.
Gave many Go talks at various conferences.
Wrote Go’s HTTP/2 implementation, client and server, in Go 1.6.
8 managers, IIRC. (not at once, fortunately; “Eight bosses?” “Eight.” “Eight, Bob”)
15 Go releases (and several before Go 1)
See go/bradfitz for the internal version of this document. (It’s approximately the same but with a bit more stuff I can’t or don’t want to share publicly.)
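One concrete legacy of the HTTP/2 work mentioned above: since Go 1.6, the standard library’s net/http negotiates HTTP/2 automatically over TLS. Here is a minimal sketch (my own illustration, not code from the post) that uses the httptest package to stand up an HTTP/2-enabled TLS server and check which protocol the client actually spoke:

```go
// Illustrative only: shows Go's net/http serving HTTP/2 transparently.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// protoUsed starts an HTTP/2-enabled test server, makes one request,
// and returns the protocol the request arrived over (e.g. "HTTP/2.0").
func protoUsed() (string, error) {
	srv := httptest.NewUnstartedServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// r.Proto reflects the negotiated protocol version.
			fmt.Fprint(w, r.Proto)
		}))
	srv.EnableHTTP2 = true // must be set before StartTLS
	srv.StartTLS()
	defer srv.Close()

	// srv.Client() is preconfigured to trust the test server's TLS
	// certificate and to attempt HTTP/2.
	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	proto, err := protoUsed()
	if err != nil {
		panic(err)
	}
	fmt.Println(proto) // "HTTP/2.0" when both ends negotiate h2
}
```

The point of the sketch is that no HTTP/2-specific code appears anywhere: the handler, client call, and server setup are plain net/http, and the protocol upgrade happens inside the library.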
Why are you leaving?
Little bored. Not learning as much as I used to. I’ve been doing the same thing too long and need a change. It’d be nice to primarily work in Go rather than work on Go.
When I first joined Google it was a chaotic first couple of years while I learned Google’s internal codebase, build system, a bunch of new languages, Borg, Bigtable, etc. Then when I joined Android it was fun/learning chaos again. Go was the same when I joined: a new, fast-moving experiment. Now Go is very popular and stable and, while there’s a lot to do, things–often necessarily–move pretty slowly. Moving slowly is fine, and hyper-specializing in small corners of Go makes sense at scale (few percent improvements add up!), but I want to build something new again.
I don’t want to get stuck in a comfortable rut. (And Google certainly is comfortable, except for open floor plans.)
TBA. But building something new.
This is why i use ad blockers and a pi-hole server…….. request map of nytimes. pic.twitter.com/yf9Sv4daQH
An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents which show that the data being sold is highly sensitive, and that its sale is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.
The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.
Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users who opt in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.
The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.
Do you know about any other companies selling data? We’d love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on firstname.lastname@example.org, or email email@example.com.
Although the data does not include personal information such as users’ names, it still contains a wealth of specific browsing data, and experts say it could be possible to deanonymize certain users.
In a press release from July, Jumpshot claims to be “the only company that unlocks walled garden data” and seeks to “provide marketers with deeper visibility into the entire online customer journey.” Jumpshot has previously discussed some of its clients publicly. But other companies mentioned in Jumpshot documents include Expedia, IBM, Intuit, which makes TurboTax, Loreal, and Home Depot. Employees are instructed not to talk publicly about Jumpshot’s relationships with these companies.
“It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” the source said, referring to the specificity and sensitivity of the data being sold. Motherboard granted the source anonymity to speak more candidly about Jumpshot’s processes.
Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and AdBlock Plus creator Wladimir Palant published a blog post in October showing that Avast harvested user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.
However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.
“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.
Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”
Despite Avast currently asking users to opt back into the data collection via a pop-up in the antivirus software, multiple Avast users said they did not know that Avast was selling browsing data.
“I was not aware of this,” Keith, a user of the free Avast antivirus product who only provided their first name, told Motherboard. “That sounds scary. I usually say no to data tracking,” they said, adding that they haven’t yet seen the new opt-in pop-up from Avast.
“Did not know that they did that :(,” another free Avast antivirus user said in a Twitter direct message.
Motherboard and PCMag contacted over two dozen companies mentioned in internal documents. Only a handful responded to questions asking what they do with data based on the browsing history of Avast users.
“We sometimes use information from third-party providers to help improve our business, products and services. We require these providers to have the appropriate rights to share this information with us. In this case, we receive anonymized audience data, which cannot be used to identify individual customers,” a Home Depot spokesperson wrote in an emailed statement.
Microsoft declined to comment on the specifics of why it purchased products from Jumpshot, but said that it doesn’t have a current relationship with the company. A Yelp spokesperson wrote in an email, “In 2018, as part of a request for information by antitrust authorities, Yelp’s policy team was asked to estimate the impact of Google’s anticompetitive behavior on the local search marketplace. Jumpshot was engaged on a one-time basis to generate a report of anonymized, high-level trend data which validated other estimates of Google’s siphoning of traffic from the web. No PII was requested or accessed.”
“Every search. Every click. Every buy. On every site.”
Southwest Airlines said it had discussions with Jumpshot but didn’t reach an agreement with the company. IBM said it did not have a record of being a client, and Altria said it is not working with Jumpshot, although didn’t specify if it did so previously. Sephora said it has not worked with Jumpshot. Google did not respond to a request for comment.
On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.
As well as Expedia, Intuit, and Loreal, other companies which are not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.
On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.
Jumpshot sells a variety of different products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads.
Another Jumpshot product is the company’s so-called “All Clicks Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.
In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s.]
Jumpshot’s data could show how someone with Avast antivirus installed on their computer searched for a product on Google, clicked on a link that went to Amazon, and then maybe added an item to their cart on a different website, before finally buying a product, the source who provided the documents explained.
One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called “Insight Feed” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.
Jumpshot gave Omnicom access to all click feeds from 14 different countries around the world, including the U.S., England, Canada, Australia, and New Zealand. The product also includes the inferred gender of users “based on browsing behavior,” their inferred age, and “the entire URL string” but with personally identifiable information (PII) removed, the contract adds.
Omnicom did not respond to multiple requests for comment.
According to the Omnicom contract, the “device ID” of each user is hashed, meaning the company buying the data should not be able to identify who exactly is behind each piece of browsing activity. Instead, Jumpshot’s products are supposed to give insights to companies who may want to see what products are particularly popular, or how well an ad campaign is working.
“What we don’t do is report on the Jumpshot Device ID that executed the clicks to protect against the triangulation of PII,” one internal Jumpshot document reads.
But Jumpshot’s data may not be totally anonymous. The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.” Numerous articles and academic studies have shown how it is possible to unmask people using so-called anonymized data. In 2006, New York Times reporters were able to identify a specific person from a cache of supposedly anonymous search data that AOL publicly released. Although the tested data was more focused on social media links, which Jumpshot redacts somewhat, a 2017 study from Stanford University found it was possible to identify people from anonymous web browsing data.
“De-identification has shown to be a very failure-prone process. There are so many ways it can go wrong,” Günes Acar, who studies large-scale internet tracking at the Computer Security and Industrial Cryptography research group at the Department of Electrical Engineering of the Katholieke Universiteit Leuven, said.
De-anonymization becomes a greater concern when considering how the eventual end-users of Jumpshot’s data could combine it with their own data.
“Most of the threats posed by de-anonymization—where you are identifying people—comes from the ability to merge the information with other data,” Acar said. A set of Jumpshot data obtained by Motherboard and PCMag shows how each visited URL comes with a precise timestamp down to the millisecond, which could allow a company with its own bank of customer data to see one user visiting their own site, and then follow them across other sites in the Jumpshot data.
“It’s almost impossible to de-identify data,” Eric Goldman, a professor at the Santa Clara University School of Law, said. “When they promise to de-identify the data, I don’t believe it.”
Motherboard and PCMag asked Avast a series of detailed questions about how it protects user anonymity as well as details on some of the company’s contracts. Avast did not answer most of the questions but wrote in a statement, “Because of our approach, we ensure that Jumpshot does not acquire personal identification information, including name, email address or contact details, from people using our popular free antivirus software.”
“Users have always had the ability to opt out of sharing data with Jumpshot. As of July 2019, we had already begun implementing an explicit opt-in choice for all new downloads of our AV, and we are now also prompting our existing free users to make an explicit choice, a process which will be completed in February 2020,” it said, adding that the company complies with the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) across its entire global user base.
“We have a long track record of protecting users’ devices and data against malware, and we understand and take seriously the responsibility to balance user privacy with the necessary use of data,” the statement added.
“It’s almost impossible to de-identify data.”
When PCMag installed Avast’s antivirus product for the first time this month, the software did ask if they wanted to opt in to data collection.
“If you allow it, we’ll provide our subsidiary Jumpshot Inc. with a stripped and de-identified data set derived from your browsing history for the purpose of enabling Jumpshot to analyze markets and business trends and gather other valuable insights,” the opt-in message read. The pop-up did not go into detail on how Jumpshot then uses this browsing data, however.
“The data is fully de-identified and aggregated and cannot be used to personally identify or target you. Jumpshot may share aggregated insights with its customers,” the pop-up added.
Just a few days ago, the Twitter account for Avast subsidiary AVG tweeted, “Do you remember the last time you cleaned your #browser history? Storing your browsing history for a long time can take up memory on your device and can put your private info at risk.”
Update: This piece has been updated to include a response from Sephora.
Here are some terms to mute on Twitter to clean your timeline up a bit.
Monty Python stars have led the tributes to their co-star Terry Jones, who has died at the age of 77.
The Welsh actor and writer played a variety of characters in the iconic comedy group’s Flying Circus TV series, and directed several of their films.
He died on Tuesday, four years after contracting a rare form of dementia known as FTD.
David Walliams and Simon Pegg were among other comedians who remembered him.
Fellow Python star Sir Michael Palin described Jones as “one of the funniest writer-performers of his generation”.
In a tweet, John Cleese said he was “a man of so many talents and such endless enthusiasm”.
Eric Idle, another member of the highly influential comedy troupe, recalled the “many laughs [and] moments of total hilarity” they shared.
“It’s too sad if you knew him, but if you didn’t you will always smile at the many wonderfully funny moments he gave us,” he went on.
Terry Gilliam, with whom Jones directed the group’s film The Holy Grail in 1975, described his fellow Python as a “brilliant, constantly questioning, iconoclastic, righteously argumentative and angry but outrageously funny and generous and kind human being”.
“One could never hope for a better friend,” he said.
Palin added: “Terry was one of my closest, most valued friends. He was kind, generous, supportive and passionate about living life to the full.
“He was far more than one of the funniest writer-performers of his generation, he was the complete Renaissance comedian - writer, director, presenter, historian, brilliant children’s author, and the warmest, most wonderful company you could wish to have.”
Screenwriter Charlie Brooker posted: “RIP the actual genius Terry Jones. Far too many brilliant moments to choose from.”
David Walliams thanked his comedy hero “for a lifetime of laughter”.
Simon Pegg - who acted in Jones’ final film as director, 2015’s Absolutely Anything - said: “Terry was a sweet, gentle, funny man who was a joy to work with and impossible not to love.”
And comedian Eddie Izzard told BBC News: “It’s a tragedy - the good go too early. Monty Python changed the face of world comedy. It will live forever. It’s a terrible loss.”
Shane Allen, BBC controller of comedy commissioning, wrote that it was a “sad day to lose an absolute titan of British comedy” and “one of the founding fathers of the most inﬂuential and pioneering comedy ensembles of all time”.
Jones was born in Colwyn Bay and went on to study at Oxford University, where he met his future Python pal Palin in the Oxford Revue - a student comedy group.
Alongside Palin, Idle and the likes of David Jason, he appeared in the BBC children’s satirical sketch show Do Not Adjust Your Set, which would set the template for their work to come with Python.
He wrote and starred in Monty Python’s Flying Circus TV show and the comedy collective’s ﬁlms, as a range of much-loved characters. These included Arthur “Two Sheds” Jackson, Cardinal Biggles of the Spanish Inquisition and Mr Creosote.
In addition to directing The Holy Grail with Gilliam, Jones took sole directorial charge of 1979’s Life of Brian and The Meaning of Life in 1983.
Cleese said: “Of his many achievements, for me the greatest gift he gave us all was his direction of Life of Brian. Perfection.”
Beyond Monty Python, he wrote the screenplay for the 1986 ﬁlm Labyrinth, starring David Bowie.
Monty Python’s Flying Circus, the groundbreaking comedy series that made Jones and his fellow cast members international stars, ﬁrst aired on BBC One in October 1969.
Surreal, anarchic and bawdily irreverent, the show’s blend of live-action sketches and animated interludes mocked both broadcasting conventions and societal norms.
Jones and Palin had met at Oxford, while Cleese, Graham Chapman and Eric Idle studied at Cambridge. After university, they took part in various comedy shows before forming Monty Python with US-born animator Terry Gilliam.
After four series, the troupe moved to the big screen to make Arthurian spoof Monty Python and the Holy Grail and Monty Python’s Life of Brian, a controversial parody of Biblical epics.
Monty Python’s The Meaning of Life, their ﬁnal ﬁlm as a collective, returned to the original series’ sketch-based format.
The surviving members reunited periodically after Chapman’s death in 1989, most notably for a run of live shows at the O2 in London in 2014.
* “Now, you listen here! He’s not the Messiah. He’s a very naughty boy!” - as Brian’s mother in Monty Python’s Life of Brian
* “I’m alive, I’m alive!” - as the naked hermit who gives away the location of a hiding Brian in Life of Brian
* “I shall use my largest scales” - as Sir Bedevere, who oversees a witch trial in Monty Python and the Holy Grail
* “What, the curtains?” - as Prince Herbert, who is told “One day, lad, all this will be yours” in Holy Grail
* “Spam, spam, spam, spam, spam, spam, spam” - as the greasy spoon waitress in a Monty Python sketch
The statement from Jones’ family noted his “uncompromising individuality, relentless intellect and extraordinary humour [that] has given pleasure to countless millions across six decades”.
“Over the past few days his wife, children, extended family and many close friends have been constantly with Terry as he gently slipped away at his home in north London.
“His work with Monty Python, his books, ﬁlms, television programmes, poems and other work will live on forever, a ﬁtting legacy to a true polymath.”
The family thanked Jones’ “wonderful medical professionals and carers for making the past few years not only bearable but often joyful”.
They said: “We hope that this disease will one day be eradicated entirely. We ask that our privacy be respected at this sensitive time and give thanks that we lived in the presence of an extraordinarily talented, playful and happy man living a truly authentic life, in his words ‘Lovingly frosted with glucose.’”
* FTD is an uncommon type of dementia that mainly affects the front and sides of the brain
* The initial symptoms are, in most cases, changes in behaviour, mental ability and language, which will get worse over time
* Language problems can include loss of vocabulary, repetition and forgetting the meaning of common words
* FTD is caused by abnormal protein clumps forming inside the brain cells, which are thought to damage the cells and stop them working. Why this happens is not fully understood
* Although there are currently no cures or treatments to slow FTD down, there are treatments that can help control some symptoms
Follow us on Facebook, or on Twitter @BBCNewsEnts. If you have a story suggestion email firstname.lastname@example.org.
Today, on Wikipedia’s 19th birthday, the Wikimedia Foundation has received reports that access to Wikipedia in Turkey is actively being restored.* This latest development follows a 26 December 2019 ruling by the Constitutional Court of Turkey that the more than two and a half year block imposed by the Turkish government was unconstitutional. Earlier today, the Turkish Constitutional Court made the full text of that ruling available to the public, and shortly after, we received reports that access was restored to Wikipedia.
We are thrilled that the people of Turkey will once again be able to participate in the largest global conversation about the culture and history of Turkey online and continue to make Wikipedia a vibrant source of information about Turkey and the world.
“We are thrilled to be reunited with the people of Turkey,” said Katherine Maher, Executive Director of the Wikimedia Foundation. “At Wikimedia we are committed to protecting everyone’s fundamental right to access information. We are excited to share this important moment with our Turkish contributor community on behalf of knowledge-seekers everywhere.”
We are actively reviewing the full text of the ruling by the Constitutional Court of Turkey. In the meantime, our case before the European Court of Human Rights is still being considered by the Court. We ﬁled a petition in the European Court of Human Rights in spring of last year, and in July, the Court granted our case priority status. We will continue to advocate for strong protections for free expression online in Turkey and around the world.
Wikipedia is a global free knowledge resource written and edited by people around the world. Because of this open editing model, Wikipedia is also a resource everyone can be a part of actively shaping — adding knowledge about their culture, country, interests, studies, and more through Wikipedia’s articles. Volunteers work together to write articles about many different topics ranging from history, pop culture, science, sports, and more using reliable sources to verify the facts. It is through this collective process of writing, discussion, and debate that Wikipedia becomes more neutral, more comprehensive, and more representative of the world’s knowledge.
More than 85 percent of the articles on Wikipedia are in languages other than English, which includes the Turkish Wikipedia’s more than 335,000 articles, written by Turkish-speaking volunteers for Turkish-speaking people.
In the time that the block was in effect, we heard from students, teachers, professionals and more in Turkey about how the block had impacted their daily lives. For many students, the block had occurred just days before their ﬁnal exams. On social media, members of the international volunteer Wikipedia editor community and countless individuals shared messages of support with #WeMissTurkey and their desire to once again collaborate with the people of Turkey on Wikipedia.
With the decision today, our editors in Turkey will once again be able to fully participate in sharing and contributing to free knowledge online.
* We have received reports that several internet service providers in Turkey, depending on the location, have restored access to Wikipedia in Turkey, with some still in the process of restoring access. We will keep this statement updated as further access is restored.
[Interactive tool: click on a cell to toggle bit values and build intuition for the IEEE floating-point format. See Wikipedia for details on the half-precision, single-precision and double-precision floating-point formats.]
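The same intuition can be built in code. Here is a small sketch (the function name and the checks are illustrative) that decodes a 16-bit half-precision bit pattern by hand and cross-checks it against Python’s native `struct` codec:

```python
import struct

def half_to_float(bits: int) -> float:
    """Decode a 16-bit IEEE 754 half-precision bit pattern by hand:
    1 sign bit, 5 exponent bits (bias 15), 10 fraction bits."""
    sign = (bits >> 15) & 0x1
    exp = (bits >> 10) & 0x1F
    frac = bits & 0x3FF
    if exp == 0x1F:  # all-ones exponent: infinity (frac == 0) or NaN
        return float("nan") if frac else (-1.0) ** sign * float("inf")
    if exp == 0:     # zero exponent: subnormal, no implicit leading 1
        return (-1.0) ** sign * frac * 2.0 ** -24
    return (-1.0) ** sign * (1 + frac / 1024) * 2.0 ** (exp - 15)

# Cross-check a few patterns against Python's half-precision codec ('e').
assert half_to_float(0x3C00) == 1.0   # sign 0, exponent 15, fraction 0
assert half_to_float(0xC000) == -2.0  # sign 1, exponent 16, fraction 0
assert half_to_float(0x3555) == struct.unpack("<e", struct.pack("<H", 0x3555))[0]
```

Toggling a single exponent bit doubles or halves the value, which is exactly what the interactive tool lets you observe visually.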
Having information in digital form, and collecting and writing notes, is incredibly valuable.
Our brains are good at associations, pattern matching and creative thinking, not at storing arrays of structured data; external memory is one of the main thinking hacks computers help us with.
However, this information is not very useful if you can’t access and search it quickly. Instant search changes the way you think. Ever got a sense of flow while working through some problem, trying different things from Stack Overflow or the documentation?
These days, if you have a decent connection, you are seconds away from finding almost any piece of public knowledge on the internet. However, there is another kind of information: personal and specific to your needs, work and hobbies. It’s your todo list, your private notes, the books you are reading. It’s not that well integrated with the outside world, so the tooling and experience of interacting with it are very different.
To find something from my Messenger history with a friend, I need to be online, open Facebook, navigate to search and use the interface Facebook’s employees thought convenient (spoiler: it sucks).
It’s my information, something that came out from my brain. Why can’t I have it available anywhere, anytime, presented the way I prefer?
To find something in my Kobo ebook, I need to reach the device physically and type the query on its virtual keyboard (yep, e-ink lag!). Not a very pleasant experience.
It’s something I own and have read. Why does it have to be so hard?
Such things are pretty frustrating to me, so I’ve been working on making them easier. Search has to be incremental, fast and as convenient as possible. I’ll share some of my workflows, tricks and thoughts in this post.
The post is geared towards Emacs and Org-mode, but hopefully you’ll find some useful tricks for your current tools and workflow even if you don’t use them. There is (almost) nothing inherently special about Emacs; I’m sure you can achieve similar workflows in other modern text editors, given that they are flexible enough.
Note: throughout the post I link to my Emacs config snippets. To prevent code references from going stale I use permalinks, but check the master branch as well in case of patches or further comments in the code.
* my personal notes, tasks and knowledge repository (this blog included)
By personal information I mean things like your todo list, a personal wiki, or whatever else you use to store information relevant to your life.
For the most part, I keep things in Org mode, and I use Emacs to work with it. Apart from the regular means of plaintext search (I’ll write about that later), for me it’s important to be able to search over tags:
org-tags-view is available by default and is an easy way to run simple tag searches
org-ql is a relatively new package; its main feature is a new query syntax, which is much easier to use and remember than the built-in Org query syntax (see the comparison).
I mainly use these commands:
helm-org-ql for incremental search in the current buffer
org-ql-search interactively prompts you for the search target, sort and grouping
Another notable mention is org-rifle, an entry-based search that presents headings along with the matched content in a Helm buffer. However, as its author mentions, it might soon be obsoleted by org-ql.
Here are some typical workflows with my Org-mode:
I see an interesting article or think of something that would be good to share with a friend, but it’s not quite the right time to send it. I can just capture it and attach that person’s tag. Next time we chat, I can look up the items under their tag and send them.
It works the other way around as well: imagine they sent me a link or asked me to do something, but I can’t act on it immediately. I have a special script that converts chat messages into todo items and automatically attaches the corresponding tag. I write more about it here.
Unfortunately, I can’t just sit down and write a comprehensible text without preparation. Typically I have thoughts on a topic now and then, which I note down and mark with a tag. When I feel it’s time to prepare a post, I search by that tag (e.g. tags:search for this post) and refile the items into the file with the post draft.
As I mentioned, having to switch to the browser, wait for the website to load and cope with crappy search implementations is very distracting and frustrating.
What’s more, often you don’t even remember where exactly you discussed something: was it Telegram, Facebook or Reddit? So having a single point of entry and unified search over all of your stuff is extremely helpful.
For instant messaging, I’m using plaintext mirrors, so chat history is always available in plaintext on my computers:
I didn’t bother with Org-mode here because the files would be too huge, and there isn’t much structure anyway.
Sadly, the export tool stopped working because of API restrictions, but I’m not using VK much anymore either. At least I got the historic messages.
Most services where I can comment, write or leave annotations, I mirror as Org-mode. I write about it in detail here: part I, part II.
That gives me source data for a search engine over anything I’ve ever:
All these files are either non-Org or somewhat heavy for structured Org-mode search. In addition, I have many old files from my pre-Org-mode era, when I was using Gitit or Zim.
To search over them, I use Emacs and ripgrep (you can read why later):
my/search runs ripgrep against the paths in the my/search-targets variable, which contains my notes, chat logs, Orger outputs, etc.
The interesting bit about my/search is the my/one-off-helm-follow-mode call. It’s a somewhat horrible hack that automatically enables helm-follow-mode so you don’t have to press C-c C-f every time you invoke Helm.
Finally, to make sure I can invoke search in an instant, I use a global keybinding.
Here’s a demo gif (5MB) of using this to search for ‘greg egan’ in my knowledge repository. You can see that the results include my Kobo highlights, my reading list and even some video!
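Outside Emacs, the same fan-out idea (one query run over a fixed list of target directories) can be sketched in Python. This is a toy, pure-Python stand-in for ripgrep (the real tool is far faster), and the target paths below are hypothetical:

```python
from pathlib import Path

# Hypothetical stand-ins for the paths kept in my/search-targets.
SEARCH_TARGETS = ["~/notes", "~/chat-logs", "~/orger-output"]

def search(query: str, targets: list[str] = SEARCH_TARGETS) -> list[str]:
    """Toy ripgrep stand-in: scan every readable text file under each
    target directory and return 'path:lineno:line' for matching lines."""
    hits = []
    for target in targets:
        root = Path(target).expanduser()
        if not root.exists():
            continue  # skip targets that don't exist on this machine
        for path in sorted(p for p in root.rglob("*") if p.is_file()):
            try:
                lines = path.read_text(encoding="utf-8").splitlines()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            for lineno, line in enumerate(lines, 1):
                if query in line:
                    hits.append(f"{path}:{lineno}:{line.strip()}")
    return hits
```

The point is the unified entry point: one function, one keybinding, and every plaintext mirror (notes, chat logs, exports) is searched at once.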
Recoll is an indexer that runs as a daemon (or a regular cron job), plus a full-text search tool.
It supports many formats and has plenty of other features, so I suggest checking it out for yourself.
Even though I index all my documents, I find it quicker to use the grep setup described above for plaintext. So for me, Recoll is mostly for searching and quickly jumping to results in PDFs and EPUBs (see screenshot).
There is a helm-recoll Emacs module, but I found it a bit awkward to use, and the Recoll GUI feels significantly superior. Basically, the only thing helm-recoll does is present a list of filenames that match your query. It feels like it should be straightforward to modify the module to integrate abstracts, snippets and the other things you can query Recoll for.
Considering I don’t need Recoll too often, I just gave up on helm-recoll and use the GUI.
I’m also running a Web UI on my VPS, so I can use it from my phone, or potentially from other computers.
Recoll’s distinguishing features are a proper search query language and realtime, inotify-based indexing. I don’t have enough personal data yet to benefit massively from proper search queries, but I can see them becoming useful as the amount grows.
Most of my notes and knowledge repository are plaintext, so they are easily and continuously synced to my phone via Dropbox/Syncthing.
Since using Emacs on Android is hardly a meaningful experience, I’m working around it by using other apps.
I can’t recommend Orgzly enough: it gets many things right, it’s very fast, and the code is extremely readable and well tested, so it’s easy to contribute.
It has its own small query language (org-ql didn’t exist at the time).
You can save search queries, which ends up being pretty similar to custom Org-mode agendas. Searches can be displayed as persistent widgets; e.g. I find it convenient to have a phone screen dedicated to a ‘Buy’ search (the t.buy query) or a ‘Do at work’ search (the t.@work query).
As I described above, I keep a few saved searches for some friends so I can recall what I wanted to discuss with them.
Docsearch is not a very well-known tool (zero search results on Reddit or Twitter, for example), but I don’t know of any alternatives to it.
It’s a full-text indexing and search app for plaintext files, and apparently it even supports EPUBs and PDFs.
Here’s how the list of matches looks. The screenshots on Google Play give a pretty good idea of what the app does.
I find it convenient for quick searches over things that aren’t imported into Orgzly, e.g. chat logs (Telegram, VK) and the huge Org-mode files I described above.
It’s a bit backwards in terms of UI (even though I like that it’s compact and functional), but the main downside is that it’s not open source. I’d be extremely happy to replace it with an open source application, so please let me know if you know of one!
On the rare occasions when I need to search in PDFs or books (which I don’t sync to my phone), I just use the Recoll Web UI that I’m self-hosting.
If you’re reading this at all, chances are you’re already quite good at using web search. Gwern has a good writeup on the subject.
Knowing how to compose a search query is one thing, but navigating to the service, waiting for it to load and moving to the search box takes precious time. Many people forget about custom search engines. Here are the ones I’m using:
Some of these are obvious; some deserve a separate mention:
reddit contains vast amounts of (somewhat curated) human knowledge
Google search often gives dubious and not very meaningful results on certain topics (e.g. product reviews, exercise, dieting). On Reddit, you’d at least find real people sharing their honest opinions. Chances are that if a link is good, you’ll find it on Reddit anyway.
twitter is similar: there is certainly more spam there, but sometimes it’s interesting to type a link or a blog post title into Twitter search to see how real people reacted.
That has limited utility (it doesn’t work with politicized content, for example), but if the topic of interest is rare, it can be very useful.
* pinboard is an awesome source of curated content as well
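A custom search engine is essentially a prefix-to-URL-template mapping. A minimal Python illustration (the prefixes and templates here are made up, not my actual ones):

```python
from urllib.parse import quote_plus

# Hypothetical prefix -> URL-template table, in the spirit of browser
# keyword search engines; these prefixes and templates are made up.
ENGINES = {
    "r": "https://www.reddit.com/search/?q={}",
    "tw": "https://twitter.com/search?q={}",
    "pb": "https://pinboard.in/search/?query={}",
}

def build_url(command: str) -> str:
    """Turn e.g. 'r emacs org-ql' into a Reddit search URL."""
    prefix, _, query = command.partition(" ")
    return ENGINES[prefix].format(quote_plus(query))
```

This is exactly what the browser does for you when you assign a keyword to a search engine: the prefix selects the template, the rest of what you typed gets URL-encoded into it.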
Next, I find it very convenient to have code documentation available locally. First, it helps when you’re on a wonky connection or just offline for whatever reason. Second, it feels really fast, even if you’re on fiber.
Here’s what I’m using for that:
Sadly, the extension mentioned above doesn’t work with the schema for some reason, so to add it in Firefox you can use the method described here; it’s as easy as adding a bookmark.
Recently I ran into devdocs.io, which uses your browser’s offline storage to cache documentation. I’m still getting used to it, but it’s amazing how much faster it is than jumping to documentation online. You can use it with multiple languages: you type the search engine prefix first, then the language prefix (e.g. dd cpp emplace_back).
Finally, it may be convenient to set up engine-mode in Emacs, or the search-engine layer in Spacemacs. It lets you invoke a browser search directly from Emacs (e.g. SPC s G to do a Google search). I find it handy when I need to search for many things in bulk.
I ﬁnd it convenient to enable ‘highlight all’ for search within a page.
When I used Chrome, one thing that annoyed me was that it populates search engines automatically, and there is no way to disable it.
There is a nice open source extension that prevents Chrome from doing it.
TLDR: I tried various existing code search tools, was disappointed, and ended up using Emacs + ripgrep.
I’ve got lots of personal projects, experiments, and data-processing and backup scripts on my computer. I also tend to create a git repository at the slightest opportunity, primarily as a means of code backup/rollback and progress tracking, but often it results in an actual project, so I’d need a repository anyway. Naturally, these repositories end up scattered across the whole filesystem, making it tricky to remember where I’ve put some code, or that it even existed in the first place.
If you’re in a similar situation, having some sort of code search engine is very convenient, for multiple reasons:
For instance, say I want to remove an unused function or refactor something in my package, a Python library for accessing my personal data. It’s used in lots of scripts and dashboards that run in cron every day.
I could just go for it, remove the function and hope nothing fails; but if something does, I’d have to deal with fixing it. It’s frustrating, and I’d rather search for the function’s usages in all of my code and make sure it’s actually safe to remove.
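A minimal sketch of that workflow, under my own simplifying assumptions (plain Python instead of ripgrep, and Python sources only): locate every git repository under a root directory, then scan each one for the symbol.

```python
from pathlib import Path

def find_repos(root: str) -> list[Path]:
    """Locate every git repository scattered under root by looking for
    .git directories."""
    return sorted(p.parent for p in Path(root).rglob(".git") if p.is_dir())

def find_usages(symbol: str, root: str) -> list[str]:
    """Scan every Python file in every repo under root for a symbol,
    returning 'path:lineno:line' hits."""
    hits = []
    for repo in find_repos(root):
        for path in sorted(repo.rglob("*.py")):
            try:
                lines = path.read_text(encoding="utf-8").splitlines()
            except (UnicodeDecodeError, OSError):
                continue  # skip unreadable files
            for lineno, line in enumerate(lines, 1):
                if symbol in line:
                    hits.append(f"{path}:{lineno}:{line.strip()}")
    return hits
```

If find_usages returns nothing outside the library itself, the function is safe to delete; otherwise every hit is a call site to fix first.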
When you’re getting familiar with some new library or framework, you often end up googling how to solve the same problem twice. Sometimes you remember solving a problem before, but don’t quite recall where.
For me, one such library is SQLAlchemy. It’s very convenient for handling databases, but I need it only infrequently, so I can never remember how to work with it. Reading the documentation all over again is not very helpful, because I’ve got very few use cases, and my queries are specific to my purposes.
If I can search for sqlalchemy in my code, it shows every repository where I’ve used it, so I can quickly copy the bit of code I’m interested in.
It happens that I remember writing code for some purpose, but don’t quite recall where I put it. Even if you keep all your repos in the same location, you might forget what you named one.
Full-text search, however, lets me find it if I remember some comments or class/function names.
However good a library’s documentation is, sometimes it just doesn’t cover your needs. If you’re a power user, the docs are almost never enough, and you end up reading the code to bend the library into doing what you want.
For me, such libraries are Org-mode and Hakyll, so I often had to search their code on GitHub. Searching on GitHub, however, is quite awkward: it’s slow, it’s not incremental, and it lacks navigation.
If I have a local clone of the repository on my disk, I can search over it in an instant (without even having it open) and use familiar tools (e.g. an IDE) for navigation.
At the time, I was just using recursive grep and then opening some of the results in vim to refine them. That’s a pretty pathetic workflow.
run against code on my filesystem
Just any source files, so it wouldn’t have to fetch repositories from GitHub and keep them somewhere separately.
Ideally inotify-based, but any means of refreshing the search index without having to commit first would be nice.
semantic search over definitions/variables etc., with a fallback to simple text search if the language isn’t supported
So, I wanted a code search and indexing tool that could watch over all the source files on my filesystem and let me search through them.
It sounds like a fairly straightforward wish, but to my surprise, none of the existing projects I found and tried does the job:
Lets you index GitHub/Bitbucket/GitLab repos etc., but the process for adding local repositories is extremely tedious. Also, it apparently clones repositories first, so it’s not exactly realtime indexing.
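The index-refresh requirement (updating without having to commit first) can be approximated even without inotify: cache each file’s mtime and rescan only the files that changed since the last query. A naive sketch (a toy word index, not a real search engine):

```python
from pathlib import Path

class FreshIndex:
    """Naive word index that rescans a file only when its mtime changes;
    a cheap approximation of inotify-based index refreshing."""

    def __init__(self) -> None:
        self._mtimes: dict[Path, float] = {}
        self._words: dict[Path, set[str]] = {}

    def refresh(self, path: Path) -> None:
        mtime = path.stat().st_mtime
        if self._mtimes.get(path) != mtime:  # changed since the last scan
            self._words[path] = set(path.read_text(encoding="utf-8").split())
            self._mtimes[path] = mtime

    def search(self, word: str, paths: list[Path]) -> list[Path]:
        for p in paths:
            self.refresh(p)  # no commit needed: index refreshes on query
        return [p for p, words in self._words.items() if word in words]
```

A real tool would use inotify to refresh eagerly instead of on every query, but the contract is the same: the index tracks the working tree, not the git history.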