10 interesting stories served every morning and every evening.
Egypt has been certified malaria-free by the World Health Organization (WHO) - an achievement hailed by the UN public health agency as “truly historic”.
“Malaria is as old as Egyptian civilization itself, but the disease that plagued pharaohs now belongs to its history,” said WHO chief Tedros Adhanom Ghebreyesus.
Egyptian authorities launched their first efforts to stamp out the deadly mosquito-borne infectious disease nearly 100 years ago.
Certification is granted when a country proves that the transmission chain has been interrupted for at least the previous three consecutive years. Malaria kills at least 600,000 people every year, nearly all of them in Africa.
...
Read the original on www.bbc.com »
Please do not write below this line
I have been vexed for some time by the request at the bottom of
each letter that I am not to write below the line.
I emailed TVL/BBC on 5 November 2006 to find out why:
I’ve had a letter from TV Licencing and I’m interested in the
statement at the bottom of the page. It says: ’Please do not write below this
line’. I would like to know why the letter requests this. The line referred to
is about half an inch from the bottom edge of the letter.
What will happen if I write there? How would you know? I am not
asked to return the letter, so why the request?
Seven weeks later, on 27 December 2006, I received
this from one Kelly Wright:
Thank you for contacting us. Unfortunately I am unable to deal
with your request, as you have not provided your address and licence number. If
you have moved address I will need both your new and old address. Once I have
this information I will action your request and send you the appropriate form
or confirmation.
I do not
have a licence. The letter was sent to me as part of TVL’s routine mail-out. It
was not solicited by me. Copies of these letters are commonly reproduced on the
internet [example].
You will see that these letters say “Please do not
write below this line”. So did the one sent to me. Please explain to me why I
am not allowed to write below the line.
Evidently, my question was too taxing for Ms
Wright, as the next response came from Ruairi Mcclean, on 3 January
2007:
The reason you would receive such letters is because we would have
no record of a TV licence at your address.
The reason you cannot write below the line is because the letters
go through a OCR machine, and anything below the line is rejected.
OCR is an abbreviation for “optical character
recognition”, software that converts scanned documents into machine-readable text.
This explanation surprised me. I did not
understand why, having sent me a letter, TVL wanted it back to scan and edit;
why not scan it before sending it to me? I responded on 27 January
2007:
Thank you
for your reply. You say that I cannot write below the line because the letter
will go through a OCR machine, and anything below the line will be rejected.
I have two further questions:
i) I take it from your reply that
a TV officer is planning to collect the letter back off me in order to scan it.
Please tell me what purpose this serves.
ii) Anything I write above the line would also be
rejected by the OCR. Why am I allowed to write above the line, if I am not
allowed to write in the narrow strip beneath it?
On 30 January 2007, I received this reply from Cas
Scott:
A
Licensing officer may call at your property not to collect the letters but to
check that you are not watching a TV.
You may write above the line but as we advised you
previously anything written below the line when they go through the OCR machine
they will be rejected.
If you would like to confirm your address I can up date our
records to advise no Television is being watched.
Thank you for confirming that I may write above the line.
Please explain why, having sent me the
letter, you want it back for scanning. Also, please explain how I am to get the
letter to you.
Without your address we are unable to amend our records to show
that you are not using TV equipment.
To return an enquiry letter to TV licensing,
simply return it.
I don’t seem to be getting a straight
answer.
Thank you for your reply. The purpose of my query is not to ask
why you want my address.
The information that I am seeking is why you want the letter back
for scanning. There was nothing on the letter that said I had to return it.
Please note that I
am not Miss Scott.
The information about returning the letter was not on the letter
itself but on the envelope.
The only
reason we ask you to return the letter is to help us update our records,
however if you could provide us with your address we can update our records
without you returning the letter.
Having kept all my TVL/BBC envelopes, I examined
them to see whether any displayed an instruction that I was to return the
letter. None did. There was a return address, but it was only for undelivered
letters. I resist the temptation to pursue this point.
Thank you for your reply.
What I still do not understand is why you would
wish to OCR my letter in order to update the records.
Obviously, the number below the line must be very
important. Please could you explain its purpose.
I
apologise that it has not been made clear to you. An OCR stands for a Optical
Character System. This machine enables us to deal/process with large volumes of
information in a relatively short space of time. The OCR machine reads the
information below the line and updates the corresponding records on our
computer system many times faster than if manually processed.
If the information
below the line is obscured in anyway the OCR machine will be unable to read the
information effectively. The number below the line is a unique number that
relates to the specific property that the letter has been sent to. Once this
number is read by the OCR machine it will automatically update the computer
records that relate to that letter/property/licence/application.
...
Read the original on www.bbctvlicence.com »
A tool that allows you to extract a list of HTML pages from a website and compile them into an ePub book to be imported into your eReader of choice.
For advanced users who can write JavaScript, you can add additional parser definitions to customize parsing of any site.
Check out the wiki for usage.
Custom sites with UL/OL elements as table of content, or regex on Link text, or use query selector
Custom web app with predefined Title (header) element and Next button (clickable)
...
Read the original on github.com »
An amateur historian has discovered a long-lost short story by Bram Stoker, published just seven years before his legendary gothic novel Dracula. Brian Cleary stumbled upon the 134-year-old ghostly tale while browsing the archives of the National Library of Ireland. Gibbet Hill was originally published in a Dublin newspaper in 1890 - when the Irishman started working on Dracula - but has been undocumented ever since. Stoker biographer Paul Murray says the story sheds light on his development as an author and was a significant “station on his route to publishing Dracula”.
The ghostly story tells the tale of a sailor murdered by three criminals whose bodies were strung up on a hanging gallows as a warning to passing travellers. It is set in Gibbet Hill in Surrey, a location also referenced in Charles Dickens’ 1839 novel Nicholas Nickleby. Mr Cleary made the discovery after taking time off work following a sudden onset of hearing loss in 2021, during which period he would pass the time at the national library in Stoker’s native Dublin. In October 2023, the Stoker fan came across an unfamiliar title in an 1890 Christmas supplement of the Daily Express Dublin Edition. Mr Cleary told the AFP news agency: “I read the words Gibbet Hill and I knew that wasn’t a Bram Stoker story that I had ever heard of in any of the biographies or bibliographies. I sat looking at the screen wondering, am I the only living person who had read it?” He said of the moment he made the discovery: “What on earth do I do with it?” The library’s director Audrey Whitty said Mr Cleary called her and said: “I’ve found something extraordinary in your newspaper archives - you won’t believe it.” She added that his “astonishing amateur detective work” was a testament to the library’s archives. “There are truly world-important discoveries waiting to be found,” she said.
After his initial sleuthing, Mr Cleary contacted biographer Paul Murray, who confirmed there had been no trace of the story for over a century. Murray said 1890 was when Stoker, then a young writer, made his first notes for Dracula. “It’s a classic Stoker story, the struggle between good and evil, evil which crops up in exotic and unexplained ways,” he added. Gibbet Hill is being published alongside artwork by the Irish artist Paul McKinley by the Rotunda Foundation - the fundraising arm of Dublin’s Rotunda Hospital, for which Mr Cleary worked. All proceeds will go to the newly formed Charlotte Stoker Fund - named after Bram Stoker’s mother, who was a hearing loss campaigner - to fund research on infant hearing loss. The discovery is also being highlighted in the city’s Bram Stoker festival later this month.
...
Read the original on www.bbc.com »
Carriers fight plan to require unlocking of phones 60 days after activation.
T-Mobile and AT&T say US regulators should drop a plan to require unlocking of phones within 60 days of activation, claiming that locking phones to a carrier’s network makes it possible to provide cheaper handsets to consumers. “If the Commission mandates a uniform unlocking policy, it is consumers—not providers—who stand to lose the most,” T-Mobile alleged in an October 17 filing with the Federal Communications Commission.
The proposed rule has support from consumer advocacy groups who say it will give users more choice and lower their costs. T-Mobile has been criticized for locking phones for up to a year, which makes it impossible to use a phone on a rival’s network. T-Mobile claims that with a 60-day unlocking rule, “consumers risk losing access to the benefits of free or heavily subsidized handsets because the proposal would force providers to reduce the line-up of their most compelling handset offers.”
If the proposed rule is enacted, “T-Mobile estimates that its prepaid customers, for example, would see subsidies reduced by 40 percent to 70 percent for both its lower and higher-end devices, such as the Moto G, Samsung A15, and iPhone 12,” the carrier said. “A handset unlocking mandate would also leave providers little choice but to limit their handset offers to lower cost and often lesser performing handsets.”
T-Mobile and other carriers are responding to a call for public comments that began after the FCC approved a Notice of Proposed Rulemaking (NPRM) in a 5–0 vote. The FCC is proposing “to require all mobile wireless service providers to unlock handsets 60 days after a consumer’s handset is activated with the provider, unless within the 60-day period the service provider determines the handset was purchased through fraud.”
When the FCC proposed the 60-day unlocking rule in July 2024, the agency criticized T-Mobile for locking prepaid phones for a year. The NPRM pointed out that “T-Mobile recently increased its locking period for one of its brands, Metro by T-Mobile, from 180 days to 365 days.”
T-Mobile’s policy says the carrier will only unlock mobile devices on prepaid plans if “at least 365 days… have passed since the device was activated on the T-Mobile network.”
“You bought your phone, you should be able to take it to any provider you want,” FCC Chairwoman Jessica Rosenworcel said when the FCC proposed the rule. “Some providers already operate this way. Others do not. In fact, some have recently increased the time their customers must wait until they can unlock their device by as much as 100 percent.”
T-Mobile executives, who also argue that the FCC lacks authority to impose the proposed rule, met with FCC officials last week to express their concerns.
“T-Mobile is passionate about winning customers for life, and explained how its handset unlocking policies greatly benefit our customers,” the carrier said in its post-meeting filing. “Our policies allow us to deliver access to high-speed mobile broadband on a nationwide 5G network via handsets that are free or heavily discounted off the manufacturer’s suggested retail price. T-Mobile’s unlocking policies are transparent, and there is absolutely no evidence of consumer harm stemming from these policies. T-Mobile’s current unlocking policies also help T-Mobile combat handset theft and fraud by sophisticated, international criminal organizations.”
For postpaid users, T-Mobile says it allows unlocking of fully paid-off phones that have been active for at least 40 days. But given the 365-day lock on prepaid users, T-Mobile’s overall policy is more onerous than those of other carriers. T-Mobile has also faced angry customers because of a recent decision to raise prices on plans that were advertised as having a lifetime price lock.
AT&T enables unlocking of paid-off phones after 60 days for postpaid users and after six months for prepaid users. AT&T lodged complaints similar to T-Mobile’s, saying in an October 7 filing that the FCC’s proposed rules would “mak[e] handsets less affordable for consumers, especially those in low-income households,” and “exacerbate handset arbitrage, fraud, and trafficking.”
AT&T told the FCC that “requiring providers to unlock handsets before they are paid-off would ultimately harm consumers by creating upward pressure on handset prices and disincentives to finance handsets on flexible terms.” If the FCC implements any rules, it should maintain “existing contractual arrangements between customers and providers, ensure that providers have at least 180 days to detect fraud before unlocking a device, and include at least a 24-month period for providers to implement any new rules,” AT&T said.
Verizon, which already faces unlocking rules because of requirements imposed on spectrum licenses it owns, automatically unlocks phones after 60 days for prepaid and postpaid users. Among the three major carriers, Verizon is the most amenable to the FCC’s new rules.
An October 18 filing supporting a strict unlocking rule was submitted by numerous consumer advocacy groups including Public Knowledge, New America’s Open Technology Institute, Consumer Reports, the National Consumers League, the National Consumer Law Center, and the National Digital Inclusion Alliance.
“Wireless users are subject to unnecessary restrictions in the form of locked devices, which tie them to their service providers even when better options may be available. Handset locking practices limit consumer freedom and lessen competition by creating an artificial technological barrier to switching providers,” the groups said.
The groups cited the Verizon rules as a model and urged the FCC to require “that device unlocking is truly automatic—that is, unlocked after the requisite time period without any additional actions of the consumer.” Carriers should not be allowed to lock phones for longer than 60 days even when a phone is on a financing plan with outstanding payments, the groups’ letter said:
Providers should be required to transition out of selling devices without this [automatic unlocking] capability and the industry-wide rule should be the same as the one protecting Verizon customers today: after the expiration of the initial period, the handset must automatically unlock regardless of whether: (1) the customer asks for the handset to be unlocked or (2) the handset is fully paid off. Removing this barrier to switching will make the standard simple for consumers and encourage providers to compete more vigorously on mobile service price, quality, and innovation.
In an October 2 filing, Verizon said it supports “a uniform approach to handset unlocking that allows all wireless providers to lock wireless handsets for a reasonable period of time to limit fraud and to enable device subsidies, followed by automatic unlocking absent evidence of fraud.”
Verizon said 60 days should be the minimum for postpaid devices so that carriers have time to detect fraud and theft, and that “a longer, 180-day locking period for prepaid is necessary to enable wireless providers to continue offering subsidies that make phones affordable for prepaid customers.” Regardless of what time frame the FCC chooses, Verizon said “a uniform unlocking policy that applies to all providers… will benefit both consumers and competition.”
While the FCC is likely to impose an unlocking rule, one question is whether it will apply when a carrier has provided a discounted phone. The FCC’s NPRM asked the public for “comment on the impact of a 60-day unlocking requirement in connection with service providers’ incentives to offer discounted handsets for postpaid and prepaid service plans.”
The FCC acknowledged Verizon’s argument “that providers may rely on handset locking to sustain their ability to offer handset subsidies and that such subsidies may be particularly important in prepaid environments.” But the FCC noted that public interest groups “argue that locked handsets tied to prepaid plans can disadvantage low-income customers most of all since they may not have the resources to switch service providers or purchase new handsets.”
The public interest groups also note that unlocked handsets “facilitate a robust secondary market for used devices, providing consumers with more affordable options,” the NPRM said.
The FCC says it can impose phone-unlocking rules using its legal authority under Title III of the Communications Act “to protect the public interest through spectrum licensing and regulations to require mobile wireless service providers to provide handset unlocking.” The FCC said it previously relied on the same Title III authority when it imposed the unlocking rules on 700 MHz C Block spectrum licenses purchased by Verizon.
T-Mobile told the FCC in a filing last month that “none of the litany of Title III provisions cited in the NPRM support the expansive authority asserted here to regulate consumer handsets (rather than telecommunications services).” T-Mobile also said that “the Commission’s legal vulnerabilities on this score are only magnified in light of recent Supreme Court precedent.”
The Supreme Court recently overturned the 40-year-old Chevron precedent that gave agencies like the FCC judicial deference when interpreting ambiguous laws. The end of Chevron makes it harder for agencies to issue regulations without explicit authorization from Congress. This is a potential problem for the FCC in its fight to revive net neutrality rules, which are currently blocked by a court order pending the outcome of litigation.
Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
...
Read the original on arstechnica.com »
One afternoon in January 2011, Hussein Mourtada leapt onto his desk and started dancing. He wasn’t alone: Some of the graduate students who shared his Paris office were there, too. But he didn’t care. The mathematician realized that he could finally confirm a sneaking suspicion he’d first had while writing his doctoral dissertation, which he’d finished a few months earlier. He’d been studying special points, called singularities, where curves cross themselves or come to sharp turns. Now he had unexpectedly found what he’d been looking for, a way to prove that these singularities had a surprisingly deep underlying structure. Hidden within that structure were mysterious mathematical statements first written down a century earlier by a young Indian mathematician named Srinivasa Ramanujan. They had come to him in a dream.
Ramanujan brings life to the myth of the self-taught genius. He grew up poor and uneducated and did much of his research while isolated in southern India, barely able to afford food. In 1912, when he was 24, he began to send a series of letters to prominent mathematicians. These were mostly ignored, but one recipient, the English mathematician G. H. Hardy, corresponded with Ramanujan for a year and eventually persuaded him to come to England, smoothing the way with the colonial bureaucracies.
It became apparent to Hardy and his colleagues that Ramanujan could sense mathematical truths — could access entire worlds — that others simply could not. (Hardy, a mathematical giant in his own right, is said to have quipped that his greatest contribution to mathematics was the discovery of Ramanujan.) Before Ramanujan died in 1920 at the age of 32, he came up with thousands of elegant and surprising results, often without proof. He was fond of saying that his equations had been bestowed on him by the gods.
More than 100 years later, mathematicians are still trying to catch up to Ramanujan’s divine genius, as his visions appear again and again in disparate corners of the world of mathematics.
Ramanujan is perhaps most famous for coming up with partition identities, equations about the different ways you can break a whole number up into smaller parts (such as 7 = 5 + 1 + 1). In the 1980s, mathematicians began to find deep and surprising connections between these equations and other areas of mathematics: in statistical mechanics and the study of phase transitions, in knot theory and string theory, in number theory and representation theory and the study of symmetries.
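Partition counting itself is easy to experiment with numerically. Here is a minimal Python sketch (the function name is my own, for illustration) that counts the partitions of a number by dynamic programming:

```python
# Count the partitions of n: the number of ways to write n as a sum of
# positive integers, ignoring order (e.g. 7 = 5 + 1 + 1 is one partition of 7).
def partition_count(n: int) -> int:
    # ways[m] = number of partitions of m using only the parts considered so far
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

print(partition_count(7))    # 15
print(partition_count(100))  # 190569292
```

The value p(100) = 190,569,292 is the kind of quantity Hardy and Ramanujan famously estimated with their asymptotic formula for the partition function.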
Most recently, they’ve appeared in Mourtada’s work on curves and surfaces that are defined by algebraic equations, an area of study called algebraic geometry. Mourtada and his collaborators have spent more than a decade trying to better understand that link, and to exploit it to uncover rafts of brand-new identities that resemble those Ramanujan wrote down.
“It turned out that these kinds of results have basically occurred in almost every branch of mathematics. That’s an amazing thing,” said Ole Warnaar of the University of Queensland in Australia. “It’s not just a happy coincidence. I don’t want to sound religious, but the mathematical god is trying to tell us something.”
Ramanujan’s mathematical prowess was obvious to those who knew him. Without formal training, he excelled; by the time he was in high school he had devoured advanced, though often outdated, textbooks, and was doing independent research on different kinds of numerical properties and patterns.
In 1904, he was granted a full scholarship to the Government Arts College in Kumbakonam, the small city where he had grown up, in what is now the Indian state of Tamil Nadu. But he ignored all subjects besides math and lost his scholarship within a year. He later enrolled in another university, this time in Madras (now Chennai), the provincial capital some 250 kilometers north. Again he flunked out.
He continued his research on his own for years, often while in poor health. During that time, he tutored students in math to support himself. Eventually he secured a job as a clerk at the Madras Port Trust in 1912. He pursued mathematics on the side and published some of his results in Indian journals.
Hoping to get some of his work into more prestigious and widely read publications, Ramanujan wrote letters to several British mathematicians, enclosing pages of findings for their review. “I have not trodden through the conventional regular course which is followed in a university course,” he wrote, “but I am striking out a new path for myself.” Among the recipients was Hardy, an expert in number theory and analysis at the University of Cambridge.
Hardy was shocked at what he saw. Ramanujan had identified and then solved a number of continued fractions — expressions that can be written as infinite nests of fractions within fractions.
They “defeated me completely; I had never seen anything in the least like them before,” Hardy later wrote. “They must be true because, if they were not true, no one would have had the imagination to invent them.” The formulas, unproved, were so striking that they inspired Hardy to offer Ramanujan a fellowship at Cambridge. In 1914, Ramanujan arrived in England, and for the next five years he studied and collaborated with Hardy.
One of Ramanujan’s first tasks was to prove a general statement about his continued fractions. To do so, he needed to prove two other statements. But he couldn’t. Neither could Hardy, nor could any of the colleagues he reached out to.
It turned out that they didn’t need to. The statements had been proved 20 years earlier by a little-known English mathematician named L. J. Rogers. Rogers wrote poorly, and at the time the proofs were published no one paid any attention. (Rogers was content to do his research in relative obscurity, play piano, garden and apply his spare time to a variety of other pursuits.) Ramanujan uncovered this work in 1917, and the pair of statements later became known as the Rogers-Ramanujan identities.
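The first of the two Rogers-Ramanujan identities states, as formal power series in q, that summing qn²/((1−q)(1−q²)⋯(1−qn)) over all n gives the product of 1/((1−q5n+1)(1−q5n+4)) over all n ≥ 0. An identity like this can be checked by machine to any finite order; the following Python sketch (the truncation order N = 50 is an arbitrary choice of mine) compares the coefficients of both sides:

```python
# Check the first Rogers-Ramanujan identity as truncated power series in q:
#   sum_{n>=0} q^(n^2) / ((1-q)(1-q^2)...(1-q^n))
#     = prod_{n>=0} 1 / ((1-q^(5n+1))(1-q^(5n+4)))
N = 50  # compare coefficients up to (but not including) q^N

def mul(a, b):
    # multiply two truncated power series given as coefficient lists of length N
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv_one_minus_qk(k):
    # series for 1/(1 - q^k) = 1 + q^k + q^(2k) + ...
    s = [0] * N
    for i in range(0, N, k):
        s[i] = 1
    return s

# Left side: sum over n of q^(n^2) / ((1-q)(1-q^2)...(1-q^n))
lhs = [0] * N
term = [1] + [0] * (N - 1)  # the n = 0 term of the denominator product is 1
n = 0
while n * n < N:
    if n > 0:
        term = mul(term, inv_one_minus_qk(n))  # divide by (1 - q^n)
    for i in range(N - n * n):
        lhs[n * n + i] += term[i]              # add the term shifted by q^(n^2)
    n += 1

# Right side: product of 1/(1 - q^k) over exponents k congruent to 1 or 4 mod 5
rhs = [1] + [0] * (N - 1)
for k in range(1, N):
    if k % 5 in (1, 4):
        rhs = mul(rhs, inv_one_minus_qk(k))

print(lhs == rhs)  # True: the coefficients agree up to order N
```

Combinatorially, both sides count the same thing: the coefficient of qn on the left counts partitions of n into parts that differ by at least 2, and on the right, partitions of n into parts congruent to 1 or 4 mod 5.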
Amid Ramanujan’s prodigious output, these statements stand out. They have carried through the decades and across nearly all of mathematics. They are the seeds that mathematicians continue to sow, growing brilliant new gardens seemingly wherever they fall.
Ramanujan fell ill and returned to India in 1919, where he died the next year. It would fall to others to explore the world his identities had revealed.
Hussein Mourtada grew up in the 1980s in Lebanon, in a small city called Baalbek. As a teenager, he didn’t like studying and preferred to play: soccer, billiards, basketball. Math, too. “It looked like a game,” he said. “And I liked playing.”
As an undergraduate at the Lebanese University in Beirut, he studied both law and mathematics, with an eye to a legal career. But he soon found that while he enjoyed the philosophical aspects of law, he did not enjoy it in practice. He turned his attention to math, where he was particularly drawn to the community. As a child, his teachers and classmates were what excited him about going to school, even though he often fell asleep during class. As a budding mathematician, “I had the impression that these are beautiful people,” he said. “They are honest. You need to be honest with yourself to be a mathematician. Otherwise, it doesn’t work.”
He moved to France for his doctorate and started to focus on algebraic geometry — the study of algebraic varieties, or shapes cut out by polynomial equations. These are equations that can be written as sums of variables raised to whole-number powers. A line, for instance, is cut out by the equation x + y = 0, a circle by x² + y² = 1, a figure eight by x⁴ = x² − y². While the line and circle are completely smooth, the figure eight has a point where it intersects itself — a singularity.
It’s easy to spot singularities when you’re dealing with shapes that you can draw on a sheet of paper. But higher-dimensional algebraic varieties are far more complicated and impossible to visualize. Algebraic geometers are in the business of understanding their singularities, too.
They’ve developed all sorts of tools to do this. One dates back to the mathematician John Nash, who in the 1960s started studying related objects called arc spaces. Nash would take a point, or singularity, and define infinitely many short trajectories — little arcs — that passed through it. By looking at all these short trajectories together, he could test how smooth his variety was at that point. “If you want to see if it’s smooth, you want to pet it,” said Gleb Pogudin of the École Polytechnique in France.
In practical terms, an arc space provides an infinite collection of polynomial equations. “This is really the thing that Mourtada is expert in: understanding the meaning of those equations,” said Bernard Teissier, a colleague of Mourtada’s at the Institute of Mathematics of Jussieu in Paris. “Because these equations can be very complicated. But they have a certain music to them. There is a lot of structure which governs the nature of these equations, and he’s just the person, I think, who best listens to this music and understands what it means.”
...
Read the original on www.quantamagazine.org »
TAMPA, Fla. — The Intelsat 33e satellite has broken up in geostationary orbit (GEO) and lost power, ceasing communications services for customers across Europe, Africa and parts of Asia Pacific.
Intelsat said in an Oct. 19 news release it is working with satellite maker Boeing to address an anomaly that emerged earlier that day, but said it “believe[s] it is unlikely that the satellite will be recoverable.” An Intelsat spokesperson said the satellite was not insured at the time of the issue.
The U.S. Space Force reported Oct. 19 it is tracking around 20 pieces of debris associated with the spacecraft.
“U.S. Space Forces-Space (S4S) has confirmed the breakup of Intelsat 33E (#41748, 2016-053B) in GEO on October 19, 2024, at approximately 0430 UTC,” states an alert posted on SpaceTrack, the U.S. Department of Defense’s space-tracking platform.
“Currently tracking around 20 associated pieces — analysis ongoing. S4S has observed no immediate threats and is continuing to conduct routine conjunction assessments to support the safety and sustainability of the space domain.”
Douglas Hendrix, CEO of ExoAnalytic Solutions, said the U.S.-based space-tracking company identified 57 pieces of debris Oct. 21 associated with the breakup.
“We are warning operators of any spacecraft that we think are at risk of collision,” Hendrix said via email.
A snapshot of Intelsat 33e’s breakup taken Oct. 19 by U.K.-based Spaceflux. 44071 and 58698 are the WGS 10 (USA 291) and Ovzon-3 satellites, respectively, which Spaceflux said are unlikely to be in danger of being hit by the debris. “The problem is that there is a lot of uncertainty regarding the orbits of these fragments at the moment,” Spaceflux spokesperson Viktoria Urban said Oct. 21. “They can be potentially dangerous for other satellites but we do not know that yet.” Credit: Spaceflux.
Intelsat said it is working to move customers to other satellites in Intelsat’s fleet or spacecraft operated by third parties.
Intelsat 33e launched in August 2016 and entered service in January 2017 at 60 degrees East, about three months later than planned following an issue with its primary thruster.
A second propulsion issue that emerged during in-orbit tests helped knock off around 3.5 years from the satellite’s initially estimated 15-year lifespan.
Intelsat 33e is the second in Intelsat’s EpicNG (next-generation) series of high-throughput satellites.
The first, Intelsat-29e, was declared a total loss in 2019 after just three years in orbit. That failure was pinned on either a meteoroid impact or a wiring flaw that led to an electrostatic discharge following heightened solar weather activity.
This article was updated Oct. 21 with more details about the incident.
...
Read the original on spacenews.com »
The IOCCC Flight Simulator was the winning entry in the 1998 International Obfuscated C Code Contest. It is a flight simulator in under 2 kilobytes of code, complete with relatively accurate 6-degree-of-freedom dynamics, loadable wireframe scenery, and a small instrument panel.
IOCCC Flight Simulator runs on Unix-like systems with X Windows. As per contest rules, it is in the public domain.
You have just stepped out of the real world and into the virtual. You are now sitting in the cockpit of a Piper Cherokee airplane, heading north, flying 1000 feet above ground level.
Use the keyboard to fly the airplane. The arrow keys represent the control stick. Press the Up Arrow key to push the stick forward. Press the left arrow key to move the stick left, and so on. Press Enter to re-center the stick. Use Page Up and Page Down to increase and decrease the throttle, respectively. (The rudder is automatically coordinated with the turn rate, so rudder pedals are not represented.)
On your display, you will see three instruments in the bottom left corner. The first is the airspeed indicator; it tells you how fast you’re going in knots. The second is the heading indicator, or compass. 0 is north, 90 is east, 180 is south, 270 is west. The third instrument is the altimeter, which measures your height above ground level in feet.
cat horizon.sc pittsburgh.sc | ./banks
banks is the name of the program (a quirk of IOCCC rules, and no pun intended). horizon.sc and pittsburgh.sc are scenery files.
* The airplane is modeled as a six degree-of-freedom rigid body, accurately reflecting its dynamics (for normal flight conditions, at least).
* Fly through a virtual 3-D world, while sitting at your X console.
* Head-up display contains three instruments: a true airspeed indicator, a heading indicator (compass), and an altimeter.
* Flight controls may be mapped to any keys at compile time by redefining the macros in the build file. Nice if your keyboard doesn’t have arrow keys.
* Time step size can be set at compile time. This is useful to reduce flicker on network X connections. (But be careful: step sizes longer than about 0.03 seconds tend to have numerical stability problems.)
* Airplane never runs out of fuel!
Each of the files is a scenery file. The simulator program reads in the scenery from standard input on startup. You may input more than one scenery file, as long as there are fewer than 1000 total lines of input.
Here is a brief description of the scenery files:
* horizon.sc — A horizon, nothing more. You will probably always want to input this piece of scenery.
* mountains.sc — An alternate horizon; a little more mountainous.
* pittsburgh.sc — Scenery of downtown Pittsburgh. The downtown area is initially located to your right.
* bb.sc — Simple obstacle course. Try to fly over the buildings and under the bridges.
* pyramids.sc — Fly over the tombs of the ancient Pharaohs in this (fictitious) Egyptian landscape.
A few examples of how to input scenery:
cat horizon.sc pittsburgh.sc | ./banks
cat mountains.sc bb.sc | ./banks
cat mountains.sc river.sc pyramids.sc | ./banks
You can simulate flying through a cloud bank as well:
./banks < /dev/null
You will usually want at least a horizon, though.
The format of scenery files is simple, by the way. They’re just a list of 3-D coordinates, and the simulator simply draws line segments from point to point as listed in the scenery file. 0 0 0 is used to end a series of consecutive line segments. Note that in the coordinate system used, the third coordinate represents altitude in a negative sense: negative numbers are positive altitudes.
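As a sketch of the format described above, here is a small hypothetical scenery file of my own devising (the file name and coordinates are not part of the distribution):

```shell
# Write a small hypothetical scenery file: a square hanging in the air.
# The third coordinate is negated altitude, so -100 means 100 feet up.
cat > square.sc <<'EOF'
100 100 -100
100 -100 -100
-100 -100 -100
-100 100 -100
100 100 -100
0 0 0
EOF
```

You would then fly past it with something like `cat horizon.sc square.sc | ./banks`.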
I’m sure you’ll be making your own scenery files very soon!!!
Several options must be passed to the compiler to make the build work. The provided build file has the appropriate options set to default values. Use this section if you want to compile with different options.
To map a key to a control, you must pass an option to the compiler in the format “-Dcontrol=key”. The possible controls you can map are described in the table below:
Control Description Default Key
IT Open throttle XK_Page_Up
DT Close throttle XK_Page_Down
FD Move stick forward XK_Up
BK Move stick back XK_Down
LT Move stick left XK_Left
RT Move stick right XK_Right
CS Center stick XK_Enter
Values for the possible keys can be found in the X Windows header file X11/keysym.h. This file is most likely a cross-reference to another header, X11/keysymdef.h.
You must map all seven controls to keys at compile time, or the compilation will fail.
For example, to map Center Stick to the space-bar, the compile option would be “-DCS=XK_space”.
To set the time step size, you must pass the following option to the compiler: “-Ddt=duration”, where dt is literal, and where duration is the time in seconds you want the time step to be.
There are two things to keep in mind when selecting a time step. Time steps that are too large (more than about 0.03) will cause numerical stability problems and should be avoided. Setting the time step to be smaller than your clock resolution will slow down the simulator, because the system pauses for more time than the simulator expects.
The best advice is to set time step size to your system timer resolution. Try a longer period if you’re getting too much flicker.
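The instability warned about here is the usual explicit-integration blow-up. As a rough illustration of the phenomenon (a toy sketch, not the simulator’s actual integrator), here is an explicit-Euler run of an undamped oscillator at two step sizes:

```shell
# Toy explicit-Euler integration of x'' = -x for 100 steps.
# Energy (x^2 + v^2) starts at 1; large time steps make it explode.
for dt in 0.02 0.5; do
  awk -v dt="$dt" 'BEGIN {
    x = 1; v = 0
    for (i = 0; i < 100; i++) { a = -x; x += v * dt; v += a * dt }
    if (x * x + v * v > 10) print dt ": divergent"
    else                    print dt ": stable"
  }'
done
```

At dt=0.02 the energy drifts only slightly over 100 steps, while at dt=0.5 it grows by a factor of roughly 1.25 per step and diverges.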
Here we are flying towards Downtown Pittsburgh. We can see the Point, several buildings including the USX tower, and several bridges including the Smithfield Street bridge. We see three instruments near the bottom.
IOCCC stands for “International Obfuscated C Code Contest.” It is a quasi-annual contest to see who can write the most unreadable, unintelligible, unmanageable, but legal C program.
In the 1998 IOCCC, my flight simulator won the “Best of Show” prize. Here is the source code to the program:
#include
Note that this program will not compile out-of-the-box. It requires certain compile-time parameters. The following script builds it on my Linux system:
#! /bin/sh
cc banks.c -o banks -DIT=XK_Page_Up -DDT=XK_Page_Down \
-DUP=XK_Up -DDN=XK_Down -DLT=XK_Left -DRT=XK_Right \
-DCS=XK_Return -Ddt=0.02 -lm -lX11 -L/usr/X11R6/lib
If you want to try this program, I suggest you download the 1998 IOCCC Winners Distribution.
One of the rules of the contest was that the program could not be longer than 1536 bytes (excluding spaces, tabs, newlines, semicolons, and braces). Needless to say, cramming a flight simulator into such a small file was fairly difficult. I will say that if it weren’t for the wonderful property of orthogonal matrices, this flight simulator would not have been possible.
* The IOCCC Simulator appeared in a book, Calculated Bets by Steve Skiena.
* Wikipedia mentions the IOCCC Flight Simulator in its IOCCC entry.
* The official International Obfuscated C Code Contest website
* IOCCC Flight Simulator’s Facebook page. (I made this page because random people around the world would send me friend requests and it was creeping me out.)
I do not distribute this program myself. If you want it, you can download the 1998 IOCCC Winners Distribution. The distribution comes with a dozen or so other winning entries, all quite interesting programs.
Note that this is a source distribution, and you will have to compile it to run it. I’ve tested it on some versions of Linux, AIX, Irix, and Sun.
IOCCC Flight Simulator source code is in the public domain; there are no copyright restrictions on it whatsoever. However, the winners distribution has been copyrighted by the IOCCC judges. See the hint files in the distribution for details.
...
Read the original on blog.aerojockey.com »
Remember when being a “Senior Software Engineer” actually meant something? I do, and I can’t help but feel nostalgic for that clarity. In recent years, our industry has witnessed rampant title inflation, turning what used to be a clear-cut junior-mid-senior progression into a confusing parade of inflated roles.
The “senior” title, once a badge of substantial experience and expertise, has been particularly devalued. Today, developers are being crowned “senior” faster than ever, often with just three to four years under their belts. It’s as if the path to seniority, once a marathon of skill-building and diverse experiences, has turned into a sprinter’s dash.
This explosion of grandiose titles isn’t just confusing—it’s eroding the meaning of career milestones in tech. Each new title tries to outdo the last in impressiveness, while paradoxically meaning less and less. For everyone involved—from job seekers to hiring managers—this inflation has muddied the waters of professional progression and recognition.
Being a senior engineer meant far more than just logging years on the job. It was a title earned through a diverse set of experiences and challenges that shaped not just their technical skills, but their entire approach to software development.
A true senior engineer is a battle-tested problem solver. They’ve faced and conquered complex technical challenges across multiple projects, dealing with more than just tricky bugs. These are the architects who’ve untangled system-wide issues that require deep understanding and creative solutions. They’re the ones who can navigate and refactor sprawling legacy codebases with confidence, understanding the delicate balance between maintaining existing systems and building new ones.
Senior engineers have been through the crucible of major production outages. They’ve felt the heat of a system melting down in real-time and learned to stay calm under pressure. These experiences have taught them to diagnose issues rapidly and lead a team through a crisis, making critical decisions when every second counts.
But technical skills alone don’t make a senior engineer. They’re also architectural visionaries who can see beyond immediate tasks to design scalable, maintainable systems. Their decisions positively impact projects years down the line, showcasing a level of foresight that only comes with extensive experience. They’ve developed the soft skills to be effective mentors and leaders, guiding junior developers not just in coding, but in navigating the complex landscape of software development.
Perhaps most importantly, senior engineers remain humble and curious despite their experience. They’re continuous learners, adapting to new technologies and methodologies, always expanding their toolkit. They’ve developed a strong sense of professional ethics, understanding the broader implications of their work and advocating for responsible development practices.
This depth of experience isn’t typically gained in just a few years. It’s forged through diverse projects, multiple tech stacks, and yes, a fair share of failures and lessons learned along the way.
The fierce competition for talent has led companies, especially startups, to use titles as a retention tactic. Unable to always match the salaries offered by tech giants, these companies resort to inflating titles as a form of non-monetary compensation. While this might seem like a clever short-term solution, it’s creating long-term problems for the industry by diluting the meaning of these titles.
The rise of professional networking platforms like LinkedIn has exacerbated this issue. These platforms have turned titles into personal branding tools, creating immense pressure for individuals to sport impressive-sounding roles. This “LinkedIn Effect” has everyone, from fresh graduates to seasoned professionals, yearning for titles that look good on their profiles, often prioritizing appearance over substance.
HR departments, grappling with the increasing complexity of tech roles, have contributed to this problem as well. In an attempt to accurately categorize the myriad of specialized positions in our rapidly evolving field, they’ve created a proliferation of niche titles. While these titles might be descriptive, they’ve made it increasingly difficult to compare roles across companies, further muddying the waters of career progression.
Lastly, many companies have begun using title promotions as a retention strategy. The intent is to recognize and retain valuable employees, but this approach often backfires. When titles are handed out like participation trophies, they cease to align with actual growth in responsibilities or skills. This misalignment not only devalues the titles themselves but also sets unrealistic expectations for the newly promoted employees.
In essence, what we’re seeing is a perfect storm of market pressures, personal branding needs, organizational challenges, and short-sighted retention strategies. Together, these factors have inflated titles to the point where they risk losing their meaning entirely.
Title inflation isn’t just about words on a business card or a LinkedIn profile. It’s a problem that strikes at the heart of our industry’s integrity and functionality. When we inflate titles, we’re essentially lying to ourselves and each other about our capabilities and experience.
This deception has real consequences. It creates a mismatch between expectations and reality, leading to situations where people are placed in roles they’re not prepared for. Imagine a “senior” engineer with three years of experience trying to architect a complex system or mentor junior developers. The potential for failure is high, and the stress on that individual is immense.
For those in leadership positions, it’s paramount to resist the temptation of using inflated titles as a quick fix for retention or recruitment challenges. Instead, focus on creating meaningful career progression frameworks that tie advancements to concrete skills and responsibilities. Consider implementing a system similar to those used by larger tech companies, where levels (like L3, L4, L5) provide a more nuanced view of seniority without resorting to title inflation.
Companies can take a stand by standardizing their title structures and being transparent about what each level means. This could involve creating detailed job descriptions that clearly outline the expectations and responsibilities for each role. By doing so, you not only provide clarity for your employees but also contribute to a more standardized industry-wide understanding of titles.
HR departments have a critical role to play. They can work on developing more sophisticated ways to categorize and compare roles across the industry. This might involve collaborating with tech leads to create standardized skill matrices that can be used to evaluate candidates and employees more objectively.
Companies that resist title inflation gain a significant competitive edge. By maintaining meaningful titles, they attract and retain top talent who value authentic growth over inflated roles. This leads to more accurate hiring, improved team dynamics, and enhanced productivity. Realistic titles also foster trust, both internally and with clients, positioning the company as a beacon of integrity in the industry. Ultimately, companies with well-defined, honest title structures build stronger, more capable teams and a reputation for excellence that sets them apart in the market.
This article was originally published on
https://www.trevorlasn.com/blog/software-engineer-titles-have-almost-lost-all-their-meaning. It was written by a human and polished using grammar tools for clarity.
...
Read the original on www.trevorlasn.com »
Language is said to make us human. What if birds talk, too?

“Social birds . . . are constantly chatting to each other,” Mike Webster, an animal-communication expert at Cornell, says. “What in the hell are they saying?”

On a drizzly day in Grünau im Almtal, Austria, a gaggle of greylag geese shared a peaceful moment on a grassy field near a stream. One goose, named Edes, was preening quietly; others were resting with their beaks pointed tailward, nestled into their feathers. Then a camouflaged speaker that scientists had placed nearby started to play. First came a recorded honk from an unpartnered male goose named Joshua. Edes went on with his preening. Next came a honk that was lower in pitch than the first, with a slight bray. Edes looked up. As the other geese remained tucked in their warm positions, incurious, Edes scanned the field. He had just heard a recorded “distance call” from his life partner, a female goose whom scientists had named Bon Jovi. Edes and his fellow-geese live near the Konrad Lorenz Research Center for Behavior and Cognition, which is named for a Nobel laureate whose imprinting experiments, in the nineteen-thirties, convinced goslings that he was their mother. (They took to following him in a downy line.) Greylag geese in the area have been studied continually ever since. The director of the center, a biologist and bird ecologist named Sonia Kleindorfer, showed me footage of Edes to demonstrate the subtlety of goose communication.

Geese maintain elaborate social structures, travel in family groups, and can navigate from Sweden to Spain. In a fight, an unpartnered greylag goose has a higher heart rate than a partnered one, and the heart rate of a recently widowed goose can remain depressed for about a year. These birds have things to discuss. Still, geese are not the Ciceros of the bird world.
A lyrebird sings long, elaborate songs; ravens really can say “nevermore.” Geese are known for nasal honks. How much nuance can there be in a honk?

Greylag geese, it turns out, have at least ten different kinds of calls. “We are completely underappreciating the way they communicate,” Kleindorfer told me. “They give a departure call when they leave, and a contact call after they arrive. They know if their allies are there, if the bold geese are there. There is so much information that geese are getting from calls.”

Bird vocalizations are usually divided into songs and calls, but these are wobbly categories. What is designated a song in one species may be shorter in duration than what, in another species, is termed a call. Onomatopoeic groupings such as tseets, chirrups, rreeyoos, seeew-soooos, and dahs are also indeterminate: people transcribe the same sounds in different ways, and no bird version of the Académie Française exists to adjudicate. The vocalizations of birds are fundamentally incommensurate with human ones. We have a larynx and two vocal cords; they have what’s called a syrinx, which is a bit like having two larynxes that you can use at the same time.

Kleindorfer, the daughter of a mathematician and an actress, looks like a cross between a hiker and the film star Sophia Loren. From February to April, she researches Darwin’s finches in the Galápagos; from September to December, songbirds in Australia; and, for the rest of the year, the geese outside her office door. Early in her education, as an undergraduate at the University of Pennsylvania, she was taught that “male songbirds sing, females don’t, and if females do sing it’s an error.” The attitude at the time, she told me, was that “females are drab, inconspicuous, and quiet.” A few years after earning a Ph.D. in zoology at the University of Vienna, Kleindorfer took a job as a research biologist at Flinders University, in Australia, where songbird species originally evolved.
“Imagine my surprise,” she told me. “I heard all these females singing songs as complex as the male songs.” Much of her ensuing career has focussed on bird vocalizations that were either underappreciated or unknown.

Kleindorfer decided to study bird eggs and early development, which were then neglected research topics. “Maybe this was because only females have eggs and I was a woman in science,” she told me. “I don’t have a better reason.” Kleindorfer had noticed that mustached-warbler chicks seemed to respond to the alarm calls of adult warblers, even though the thinking at the time was that such calls were directed at other adults, or possibly at predators. “If I put a snake nearby, the parental alarm call made the chicks in the nest jump,” she said. “If I put a marsh harrier”—a hawklike predatory bird—“nearby, the response to the parental alarm call was that the chicks would duck.” The chicks were responding appropriately to different alarm calls—a satisfying finding.

Kleindorfer also studied the superb fairy wren, a songbird that weighs about as much as a walnut and sports a flirty, upright tail. Despite their fanciful names, fairy wrens are commonplace in Australia. They are socially monogamous but sexually promiscuous—they are essentially in open marriages—and they bring up their young collectively. Arguably, they have even more to chat about than geese do. Fairy-wren nests are about the size of cupped human hands, built to contain pale, speckled eggs that are smaller than thumbnails. Kleindorfer and her team wired up nests with cameras and microphones and soon discovered something that they hadn’t known to look for. “The mothers in nests were producing an incubation call—a call to the eggs,” she told me. It was like a lullaby. Why would a mother bird make any sound that could attract predators to the nest? “Songbird embryos don’t have well-developed ears, so this was completely unexpected,” she said.
“That started a twenty-year project—why is she calling to the eggs?”

The team compared incubation calls to the begging calls of young chicks. “It was very odd,” Kleindorfer recalled. “Each nest had its own distinct begging call.” What’s more, each begging call matched an element from the mother’s incubation call. This suggested, startlingly, that birds could learn a literal mother tongue while still in ovo. (Humans do this, too; French and German babies have distinct cries.) Even “foster” chicks, who as eggs were physically moved from one nest to another, learned begging calls from their foster mothers, rather than from their genetic mothers. This was big news in the ornithology world. “The paradigm of how songbirds learn—after hatching, from their father’s song—was overthrown,” she said. The same process was soon documented in more songbird species.

Language is often cited as the quality that distinguishes us as humans. When I asked Robert Berwick, an M.I.T. computational linguist, about birds, he argued that “they’re not trying to say anything in the sense of James Joyce trying to say something.” Still, he and Kleindorfer both pointed out that humans and songbirds share a trait that many animals lack: we are “vocal learners,” meaning that we can learn to make new sounds throughout our lives. (Bats, whales, dolphins, and elephants can, too.) “To me, the most amazing thing is that every generation of vocal learners has its own sound,” Kleindorfer said. “So, just like our English is different from Shakespeare’s English, the songbirds, too, have very different songs from five hundred years ago. I am sure of it.” We humans have long tried, often mistakenly, to differentiate ourselves from nonhuman animals—by arguing that only we have souls, or use tools, or are capable of self-awareness. Perhaps we should see what the birds have to say.

Animals have prominent speaking roles in many of our oldest stories. Eve has a memorable conversation with a snake.
In Norse mythology, two ravens, Huginn and Muninn, serve as spies to the god Odin, whispering to him the news of the world. In many cultures, the “language of birds” refers to a divine or perfect language—the language of angels. In the scientific realm, however, the notion that nonhuman animals use language is often seen as foolish or naïve. Some birds may be excellent mimics, like parrots, but they can also mimic chainsaws or barking dogs; scholars don’t usually consider imitation a form of understanding. The prevailing dogma is that birds sing either to impress mates or to defend their territory. (I suspect that most of human communication could also be slotted into those categories.) In college, I was taught a stranger but similarly diminishing idea: that songbirds sing in the morning to burn fat, so that they are light enough to fly around during the day. Apparently, this idea is no longer taken seriously.

Even among species we view as being closer to ourselves, such as primates, scientists have tended to talk about “communication” instead of “language.” But it’s tricky to say where the line is, or what we mean by “communication,” since even bacteria communicate, as Berwick pointed out to me. “I think it’s best to think of language not as speech but as a cognitive ability in the mind that sometimes leads to speech,” he said, giving the example of inward conversations we have with ourselves. The linguist Noam Chomsky has said, “It’s about as likely that an ape will prove to have a language ability as there is an island somewhere with a species of flightless birds waiting for humans to teach them to fly.” Chomsky’s 2017 book on the evolution of language, co-authored with Berwick, is titled “Why Only Us.”

Over the years, however, some researchers have looked closely at the contexts in which certain animal vocalizations are made. In the late nineteen-seventies, two primatologists, Dorothy Cheney and Robert Seyfarth, were studying vervet monkeys in Kenya.
Vervets have dark faces and pale fur; they are about the size of a small backpack and are hunted by pythons, eagles, and leopards. Cheney and Seyfarth documented something remarkable: one recorded vervet vocalization made vervets look up, presumably for eagles; another made them look down, presumably for pythons; and a third sent them running up into the trees, a good defense against approaching leopards. Young vervets sometimes use these calls faultily, perhaps sounding a leopard alarm for a warthog. But they get better as they grow up. They learn.

A newer generation of scientists has been trying to understand bird vocalizations. The alarm calls of Siberian jays can be said to have been partially translated. One of their screeches indicates a sitting hawk (which prompts other jays to come together in a group), another a flying hawk (jays hide, which makes them difficult to spot), and a third a hawk actively attacking (jays fly to the treetops to search for the attacker, and possibly flee). When cheery birds known as tufted titmice make a piercing sound, other titmice may respond by collectively harrying an invading predator. Some birds even lie. Fork-tailed drongos—common, innocuous-looking little dark birds that live in Africa—sometimes mimic the alarm calls of starlings or meerkats. Duped listeners flee the nonexistent threat, leaving behind a buffet for the drongo.

Upon seeing an owl, a chickadee might sound a loud chick-a-dee-dee-dee, adding dees in relation to how dangerous the predator is perceived to be. This call is also understood by nuthatches, which will join in to mob and harass the predator, forming a kind of defensive alliance. If you record an Australian bird warning of a nearby cuckoo—cuckoos leave their eggs in the nests of other species and often kill their step-siblings—birds in China will understand the call.

Kleindorfer considers Cheney and Seyfarth, the primatologists, to be important sources of inspiration.
After she moved to Australia, she and her colleagues built up a sound library of Australian songbirds. They also made recordings of quieter, familial bird sounds, such as the incubation calls. Each family unit, they discovered, had its own “familect,” a system of sounds that chicks learn from their parents. Curiously, chicks seemed to adopt sounds sung either by their mother or by their father—but they avoided the sounds used by both parents. If the mother sings ABCXYZ and the father sings ABCGHI, then the chicks tend to sing the sound units X, Y, Z, G, H, and I. It’s as if the young birds separate themselves from their parents by not speaking the shared sounds, but also stay close to their parents by learning what’s unique to Mom and unique to Dad. When female chicks grow up, they are attracted to mates whose repertoire is familiar (he’s one of us!), but not too familiar (he’s not my brother or dad).

Birds in general are turning out to have intellectual abilities far greater than most people had imagined. It’s not just that parrots and crows can do math as ably as young children, or that scrub jays cleverly cache and then uncache their food to fool other jays. Even inconspicuous and uncelebrated birds are capable of learning, and of sharing their learning with others. In the nineteen-twenties, tits from Swaythling, England, figured out how to open the caps of milk bottles, and by the late forties tits across Ireland, Wales, and England had learned the trick. If language is more a capacity than it is a speech act, it seems possible that birds possess it.

In 1889, Ludwig Paul Koch, an eight-year-old boy in Frankfurt, Germany, received a present from his father: an Edison phonograph and some wax cylinders for recording sounds. The oldest known audio of birdsong is young Koch’s recording of his pet white-rumped shama, a smallish songbird with a dark head, an orange body, and feathers that resemble a white bustle on its glossy black tail.
A shama sings like a small chamber orchestra, with slippery, percussive, and sweet sounds in phrases of varying lengths. Many similar recordings followed. In 1929, the Cornell Library of Natural Sounds—now the Macaulay Library—was started with a few hard-won recordings of a sparrow, a wren, and a grosbeak. (Cornell is to ornithology what the Juilliard School is to music.)

Koch, who was Jewish, became a professional musician but fled Germany in the nineteen-thirties. In England, he became a beloved presence on BBC radio. Sounding like a singsong, sanguine sibling of Werner Herzog, he guided Brits through the charms of birdsong. (A yellow icterine warbler, he told listeners, “frequently called me by my Christian name . . . Ludwig, Ludwig.”) Koch often expressed the hope that such recordings might be used for science. Many years later, they were.

In 2010, Grant Van Horn was an undergraduate at U.C. San Diego and working in a computer-vision and machine-learning lab led by the computer scientist Serge Belongie. The lab was looking for a good data set to train an image-recognition program. At the time, Van Horn told me, many of the top images on Flickr, the popular photography Web site, were of birds. Van Horn was no birder, but he wanted to see if he and his colleagues could teach a computer program to distinguish between closely related species, such as a house wren and a marsh wren. As it turned out, they could.

The lab’s work soon attracted the attention of ornithology researchers at Cornell. Van Horn recalls them telling him and his colleagues, “in the nicest possible way, ‘Look, guys—this data set is quaint and poorly constructed, and the species that you chose to study make no sense. Do you want to effectively redo this whole process, but do it in collaboration with us?’ ” When Van Horn visited Cornell, the scientists took him out birding every morning and evening, and he remembers wishing that he could take their expertise back to California with him.
The collaboration eventually helped the Cornell Lab of Ornithology develop Merlin Bird ID, an app that could reliably identify several hundred species of bird from photographs. It proved immensely popular—but the Cornell team had always had larger ambitions. “They kept asking, ‘How can we do this with sound?’ ” Van Horn recalled. That was what the scientists were most interested in. But he assumed that auditory recognition was outside his expertise—until, out of curiosity, he attended a workshop on audio-related machine learning.

“I kind of had an epiphany,” Van Horn said. Sound recognition often relied on spectrograms, visual representations of sounds similar to what you see in audio-editing software. Mike Webster, an animal-communication expert at Cornell who directs the Macaulay Library, and who worked on Merlin, told me, “When people figured out how to visualize sound—how to actually take measures of it—that led to just an explosion of research and understanding about how and why birds communicate with each other.” Much of the work in sound recognition, Van Horn realized, was actually visual: “I thought, Let me bring these computer-vision skills to bear.” An early test could differentiate between recordings of alder flycatchers and willow flycatchers.

In 2021, the number of bird recordings in the Macaulay Library, many of which were submitted by citizen scientists, reached a million. That same year, Cornell released Merlin Sound ID, which was originally trained on around two hundred and fifty hours of bird sounds, as well as on background noises (whistling wind, passing cars), all manually annotated by experts. At first, Sound ID could identify about three hundred different North American birds, with a bias toward those found around Cornell. Three years and a million additional recordings later, Sound ID can now very accurately identify about fourteen hundred species.
The lab hopes that number can grow to roughly eight thousand, out of around eleven thousand known bird species.

Amateurs now have a remarkable ability to recognize the birds that are cooing or chirping—which has generated more interest in birds and directed more citizen-science recordings to the Macaulay Library. But decoding the bird vocalizations is another matter. One problem is that certain sorts of recordings are more plentiful than others. “Most of our database is songs,” Webster said. “We can now understand songs at a level that we couldn’t before.” Alarm calls are also relatively easy to capture. But something like nest chatter, which is quieter and less predictable, is more elusive: “There are whole categories of bird communication that we’ve hardly even started to look at.” Webster isn’t expecting there to be straightforward translations of birds’ sounds into human language; animals live in perceptual worlds that are just too different from our own. Still, he sees machine learning as a powerful new tool. “There are a lot of people who have dreams of using A.I. to allow us to decipher what animals are saying,” he told me.

After three decades of research, Webster is preparing to retire. When I asked him what he hoped the next generation of scientists might learn, he thought for a moment. “Well, social birds. They are constantly chatting to each other,” he said. “Making little noises. Often very quietly. It’s like they’re having a whisper conversation. What in the hell are they saying to one another? I’d really like to know.”

Until my eleven-year-old daughter became interested in birds, I barely knew a starling from a sparrow. She once asked me, incredulously, “You’re saying you can’t tell a male sparrow from a female?” For a long time, we lived just east of the Lincoln Tunnel, where “birds” meant pigeons and seagulls, but within weeks of moving to Brooklyn we saw a red-tailed hawk on a lamppost.
My daughter began talking about dark-eyed juncos and tufted titmice and peregrine falcons; we started visiting bird sanctuaries, and I eventually outgrew my favoritism toward mammals. Like millions of others, we started to use Merlin Bird ID. Usually, we heard birds before we saw them. Some local sparrows nesting in a hollow pole on our block sounded like Laurel and Hardy bickering.

“Anthropomorphism” is a familiar term that describes a common error: the assumption that animals have human qualities. A less familiar term, “anthropectomy,” also describes a kind of error—that of baselessly assuming animals don’t share certain qualities with us. Which kind of error is a person more likely to make? Or are these not errors but, rather, starting points, with someone like Jane Goodall starting from the premise that 98.7 per cent of our DNA is shared with chimps, and someone else starting from the fact that we humans have sequenced our own DNA and no other species has even invented pliers? Since we’re still arguing about what language is, it’s difficult to say which assumption about animal language is more presumptuous.

Toshitaka Suzuki first started to wonder if birds speak their own language during his last year of college, at Toho University, in Tokyo. He was on a hike in the forests of Karuizawa when he witnessed what struck him as a strange drama among some common Japanese tits, birds that resemble chickadees. One tit called out dee-dee-dee near some scattered sunflower seeds; other tits flew over and began to eat. “Then one bird called out hee-hee-hee, and the birds all flew off into nearby bushes,” Suzuki recalled. He could see no reason for them to abandon their feast.

Seconds later, a sparrow hawk swooped in; all of the tits had escaped safely. “I thought, Maybe hee-hee-hee means ‘Hawk incoming, run away!’ ” Suzuki said.
He already took birds seriously and knew a lot about them; he had studied under Hiroshi Hasegawa, a scientist who was central in bringing the short-tailed albatross back from near-extinction. But Suzuki had thought of bird vocalizations as, for the most part, emotive, like music, or as a kind of beautiful nonsense. He has now devoted eighteen years to researching tits and their communication. “I couldn’t have imagined how long I would be studying tits, because I love other animals as well,” Suzuki told me. Like many researchers, he hopes that the more we understand birds the likelier we are to protect them.

In April, 2023, at the University of Tokyo, Suzuki founded what he calls the world’s first laboratory specifically devoted to animal linguistics. He argues that more work should be done to explore what cognitive abilities underlie human language—and then to investigate whether these abilities are present in animals. (In many ways, this approach mirrors the work of Berwick and Chomsky, but leads to different conclusions.) Some are skeptical of his push to compare animal communication to human language. “It’s just so far removed from the complexity of human language that it doesn’t make sense to use the same word,” Todd Freeberg, an animal-communication researcher at the University of Tennessee, told me. When a chickadee amplifies a call by adding dees, some researchers might say that it is engaging in referential signalling, adjusting its call to the seriousness of the threat. But Freeberg points out that extra dees could also be a result of heightened arousal, in general—less a conscious message than a physical response.

Like Kleindorfer, Suzuki took an interest in nests. Early on, he showed that chicks in nesting boxes respond to a call associated with crow sightings by crouching, and to a call associated with snakes by fleeing the nest altogether.
The arc of Suzuki’s research has, to some extent, followed a series of arguments about what qualities are required for communication to rise to the status of language. Humans are noted for their ability to form a mental image—a concept—of what they are communicating. Suzuki designed an experiment in which he played a variety of calls and moved a stick in a variety of ways; only when he played a snake-alarm call and moved a stick in a snakelike way did the birds tend to react as if a snake were present. To him, this suggested that they had some concept of snake-ness. (He said that the experiment was inspired by the way that humans perceive shapes in clouds.)

In a 2023 study, Suzuki showed that tits responded differently to a recorded ABCD call than they did to a remixed version of the call, such as DABC—a potential challenge to linguists who see sophisticated syntax as being unique to human language. (Studies of southern pied babblers and of chestnut-crowned babblers have also yielded interesting syntax results.) Symbolic gestures—also often considered unique to humans—were addressed in a particularly adorable Suzuki paper, in 2024. His team watched mated pairs of tits as they entered their nest boxes. The opening to each nest box was small, allowing only one bird at a time to pass. But sometimes one bird, usually the female, fluttered its wings in what seemed to be an “after you” gesture. The other bird would then enter the box first. The fluttering didn’t point at the nest box. In Suzuki’s view, this suggested that the flutter was not a simple indication but, instead, a symbolic gesture—another item crossed off the unique-to-human-language list.

Perhaps the nest-box study needs to be replicated; perhaps there are alternative interpretations of the results in the concept and syntax studies. Suzuki is open to such critiques.
But he is also skeptical of many prominent ideas in linguistics, such as Chomsky and Berwick’s argument that a slight evolutionary change in the brain unlocked a new linguistic capacity in humans: the unique and powerful ability to connect individual units in a hierarchical and expressive way. (Suzuki thinks that language more likely emerged bit by bit.) By Suzuki’s latest count, the tit’s vocal repertoire has more than two hundred distinct calls and phrases. He has many more experiments to conduct.

Recently, my daughter and I took an early-morning trip to Little Stony Point, in the Hudson Valley, and met up with two people who have no particular need for an app like Merlin Bird ID. Andrew C. Vallely does field-ornithology work for the American Museum of Natural History; he’s become friends, by way of bird-watching, with Jeffrey Yang, an editor at New Directions Publishing. Yang had been seeing a lot of migrating warblers, which had flown well over a thousand miles—did we want to come try our luck? At his suggestion, I warned my daughter that there was no telling whether we’d actually see any.

About five minutes down the trail, in a not particularly distinguished wood (we could still hear cars and an excavator across the river), we saw a kingfisher diving and an adolescent eagle on a bare tree. As we walked, stopped, walked, stopped, we repeatedly heard what sounded like the call of a red-tailed hawk. But it soon became clear, at least to Vallely and Yang, that it was a jay mimicking a hawk. “They do that sometimes,” Vallely told me.

“That’s one thought,” he said. “Vocal mimicry can be pretty mysterious.”

We heard the “tea kettle tea kettle” call of a Carolina wren; it sounded like a game of marbles to me. We saw a warbling vireo, a Cape May warbler, a blackpoll warbler, and a black-and-white warbler—birds so small that it was difficult to fathom how far some of them had travelled to be there.
We heard little chips that sounded like a window being cleaned; a crickety decrescendo that was not made by crickets; a sound like a trill running into a wall; a high-pitched three-fast-one-slow, like a child playing Beethoven’s Fifth Symphony. We encountered forty-four species by Yang’s able count, and at the very end we saw a Swainson’s thrush, who apparently wasn’t in the mood to show off. Bird-watching, I thought, is a misleading term. So much of the fleeting, present-tense pleasure of it is bird-listening.

The quiet of the pandemic brought natural sounds to the foreground for Maddie Cusimano, who was then a graduate researcher of auditory perception at M.I.T.’s Center for Brains, Minds, and Machines. “Like a lot of people, I had the sense of getting to know the birds around me for the first time,” she told me. Two doves were often visible from her window; she read that, in some dove pairs, one bird sings to the other in the mornings, and in the evenings the roles reverse. Cusimano was familiar enough with machine learning that, when she tried out bird-identification apps, she thought, I could help make this kind of thing.

Cusimano is now a senior scientist specializing in A.I. research at the Earth Species Project (E.S.P.), a nonprofit dedicated to “using artificial intelligence to decode non-human communication.” E.S.P.’s current efforts examine such species as zebra finches, crows, and beluga whales, but its early work has been preoccupied with preliminary challenges: the “cocktail-party problem” of picking up individual sounds in a noisy environment; how to correlate particular noises with the precise contexts in which they occur. “It’s like we want to write the Magna Carta, but first we need to make the quill,” Katie Zacarian, the organization’s co-founder and C.E.O., told me. Zacarian isn’t expecting a Google Translate for animal languages, but she does believe that we can understand animals better.
She remembers that, as a kid, people often brought her father, an entomologist, pictures and specimens and asked: What is this? Her mother was a researcher and an administrator of multilingual education programs. “There’s this underlying current, in their work, of decoding,” she told me.

When I talked to Cusimano, on Zoom, she pulled up a collection of sound files of crows. Her data set comes from Daniela Canestrari and Vittorio Baglione, researchers at the University of León, in Spain, who have been studying Spanish carrion crows for more than twenty-five years. Cusimano has spent countless hours sitting in San Francisco, listening to these birds, and can sometimes guess which one she’s hearing. “This is maybe what you expect a crow to sound like,” she said, playing me two caws. “But then there’s also this,” she said, playing a whispery rasp. “Some sounds are very long.” She played a ghostly oooo. “Then these two sounds, which you would never think were coming from crows.” One sounded like the click of a computer mouse; another sounded froggy. Her favorite recording reminded me of a duck’s quack. “I love these sounds,” she told me.

One ambition of Cusimano’s work is to find correlations between these varied vocalizations and the precise contexts in which they occurred. Research partners recently identified a quiet grunt that is most often made right when an adult crow returns to a nest—perhaps a way of saying, “Wake up!” A small bio-logger on the back of a crow can provide audio along with other data—a bird’s-tail view. “You hear their wingbeats, you can hear their friends calling, and them calling back to their friends,” Cusimano told me. The data feels intimate: “You hear baby chicks a distance away, then you hear the bird take off, and the chick sounds are getting louder. And then the crow lands in the nest. You’re in the middle of this crow family.”

Can a machine be trained to distinguish individual birds by the sounds they make?
Can it pick up on vocalizations across individuals that share similar functions? Machine learning is excellent at detecting correlations, but some are irrelevant or even misleading. Cusimano developed an algorithm to distinguish among caws made by various crows, which had names such as Naranja, Rosa, and Azul. She seemed to have succeeded. Then she realized that the computer might be categorizing the sounds based on distinctive background noises, which corresponded to the placement of the recording devices. “The algorithms can pick up on tiny little clues that confound the actual problems we want to find answers to,” she said.

Those who live and work alongside animals, whether they’re scientists or not, often think, as a matter of course, that animals can speak with one another, and in depth. Instead of being surprised by the discovery of each “unexpected” animal ability, maybe we should be surprised that humans have such low expectations. Many of us laugh—or shake our heads sadly—when we read that Descartes supposedly threw a cat out of a window to see if it would show fear, as a sort of test for consciousness. (He believed that nonhuman animals were senseless automatons.) Yet many of us would also consider it a wonder that, according to a recent study, elephants seem to have distinct names for one another, which their elephant friends and family use among themselves.

When I started researching this story, I was amazed by each additional avian accomplishment that I learned about, especially in small, ordinary birds. It wasn’t only that they communicated this or that to one another but that they were full of concerns—that they were at the center of their own worlds. But shouldn’t I have intuited that this was the case all along? I had baselessly assumed that birds had little on their minds. The other day, my daughter and I were walking to her soccer practice, passing by sparrows and also people. “We know almost nothing about birds,” she told me.
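The confound Cusimano describes—a model keying on each recorder’s distinctive background noise rather than on the birds themselves—can be reproduced with toy data. This is a sketch only: the two-feature “recordings,” the nearest-centroid classifier, and all numbers are invented, and have nothing to do with her actual algorithm.

```python
# A classifier that seems to tell two crows apart, but is really keying
# on each recording site's background hum.
import numpy as np

rng = np.random.default_rng(0)
n = 200

def recordings(hum_hz):
    """Each row: [call-pitch feature, background-hum feature]. The call
    feature is drawn from the SAME distribution for both crows; only the
    site-specific hum differs."""
    call = rng.normal(1200.0, 5.0, n)    # indistinguishable between crows
    hum = rng.normal(hum_hz, 1.0, n)     # distinctive mains hum per site
    return np.column_stack([call, hum])

naranja_train = recordings(hum_hz=60.0)  # "Naranja" recorded at site 1
rosa_train = recordings(hum_hz=50.0)     # "Rosa" recorded at site 2

# Nearest-centroid classifier: label by whichever training mean is closer.
c_naranja = naranja_train.mean(axis=0)
c_rosa = rosa_train.mean(axis=0)

def predict(x):
    d_n = np.linalg.norm(x - c_naranja, axis=1)
    d_r = np.linalg.norm(x - c_rosa, axis=1)
    return np.where(d_n < d_r, "naranja", "rosa")

# Same recorders: it looks as if the model learned the crows' voices.
same_site_acc = (predict(recordings(60.0)) == "naranja").mean()

# Swap the recorders: Naranja now sits by the 50 Hz hum, and the model
# follows the background noise, not the bird.
swapped_acc = (predict(recordings(50.0)) == "naranja").mean()

print(same_site_acc, swapped_acc)  # high accuracy vs. near zero
```

The failure only becomes visible when the spurious cue is broken—here by swapping recorders—which is why such shortcuts can survive ordinary held-out testing.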
“There’s so much we don’t even notice.” She thought for a moment. “I think they have just as much language as we do, but a lot of it is in their mind. So we don’t hear it.” ♦
...
Read the original on www.newyorker.com »