10 interesting stories served every morning and every evening.
Language models today, while useful for a variety of tasks, are still limited. The only information they can learn from is their training data. This information can be out-of-date and is one-size-fits-all across applications. Furthermore, the only thing language models can do out-of-the-box is emit text. This text can contain useful instructions, but to actually follow these instructions you need another process.
Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data. In response to a user’s explicit request, plugins can also enable language models to perform safe, constrained actions on their behalf, increasing the usefulness of the system overall.
We expect that open standards will emerge to unify the ways in which applications expose an AI-facing interface. We are working on an early attempt at what such a standard might look like, and we’re looking for feedback from developers interested in building with us.
Today, we’re beginning to gradually enable existing plugins from our early collaborators for ChatGPT users, beginning with ChatGPT Plus subscribers. We’re also beginning to roll out the ability for developers to create their own plugins for ChatGPT.
In the coming months, as we learn from deployment and continue to improve our safety systems, we’ll iterate on this protocol, and we plan to enable developers using OpenAI models to integrate plugins into their own applications beyond ChatGPT.
...
Read the original on openai.com »
At GitHub, our mission has always been to innovate ahead of the curve and give developers everything they need to be happier and more productive in a world powered by software. When we began experimenting with large language models several years ago, it quickly became clear that generative AI represents the future of software development. We partnered with OpenAI to create GitHub Copilot, the world’s first at-scale generative AI development tool made with OpenAI’s Codex model, a descendant of GPT-3.
GitHub Copilot started a new age of software development as an AI pair programmer that keeps developers in the flow by auto-completing comments and code. And less than two years since its launch, GitHub Copilot is already writing 46% of code and helps developers code up to 55% faster.
But AI-powered auto-completion is just the starting point. Our R&D team at GitHub Next has been working to move past the editor and evolve GitHub Copilot into a readily accessible AI assistant throughout the entire development lifecycle. This is GitHub Copilot X—our vision for the future of AI-powered software development. We are not only adopting OpenAI’s new GPT-4 model, but are introducing chat and voice for Copilot, and bringing Copilot to pull requests, the command line, and docs to answer questions on your projects.
With AI available at every step, we can fundamentally redefine developer productivity. We are reducing boilerplate and manual tasks and making complex work easier across the developer lifecycle. By doing so, we’re enabling every developer to focus all their creativity on the big picture: building the innovation of tomorrow and accelerating human progress, today.
Want to see what’s new? Discover GitHub Copilot X—our vision for the future of AI-powered software development. Learn more >
* A ChatGPT-like experience in your editor with GitHub Copilot Chat: We are bringing a chat interface to the editor that’s focused on developer scenarios and natively integrates with VS Code and Visual Studio. GitHub Copilot Chat is far more than a chat window: it recognizes what code a developer has typed and what error messages are shown, and it’s deeply embedded into the IDE. A developer can get in-depth analysis and explanations of what code blocks are intended to do, generate unit tests, and even get proposed fixes to bugs.
GitHub Copilot Chat builds upon the work that OpenAI and Microsoft have done with ChatGPT and the new Bing. It will also join our voice-to-code AI technology extension we previously demoed, which we’re now calling GitHub Copilot Voice, where developers can verbally give natural language prompts.
Sign up for the technical preview >
* Copilot for Pull Requests: You can now sign up for a technical preview of the first AI-generated descriptions for pull requests on GitHub. This new functionality is powered by OpenAI’s new GPT-4 model and adds support for AI-powered tags in pull request descriptions through a GitHub app that organization admins and individual repository owners can install. These tags are automatically filled out by GitHub Copilot based on the changed code. Developers can then review or modify the suggested description.
Enroll your repository in the technical preview >
This is just the first step we’re taking to rethink how pull requests work on GitHub. We’re testing new capabilities internally where GitHub Copilot will automatically suggest sentences and paragraphs as developers create pull requests by dynamically pulling in information about code changes.
We are also preparing a new feature where GitHub Copilot will automatically warn developers if they’re missing sufficient testing for a pull request and then suggest potential tests that can be edited, accepted, or rejected based on a project’s needs.
This complements our efforts with GitHub Copilot Chat where developers can ask GitHub Copilot to generate tests right from their editor—so, in the event a developer may not have sufficient test coverage, GitHub Copilot will alert them once they submit a pull request. It will also help project owners to set policies around testing, while supporting developers to meet these policies.
* Get AI-generated answers about documentation: We are launching GitHub Copilot for Docs, an experimental tool that uses a chat interface to provide users with AI-generated responses to questions about documentation—including questions developers have about the languages, frameworks, and technologies they’re using. We’re starting with documentation for React, Azure Docs, and MDN, so we can learn and iterate quickly with the developers and users of these projects.
We’re also working to bring this functionality to any organization’s repositories and internal documentation—so any developer can ask questions via a ChatGPT-like interface about documentation, idiomatic code, or in-house software in their organization and get instant answers.
We know that the benefits of a conversational interface are immense, and we are working to enable semantic understanding of the entirety of GitHub across public and private knowledge bases to better personalize GitHub Copilot’s answers for organizations, teams, companies, and individual developers alike based on their codebase and documentation.
Moving forward, we are exploring the best ways to index resources beyond documentation such as issues, pull requests, discussions, and wikis to give developers everything they need to answer technical questions.
Our work to rethink pull requests and documentation is powered by OpenAI’s newly released GPT-4 AI model.
Even though this model was just released, we’re already seeing significant gains in logical reasoning and code generation. With GPT-4, the state of AI is beginning to catch up with our ambition to create an AI pair programmer that assists with every development task at every point in the developer experience.
Moreover, it’s helping GitHub Copilot understand more of a developer’s codebase to offer more tailored suggestions in PRs and better summations of documentation.
* Copilot for the command line interface (CLI): Next to the editor and pull requests, the terminal is the place where developers spend the most time. But even the most proficient developers need to scroll through many pages to remember the precise syntax of many commands. This is why we are launching GitHub Copilot CLI. It can compose commands and loops, and throw around obscure find flags to satisfy your query.
From reading docs to writing code to submitting pull requests and beyond, we’re working to personalize GitHub Copilot for every team, project, and repository it’s used in, creating a radically improved software development lifecycle. Together with Microsoft’s knowledge model, we will harness the reservoir of data and insights that exist in every organization, to strengthen the connection between all workers and developers, so every idea can go from code to reality without friction. At the same time, we will continue to innovate and update the heart of GitHub Copilot—the AI pair programmer that started it all.
GitHub Copilot X is on the horizon, and with it a new generation of more productive, fulfilled, and happy developers who will ship better software for everyone. So—let’s build from here.
...
Read the original on github.blog »
Software engineers are, if not unique, then darn near unique in the ease with which we can create tools to improve our own professional lives; this, however, can come at a steep cost over time for people who constantly flit back and forth between different tools without investing the time to learn their own kit in depth. As someone with a healthy respect for the tacit knowledge of people better than me, I think a great 80/20 heuristic is “Learn the oldies first”: venerable Unix tools like cat, ls, cd, grep, and cut. (sed and awk, too, if you have the good fortune of landing yourself in an actual modern sysadmin role.)
But there are tools whose return on investment is so immediate, and whose value prop is so unique, that the 80/20 heuristic breaks down entirely for them. fzf is one of them. And it makes me sad to see so many people download it, run it as-is at the command line, and then just shake their heads and walk away, saying “I don’t get it”.
Here I want to change that. Pretend you live on a more-or-less standard Ubuntu box. You’ve just installed fzf using the standard install script — now what?
In most terminals, Linux and Windows alike, Ctrl+R gives you backwards search for your commands. The reason you, like me, may not have heard about this until you had already been hacking away for ten flippin’ years at the shell is because the base version kind of sucks for two reasons:

You need to give an exact match to get what you’re trying to remember.

You get only one preview, so if you miss that exact match by even one character, you’re on a wild goose chase.
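To see why fuzzy beats exact matching, here is a minimal grep-based sketch of the idea behind fzf’s matcher. The `fuzzy` helper is hypothetical and much dumber than fzf’s real scoring algorithm; it just shows that a query’s characters only need to appear in order, not contiguously:

```shell
# Rough stand-in for fzf's fuzzy matching (hypothetical helper; fzf's
# real ranking is far smarter). A query like "ngxcf" is turned into
# the regex "n.*g.*x.*c.*f.*", so the characters must appear in order
# but need not be adjacent. Plain-letter queries only.
fuzzy() { grep -i "$(printf '%s' "$1" | sed 's/./&.*/g')"; }

# "ngxcf" matches nginx.conf but not README.md:
printf 'nginx.conf\nREADME.md\n' | fuzzy ngxcf
```

With exact Ctrl+R search you would have to remember a literal substring; with fuzzy matching, a half-remembered skeleton of the command is enough.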
fzf is a bit of a weird program because installing it actually overwrites a whole bunch of keyboard shortcuts, in the interest of making them better. Normally I would hate this. But…
… This is a considerable improvement on the baseline.
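That improvement comes from your shell config rather than the fzf binary itself: the install script appends a line like the following to ~/.bashrc (the exact path and filename vary by version and install method), and the sourced file is what defines the Ctrl+R, Ctrl+T, and Alt+C bindings.

```shell
# Typically appended to ~/.bashrc by fzf's install script
# (path varies by version and distro):
[ -f ~/.fzf.bash ] && source ~/.fzf.bash
```

If the shortcuts ever stop working, checking that this line survived your last dotfile cleanup is the first thing to try.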
Let’s say you boot into an empty terminal. You’re trying to quickly find your nascent SaaS side hustle repo and cd to it - but it’s been weeks since you’ve been there, and your actual full-time job has been unusually fun and engaging… How do you find it?
Answer: With fzf. fzf rewrites Alt+C into a souped-up fuzzy-cd shortcut that lets you hop around very quickly when all you remember is the vague name of the directory in question.
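Roughly speaking, Alt+C is equivalent to feeding a directory listing through fzf and cd-ing to the selection. Here is a sketch with the interactive fzf step left as a comment (a hypothetical hand-rolled version; the real binding uses fzf’s own directory walker and handles cancellation cleanly):

```shell
# Hypothetical hand-rolled Alt+C:
#   cd "$(find . -type d 2>/dev/null | fzf)"
# The candidate list that fzf fuzzy-filters is just the directory tree.
# Demo with a throwaway hierarchy:
mkdir -p demo/projects/saas-side-hustle
find demo -type d | sort
```

Typing a few letters of “saas” into the real Alt+C prompt narrows that list down to the one directory you half-remember.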
Okay, we’ve got the shortcuts out of the way. Honestly these two guys alone provide the majority of the value I get out of fzf - but let’s look at what the command, by itself, does.
It fuzzy-finds file locations! Relative to your current directory, at least. This… isn’t that useful by itself.
And you get a fuzzy-open-in-editor experience!
The other day I was trying to hack together baby’s first live-reload with a Firefox extension, entr, and nginx. And I found myself asking: Where the heck is nginx.conf?
Use my half-remembered knowledge of the FHS to guess around, with trees and greps, or
Just know and commit it to memory and feel superior to everyone else, or
Just pipe the output of find / to fzf and start searching.
I like this clip a lot because it shows some of the subtle tradeoffs of using fzf, as well as one of the more advanced searching features - searching for conf$ will filter out any line that doesn’t end in conf. Notice that fzf temporarily wigs out when find hits a whole lot of “Permission denied” errors - but then recovers itself a few seconds later.
Are those extra few seconds worth the tradeoff for being able to find config files in such a braindead manner? It is for me.
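The conf$ trick is fzf’s extended-search syntax: a trailing $ anchors the term to the end of the line, much as it does in grep. A non-interactive stand-in shows the effect:

```shell
# An fzf query of "conf$" keeps only lines ending in "conf" --
# the same filtering grep's end-of-line anchor gives you:
printf '/etc/nginx/nginx.conf\n/etc/hosts\n/var/log/nginx/access.log\n' \
  | grep 'conf$'
```

fzf also supports prefix anchors (^term) and exact-match terms ('term), so you can mix fuzzy and anchored terms in one query.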
Thanks to sigmonsays, Hacker News, for reminding me of this feature!
About halfway between “overwrite a keyboard shortcut” and “use fzf as-is” is using two stars for fuzzy tab completion. Here’s using it to do something quite similar to vi $(fzf), as above:
You do have to hit Enter one more time after you actually get the command, fair warning.
I’m not yet in the habit of using this all that often, since my only real use case at home is as a replacement for $(fzf), and I just find explicitly calling the boy easier to remember. I imagine it’s a similar experience for tab-tab-star-heads as watching my coworker copy and paste manually from the terminal instead of using :read ! echo “Hello world!” is for me.
For when you neither remember exactly what you’re moving, nor where you’re moving it to, but you remember the abstract concept of distance over time well enough to know it simply must be done, and something extremely specific about the nature of each item to be shunted.
Everything I say below can be done with grep as well, but the recursive-by-default nature of rg (also known as ripgrep) is where the tool really comes into its own. I highly recommend you download it and use it for the following examples as well. But if you’re toolshy, don’t worry!
rg . | fzf: Fuzzy search every line in every file
Now we’re getting into some real amnesiac territory >:3c.
rg . | fzf | cut -d ':' -f 1: Fuzzy search every line, in every file, and return the file location

vim $(rg . | fzf | cut -d ':' -f 1): Fuzzy search every line, in every file, and open that file
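The cut stage works because rg prints matches in file:line:text form, so field 1 on a ':' delimiter is the file path. With a canned line standing in for the interactive rg . | fzf part, the extraction looks like this:

```shell
# rg emits "file:line:text"; cut grabs just the file path so it can be
# handed to an editor via command substitution:
printf 'src/config.rs:42:let port = 8080;\n' | cut -d ':' -f 1
```

Note that text after the second colon is discarded entirely, which is exactly what you want when the goal is opening the file rather than quoting the match.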
...
Read the original on andrew-quinn.me »
Cyclists are now the “single largest vehicular mode counted during peak times on City streets,” says a report to the transportation committee of the City of London Corporation, the municipal governing body of London’s square mile.
The traffic count figures are in a briefing document provided to councilors for a committee meeting next Tuesday.
At peak times, people cycling represent 40% of road traffic in the City and 27% throughout the day.
Over the last decade, the use of motor vehicles has been increasingly restricted in the financial heart of the U.K. The 24-hour traffic count was conducted on a wet and windy November day last year.
Walking remains the main way people travel on the City’s streets, says the report to councilors. However, the number of pedestrians is currently below pre-pandemic figures, with motor vehicle volumes also at 80% of what they were in 2019.
However, cyclist numbers are at 102% of pre-pandemic levels. The number of motorists has fallen by 64% since 1999, while the number of cyclists has increased by 386%.
“Long-term trends observed from count data taken from 12 sites across the City since 1999 show motor vehicle volumes continuing to decline and cycle volumes continuing to increase,” says the traffic order paper to councilors, due to be discussed on 7 March.
The online publication of the materials was spotted by Twitter user @lastnotlost.
Apart from during the pandemic, the most significant percentage drops in motor vehicle use were between 2007-2009 and 2014-16, reveals the briefing document.
Danny Williams, the CEO of arm’s-length government body Active Travel England, said the considerable uptick in cycling levels in the City of London was “quite astonishing.”
...
Read the original on www.forbes.com »
ACM, the Association for Computing Machinery, today named Bob Metcalfe as recipient of the 2022 ACM A. M. Turing Award for the invention, standardization, and commercialization of Ethernet.
Metcalfe is an Emeritus Professor of Electrical and Computer Engineering (ECE) at The University of Texas at Austin and a Research Affiliate in Computational Engineering at the Massachusetts Institute of Technology (MIT) Computer Science & Artificial Intelligence Laboratory (CSAIL).
The ACM A. M. Turing Award, often referred to as the “Nobel Prize in Computing,” carries a $1 million prize, with financial support provided by Google, Inc. The Award is named for Alan M. Turing, the British mathematician who articulated the mathematical foundations of computing.
In 1973, while a computer scientist at the Xerox Palo Alto Research Center (PARC), Metcalfe circulated a now-famous memo describing a “broadcast communication network” for connecting some of the first personal computers, PARC’s Altos, within a building. The first Ethernet ran at 2.94 megabits per second, which was about 10,000 times faster than the terminal networks it would replace.
Although Metcalfe’s original design proposed implementing this network over coaxial cable, the memo envisioned “communication over an ether,” making the design adaptable to future innovations in media technology including legacy telephone twisted pair, optical fiber, radio (Wi-Fi), and even power networks, to replace the coaxial cable as the “ether.” That memo laid the groundwork for what we now know today as Ethernet.
Metcalfe’s Ethernet design incorporated insights from his experience with ALOHAnet, a pioneering computer networking system developed at the University of Hawaii. Metcalfe recruited David Boggs (d. 2022), a co-inventor of Ethernet, to help build a 100-node PARC Ethernet. That first Ethernet was then replicated within Xerox to proliferate a corporate internet.
In their classic 1976 Communications of the ACM article, “Ethernet: Distributed Packet Switching for Local Computer Networks,” Metcalfe and Boggs described the design of Ethernet. Metcalfe then led a team that developed the 10Mbps Ethernet to form the basis of subsequent standards.
After leaving Xerox in 1979, Metcalfe remained the chief evangelist for Ethernet and continued to guide its development while working to ensure industry adoption of an open standard. He led an effort by Digital Equipment Corporation (DEC), Intel, and Xerox to develop a 10Mbps Ethernet specification—the DIX standard. The IEEE 802 committee was formed to establish a local area network (LAN) standard. A slight variant of DIX became the first IEEE 802.3 standard, which is still vibrant today.
As the founder of his own Silicon Valley Internet startup, 3Com Corporation, in 1979, Metcalfe bolstered the commercial appeal of Ethernet by selling network software, Ethernet transceivers, and Ethernet cards for minicomputers and workstations. When IBM introduced its personal computer (PC), 3Com introduced one of the first Ethernet interfaces for IBM PCs and their proliferating clones.
Today, Ethernet is the main conduit of wired network communications around the world, handling data rates from 10 Mbps to 400 Gbps, with 800 Gbps and 1.6 Tbps technologies emerging. Ethernet has also become an enormous market, with revenue from Ethernet switches alone exceeding $30 billion in 2021, according to the International Data Corporation.
Metcalfe insists on calling Wi-Fi by its original name, Wireless Ethernet, for old times’ sake.
“Ethernet has been the dominant way of connecting computers to other devices, to each other, and to the Internet,” explains ACM President Yannis Ioannidis. “Metcalfe’s original design ideas have enabled the bandwidth of Ethernet to grow geometrically. It is rare to see a technology scale from its origins to today’s multigigabit-per-second capacity. Even with the advent of WiFi, Ethernet remains the staple mode of data communication, especially when security and reliability are prioritized. It is especially fitting to recognize such an impactful invention during its 50th anniversary year.”
“Ethernet is the foundational technology of the Internet, which supports more than 5 billion users and enables much of modern life,” added Jeff Dean, Google Senior Fellow and SVP of Google Research and AI. “Today, with an estimated 7 billion ports around the globe, Ethernet is so ubiquitous that we take it for granted. It’s easy to forget that our interconnected world would not be the same if not for Bob Metcalfe’s invention and his enduring vision that every computer needed to be networked.”
Metcalfe will be formally presented with the ACM A. M. Turing Award at the annual ACM Awards Banquet, which will be held this year on Saturday, June 10 at the Palace Hotel in San Francisco.
Robert Melancton Metcalfe is Emeritus Professor of Electrical and Computer Engineering (ECE) after 11 years at The University of Texas at Austin. He has recently become a Research Affiliate in Computational Engineering at his alma mater, the Massachusetts Institute of Technology (MIT) Computer Science & Artificial Intelligence Laboratory (CSAIL).
Metcalfe graduated from MIT in 1969 with Bachelor degrees in Electrical Engineering and Industrial Management. He earned a Master’s degree in Applied Mathematics in 1970 and a PhD in Computer Science in 1973 from Harvard University.
Metcalfe’s honors include the National Medal of Technology, IEEE Medal of Honor, Marconi Prize, Japan Computer & Communications Prize, ACM Grace Murray Hopper Award, and IEEE Alexander Graham Bell Medal. He is a Fellow of the US National Academy of Engineering, the American Academy of Arts and Sciences, and the National Inventors, Consumer Electronics, and Internet Halls of Fame.
The A. M. Turing Award, the ACM’s most prestigious technical award, is given for major contributions of lasting importance to computing.
This site celebrates all the winners since the award’s creation in 1966. It contains biographical information, a description of their accomplishments, straightforward explanations of their fields of specialization, and text or video of their A. M. Turing Award Lecture.
...
Read the original on amturing.acm.org »
...
Read the original on www.theverge.com »
Your Mac can be quite a chatty fellow. Talking to strangers all over the world.
You deserve to know whom your apps are talking to.
With Little Snitch Mini, you can.
Your apps connect whenever they want, to wherever they want. With Little Snitch Mini, they connect only if you want.
It shows you each and every Internet connection of all apps on your Mac. And if you don’t like what you see, you simply push the Stop-Button.
Say hello to Blocklists!
It’s never been easier to get rid of unwanted Internet connections.
Choose from a curated collection of blocklists covering thousands of ad servers, tracking servers and much more. They are kept up-to-date automatically, for optimal protection of your privacy.
Learn more…
Blocklists are organized in categories. So you can quickly find the ones that best suit your needs.
Some of your apps have seen more of the world than you have! They send data to the farthest corners of our planet.
With the map view of Little Snitch Mini you can follow their tracks!
The versatile traffic chart shows you detailed statistics of the data amounts sent and received by your apps over the last 12 months.
What’s going on right now?
The status menu shows an animated live overview of your Mac’s most recent network activity.
The network monitoring functionality, including the real-time connection list, traffic diagrams and the animated map view can be used for free!
The full feature set, including connection blocking, extended traffic history time ranges, advanced display and filtering options and more is available as an in-app purchase.
...
Read the original on obdev.at »
Now available with AMD Ryzen™ 7040 Series and 13th Gen Intel® Core™
Performance upgrades for your Framework Laptop 13, with the latest Intel and AMD processor options.
Higher-capacity batteries, improved hinges, matte displays, and more for your Framework Laptop 13.
Get the latest news and product updates from Framework
“The team over at Framework has managed to not just create a laptop that is easily repairable and upgradable, but it’s also a thin, gorgeous, performant laptop.” — Linus Tech Tips
“This is the best laptop you can get right now if you want a completely repairable and upgradeable device.” — Dave2D
Best of the Best Design Award
The time has come for consumer electronics products that are designed to last: products that give you back the power to upgrade, customize, and repair them. We’re excited for the opportunity to fix the consumer electronics industry together.
At our Next Level Event, we launched a colossal set of new products and upgrades, including the new, high-performance 16” Framework Laptop 16 and both Intel- and AMD-powered versions of the Framework Laptop 13. We can’t wait to see what you think!
...
Read the original on frame.work »
Hyundai Promises To Keep Buttons in Cars Because Touchscreen Controls Are Dangerous

Hyundai knows you like to keep your eyes on the road, and it’s giving you the controls to do just that.

Touchscreens and touch controls took over the world of automotive interior design, as automakers aimed to build vehicles on the cutting edge of technology and trends. As it turns out, though, sometimes the old ways are best. Hyundai certainly thinks so, as it has pledged to employ real physical buttons in products to come.

Sang Yup Lee, Hyundai’s head of design, reiterated the company’s commitment to buttons at the introduction of the new Hyundai Kona. As reported by CarsGuide, for the Korean automaker it’s a decision rooted in safety concerns. “We have used the physical buttons quite significantly the last few years. For me, the safety-related buttons have to be a hard key,” said Lee. That’s a real volume knob in the new Kona’s interior, along with physical controls for the HVAC system, too.

It’s a design call that makes a lot of sense. In some modern vehicles, adjusting things like the volume or climate control settings can require diving into menus on a touchscreen, or using your eyes to find a touch control on the dash. In comparison, the tactile feedback of real buttons, dials, and switches lets drivers keep their eyes on the road instead. “When you’re driving, it’s hard to control it. This is why when it’s a hard key it’s easy to sense and feel it,” said Lee. As far as he is concerned, physical controls are a necessity for anything that could impact safety. Hence the physical buttons and dials for items like the HVAC system and volume control.

Lee hinted that while this is a priority for Hyundai today, things may change in the future. In particular, the company will likely look at using touch controls more heavily once autonomous driving becomes mainstream. “When it comes to Level 4 autonomous driving, then we’ll have everything soft key,” said Lee.

Touchscreens and touch controls did offer certain enticements to automakers. They allow a great deal of functions to be controlled with a compact, changeable interface. A handful of touch controls and a touchscreen can also be cheaper and easier to implement than populating buttons all over the cabin. Plus, for a time, they were a sign that an automaker was moving with the times. After the past decade, though, people have grown tired of such novelties. Touch controls are, by and large, less responsive and less practical than the simple buttons of yore. That’s before we even contemplate the frustration of having to dive into a menu system just to turn on a heated seat. Hyundai has clearly identified that the old-school ethos best suits its own interiors, and it isn’t afraid to say so.
...
Read the original on www.thedrive.com »