10 interesting stories served every morning and every evening.
While Adobe’s annual MAX conference gives the company a chance to unveil its latest features, it also lets the brand demonstrate some of its more weird and wonderful ideas. ‘Sneaks’ sees Adobe engineers take to the stage to share ideas that may or may not see the light of day, such as 2024’s Project Turntable. Creative Bloq enjoyed an exclusive preview of the concept ahead of its unveiling at MAX, and it’s one of the most impressive Sneaks we’ve seen.
Project Turntable lets users easily rotate 2D vector art in 3D, while ensuring it still looks like 2D art from any new angle. And even after the rotation, the vector graphics stay true to the original shape, maintaining the design’s original essence.
In the example above, a 2D vector of an illustrated warrior is rotated to face a dragon. While spinning, the vector image appears to be a 3D object, but the static image the user settles on will be completely flat. What’s truly impressive is how the tool uses AI to fill in the ‘gaps’ in the image — in another example, a 2D horse with only two visible legs is rotated to reveal four.
The tool was created by Adobe research scientist Zhiqin Chen. Adobe’s Brian Domingo told Creative Bloq that, like other Adobe innovation projects, there’s still no guarantee that this feature will be released commercially — but the team expects it to generate a ton of interest at Adobe MAX.
From Automatic Image Distraction Removal to a new Generative Workspace, Adobe has already announced over 100 new creator-first features this week. And with huge announcements from other brands including Tesla and Meta, this has arguably been one of the biggest weeks for AI we’ve seen so far.
...
Read the original on www.creativebloq.com »
Many users prefer to use an RSS feed reader to stay up to date with the content on the websites they visit. But if you’ve enabled Cloudflare on your website, you’re likely blocking these RSS users from accessing your website content without realizing it.
In Cloudflare’s dashboard, you’ll find tools designed to block bot traffic to your website, particularly the Bot Fight Mode and the block all “AI scrapers and crawlers” options. When enabled, these features end up blocking users who access your website through RSS readers, even though RSS readers are legitimate and aren’t malicious bots.
How Cloudflare blocks RSS readers from your website
When these tools are enabled, Cloudflare evaluates each visit to your website and determines whether it comes from an AI scraper or “bot” based on a score, which, ironically, Cloudflare uses AI to generate.
Then, when a user’s RSS reader attempts to read your website, Cloudflare presents it with a number of challenges that the reader would never be able to complete.
Here’s an example of the Human Verification challenge that an RSS reader would be shown when it tries to visit your website. The challenge requires a human to solve, and because an RSS reader is not a human, it can never be completed.
In other cases, Cloudflare will simply block the RSS reader from accessing your website without giving a reason.
When Cloudflare blocks an RSS reader from accessing your website, the only way the reader’s owner can resolve it is to contact you directly and ask you to create a custom rule to unblock it. But Cloudflare shouldn’t expect people to contact the owner of every Cloudflare-enabled website that blocks their RSS reader. And you shouldn’t have to waste time logging into Cloudflare to add an exception every time it blocks an RSS reader, either.
Unblock RSS readers while still using Cloudflare
Even though Cloudflare blocks RSS readers from your website, you can whitelist RSS readers as a workaround. This at least unblocks RSS readers without turning off any security features you may have already enabled, until Cloudflare better addresses the issue.
First, find the user agent of any blocked RSS reader in Cloudflare’s analytics dashboard. The User-Agent of most good RSS readers usually includes the name of the reader, its URL, or a word like “RSS” or “feed” that makes it obvious that it’s an RSS reader.
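To illustrate, here’s a quick heuristic for spotting reader-like user agents in your logs. This is a sketch, not an exhaustive check, and the example User-Agent strings below are made up:

```javascript
// Rough heuristic: does a User-Agent string look like an RSS reader?
// Matches the common markers mentioned above: "RSS", "feed", or "reader".
function looksLikeFeedReader(userAgent) {
  return /rss|feed|reader/i.test(userAgent);
}

// A made-up reader User-Agent versus a normal browser User-Agent:
console.log(looksLikeFeedReader("ExampleReader/2.1 (+https://reader.example; RSS fetcher)")); // true
console.log(looksLikeFeedReader("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/118.0"));  // false
```

Keep in mind this only helps you identify readers; well-behaved readers advertise themselves, while disguised clients won’t match any pattern.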
Once you’ve identified an RSS reader’s user agent, you can create a custom rule that explicitly whitelists and allows all traffic from the reader’s IP address or its user agent string. Note that user agents can be disguised, so it’s often better to whitelist the reader’s IP address instead of the user agent. If you’d like to whitelist Open RSS, please contact us for the required information.
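As a sketch, the custom rule’s expression might look like the following, written in Cloudflare’s Rules language and paired with a Skip or Allow action in the dashboard. The reader name and IP address here are placeholders, and whether a skip applies to a particular bot-protection product depends on your Cloudflare plan:

```
(http.user_agent contains "ExampleReader") or (ip.src in {203.0.113.10})
```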
Cloudflare needs a better way to allow RSS readers
Cloudflare offers a bot verification program to which RSS reader owners can manually apply to avoid being blocked by websites, but this program isn’t guaranteed to work, and it suffers from quite a few problems.
* The verification process is flimsy — applications are handled through a Google form, and after applying, no notification is sent confirming that the application was received or is being worked on (we’ve tried applying twice), with no progress updates or expected timeframe for completion.
* Verified RSS readers are still being blocked — There are reports that RSS readers Cloudflare has verified as “good bots” are still being blocked from websites. If Cloudflare has successfully approved an RSS reader as a “good bot”, it shouldn’t be blocked or still require website owners to add any custom exception rules.
* Unblocking RSS readers across multiple websites is cumbersome — Cloudflare’s only resolution to unblocking RSS readers is for the owners of the readers to contact each website owner directly and ask for an exception to be made. While that may work for one-off cases, this is unreasonable for RSS readers that have to access thousands of different Cloudflare-enabled websites each day. It’s also overwhelming for website owners to configure exceptions for each and every RSS reader.
To be clear, there’s nothing wrong with using Cloudflare’s security tools on your website to help deal with malicious AI bots, scrapers, and potential attacks. But Cloudflare needs to ensure that people who use RSS tools aren’t blocked from accessing your website content, and make it easier to resolve when they are.
...
Read the original on openrss.org »
What you’re seeing below is two of Crokinole’s greats simultaneously securing perfect rounds. Technically speaking, they each flicked a 3.2cm disc 30cm across a board into a 3.5cm hole (just 9% wider than the disc itself) eight times in a row. In game terms, they made eight open 20s each. But it’s just flicking a little disc across a small board. How hard can it be, really?

The mesmerizing 56 seconds above were captured at the semifinals of the 2024 World Crokinole Championship, where Connor Reinman defeated Justin Slater. A matchup not unlike Magic vs. Bird, or Swift vs. Eilish.

How rare was this feat of perfection? Was this one of those obscure new Olympic events? You may even be wondering, wtf is Crokinole? We’ll get to all these questions. But first, you must understand Crokinole.

The game’s origin: the southern region of Ontario. If you are from the southern region of the Canadian province of Ontario, you may already be well-versed in Crokinole due to its Canadian origin. For the uninitiated, Crokinole is like a mashup of shuffleboard and curling, played on a tabletop board. It’s been around since the 19th century but has seen a steady rise in global popularity in recent years. To know the game, one must play the game.
Let your training begin. The main objective is to flick your discs into higher-scoring regions. The center hole is worth 20 points. Be careful, though—eight pegs surround the fifteen-point region.

Here, your opponent shoots on an empty board. To be a valid shot (and stay on the board), the disc must land within the 15-point region. If any opponent discs are on the board, your disc must hit one to be valid and stay on the board. Give it a try: hit your opponent’s disc. Use the slider and button below the board to position your disc. Then aim, hold the shoot button to find the perfect power, and release. If you shoot and it is an invalid shot, your disc is removed from the board and is not scored.

It’s your turn, and there are no opponent discs. You just need to land in the fifteen-point region, but scoring a 20 is ideal. A made 20 is set aside and counted. Give it a try: shoot your disc into the center hole to secure the 20 points.

After all sixteen discs are played, points are tallied based on the regions and any 20s. The round winner is the player with the most points, and a game consists of multiple rounds.

Easy Keanu, that was just the basics. We didn’t even get to the one-cheek rule (yes, that cheek). For more details you can watch this video or read the rules. Oh, and feel free to go play—we made a simulator for you to hone your skills against a bot. You are ready for the next part of the journey.

What does the data tell us about Connor Reinman and Justin Slater? Reinman, the back-to-back world champion, and Slater, the perennial powerhouse, are arguably the greatest players right now on the world stage.

Player rankings according to Crokinole Reference. No matches from 2021-2022.

But how good are they? Let’s start by looking at their ability to make open 20s, an indispensable skill for success. Here’s how competitive players compare in open 20 success rates.
Reinman and Slater are top competitors in open 20s, with success rates of 66% and 75%, compared to the average competitive player’s 55%. For basketball fans: a Crokinole player making eight consecutive 20s in live play is like an NBA player sinking 20 straight free throws during a game—not impossible, but far from common.

How do they compare to casual players? Observing players with varying experience, success rates for in-game open 20s ranged from 20% to 50%. The odds of two opponents making eight consecutive shots can vary greatly depending on their skill level. Here are the odds of a double perfect round. Our theoretical scenarios show how even a slight drop in skill greatly impacts the odds. To witness this rare event, both top players must hit a hot streak at the same time.

These percentages reflect in-game attempts, where a player’s rhythm is disrupted by various shots. In non-competitive, lower-pressure settings, their skills really shine—like Shawn Hagarty, who set an unofficial record with 64 consecutive open 20s. However, real games are far more nuanced and complex. Players — or their opponents — often miss early on. Here’s what the data reveals after analyzing 300 rounds from various matchups in last season’s tournaments.

At Which Shot an Open 20 is First Missed in Competitive Matches. Note: Based on 300 rounds from final to quarterfinal matches in the past year.

More often than not, the elusive double perfect round is lost right at the start. But I’ve been discussing this in the context of the most elegant form — a “pure” double perfect round, where all 16 shots are made as open 20s. Technically, though, a miss doesn’t completely rule out a perfect round. A perfect round can (and often does) include a combination of open 20s and ricochet 20s, where a disc bounces in off the opponent’s.

Ricochet 20s by Justin Slater and Andrew Hutchinson. Watch on YouTube.

The perfect blend of aim and power is required to pull off shots like those.
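The double-perfect odds discussed above can be sketched numerically. Treating every open 20 as an independent attempt is a simplification (it ignores rhythm, pressure, and shots that aren’t open 20s), but it shows how fast the odds shrink:

```javascript
// Probability that both players make all eight of their open 20s in a
// single round, assuming each shot is an independent attempt.
function doublePerfectProbability(p1, p2) {
  return Math.pow(p1, 8) * Math.pow(p2, 8);
}

// Reinman (66%) vs. Slater (75%), and two average competitive players (55%).
const elite = doublePerfectProbability(0.66, 0.75);
const average = doublePerfectProbability(0.55, 0.55);

console.log(`Elite matchup: roughly 1 in ${Math.round(1 / elite)} rounds`);
console.log(`Average matchup: roughly 1 in ${Math.round(1 / average)} rounds`);
```

Under these assumptions the elite matchup comes out to a fraction of a percent per round, and the average matchup is rarer by roughly two orders of magnitude.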
Try it yourself: can you achieve the feel-good ricochet 20?

Going for a 20 isn’t always the best or even a viable option. Discs on the board introduce more exciting scenarios that add layers of strategy. Having all your discs on your side is usually a strong defensive position, but watching your opponent dismantle it can be demoralizing. That’s exactly what happened in this round between Andrew Hutchinson and Nolan Tracey.

The Slater-Reinman round was the only double perfect in a review of 445 highly competitive rounds in the past year. One thing is certain: more skilled players tend to keep the board clear and make open 20s at a higher rate, increasing the chance of glimpsing a pure double perfect round.

If there’s one takeaway, it’s that Crokinole is fun and the community is awesome. Whether you’re playing to win or just flicking a few rounds with Grandma during the holidays, it’s a great time. So, maybe you’re into Crokinole now? Here are some resources to help you get started, or go get some practice in with the simulator. Buy a board here (or here, here, here).

Player types are estimations based on all open 20 success rates from 2023-2024 NCA tournament data. Competitive 20 success rates are from Shawn Hagarty’s impressive data. Recreational open 20 success rates are based on observations of 600 open 20 attempts from 10 individuals with at least 50 attempts each. First missed shot data is from watching playoff rounds from all 2023-2024 tournaments on Tracey Boards’ coverage of events. Yearly NCA tour rankings data is based on the rank in July (at the end of the season). Data from Crokinole Reference.

The Pudding
is a digital publication that explains ideas debated in culture with visual essays.
...
Read the original on pudding.cool »
This blog post details how I found CVE-2024-6778 and CVE-2024-5836, vulnerabilities in the Chromium web browser that allowed for a sandbox escape from a browser extension (with a tiny bit of user interaction). Eventually, Google paid me $20,000 for this bug report.
In short, these bugs allowed a malicious Chrome extension to run any shell command on your PC, which might then be used to install some even worse malware. Instead of merely stealing your passwords and compromising your browser, an attacker could take control of your entire operating system.
All untrusted code that Chromium runs is sandboxed, which means that it runs in an isolated environment that cannot access anything it’s not supposed to. In practice, this means that the Javascript code that runs in a Chrome extension can only interact with itself and the Javascript APIs it has access to. Which APIs an extension has access to is dependent on the permissions that the user grants it. However, the worst that you can really do with these permissions is steal someone’s logins and browser history. Everything is supposed to stay contained to within the browser.
Additionally, Chromium has a few webpages that it uses for displaying its GUI, using a mechanism called WebUI. These are prefixed with the chrome:// URL protocol, and include ones you’ve probably used like chrome://settings and chrome://history. Their purpose is to provide the user-facing UI for Chromium’s features, while being written with web technologies such as HTML, CSS, and Javascript. Because they need to display and modify information that is specific to the internals of the browser, they are considered to be privileged, which means they have access to private APIs that are used nowhere else. These private APIs allow the Javascript code running on the WebUI frontend to communicate with native C++ code in the browser itself.
Preventing an attacker from accessing WebUIs is important because code that runs on a WebUI page can bypass the Chromium sandbox entirely. For example, on chrome://downloads, clicking on a download for a .exe file will run the executable, and thus if this action was performed via a malicious script, that script can escape the sandbox.
Running untrusted Javascript on chrome:// pages is a common attack vector, so the receiving end of these private APIs performs some validation to ensure that they’re not doing anything that the user couldn’t otherwise do normally. Going back to the chrome://downloads example, Chromium protects against that exact scenario by requiring that, to open a file from the downloads page, the action that triggers it has to come from an actual user input and not just Javascript.
Of course, sometimes with these checks there’s an edge case that the Chromium developers didn’t account for.
My journey towards finding this vulnerability began when I was looking into the Chromium enterprise policy system. It’s intended to be a way for administrators to force certain settings to be applied to devices owned by a company or school. Usually, policies are tied to a Google account and are downloaded from Google’s own management server.
Enterprise policies also include things that the user would not be able to modify normally. For example, one of the things you can do with policies is disable the dino easter egg game:
Moreover, the policies themselves are separated into two categories: user policies and device policies.
Device policies are used to manage settings across an entire Chrome OS device. They can be as simple as restricting which accounts can log in or setting the release channel. Some of them can even change the behavior of the device’s firmware (used to prevent developer mode or downgrading the OS). However, because this vulnerability doesn’t pertain to Chrome OS, device policies can be ignored for now.
User policies are applied to a specific user or browser instance. Unlike device policies, these are available on all platforms, and they can be set locally rather than relying on Google’s servers. On Linux for instance, placing a JSON file inside /etc/opt/chrome/policies will set the user policies for all instances of Google Chrome on the device.
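For instance, a policy file might look like the following. This is an illustrative sketch; the conventional location is a subdirectory such as /etc/opt/chrome/policies/managed/, and the two policy names shown (both real policies) are used here only as examples:

```json
{
  "ShowHomeButton": true,
  "AllowDinosaurEasterEgg": false
}
```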
Setting user policies using this method is somewhat inconvenient since writing to the policies directory requires root permissions. However, what if there was a way to modify these policies without creating a file?
Notably, Chromium has a WebUI for viewing the policies applied to the current device, located at chrome://policy. It shows the list of policies applied and the logs for the policy service, and provides the ability to export these policies to a JSON file.
This is nice and all, but normally there’s no way to edit the policies from this page. Unless of course, there is an undocumented feature to do exactly that.
When I was doing research on the subject, I came across the following entry in the Chrome Enterprise release notes for Chrome v117:
Chrome will introduce a chrome://policy/test page
chrome://policy/test will allow customers to test out policies on the Beta, Dev, Canary channels. If there is enough customer demand, we will consider bringing this functionality to the Stable channel.
As it turns out, this is the only place in Chromium’s documentation where this feature is mentioned at all. So with nowhere else to look, I examined the Chromium source code to figure out how it is supposed to work.
Using Chromium Code Search, I did a search for chrome://policy/test, which led me to the JS part of the WebUI code for the policy test page. I then noticed the private API calls that it uses to set the test policies:
Remember how I said that these WebUI pages have access to private APIs? Well, sendWithPromise() is one of these. sendWithPromise() is really just a wrapper for chrome.send(), which sends a request to a handler function written in C++. The handler function can then do whatever it needs to in the internals of the browser, then it may return a value which is passed back to the JS side by sendWithPromise().
And so, on a whim, I decided to see what calling this in the JS console would do.
Unfortunately, running it simply crashed the browser. Interestingly, the following line appeared in the crash log:
[17282:17282:1016/022258.064657:FATAL:local_test_policy_loader.cc(68)] Check failed: policies.has_value() && policies->is_list(). List of policies expected
It looks like it expects a JSON string with an array of policies as the first argument, which makes sense. Let’s provide one then. Luckily policy_test_browser_proxy.ts tells me the format it expects so I don’t have to do too much guesswork.
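Here’s a sketch of what that looks like. The payload shape (name/source/scope/level/value) and the argument list are my reading of the test page’s TypeScript and may differ between Chromium versions; the sendWithPromise call is shown as a comment because it only exists on chrome:// pages:

```javascript
// Build the JSON string of test policies that the handler expects:
// an array of policy objects.
const policies = JSON.stringify([
  {
    name: "AllowDinosaurEasterEgg", // an arbitrary example policy
    source: "sourceEnterpriseDefault",
    scope: "user",
    level: "mandatory",
    value: false,
  },
]);

// On chrome://policy, this would be handed to the private API, roughly:
//   sendWithPromise("setLocalTestPolicies", policies, "");
console.log(policies);
```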
So after running this… it just works? I just set an arbitrary user policy by simply running some Javascript on chrome://policy. Clearly something is going wrong here, considering that I never explicitly enabled this feature at all.
For some context, this is what the policy test page is supposed to look like when it’s properly enabled.
To properly enable this page, you have to set the PolicyTestPageEnabled policy (also not documented anywhere). If that policy is not set to begin with, then chrome://policy/test just redirects back to chrome://policy.
So why was I able to set the test policies even though the PolicyTestPageEnabled policy was disabled? To investigate this, I looked through Chromium Code Search again and found the WebUI handler for the setLocalTestPolicies function on the C++ side.
The only validation that this function performs is that it checks to see if local_test_provider exists, otherwise it crashes the entire browser. Under what conditions will local_test_provider exist, though?
To answer that, I found the code that actually creates the local test policy provider.
So this function actually does perform a check to see if the test policies are allowed. If they’re not allowed, then it returns null, and attempting to set test policies like I showed earlier will cause a crash.
Maybe IsPolicyTestingEnabled() is misbehaving? Here’s what the function looks like:
This function first checks if kPolicyTestPageEnabled is true, which is the policy that is supposed to enable the policy test page under normal conditions. However, you may notice that when IsPolicyTestingEnabled() is called, the first argument, the pref_service, is set to null. This causes the check to be ignored entirely.
Now, the only check that remains is for the channel. In this context, “channel” means the browser’s release channel, which is something like stable, beta, dev, or canary. In this case, only Channel::CANARY and Channel::DEFAULT are allowed, which must mean that my browser is set to either Channel::CANARY or Channel::DEFAULT.
How does the browser know what channel it’s in? Here’s the function where it determines that:
If you don’t know how the C preprocessor works, the #if BUILDFLAG(GOOGLE_CHROME_BRANDING) part means that the enclosed code will only be compiled if BUILDFLAG(GOOGLE_CHROME_BRANDING) is true. Otherwise that part of the code doesn’t exist. Considering that I’m using plain Chromium and not the branded Google Chrome, the channel will always be Channel::UNKNOWN. This also means that, unfortunately, the bug will not work on stable builds of Google Chrome since the release channel is set to the proper value there.
Looking at the enum definition for the channels, we can see that Channel::UNKNOWN is actually the same as Channel::DEFAULT. Thus, on Chromium and its derivatives, the release channel check in IsPolicyTestingEnabled() always passes, and the function will always return true.
So what can I actually do with the ability to set arbitrary user policies? To answer that, I looked at the Chrome enterprise policy list.
One of the features present in enterprise policies is the Legacy Browser Support module, also called the Browser Switcher. It’s designed to accommodate Internet Explorer users by launching an alternative browser when the user visits certain URLs in Chromium. The behaviors of this feature are all controllable with policies.
The AlternativeBrowserPath policy stood out in particular. Combined with AlternativeBrowserParameters, this lets Chromium launch any shell command as the “alternate browser.” However, keep in mind this only works on Linux, MacOS, and Windows, because otherwise the browser switcher policies don’t exist.
We can set the following policies to make Chromium launch the calculator, for instance:
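Written as they would appear in a policy JSON file, the values look roughly like this (a sketch; the policy names are real Chrome enterprise policies, and the values set up the xcalc example):

```json
{
  "BrowserSwitcherEnabled": true,
  "BrowserSwitcherUrlList": ["example.com"],
  "AlternativeBrowserPath": "/bin/bash",
  "AlternativeBrowserParameters": ["-c", "xcalc # ${url}"]
}
```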
Whenever the browser tries to navigate to example.com, the browser switcher will kick in and launch /bin/bash. ["-c", "xcalc # https://example.com"] gets passed in as arguments. The -c tells bash to run the command specified in the next argument. You may have noticed that the page URL gets substituted into ${url}, and so to prevent this from messing up the command, we can simply put it behind a # which makes it a comment. And thus, we are able to trick Chromium into running /bin/bash -c 'xcalc # https://example.com'.
Utilizing this from the chrome://policy page is rather simple. I can just set these policies using the aforementioned method, and then call window.open(“https://example.com”) to trigger the browser switcher.
And that right there is the sandbox escape. We have managed to run an arbitrary shell command via Javascript running on chrome://policy.
You might have noticed that so far, this attack requires the victim to paste the malicious code into the browser console while they are on chrome://policy. Actually convincing someone to do this would be rather difficult, making the bug useless. So now, my new goal is to somehow run this JS in chrome://policy automatically.
The most likely way this can be done is by creating a malicious Chrome extension. The Chrome extension APIs have a fairly large attack surface, and extensions by their very nature have the ability to inject JS onto pages. However, like I mentioned earlier, extensions are not allowed to run JS on privileged WebUI pages, so I needed to find a way around that.
There are 4 main ways that an extension can execute JS on pages:

* chrome.scripting, the Manifest v3 API for injecting scripts into pages.

* chrome.tabs in Manifest v2, which works similarly to how chrome.scripting does.

* chrome.debugger, which attaches to a tab via the Chrome DevTools Protocol.

* chrome.devtools.inspectedWindow, which interacts with the inspected page when devtools is open.
While investigating this, I decided to look into chrome.devtools.inspectedWindow, as I felt that it was the most obscure and thus least hardened. That assumption turned out to be right.
The way that the chrome.devtools APIs work is that all extensions that use the APIs must have the devtools_page field in their manifest. For example:
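A minimal manifest with that field might look like this (all values other than the devtools_page key itself are placeholders):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "devtools_page": "devtools.html"
}
```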
Essentially, what this does is it specifies that whenever the user opens devtools, the devtools page loads devtools.html as an iframe. Within that iframe, the extension can use all of the chrome.devtools APIs. You can refer to the API documentation for the specifics.
While researching the chrome.devtools.inspectedWindow APIs, I noticed a prior bug report by David Erceg, which involved a bug with chrome.devtools.inspectedWindow.eval(). He managed to get code execution on a WebUI by opening devtools on a normal page, then running chrome.devtools.inspectedWindow.eval() with a script that crashed the page. Then, this crashed tab could be navigated to a WebUI page, where the eval request would be re-run, thus gaining code execution there.
Notably, the chrome.devtools APIs are supposed to protect against this sort of privilege escalation by simply disabling their usage after the inspected page has been navigated to a WebUI. As David Erceg demonstrated in his bug report, the key to bypassing this is to send the eval request before Chrome decides to disable the devtools API, and to make sure the request arrives at the WebUI page.
After reading that report, I wondered if something similar was possible with chrome.devtools.inspectedWindow.reload(). This function is also able to run JS on the inspected page, as long as the injectedScript is passed into it.
The first sign that it was exploitable appeared when I tried calling inspectedWindow.reload() when the inspected page was an about:blank page which belonged to a WebUI. about:blank pages are unique in this regard since even though the URL is not special, they inherit the permissions and origin from the page that opened them. Because an about:blank page opened from a WebUI is privileged, you would expect that trying to evaluate JS on that page would be blocked.
Surprisingly, this actually worked. Notice that the title of the alert has the page’s origin in it, which is chrome://settings, so the page is in fact privileged. But wait, isn’t the devtools API supposed to prevent this exact thing by disabling the API entirely? Well, it doesn’t consider the edge case of about:blank pages. Here’s the code that handles disabling the API:
Importantly, it only takes the URL into consideration here, not the page’s origin. As I demonstrated earlier, these can be two distinct things. Even if the URL is benign, the origin may not be.
Abusing about:blank is nice and all but it’s not very useful in the context of making an exploit chain. The page I want to get code execution on, chrome://policy, never opens any about:blank popups, so that’s already a dead end. However, I noticed the fact that even though inspectedWindow.eval() failed, inspectedWindow.reload() still ran successfully and executed the JS on chrome://settings. This suggested that inspectedWindow.eval() has its own checks to see if the origin of the inspected page is allowed, while inspectedWindow.reload() has no checks of its own.
Then I wondered if I could just spam the inspectedWindow.reload() calls, so that if at least one of those requests landed on the WebUI page, I would get code execution.
And that’s the final piece of the exploit chain working. This race condition relies on the fact that the inspected page and the devtools page are different processes. When the navigation to the WebUI occurs in the inspected page, there is a small window of time before the devtools page realizes and disables the API. If inspectedWindow.reload() is called within this interval of time, the reload request will end up on the WebUI page.
Now that I had all of the steps of the exploit working, I began putting together the proof of concept code. To recap, this POC has to do the following:
* Use the race condition in chrome.devtools.inspectedWindow.reload() to execute a JS payload on chrome://policy.

* Set the BrowserSwitcherEnabled, BrowserSwitcherUrlList, AlternativeBrowserPath, and AlternativeBrowserParameters policies, specifying /bin/bash as the “alternate browser.”

* Trigger the browser switcher with a simple window.open() call, which executes a shell command.
The final proof of concept exploit looked like this:
And with that, I was ready to write the bug report. I finalized the script, wrote an explanation of the bug, tested it on multiple operating systems, and sent it in to Google.
At this point, however, there was still a glaring problem: the race condition with inspectedWindow.reload() was not very reliable. I managed to tweak it so that it worked about 70% of the time, but that still wasn’t enough. While the fact that it worked at all definitely made it a serious vulnerability regardless, the unreliability would have reduced the severity by quite a bit. So I got to work trying to find a better way.
Remember how I mentioned that in David Erceg’s bug report, he utilized the fact that debugger requests persist after the tab crashes? I wondered if this exact method worked for inspectedWindow.reload() too, so I tested it. I also messed with the debugger statement, and it appeared that triggering the debugger twice in a row caused the tab to crash.
So I got to work writing a new POC:
And it works! The nice part about this approach is that it eliminates the need for a race condition and makes the exploit 100% reliable. I then uploaded the new POC, with all of the chrome://policy stuff, to a comment on the bug report thread.
But why would this exact oversight still exist even though it should have been patched 4 years ago? We can figure out why by looking at how that previous bug was patched. Google’s fix was to clear all the pending debugger requests after the tab crashes, which seems like a sensible approach:
You may notice that it seems to contain an exception for the Page.reload requests so that they are not cleared. Internally, the inspectedWindow.reload() API sends a Page.reload request, so as a result the inspectedWindow.reload() API calls are exempted from this patch. Google really patched this bug, then added an exception to it which made the bug possible again. I guess they didn’t realize that Page.reload could also run scripts.
Another mystery is why the page crashes when the debugger statement is run twice. I’m still not completely sure about this one, but I think I narrowed it down to a function within Chromium’s renderer code. It specifically happens when Chromium checks the navigation state and encounters an unexpected state, causing a crash. This state gets messed up when RenderFrameImpl::SynchronouslyCommitAboutBlankForBug778318 is called (yet another side effect of treating about:blank specially). Of course, any kind of crash works, such as with [...new Array(2**31)], which causes the tab to run out of memory. However, the debugger crash is much faster to trigger, so that’s what I used in my final POC.
Anyways, here’s what the exploit looks like in action:
By the way, you might have noticed the “extension install error” screen that is shown. That’s just to trick the user into opening devtools, which triggers the chain leading to the sandbox escape.
After I reported the vulnerability, Google quickly confirmed it and classified it as P1/S1, which means high priority and high severity. Over the next few weeks, the following fixes were implemented:
Eventually, the vulnerability involving the race condition was assigned CVE-2024-5836, with a CVSS severity score of 8.8 (High). The vulnerability involving crashing the inspected page was assigned CVE-2024-6778, also with a severity score of 8.8.
Once everything was fixed and merged into the various release branches, the VRP panel reviewed the bug report and determined the reward. I received $20,000 for finding this vulnerability!
* May 1 - I sent the bug report to Google
* May 4 - Google classified it as P1/S1
* May 5 - I found the bug involving crashing the inspected page, and updated my report
* May 6 - Google asked me to file separate bug reports for every part of the chain
* July 8 - The bug report was marked as fixed
* July 13 - The report was sent to the Chrome VRP panel to determine a reward
* July 17 - The VRP panel decided the reward amount to be $20,000
I guess the main takeaway from all of this is that if you look in the right places, the simplest mistakes can be compounded upon each other to result in a vulnerability with surprisingly high severity. You also can’t trust that very old code will remain safe after many years, considering that the inspectedWindow.reload bug actually works as far back as Chrome v45. Additionally, it isn’t a good idea to ship completely undocumented, incomplete, and insecure features to everyone, as was the case with the policy test page bug. Finally, when fixing a vulnerability, you should check to see if similar bugs are possible and try to fix those as well.
You may find the original bug report here: crbug.com/338248595
I’ve also put the POCs for each part of the vulnerability in a Github repo.
...
Read the original on ading.dev »
...
Read the original on gitlab.com »
As AI-generated images become increasingly prevalent across the web, many users find their image search results cluttered with artificial content. This can be particularly frustrating when searching for authentic, human-created images or specific real-world references.
At Kagi, we understand the importance of delivering high-quality, relevant search results. Our AI Image Filter feature is designed to address this challenge, giving users more control over the content they see and prioritizing authentic imagery in search results.
By default, Kagi Image Search downranks images from websites with a high proportion of AI-generated content. Try the example search here.
In addition to downranking, we have implemented an AI image labeling system. This feature adds a small badge or icon to thumbnails that are likely to be AI-generated content. The label helps users quickly identify potential AI images without completely removing them from search results.
You can use the AI images filter to completely exclude websites with AI-generated images from your search results.
Note that, since there is currently no reliable way to automatically identify AI-generated images with good enough precision, this feature relies on the website’s reputation rather than analyzing individual images. As a result, some AI-generated content may still appear in the results.
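A reputation-based approach like this can be sketched roughly as follows. This is an illustrative toy, not Kagi's implementation; the site names, scores, and penalty weighting are all invented:

```python
# Toy sketch of reputation-based downranking: results are penalized by a
# per-site "AI-heavy" score rather than by analyzing individual images.
# All names and numbers here are made up for illustration.

AI_HEAVY_SITE_SCORES = {
    "example-ai-art.com": 0.9,    # high proportion of AI-generated content
    "stock-photos.example": 0.1,  # mostly human-created imagery
}

def rank_results(results, penalty_weight=0.5):
    """Sort image results by relevance minus a site-reputation penalty."""
    def score(result):
        ai_score = AI_HEAVY_SITE_SCORES.get(result["site"], 0.0)
        return result["relevance"] - penalty_weight * ai_score
    return sorted(results, key=score, reverse=True)

results = [
    {"site": "example-ai-art.com", "relevance": 0.8},
    {"site": "stock-photos.example", "relevance": 0.7},
]
ranked = rank_results(results)  # the AI-heavy site drops below the other
```

Note how the more relevant result from the AI-heavy site still loses to a slightly less relevant result from a site with a better reputation, which matches the described behavior of downranking rather than outright removal.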
If you detect a website with AI or low-quality imagery that slipped through our filter, you can downrank it or block it completely in your search results using our search personalization feature.
Click on the shield icon in your image search results, just as you already can in web search results.
The decision to implement this feature and have it turned on by default is based on our AI integration philosophy and direct user feedback.
This is fairly new territory, so do not expect the detection to be 100% reliable. We will be working to improve this in the future. As always, help us improve Kagi by sending feedback.
...
Read the original on help.kagi.com »
It is too early to say what the outcome will be in the ongoing fight between Automattic and WP Engine, but the WordPress community at large is already the loser. Automattic founder and CEO Matt Mullenweg has been using his control of the project, and the WordPress.org infrastructure, to punish WP Engine and remove some dissenting contributors from discussion channels. Most recently, Mullenweg has instituted a hostile fork of a WP Engine plugin and the forked plugin is replacing the original via WordPress updates.
In the beginning of the Automattic and WP Engine spat, many people hoped that the companies would ratchet down the hostilities—or at least leave it to lawyers to sort out while leaving the larger community out of it. Those hopes have gone unrealized.
WP Engine did try to opt for the legal-only route. The day after the “reprieve” on the WordPress.org
ban ended, October 2, WP Engine filed a 62‑page
complaint against Automattic and Mullenweg personally, and asked for a jury trial. The suit’s claims include contractual interference, computer fraud (for blocking its access to WordPress.org), attempted extortion, libel, and slander. In addition, the suit asks for declaratory judgment that WP Engine is not infringing on or diluting the WordPress, WooCommerce, and other trademarks that Automattic named in its cease‑and‑desist letter.
That is, of course, a move that was unlikely to rebuild any burned bridges between Automattic and WP Engine. It was predictable that the WordPress.org ban would remain in place, that Automattic would respond to the suit, and perhaps countersue WP Engine. However, to date, there has been no indication of a countersuit or response to WP Engine’s lawsuit. Instead, Mullenweg is using other means to cause problems for WP Engine—and those tactics have spilled over to the wider WordPress community in troubling ways.
Participating in the development of WordPress is not realistically possible without logging into the site. Using WordPress.org is mandatory for those who would like to contribute and update
plugins, access the
WordPress Trac (bug tracker) instance, and more. On October 9, a
new checkbox was added to the account login form on WordPress.org which reads “I am not affiliated with WP Engine in any way,
financially or otherwise.” If the box is left unchecked, users will get a prompt to check the box if they wish to proceed.
Naturally, many contributors had questions about this new checkbox, since its wording is ambiguous and any possible consequences are murky. It seems clear it would apply to those employed by WP Engine, but just how far does “affiliated” go? Does this apply, for example, to employees of the many companies that host their clients’ web sites on WP Engine? Customers with a subscription to one of WP Engine’s services? A number of contributors have sought answers about this policy in the WordPress Slack, with disappointing results. A handful have reported being banned from the Slack instance after these conversations, either due to pressing for answers or questioning Mullenweg’s leadership.
Javier Casares shared that his account was deactivated after he asked a series of questions in a Slack
thread started by Colin Stewart. (Note that one needs to have a WordPress.org account, and be signed in, to request an account on the WordPress Slack.) In the thread, Mullenweg said that the value of the checkbox is not stored, but refused to clarify what qualifies as an affiliation with WP Engine. He advised those who had questions to “”.
Like what you are reading?
Try LWN for free for 1 month,
no credit card required.
Casares said that most people agree that WP Engine should contribute more to WordPress, but that using WordPress.org as part of the battle is counterproductive. He asked on Slack that the language be changed to indicate a user does not work for WP Engine, but that suggestion was not taken up.
Another participant, Terence Eden, asked
on Slack
whether he could send pull requests via GitHub if he was affiliated with WP Engine. After an exchange with Mullenweg that was less than helpful, Eden replied:
He later reported on
Mastodon that his account was deactivated. Andrew Hutchings, a contributor who works on WordPress as part of his work with the MariaDB
Foundation, participated in the conversation as well. He wondered
on Slack
how many individual contributors could afford a lawyer to advise about the checkbox and added “I work for a different Foundation, one that
definitely cannot afford a lawyer for me to contribute.” He wrote on his blog about being banned and said that he just wanted to work on the project:
That may not be an option. During the checkbox discussion in the #meta Slack channel Alex Sirota said: “Do you not understand what is happening here? It’s pretty simple
in my opinion: you have to take a side.” Stewart said that if that was the intention, then Mullenweg could say so himself. Shortly after, Mullenweg said, “I want you all to be informed and involved. Not to stay on the
sidelines.” Sirota’s account has also been deactivated now, though it is not clear whether he was banned or deactivated the account himself.
Mullenweg had also asked Automattic employees to pick a side, shortly after banning WP Engine from WordPress.org. He wrote on October 3 that Automattic had extended an “Alignment Offer” to its employees. The company provided a buyout package of $30,000 or six months of salary (whichever was higher) to employees who wanted to leave because they disagreed with Mullenweg’s actions. Employees who accepted the buyout were immediately terminated and are not eligible for rehire. According to the post, 159 people—8.4% of the company—accepted the offer.
WordPress’s popularity has a lot to do with its plugins and themes. A vanilla WordPress installation is missing a lot of features that one might want or need to run a web site: backups, commerce features, statistics, contact forms, search-engine optimization (SEO), managing URL redirects, or adding additional content types to WordPress.
A large ecosystem has sprung up around WordPress to offer services via those plugins, paid versions of plugins with additional functionality, and paid themes to simplify site design. In turn, that helps solidify WordPress’s place as the most popular content-management system (CMS) on the web.
WP Engine produces a popular plugin called Advanced Custom
Fields (ACF), which has more than two million installs through WordPress.org. It allows developers to add extra content
fields (called custom fields) to WordPress’s edit screens. This might be used, for example, as part of adding a date
picker or an interface
to create photo galleries for a site. ACF is, in turn, used by or in conjunction with a
large number of other WordPress plugins such as Advanced Forms
for ACF and WPML
for translating WordPress sites.
The base ACF plugin is free, but it also has a paid
version (“ACF Pro”) with a yearly subscription. Both are available under the GPLv2, but users must pay for access to updates on the Pro version and those come directly from WP Engine.
On September 28, Mullenweg asked
on Slack whether ACF Pro should be included in WordPress core, the components and functionality included in a default install of WordPress. That drew mixed responses in the channel. Some users noted that the ability to add custom fields was long overdue, but others had qualms
about taking over ACF Pro “”. Richard Korthuis asked
what kind of message it would send to other developers who create paid plugins: “No matter what you think about WP Engine and the whole
dispute, this [sends] developers the wrong message and would prevent
future investments in new plugins”.
In a now-deleted
Tweet, Automattic announced on October 5, a Saturday, that it had “responsibly disclosed a vulnerability” in ACF to WP Engine. The company did not provide further details. John Blackbourn, the WordPress core security team lead, said
that Automattic had breached the Intigriti
Code of Conduct by “announcing” the vulnerability publicly. Intigriti is a company that runs bug-bounty programs for companies, including
WP Engine.
On October 7, WP Engine announced a security release of the plugin. The vulnerability itself seems to be minor, according to the release notes. It is not a flaw that can be exploited remotely and it only impacts “” where a user with administrative privileges tries to attack other administrative users, or tries to gain super-admin privileges on a multi-site installation of WordPress. So far few other details on the vulnerability beyond that have been provided. Another XZ backdoor it is not.
Because its developers are now blocked from WordPress.org, WP Engine had to provide its fix
to the WordPress Security team to have it uploaded to the plugin directory. There are also instructions
on updating the plugin manually to receive updates directly from WP Engine.
Mullenweg made an announcement
on October 12, another Saturday, “on behalf of the WordPress
security team” that ACF was being forked as Secure Custom
Fields (SCF) under point 18
of the plugin
directory guidelines. That part of the guidelines states, in part, that WordPress.org may “remove developer access to a plugin in lieu
of a new, active, developer” and “make changes to a plugin,
without developer consent, in the interest of public
safety”. According to the post this move was “”.
Automattic has not merely forked the ACF code and made it available under a new name to compete with WP Engine. That might raise a few eyebrows, but it would probably be considered fair game by most observers.
Instead, it has forked the code and taken over the plugin’s entry, including all of its reviews, in the WordPress.org catalog. The new plugin is being substituted in place of ACF for all of the users who have installed it previously. According to the announcement on WordPress.org, sites that auto-update plugins will receive the SCF plugin automatically. Some site owners may be unaware that the plugin has been silently replaced. According to a comment by Mullenweg on Hacker News on October 14, there have already been 225k downloads of the new plugin, and he estimated “at least 60%
of the sites with auto-upgrade on and using .org for updates” have been moved to the fork.
This is not the first time a company has taken control of a package distributed through a central repository, though it is rare. The left-pad
incident in 2016, for example, saw npm,
Inc. restore left-pad to the Node.js package repository after its developer, Azer Koçulu, removed it. That move, however, was intended to reduce disruption to the Node.js ecosystem: the removal had broken builds for thousands of projects that had included the package, and Koçulu had effectively abandoned it.
The takeover of ACF’s place in the WordPress directory, on the other hand, is a punitive move by Automattic against another company that reaches beyond WordPress.org’s infrastructure into millions of WordPress installs. Web developer Charles Fulton wrote
about the incident and said that this is “”; he wondered if he needed to worry about updates to core WordPress that might interfere with ACF Pro.
Users of ACF Pro that depend on the WPGraphQL and WPGraphQL for Advanced Custom
Fields plugins may have real cause to be concerned that Automattic will look to break compatibility for ACF. WPGraphQL provides a GraphQL
schema and API for WordPress sites and is a popular plugin to use in conjunction with ACF. Jason Bahl, the maintainer of the plugin, announced
on October 7 that he was leaving WP Engine to join Automattic. Additionally, he said that WPGraphQL is becoming a “canonical plugin” for WordPress.
The concept of canonical plugins is loosely defined, but Mullenweg described
them in 2022 as official plugins that are the first choice for a type of functionality, but too niche to be included in the core distribution. With WPGraphQL development under Automattic’s roof, it seems unlikely that compatibility with ACF will be a priority.
Scott Kingsley Clark, who has been involved in a project to bring a fields API into the WordPress core, announced
on October 13 that he was stepping down from contributing to WordPress core. The fields API
project on GitHub has been archived with a goodbye notice that states that it pains him to stop but that he is “done making excuses for
Matt’s actions and will not associate myself with core any
longer”. He added
on Mastodon.social that he was going to remain part of the WordPress community overall, and continue working on the Pods plugin.
What happens next, what Mullenweg will do next, is anyone’s guess. Mullenweg’s vendetta against WP Engine has spilled over into the community in a way that can’t easily be ignored or avoided. His leadership of the project is being repeatedly called into question by contributors, users, and outside observers. That will spill over, if it hasn’t already, into the wider commercial ecosystem and have serious consequences for plugin creators, creative agencies, and hosting providers who have invested a lot into WordPress.
More contributors are likely to step away, whether they do so publicly or simply drift away and find other things to do with their time. Quite a few users on social networks have commented that they would no longer recommend WordPress and are looking for alternatives. A fork, in addition to ClassicPress, seems almost inevitable.
There is a legitimate conversation to be had, or continued, about the commercialization of open-source projects by companies that do little to sustain open-source software but reap its benefits and pull revenue away from the companies that do put in the work. That conversation has been completely eclipsed by Mullenweg’s actions to punish WP Engine.
Armin Ronacher, creator of the Flask web framework for Python and participant in launching the Open Source Pledge, has some interesting
thoughts on the topic of mixing money and open source in light of the ongoing WordPress crisis:
He observes that he has seen too many people in open source struggle “one way or another” as a direct or indirect result of work in open source. He says Mullenweg, like other creators of open-source projects, feels wronged by seeing others find financial success from his project even though WordPress is uncommonly successful “in terms of impact, success, and financial
return for its creator”. Mullenweg’s actions, Ronacher said, “have alienated many who would otherwise support him. He’s turning into a ’mad
king’”.
That is deeply unfortunate, because the questions about sustainability of open-source projects, and who profits from them versus who produces them, are in need of addressing. Instead of having that
conversation, Mullenweg has put questions about governance, centralized software distribution, and software supply chains at the forefront.
After decades of being a poster child for the goodness of open source, WordPress is becoming a case study in the dangers of the company-owned project model. Instead of being the safe choice, WordPress is starting to be seen as the risky one—and that perception may impact open source as a whole.
...
Read the original on lwn.net »
It may sound like a tall tale, but that is more or less the one recounted in a recent publication by a team of scientists at the National Museum of Natural History (MNHN) and the CNRS, which explains in the pages of the Journal of the Royal Society Interface how birds can sleep while standing, without losing their balance. We know horses and bovines are capable of this feat thanks to their four legs, but at first glance this does not seem so evident for birds. Scientists agree that this stability comes from one thing: tensegrity. Created from the contraction of the words “tension” and “integrity,” tensegrity is the property of a structure to remain stable and balanced thanks to the subtle interplay of tension and compression in the elements of which it is made.
“The starting point for this research was trying to understand evolutionary mechanisms via functional morphology, which studies the relation between an organism’s form and functioning,” points out Anick Abourachid, a specialist on evolutionary biology, and the Deputy Director of the Adaptive Mechanisms and Evolution research unit at the National Museum of Natural History. “We are especially interested in bird feet.” Studying the evolutionary mechanisms in bird feet is of interest because birds are a particularly homogenous group with respect to their structure: “They are all bipedal flying dinosaurs that, since the group’s origin, have shared a structure built to be aerodynamic. They all possess, with no exceptions, a rigid torso,
underparts for landing and taking off, and wings.” What is more, this highly preserved organisational blueprint is effective everywhere, as it can be found in birds living in all environments. “It is this versatility that has always interested and intrigued me. It was while discussing with my colleagues Philippe Wenger and Christine Chevallereau that I learned of the existence of tensegrity, mechanical systems entirely maintained by tension.”
It should be noted that bird feet are distinctive because they are close, from a structural point of view, to human legs. “Birds are flexed bipeds, which is to say that when they are standing, their entire body is bent. For humans this is the equivalent of a posture crouching on one’s toes.” This posture does not at first glance appear to be comfortable, but birds relax in this position, and can even sleep while standing; in short, they are at rest, and consequently consume less energy. To understand how birds achieve this balance, the scientists conceived a digital model based on the anatomy of a small bird, the zebra finch (Taeniopygia guttata), a species belonging to the passerine family. “It was by greatly simplifying the body’s system, preserving only what relates to standing posture, that we identified postural stability,” Abourachid reveals.
In this model, the body and foot bones were replaced by bars, and muscles and tendons by more or less rigid cables. The joints between each bone in the feet were replaced by pulleys. “We began the experiments by using a single cable that began at the ‘bird’s’ pelvis and stretched to its feet, passing through all of the joints (hip, knee, ankle), and therefore through all of the pulleys.”
This enabled the researchers to reproduce the generic standing posture of birds, although their rendition was not at all stable, which is surprising given that we see them perched atop electric cables or tree branches without ever falling over. “Our bird model was balanced at only one point in space, whereas birds are naturally balanced in a multitude of positions. So we improved its stability by using multiple cables instead of just one, notably one that runs behind the knee instead of in front, as was the case in the initial model.”
Birds do indeed have a tendon that passes behind the knee, going through a ligament loop that keeps it aligned, a unique anatomical feature. With four cables—one running behind the knee—the model became stable, in other words the system can passively retain balance on its own like a roly-poly toy, even when exposed to a minor disturbance.
The scientists also grasped that the quality of the cables (and hence of the tendons in bird feet) played a decisive role. “Birds are among the only animals with calcified tendons that are practically ossified,” explains Christine Chevallereau, a Senior Researcher at the Nantes Laboratory of Digital Science (LS2N). “In our model, we understood that this property had to be taken into consideration in order to achieve stable balance. The bird needs these stiff tendons, because when it stands, it must offset the effect of gravity in order not to tip over to one side of the branch. While this low elasticity may not seem to contribute to stability, that is not at all the case, for it lessens the effect of the disturbances the bird is subject to. This also allows for a much quicker response from the system, as it will have less of a tendency to deform before returning to its position of balance.” Whether it is snowing, raining, or windy, this tensegrity system allows birds to continue sleeping, without fear of falling from their perch.
“The advantage birds derive from the tensegrity in their feet is based on the mechanism’s passive nature. The bird subsequently does not have to think or concentrate to remain standing. Everything is based on the system’s elasticity. This is a kind of embodied intelligence where the body alone solves the problem, with no intervention by the brain, thereby doing so without consuming energy,” Chevallereau adds.
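The stiffness argument can be illustrated with a one-joint toy model. This is a simplification for intuition only, not the researchers' model, and all numbers are invented:

```python
# Toy model of a single joint held by an elastic "tendon" cable: a
# disturbance torque (wind, a shaking branch) deflects the joint until
# the cable's restoring torque k * theta balances it. Stiffer cables,
# like birds' near-ossified tendons, give smaller deflections and thus
# a steadier perch. Units and magnitudes are arbitrary.

def equilibrium_deflection(stiffness, disturbance_torque):
    """Angle (rad) at which the cable's restoring torque balances the disturbance."""
    return disturbance_torque / stiffness

soft = equilibrium_deflection(stiffness=2.0, disturbance_torque=0.1)
stiff = equilibrium_deflection(stiffness=20.0, disturbance_torque=0.1)
# Tenfold stiffer tendon -> tenfold smaller deflection under the same gust.
```

The balance is restored entirely by the elastic elements, with no active control, which is the "passive" character the researchers describe.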
The mechanism’s passive character is surprising, but it paves the way for high-impact applications. “We should remember that these mechanisms are widely used in civil engineering, as they make it possible to lighten a structure. The idea with this discovery is to use such mechanisms in robotics in order to lighten masses in motion, consume less energy, use less materials in construction, and reduce danger in the event of a collision with an operator,” points out Philippe Wenger, a Senior Researcher at LS2N. Engineers have already imagined bipedal robots with tensegrity systems allowing them to maintain a standing posture while balanced on unstable surfaces, and doing so for long periods without consuming the least energy.
...
Read the original on news.cnrs.fr »
You can now guide NotebookLM’s Audio Overview, guiding what the AI hosts focus on and their expertise level, and apply for the NotebookLM Business pilot program.
NotebookLM, a tool powered by Gemini 1.5, helps users understand complex information by instantly becoming an expert on uploaded sources. New features include customizable Audio Overviews, allowing users to provide instructions for AI hosts and listen to audio while working within NotebookLM. Additionally, NotebookLM Business, an upcoming version offered via Google Workspace, will provide enhanced features for businesses, universities, and organizations, prioritizing data privacy and security. Apply for the NotebookLM Business pilot program to gain early access to new features, training, and email support.
Summaries were generated by Google AI. Generative AI is experimental.
NotebookLM is a tool for understanding, built with Gemini 1.5. When you upload your sources, it instantly becomes an expert, grounding its responses in your material and giving you powerful ways to transform information. And since it’s your notebook, your personal data is never used to train NotebookLM. Millions of people are already using NotebookLM to understand and engage with complex information, and today we’re removing the product’s “Experimental” label and releasing another round of features.
Get more out of your Audio Overviews
We’ve been delighted with the reception of Audio Overviews so far, and we’re introducing a pair of new features today:
Guide the conversation: Now you can provide instructions before you generate a “Deep Dive” Audio Overview. For example, you can focus on specific topics or adjust the expertise level to suit your audience. Think of it like slipping the AI hosts a quick note right before they go on the air, which will change how they cover your material.
Background listening: You can also listen to Audio Overviews while continuing to work within NotebookLM. Query your sources, get citations, and explore relevant quotes without interrupting the audio.
By tapping “Customize,” you can now guide NotebookLM’s Audio Overview, adjusting what the AI hosts focus on and their expertise level.
To try it out, follow these steps:
* Add at least one source
* In your NotebookLM guide, tap “Generate” for an automatic Audio Overview or “Customize” to provide instructions for the AI hosts
Like always, Audio Overviews are generated discussions and are not a comprehensive or objective view of a topic, but simply a reflection of the sources that you’ve uploaded and any instructions you provide.
We’re announcing NotebookLM Business, an upcoming version that will be offered via Google Workspace with enhanced features for businesses, universities and organizations. From the start, NotebookLM has made data privacy and security a top priority, and we’ll continue to do that.
Starting today, you can apply for the NotebookLM Business pilot program to get early access to new product features, training and email support. In the meantime, you can continue to use the free version of NotebookLM, and we’ll share more about the general availability and pricing of NotebookLM Business later this year.
With over 80,000 organizations already using NotebookLM, we look forward to welcoming even more.
NotebookLM Business will be offered via Google Workspace, with more information coming later this year.
...
Read the original on blog.google »
After months of waiting, we are finally ready to unbox the latest version of Inkscape… meet 1.4, the Geek edition, where accessibility and customization reign.
Inkscape project developers, most of them volunteer contributors from countries around the world, have been plugging away behind the scenes on new features, improving current ones, bug fixes and setting the stage for the arrival of GTK 4.
Let’s dig into some of the new and improved features that enable more customization and better accessibility in Inkscape in a cool, geeky sort of way. Inkscape Power Users, this one’s for you!
The Inkscape 1.4 Filter Gallery dialog is your new entry point into the world of filters. Head to the Filters menu to find it and your favorites more easily, with previews by category or by typing key words in the search bar. This includes those custom filters you’ve added — look for them in the Personal category. Accessibility features here include the ability to change the size of the preview of thumbnails.
For those of you seeking the ultimate in symmetry and accuracy, you’ll find the new grid selector buttons a quicker, more visual way of identifying your grid of choice. For the new modular grids, you can adjust plenty of grid parameters separately, from height and width to horizontal and vertical gaps — including adjustments at the grid module (rectangle) level. Enable this option by heading to document properties.
Inkscape’s Swatches dialog has a fresh new face. It now includes a drop-down menu with previews of color palettes. You can display them with their colors’ names as a list or a grid. For Power Users, you can now increase the tile size, too. Search for your custom swatches, or those of your customers, too. Import palettes from the dialog, including those from Adobe Color Book that use CIELAB colours, while support for those with CMYK is still limited for now. Transitioning from CMYK to RGB has been improved.
For those of you who have a thing for fonts, Inkscape 1.4 is your new go-to for customization. You’ll find an easier route to buttons for sorting glyphs and removing pairs. Powerful customization at it’s finest in Inkscape!
Inkscape is testing out a unified font browser preview. Activate this feature in Preferences and restart. After that, you will access visual previews for fonts with settings such as adding your own sample text. Check for particular font characteristics by searching fonts by collection type. Click on the one you want and watch it be applied to your text.
With Inkscape 1.4, the on-canvas handles are customizable by size, colour, stroke width, outline and opacity. Changes are applied live in Inkscape whenever the CSS file is saved.
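As a rough idea of what such a stylesheet might contain, here is an illustrative CSS sketch. The selector and file details below are assumptions for illustration, not Inkscape's documented schema — check the Inkscape 1.4 release notes for the actual file location and supported rules:

```css
/* Illustrative only: selector names here are assumptions.
   See the Inkscape 1.4 release notes for the real schema. */
handle {
  fill: #1a73e8;     /* handle colour */
  stroke: #ffffff;   /* outline colour */
  stroke-width: 1px;
  opacity: 0.9;
}
```

Because Inkscape reloads the file on save, you can tweak values and see the handles update live on canvas.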
You can now tackle quick edits on raster (pixel) images within Inkscape using the Shape Builder tool. Load an image, then select the sections you want to isolate with Shape Builder to clip them. Note that to edit the results further, you’ll need to release the clip and then unlink the clone. This tool is useful for a quick fix, isolating one or more parts of an image quickly.
As of Inkscape 1.4, the Object Attributes and Object Properties dialogs have been rolled up into one. This one-stop shop is where you can change the different properties of objects, including rotations and corners.
If you use Inkscape to develop and publish documents, you’ll be happy to know that you can now add links from one page to another within a PDF document, such as a table of contents.
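Under the hood, this kind of link can be expressed as a plain SVG hyperlink that targets an element on another page. The fragment below is a hypothetical sketch (the ids `toc-entry` and `chapter-2-title` are made up for illustration); on PDF export, an internal link like this becomes a clickable PDF link:

```xml
<!-- Hypothetical fragment: ids are made up for illustration.
     An SVG anchor pointing at an element that lives on another
     page of the multipage document. -->
<a href="#chapter-2-title">
  <text id="toc-entry" x="20" y="40">Chapter 2</text>
</a>
```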
Additionally, Inkscape 1.4 can now open Affinity Designer files, adding more versatility to the program.
Welcome Dash, the latest icon set available within Inkscape. For those of you who like customizing, you’ll appreciate this additional set of 500-plus icons to explore. It includes cursors, in both scalable and symbolic versions, that work on light and dark themes.
That concludes our brief wander through some of the top features included in Inkscape 1.4. For more information, the Inkscape 1.4 Release Notes are up to date with information on all the changes included in this edition.
Get Inkscape 1.4 for Linux, Windows or macOS!
If you have questions for us, head to our website and find our online community, where you can also find ways to get involved in the project.
...
Read the original on inkscape.org »