10 interesting stories served every morning and every evening.
Gemini 3.1 Pro is the next iteration in the Gemini 3 series of models, a suite of highly capable, natively multimodal reasoning models. As of this model card’s date of publication, Gemini 3.1 Pro is Google’s most advanced model for complex tasks. Gemini 3.1 Pro can comprehend vast datasets and challenging problems from massively multimodal information sources, including text, audio, images, video, and entire code repositories.
Gemini 3.1 Pro accepts text strings (e.g., a question, a prompt, document(s) to be summarized), images, audio, and video files, with a context window of up to 1M tokens.
Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the model architecture, training dataset, training data processing, hardware, software, and our continued commitment to operate sustainably, see the Gemini 3 Pro model card.
Gemini 3.1 Pro was evaluated across a range of benchmarks covering reasoning, multimodal capabilities, agentic tool use, multilingual performance, and long-context performance. Additional benchmarks and details on the approach, results, and methodologies can be found at: deepmind.google/models/evals-methodology/gemini-3-1-pro.
Gemini 3.1 Pro significantly outperforms Gemini 3 Pro across a range of benchmarks requiring enhanced reasoning and multimodal capabilities. Results as of February 2026 are listed below:
Gemini 3.1 Pro is the next iteration in the Gemini 3 series of models, a suite of highly intelligent and adaptive models, capable of helping with real-world complexity, solving problems that require enhanced reasoning and intelligence, creativity, strategic planning and making improvements step-by-step. It is particularly well-suited for applications that require:
For more information about the known limitations for Gemini 3.1 Pro, see the Gemini 3 Pro model card.
For more information about the acceptable usage for Gemini 3.1 Pro, see the Gemini 3 Pro model card.
For more information about the evaluation approach for Gemini 3.1 Pro, see the Gemini 3 Pro model card.
For more information about the safety policies for Gemini 3.1 Pro, see the Gemini 3 Pro model card.
Results for some of the internal safety evaluations conducted during the development phase are listed below. The evaluation results are for automated evaluations and not human evaluation or red teaming. Scores are provided as an absolute percentage increase or decrease in performance compared to the indicated model, as described below. Overall, Gemini 3.1 Pro outperforms Gemini 3 Pro across both safety and tone, while keeping unjustified refusals low. We mark improvements in green and regressions in red. Safety evaluations of Gemini 3.1 Pro produced results consistent with the original Gemini 3 Pro safety assessment.
Automated evaluation measuring the model’s ability to respond to borderline prompts while remaining safe
We continue to improve our internal evaluations, including refining automated evaluations to reduce false positives and negatives and updating query sets to ensure balance and maintain a high standard of results. The performance results reported below are computed with these improved evaluations and thus are not directly comparable with performance results found in previous Gemini model cards. We expect variation in our automated safety evaluation results, which is why we review flagged content to check for egregious or dangerous material. Our manual review confirmed that losses were overwhelmingly either a) false positives or b) not egregious.
We conduct manual red teaming by specialist teams who sit outside of the model development team. High-level findings are fed back to the model team. For child safety evaluations, Gemini 3.1 Pro satisfied required launch thresholds, which were developed by expert teams to protect children online and meet Google’s commitments to child safety across our models and Google products. For content safety policies generally, including child safety, we saw similar safety performance compared to Gemini 3 Pro.
For more information about the risks and mitigations for Gemini 3.1 Pro, see the Gemini 3 Pro model card.
Our Frontier Safety Framework includes rigorous evaluations that address risks of severe harm from frontier models, covering five risk domains: CBRN (chemical, biological, radiological and nuclear information risks), cyber, harmful manipulation, machine learning R&D and misalignment. Our frontier safety strategy is based on a “safety buffer” to prevent models from reaching critical capability levels (CCLs), i.e. if a frontier model does not reach the alert threshold for a CCL, we can assume models developed before the next regular testing interval will not reach that CCL. We conduct continuous testing, evaluating models at a fixed cadence and when a significant capability jump is detected. (Read more about this in our approach to technical AGI safety.)
Following FSF protocols, we conducted a full evaluation of Gemini 3.1 Pro (focusing on Deep Think mode). We found that the model remains below alert thresholds for the CBRN, harmful manipulation, machine learning R&D, and misalignment CCLs. As previous models passed the alert threshold for cyber, we performed additional testing in this domain on Gemini 3.1 Pro with and without Deep Think mode, and found that the model remains below the cyber CCL.
More details on our evaluations and the mitigations we deploy can be found in the Gemini 3 Pro Frontier Safety Framework Report.
(Deep Think mode) The model can provide highly accurate and actionable information, but it still fails to offer novel or sufficiently complete and detailed instructions for the critical stages that would significantly enhance the capabilities of low- to medium-resourced threat actors, as required for the CCL. We continue to deploy mitigations in this domain.
(3.1 Pro) We conducted additional testing on the model in this domain as Gemini 3 Pro had previously reached the alert threshold. The model shows an increase in cyber capabilities compared to Gemini 3 Pro. As with Gemini 3 Pro, the model has reached the alert threshold, but still does not reach the levels of uplift required for the CCL.
(Deep Think mode) Accounting for inference costs, the model with Deep Think mode performs considerably worse than without Deep Think mode. Even at high levels of inference, results for the model with Deep Think mode do not suggest higher capability than without Deep Think mode.
We continue to deploy mitigations in this domain.
(Deep Think mode) Evaluations indicated that the model showed higher manipulative efficacy for belief change metrics compared to a non-AI baseline, with a maximum odds ratio of 3.6x, the same as Gemini 3 Pro, and did not reach the alert threshold.
(Deep Think mode) The model shows gains on RE-Bench compared to Gemini 3 Pro, with a human-normalised average score of 1.27 compared to Gemini 3 Pro’s score of 1.04. On one particular challenge, Optimise LLM Foundry, it scores double the human-normalised baseline score (reducing the runtime of a fine-tuning script from 300 seconds to 47 seconds, compared to the human reference solution of 94 seconds). However, the model’s average performance across all challenges remains beneath the alert threshold for the CCLs.
(Deep Think mode) On stealth evaluations, the model performs similarly to Gemini 3 Pro. On situational awareness, the model is stronger than Gemini 3 Pro: on three challenges which no other model has been able to consistently solve (max tokens, context size mod, and oversight frequency), the model achieves a success rate of almost 100%. However, its performance on other challenges is inconsistent, and thus the model does not reach the alert threshold.
...
Read the original on deepmind.google »
This post is an elaboration on a comment I made on Hacker News recently, on a blog post that showed an increase in volume and decline in quality among the “Show HN” submissions. I don’t actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring. They generally don’t have a lot of work put into them, and as a result, the author (pilot?) hasn’t generally thought too much about the problem space, and so there isn’t really much of a discussion to be had.
The cool part about pre-AI Show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don’t have anything interesting to say about programming.
This isn’t something that is limited to Show HN or even Hacker News, it’s something you see everywhere.
While part of this phenomenon is likely just an upswing of people who don’t usually do programming that get swept up in the fun of building a product, I want to build an argument that it’s much worse than that.
AI models are extremely bad at original thinking, so any thinking that is offloaded to an LLM is, as a result, usually not very original, even if LLMs are very good at treating your inputs to the discussion as amazing, genius-level insights.
This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.
Some will argue that this is why you need a human in the loop to steer the work and do the high level thinking. That premise is fundamentally flawed. Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates.
Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
You don’t build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think.
...
Read the original on www.marginalia.nu »
Last week, we released a major update to Gemini 3 Deep Think to solve modern challenges across science, research and engineering. Today, we’re releasing the upgraded core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro. We are shipping 3.1 Pro across our consumer and developer products to bring this progress in intelligence to your everyday applications:
* For developers, in preview via the Gemini API in Google AI Studio, Gemini CLI, our agentic development platform Google Antigravity and Android Studio
* For enterprises, in Vertex AI and Gemini Enterprise
* For consumers, via the Gemini app and NotebookLM
Building on the Gemini 3 series, 3.1 Pro represents a step forward in core reasoning. 3.1 Pro is a smarter, more capable baseline for complex problem-solving. This is reflected in our progress on rigorous benchmarks. On ARC-AGI-2, a benchmark that evaluates a model’s ability to solve entirely new logic patterns, 3.1 Pro achieved a verified score of 77.1%. This is more than double the reasoning performance of 3 Pro.
3.1 Pro is designed for tasks where a simple answer isn’t enough, taking advanced reasoning and making it useful for your hardest challenges. This improved intelligence can help in practical applications — whether you’re looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life.
Code-based animation: 3.1 Pro can generate website-ready, animated SVGs directly from a text prompt. Because these are built in pure code rather than pixels, they remain crisp at any scale and maintain incredibly small file sizes compared to traditional video.
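As a rough illustration of what this looks like through the API, here is a minimal sketch using the google-genai Python SDK; the preview model ID and the prompt are assumptions, not confirmed details:

from google import genai

# Assumes GEMINI_API_KEY is set in the environment.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # hypothetical preview model ID
    contents="Generate a website-ready animated SVG of a paper plane "
             "looping across the screen. Return only the SVG markup.",
)

# The result is pure markup rather than pixels, so it stays crisp at
# any scale and keeps the file size small.
print(response.text)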
Complex system synthesis: 3.1 Pro utilizes advanced reasoning to bridge the gap between complex APIs and user-friendly design. In this example, the model built a live aerospace dashboard, successfully configuring a public telemetry stream to visualize the International Space Station’s orbit.
Interactive design: 3.1 Pro codes a complex 3D starling murmuration. It doesn’t just generate the visual code; it builds an immersive experience where users can manipulate the flock with hand-tracking and listen to a generative score that shifts based on the birds’ movement. For researchers and designers, this provides a powerful way to prototype sensory-rich interfaces.
Creative coding: 3.1 Pro can translate literary themes into functional code. When prompted to build a modern personal portfolio for Emily Brontë’s “Wuthering Heights,” the model didn’t just summarize the text. It reasoned through the novel’s atmospheric tone to design a sleek, contemporary interface, creating a website that captures the essence of the protagonist.
Since releasing Gemini 3 Pro in November, your feedback and the pace of progress have driven these rapid improvements. We are releasing 3.1 Pro in preview today to validate these updates and continue to make further advancements in areas such as ambitious agentic workflows before we make it generally available soon.
Starting today, Gemini 3.1 Pro in the Gemini app is rolling out with higher limits for users with the Google AI Pro and Ultra plans. 3.1 Pro is also now available on NotebookLM exclusively for Pro and Ultra users. And developers and enterprises can access 3.1 Pro now in preview in the Gemini API via AI Studio, Antigravity, Vertex AI, Gemini Enterprise, Gemini CLI and Android Studio. We can’t wait to see what you build and discover with it.
...
Read the original on blog.google »
Your page may be loading slowly because you’re building optimized sources. If you intended on using uncompiled sources, please click this link.
Google Cloud Console has failed to load JavaScript sources from www.gstatic.com.
Possible reasons are:
* www.gstatic.com or its IP addresses are blocked by your network administrator
* Google has temporarily blocked your account or network due to excessive automated requests
Please contact your network administrator for further assistance.
...
Read the original on console.cloud.google.com »
Your house is quietly plotting to break while you sleep—and you’re dreaming about redoing the kitchen.
micasa tracks maintenance, projects, incidents, appliances, vendors, quotes, and documents—all from your terminal.
* When did I last change the furnace filter?
* What if we finally did the backyard?
* How much would it actually cost to…? Quotes side by side, vendor history, and the math you need to actually decide.
* Is the dishwasher still under warranty? Appliance tracking with purchase dates, warranty status, and maintenance history tied to each one.
* The basement is leaking again. Log incidents with severity and location, link them to appliances and vendors, and resolve them when fixed.
* Who did we use last time? A vendor directory with contact info, quote history, and every job they’ve done for you.
* Attach files (manuals, invoices, photos) directly to projects and appliances. Stored in the same SQLite file.
or grab a binary from the latest release. Linux, macOS, and Windows binaries are available for amd64 and arm64.

micasa --demo        # poke around with sample data
micasa               # start fresh with your own house
micasa --print-path  # show where the database lives

Linux, macOS, Windows. One SQLite file, your machine. Back it up with cp.
Vim-style modal keys. nav to browse, edit to change things. Sort by any column, jump to columns with fuzzy search, hide what you don’t need, drill into related records. The full list is in the keybinding reference.
...
Read the original on micasa.dev »
Microsoft SQL now supports native vector search capabilities in Azure SQL and SQL database in Microsoft Fabric. We also released the langchain-sqlserver package, enabling the management of SQL Server as a Vectorstore in LangChain. In this step-by-step tutorial, we will show you how to add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.
The Harry Potter series, written by J. K. Rowling, is a globally beloved collection of seven books that follow the journey of a young wizard, Harry Potter, and his friends as they battle the dark forces led by the evil Voldemort. Its captivating plot, rich characters, and imaginative world have made it one of the most famous and cherished series in literary history.
By using a well-known dataset, we can create engaging and relatable examples that resonate with a wide audience.
This sample dataset from Kaggle contains 7 .txt files, one for each of the 7 Harry Potter books. For this demo we will only be using the first book, Harry Potter and the Sorcerer’s Stone.
Whether you’re a tech enthusiast or a Potterhead, we have two exciting use cases to explore:
A Q&A system that leverages the power of SQL Vector Store & LangChain to provide accurate and context-rich answers from the Harry Potter Book.
Next, we will push the creative limits of the application by teaching it to generate new AI-driven Harry Potter fan fiction based on our existing dataset of Harry Potter books. This feature is sure to delight Potterheads, allowing them to explore new adventures and create their own magical stories.
The code lives in the integration package langchain-sqlserver:
!pip install langchain-sqlserver==0.1.1
In this example, we will use a dataset consisting of text files from the Harry Potter books, which are stored in Azure Blob Storage.
LangChain has a seamless integration with AzureBlobStorage, making it easy to load documents directly from Azure Blob Storage.
Additionally, LangChain provides a method to split long text into smaller chunks using langchain-text-splitters, which is essential since Azure OpenAI embeddings have an input token limit.
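A minimal sketch of those two steps, assuming your own Blob Storage connection string and an illustrative container name:

from langchain_community.document_loaders import AzureBlobStorageContainerLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the Harry Potter .txt files directly from Azure Blob Storage.
loader = AzureBlobStorageContainerLoader(
    conn_str="<your-blob-storage-connection-string>",
    container="harry-potter-books",  # hypothetical container name
)
documents = loader.load()

# Split the long book text into overlapping chunks small enough to fit
# within the embedding model's input token limit.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = text_splitter.split_documents(documents)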
In this example we use Azure OpenAI to generate embeddings of the split documents; however, you can use any of the embedding integrations provided in LangChain.
After splitting the long text files of the Harry Potter books into smaller chunks, you can generate vector embeddings for each chunk using the Text Embedding Model available through Azure OpenAI. Notice how we can accomplish this in just a few lines of code!
* First, initialize the vector store and set up the embeddings using Azure OpenAI
* Once we have our vector store, we can add items to it using the add_documents function, as shown in the sketch after this list
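A sketch of both steps, assuming an Azure OpenAI embedding deployment and an Azure SQL connection string of your own (credentials are read from environment variables):

from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_openai import AzureOpenAIEmbeddings
from langchain_sqlserver import SQLServer_VectorStore

# Embeddings via Azure OpenAI; the deployment name is an example.
embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-ada-002")

# Initialize the SQL Server vector store on top of Azure SQL DB.
vector_store = SQLServer_VectorStore(
    connection_string="<your-azure-sql-connection-string>",
    distance_strategy=DistanceStrategy.COSINE,
    embedding_function=embeddings,
    embedding_length=1536,  # dimension of the chosen embedding model
    table_name="harry_potter_chunks",  # hypothetical table name
)

# Add the split documents; embeddings are computed and stored in SQL.
vector_store.add_documents(documents=splits)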
Once your vector store has been created and the relevant documents have been added you can now perform similarity search.
The vectorstore also supports a set of filters that can be applied against the metadata fields of the documents. By applying filters based on specific metadata attributes, users can limit the scope of their searches, concentrating only on the most relevant data subsets.
Performing a simple similarity search can be done as follows with the similarity_search_with_score method:
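For example, reusing the vector store above (the query is illustrative):

query = "Who gave Harry his first broomstick?"

# Returns (document, score) pairs; with cosine distance, lower scores
# mean closer matches.
results = vector_store.similarity_search_with_score(query, k=3)
for doc, score in results:
    print(f"{score:.4f}  {doc.page_content[:100]}...")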
The Q&A function allows users to ask specific questions about the story, characters, and events, and get concise, context-rich answers. This not only enhances their understanding of the books but also makes them feel like they’re part of the magical universe.
The LangChain Vector store simplifies building sophisticated Q&A systems by enabling efficient similarity searches to find the top 10 relevant documents based on the user’s query.
The retriever is created from the vector_store, and the question-answer chain is built using the create_stuff_documents_chain function.
A prompt template is crafted using the ChatPromptTemplate class, ensuring structured and context-rich responses.
Often in Q&A applications it’s important to show users the sources that were used to generate the answer. LangChain’s built-in create_retrieval_chain will propagate retrieved source documents to the output under the “context” key:
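Putting those pieces together, a minimal sketch; the deployment name and prompt wording are illustrative:

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

# Chat model via Azure OpenAI; endpoint and key come from env vars.
llm = AzureChatOpenAI(azure_deployment="gpt-4o")  # example deployment name

# Retrieve the top 10 most similar chunks for each question.
retriever = vector_store.as_retriever(search_kwargs={"k": 10})

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You answer questions about the Harry Potter books. "
     "Use only the following context:\n\n{context}"),
    ("human", "{input}"),
])

# Stuff the retrieved documents into the prompt, then wrap everything
# in a retrieval chain that also returns the source documents.
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)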
Read more about LangChain RAG tutorials & the terminology mentioned above here
We can now ask a user question and receive responses from the Q&A system:
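Continuing from the rag_chain sketched above, with an example question:

result = rag_chain.invoke({"input": "How did Harry get his scar?"})

print(result["answer"])
# The retrieved source documents are propagated under the "context" key.
for doc in result["context"]:
    print(doc.metadata)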
Potterheads are known for their creativity and passion for the series. With this they can craft their own stories based on a given user prompt, explore new adventures, and even create alternate endings. Whether it’s imagining a new duel between Harry and Voldemort or crafting a personalized Hogwarts bedtime story for your kiddo, the possibilities are endless.
The fan fiction function uses the embeddings in the vector store to generate new stories.
* Retrieving Relevant Passages: When a user provides a prompt for a fan fiction story, the function first retrieves relevant passages from the SQL vector store. The vector store contains embeddings of the text from the Harry Potter books, which allows it to find passages that are contextually similar to the user’s prompt.
* Formatting the Retrieved Passages: The retrieved passages are then formatted into a coherent context. This involves combining the text from the retrieved passages into a single string that can be used as input for the language model.
* Generating the Story: The formatted context, along with the user’s prompt, is fed into a language model (GPT-4o) to generate the fan fiction story. The language model uses the context to ensure that the generated story is relevant and coherent, incorporating elements from the retrieved passages. A sketch of this flow follows the list.
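A minimal sketch of such a function, reusing the vector_store and llm from the Q&A example; the function name and prompt wording are illustrative:

def generate_fan_fiction(user_prompt: str, k: int = 10) -> str:
    # 1. Retrieve passages contextually similar to the user's prompt.
    passages = vector_store.similarity_search(user_prompt, k=k)

    # 2. Format the retrieved passages into a single coherent context.
    context = "\n\n".join(doc.page_content for doc in passages)

    # 3. Feed the context plus the user's prompt into the language model.
    response = llm.invoke(
        "Write a short Harry Potter fan fiction story based on this idea: "
        f"{user_prompt}\n\nStay consistent with these passages from the "
        f"books:\n{context}"
    )
    return response.content

story = generate_fan_fiction("Harry and Voldemort duel one last time")
print(story)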
Let’s imagine we prompt with this
Don’t miss the discussion around the “Vector Support in SQL — Public Preview” by young Davide on the Hogwarts Express! Even the Wizards are excited!
As you can see, along with generating the story, it also mentions the sources of inspiration from the vector store above.
Hence, combining the Q&A system with the fan fiction generator offers a unique and immersive reading experience. If users come across a puzzling moment in the books, they can ask the Q&A system for clarification. If they’re inspired by a particular scene, they can use the fan fiction generator to expand on it and create their own version of events. This interactive approach makes reading more engaging and enjoyable.
You can find this notebook in the GitHub Repo along with other samples: https://github.com/Azure-Samples/azure-sql-db-vector-search.
We’d love to hear your thoughts on this feature! Please share how you’re using it in the comments below and let us know any feedback or suggestions for future improvements. If you have specific requests, don’t forget to submit them through the Azure SQL and SQL Server feedback portal, where other users can also contribute and help us prioritize future developments. We look forward to hearing your ideas!
...
Read the original on devblogs.microsoft.com »
Work continues for the Vibrant Visuals update to come to Minecraft Java, and as part of that they’re switching the rendering from OpenGL to Vulkan.
Announced today (February 18th) by Mojang developers, it’s a huge change for such a game and will take time - but it will be worth it in the end so they can take advantage of all the modern features available for both visual improvements and better performance.
They note clearly that their aim is to “keep Minecraft: Java Edition playable for almost any PC-operating system, including macOS and Linux”. For the macOS side of things, they’ll use a translation layer since Apple don’t support Vulkan directly (they made their own API, Metal).
For modders, they’re suggesting they start making preparations to move away from OpenGL:
Switching from OpenGL to Vulkan will have an impact on the mods that currently use OpenGL for rendering, and we anticipate that updating from OpenGL to Vulkan will take modders more effort than the updates you undertake for each of our releases.
To start with, we recommend our modding community look at moving away from OpenGL usage. We encourage authors to try to reuse as much of the internal rendering APIs as possible, to make this transition as easy as possible. If that is not sufficient for your needs, then come and talk to us!
It does mean that players on really old devices that don’t support Vulkan will be left out, but Vulkan has been supported going back to some pretty old GPUs. You’ve got time though, as they’ll be rolling out Vulkan alongside OpenGL in snapshots (development releases) “sometime over the summer”. You’ll be able to toggle between them during the testing period until Mojang believe it’s ready. OpenGL will be entirely removed eventually once they’re happy with performance and stability.
...
Read the original on www.gamingonlinux.com »
Things are busy in Pebbleland! We’re getting close to shipping 3 new hardware products and all the associated software that comes along with them. Overall, things feel good. I’d say the amount of last-minute shenanigans is at the normal level. Getting new hardware into ‘production’ is a pretty wild and exciting process. Building hardware is an exercise in balancing the competing priorities of cost, quality and speed. In the last mile push to get into production, things can change quickly for the better (woohoo! the waterproof test finally passes, we can move to the next stage) or for the worse (uh, the production line needs 3 more test fixtures to test Index 01 mic performance, and a major production test software update… that’ll be a lot more money). Unlike with software, you can’t easily fix hardware issues after you ship! Making these last minute decisions is sometimes pretty stressful but hey, that’s the world of making hardware.
We’re in the Production Verification Test (PVT) phase right now, the last stop before Mass Production (MP). During this phase we manufactured hundreds of PT2s in a series of test builds, uncovered a bunch of issues, and fixed a bunch of issues. Just before the factories shut down for the lunar New Year, we got the good news that all the tests passed on the last build!
We focused most of January on improving the waterproofing on the watch (flash back to last summer when we worked on this for Pebble 2 Duo!). I traveled to visit the factory (travelogue here) and worked through a lot of open issues. Above is a video of the speaker waterproof testing from the production line. Good news is that we fixed all the issues, tests are passing and it looks like we’ll be able to certify PT2 with a waterproof rating of 30m or 3ATM! This means you can get your watch wet, wear it while swimming (but not in hot tubs/saunas) and generally not worry about it. It’s not a dive watch, though. Also, don’t expose it to hot water (this could weaken the waterproof seals), or high pressure water. It’s not invincible.
Snapshot of our mass production plan (output counts are cumulative)
The factory is closed now for Lunar New Year and will reopen around the end of Feb. As of today, mass production is scheduled to start on March 9. It will take the production line a little while to spin up towards our target output of 500 watches per day. Finished watches ship from the factory once a week to our distribution center (which takes ~1 week), then get packed for shipping (a few days to a week), then get delivered to you (~7-10 days). These dates and estimates are ALL subject to change - if we run into a problem, production shuts down until we fix it. Delays can and most likely will happen.
What everyone’s been waiting for…when will your PT2 arrive 🙂
Based on current schedule, the first mass production PT2s will arrive on wrists during the beginning of April. We should wrap up delivering all pre-ordered Pebble Time 2s two months later by the beginning of June. If your watch had an initial date of December, it should arrive in April and if your initial date was April, it should arrive in June. Unfortunately we can’t predict when your specific watch will arrive - please don’t email to ask, we’ll just send you a link to this blog post.
A few weeks before your watch is scheduled to ship, we’ll email a link for you to confirm your address (change it now if you’d like), pick optional accessories (extra chargers and straps) and pay any tariffs/VAT/taxes owed. For US orders, the tariff amount is $10 per watch. For other countries, VAT/taxes will be calculated and charged during order confirmation. When the watch is delivered you won’t need to pay anything else or deal with customs forms.
Index 01 is also in the Production Verification Test (PVT) phase. We’ve manufactured several hundred so far. Waterproof testing went well (it’s rated for 1m of submersion, IPX8). You’ll be able to wash your hands, wash dishes, shower, get it wet etc but you can’t swim with it on. PVT is proceeding well, but we’re not finished yet. We’re still aiming to start mass production during March, but we don’t have a firm start date yet.
In other news, we’re working on an Index 01 ring sizer kit that will be available for $10 (hopefully including worldwide shipping, working on that now). This will let you measure your index finger and find your exact Pebble-specific ring size. We will ask everyone to measure their ring size, either by ordering an Index 01 sizer kit or 3D printing the kit, because our sizes are different from Oura or other rings.
We’re also considering offering size 14 and 15. It’s a big upfront expense (~$50,000) to offer these sizes due to additional tooling that will be needed, so we’re collecting interest - sign up here if you would like Index 01 in these sizes!
Things are rolling along. We finished the Design Verification Test 1 (DVT1) phase just before the Lunar New Year holiday started. Work is progressing well. One of the huge speed-ups to the program overall is that the electrical design is almost identical to Pebble Time 2. This means our (two person) firmware team can code new features or bug fixes for PT2 and they work immediately on PR2! After the Lunar New Year, we’ll focus on waterproof testing and last minute tweaks before the current estimated production start date in late May.
Our software output has been tremendous - we’re fixing bugs left, right and center and adding lots of new features to PebbleOS (changelog) and the Pebble mobile app (changelog).
Here are some highlights:
* Weather now works (in sunrise/sunset timeline pins and the Weather app)
* WhatsApp calls show up as calls (on Android)
* Fixed a major background crash bug in Pebble iOS that caused weather and other apps to not fetch live data.
* Many old Pebble apps/faces use weather APIs that no longer work (Yahoo, OpenWeather). The Pebble mobile app now catches these network requests and returns data from Open-Meteo - keeping old watchfaces working!
* Pebble Appstore is now ‘native’ inside the Pebble mobile app (in v1.0.11.1 on beta channels today). We’ve also updated the Pebble Appstore on the web at apps.repebble.com . If you’re a developer and don’t see the latest version of your app or watchface, please make sure to import them (takes ~2 minutes).
* Now you can filter out older apps with non-working settings pages or companion apps. Or filter specifically for apps that are open source!
* Some PebbleKit 1.0 Android apps should work again (thanks Google for giving us back com.getpebble.android.provider.basalt). But devs - please upgrade your apps to PebbleKit 2.0 Android for new companion apps (more info and repo)
* Watch settings can now be adjusted in the Pebble mobile app. Your settings are saved and synced to all your Pebble watches.
* Thanks to many community contributions, there are now many new app icons for notifications for apps that didn’t exist 10 years ago!
* Most PebbleOS work has been going into factory verification software for Obelix
* Left handed mode - wear your Pebble on right hand with buttons flipped (thanks Claudio!)
* Health data is now synced from watch to phone (thanks Michael!)
We’ve also made some great advances on the SDK and developer front…expect an update very soon 😉
...
Read the original on repebble.com »
Find GDPR-compliant, EU-hosted software and service alternatives that respect your data sovereignty. Browse 500+ European companies across 30+ categories.
* Alternatives to US Tech — EU replacements for popular tools
* Browse by Country — Find companies in your region
* Submit a Company — Add your company to the directory
EU Tech Map is the leading directory of European software companies and GDPR-compliant alternatives. We help businesses find trustworthy, privacy-respecting technology solutions hosted in Europe.
...
Read the original on eutechmap.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.