26
Project Guideline helps people with visual impairments run on their own, using #GoogleAI to identify a line on the ground and send audio signals to the runner. Still early research, but excited to see the project expand to Japan projectguidelinejp.withgoogle.com/intl/en/
27
We're making @Android more accessible for everyone with Camera Switches & Project Activate, which make it easier to control your phone and communicate using the phone’s front-facing camera and #GoogleAI to detect face and eye gestures blog.google/outreach-initi…
28
At #SearchOn today, we shared how we're using #GoogleAI to make Search multimodal. With #GoogleLens you can search with both images and words, making it easier to find that pattern from a shirt, or instructions to fix your bike. blog.google/products/searc…
29
Today @Nature published our quantum computing research on time crystals. Excited to see more experiments like this, not only in physics but in future quantum applications across other fields too #GoogleAI nature.com/articles/s4158… blog.google/inside-google/…
30
In 2019, we started automatically labeling images in @googlechrome to make them more accessible for people who are blind or low-vision. This week, we expanded to 10 more languages, making images more accessible to millions more people. #GoogleAI #IDPWD blog.google/outreach-initi…
31
Today we're sharing research from @StanfordMed on how the genome sequencing method we built with @ucscgenomics helped speed up identification of disease-causing variants, enabling faster NICU diagnoses. #GoogleAI blog.google/technology/hea…
32
We can learn a lot about our environment just by listening to the birds. New #GoogleAI approaches can help isolate and identify birdsongs, helping ecologists better understand food systems and forest health. 🐦 ai.googleblog.com/2022/01/separa…
33
A new #GoogleAI tool reduces the size of datasets needed for radiology models used to predict abnormalities in chest x-rays, making it easier for researchers to build custom models that can help in disease detection. ai.googleblog.com/2022/07/simpli…
34
Excited to launch Pixel 7 and Pixel 7 Pro with our next generation Google Tensor G2 chip, which brings state-of-the-art #GoogleAI directly to the phone. And introducing our first Google Pixel Watch, with health and fitness insights from Fitbit. blog.google/products/pixel…
35
We're building an AI model that supports the 1,000 most-spoken languages, a big step in making knowledge more accessible. Already we've developed a model trained on 400+ languages, the most covered by any speech model to date. #GoogleAI blog.google/technology/ai/…
36
In @Nature: making a traversable wormhole with a quantum computer. A qubit teleported across our Sycamore processor exhibits the dynamics expected from crossing a traversable wormhole, opening up possibilities to test quantum gravity theories. #GoogleAI ai.googleblog.com/2022/11/making… twitter.com/GoogleAI/statu…
37
One of our new @googlecloud AI tools uses machine learning models to identify billions of products based on visual and text features, helping retailers check their in-store shelf stock. Lots more in store for #GoogleAI in 2023 — stay tuned! wsj.com/articles/googl…
38
A summary of great research progress in #GoogleAI, including language models, computer vision, multimodal models, and generative ML. We're building it all into current and upcoming products + APIs, and we look forward to sharing more with everyone soon. Stay tuned! ai.googleblog.com/2023/01/google…
39
1/ In 2021, we shared next-gen language + conversation capabilities powered by our Language Model for Dialogue Applications (LaMDA). Coming soon: Bard, a new experimental conversational #GoogleAI service powered by LaMDA. blog.google/technology/ai/…
40
Our latest quantum milestone, in @Nature today: for the 1st time ever, @GoogleQuantumAI has experimentally demonstrated that it's possible to reduce errors by increasing the # of qubits, bringing us a step closer to large-scale quantum computers. #GoogleAI blog.google/inside-google/…