Google updates the Gemini app with real-time AI video, Deep Research, and more


Google announced several updates to its Gemini AI chatbot app during Google I/O 2025, including broader availability of multimodal AI features, updated AI models, and deeper integrations with Google’s suite of products.

Starting Tuesday, Google is rolling out Gemini Live’s camera and screen-sharing capabilities to all users on iOS and Android. The feature, powered by Project Astra, lets users hold near-real-time spoken conversations with Gemini while streaming video from their smartphone’s camera or screen to the AI model.
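For a rough sense of how that kind of real-time exchange maps onto code, here is a minimal sketch using the developer-facing Live API in the google-genai Python SDK. It is an illustration, not how the consumer app works under the hood: the model name, API key placeholder, and prompt are assumptions, and the text turn below stands in for the camera or screen stream the app would send.

```python
import asyncio
from google import genai

# Illustrative sketch only: the consumer Gemini Live feature is not a public API.
# Model name and "YOUR_API_KEY" are placeholders/assumptions for this example.
client = genai.Client(api_key="YOUR_API_KEY")

async def main():
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001",
        config={"response_modalities": ["TEXT"]},
    ) as session:
        # In the app, a live camera or screen stream plays this role; here we
        # just send one text turn and print the streamed reply.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "What can you tell me about this building?"}]},
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```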

For example, while walking around a new city, users could point their phone at a building and ask Gemini Live about the architecture or history behind it, and get answers with little to no delay.

In the coming weeks, Google says, Gemini Live will also start to integrate more deeply with its other apps: it will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.

The slew of updates to Gemini is part of the company’s effort to compete with OpenAI’s ChatGPT, Apple’s Siri, and other digital assistants. The rise of AI chatbots has given users a new way to interact with the internet and their devices, putting pressure on several core Big Tech products, including Google Search and Google Assistant. Google announced during I/O 2025 that Gemini now has 400 million monthly active users, a base the company surely hopes to grow with these updates.

Google also introduced two new AI subscriptions: Google AI Pro, a rebrand of its $20-per-month Gemini Advanced plan, and Google AI Ultra, a $250-per-month plan that competes with ChatGPT Pro. The Ultra plan gives users very high rate limits, early access to new AI models, and exclusive access to certain features.

U.S. subscribers to Pro and Ultra who have English selected as their language in Chrome will also get access to Gemini in their Chrome browser, Google announced Tuesday. The integration aims to let users ask Gemini to summarize information or answer questions about what appears on their screen.

Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, to let users upload their own private PDFs and images. Deep Research will cross-reference those private files with public data to create more personalized reports. The company says users will soon be able to connect Google Drive and Gmail directly to Deep Research.

Free users of Gemini are getting an updated AI image model, Imagen 4, which Google says delivers better text rendering in generated images. Subscribers to the company’s new $250-per-month AI Ultra plan will also get access to Google’s latest AI video model, Veo 3, which uses native audio generation to produce sound that matches its video scenes.

Google is also updating the default model in Gemini to be Gemini 2.5 Flash, which the company says will offer higher quality responses with lower latency.
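For readers who want to try the new default model outside the app, here is a minimal sketch of calling Gemini 2.5 Flash through the public Gemini API with the google-genai Python SDK. The API key placeholder and prompt are illustrative assumptions; nothing in the consumer app requires this.

```python
from google import genai

# Minimal sketch: query Gemini 2.5 Flash via the google-genai Python SDK.
# "YOUR_API_KEY" and the prompt are placeholders, not values from the article.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Give me a three-bullet summary of the Gemini updates from Google I/O 2025.",
)
print(response.text)
```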

To cater to the growing number of students who use AI chatbots, Google says Gemini will now create personalized quizzes focused on areas that users find challenging. When users answer questions incorrectly, Gemini will create additional quizzes and action plans to strengthen those areas.
