
Google's Gemini models add native video understanding

Google has integrated native video understanding into its Gemini models, letting users analyze YouTube content via Google AI Studio. Simply paste a YouTube video link into your prompt. The system then transcribes the audio and samples video frames at one-second intervals. You can, for example, reference specific timestamps and extract summaries, translations, or visual descriptions. During the preview, the feature supports processing up to eight hours of video per day, limited to one public video per request. Gemini Pro handles videos up to two hours long, while Gemini Flash handles videos up to one hour. The update follows the rollout of native generation in Gemini.
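The workflow described above, pasting a YouTube link into a prompt, can also be sketched programmatically against the Gemini API's REST endpoint. This is a minimal sketch, not taken from the article: the endpoint path, the `fileData`/`fileUri` field names, and the model name are assumptions based on the public Gemini API documentation.

```python
import json
import os
import urllib.request

# Assumed REST endpoint and model name (not stated in the article).
API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-flash:generateContent")


def build_request(youtube_url: str, prompt: str) -> dict:
    """Build a generateContent body pairing a YouTube link with a text prompt."""
    return {
        "contents": [{
            "parts": [
                # The public video to analyze (one per request in the preview).
                {"fileData": {"fileUri": youtube_url}},
                # The instruction, e.g. a summary or timestamp-specific question.
                {"text": prompt},
            ]
        }]
    }


def analyze_video(youtube_url: str, prompt: str, api_key: str) -> str:
    """Send the request and return the model's text response."""
    body = json.dumps(build_request(youtube_url, prompt)).encode()
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["candidates"][0]["content"]["parts"][0]["text"]


if __name__ == "__main__":
    # Requires a real API key; the network call is skipped otherwise.
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        print(analyze_video(
            "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
            "Summarize this video and describe what happens at 0:30.",
            key,
        ))
```

Timestamp references simply go in the text prompt, since the model sees the sampled frames and the transcript together.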

Video: Logan Kilpatrick


Join our community

Join the DECODER community via Discord, Reddit, or Twitter – we can't wait to meet you.

Support our independent, freely accessible reporting. Every contribution helps and secures our future. Support now:

Matthias is a co-founder and editor of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
