Google's newest LLM can handle 10 million tokens of context
Why this might be a big deal—and why it might not be.
A lot has happened on the AI beat in the last few weeks, including the release of Gemini Advanced (and subsequent controversy over its handling of race and gender) and the announcement of OpenAI’s Sora video generation model. As a result, I think too little attention has been paid to Gemini 1.5 Pro, the multimodal model Google announced two weeks ago.
Th…