Google CEO Sundar Pichai sat down with us for a rare, in-depth conversation about what comes next for AI, agents, and the architecture powering the future of search.

From world models and the diffusion version of Gemini, to Alpha Evolve and agent memory, Sundar shares Google’s evolving philosophy on AGI—and how the company plans to build for it.

The big takeaway? Google is all-in on a future where AI is proactive, personalized, and deeply embedded in our daily lives, from search to glasses to assistants that know you better over time.

📌 Key Moments from the Interview

From Transformers to Diffusion: Sundar explains why Google is pushing diffusion-based models as fast, efficient complements to traditional LLMs—and how they may converge.

The Rise of Self-Improving AI: Alpha Evolve and recursive learning represent a shift toward models that can discover knowledge and improve autonomously.

Agent Memory and Lock-In: Pichai shares his thoughts on the importance of open protocols like MCP and A2A, and whether agent memory should be portable.

XR and Astra: Why smart glasses may be the most natural interface for interacting with personal AI.

The Future of Search: Pichai outlines a vision where the Google homepage becomes a proactive, AI-forward experience that brings you what you need before you search.

Advice for Knowledge Workers: His message to professionals wondering how to stay relevant in the age of AI: “Lean in. Use the tools.”

🎥 Full Interview: Sundar Pichai on AGI, Gemini, and the Future of Personal AI

💬 What Sundar Pichai Revealed — In His Own Words

Diffusion Models in Gemini (00:00)

Pichai explains Google’s push into text-based diffusion models, outlining the speed advantages and where they may coexist with transformer-based LLMs.

“We’re going to push the diffusion paradigm as hard as possible and then where we need to bring them together, we will.”

World Models and AGI Strategy (00:59)

Google DeepMind is working on parallel architectures—including physics-grounded world models—to build toward general intelligence.

“Things we are learning there will make their way [into Gemini]. That’s how I would think about it.”

Alpha Evolve and Self-Improving AI (02:32)

Pichai reflects on Alpha Evolve, recursive improvement, and why agents that can discover new knowledge are so profound.

“You can have these agents which can go improve code, make discoveries... What an extraordinary paradigm that is.”

Efficiency as a Breakthrough (04:08)

The key to AI’s future isn’t just intelligence—it’s making that intelligence affordable and fast enough to scale.

“Driving efficiency in how all of this works is what’s going to make it practical to use at scale everywhere.”

The Power and Risk of Agent Memory (05:15)

Pichai discusses the opportunity and privacy implications of agents that remember user preferences and behaviors.

“When you're giving these models memory… there are important privacy issues at stake. You want to make sure the user is in control.”

Open Protocols and Portability (06:21)

For AI to be user-centric and competitive, open standards will be essential—especially around memory and agent communication.

“I don't think there's going to be one agent to rule them all… protocols like MCP and A2A are exciting directions.”

Glasses and Natural Interfaces (06:54)

Pichai sees smart glasses as a highly intuitive form factor for interacting with AI in everyday life.

“It is in your line of sight… maybe can even talk to you more privately.”

The Future of Search (08:22)

Search is evolving from reactive queries to proactive AI that understands your context and surfaces what you need.

“It’s grounded in search, it can use all the tools, and over time we can be proactive there too.”

Advice for Knowledge Workers (09:56)

Pichai encourages professionals to treat AI as a super-assistant and integrate it into their workflows early.

“The best way you can prepare is like what you're doing… just lean into these tools.”

📄 Full Transcript

[00:00:00]

Matthew Berman:
The diffusion version of Gemini—that caught me off guard. Is that a departure from transformers or something entirely new?

Sundar Pichai:
We’re going to push the diffusion paradigm as hard as possible. Where we need to bring them together, we’ll do that.

Matthew Berman:
Are we at that inflection point now, where this starts to look like self-improving artificial intelligence?

Sundar Pichai:
Yes, we’re definitely working on recursive, self-improving paradigms.

[00:00:29]

Sundar Pichai:
For people doing knowledge work, my advice is simple: lean into these tools. Shift your mindset. Think, “Now I have a super assistant with me at all times. I should take advantage of it.”

Matthew Berman:
Do you see the Google Search homepage still being the place people go to find things?

Also, I noticed you announced that Gemini is becoming a "world model." Does that require significant architectural changes?

[00:00:59]

Sundar Pichai:
Google DeepMind has always had a broad view of what’s needed for AGI. They have efforts on the Gemini models, and they’re also working in parallel on world models—which are distinct from the Gemini 2.5 Pro mainline.

But what we learn in one area informs the other. For example, when we built Veo 3, it was grounded in physics. Some of that innovation came from our world model research.

[00:01:35]

Matthew Berman:
And the diffusion version of Gemini—wasn’t expecting that. I heard it's five times faster than Flash. Will it be part of the world model roadmap?

Sundar Pichai:
Today, our main Gemini models are autoregressive LLMs—next-token prediction models. Our image models, on the other hand, are diffusion-based. Doing text with diffusion is a new paradigm.

It’s significantly faster for similar capabilities, although it’s still behind the mainline in terms of sophistication. But it has real potential in specific use cases.

We're going to push the diffusion approach as far as we can. Where it makes sense to combine paradigms, we will. It's important to explore all directions in parallel.
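To make the contrast concrete, here is a purely illustrative sketch of the two generation loops being compared, with random choices standing in for a real model: an autoregressive LLM emits one token at a time, conditioned on everything before it, while a text diffusion model refines a fully masked sequence over a few parallel denoising rounds—which is where the latency advantage comes from.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def autoregressive_generate(length=5):
    """Next-token prediction: one token per step, left to right."""
    tokens = []
    for _ in range(length):
        # A real LLM would condition on `tokens` here; we just sample randomly.
        tokens.append(random.choice(VOCAB))
    return tokens

def diffusion_generate(length=5, denoise_rounds=3):
    """Text diffusion (sketch): start fully masked, refine all positions in parallel."""
    tokens = [MASK] * length
    for _ in range(denoise_rounds):
        # A real model would denoise every position at once; needing only a few
        # passes over the whole sequence is where the speed advantage comes from.
        tokens = [random.choice(VOCAB) if t == MASK or random.random() < 0.3 else t
                  for t in tokens]
    return tokens

if __name__ == "__main__":
    print("autoregressive:", " ".join(autoregressive_generate()))
    print("diffusion     :", " ".join(diffusion_generate()))
```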

[00:02:32]

Matthew Berman:
That makes sense—run multiple bets in parallel and see how they come together.

The next thing I wanted to ask about is Alpha Evolve. I read the paper and saw the demo—honestly, I was blown away. It feels like we're entering the era of the intelligence explosion. Do you think we’re at that point?

[00:03:02]

Sundar Pichai:
You're right to focus on Alpha Evolve. We launched it just ahead of I/O, in a low-key way, but it’s some of the most groundbreaking work we’re doing.

We talked a lot about agents today, but the idea that you can have agents that improve code or discover knowledge—that’s an extraordinary paradigm.

I think people still underestimate the potential of this technology. There’s never been anything like it. I’ve always felt this is more profound than fire or electricity.

Right now, models are still expensive and latency remains a challenge—especially when you chain them together. But we’re actively developing recursive self-improving systems.

[00:04:08]

Matthew Berman:
If you had to choose one area—model intelligence, memory, scaffolding—what’s the highest-leverage area to improve?

Sundar Pichai:
For me, it's driving efficiency. Making this all work more efficiently is what will make AI practical at scale.

That’s why we focus on models like 2.5 Flash. It’s our workhorse, offering the most intelligence at the best price point. The more breakthroughs we can make there, the more broadly we can deploy this technology.

It's also why we invest in infrastructure like TPUs. That’s what gives us an edge.

[00:05:15]

Matthew Berman:
Let’s talk agents. I’m especially interested in agent memory. When agents learn to build shorthand with you, they become more useful—but there's a risk of lock-in. Should there be an open standard for agent memory, like MCP or A2A?

Sundar Pichai:
That’s a great question. Giving these models memory raises important privacy issues. Users must be in control.

Today, if you stop using Gmail, you can export your data. Memory should be similar—you should be able to take it elsewhere.

We may be in the early days, but those are the right concepts to explore: portability, user control, and interoperability.
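To illustrate the portability idea Pichai is describing, here is a hypothetical sketch of what a user-controlled memory export could look like. Every field name and file here is invented for illustration and does not correspond to any announced product or format.

```python
import json

# Hypothetical illustration of a portable agent-memory export. The schema is
# invented for this sketch; no announced product or protocol defines it.
memory_export = {
    "schema_version": "0.1",
    "exported_at": "2025-06-01T12:00:00Z",
    "preferences": [
        {"key": "tone", "value": "concise", "learned_at": "2025-05-20"},
        {"key": "summary_format", "value": "bullet points", "learned_at": "2025-05-22"},
    ],
    "consent": {
        "exportable": True,                    # the user can take the memory elsewhere
        "shareable_with_other_agents": False,  # the user stays in control of reuse
    },
}

# A "takeout"-style export file the user owns, analogous to exporting Gmail data.
with open("agent_memory_export.json", "w") as f:
    json.dump(memory_export, f, indent=2)
```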

[00:06:21]

Sundar Pichai:
Open protocols are critical. That’s why we’re excited about MCP and A2A. I don’t think there will be one “agent to rule them all.”

You’ll use many agents. What matters is knowing where your data is, how it’s accessed, and ensuring it can move with you. That’s the future we should build for.

[00:06:54]

Matthew Berman:
I tried the XR glasses based on Project Astra. They looked amazing. Do you see glasses as the ideal interface for personal AI?

Sundar Pichai:
It’ll show up in many places, but yes—glasses are very compelling. They fit into daily life, sit in your line of sight, and can communicate privately.

[00:07:21]

Sundar Pichai:
I just had this incredible experience with Astra. I showed it a few things in my office. Later, I asked where an item was, and it said, “Let’s play detective.”

I moved the item without it knowing. It said, “I just saw it there. Can you zoom out?” It was figuring out that I had moved it. It was so intuitive and impressive.

[00:08:22]

Matthew Berman:
Looking ahead five years—do you still see the Google Search homepage as the starting point for finding information?

Sundar Pichai:
It will evolve in surprising ways. I’m very excited about AI Mode. People are typing more naturally and engaging deeply.

It’s grounded in search, but with tools, personal context, and proactive capabilities. Imagine wearing your glasses and being reminded to do homework, with resources already pulled up. That’s within reach.

[00:09:26]

Matthew Berman:
That’s why I just bought an Android phone—I want to experience that integration across services.

One final question: a lot of people are anxious. If AI can do most—or eventually all—knowledge work, what happens to those workers? How do they stay relevant?

[00:09:56]

Sundar Pichai:
At least in the near term, this is like having a superpower. It removes grunt work and lets you operate at a higher level.

With tools like Veo 3, or even Gemini 2.5 Pro, creators can explain things better, produce faster—it’s about enabling people.

The best way to prepare is exactly what you’re doing. Lean into these tools. Test them. Start using them.

Whenever someone shows me something, I ask: “What does Gemini 2.5 Pro think?” That mindset shift is key.

[00:10:59]

Sundar Pichai:
You now have a super assistant with you at all times. Take advantage of that. I’m extremely optimistic about what comes next.

Matthew Berman:
Sundar, thank you so much. This was a pleasure.

Sundar Pichai:
Thank you.

🔑 Key Takeaways

  • Google is investing heavily in text diffusion models, aiming for faster generation than traditional autoregressive LLMs, especially in targeted applications.

  • Gemini’s future will integrate multiple architectural approaches, including autoregressive LLMs, diffusion models, and physics-grounded world models.

  • Alpha Evolve marks a shift toward self-improving AI, with agents that can discover knowledge, improve code, and act autonomously.

  • Efficiency is the top priority, with models like Gemini 2.5 Flash designed for practical deployment at scale—balancing intelligence and cost.

  • Agent memory is powerful but risky, raising important questions around user control, data portability, and platform lock-in.

  • Pichai supports open standards like MCP and A2A, believing they’re essential to a multi-agent ecosystem that respects user data ownership.

  • Smart glasses could become the primary interface for AI, offering private, ambient, and always-available interaction with personal agents.

  • The Google Search experience is evolving, becoming more contextual, AI-forward, and capable of proactive task support.

  • For knowledge workers, AI is a superpower, not a threat—Pichai urges professionals to adopt tools like Gemini now to stay ahead.

  • Pichai believes we’re entering a new era of computing, one more transformative than electricity or fire—and still widely underestimated.

Enjoyed this conversation?

For more in-depth interviews with the people shaping AI, follow us on X and subscribe to our YouTube channel.
