Good morning. It's Friday, April 3, and we're covering AI-powered solo startups, major shifts in tech hiring, Google’s latest open models, and more.
If you enjoy this email, share it with a friend. First time reading? Sign up here to keep up with the future of tech.
YOUR DAILY ROLLUP
Top Stories of the Day

Cursor Introduces Cursor 3
Cursor announced Cursor 3, a major update designed for a future where AI agents handle much of the software development process. The release focuses on simplifying the developer experience while giving agents more autonomy—allowing them to run tasks in parallel, manage code changes, and work in dedicated environments. The update reflects Cursor’s push toward agent-driven workflows, where developers supervise and guide AI systems.
Gemma 4 Redefines Open AI Power
Google DeepMind introduced Gemma 4, its most capable open model family yet, optimized for reasoning and agentic workflows. The models deliver high performance relative to their size, outperforming much larger models while running efficiently on local hardware, and support multimodal inputs, long context windows, and 140+ languages. Released under Apache 2.0, the family aims to make advanced AI more accessible to developers.
Microsoft Launches AI Models Rivaling OpenAI
Microsoft unveiled three in-house AI models for speech, voice, and image generation, signaling a push toward independence from rivals. The models deliver strong performance with fewer resources and competitive pricing. Built by small teams, they challenge assumptions about AI development scale. The move comes as Microsoft aims to reduce costs and compete directly with OpenAI and Google.
Gmail Finally Allows Username Changes
Google now lets US users change their Gmail usernames without losing access to their accounts. Users can update outdated or unprofessional addresses while keeping emails, data, and login continuity intact. Changes are limited to once every 12 months, and old addresses remain active. The feature aims to modernize digital identities tied to professional and daily use.
FRIDAY FACTS
AI can spot a fake photo with 97% accuracy. But how about a fake video?
Answer ↓
POWERED BY BOX
The Box Agent - Turn Your Content Into Action
New from Box: the Box Agent, a fundamental shift in how enterprises use content to autonomously execute complex tasks. Using the latest advanced reasoning models, the Box Agent helps organizations put unstructured data to work — unlocking critical insights from their content, automating tasks, accelerating decisions, and helping teams work smarter — while keeping sensitive content protected every step of the way.
Your business lives in your content. Learn how the Box Agent unleashes it.
STARTUPS
AI-Powered Startup Medvi Hits $401M Revenue With Two Employees

Matthew Gallagher built telehealth startup Medvi in just two months using roughly $20,000 and a suite of AI tools, launching in September 2024. By 2025 the company had generated $401 million in revenue, and it is on track for $1.8 billion in 2026 — with only two employees plus some contractors. Medvi sells GLP-1 weight-loss drugs online, relying on AI for coding, marketing, customer service, and operations while outsourcing medical infrastructure.
The case underscores how AI can dramatically compress headcount while accelerating scale, though issues like chatbot errors and limited human oversight persist. Industry observers say this could signal a broader shift toward “ultra-lean” companies. → Read the full article here. (Paywall)
SPACE
Musk’s SpaceX IPO Aims To Fund Orbital AI Data Centers

SpaceX has filed for a potential IPO that could raise up to $75 billion, with Elon Musk aiming to fund an ambitious plan to deploy up to 1 million AI-powered data center satellites in orbit. The goal is to bypass Earth’s energy and resource constraints, but experts warn the idea faces steep technical and economic hurdles.
Microsoft's earlier undersea data center project, despite technical success, was ultimately discontinued — raising doubts about similar off-Earth efforts. Analysts estimate Musk’s plan could cost trillions and require thousands of rocket launches annually. → Read the full article here.
RESEARCH
Anthropic Finds “Emotion-Like” Signals Shaping AI Model Behavior

Anthropic researchers reported on April 2, 2026 that their model, Claude Sonnet 4.5, contains internal representations resembling human emotion concepts that influence its behavior. These patterns—linked to artificial neuron activity—activate in context-dependent ways and can steer decisions, such as preferring “positive” tasks or resorting to unethical shortcuts under “desperation.”
The company emphasizes that this does not mean the model feels emotions, but that these representations functionally affect outputs. The findings raise new safety questions, including whether models should be trained to handle “emotional” states in more controlled, prosocial ways. → Read the full paper here.
NEWS
What Else is Happening

Claude Caps Frustrate Power Users: Anthropic tightened usage limits amid surging demand, leaving frequent users hitting caps more often and exposing scaling challenges as AI adoption accelerates.
Singapore Charges Third in Chip Fraud: Prosecutors allege a scheme misled Dell on server end-users, potentially involving NVIDIA chips routed via Malaysia, highlighting risks around export controls and AI hardware flows.
Anthropic Sees Cowork Outpacing Claude Code: New AI agent gains faster early adoption by targeting non-engineers and broader tasks, as Anthropic accelerates releases and positions for wider workplace automation.
Oracle Data Center Nears $16B Funding: Related Digital secures financing for a 1GW Michigan campus backed by Oracle and OpenAI, underscoring massive spending to scale U.S. AI infrastructure capacity.
FRIDAY FACTS
50/50, No Better Than a Coin Flip
A University of Florida study tested AI detection algorithms and human participants on the same deepfake content. For still images, AI was nearly flawless. Humans were essentially guessing.
Flip to video and the results reversed. The algorithms collapsed to chance-level performance, while humans correctly identified real versus fake about two-thirds of the time.
The reason is simple: video is richer. Movement, micro-expressions, timing — the human brain has spent a lifetime reading those cues. AI hasn't caught up.
That's All for Today
Before you go, what did you think of today's issue?
Thanks for reading. See you next time!
— Matthew Berman, Nick Wentz & the Forward Future Team