Good morning. It's Tuesday, April 7, and we're covering Altman’s policy push on AI, internal shakeups at OpenAI, Meta’s hybrid model strategy, and more.

If you enjoy this email, share it with a friend. First time reading? Sign up here to keep up with the future of tech.

YOUR DAILY ROLLUP

Top Stories of the Day

Meta Plans Partially Open AI Models
Meta plans to release new AI models under Alexandr Wang, with some versions open-sourced. The company will keep parts proprietary to manage safety and competitiveness. This hybrid approach aims to balance developer access with control over advanced systems. It also reflects growing industry shifts away from fully open models.

Software Jobs Rise Despite AI Fears
Software engineer job listings have risen 30% so far in 2026, reaching about 67,000 openings. The increase challenges fears that AI is rapidly replacing coding jobs. However, hiring remains difficult due to automated recruiting and “ghost jobs.” AI may also be reshaping work, boosting output but raising concerns about code quality.

Iran Threatens AI Data Center Strikes
Iran warned it could target AI data centers, including the Stargate project, amid escalating tensions with the U.S. The threat follows U.S. warnings of strikes on Iranian civilian infrastructure. Iranian-linked attacks have hit a cloud facility in Bahrain, while Dubai authorities strongly deny reports of a similar strike there.

US AI Giants Unite Against China's Model Theft
OpenAI, Anthropic, and Google are now sharing intelligence through the Frontier Model Forum to block "adversarial distillation" attempts by Chinese AI firms. Anthropic says DeepSeek, Moonshot AI, and MiniMax generated over 16 million exchanges with Claude through roughly 24,000 fraudulent accounts, while OpenAI told US lawmakers that DeepSeek developed new, obfuscated methods to disguise its extraction activity.

VIDEO

I Was Hacked…

AI hacker Ply tries to break into Matt’s personal AI system—testing jailbreaks, token attacks, and security defenses.

THE FUTURE LIVE

Founder of BEP Research Joins The Live Show

POLICY

Sam Altman Proposes “AI New Deal” With Taxes, Public Fund

OpenAI CEO Sam Altman has outlined a 13-page policy blueprint urging governments to prepare for rapid advances in AI, including potential “superintelligence,” according to an Axios interview. The proposal calls for measures like a public wealth fund, robot taxes, and a four-day workweek to offset economic disruption, including job losses and shifts in tax revenue.

Altman warns that near-term risks include major cyberattacks and the possible misuse of AI in biological threats. The plan positions OpenAI as both a builder of transformative technology and a participant in shaping its regulation—highlighting urgency while inviting broader debate. Read the full article here.

LEADERSHIP

Inside OpenAI Turmoil: Allegations Challenge Sam Altman’s Leadership

A New Yorker investigation published April 6, 2026 details internal conflicts at OpenAI, including allegations that CEO Sam Altman misled colleagues and obscured safety concerns as the company advanced powerful AI systems. Internal memos from chief scientist Ilya Sutskever and others accused Altman of a “pattern” of deceptive behavior, contributing to his brief ouster in November 2023 before he was reinstated within days.

The investigation exposes tensions between rapid commercialization and AI safety, with some researchers warning that safeguards were deprioritized. The episode underscores broader questions about governance, accountability, and trust in companies building potentially transformative AI technologies. Read the full article here.

CYBERSECURITY

AI Arms Race Reshapes Cybersecurity as Attacks Grow Autonomous

Cybersecurity experts warn that new AI systems from companies like Anthropic and OpenAI are accelerating both cyberattacks and defenses, according to a New York Times report published April 6, 2026. Anthropic disclosed a case in which suspected Chinese state-backed hackers used AI agents to automate up to 80–90% of an intrusion campaign across roughly 30 organizations.

While such incidents remain rare, upcoming AI releases are expected to make it far easier to discover and exploit vulnerabilities at scale. At the same time, defenders are deploying AI to detect weaknesses faster, setting up an escalating “AI vs. AI” dynamic. The core question: whether attackers or defenders gain the upper hand first. Read the full article here. (Paywall)

NEWS

What Else is Happening

AI Fuels Cyber Arms Race (Paywall): A New York Times report finds AI automates up to 90% of some intrusions while also uncovering hundreds of zero-day flaws faster, intensifying attacker-defender competition.

Celigo Unveils AI Agent Platform: Celigo launches Ora and a low-code Agent Builder, letting non-technical teams automate workflows via natural language with governance controls, addressing struggles to scale AI beyond pilots.

Xoople Raises $130M for Earth AI Data: Spain’s Xoople secures $130 million Series B to build a satellite network delivering higher-precision geospatial data for AI, targeting enterprise demand for reliable “ground truth.”

Madison Air Targets $13.2B IPO Valuation: Ventilation firm Madison Air plans to raise up to $2.23 billion in a U.S. IPO, fueled by AI-driven data center cooling demand despite volatile markets slowing listings.

White House Sees AI Lowering Rates: Adviser Kevin Hassett said April 6, 2026 that AI-driven productivity and capital spending could ease inflation, giving the Federal Reserve room to cut interest rates.

PROMPT OF THE WEEK

Data Analysis Brief

Before analyzing this data, work through these steps in order:

1. ASSUMPTIONS — State what you're assuming about the data (collection method, time period, unit of measure, population it represents). Flag anything ambiguous.

2. QUALITY CHECK — Identify missing values, outliers, potential sampling bias, or anything that would limit the reliability of conclusions. Be specific.

3. FINDINGS — Separate into two buckets:
   - Descriptive: what the data directly shows (with numbers)
   - Interpretive: what you believe it means and why

4. LIMITS — What questions can this data explicitly not answer? What would you need to answer them?

5. HYPOTHESES — Generate 3 follow-up hypotheses worth testing, ranked by potential impact.

Do not skip steps or merge them. If you don't have enough information for a step, say so explicitly rather than proceeding.

[paste your data or describe your dataset here]
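
If you reuse this brief across datasets, it can help to keep it as a parameterized template rather than re-pasting it each time. The sketch below is a minimal illustration of that idea: `build_brief` and `BRIEF_TEMPLATE` are illustrative names (not part of any LLM client library), and the step text is abbreviated from the full brief above for brevity.

```python
# Minimal sketch: the Data Analysis Brief as a reusable, parameterized
# template. The step wording is abbreviated from the full brief above.

BRIEF_TEMPLATE = """Before analyzing this data, work through these steps in order:

1. ASSUMPTIONS: state what you're assuming about the data; flag ambiguity.
2. QUALITY CHECK: missing values, outliers, potential sampling bias.
3. FINDINGS: descriptive (with numbers) vs. interpretive (with reasons).
4. LIMITS: what the data cannot answer, and what you'd need to answer it.
5. HYPOTHESES: three follow-ups worth testing, ranked by potential impact.

Do not skip steps or merge them. If a step lacks information, say so.

{data}
"""


def build_brief(dataset_description: str) -> str:
    """Fill the data slot with pasted data or a dataset description."""
    return BRIEF_TEMPLATE.format(data=dataset_description)


# The returned string is what you would paste (or send) as the prompt.
prompt = build_brief("Monthly signups, Jan 2025 to Dec 2025, CSV: date,count")
```

The design choice here is simply to keep the scaffold fixed and vary only the data slot, so every analysis request goes through the same five steps.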

TWEETS

Artemis II Lunar Flyby

That's All for Today

Before you go, what did you think of today's issue?

We read every response.


Thanks for reading. See you next time!

— Matthew Berman, Nick Wentz & the Forward Future Team
