Reinventing Physics with Self-Learning AI
Unleashing AI to explore the vast unknowns of nature beyond human knowledge
In their seminal position paper "The Era of Experience," Richard Sutton and David Silver envision a new phase of self-learning AI, where intelligence emerges independently of human knowledge. Human insights represent a valuable slice of universal understanding, but only a tiny one: picture a vast circle of everything that could be known, with roughly 5% known, 5% known unknowns, and 90% unknown unknowns. Sutton and Silver aim to unlock this expanse through AI that learns directly from experience, heralding profound discoveries.
Physics, the mother of all sciences, stands at an intriguing crossroads with artificial intelligence. While fields like cosmology and high-energy particle physics have long relied on sophisticated data analysis, the integration of real-world AI applications into physics remains limited. For instance, reinforcement learning (RL) could optimize telescope positioning, yet such practical AI implementations are still emerging in the discipline.
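To make the telescope example concrete, here is a minimal sketch of that idea in Python, under toy assumptions: an epsilon-greedy bandit learns which sky field yields the most scientific value per night. The field names, the hidden value table, and the observe/schedule functions are all invented for illustration; a real scheduler would score genuine observations and respect far more constraints.

```python
# Minimal sketch of RL-style telescope scheduling: an epsilon-greedy bandit
# chooses which sky field to observe next. The "science value" numbers below
# are invented for illustration; a real scheduler would score actual observations.
import random

FIELDS = ["galactic_center", "andromeda", "crab_nebula", "deep_field"]
TRUE_VALUE = {"galactic_center": 0.6, "andromeda": 0.4,
              "crab_nebula": 0.7, "deep_field": 0.9}  # hidden, to be learned

def observe(field: str) -> float:
    """Simulate one observation: the true science value plus measurement noise."""
    return TRUE_VALUE[field] + random.gauss(0.0, 0.1)

def schedule(nights: int = 1000, epsilon: float = 0.1) -> dict[str, float]:
    """Epsilon-greedy loop: mostly exploit the best-known field, sometimes explore."""
    estimates = {f: 0.0 for f in FIELDS}
    counts = {f: 0 for f in FIELDS}
    for _ in range(nights):
        if random.random() < epsilon:
            field = random.choice(FIELDS)              # explore
        else:
            field = max(estimates, key=estimates.get)  # exploit
        reward = observe(field)
        counts[field] += 1
        # Incremental mean update of the estimated value for this field.
        estimates[field] += (reward - estimates[field]) / counts[field]
    return estimates

if __name__ == "__main__":
    print(schedule())  # estimates should converge toward TRUE_VALUE
```

Even this toy loop captures the core shift: the policy improves from the outcomes of its own observations rather than from a human-written observing plan.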
AI has already mastered domains once deemed pinnacles of human intelligence, such as chess, Go, Shakespearean analysis, and math olympiads, diminishing their status as benchmarks of intelligence. Physics, however, probes the essence of nature itself: how it works, why it matters, and how we can harness it to transform the real world.
We conjecture that physics will evolve dramatically in this era, as machines decode nature by learning from it firsthand. Human knowledge can bootstrap models, but ultimately, AI must iterate based on real-world results and consequences, with nature as the final judge—not human opinion.
To grasp physics' role, consider its origins around 500 BC with Pythagoras and other Greek philosophers. This shift from mythology to rational inquiry likely stemmed from cultural exchanges with Egypt and Phoenicia, plus rivalries among Greek city-states, fostering naturalistic explanations.
The core distinction between mysticism and modern physics, as David Deutsch notes, lies in "explanations that are hard to vary." Mythic tales, like planetary motion driven by gods' whims, allow endless tweaks without predictive power or real-world consequences. Rational thought, by contrast, used math and theory to forge causal links, enabling testable predictions.
With the fall of Greece and later Rome, interest in these methods faded. But they didn't die. Fast-forward to Galileo Galilei in the early 17th century, who revived precise measurement, mathematical modeling, and experimentation, laying the foundation for what we now call 'the scientific method'. Key innovations included mathematics as physics' universal language for formulating and testing theories, and a distributed network of scientists verifying those theories collaboratively. This unleashed astonishing progress: telegraphs, railways, bridges, computers, iPhones, AI, and self-driving cars.
Yet the scientific method remains human-bound, limited by our pace of integrating new theory and by biases like vanity, prestige, and career pressure. A researcher might avoid a risky line of research for fear of losing a job or tenure. This attitude breeds incrementalism and club thinking, and it hinders progress.
Enter AI in the Era of Experience: machines using reinforcement learning could iterate through billions of variations to solve problems autonomously. For instance, designing Starship's heat shield for atmospheric reentry currently involves prototyping, testing, and analysis on real vehicles, at high cost and with plenty of debris as collateral damage. AI could simulate millions of scenarios, optimizing materials, compositions, and designs, and even refining its reward functions and goals, like prioritizing atomic structure or heat volatility.
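As a rough illustration of what such a simulation loop might look like, here is a hedged sketch in Python. It stands in for the reinforcement-learning search with a plain random search over two design parameters, and the surrogate thermal model, parameter ranges, and reward weights are invented placeholders, not real aerothermal physics.

```python
# Minimal sketch of simulation-driven design search for a reentry heat shield.
# The thermal model, parameter ranges, and reward weights are toy assumptions
# for illustration, not real aerothermal physics.
import random

def simulate_peak_temperature(thickness_cm: float, density: float) -> float:
    """Toy surrogate: thicker, denser tiles shed heat better, with noise."""
    base = 2000.0                        # notional peak surface temperature (K)
    cooling = 60.0 * thickness_cm + 4.0 * density
    return base - cooling + random.gauss(0.0, 20.0)

def reward(thickness_cm: float, density: float) -> float:
    """Trade off a survivable temperature against the mass penalty of the shield."""
    temp = simulate_peak_temperature(thickness_cm, density)
    mass_penalty = thickness_cm * density
    return -max(temp - 1600.0, 0.0) - 0.5 * mass_penalty

def random_search(trials: int = 100_000) -> tuple[float, float, float]:
    """Evaluate many candidate designs in simulation and keep the best one."""
    best = (float("-inf"), 0.0, 0.0)
    for _ in range(trials):
        thickness = random.uniform(1.0, 10.0)   # cm
        density = random.uniform(50.0, 300.0)   # kg/m^3
        score = reward(thickness, density)
        if score > best[0]:
            best = (score, thickness, density)
    return best

if __name__ == "__main__":
    score, thickness, density = random_search()
    print(f"best design: {thickness:.1f} cm tiles at {density:.0f} kg/m^3 (score {score:.1f})")
```

Swapping the random search for a learned policy, and the toy surrogate for a high-fidelity simulator, is exactly where the "millions of scenarios" argument above points.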
In this paradigm, physicists shift from solving problems to equipping AI for self-discovery, such as curating essential data for learning.
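A minimal sketch of that curation role, assuming a hypothetical record format: filter raw experimental measurements down to the calibrated, low-uncertainty subset worth feeding to a learning system. The Measurement fields and thresholds below are placeholders, not any real instrument's schema.

```python
# Minimal sketch of the curation step described above: filter raw experimental
# records down to a high-quality training set. Field names and thresholds are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Measurement:
    instrument: str
    value: float
    uncertainty: float
    calibrated: bool

def curate(records: list[Measurement], max_rel_uncertainty: float = 0.05) -> list[Measurement]:
    """Keep only calibrated measurements whose relative uncertainty is small."""
    return [
        m for m in records
        if m.calibrated and m.value != 0.0
        and abs(m.uncertainty / m.value) <= max_rel_uncertainty
    ]

raw = [
    Measurement("spectrometer_a", 3.14, 0.02, True),
    Measurement("spectrometer_b", 2.71, 0.90, True),    # too uncertain, dropped
    Measurement("interferometer", 1.62, 0.01, False),   # uncalibrated, dropped
]
print(curate(raw))  # only the first record survives
```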
In conclusion, physics will accelerate exponentially, yielding faster breakthroughs. Physicists will pivot from devising solutions (e.g., explaining the double-slit experiment) to crafting simulation methods and data strategies for AI experimentation. Rather than seeking peer acclaim in journals, they'll focus on delivering high-impact data to train and captivate AI models.



