Energy is a Computer Science Problem
Computers are modern telescopes. Just as Galileo used human observation to develop a new way of doing science, computer-generated simulation can help us find novel solutions to the energy problem.
Recently, I attended a talk by Chris Young, a mechanical engineer who works on nuclear fusion at Lawrence Livermore National Laboratory (LLNL) as a designer in the inertial confinement fusion (ICF) and high energy density (HED) physics program. Young spoke about the recent breakthroughs at LLNL and managed to explain the complex problem of nuclear fusion in an accessible way.

I wrote about the project recently in my essay "Fusion is Illusion - Abundant Energy is not". There, I argue that megaprojects like the ICF at Livermore are problematic and most likely have a net negative impact on society. They cost too much, and because they are large, complex, and have binary outcomes, their chance of fostering disruptive progress is minimal. I’ve argued many times that disruption is a function of the speed of iteration, the cost of errors, and the cost of error correction. In short, you want projects that allow you to make mistakes with little consequence, fix them at low cost, and iterate as much as possible. Nuclear fusion is not such a project. It’s the opposite.

But there is a silver lining. If we can build computers that simulate the complex interactions of particles, we might be able to circumvent the physical challenge of building large-scale nuclear fusion reactors. Iteration can happen in virtual space, where the cost of errors and error correction is low. This could lead to fast iterations and potentially positive outcomes. What Young’s talk solidified for me is that energy, as a material science problem, is best solved with innovations in computer architecture and software. In short, energy is a computer science problem.
Sometimes looking at a problem from a different angle helps clarify the path to success. That’s exactly what Young’s talk did for me. Instead of thinking about energy as a highly complex engineering problem where physicists and engineers tinker with billion-dollar machines to achieve breakthroughs, how about we reformulate energy as a computer science problem? It’s about finding ways to simulate the highly complex interactions of particles and finding solutions that could eventually lead to productive energy harvesting technologies.

Here is the outline of this essay: First, I describe what the energy problem actually is. Second, I tie it to material science, and third, I formulate the energy problem in the language of material science. Fourth, I tie the material science problem to computer science, where advances in hardware, software, and machine learning might help us accelerate progress. Fifth, I make a case for a fundamental rethink of the energy problem as a computer science problem, which hopefully will have implications for academia, venture capital, and business development.

When Galileo Galilei and his scientific contemporaries decided to use observation as a tool for explaining natural phenomena, they inspired a paradigm shift in science from divine explanations to inductive reasoning from observation. Today, we propose a similar shift towards a new methodology and lens through which explanations for problems in science and engineering can be found: the lens of computer-generated simulation. Like Galileo, who used his eyes and his creativity to achieve better results in scientific reasoning, computer-generated simulation has the potential to improve the efficiency of iteration and thus accelerate scientific progress.
What is the Energy Problem?
According to the first law of thermodynamics, energy cannot be created or destroyed. All we can do is redirect energy from one source to another. That’s what we call energy harvesting. We don’t actually create energy; we harvest it for our own specific use. For example, when we store water in a lake on top of a mountain, we don’t create energy. In a hydroelectric power plant, falling water spins turbine blades, and the rotating shaft moves a coil through a magnetic field, which, by electromagnetic induction, generates an electric current. This current is what we call electricity. We can use this electricity to power a fridge or an air conditioner.

Hydroelectric power plants are 19th-century inventions. Today, we look for more efficient and better energy harvesting technologies. For example, we try to build batteries that help us store intermittent electricity generated from solar and wind. Or, like Chris Young and his colleagues at Livermore, we try to fuse hydrogen into helium and harness the resulting release of binding energy. Nuclear fission is the exact opposite: we split a large atom into mid-sized atoms and harvest the resulting release of binding energy.

All these cases have one thing in common: they are multidimensional optimization problems. The design space is huge. In the case of nuclear fusion, you can manipulate many factors such as dimensions, material selection, operating temperatures, chemical catalysts, and many more design vectors. The key is to find an optimal composition of all these vectors to achieve the best possible result.
The same applies to battery technology. Batteries require the careful assembly of elementary particles and higher-level elements. But which elements should we use? How should we mix them? What should the electrode look like, and what electrolyte should be used? Batteries are a multidimensional problem with a multitude of optimization factors. For example, you can design a battery for high energy density, high performance, long cycle life, and/or low cost. All these factors impact the design choice. Hopefully, this illustrates that the energy problem is in fact a problem of choosing appropriate design factors to achieve an optimal result. The goal is to eventually develop an energy source that can be scaled at low cost and with moderate emissions. We want clean and low-cost energy. This can be achieved by finding the optimal combination of particles and elements to attain specific design goals.
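To make the multidimensional optimization framing concrete, here is a toy sketch. The design knobs, the scoring formula, and all constants are hypothetical placeholders, not real battery chemistry; the point is only that choosing design factors to balance competing goals amounts to searching a huge design space.

```python
import random

# Hypothetical toy model: a battery "design vector" with two knobs.
# The objective trades off energy density, cycle life, and cost.
# Every formula and constant here is an illustrative placeholder.

def score(design):
    energy_density = 250 * design["cathode_fraction"]                # Wh/kg (toy)
    cycle_life = 3000 * (1 - design["cathode_fraction"]) ** 0.5     # cycles (toy)
    cost = 100 + 80 * design["cathode_fraction"] + 50 * design["electrolyte_quality"]
    # A weighted sum collapses the multi-objective problem into one number.
    return 1.0 * energy_density + 0.05 * cycle_life - 0.5 * cost

def random_search(n_iterations=10_000, seed=42):
    """Naive random search over the design space; keeps the best design seen."""
    rng = random.Random(seed)
    best_design, best_score = None, float("-inf")
    for _ in range(n_iterations):
        design = {
            "cathode_fraction": rng.uniform(0.1, 0.9),
            "electrolyte_quality": rng.uniform(0.0, 1.0),
        }
        s = score(design)
        if s > best_score:
            best_design, best_score = design, s
    return best_design, best_score

best, s = random_search()
print(best, s)
```

Real material discovery replaces the toy `score` with expensive simulations or experiments, which is exactly why cheap, fast iteration matters so much.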
A more striking example of an energy problem is electromagnetic propulsion (EMP), which uses electric currents and magnetic fields to produce thrust. This is an energy harvesting technology that aims to convert electric energy into kinetic energy. The problem with EMP is that the charged particles involved are subject to many disturbances, and hence the efficiency is low. To achieve better results, we need to find ways to better assemble the particles and expose them to the electric current in more optimal ways. These examples illustrate that energy is, in essence, a problem that requires the careful assembly of fundamental particles.
Energy is a Material Science problem
When scientists talk about material science, they think about the properties of materials such as steel, copper, and lithium. Engineers take this insight to the next level and look for ways to manufacture materials that achieve certain desired properties. The key difference between science and engineering is that science asks what exists in nature, while engineering tries to change naturally occurring properties to reach a certain design goal. Most contemporary energy problems fall into this category.

The best example is battery technology, which aims to solve the problem of intermittent renewable energy sources such as solar and wind. Solar power can be harvested on a massive scale through photovoltaic technology. But the sun doesn’t shine all the time, so we need to find ways to store the energy. One way to solve this problem is through lithium-ion batteries. There are many more technologies people are working on, and Stanford recently hosted an event presenting their work in the field. But let’s focus on lithium-ion batteries to keep it simple. Such batteries can be optimized for various design goals. For example, you might want high energy density, meaning a lot of stored energy per kilogram of material, as required by applications such as flight, where weight is the limiting factor. Another design goal is a high cycle count with little loss of storage capacity. This goal is prevalent in electric vehicles, where charging happens often and storage capacity needs to be preserved. Other design goals could be charging speed, power output, cost, etc.
The key here is to recognize that design goals are just that: goals. Engineers need to agree on what they are actually trying to achieve and then find solutions to the problem. The solution is often a material science solution, as in the case of lithium-ion batteries. In the end, the work requires careful consideration of which materials to use and how to combine them.
Another related area is electricity transmission. Here, the problem is how to bring electrons from the source to the markets where people actually consume electricity. Transmission depends heavily on the conductive properties of materials; therefore, transmission is fundamentally a material science problem. In fact, advances in high-temperature superconductivity have opened new avenues of research and increased the potential for highly efficient transmission of electricity. Imagine building massive solar power plants in the Nevada desert and transmitting the electricity to New York with little loss. The same applies to other markets around the world: just think of the Sahara and Europe, or the Gobi and China. Electricity transmission is a material science problem, and solving it could substantially enhance the potential for renewable energy sources.
One promising area of research is superconductivity. Recently, there has been some noise around high-temperature superconductivity, and I believe it’s a great example of how material science can enhance the potential for energy efficiency. Superconductivity was discovered at the beginning of the 20th century as a corollary of low-temperature physics. When scientists managed to liquefy helium and cool metals to within a few degrees of absolute zero, they observed surprising effects that they later labeled "superconductivity". Today, superconductivity is considered a phase of matter in its own right, like gas, solid, or liquid. Materials in the superconducting phase have zero electric resistance, among other useful properties. While superconductivity was long considered a low-temperature effect, scientists at IBM Zurich discovered materials exhibiting similar properties at much higher temperatures. In my opinion, superconductivity at ambient conditions (which means room temperature and normal pressure) is the holy grail of energy. If we could engineer such materials, we’d open up enormous potential for energy harvesting and electric motors.
All these examples have one thing in common: They are material science problems. Hence, the energy problem can be formulated as a special application of material science.
Computer Science accelerates Material Science
My previous analysis of material science illustrates the importance of studying how elementary particles interact with one another and with their environment. These interactions come in many flavors. For example, surface technology studies the interaction of materials at their boundaries, where bonds are broken and particles are exposed to the outside. A typical example is the study of steel when it is cut and how the fresh surface interacts with the ambient atmosphere: questions about oxidation, resistance, robustness, and so on come up. Material science is fundamentally the study of how elementary particles interact. The same applies to materials engineering, where engineers hope to find better combinations of materials to achieve certain design goals.
Computer science can enhance this process by allowing for faster iterations at a lower cost. Iteration is the key to all innovation. Human progress happens best when people find ways to iterate rapidly, make errors, fix them, and repeat. Computers can accelerate this loop: you can simulate physical systems in virtual space and thus speed up the trial-and-error process. Errors can be made cheaply and repaired at low cost. Take the example of deep fakes developed with Generative Adversarial Networks (GANs). Deep fakes have a bad reputation, but the science and engineering behind them are very interesting. In a GAN, a generator network proposes candidates while a discriminator network critiques them, and the two improve each other through repeated competition. Imagine using that kind of adversarial iteration to develop better materials for electric conductivity, or to build better batteries. GANs are algorithmic methods for iterating rapidly towards a goal.
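A real GAN needs a deep-learning framework, but the generator-versus-critic loop can be caricatured in plain Python. Everything below is illustrative, not a real GAN: the "critic" is just a two-sample discrepancy measure standing in for a discriminator, and the "generator" hill-climbs two distribution parameters to fool it.

```python
import random
import statistics

rng = random.Random(0)

# "Real" data: samples from a target distribution the generator never sees directly.
real = [rng.gauss(4.0, 1.0) for _ in range(2000)]
real_mean, real_std = statistics.fmean(real), statistics.stdev(real)

def critic(mu, sigma, n=500):
    """Scores how distinguishable generated samples are from the real data
    (lower is better for the generator). A toy stand-in for a discriminator."""
    fake = [rng.gauss(mu, sigma) for _ in range(n)]
    return abs(statistics.fmean(fake) - real_mean) + abs(statistics.stdev(fake) - real_std)

# Generator starts far from the target and iterates to fool the critic.
mu, sigma = 0.0, 3.0
best = critic(mu, sigma)
for _ in range(3000):
    cand_mu = mu + rng.gauss(0, 0.1)
    cand_sigma = max(0.1, sigma + rng.gauss(0, 0.1))
    s = critic(cand_mu, cand_sigma)
    if s < best:  # keep only proposals the critic finds harder to detect
        mu, sigma, best = cand_mu, cand_sigma, s

print(f"learned mu={mu:.2f}, sigma={sigma:.2f} (target: 4.0, 1.0)")
```

The adversarial structure is the point: each round of propose-and-critique is one cheap iteration, and a simulated critic makes errors nearly free to correct.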
Some of the most interesting advancements in quantum computing are in developing better GANs. Quantum computers may offer advantages in generating samples from sparse data, which is part of the problem of training efficient GANs. Quantum computing itself is a fascinating field with enormous potential. While we are still groping in the dark as to how to use existing quantum hardware, the path forward is very exciting. With better quantum hardware, circuits can be developed that allow for more efficient deployment of quantum algorithms, algorithms that could not be run on classical computers. The fundamental difference between classical and quantum computers is that the latter can exploit quantum effects such as entanglement and interference. Once we have hardware that reliably exploits those quantum properties, quantum algorithms can be deployed, and applications such as GANs could, in principle, accelerate substantially.
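The interference effect mentioned above can be illustrated with a minimal state-vector simulation of a single qubit (a sketch on a classical machine, not real quantum hardware): applying the Hadamard gate twice returns the qubit to |0> with certainty, because the amplitudes for |1> cancel, whereas two fair coin flips would leave the outcome 50/50.

```python
import math

# One-qubit state-vector simulation illustrating interference.
# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-component amplitude vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
state = apply(H, state)  # amplitudes interfere: back to |0>

probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[1.0, 0.0]: the |1> amplitude cancelled destructively
```

A classical probabilistic bit cannot do this; it is exactly this cancellation of amplitudes that quantum algorithms exploit.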
Quantum computation is very exciting. But even classical computation has many paths to improvement. One interesting bottleneck is memory, which limits current workloads. Memory used to be an afterthought next to the processor. For decades, processing power increased at the speed of Moore’s Law, but modern workloads are often constrained by memory bandwidth and capacity, not processing speed. Take the problem of iteration in material science. To run through billions of iterations of how to combine elementary particles, memory is key. Results have to be stored in memory, compared with a new set of results, stored again, and so on, many times over. This iterative nature of workflows poses challenges for existing computer architectures; in fact, I’d argue it calls into question the von Neumann computing model itself. One of the professors I follow closely is Onur Mutlu from ETH Zurich, whose area of expertise is computer architecture. Mutlu’s focus is on designing hardware and hardware/software interfaces, such as processing-in-memory, that are more flexible and serve multiple workloads more efficiently. I strongly believe that innovations in computer architecture can directly accelerate progress in material science.
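The store-compare-repeat pattern described above can be sketched in a few lines. The "simulation" is a hypothetical placeholder function and the numbers are arbitrary; the contrast is between materializing every result in memory and streaming over them with constant memory, the kind of data-movement trade-off that memory-centric architectures target.

```python
# Toy contrast between two ways of scanning a large design space:
# materializing every result in memory versus streaming and keeping
# only the running best.

def simulate(design: int) -> float:
    """Stand-in for an expensive physics simulation (toy formula)."""
    return -((design - 70_000) ** 2)

N = 100_000

# Memory-hungry variant: store all N results, then search them (O(N) memory).
all_results = [simulate(d) for d in range(N)]
best_stored = max(range(N), key=all_results.__getitem__)

# Streaming variant: keep only the best result seen so far (O(1) memory).
best_streamed, best_score = 0, float("-inf")
for d in range(N):
    s = simulate(d)
    if s > best_score:
        best_streamed, best_score = d, s

assert best_stored == best_streamed == 70_000
```

At billions of iterations, the difference between these two layouts is the difference between a workload that fits in fast memory and one that drowns in data movement.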
Computers are modern telescopes
When Galileo Galilei studied how objects fall (legend has it by dropping balls from the Leaning Tower of Pisa), he relied on human observation. Later, he improved the engineering of telescopes, which allowed him to observe the stars more accurately. What Galilei and his contemporaries did was turn the focus from divine explanations of physical phenomena towards human observation. By observing natural phenomena, the physicists of the Renaissance were able to develop a new theory of nature, which culminated in Newton’s Principia. Human observation and inductive reasoning are not the only way to do science, but back then they were an innovation that allowed scientists to break free from scholastic doctrines.
Today we have a new lens through which to look at nature, and that is through computer-generated simulations. Fundamentally, computation is just another word for iteration, and math is a shortcut we can use to accelerate computation. Iteration is not the only way we can do science and engineering. However, similar to Galileo’s innovation of using human observation, computer-generated simulations can help accelerate problem solving in science and engineering.
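A tiny arithmetic example makes the "math is a shortcut for computation" point: summing the integers from 1 to n by brute-force iteration versus Gauss's closed-form formula n(n+1)/2.

```python
# Summing 1..n two ways: iteration versus the mathematical shortcut.

def sum_by_iteration(n: int) -> int:
    """Brute-force computation: n additions."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n: int) -> int:
    """Gauss's closed form: constant time, no iteration needed."""
    return n * (n + 1) // 2

assert sum_by_iteration(10_000) == sum_by_formula(10_000) == 50_005_000
```

Where no such closed form exists, as in the interaction of billions of particles, iteration is all we have, and making iteration fast and cheap is precisely what computers are for.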
Conclusion
Looking at energy through the lens of computer science offers various advantages. Similar to Galileo and his contemporaries, who used human observation as a new tool for science and technology, computer-generated simulation can be viewed as a new tool to accelerate science. In particular, computer science can help solve material science problems, which are at the heart of most modern energy problems. Computer-generated simulation can accelerate the development of new materials for batteries, electricity transmission, and solar energy harvesting. Combining these innovations could accelerate the advent of sustainable energy around the globe. Abundant energy is not a pipe dream. No law of physics prohibits large amounts of efficiently harvested energy. Human ingenuity can solve this problem, and computer science lies at the core of the endeavor. We encourage universities, investors, and businesses to view energy through the lens of computer science.