In my work as a scientist, I owe a great deal to serendipity.
In 2012 I was in the right place (IBM’s Almaden research laboratory in California) at the right time—and I did the “wrong” thing.
I was supposed to be combining three chemicals in a beaker to produce a known substance. The goal was to replace one of the standard components with a version derived from plastic waste, to improve the sustainability of strong plastics called thermoset polymers.
Instead, when I combined two of the components, a hard, white plastic formed in the beaker. It was so hard I had to smash the beaker to get it out. What’s more, when it sat in dilute acid overnight, it reverted to its precursor components.
Without trying to, I had found a whole new family of recyclable thermoset polymers.
Had I dismissed it as a failed experiment and not followed up on it, we would never have known what we had made.
It was scientific serendipity at its best, in the grand tradition of Roy Plunkett, who invented Teflon by accident while researching the chemistry of refrigerant gases.
Today I have a new goal: to reduce the need for serendipity in chemical discovery.
Challenges such as the climate crisis and COVID-19 are so large that our solutions can’t depend on chance alone. Nature is complex and powerful, and we need to be able to model it accurately if we want to achieve the scientific breakthroughs we need. In particular, if we want to move the discipline of chemistry forward, we need to be able to understand the energetics of chemical reactions with a high level of confidence.
This is not a new insight, but it highlights a key constraint: predicting the behavior of even simple molecules with complete accuracy lies beyond the capabilities of the most powerful classical computers.
This is an area where quantum computing could make substantial progress in the coming years.
Because classical computers can’t exactly calculate the quantum behavior of more than a few electrons—the computations are too massive and time-consuming—modeling chemical reactions on them requires approximations. Each approximation reduces the model’s value and increases the amount of lab work chemists must do to validate and guide the model.
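To make the scaling concrete, here is a rough, purely illustrative back-of-the-envelope sketch in Python (assuming the quantum state is stored as a double-precision statevector): each electron spin orbital doubles the size of the state, so an exact description of n spin orbitals needs 2^n complex amplitudes.

```python
# Illustrative estimate, not a simulation: memory needed to hold the
# exact quantum state of n spin orbitals on a classical computer.
# The statevector has 2**n complex amplitudes, 16 bytes each at
# double precision.
def statevector_bytes(n_spin_orbitals: int) -> int:
    return (2 ** n_spin_orbitals) * 16

for n in (10, 20, 30, 50):
    print(f"{n:>2} spin orbitals -> {statevector_bytes(n):,} bytes")
# 30 spin orbitals already need ~16 GiB; 50 need ~16 PiB, far beyond
# any classical machine. Hence the approximations.
```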
Quantum computing, on the other hand, works differently. Because each quantum bit, or qubit, can represent the occupation of a single electron spin orbital, and because qubits exploit quantum phenomena such as entanglement, a quantum computer can describe electron-electron interactions without approximation.
Quantum computers have now progressed to the point where they can begin to model the energetics and properties of small molecules like lithium hydride, potentially paving the way for models that provide clearer paths to discovery than we currently have.
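To give a flavor of what such a calculation looks like in practice, here is a minimal sketch using IBM’s open-source Qiskit Nature package. The choices in it are illustrative: the API names follow roughly version 0.7 (they change between releases), the bond length and basis set are textbook defaults, and a classical exact eigensolver stands in for the variational quantum algorithm one would run on hardware.

```python
# A minimal sketch of a lithium hydride ground-state calculation with
# Qiskit Nature (API names roughly as of version 0.7; illustrative).
from qiskit_algorithms import NumPyMinimumEigensolver
from qiskit_nature.second_q.algorithms import GroundStateEigensolver
from qiskit_nature.second_q.drivers import PySCFDriver
from qiskit_nature.second_q.mappers import JordanWignerMapper

# Set up LiH near its equilibrium bond length in a minimal basis.
driver = PySCFDriver(atom="Li 0 0 0; H 0 0 1.6", basis="sto3g")
problem = driver.run()

# Jordan-Wigner maps the occupation of each spin orbital onto one qubit.
mapper = JordanWignerMapper()

# The molecule is small enough to diagonalize exactly on a classical
# computer; on quantum hardware a variational solver (VQE) would go here.
solver = GroundStateEigensolver(mapper, NumPyMinimumEigensolver())
result = solver.solve(problem)
print(result.total_energies)  # ground-state energy in Hartree
```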
The field of quantum chemistry is not new.
In the early twentieth century, the German physicists Walter Heitler and Fritz London showed that quantum mechanics could be used to understand the covalent bond.
The rise in computing power available to chemists in the late twentieth century made it possible to do some basic modeling on classical systems.
Even so, when I was working on my Ph.D. at Boston College in the mid-2000s, it was rare for bench chemists to have a working understanding of what computational chemistry could do.
The disciplines (and the skill sets they required) were vastly different. Rather than exploring the insights of computational methods, bench chemists stuck to trial and error, hoping for an informed but often fortuitous discovery.
I had the good fortune to work with Amir Hoveyda’s research group, which was one of the first to understand the benefits of integrating experimental and theoretical studies.
Thanks to the maturation of the theoretical discipline and to bench chemists’ gradual adoption of these models into their work, theoretical study and modeling of chemical reactions to interpret experimental data are now routine.
The models’ output serves as a useful feedback loop for lab discoveries.
For example, the explosion of chemical data from high-throughput screening, a trial-and-error experimental method, has allowed for the creation of robust chemical models.
Drug discovery and material experimentation are two examples of commercial applications of these models.
The limiting factor of these models is the need to simplify them.
At each stage of the simulation, you must choose an area where you will sacrifice accuracy to stay within the computer’s practical limits.
You’re working with “coarse-grained” models, to use industry jargon.
Each simplification reduces the model’s overall accuracy and limits its usefulness in the quest for knowledge. The coarser your model, the more time-consuming your lab work will be.
The quantum approach is different.
Quantum computing, in its purest form, would allow us to simulate nature exactly as it is, with no approximations.
“Nature isn’t classical, dammit,” Richard Feynman said, “and if you want to make a simulation of nature, you’d better make it quantum mechanical.” In recent years, quantum computers have advanced tremendously in power.
In 2020, IBM more than quadrupled its quantum volume—a metric capturing both the number and the quality of qubits in a system—and is on track to build a device with more than 1,000 qubits by 2023, up from single digits in 2016.
Others in the industry have made similarly aggressive promises about their devices’ power and capabilities.
GETTING THINGS STARTED
So far, we’ve used quantum computers to simulate energies associated with molecular ground states and excited states.
These computations will let us investigate a wide range of reaction pathways, as well as compounds that react with light.
We’ve also used them to simulate the dipole moment in small molecules, a step toward understanding how electrons are distributed between the atoms in a chemical bond, which can help us predict how those molecules will behave.
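For comparison, the classical mean-field reference for such a dipole-moment calculation takes only a few lines in the open-source PySCF package. This is an illustrative sketch, not the quantum workflow itself, and the molecule, geometry, and basis set are arbitrary choices:

```python
# Classical (Hartree-Fock) reference for a small-molecule dipole moment,
# computed with PySCF. Geometry and basis set are illustrative.
from pyscf import gto, scf

mol = gto.M(atom="Li 0 0 0; H 0 0 1.6", basis="sto-3g")
mf = scf.RHF(mol)       # restricted Hartree-Fock mean-field model
mf.kernel()             # solve for the ground-state wavefunction
print(mf.dip_moment())  # dipole vector, reported in Debye
```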
Looking ahead, we’ve begun laying the groundwork for modeling larger chemical systems on future quantum computers, and we’ve been exploring which kinds of calculations, on which kinds of molecules, are tractable on the quantum computers of today.
What happens, for example, if the system contains an unpaired electron? That gives the molecule spin and makes the computations more difficult. How do we adjust the algorithm so that it reproduces the expected results? Research like this will one day allow us to study radical species, or molecules with unpaired electrons, which are notoriously difficult to study in the lab and to simulate classically.
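To see why an unpaired electron complicates things, consider what it takes even classically: an open-shell molecule needs spin-unrestricted machinery that a closed-shell one does not. A brief PySCF sketch (illustrative only; the hydroxyl radical is my example here, not one drawn from our studies):

```python
# Open-shell (radical) setup in PySCF: the hydroxyl radical has nine
# electrons, so one is necessarily unpaired. Illustrative sketch only.
from pyscf import gto, scf

# spin = number of unpaired electrons (alpha minus beta)
mol = gto.M(atom="O 0 0 0; H 0 0 0.97", basis="sto-3g", spin=1)
mf = scf.UHF(mol)  # unrestricted Hartree-Fock handles the open shell
mf.kernel()
print(mf.e_tot)    # total ground-state energy in Hartree
```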
To be sure, all of this work can be replicated using traditional computers.
Even so, none of it would have been possible five years ago with the quantum technology that existed.
With recent advances, quantum computing is poised to become a powerful catalyst for chemical discovery in the near future.
I don’t see a future in which chemists simply plug algorithms into a quantum device and get a clear set of data to use in the lab right away.
What is feasible, and may already be within reach, is incorporating quantum models as one step in existing processes that rely on classical computers.
In this approach, classical methods handle the large, computationally demanding parts of the model that can tolerate approximation: the bulk of an enzyme, a polymer chain, or a metal surface, for example. The quantum method then models the pieces where accuracy matters most, such as the chemistry inside an enzyme’s pocket, the explicit interactions between a solvent molecule and a polymer chain, or the hydrogen bonding in a small molecule.
We’d still accept approximations in some parts of the model, but the most critical parts of the reaction would be much more precise.
We’ve already made significant progress exploring the idea of embedding quantum electronic-structure calculations within a classically computed environment.
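One illustrative way to set this up in software is an active-space reduction: solve a handful of chemically critical orbitals at high accuracy and freeze the rest at the classical mean-field level. A sketch using Qiskit Nature’s ActiveSpaceTransformer (names roughly per version 0.7; the molecule and the size of the active space are arbitrary choices):

```python
# Sketch of a hybrid workflow: classical Hartree-Fock for the full
# molecule, accurate treatment for a small active space only.
# Illustrative; API names follow Qiskit Nature ~0.7.
from qiskit_algorithms import NumPyMinimumEigensolver
from qiskit_nature.second_q.algorithms import GroundStateEigensolver
from qiskit_nature.second_q.drivers import PySCFDriver
from qiskit_nature.second_q.mappers import JordanWignerMapper
from qiskit_nature.second_q.transformers import ActiveSpaceTransformer

# Full problem, solved classically at the mean-field level first.
problem = PySCFDriver(atom="Li 0 0 0; H 0 0 1.6", basis="sto3g").run()

# Keep 2 electrons in 2 spatial orbitals for the accurate treatment;
# everything else stays frozen in the classical core.
transformer = ActiveSpaceTransformer(num_electrons=2, num_spatial_orbitals=2)
active_problem = transformer.transform(problem)

# Exact diagonalization stands in for the quantum solver here.
solver = GroundStateEigensolver(JordanWignerMapper(), NumPyMinimumEigensolver())
print(solver.solve(active_problem).total_energies)
```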
This method may be used in a variety of situations.
Faster advances in modeling polymer chains could help us address the problem of plastic pollution, which has worsened since China cut back its imports of recyclable materials.
The energy costs of recycling in the United States are still relatively high; if we could develop plastics that are easier to recycle, we could significantly reduce the amount of waste produced.
Beyond plastics, the demand for materials with lower carbon emissions is growing, and the ability to manufacture substances with lower carbon footprints, such as jet fuel and concrete, is critical to reducing our total global emissions.
The next generation of chemists coming out of graduate schools around the world has a level of data fluency that was unthinkable in the 2000s.
However, this fluency runs into physical limits: classical computers simply cannot handle the full complexity of substances as common as caffeine.
In this environment, no amount of data fluency can replace the need for serendipity: to make significant progress, you’ll always need luck on your side.
Future chemists who embrace quantum computers, on the other hand, are likely to have a lot more luck.
Jeannette M. Garcia is senior manager for the Quantum Applications, Algorithms and Theory team at IBM Research. Her team’s research focuses on computational science applications and theory for quantum computing.