Research Article | Quantum Mechanics

Experimentally modeling stochastic processes with less memory by the use of a quantum processor


Science Advances  03 Feb 2017:
Vol. 3, no. 2, e1601302
DOI: 10.1126/sciadv.1601302


Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems.

  • quantum information
  • complexity
  • quantum optics
  • stochastic simulation
  • quantum measurement


What new tasks can be enhanced by quantum information science? It is a matter of practical importance and fundamental interest to find new additions to the impressive list of known quantum information benefits that include the exponential speedup provided by Shor’s factorization algorithm (1) and by algorithms for simulating quantum systems (2), the physically guaranteed security of quantum key distribution (3), and the sensitivity advantages in using certain quantum states for metrology (4, 5). Here, we experimentally demonstrate a fundamentally new quantum advantage (6): Quantum information processing can reduce the memory required to simulate a stochastically evolving classical system by encoding information in nonorthogonal quantum states. Limitations on memory availability are a key consideration in computer simulation—a ubiquitous tool in modern society (7)—as the state space grows exponentially with the size of the system.

Our work is of particular relevance to the field of complexity theory. There, the phenomena that people seek to understand, such as neural networks or the dynamics of the stock market, consist of a myriad of interacting components whose internal details are too complex or inaccessible for one to model their behavior from first principles. In such cases, the system is instead typically regarded as a black box such that one has access only to some observable output. The task is then to isolate key indicators of future behavior from these data—and these data alone—without any knowledge of the system's internal mechanism. Many different models of this type can be constructed for a given process. Of these, simpler models—those that store less data without sacrificing predictive accuracy—represent a better understanding of exactly which observations in the past matter for the future. Our experimental work aims to demonstrate that, in taking this motivation to its ultimate conclusion, quantum effects can provide a powerful resource for simplifying models.

Our technique is fundamentally different from quantum data compression (8, 9). That protocol is concerned with preserving all input data and thus encodes orthogonal signal states into orthogonal encoded states. In contrast, our work is concerned with more efficient ways of discarding useless data (in the sense of being useless for future prediction), by encoding classically distinct states as nonorthogonal quantum states, and processing them coherently.

To demonstrate the quantum advantage provided for this kind of simulation task, we need to quantify the minimum amount of memory (that is, stored information) required to simulate a process. Mathematically, we can characterize the observable behavior of a dynamical process by a joint probability distribution $P(\overleftarrow{X}, \overrightarrow{X})$, where $\overleftarrow{X}$ and $\overrightarrow{X}$ represent random variables that govern the observed behavior of the process in the past and future, respectively. A simulator, implementing a model for the process, operates by storing information about $\overleftarrow{X}$ within some physical system S such that, for each instance of the process with a particular past $\overleftarrow{x}$, it can be set to a particular state that allows for the reproduction of expected future statistics, that is, generating a random variable sampled from $P(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x})$.

The complexity of the simplest simulator—the one for which S has minimal entropy—is regarded as an intrinsic property of the process being simulated, capturing the bare minimum information one must store to replicate the statistics of the process (10, 11). In complexity theory, the minimal entropy of S is known as the statistical complexity C. The most complex processes reside between complete randomness (maximum system entropy) and complete order (zero system entropy) (12). At each extremity, the entropy of the simulator is zero: C = 0. The statistical complexity has been applied to a wide range of problems, including self-organization (13), the onset of chaos (14), and the complexity of a protein configuration space (15).


The statistical complexity of a stochastic process can be determined by dividing the set of all possible pasts into equivalence classes such that all members of a given class yield coinciding future predictions. The implementation of such a model can replicate future statistics by recording only the equivalence class s that $\overleftarrow{x}$ belongs to. In the literature, these equivalence classes are known as causal states (14); thus, causal states encode the information that is required to be stored. The complexity of such a simulator is then given by its entropy (16)

$C_c = -\sum_{s \in \mathcal{S}} p_s \log_2 p_s$   (1)

where the sum is taken over all causal states $s \in \mathcal{S}$, and $p_s$ is the probability that $\overleftarrow{X}$ lies in s. This representation turns out to be classically optimal (14)—no classical model can simulate a stochastic process using less memory than Cc. Thus, Cc coincides with the statistical complexity.
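Concretely, Eq. 1 is just the Shannon entropy of the occupation probabilities of the causal states. A minimal Python sketch (the function name is ours, introduced for illustration):

```python
import math

def statistical_complexity(causal_state_probs):
    """Shannon entropy of the causal-state occupation probabilities (Eq. 1)."""
    return -sum(p * math.log2(p) for p in causal_state_probs if p > 0)

# Two equiprobable causal states give C_c = 1 bit; a single
# (deterministic) causal state gives C_c = 0.
print(statistical_complexity([0.5, 0.5]))  # -> 1.0
print(statistical_complexity([1.0]))       # -> 0.0
```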

Naïvely, one might expect such optimal models to waste no information—any information they store should be of relevance to the future. Surprisingly, this is not so. Classical models are almost always inefficient. Even in very simple processes, the statistical complexity Cc is usually strictly greater than the excess entropy E, the mutual information between past and future outputs (10). Some information stored within a simulator is simply wasted. This surprising wastefulness of even the provably most efficient classical models can be very significant for more complex systems and contributes to an unavoidable energy cost in stochastic simulation (17, 18).

Quantum information processing can drastically reduce this waste. It has been theoretically demonstrated that, for any process whose optimal classical model satisfies Cc > E, there exists a quantum model that requires a smaller memory, Cq < Cc (6). Quantum models assign each causal state s a corresponding quantum causal state $|\varsigma_s\rangle$. The amount of memory required to retain $|\varsigma_s\rangle$ is then given by the von Neumann entropy Cq = −Tr[ρ log2 ρ], where $\rho = \sum_s p_s |\varsigma_s\rangle\langle\varsigma_s|$ represents the mixed state of the associated quantum memory that takes on state $|\varsigma_s\rangle$ with probability $p_s$, and Tr[⋅] denotes the matrix trace. Quantum models then derive their advantage from not requiring the states $|\varsigma_s\rangle$ to be mutually orthogonal, without sacrificing their capacity to replicate desired future statistical behavior. This nonorthogonality ensures that Cq < Cc, allowing quantum models to save additional memory over their classical counterparts (16). This implies that, when simulating N instances of such stochastic processes in parallel, quantum compression allows one to store all necessary past information within NCq qubits in the asymptotic limit of large N, whereas classical simulators will necessitate the use of NCc bits.

We experimentally demonstrate these ideas by modeling a specific, simple, stochastic process. It applies to many different physical systems, one of which is illustrated in Fig. 1A: a pair of binary switches. At each time step j, one of the switches is chosen at random and flipped with probability P. The system then outputs 0 if the switches are aligned and 1 if they are anti-aligned. The obvious (perhaps naïve) model keeps track of the state of both switches, resulting in a memory of entropy 2. However, we may optimize this classical model by observing that the parity of the switches corresponds to the causal states of the system (any past histories with coinciding switch parity have statistically identical futures). Thus, to simulate its statistics, we need only store a single binary value, s, that takes on 0 and 1 with equiprobability. A potential representation of the simplest model for the switches in Fig. 1A is illustrated in Fig. 1B, which uses antiparallel red and blue vectors to represent the two causal states.
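That parity is a sufficient statistic can be checked exactly by marginalizing the four-configuration Markov chain. The sketch below (illustrative Python, with an assumed flip probability P = 0.8) verifies that the parity flips with probability P from every switch configuration, so a single parity bit reproduces the output statistics:

```python
import itertools

P = 0.8  # switch-flip probability (illustrative value)

def step_distribution(state):
    """Distribution over next switch configurations: pick one of the two
    switches at random, flip it with probability P."""
    a, b = state
    dist = {}
    for switch in (0, 1):              # which switch is chosen
        for flipped in (True, False):  # whether it actually flips
            p = 0.5 * (P if flipped else 1 - P)
            nxt = (a ^ int(flipped and switch == 0),
                   b ^ int(flipped and switch == 1))
            dist[nxt] = dist.get(nxt, 0.0) + p
    return dist

# Lumpability check: the probability that the parity flips is P from every
# configuration, so one parity bit suffices as the causal state.
for state in itertools.product((0, 1), repeat=2):
    parity = state[0] ^ state[1]
    p_parity_flip = sum(p for nxt, p in step_distribution(state).items()
                        if (nxt[0] ^ nxt[1]) != parity)
    assert abs(p_parity_flip - P) < 1e-12
print("parity is a sufficient statistic; flip probability =", P)
```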

Fig. 1 Representation of a stochastic system, with classical and quantum statistical models, at the (j + 1)th time step of evolution.

(A) The example system is a pair of switches whose settings determine the value of an output bit and are randomized by a probabilistic process during the step (see text for details). (B) Because the output is determined solely by the parity of the switches, a 1-bit classical model can be used to represent the system and to produce equivalent output statistics. In the example shown, the orientation (up or down) of the vector [or equivalently its color (red or blue)] represents the state of the model and determines the output bit xj + 1. (C) A quantum model allows for reduced complexity (see text for details) by encoding the state into nonorthogonal quantum states (the multicolor vectors, with nonpolar orientations, represent quantum superpositions of logical states). (D) A conceptual classical circuit (double lines represent classical bit rails) for realizing the operation of the classical model above. The classical input state (top rail) at time tj is subjected to a probabilistic action, potentially flipping the state. The model state is then correlated with a meter bit (bottom rail), initially in the logical zero state, via a controlled-NOT (CNOT) gate. Reading out the meter via a logical (Pauli “Z”) measurement provides the output bit. After readout, the model state is passed on to the next time step, along with a fresh meter bit. (E) A conceptual quantum circuit (glowing lines represent qubit states) for realizing the quantum model above. The operation is similar to the classical circuit in (D), except that the probabilistic action is delayed until the readout of the meter (as above), which yields a random result, and collapses the model state because of the entanglement generated by the CNOT gate acting on superposition states. (F and G) Conceptual circuits of the classical and quantum models, as experimentally realized. 
The key difference (for practical reasons only) is the interruption of the model states for characterizing measurements (denoted T, for quantum state tomography) with subsequent repreparation.

Figure 2A summarizes how the dynamics of this process are completely captured by transitions between the two causal states. For a single flip probability P, the steady-state occupation probabilities $p_0$ and $p_1$ for the two causal states coincide due to symmetry. Thus, this process, in general, has a statistical complexity of Cc = 1. The only exception is P = 0.5, because each output bit is then completely random (uncorrelated with earlier output bits), so that no memory of previous outcomes is required. Because the P = 0.5 protocol can be run without a memory, its theoretical Cc is 0, consistent with 0 bits of memory. [However, note that if the protocol is run with a single bit of (unnecessary) memory, that bit will be maximally mixed on average and the memory will have an entropy of 1 bit; see results below.]

Fig. 2 Replicating statistical behavior with causal states—transition diagram for the model.

(A) In the example in Fig. 1, the probability of the model state bit transitioning from one causal state (denoted here by a circle) to the other is P, and thus, the probability of remaining in the same state is 1 − P. (B) In general, a two–causal state model may have a transition probability, either $P_0$ or $P_1$, that depends on the causal state at the beginning of the step. The case we consider for most of this work, $P_0 = P_1 = P$, is a particular example.

Figure 1C provides a conceptual representation of the quantum causal states. The quantum model makes use of a nonorthogonal encoding such that each of the two values of s is assigned to a corresponding quantum state $|\varsigma_s\rangle$, namely

$|\varsigma_0\rangle = \sqrt{1-P}\,|0\rangle + \sqrt{P}\,|1\rangle$   (2)

$|\varsigma_1\rangle = \sqrt{P}\,|0\rangle + \sqrt{1-P}\,|1\rangle$   (3)

Here, |0⟩ and |1⟩ are the logical basis states of a qubit. The quantum-enhanced model saves further memory by sacrificing absolute knowledge of switch parity—it distinguishes the two possible immediate pasts only to the extent required to generate correct future statistics. The storage of $|\varsigma_s\rangle$ in a physical system S, rather than the classical states s, results in a reduced simulator entropy of

$C_q = -\mathrm{Tr}[\rho \log_2 \rho]$   (4)

where

$\rho = \tfrac{1}{2}\left(|\varsigma_0\rangle\langle\varsigma_0| + |\varsigma_1\rangle\langle\varsigma_1|\right) = \tfrac{1}{2}\mathbb{1} + \sqrt{P(1-P)}\,\hat{\sigma}_x$   (5)

represents the state of S averaged over possible causal states, and $\hat{\sigma}_x$ is the Pauli X operator. The coherence in Eq. 5 comes from the nonorthogonality, which guarantees reduced complexity Cq < Cc for any 0 < P < 1 (except P = 0.5, where Cq = Cc = 0). For P close to 0.5, Cq can be arbitrarily small, whereas Cc = 1.
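Under this encoding, ρ in Eq. 5 has eigenvalues 1/2 ± √(P(1−P)), so Cq in Eq. 4 reduces to a binary entropy. A short numerical check (hypothetical helper name; assumes the state assignment of Eqs. 2 and 3):

```python
import math

def cq_symmetric(P):
    """Von Neumann entropy (Eq. 4) of rho = (|s0><s0| + |s1><s1|)/2 for the
    quantum causal states of Eqs. 2 and 3.  Since
    rho = I/2 + sqrt(P(1-P)) * sigma_x, its eigenvalues are
    1/2 +/- sqrt(P(1-P))."""
    lam = math.sqrt(P * (1 - P))
    eigs = (0.5 + lam, 0.5 - lam)
    return -sum(e * math.log2(e) for e in eigs if e > 0)

print(cq_symmetric(0.8))  # ~0.469, versus the classical C_c = 1
print(cq_symmetric(0.5))  # 0.0: the two quantum states coincide
```

At P = 0.8 this gives Cq ≈ 0.47, already well below the classical value Cc = 1, and Cq → 0 as P → 0.5, where the two quantum causal states become identical.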

Figure 1 (D and E) shows classical and quantum logical circuits, respectively, that implement these models. The operation of the circuits is explained in detail in the caption, but the key point is that the (j + 1)th simulation step (going from discrete time tj to tj + 1, say) involves taking the memory as input, applying the probabilistic operation, and generating a classical output xj + 1. In the classical case, the probability P of a flip is inserted externally, but in the quantum case, it comes from the intrinsic randomness of quantum measurements on nonorthogonal states. In either scenario, the resulting predictive model can faithfully replicate future statistical behavior. That is, when initialized in the appropriate (quantum) causal states at time t, the future outputs are statistically indistinguishable from those of the original process being modeled.
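The measurement statistics of one quantum step can be sketched directly from the amplitudes: the CNOT copies the causal-state label into the meter, and reading the meter in the logical basis yields the output bit with flip probability P. This models only the statistics, not the optical implementation, and assumes the state assignment of Eqs. 2 and 3; the model register is then reprepared in the causal state matching the observed outcome, as in the experimental circuits:

```python
import math

P = 0.8  # flip probability (illustrative)

def quantum_step(s):
    """One conceptual simulation step (cf. Fig. 1E): prepare the quantum
    causal state for s, entangle it with a |0> meter qubit via CNOT, and
    return the meter readout distribution (the output bit x_{j+1})."""
    # amplitudes of the causal state a|0> + b|1>  (Eqs. 2 and 3)
    a, b = ((math.sqrt(1 - P), math.sqrt(P)) if s == 0
            else (math.sqrt(P), math.sqrt(1 - P)))
    # CNOT (model as control) gives a|00> + b|11>; the meter marginal
    # therefore assigns probability a^2 to outcome 0 and b^2 to outcome 1.
    return {0: a * a, 1: b * b}

for s in (0, 1):
    out = quantum_step(s)
    assert abs(out[1 - s] - P) < 1e-12  # the output flips with probability P
print(quantum_step(0))
```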

Experimental implementation

We implemented the quantum switch model using a photonic quantum logic circuit. We compared it with the theoretical classical bound and a classical switch model that we also implemented with a photonic circuit. Figure 1 (F and G) shows the mapping of the conceptual models onto what we realized experimentally. Experimentally processing either classical states (classical model) or quantum states (quantum model) required a CNOT gate, as well as two single photons—one to encode the state of the model and one to facilitate readout. We used a linear optics controlled-Z (CZ) gate (Fig. 3) with local unitary operations, and spontaneous parametric downconversion for photon generation, to realize these (see Materials and Methods).

Fig. 3 Experimental setup.

Photons from a continuous wave–pumped spontaneous parametric downconversion (SPDC) source are prepared in the relevant input states by half-wave plates (HWPs) and are incident on a linear optics CNOT gate realized with partially polarizing beam splitters (PPBS) (see Materials and Methods). Converting between classical and quantum models requires changing the input states from classical (orthogonal) polarization states to nonorthogonal superposition states. Measurement of one output determines the output bit at the current time step, and the other output is tomographically characterized over many measurement runs to determine the state of the model and its entropy. Key elements include polarizing beam splitters (PBS), PPBS, quarter wave plates (QWP) and HWP, and avalanche photodiode (APD) single-photon detectors.

In the classical circuit, the causal states are encoded in orthogonal logical photon polarization states, the equivalent of classical bits. The CNOT gate performs a classical XOR (exclusive-OR) operation, mapping the system state (after the probabilistic operation) onto a meter bit, which is read out via a destructive projective measurement to provide the (j + 1)th data value of the model output.

In the quantum circuit, the relevant quantum causal states are encoded in nonorthogonal photon polarization states, as per Eqs. 2 and 3. The CNOT gate produces an entangled state between the model state and a meter qubit. The probability of a flip is determined by the degree of orthogonality of the causal states. Destructive projective measurement of the first qubit after the CNOT gate produces a classical output, which is the (j + 1)th data value of the model output, and the corresponding collapse of the quantum state on the other photon implements the probabilistic operation on the model qubit for the next time step.

To verify the statistical complexity of the simulation, we measure the entropy of the model register via quantum state tomography at the end of the time-step circuit. This requires a destructive measurement of the photonic memory (qu)bit and, consequently, repreparation using classical logic. This is a slight practical difference from the theoretical circuits of Fig. 1 (D and E) and is necessary only for the sake of verifying the information storage requirements of the quantum model.

Experimental determinations of the statistical complexity, for both classical and quantum models, are shown in Fig. 4A, where they are compared with theoretical predictions. We collected data for various values of P ranging from 0 to 1 at intervals of 0.1. For a wide range of P values, the measured Cq falls well below the classical limit Cc, as predicted by theory. Small deviations in Cq arise from slight imperfections in the operation of the CNOT gate and in the preparation of the input states. Figure 4B shows theoretical and experimental single-qubit density matrices, for the symmetric case where P = 0.8, and provides a good example of the strong agreement between theory and experiment.

Fig. 4 Experimental data for classical and quantum models of the stochastic process.

(A) Experimentally measured statistical complexities (entropy) for the classical and quantum model states, sampled for a range of values of P (= $P_0$ = $P_1$). Blue squares are the quantum data, and black diamonds are the classical data. The orange solid line represents the theoretically calculated entropy for the classical scenario. The black curve represents the theoretically calculated entropy for the quantum scenario. Error bars are 1 SD, derived from Poissonian counting statistics. Error bars not shown are much smaller than the data points. The orange triangle denotes the classical prediction for P = 0.5, where no memory is required for the corresponding completely random process. (B) Real parts of the tomographically determined equilibrium density matrix, at the model state output, for P = 0.8. Top, classical model; bottom, quantum model; left, theory; right, experiment. Imaginary components (small) are not shown. Fractional uncertainties in the density matrix reconstruction are comparable in size to the uncertainties in (A).

The classical scenario for this model uses orthogonal logical states, which are invariant with P. Experimentally, this leads to a single data point for the classical symmetric case because the statistics are independent of P. Experimental imperfections, as discussed, led to a measured value of Cc very slightly less than the predicted value of unity (Fig. 4A). The imperfections, at the ≲0.1% level, bias the equilibrium statistics. Note that measuring a value of less than unity does not imply that the classical bound of unity is incorrect but rather that our slightly imperfect experiment implements a classical model of a slightly different process, one with a statistical complexity marginally less than 1. In principle, the same effect could cause the quantum statistical complexity to be lower than that of the expected process. In our experiment, we used careful characterization of the experimental circuit, state preparations, and measurements to ensure that the implemented model was very close to the desired model. Because we implemented our classical protocol with 1 bit of memory at P = 0.5, we observe Cc(0.5) ≈ 1, as discussed previously. However, no memory is required, in principle, for either the classical or quantum simulation at P = 0.5; hence, there is no quantum advantage in theory for this special case.

Our setup can also be generalized to model a class of more general stochastic processes, including the case where the probabilities of transitioning between the two causal states do not coincide (see Fig. 2B). This is the case, for example, when the probability of flipping a switch depends on its current parity. Although the causal states of such a process remain unchanged, this generalization does affect the transition probabilities between the two causal states and, thus, their equilibrium distribution. In general, $p_0 \neq p_1$, and thus, Cc ≠ 1 (see Materials and Methods for details).

We experimentally tested one such case, where $P_0$ = 0.3 and $P_1$ = 0.9. The experimental implementation is the same as before, except that the states $|\varsigma_0\rangle$ and $|\varsigma_1\rangle$ are no longer symmetrically distributed around the equal superposition $(|0\rangle + |1\rangle)/\sqrt{2}$. The experimentally determined entropy for the quantum model is Sq = 0.19 ± 0.01, much lower than that of the equivalent classical model, Sc = 0.818 ± 0.001. Note that these values are slightly in excess of the theoretically predicted values of 0.12 and 0.81, respectively, which we attribute to a combination of slightly imperfect state preparation and slightly imperfect CNOT gate operation.


In complexity theory, the statistical complexity of a stochastic process is considered as an intrinsic quantifier of its structure, representing the ultimate limit in the amount of memory required to optimally simulate its future statistics. Here, we have experimentally demonstrated that this limit can be surpassed with quantum processing. Stochastic processes permeate quantitative science, modeling diverse phenomena from neural networks to financial markets. In complexity theory, the construction of the simplest such models that replicate their observable behavior has played an important role in understanding their hidden structure. Our results present a proof of principle that these existing methods can be enhanced through quantum technology. Recent theoretical work indicates that quantum models can be further improved for non-Markovian processes (19), and our technology could be adapted to realize these designs.

In the short term, it will be possible to implement simulations with few-qubit systems, in line with the current state of the art in quantum computer logic. As realizable quantum processor circuits increase in scale, it will be possible to implement this for large numbers of qubits, in one or more of the scalable platforms [including, but not limited to, optics, spins in solids (20), trapped ions (21), and superconductors (22)]. We note that the particular optical gate presented here is not suitable for scaling to large circuits in its current form, but optical quantum logic is scalable with more sophisticated implementations (23–25).

Because the amount of information classical models waste often scales with the complexity of the processes they model, the adoption of our methods could have significant potential in simplifying more complex simulations. This highlights not only quantum theory’s relevance in understanding the microscopic world but also its importance in studying the complex macroscopic systems that are characteristic of everyday life.


Materials and Methods

Photon source and CNOT gate

A source of polarization-unentangled photon pairs was realized using type I spontaneous parametric downconversion in bismuth triborate (BiBO). The source produced photon pairs at 820 nm when pumped with a 410-nm, continuous-wave, 60-mW diode laser. The classical or quantum logic was implemented by constructing a linear optics CNOT gate as shown in Fig. 3. [In practice, the CNOT gate was realized using a CZ gate (26) and Hadamard rotations, which were incorporated into the settings of the wave plates before and after the gate.] To determine how well the CNOT gate was operating (27), we attempted to generate a maximally entangled state from separable inputs, with the resultant 2-qubit density matrix reconstructed via quantum state tomography. The fidelity of the state produced by the CNOT gate with the desired maximally entangled state was measured to be 0.97 ± 0.01.
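The CZ-plus-rotations construction can be sanity-checked numerically: a CNOT equals a CZ conjugated by Hadamards on the target qubit. A small matrix check in pure Python (no external libraries; helper names are ours):

```python
import math

def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    m = len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(len(A) * m)] for i in range(len(A) * m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1 / math.sqrt(2)
I2 = [[1, 0], [0, 1]]
H = [[s, s], [s, -s]]                                  # Hadamard
CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

IH = kron(I2, H)                   # Hadamard on the target (meter) qubit
built = matmul(IH, matmul(CZ, IH))
assert all(abs(built[i][j] - CNOT[i][j]) < 1e-12
           for i in range(4) for j in range(4))
print("CNOT = (I x H) CZ (I x H) verified")
```

In the experiment, these Hadamard rotations are absorbed into the wave-plate settings before and after the gate, as noted above.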

Classical XOR gate

The classical XOR gate, used to implement the model with classical causal states, can be implemented using a quantum CNOT gate (as above) and orthogonal logical photon polarization states as bits. Because of experimental contingencies, we collected the classical data using a different (but nominally identical in layout and component type) CNOT gate to the one used for the quantum data collection and at a later time.

Asymmetric two-switch process

In the main text, we studied the special case of the two-switch process, where the probability of flipping a switch did not depend on the state of the two switches. We can generalize this model by assuming that a switch is flipped with probability $P_0$ when the switches align, or $P_1$ otherwise (Fig. 2B). The resulting system will still have two causal states, s = {0, 1}, corresponding to the parity of the two switches. However, the two causal states no longer occur with equiprobability and instead satisfy $p_0 P_0 = p_1 P_1$. Thus, Cc ⩽ 1, with equality when $P_0 = P_1$. The quantum model causal states for the general model are given by

$|\varsigma_0\rangle = \sqrt{1 - P_0}\,|0\rangle + \sqrt{P_0}\,|1\rangle$   (6)

$|\varsigma_1\rangle = \sqrt{P_1}\,|0\rangle + \sqrt{1 - P_1}\,|1\rangle$   (7)
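The detailed-balance condition $p_0 P_0 = p_1 P_1$, together with $p_0 + p_1 = 1$, fixes the equilibrium distribution and hence Cc. A quick check for the values tested in the main text ($P_0$ = 0.3, $P_1$ = 0.9) reproduces the predicted classical complexity of ≈0.81 (function names are ours):

```python
import math

def equilibrium(P0, P1):
    """Steady-state causal-state probabilities from detailed balance
    p0 * P0 = p1 * P1 (assumes P0 + P1 > 0)."""
    p0 = P1 / (P0 + P1)
    return p0, 1 - p0

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p0, p1 = equilibrium(0.3, 0.9)
print(p0, p1)             # 0.75, 0.25
print(entropy((p0, p1)))  # ~0.811, the classical statistical complexity C_c
```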

The statistical complexity is given by the von Neumann entropy of $\rho = p_0 |\varsigma_0\rangle\langle\varsigma_0| + p_1 |\varsigma_1\rangle\langle\varsigma_1|$. The quantum circuit to realize this quantum model is the same as that in Fig. 1G, except that we must replace the controlled-$\hat{X}$ (CNOT) operation with a controlled-$\hat{U}$ (CU) gate, where $\hat{U}$ is the operator that prepares the appropriate asymmetric causal states, reducing to $\hat{X}$ in the symmetric case. In practice, $\hat{U}$ can be realized as $\hat{X}$ preceded and followed by rotations $\hat{R}_y$, where $\hat{R}_y(\theta)$ is a rotation around the y axis of the Bloch sphere. Thus, the rotation can be implemented by HWPs in the meter arm before and after the CNOT gate—we incorporated these rotations into the settings of the state preparation and measurement wave plates. However, the asymmetry of settings, when implemented with experimental components, led to a slight degradation in the performance of the CU gate compared with the CNOT case.
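One standard way to build such a CU gate from a CNOT is to conjugate its target by $\hat{R}_y$ rotations, which yields a controlled-$(\hat{R}_y(\theta)\hat{X})$ that reduces to CNOT at θ = 0. This is a sketch of the decomposition as we read it, not necessarily the exact wave-plate settings used in the experiment; the angle θ and the helper names are ours:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    m = len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(len(A) * m)] for i in range(len(A) * m)]

def Ry(theta):
    """Rotation around the y axis of the Bloch sphere."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

theta = 1.1  # arbitrary angle (in the model it would be set by P0, P1)
# Ry rotations on the target (meter) qubit before and after the CNOT:
built = matmul(kron(I2, Ry(theta / 2)),
               matmul(CNOT, kron(I2, Ry(-theta / 2))))

# Expected controlled-U: identity on the control-0 block,
# U = Ry(theta) X on the control-1 block.
U = matmul(Ry(theta), X)
expected = [[1, 0, 0, 0], [0, 1, 0, 0],
            [0, 0, U[0][0], U[0][1]], [0, 0, U[1][0], U[1][1]]]
assert all(abs(built[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
print("controlled-U realized with Ry rotations around a CNOT")
```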

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.


Acknowledgments: Funding: This research was funded, in part, by the Australian Research Council (project no. DP160101911) and the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (project no. CE110001027). M.G. is financially supported by the John Templeton Foundation Grant 53914 “Occam’s quantum mechanical razor: Can quantum theory admit the simplest understanding of reality?” and the National Research Foundation NRF-Fellowship (reference no. NRF-NRFF2016-02). Author contributions: M.S.P. performed the experiment and data analysis, with contributions from J.H. and with assistance from G.J.P. The theoretical and experimental conceptualization was developed by M.G., H.M.W., and G.J.P. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors.