
Quantum Computing and Information Science


In conventional electronic computers, information is stored and processed in the form of strings of "bits" (binary digits). Each individual bit can have only one of two values: 0 or 1. In a quantum computer, information would be stored in quantum bits, or qubits, each of which, thanks to the nature of superposition, can be 0, 1 or both at once. This parallelism could make some mathematical operations exponentially faster than they are on conventional computers. One important future application of quantum computers is the task of factoring the extremely large numbers that serve as the "public keys" in current encryption and data-protection schemes. JQI physicists are investigating promising quantum computing architectures as well as developing methods to control quantum effects that can be exploited to process information in new ways.
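As a rough illustration of superposition and parallelism (a toy NumPy sketch, not any actual JQI software), a qubit can be written as two complex amplitudes, and a register of n qubits carries 2^n amplitudes at once:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a vector of two complex amplitudes.
zero = np.array([1, 0], dtype=complex)   # the state |0>
one = np.array([0, 1], dtype=complex)    # the state |1>

# An equal superposition: the qubit is "0 and 1 at once".
plus = (zero + one) / np.sqrt(2)

# Five qubits in superposition span 2**5 = 32 basis states simultaneously --
# the source of the potential exponential parallelism.
n = 5
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)  # tensor product adds one qubit

print(len(state))                               # 32 amplitudes
print(np.allclose(np.abs(state)**2, 1 / 2**n))  # True: all outcomes equally likely
```

Measuring such a register still yields only one of the 32 outcomes; the art of quantum algorithms is arranging interference so that useful answers are the likely ones.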

Latest News

Scientists at the Joint Quantum Institute (JQI) have been steadily improving the performance of ion trap systems, a leading platform for future quantum computers. Now, a team of researchers led by JQI Fellows Norbert Linke and Christopher Monroe has performed a key experiment on five ion-based quantum bits, or qubits. They used laser pulses to simultaneously create quantum connections between different pairs of qubits—the first time these kinds of parallel operations have been executed in an ion trap. The new study, which is a critical step toward large-scale quantum computation, was published on July 24 in the journal Nature.  

“When it comes to the scaling requirements for a quantum computer, trapped ions check all of the boxes,” says Monroe, who is also the Bice Sechi-Zorn Professor in the UMD Department of Physics and co-founder of the quantum computing startup IonQ. “Getting these parallel operations to work further illustrates that advancing ion trap quantum processors is not limited by the physics of qubits and is instead tied to engineering their controllers.”

Ion traps are devices for capturing charged atoms and molecules, and they are commonly deployed for chemical analysis. In recent decades, physicists and engineers have combined ion traps with sophisticated laser systems to exert control over single atomic ions. Today, this type of hardware is one of the most promising for building a universal quantum computer.

The JQI ion trap used in this study is made from gold-coated electrodes, which carry the electric fields that confine ytterbium ions. The ions are caught in the middle of the trap where they form a line, each one separated from its neighbor by a few microns. This setup enables researchers to have fine control over individual ions and configure them as qubits.

Each ion has internal energy levels or quantum states that are naturally isolated from outside influences. This feature makes them ideal for storing and controlling quantum information, which is notoriously delicate. In this experiment, the research team uses two of these states, called “0” and “1”, as the qubit.

The researchers aim laser pulses at a string of qubits to execute programs on this small-scale quantum computer. The programs, also called circuits, are broken down into a set of single- and two-qubit gates. A single-qubit gate can, for instance, flip the state of an ion from 1 to 0. This is a straightforward task for a laser pulse. A two-qubit gate requires more sophisticated pulses because it involves tailoring the interactions between qubits. Certain two-qubit operations can create entanglement—a quantum connection necessary for quantum computation—between two qubits. 
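The gate vocabulary described above can be sketched with small matrices (an illustrative NumPy toy model; the experiment's actual entangling gates are laser-driven ion interactions, not matrix multiplications):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # |0>

# Single-qubit gates: X flips |0> <-> |1>; H creates a superposition.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

flipped = X @ zero  # X|0> = |1>, the "flip" mentioned in the story

# A standard two-qubit entangling gate: CNOT flips the second (target)
# qubit only when the first (control) qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# H on the first qubit followed by CNOT produces the entangled Bell state
# (|00> + |11>)/sqrt(2): neither qubit can be described on its own.
two_qubits = np.kron(H @ zero, zero)
bell = CNOT @ two_qubits
print(np.round(bell.real, 3))  # amplitudes of |00>,|01>,|10>,|11>
```

In this picture, "parallel two-qubit gates" amounts to applying two such entangling operations on disjoint qubit pairs in the same time step.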

Until now, circuits in ion trap quantum computers have been limited to a sequence of individual gates, one after another. With this new demonstration, researchers can now do two-qubit gates in parallel, creating entanglement between different pairs of ions simultaneously. The research team achieved this by optimizing the laser pulse sequences used to perform operations, making sure to cancel out unwanted laser-qubit interactions. In this way, they were able to successfully implement simultaneous entangling gates on two separate ion pairs.

According to the authors, parallel entangling gates will enable programs to correct errors during a quantum computation—a near-certain requirement in quantum computers with many more qubits. In addition, a quantum computer that factors large numbers or simulates quantum physics will likely need parallel entangling operations to achieve a speed advantage over conventional computers. 

Story by E. Edwards

In addition to Monroe and Linke, Caroline Figgatt, a former JQI graduate student who is now a scientist at Honeywell, was the lead author on this research paper and provided background material for this news story. The research paper was published simultaneously with similar work by former JQI postdoctoral researcher and current Tsinghua University professor Kihwan Kim.

Animation on small-scale programmable quantum computing hardware 

Read More
Perfect quantum portal emerges at exotic interface

A junction between an ordinary metal and a special kind of superconductor has provided a robust platform to observe Klein tunneling.

June 19, 2019

Researchers at the University of Maryland have captured the most direct evidence to date of a quantum quirk that allows particles to tunnel through a barrier like it’s not even there. The result, featured on the cover of the June 20, 2019 issue of the journal Nature, may enable engineers to design more uniform components for future quantum computers, quantum sensors and other devices.

The new experiment is an observation of Klein tunneling, a special case of a more ordinary quantum phenomenon. In the quantum world, tunneling allows particles like electrons to pass through a barrier even if they don’t have enough energy to actually climb over it. A taller barrier usually makes this harder and lets fewer particles through.

Klein tunneling occurs when the barrier becomes completely transparent, opening up a portal that particles can traverse regardless of the barrier’s height. Scientists and engineers from UMD’s Center for Nanophysics and Advanced Materials (CNAM), the Joint Quantum Institute (JQI) and the Condensed Matter Theory Center (CMTC), with appointments in UMD’s Department of Materials Science and Engineering and Department of Physics, have made the most compelling measurements yet of the effect.

“Klein tunneling was originally a relativistic effect, first predicted almost a hundred years ago,” says Ichiro Takeuchi, a professor of materials science and engineering (MSE) at UMD and the senior author of the new study. “Until recently, though, you could not observe it.”

It was nearly impossible to collect evidence for Klein tunneling where it was first predicted—the world of high-energy quantum particles moving close to the speed of light. But in the past several decades, scientists have discovered that some of the rules governing fast-moving quantum particles also apply to the comparatively sluggish particles traveling near the surface of some unusual materials.

One such material—which researchers used in the new study—is samarium hexaboride (SmB6), a substance that becomes a topological insulator at low temperatures. In a normal insulator like wood, rubber or air, electrons are trapped, unable to move even when voltage is applied. Thus, unlike their free-roaming comrades in a metal wire, electrons in an insulator can’t conduct a current.

Topological insulators such as SmB6 behave like hybrid materials. At low enough temperatures, the interior of SmB6 is an insulator, but the surface is metallic and allows electrons some freedom to move around. Additionally, the direction that the electrons move becomes locked to an intrinsic quantum property called spin that can be oriented up or down. Electrons moving to the right will always have their spin pointing up, for example, and electrons moving left will have their spin pointing down.

The metallic surface of SmB6 would not have been enough to spot Klein tunneling, though. It turned out that Takeuchi and colleagues needed to transform the surface of SmB6 into a superconductor—a material that can conduct electrical current without any resistance.

To turn SmB6 into a superconductor, they put a thin film of it atop a layer of yttrium hexaboride (YB6). When the whole assembly was cooled to just a few degrees above absolute zero, the YB6 became a superconductor and, due to its proximity, the metallic surface of SmB6 became a superconductor, too.

It was a “piece of serendipity” that SmB6 and its yttrium-swapped relative shared the same crystal structure, says Johnpierre Paglione, a professor of physics at UMD, the director of CNAM and a co-author of the research paper. “However, the multidisciplinary team we have was one of the keys to this success. Having experts on topological physics, thin-film synthesis, spectroscopy and theoretical understanding really got us to this point,” Paglione adds.

The combination proved the right mix to observe Klein tunneling. By bringing a tiny metal tip into contact with the top of the SmB6, the team measured the transport of electrons from the tip into the superconductor. They observed a perfectly doubled conductance—a measure of how the current through a material changes as the voltage across it is varied.

“When we first observed the doubling, I didn’t believe it,” Takeuchi says. “After all, it is an unusual observation, so I asked my postdoc Seunghun Lee and research scientist Xiaohang Zhang to go back and do the experiment again.”

When Takeuchi and his experimental colleagues convinced themselves that the measurements were accurate, they didn’t initially understand the source of the doubled conductance. So they started searching for an explanation. UMD’s Victor Galitski, a JQI Fellow, a professor of physics and a member of CMTC, suggested that Klein tunneling might be involved.

“At first, it was just a hunch,” Galitski says. “But over time we grew more convinced that the Klein scenario may actually be the underlying cause of the observations.”

Valentin Stanev, an associate research scientist in MSE and a research scientist at JQI, took Galitski’s hunch and worked out a careful theory of how Klein tunneling could emerge in the SmB6 system—ultimately making predictions that matched the experimental data well.

The theory suggested that Klein tunneling manifests itself in this system as a perfect form of Andreev reflection, an effect present at every boundary between a metal and a superconductor. Andreev reflection can occur whenever an electron from the metal hops onto a superconductor. Inside the superconductor, electrons are forced to live in pairs, so when an electron hops on, it picks up a buddy.

In order to balance the electric charge before and after the hop, a particle with the opposite charge—which scientists call a hole—must reflect back into the metal. This is the hallmark of Andreev reflection: an electron goes in, a hole comes back out. And since a hole moving in one direction carries the same current as an electron moving in the opposite direction, this whole process doubles the overall conductance—the signature of Klein tunneling through a junction of a metal and a topological superconductor.

In conventional junctions between a metal and a superconductor, there are always some electrons that don’t make the hop. They scatter off the boundary, reducing the amount of Andreev reflection and preventing an exact doubling of the conductance.

But because the electrons in the surface of SmB6 have their direction of motion tied to their spin, electrons near the boundary can’t bounce back—meaning that they will always transit straight into the superconductor.

“Klein tunneling had been seen in graphene as well,” Takeuchi says. “But here, because it’s a superconductor, I would say the effect is more spectacular. You get this exact doubling and a complete cancellation of the scattering, and there is no analog of that in the graphene experiment.”

Junctions between superconductors and other materials are ingredients in some proposed quantum computer architectures, as well as in precision sensing devices. The bane of these components has always been that each junction is slightly different, Takeuchi says, requiring endless tuning and calibration to reach the best performance. But with Klein tunneling in SmB6, researchers might finally have an antidote to that irregularity.

“In electronics, device-to-device spread is the number one enemy,” Takeuchi says. “Here is a phenomenon that gets rid of the variability.”

Story by Chris Cesare

In addition to Takeuchi, Paglione, Lee, Zhang, Galitski and Stanev, co-authors of the research paper include Drew Stasak, a former research assistant in MSE; Jack Flowers, a former graduate student in MSE; Joshua S. Higgins, a research scientist in CNAM and the Department of Physics; Sheng Dai, a research fellow in the department of chemical engineering and materials science at the University of California, Irvine (UCI); Thomas Blum, a graduate student in physics and astronomy at UCI; Xiaoqing Pan, a professor of chemical engineering and materials science and of physics and astronomy at UCI; Victor M. Yakovenko, a JQI Fellow, professor of physics at UMD and a member of CMTC; and Richard L. Greene, a professor of physics at UMD and a member of CNAM.

Read More

Researchers at the Joint Quantum Institute (JQI) have created the first silicon chip that can reliably constrain light to its four corners. The effect, which arises from interfering optical pathways, isn't altered by small defects during fabrication and could eventually enable the creation of robust sources of quantum light.

That robustness is due to topological physics, which describes the properties of materials that are insensitive to small changes in geometry. The cornering of light, which was reported June 17 in Nature Photonics, is a realization of a new topological effect, first predicted in 2017.

In particular, the new work is a demonstration of quadrupole topological physics. A quadrupole is an arrangement of four poles—sinks and sources of force fields such as electrical charges or the poles of a magnet. You can visualize an electric quadrupole by imagining charges on each corner of a square that alternate positive-negative-positive-negative as you go along the perimeter.
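The alternating-charge picture can be checked with a few lines of arithmetic (a simple Python sketch of the electrostatic analogy, not a model of the photonic chip): the total charge and the dipole moment of the square both cancel, while the quadrupole moment survives.

```python
import numpy as np

# Alternating +/- charges on the corners of a unit square,
# walking the perimeter: positive-negative-positive-negative.
corners = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)
charges = np.array([+1, -1, +1, -1], dtype=float)

# The monopole moment (total charge) and dipole moment both vanish...
print(charges.sum())       # 0.0
print(charges @ corners)   # zero vector

# ...but the off-diagonal quadrupole moment Q_xy = sum_i q_i * 3 * x_i * y_i
# does not: the quadrupole is the lowest nonvanishing moment.
Q_xy = sum(q * 3 * x * y for q, (x, y) in zip(charges, corners))
print(Q_xy)                # 12.0
```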

The fact that the cornering arises from quadrupole physics instead of the physics of dipoles—that is, arrangements of just two poles—means it is a higher-order topological effect.

Although the cornering effect has been observed in acoustic and microwave systems before, the new work is the first time it’s been observed in an optical system, says JQI Fellow Mohammad Hafezi, the paper’s senior author. "We have been developing integrated silicon photonic systems to realize ideas derived from topology in a physical system," Hafezi says. "The fact that we use components compatible with current technology means that, if these systems are robust, they could possibly be translated into immediate applications."

In the new work, laser light is injected into a grid of resonators—grooved loops in the silicon that confine the light to rings. By placing the resonators at carefully measured distances, it's possible to adjust the interaction between neighboring resonators and alter the path that light takes through the grid.

The cumulative effect is that the light in the middle of the chip interferes with itself, causing most of the light injected into the chip to spend its time at the four corners.

Light doesn’t have an electric charge, but the presence or absence of light in a given resonator provides a kind of polar behavior. In this way, the pattern of resonators on the chip corresponds to a collection of interacting quadrupoles—precisely the conditions required by the first prediction of higher-order topological states of matter.

To test their fabricated pattern, Hafezi and his colleagues injected light into each corner of the chip and then captured an image of the chip with a microscope. In the collected light, they saw four bright peaks, one at each corner of the chip.

To show that the cornered light was trapped by topology, and not merely a result of where they injected the lasers, they tested a chip with the bottom two rows of resonators shifted. This changed their interactions with the resonators above, and, at least theoretically, changed where the bright spots should appear. They again injected the light at the corners, and this time—just as theory predicted—the lower two bright spots showed up above the rows of shifted resonators and not at the physical corners.

Despite the protection from small changes in resonator placement offered by topology, a second, more destructive fabrication defect remains in these chips. Since each resonator isn't exactly the same, the four points of light at the corners all shine with slightly different frequencies. This means that, for the moment, the chip may be no better than a single resonator if used as a source of photons—the quantum particles of light that many hope to harness as carriers of quantum information in future devices and networks.

"If you have many sources that are forced by topology to spit out identical photons, then you could interfere them, and that would be a game-changer," says Sunil Mittal, the lead author of the paper and a postdoctoral researcher at JQI. "I hope this work actually excites theorists to think about maybe looking for models that are insensitive to this lingering disorder in resonator frequencies."

Story by Chris Cesare

Hafezi and Mittal also have affiliations in the Department of Electrical and Computer Engineering, as well as the Institute for Research in Electronics and Applied Physics. Hafezi is also an associate professor in the Department of Physics.

Read More

Researchers at the Joint Quantum Institute have implemented an experimental test for quantum scrambling, a chaotic shuffling of the information stored among a collection of quantum particles. Their experiments on a group of seven atomic ions, reported in the March 7 issue of Nature, demonstrate a new way to distinguish between scrambling—which maintains the amount of information in a quantum system but mixes it up—and true information loss. The protocol may one day help verify the calculations of quantum computers, which harness the rules of quantum physics to process information in novel ways.

“In terms of the difficulty of quantum algorithms that have been run, we’re toward the top of that list,” says Kevin Landsman, a graduate student at JQI and the lead author of the new paper. “This is a very complicated experiment to run, and it takes a very high level of control.”

The research team, which includes JQI Fellow and UMD Distinguished University Professor Christopher Monroe and JQI Fellow Norbert Linke, performed their scrambling tests by carefully manipulating the quantum behavior of seven charged atomic ions using well-timed sequences of laser pulses. They found that they could correctly diagnose whether information had been scrambled throughout a system of seven atoms with about 80% accuracy.

“With scrambling, one particle’s information gets blended or spread out into the entire system,” Landsman says. “It seems lost, but it’s actually still hidden in the correlations between the different particles.”

Quantum scrambling is a bit like shuffling a fresh deck of cards. The cards are initially ordered in a sequence, ace through king, and the suits come one after another. Once it’s sufficiently shuffled, the deck looks mixed up, but—crucially—there’s a way to reverse that process. If you kept meticulous track of how each shuffle exchanged the cards, it would be simple (though tedious) to “unshuffle” the deck by repeating all those exchanges and swaps in reverse.
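The card analogy can be made concrete in a few lines of Python (purely illustrative; quantum scrambling reverses unitary operations, not card swaps):

```python
import random

deck = list(range(52))                 # an ordered deck, ace through king by suit
original = deck[:]

# Shuffle by random exchanges, keeping a meticulous record of each swap.
rng = random.Random(7)
record = [(rng.randrange(52), rng.randrange(52)) for _ in range(1000)]
for i, j in record:
    deck[i], deck[j] = deck[j], deck[i]

# The deck now looks mixed up, but no cards were lost -- the information
# is still there, hidden in the arrangement.
print(sorted(deck) == original)        # True: still the same 52 cards

# "Unshuffle" by replaying the recorded swaps in reverse order.
for i, j in reversed(record):
    deck[i], deck[j] = deck[j], deck[i]

print(deck == original)                # True: the shuffle was fully reversible
```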

Quantum scrambling is similar in that it mixes up the information stored inside a set of atoms and can also be reversed, which is a key difference between scrambling and true, irreversible information loss. Landsman and colleagues used this fact to their advantage in the new test by scrambling up one set of atoms and performing a related scrambling operation on a second set. A mismatch between the two operations would indicate that the process was not scrambling, causing the final step of the method to fail. 

That final step relied on quantum teleportation—a method for transferring information between two quantum particles that are potentially very far apart. In the case of the new experiment, the teleportation is over modest distances—just 35 microns separates the first atom from the seventh—but it is the signature by which the team detects scrambling: If information is successfully teleported from one atom to another, it means that the state of the first atom is spread out across all of the atoms—something that only happens if the information is scrambled. If the information was lost, successful teleportation would not be possible. Thus, for an arbitrary process whose scrambling properties might not be known, this method could be used to test whether—or even how much—it scrambles.

The authors say that prior tests for scrambling couldn’t quite capture the difference between information being hidden and lost, largely because individual atoms tend to look similar in both cases. The new protocol, first proposed by theorists Beni Yoshida of the Perimeter Institute in Canada and Norman Yao at the University of California, Berkeley, distinguishes the two cases by taking correlations between particular particles into account in the form of teleportation.

“When our colleague Norm Yao told us about this teleportation litmus test for scrambling and how it needed at least seven qubits capable of running many quantum operations in a sequence, we knew that our quantum computer was uniquely suited for the job,” says Linke.

The experiment was originally inspired by the physics of black holes. Scientists have long pondered what happens when something falls into a black hole, especially if that something is a quantum particle. The fundamental rules of quantum physics suggest that regardless of what a black hole does to a quantum particle, it should be reversible—a prediction that seems at odds with a black hole’s penchant for crushing things into an infinitely small point and spewing out radiation. But without a real black hole to throw things into, researchers have been stuck speculating.

Quantum scrambling is one suggestion for how information can fall into a black hole and come out as random-looking radiation. Perhaps, the argument goes, it’s not random at all, and black holes are just excellent scramblers. The paper discusses this motivation, as well as an interpretation of the experiment that compares quantum teleportation to information going through a wormhole. 

“Regardless of whether real black holes are very good scramblers, studying quantum scrambling in the lab could provide useful insights for the future development of quantum computing or quantum simulation,” Monroe says.

By Chris Cesare

In addition to Landsman, Monroe and Linke, the new paper had four other coauthors: Caroline Figgatt, now at Honeywell in Colorado; Thomas Schuster at UC Berkeley; Beni Yoshida at the Perimeter Institute for Theoretical Physics; and Norman Yao at UC Berkeley and Lawrence Berkeley National Laboratory.

Read More

Electrons tend to avoid one another as they go about their business carrying current. But certain devices, cooled to near zero temperature, can coax these loner particles out of their shells. In extreme cases, electrons will interact in unusual ways, causing strange quantum entities to emerge.

At the Joint Quantum Institute (JQI), a group, led by Jimmy Williams, is working to develop new circuitry that could host such exotic states. “In our lab, we want to combine materials in just the right way so that suddenly, the electrons don’t really act like electrons at all,” says Williams, a JQI Fellow and an assistant professor in the University of Maryland Department of Physics. “Instead the surface electrons move together to reveal interesting quantum states that collectively can behave like new particles.”

These states have a feature that may make them useful in future quantum computers: They appear to be inherently protected from the destructive but unavoidable imperfections found in fabricated circuits. As described recently in Physical Review Letters, Williams and his team have reconfigured one workhorse superconductor circuit—a Josephson junction—to include a material suspected of hosting quantum states with boosted immunity.

Josephson junctions are electrical synapses composed of two superconductors separated by a thin strip of a second material. The electron movement across the strip, which is usually made from an insulator, is sensitive to the underlying material characteristics as well as the surroundings. Scientists can use this sensitivity to detect faint signals, such as tiny magnetic fields. In this new study, the researchers replaced the insulator with a sliver of topological crystalline insulator (TCI) and detected signs of exotic quantum states lurking on the circuit’s surface.

Physics graduate student Rodney Snyder, lead author on the new study, says this area of research is full of unanswered questions, down to the actual process for integrating these materials into circuits. In the case of this new device, the research team found that beyond the normal level of sophisticated material science, they needed a bit of luck.

“I'd make like 16 to 25 circuits at a time. Then, we checked a bunch of those and they would all fail, meaning they wouldn’t even act like a basic Josephson junction,” says Snyder. “We eventually found that the way to make them work was to heat the sample during the fabrication process. And we only discovered this critical heating step because one batch was accidentally heated on a fluke, basically when the system was broken.”

Once they overcame the technical challenges, the team went hunting for the strange quantum states. They examined the current through the TCI region and saw dramatic differences when compared to an ordinary insulator. In conventional junctions, the electrons are like cars haphazardly trying to cross a single lane bridge. The TCI appeared to organize the transit by opening up directional traffic lanes between the two locations. 

The experiments also indicated that the lanes were helical, meaning that the electron’s quantum spin, which can be oriented either up or down, sets its travel direction. So in the TCI strip, up and down spins move in opposite directions. This is analogous to a bridge that restricts traffic according to vehicle colors—blue cars drive east and red cars head west. These kinds of lanes, when present, are indicative of exotic electron behaviors.

Just as the careful design of a bridge ensures safe passage, the TCI structure played a crucial role in electron transit. Here, the material’s symmetry, a property that is determined by the underlying atom arrangement, guaranteed that the two-way traffic lanes stayed open. “The symmetry acts like a bodyguard for the surface states, meaning that the crystal can have imperfections and still the quantum states survive, as long as the overall symmetry doesn’t change,” says Williams.

Physicists at JQI and elsewhere have previously proposed that built-in bodyguards could shield delicate quantum information. According to Williams, implementing such protections would be a significant step forward for quantum circuits, which are susceptible to failure due to environmental interference.

In recent years, physicists have uncovered many promising materials with protected travel lanes, and researchers have begun to implement some of the theoretical proposals. TCIs are an appealing option because, unlike more conventional topological insulators where the travel lanes are often given by nature, these materials allow for some lane customization. Currently, Williams is working with materials scientists at the Army Research Laboratory to tailor the travel lanes during the manufacturing process. This may enable researchers to position and manipulate the quantum states, a step that would be necessary for building a quantum computer based on topological materials.

In addition to quantum computing, Williams is driven by the exploration of basic physics questions. “We really don't know yet what kind of quantum matter you get from collections of these more exotic states,” Williams says. “And I think, quantum computation aside, there is a lot of interesting physics happening when you are dealing with these oddball states.”

Written by E. Edwards and S. Elbeshbishi

Read More

The smallest amount of light you can have is one photon, so dim that it’s pretty much invisible to humans. While imperceptible, these tiny blips of energy are useful for carrying quantum information around. Ideally, every quantum courier would be the same, but there isn’t a straightforward way to produce a stream of identical photons. This is particularly challenging when individual photons come from fabricated chips. 

Now, researchers at the Joint Quantum Institute (JQI) have demonstrated a new approach that enables different devices to repeatedly emit nearly identical single photons. The team, led by JQI Fellow Mohammad Hafezi, made a silicon chip that guides light around the device’s edge, where it is inherently protected against disruptions. Previously, Hafezi and colleagues showed that this design can reduce the likelihood of optical signal degradation. In a paper published online on Sept. 10 in Nature, the team explains that the same physics which protects the light along the chip’s edge also ensures reliable photon production.

Single photons, which are an example of quantum light, are more than just really dim light. This distinction has a lot to do with where the light comes from. “Pretty much all of the light we encounter in our everyday lives is packed with photons,” says Elizabeth Goldschmidt, a researcher at the US Army Research Laboratory and co-author on the study. “But unlike a light bulb, there are some sources that actually emit light one photon at a time, and this can only be described by quantum physics,” adds Goldschmidt.

Many researchers are working on building reliable quantum light emitters so that they can isolate and control the quantum properties of single photons. Goldschmidt explains that such light sources will likely be important for future quantum information devices as well as further understanding the mysteries of quantum physics. “Modern communications relies heavily on non-quantum light,” says Goldschmidt. “Similarly, many of us believe that single photons are going to be required for any kind of quantum communication application out there.”

Scientists can generate quantum light using a natural color-changing process that occurs when a beam of light passes through certain materials. In this experiment the team used silicon, a common industrial choice for guiding light, to convert infrared laser light into pairs of different-colored single photons.

They injected light into a chip containing an array of minuscule silicon loops. Under the microscope, the loops look like linked-up glassy racetracks. The light circulates around each loop thousands of times before moving on to a neighboring loop. Stretched out, the light’s path would be several centimeters long, but the loops make it possible to fit the journey in a space that is about 500 times smaller. The relatively long journey is necessary to get many pairs of single photons out of the silicon chip.

Such loop arrays are routinely used as single photon sources, but small differences between chips will cause the photon colors to vary from one device to the next. Even within a single device, random defects in the material may reduce the average photon quality. This is a problem for quantum information applications where researchers need the photons to be as close to identical as possible.

The team circumvented this issue by arranging the loops in a way that always allows the light to travel undisturbed around the edge of the chip, even if fabrication defects are present. This design not only shields the light from disruptions—it also restricts how single photons form within those edge channels. The loop layout essentially forces each photon pair to be nearly identical to the next, regardless of microscopic differences among the rings. The central part of the chip does not contain protected routes, and so any photons created in those areas are affected by material defects.

The researchers compared their chips to ones without any protected routes. They collected pairs of photons from the different chips, counting the number emitted and noting their color. They observed that their quantum light source reliably produced high-quality, single-color photons time and again, whereas the conventional chip’s output was more unpredictable.

“We initially thought that we would need to be more careful with the design, and that the photons would be more sensitive to our chip’s fabrication process,” says Sunil Mittal, a JQI postdoctoral researcher and lead author on the new study. “But, astonishingly, photons generated in these shielded edge channels are always nearly identical, regardless of how bad the chips are.”

Mittal adds that this device has one additional advantage over other single photon sources. “Our chip works at room temperature. I don’t have to cool it down to cryogenic temperatures like other quantum light sources, making it a comparatively very simple setup.”

The team says that this finding could open up a new avenue of research, which unites quantum light with photonic devices having built-in protective features. “Physicists have only recently realized that shielded pathways fundamentally alter the way that photons interact with matter,” says Mittal. “This could have implications for a variety of fields where light-matter interactions play a role, including quantum information science and optoelectronic technology.”

Written by D. Genkina and E. Edwards

Author affiliations 

Sunil Mittal is a postdoctoral researcher at the Joint Quantum Institute and Institute for Research in Electronics and Applied Physics (IREAP) at the University of Maryland (UMD).

Elizabeth Goldschmidt is a physicist at the US Army Research Laboratory.

Mohammad Hafezi is a Fellow of the Joint Quantum Institute and an Associate Professor in the UMD Department of Electrical and Computer Engineering, Department of Physics, and IREAP.

Read More

NSF has announced a $15 million award to a collaboration of seven institutions including the University of Maryland. The goal: Build the world’s first practical quantum computer.

"Quantum computers will change everything about the technology we use and how we use it, and we are still taking the initial steps toward realizing this goal," said NSF Director France Córdova. "Developing the first practical quantum computer would be a major milestone. By bringing together experts who have outlined a path to a practical quantum computer and supporting its development, NSF is working to take the quantum revolution from theory to reality."

Dubbed the Software-Tailored Architecture for Quantum co-design (STAQ) project, the new effort seeks to demonstrate a quantum advantage over traditional computers within five years using ion trap technology.

The project is the result of a National Science Foundation Ideas Lab—a week-long, free-form exchange among researchers from a wide range of fields that aims to spawn creative, collaborative proposals to address a given research challenge. The result of each Ideas Lab is interdisciplinary research that is high-risk, high-reward, cutting-edge and unlikely to be funded through traditional grant mechanisms.

JQI Fellow Christopher Monroe will lead the team developing the hardware. JQI Fellow Alexey Gorshkov will be involved in the theory side of the collaboration. 

Text for this news item was adapted from the Duke University and NSF press releases on the award. 

Read More

Transistors are tiny switches that form the bedrock of modern computing—billions of them route electrical signals around inside a smartphone, for instance.

Quantum computers will need analogous hardware to manipulate quantum information. But the design constraints for this new technology are stringent, and today’s most advanced processors can’t be repurposed as quantum devices. That’s because quantum information carriers, dubbed qubits, have to follow different rules laid out by quantum physics. 

Scientists can use many kinds of quantum particles as qubits, even the photons that make up light. Photons have added appeal because they can swiftly shuttle information over long distances and they are compatible with fabricated chips. However, making a quantum transistor triggered by light has been challenging because it requires that the photons interact with each other, something that doesn’t ordinarily happen on its own. 

Now, researchers at the Joint Quantum Institute (JQI), led by JQI Fellow Edo Waks have cleared this hurdle and demonstrated the first single-photon transistor using a semiconductor chip. The device, described in the July 6 issue of Science, is compact: Roughly one million of these new transistors could fit inside a single grain of salt. It is also fast, able to process 10 billion photonic qubits every second. 

“Using our transistor, we should be able to perform quantum gates between photons,” says Waks. “Software running on a quantum computer would use a series of such operations to attain exponential speedup for certain computational problems.”

The photonic chip is made from a semiconductor with numerous holes in it, making it appear much like a honeycomb. Light entering the chip bounces around and gets trapped by the hole pattern; a small crystal called a quantum dot sits inside the area where the light intensity is strongest. Analogous to conventional computer memory, the dot stores information about photons as they enter the device. The dot can effectively tap into that memory to mediate photon interactions—meaning that the actions of one photon affect others that later arrive at the chip.

“In a single-photon transistor the quantum dot memory must persist long enough to interact with each photonic qubit,” says Shuo Sun, the lead author of the new work who is a Postdoctoral Research Fellow at Stanford University*. “This allows a single photon to switch a bigger stream of photons, which is essential for our device to be considered a transistor.”

To test that the chip operated like a transistor, the researchers examined how the device responded to weak light pulses that usually contained only one photon. In a normal environment, such dim light might barely register. However, in this device, a single photon gets trapped for a long time, registering its presence in the nearby dot. 

The team observed that a single photon could, by interacting with the dot, control the transmission of a second light pulse through the device. The first light pulse acts like a key, opening the door for the second photon to enter the chip. If the first pulse didn’t contain any photons, the dot blocked subsequent photons from getting through. This behavior is similar to a conventional transistor where a small voltage controls the passage of current through its terminals. Here, the researchers successfully replaced the voltage with a single photon and demonstrated that their quantum transistor could switch a light pulse containing around 30 photons before the quantum dot’s memory ran out.

Waks, who is also a professor in the University of Maryland Department of Electrical and Computer Engineering, says that his team had to test different aspects of the device’s performance prior to getting the transistor to work. “Until now, we had the individual components necessary to make a single photon transistor, but here we combined all of the steps into a single chip,” Waks says.

Sun says that with realistic engineering improvements their approach could allow many quantum light transistors to be linked together. The team hopes that such speedy, highly connected devices will eventually lead to compact quantum computers that process large numbers of photonic qubits. 

*Other contributors and affiliations

  • Edo Waks has affiliations with the University of Maryland Department of Electrical and Computer Engineering (ECE), Department of Physics, Joint Quantum Institute, and the Institute for Research in Electronics and Applied Physics (IREAP).
  • Shuo Sun was a UMD graduate student at the time of this research. He is now a postdoctoral research fellow at Stanford University.
  • JQI Fellow Glenn Solomon, a physicist at the National Institute of Standards and Technology, grew the sample used in this research.
  • Hyochul Kim was a postdoctoral researcher at UMD at the time of the research. He is now at Samsung Advanced Institute of Technology.
  • Zhouchen Luo is currently a UMD ECE graduate student.
Read More

In the latest experiment of its kind, researchers have captured the most compelling evidence to date that unusual particles lurk inside a special kind of superconductor. The result, which confirms theoretical predictions first made nearly a decade ago at the Joint Quantum Institute (JQI) and the University of Maryland (UMD), will be published in the April 5 issue of Nature.

The stowaways, dubbed Majorana quasiparticles, are different from ordinary matter like electrons or quarks—the stuff that makes up the elements of the periodic table. Unlike those particles, which as far as physicists know can’t be broken down into more basic pieces, Majorana quasiparticles arise from coordinated patterns of many atoms and electrons and only appear under special conditions. They are endowed with unique features that may allow them to form the backbone of one type of quantum computer, and researchers have been chasing after them for years.

The latest result is the most tantalizing yet for Majorana hunters, confirming many theoretical predictions and laying the groundwork for more refined experiments in the future. In the new work, researchers measured the electrical current passing through an ultra-thin semiconductor nanowire connected to a strip of superconducting aluminum—a recipe that transforms the whole combination into a special kind of superconductor.

Experiments of this type expose the nanowire to a strong magnet, which unlocks an extra way for electrons in the wire to organize themselves at low temperatures. With this additional arrangement the wire is predicted to host a Majorana quasiparticle, and experimenters can look for its presence by carefully measuring the wire’s electrical response. 

The new experiment was conducted by researchers from QuTech at the Technical University of Delft in the Netherlands and Microsoft Research, with samples of the hybrid material prepared at the University of California, Santa Barbara and Eindhoven University of Technology in the Netherlands. Experimenters compared their results to theoretical calculations by JQI Fellow Sankar Das Sarma and JQI graduate student Chun-Xiao Liu.

The same group at Delft saw hints of a Majorana in 2012, but the measured electrical effect wasn’t as big as theory had predicted. Now the full effect has been observed, and it persists even when experimenters jiggle the strength of magnetic or electric fields—a robustness that provides even stronger evidence that the experiment has captured a Majorana, as predicted in careful theoretical simulations by Liu.

"We have come a long way from the theoretical recipe in 2010 for how to create Majorana particles in semiconductor-superconductor hybrid systems," says Das Sarma, a coauthor of the paper who is also the director of the Condensed Matter Theory Center at UMD. "But there is still some way to go before we can declare total victory in our search for these strange particles."

The success comes after years of refinements in the way that researchers assemble the nanowires, leading to cleaner contact between the semiconductor wire and the aluminum strip. During the same time, theorists have gained insight into the possible experimental signatures of Majoranas—work that was pioneered by Das Sarma and several collaborators at UMD.

Theory meets experiment

The quest to find Majorana quasiparticles in thin quantum wires began in 2001, spurred by Alexei Kitaev, then a physicist at Microsoft Research. Kitaev, who is now at the California Institute of Technology in Pasadena, concocted a relatively simple but unrealistic system that could theoretically harbor a Majorana. But this imaginary wire required a specific kind of superconductivity not available off-the-shelf from nature, and others soon began looking for ways to imitate Kitaev’s contraption by mixing and matching available materials.

One challenge was figuring out how to get superconductors, which usually go about their business with an even number of electrons—two, four, six, etc.—to also allow an odd number of electrons, a situation that is normally unstable and requires extra energy to maintain. The odd number is necessary because Majorana quasiparticles are unabashed oddballs: They only show up in the coordinated behavior of an odd number of electrons. 

In 2010, almost a decade after Kitaev’s original paper, Das Sarma, JQI Fellow Jay Deep Sau and JQI postdoctoral researcher Roman Lutchyn, along with a second group of researchers, struck upon a method to create these special superconductors, and it has driven the experimental search ever since. They suggested combining a certain kind of semiconductor with an ordinary superconductor and measuring the current through the whole thing. They predicted that the combination of the two materials, along with a strong magnetic field, would unlock the Majorana arrangement and yield Kitaev’s special material.

They also predicted that a Majorana could reveal itself in the way current flows through such a nanowire. If you connect an ordinary semiconductor to a metal wire and a battery, electrons usually have some chance of hopping off the wire onto the semiconductor and some chance of being rebuffed—the details depend on the electrons and the makeup of the material. But if you instead use one of Kitaev’s nanowires, something completely different happens. The electron always gets perfectly reflected back into the wire, but it’s no longer an electron. It becomes what scientists call a hole—basically a spot in the metal that’s missing an electron—and it carries a positive charge back in the opposite direction.

Physics demands that the current across the interface be conserved, which means that two electrons must end up in the superconductor to balance out the positive charge heading in the other direction. The strange thing is that this process, which physicists call perfect Andreev reflection, happens even when electrons in the metal receive no push toward the boundary—that is, even when they aren’t hooked up to a battery of sorts. This is related to the fact that a Majorana is its own antiparticle, meaning that it doesn’t cost any energy to create a pair of Majoranas in the nanowire. The Majorana arrangement gives the two electrons some extra room to maneuver and allows them to traverse the nanowire as a quantized pair—that is, exactly two at a time. 

"It is the existence of Majoranas that gives rise to this quantized differential conductance," says Liu, who ran numerical simulations to predict the results of the experiments on UMD’s Deepthought2 supercomputer cluster. "And such a quantization should even be robust to small changes in experimental parameters, as the real experiment shows."

Scientists refer to this style of experiment as tunneling spectroscopy because electrons are taking a quantum route through the nanowire to the other side. It has been the focus of recent efforts to capture Majoranas, but there are other tests that could more directly reveal the exotic properties of the particles—tests that would fully confirm that the Majoranas are really there. 

"This experiment is a big step forward in our search for these exotic and elusive Majorana particles, showing the great advance made in the materials improvement over the last five years," Das Sarma says. "I am convinced that these strange particles exist in these nanowires, but only a non-local measurement establishing the underlying physics can make the evidence definitive."

By Chris Cesare

Read More

Two independent teams of scientists, including one from the Joint Quantum Institute, have used more than 50 interacting atomic qubits to mimic magnetic quantum matter, blowing past the complexity of previous demonstrations. The results appear in this week’s issue of Nature.

As the basis for its quantum simulation, the JQI team deploys up to 53 individual ytterbium ions—charged atoms trapped in place by gold-coated and razor-sharp electrodes. A complementary design by Harvard and MIT researchers uses 51 uncharged rubidium atoms confined by an array of laser beams. With so many qubits these quantum simulators are on the cusp of exploring physics that is unreachable by even the fastest modern supercomputers. And adding even more qubits is just a matter of lassoing more atoms into the mix.

“Each ion qubit is a stable atomic clock that can be perfectly replicated,” says JQI Fellow Christopher Monroe*, who is also the co-founder and chief scientist at the startup IonQ Inc. “They are effectively wired together with external laser beams. This means that the same device can be reprogrammed and reconfigured, from the outside, to adapt to any type of quantum simulation or future quantum computer application that comes up.” Monroe has been one of the early pioneers in quantum computing and his research group’s quantum simulator is part of a blueprint for a general-purpose quantum computer.

Quantum hardware for a quantum problem

While modern, transistor-driven computers are great for crunching their way through many problems, they can screech to a halt when dealing with more than 20 interacting quantum objects. That’s certainly the case for quantum magnetism, in which the interactions can lead to magnetic alignment or to a jumble of competing interests at the quantum scale.

“What makes this problem hard is that each magnet interacts with all the other magnets,” says research scientist Zhexuan Gong, lead theorist and co-author on the study. “With the 53 interacting quantum magnets in this experiment, there are over a quadrillion possible magnet configurations, and this number doubles with each additional magnet. Simulating this large-scale problem on a conventional computer is extremely challenging, if at all possible.”
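The scale Gong describes is easy to check. A quick back-of-the-envelope calculation in Python (not part of the study; the 16-bytes-per-amplitude figure is our illustrative assumption for storing one complex number) shows why a conventional computer struggles:

```python
# Each magnet (qubit) doubles the number of possible configurations,
# so 53 magnets give 2**53 of them -- "over a quadrillion," as quoted.
n_qubits = 53
configs = 2 ** n_qubits
print(f"{n_qubits} magnets -> {configs:,} configurations")

# To simulate the system exactly, a classical computer must track one
# complex amplitude (assume 16 bytes) per configuration:
bytes_needed = configs * 16
print(f"~{bytes_needed / 1e15:.0f} petabytes to hold the full quantum state")
```

At roughly a hundred petabytes just to store the state, the memory cost alone puts an exact 53-qubit simulation beyond ordinary hardware, before any dynamics are computed.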

When these calculations hit a wall, a quantum simulator may help scientists push the envelope on difficult problems. This is a restricted type of quantum computer that uses qubits to mimic complex quantum matter. Qubits are isolated and well-controlled quantum systems that can be in a combination of two or more states at once. Qubits come in different forms, and atoms—the versatile building blocks of everything—are one of the leading choices for making qubits. In recent years, scientists have controlled 10 to 20 atomic qubits in small-scale quantum simulations.

Currently, tech industry behemoths, startups and university researchers are in a fierce race to build prototype quantum computers that can control even more qubits. But qubits are delicate and must stay isolated from the environment to protect the device’s quantum nature. With each added qubit this protection becomes more difficult, especially if qubits are not identical from the start, as is the case with fabricated circuits. This is one reason that atoms are an attractive choice that can dramatically simplify the process of scaling up to large-scale quantum machinery.

An atomic advantage

Unlike the integrated circuitry of modern computers, atomic qubits reside inside of a room-temperature vacuum chamber that maintains a pressure similar to outer space. This isolation is necessary to keep the destructive environment at bay, and it allows the scientists to precisely control the atomic qubits with a highly engineered network of lasers, lenses, mirrors, optical fibers and electrical circuitry.

“The principles of quantum computing differ radically from those of conventional computing, so there’s no reason to expect that these two technologies will look anything alike,” says Monroe.

In the 53-qubit simulator, the ion qubits are made from atoms that all have the same electrical charge and therefore repel one another. But as they push each other away, an electric field generated by a trap forces them back together. The two effects balance each other, and the ions line up single file. Physicists leverage the inherent repulsion to create deliberate ion-to-ion interactions, which are necessary for simulating interacting quantum matter.

The quantum simulation begins with a laser pulse that commands all the qubits into the same state. Then, a second set of laser beams interacts with the ion qubits, forcing them to act like tiny magnets, each having a north and south pole. The team does this second step suddenly, which jars the qubits into action. They feel torn between two choices, or phases, of quantum matter. As magnets, they can either align their poles with their neighbors to form a ferromagnet or point in random directions yielding no magnetization. The physicists can change the relative strengths of the laser beams and observe which phase wins out under different laser conditions.

The entire simulation takes only a few milliseconds. By repeating the process many times and measuring the resulting states at different points during the simulation, the team can see the process as it unfolds from start to finish. The researchers observe how the qubit magnets organize as different phases form, dynamics that the authors say are nearly impossible to calculate using conventional means when there are so many interactions.
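The competition described above—spins aligning into a ferromagnet versus pointing in random directions—can be reproduced for a handful of spins by brute-force diagonalization. The sketch below uses a simple nearest-neighbor transverse-field Ising chain with illustrative parameters; it is not the experiment's long-range model or its quench protocol, only a cartoon of the two competing phases:

```python
import numpy as np
from functools import reduce

N = 8  # a handful of spins; 2**N states is still easy classically
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # transverse-field term
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # magnetization axis

def op(single, site):
    """Embed a single-spin operator at `site` in the N-spin space."""
    return reduce(np.kron, [single if i == site else I for i in range(N)])

def ground_magnetization(h):
    # Ising coupling favors aligned neighbors; field h favors disorder.
    H = -sum(op(Z, i) @ op(Z, i + 1) for i in range(N - 1))
    H = H - h * sum(op(X, i) for i in range(N))
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]  # ground state
    M = sum(op(Z, i) for i in range(N)) / N
    return psi @ (M @ M) @ psi  # squared magnetization per spin

print(ground_magnetization(0.1))  # large: ferromagnet wins
print(ground_magnetization(5.0))  # small: disorder wins
```

Dialing `h` up and down plays the role of the experimenters changing the relative strengths of their laser beams—and the matrices here already have 256 rows for only 8 spins, doubling with each spin added.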

This quantum simulator is suitable for probing magnetic matter and related problems. But other kinds of calculations may need a more general quantum computer with arbitrarily programmable interactions in order to get a boost.

“Quantum simulations are widely believed to be one of the first useful applications of quantum computers,” says Alexey Gorshkov**, JQI Fellow and co-author of the study. “After perfecting these quantum simulators, we can then implement quantum circuits and eventually quantum-connect many such ion chains together to build a full-scale quantum computer with a much wider domain of applications.”

As they look to add even more qubits, the team believes that its simulator will embark on more computationally challenging terrain, beyond magnetism. “We are continuing to refine our system, and we think that soon, we will be able to control 100 ion qubits, or more,” says Jiehang Zhang, the study’s lead author and postdoctoral researcher. “At that point, we can potentially explore difficult problems in quantum chemistry or materials design.”

Written by E. Edwards

* Christopher Monroe is a fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. He is a Distinguished University Professor & Bice Sechi-Zorn Professor in the UMD Physics Department.

**Alexey Gorshkov is a fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. He is an Adjunct Professor in the UMD Physics Department.

Read More

Computers based on quantum physics promise to solve certain problems much faster than their conventional counterparts. By utilizing qubits—which can have more than just the two values of ordinary bits—quantum computers of the future could perform complex simulations and may solve difficult problems in chemistry, optimization and pattern recognition.

But building a large quantum computer—one with thousands or millions of qubits—is hard because qubits are very fragile. Small interactions with the environment can introduce errors and lead to failures. Detecting these errors is not straightforward, since quantum measurements are a form of interaction and therefore also disrupt quantum states. Quantum physics presents another wrinkle, too: It’s not possible to simply copy a qubit for backup.

Scientists have come up with clever ways to detect errors and keep them from spreading. But so far, a complete error detection protocol has not been tested in experiments, partly due to the difficulty of creating controlled interactions between all of the necessary qubits.

Now, in a recent article published in Science Advances, researchers at the Joint Quantum Institute tested a full procedure for encoding a qubit and detecting some of the errors that occur during and after the encoding. They applied a scheme that distributed the information of one qubit among four trapped ytterbium ions—themselves also qubits—using a fifth ion qubit to read out whether certain errors had occurred. Ions provide a rich set of interactions, which allowed scientists to link the fifth ion qubit with the other four at will—a common requirement of error detection or correction schemes. With this approach, the scientists detected nearly all of the single-ion errors, performing more than 5000 runs of the full encoding and measurement procedure for a number of different quantum states. Additionally, the encoding itself didn’t appear to introduce errors on multiple ions at the same time, a feature that could have spelled doom for error detection and correction in ions.
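The division of labor above—four data ions plus a fifth readout ion—can be caricatured classically. This sketch is emphatically not the quantum code used in the paper; it only illustrates the key idea that parity checks, reported through an ancilla, can flag that an error occurred without ever reading out the stored information:

```python
import random

def encode(bit):
    # Spread one logical bit across four "data ions."
    return [bit] * 4

def syndrome(data):
    # Pairwise parity checks, the role played by the fifth ion.
    # Each check compares neighboring bits without revealing their value:
    # [0, 0, 0] means "no error detected," anything else flags a flip.
    return [data[0] ^ data[1], data[1] ^ data[2], data[2] ^ data[3]]

data = encode(random.randint(0, 1))
assert syndrome(data) == [0, 0, 0]   # a clean codeword passes every check

data[2] ^= 1                         # one "ion" suffers an error...
assert syndrome(data) != [0, 0, 0]   # ...and the checks catch it
```

In the genuine quantum version the checks must also avoid collapsing superpositions, which is why the controlled interactions between the ancilla ion and the four data ions are the hard part of the experiment.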

Although the result is an early step toward larger quantum memories and quantum computers, the authors say it demonstrates the potential of qubit protection schemes with trapped ions and paves the way toward error detection and eventually error correction on a larger scale.

Written by Nina Beier

Read More

This story was prepared by the Delft University of Technology (TU Delft) and adapted with permission. The experiments described were performed at TU Delft, with theoretical and numerical contributions from JQI Fellow and Condensed Matter Theory Center Director Sankar Das Sarma and JQI postdoctoral researcher Xiao Li.

Quantum behavior plays a crucial role in novel and emergent material properties, such as superconductivity and magnetism. Unfortunately, it is still impossible to calculate the underlying quantum behavior, let alone fully understand it. Scientists of QuTech, the Kavli Institute of Nanoscience in Delft and TNO, in collaboration with ETH Zurich and the University of Maryland, have now succeeded in building an "artificial material" that mimics this type of quantum behavior on a small scale. In doing so, they have laid the foundations for new insights and potential applications. Their work is published today in Nature.

Over the past century, an increased understanding of semiconductor materials has led to many technological improvements, such as computer chips becoming ever faster and smaller. We are, however, gradually reaching the limits of Moore's Law, the trend that predicts a doubling in computing power for half the price every two years. But this prediction ignores the possibility that computers might harness quantum physics.

"There is so much physics left to discover if we truly want to understand materials on the very smallest scale," says Lieven Vandersypen, a professor at TU Delft in the Netherlands and the lead experimentalist on the new paper. And that new physics is set to bring even more new technology with it. "The difficulty is that, at this scale, quantum theory determines the behavior of electrons and it is virtually impossible to calculate this behavior accurately even for just a handful of electrons, using even the most powerful supercomputers," Vandersypen says.

Scientists are now combining the power of the semiconductor industry with their knowledge of quantum technology in order to mimic the behavior of electrons in materials—a technique known as quantum simulation. "I hope that, in the near future, this will enable us to learn so much about materials that we can open some important doors in technology, such as the design of superconductors at room temperature, to make possible loss-free energy transport over long distances, for example," Vandersypen says.

Mimicking nature

It has long been known that individual electrons can be confined to small regions on a chip, known as quantum dots. These are, in principle, suitable for researching the behavior and interactions of electrons in materials. The captured electrons can move, or tunnel, between the quantum dots in a controlled way, while they interact through the repulsion of their negative charges. "Processes like these in quantum dots, cooled to a fraction of a degree above absolute zero, are perfectly suitable for simulating the electronic properties of new materials," says Toivo Hensgens, a graduate student at TU Delft and the lead author of the paper.

In practice, it is a major challenge to control the electrons in quantum dots so precisely that the underlying physics becomes visible. Imperfections in the quantum chips and inefficient methods of controlling the electrons in the dots have made this a particularly hard nut to crack.

Quantum equipment

Researchers have now demonstrated a method that is both effective and can be scaled up to larger numbers of quantum dots. The number of electrons in each quantum dot can be set from 0 to 4 and the chance of tunnelling between neighbouring dots can be varied from negligible to the point at which neighbouring dots actually become one large dot. "We use voltages to distort the (potential) landscape that the electrons sense," explains Hensgens. "That voltage determines the number of electrons in the dots and the relative interactions between them."

In a quantum chip with three quantum dots, the QuTech team has demonstrated that they are capable of simulating a series of material processes experimentally. But the most important result is the method that they have demonstrated. "We are now easily able to add more quantum dots with electrons and control the potential landscape in such a way that we can ultimately simulate very large and interesting quantum processes," Hensgens says.

The Vandersypen team aims to progress towards more quantum dots as soon as possible. To achieve that, he and his colleagues have entered a close collaboration with chipmaker Intel. "Their knowledge and expertise in semiconductor manufacturing combined with our deep understanding of quantum control offers opportunities that are now set to bear fruit," he says.

Read More

Large-scale quantum computers, which are an active pursuit of many university labs and tech giants, remain years away. But that hasn’t stopped some scientists from thinking ahead, to a time when quantum computers might be linked together in a network or a single quantum computer might be split up across many interconnected nodes.

A group of physicists at the University of Maryland, working with JQI Fellow Christopher Monroe, are pursuing the second goal, attempting to wire up isolated modules of trapped atomic ions with light. They imagine many modules, each with a hundred or so ions, linked together to form a quantum computer that is inherently scalable: If you want a bigger computer, simply add more modules to the mix.

In a paper published recently in Physical Review Letters, Monroe and his collaborators reported on putting together many of the pieces needed to create such a module. It includes two different species of ions: an ytterbium ion for storing information and a barium ion for generating the light that communicates with other nodes.

This dual-species approach isolates the storage and communication tasks of a network node. With a single species, manipulating the communication ion with a laser could easily corrupt the storage ion. In several experiments, the researchers demonstrated that they could successfully isolate the two ions from each other, transfer information between them and capture light generated by both ions. 

The light from the barium communication ion could eventually be routed through fiber optic cables to a reconfigurable sensor, where it would meet light from other nodes. To demonstrate that the module could produce this communication light, the team carefully excited the barium ion with a laser—leaving the ytterbium ion untouched—and captured the light emitted as it decayed. By observing both this emitted light and the ion, the team determined that the two were entangled, a requirement if the light is to carry messages in a quantum network.

The team also transferred information between the two ions, using their mutual electrical push and the resulting motion to intermingle the ions’ internal quantum characteristics. Using lasers to excite specific motion, the team showed how to swap information from one ion to the other and even entangle the two ions. Entangling the storage ion with the communication ion and the communication ion with outgoing light are the main ingredients needed for a node in a quantum network.

Using two different species came with some challenges, though. One problem to overcome was a size mismatch. Since ions give each other an electrical push, they wobble in a coordinated way when they are trapped next to each other. But ytterbium is heavier than barium, creating a mismatch in this motion that slows down the rate that information can be transferred from the ytterbium memory to the barium interface.

By analyzing this coupled motion, the team realized that using motion along the line connecting the two ions—something that is typically slower because ions aren’t as tightly confined in this direction—would speed up the information transfer.

The team has added memory ions to their module since the experiments they report in this work. But their main focus going forward will be to wire more modules together, with the eventual goal being a large-scale, modular quantum computer.

The paper has four authors in addition to Monroe: lead author and JQI graduate student Volkan Inlek, JQI graduate student Clayton Crocker, JQI postdoctoral researcher Marty Lichtman and JQI graduate student Ksenia Sosnova.

Story by Chris Cesare


Deep within solids, individual electrons zip around on a nanoscale highway paved with atoms. For the most part, these electrons avoid one another, kept in separate lanes by their mutual repulsion. But vibrations in the atomic road can blur their lanes and sometimes allow the tiny particles to pair up. The result is smooth and lossless travel, and it’s one way to create superconductivity.

But there are other, less common ways to achieve this effect. Scientists from the University of Maryland (UMD), the University of California, Irvine (UCI) and Fudan University have now shown that tiny magnetic tremors lead to superconductivity in a material made from metallic nano-layers. And, beyond that, the resulting electron pairs shatter a fundamental symmetry between past and future. Although the material is a known superconductor, these researchers provide a theoretical model and measurement, which, for the first time, unambiguously reveals the material’s exotic nature.

In quantum materials, breaking the symmetry between the past and the future often signifies unconventional phases of matter. The nickel-bismuth (Ni-Bi) sample studied here is the first example of a 2D material where this type of superconductivity is intrinsic, meaning that it happens without the help of external agents, such as a nearby superconductor. These findings, recently published in Science Advances, make Ni-Bi an appealing choice for use in future quantum computers. This research may also assist scientists in their search for other similarly strange superconductors.

Mehdi Kargarian*, a postdoctoral researcher at UMD and a co-author of the paper, explains that even after a century of study, superconductivity remains a vibrant area of research. “It is a rather old problem, so it is surprising that people are still discovering types of superconductivity in the lab that are unprecedented,” Kargarian says, adding that there are typically two questions scientists ask of a new superconductor. “First, we want to understand the underlying electron pairing—what is causing the superconductivity,” he says. “The second thing, related to applications, is to see if superconductivity is possible at higher temperatures.”

Superconductors, particularly the exotic types, largely remain shackled to unwieldy cryogenic equipment. Scientists are searching for ways to push superconducting temperatures higher, thus making these materials easier to use for things like improved electricity distribution and quantum devices. In this new research, the team tackles Kargarian’s first question, and the material hints at a positive answer to the second: its exotic superconductivity, although still cryogenic, occurs at a higher temperature than in other similar systems.

Ni-Bi superconductivity was first observed in the early 1990s. But later, when Fudan University scientists published studies of an ultrapure, ultra-thin sample, they noticed something unusual happening.

The strangeness starts with the superconductivity itself. Bismuth alone is not a superconductor, except at extraordinarily low temperatures and high pressures—conditions that are not easy to achieve. Nickel is magnetic and not a superconductor. In fact, strong magnets are known to suppress the effect. This means that too much nickel destroys the superconductivity, but a small amount induces it.

UMD theorists* proposed that fluctuations in nickel’s magnetism are at the heart of this peculiar effect. These tiny magnetic tremors help electrons to form pairs, thus doing the work performed by vibrations in conventional superconductors. If there is too much nickel, magnetism dominates and the effect of the fluctuations diminishes. If there is too much bismuth, then the top surface, where superconductivity takes place, is too far away from the source of magnetic fluctuations.

The Goldilocks zone occurs when a twenty-nanometer-thick bismuth layer is grown on top of two nanometers of nickel. For this layer combination, superconductivity happens at around 4 degrees above absolute zero. While this is about as cold as deep space, it is actually quite lab-friendly and reachable using standard cryogenic equipment.

The idea that magnetic fluctuations can promote superconductivity is not new and dates back to the end of the 20th century. However, most earlier examples of such behavior require strict operating conditions, such as high pressure. The researchers explain that Ni-Bi is different because straightforward cooling is enough to achieve this type of exotic superconductivity, which breaks time symmetry.

The researchers employed a highly customized apparatus to search for signs of the broken symmetry. Light should rotate when reflected from samples that have this property. For Ni-Bi, the expected amount of light rotation is tens of nanoradians, which is about 100 billionths of a tick on a watch face. Jing Xia*, a co-author of the paper and a professor at UCI, has one of the only devices in the world capable of measuring such an imperceptible light rotation.

In order to measure this rotation for Ni-Bi, light waves are first injected into one end of a single special-purpose optical fiber. The two waves travel through the fiber, as if on independent paths. They hit the sample and then retrace their paths. Upon return, the waves are combined and form a pattern. Rotations of the light waves—from, say, symmetry breaking—will show up in the analyzed pattern as small translations. Xia and his colleagues at UCI measured around 100 nanoradians of rotation, confirming the broken symmetry. Importantly, the effect appeared just as the Ni-Bi sample became a superconductor, suggesting that the broken time symmetry and the appearance of superconductivity are strongly linked.

This form of superconductivity is rare and researchers say that there is still no recipe for making it happen. But, as Xia points out, there is guidance in the math behind the electron behavior. “We know mathematically how to make electron pairs break time-reversal symmetry,” Xia says. “Practically, how do you achieve this formulaically? That is the million-dollar question. But my instinct is that when you do get magnetic fluctuation-mediated superconductivity, like in this material, then it is highly likely you break that symmetry.”

* M. Kargarian is a postdoctoral researcher at the Condensed Matter Theory Center (CMTC), University of Maryland (UMD) and is affiliated with the Joint Quantum Institute. V. Yakovenko and V. Galitski are members of CMTC, fellows of the Joint Quantum Institute, and UMD professors. J. Xia is a professor at the University of California, Irvine. 

Written by E. Edwards/JQI


The race to build larger and larger quantum computers is heating up, with several technologies competing for a role in future devices. Each potential platform has strengths and weaknesses, but little has been done to directly compare the performance of early prototypes. Now, researchers at the JQI have performed a first-of-its-kind benchmark test of two small quantum computers built from different technologies.

The team, working with JQI Fellow Christopher Monroe and led by postdoctoral researcher Norbert Linke, sized up their own small-scale quantum computer against a device built by IBM. Both machines use five qubits—the fundamental units of information in a quantum computer—and both machines have similar error rates. But while the JQI device relies on chains of trapped atomic ions, IBM Q uses the movement of charges in a superconducting circuit.

To make their comparison, the JQI team ran several quantum programs on the devices, each of which solved a simple problem using a series of logic gates to manipulate one or two qubits at a time. Researchers accessed the IBM device using an online interface, which allows anyone to try their hand at programming IBM Q.

Both computers have strengths and weaknesses. For example, the superconducting platform has quicker gates and may be easier to mass produce, but its man-made qubits are all slightly different and have shorter lifetimes. Monroe says that the slower gates of ions might not be a major hurdle, though. “Because there is time,” Monroe says. “Trapped ion qubit lifetimes are way longer than any other type of qubit. Moreover, the ion qubits are identical, and they can be better replicated without error.”

When put to the test, researchers found that the trapped-ion module was more accurate for programs that involved many pairs of qubits. Linke and Monroe attribute this to the simple fact that every qubit in their device is connected to every other—meaning that a logic gate can connect any pair of qubits. IBM Q has fewer than half the connections of its JQI counterpart, and in order to run some programs it had to shuffle information between qubits—a step that introduced errors into the calculation. When this shuffling wasn’t necessary, the two computers had similar performance.  “As we build larger systems, connectivity between qubits will become even more important,” Monroe says.
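The cost of limited connectivity can be made concrete with a short sketch. In a fully connected five-qubit register, every pair of qubits shares a direct link, so any two-qubit gate runs immediately; on a sparsely coupled chip, distant qubits must first be brought together with SWAP operations. The star-shaped coupling map below is a hypothetical stand-in for a limited-connectivity device, not IBM Q's actual layout, and the SWAP count is only a rough proxy for routing overhead.

```python
from itertools import combinations
from collections import deque

def shortest_path_len(edges, a, b):
    """BFS hop count between qubits a and b in an undirected coupling graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return None

qubits = range(5)
# Fully connected trap: a gate can directly couple any pair of ions.
ion_edges = list(combinations(qubits, 2))       # 10 links
# Hypothetical star-shaped coupling map, for illustration only.
star_edges = [(0, 2), (1, 2), (2, 3), (2, 4)]   # 4 links

for name, edges in [("full connectivity", ion_edges), ("star layout", star_edges)]:
    # A pair at graph distance d needs d - 1 SWAPs before its gate can run.
    swaps = sum(shortest_path_len(edges, a, b) - 1
                for a, b in combinations(qubits, 2))
    print(f"{name}: {len(edges)} links, {swaps} total SWAPs over all pairs")
```

With full connectivity no shuffling is ever needed, while the star layout needs a SWAP for every pair that doesn't include the central qubit, and each extra SWAP is another chance to introduce an error.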

The new study, which was recently published in Proceedings of the National Academy of Sciences, provides an important benchmark for researchers studying quantum computing. And such head-to-head comparisons will become increasingly important in the future. “If you want to buy a quantum computer, you’ll need to know which one is best for your application,” Linke says. “You’ll need to test them in some way, and this is the first of this kind of comparison.”

By Erin Marshall


Consider, for a moment, the humble puddle of water. If you dive down to nearly the scale of molecules, it will be hard to tell one spot in the puddle from any other. You can shift your gaze to the left or right, or tilt your head, and the microscopic bustle will be identical—a situation that physicists call highly symmetric.

That all changes abruptly when the puddle freezes. In contrast to liquid water, ice is a crystal, and it gains a spontaneous rigid structure as the temperature drops. Freezing fastens neighboring water molecules together in a regular pattern, and a simple tilt of the head now creates a kaleidoscopic change.

In 2012, Nobel-prize winning physicist Frank Wilczek, a professor at the Massachusetts Institute of Technology, proposed something that sounds pretty strange. It might be possible, Wilczek argued, to create crystals that are arranged in time instead of space. The suggestion prompted years of false starts and negative results that ruled out some of the most obvious places to look for these newly named time crystals.

Now, five years after the first proposal, a team of researchers led by physicists at the Joint Quantum Institute and the University of Maryland has created the world's first time crystal using a chain of atomic ions. The result, which finally brings Wilczek's exotic idea to life, was reported in Nature on March 9.

Much like freezing destroys the symmetry of liquid water, a time crystal disturbs a regularity in time. This is somewhat surprising, says lead author and JQI postdoctoral researcher Jiehang Zhang, since nature usually responds in sync to things that change in time. "The earth rotates around the sun once a year, and the seasons have the same period," Zhang says. "That’s what you would naturally expect."

A time crystal doesn't follow the lead, instead responding with a slower frequency—like a bell struck once a second that rings every other second. The atomic ions in the Maryland experiment, which researchers manipulated using laser pulses, responded exactly half as fast as the sequence of pulses that drove them.

Zhang, JQI Fellow Christopher Monroe and a group of experimentalists at UMD teamed up with a theory group at the University of California, Berkeley to create their time crystal. The Berkeley group, led by physicist Norman Yao, had previously proposed a way to create time crystals in the lab. For a chain of atomic ions, the challenge came down to finding the right sequence of laser pulses, along with assembling the sea of mirrors and lenses that ensured the lasers impinged on the ions in the right way.

To create their time crystal, researchers activated three types of laser-driven behavior in a chain of ten ytterbium ions. First, each ion was bombarded with its own individual laser beam, flipping an internal quantum property called spin by roughly 180 degrees with each pulse. Second, the ions were induced to interact with each other, coupling their internal spins together like two neighboring magnets. Finally, random disorder—essentially noise—was sprinkled onto each ion, a feature known from previous experiments to prevent the spins from jostling and heating up the chain.

Altogether, this sequence twisted around the ions' spins, and researchers kept track of the orientation of each spin after many repetitions of the sequence. When all three laser-driven behaviors were turned on, the spins of each ion synced up, and they would rhythmically return to their original direction at half the speed of the laser sequence.
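The period-doubled bookkeeping can be illustrated with a single-spin cartoon, though it is emphatically not a model of the experiment: the rigidity that makes this a time crystal comes from the interactions and disorder described above, which this sketch omits. It only shows why perfect 180-degree pulses return a spin to its starting direction every two pulses, at half the drive frequency.

```python
import math

def drive(n_pulses, flip_angle=math.pi):
    """Cartoon drive: each laser pulse rotates the spin by flip_angle.
    Records round(cos(accumulated angle)): +1 means 'original direction'."""
    history = []
    angle = 0.0
    for _ in range(n_pulses):
        angle += flip_angle
        history.append(round(math.cos(angle)))
    return history

# Perfect 180-degree pulses: the spin returns to +1 every *two* pulses,
# i.e. the response repeats at half the drive frequency.
print(drive(6))                  # [-1, 1, -1, 1, -1, 1]

# Imperfect pulses make this lone spin drift out of sync -- in the real
# experiment, interactions and disorder lock the rhythm in place anyway.
print(drive(6, 0.9 * math.pi))
```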

But a time crystal is more than mere repetition, and this alone would not be enough to claim the creation of a time crystal, Zhang says. A crystal also needs to be rigid. "If you put a bunch of billiard balls on a pool table separated by exactly 10 centimeters, is that a crystal?" Zhang says.  "Not really, because if you shake the table a little bit it will fall apart."

Zhang and his colleagues demonstrated that their ions had this rigidity by attempting to artificially "melt" the time crystal. By modifying one of the laser pulses—essentially shaking the table—they observed that the rhythm remained stable, up to a point. Past a certain amount of heating, the time crystal dissolved away, just as an ice cube can melt back into a small puddle of water. But with weak shaking, it remained stable, a fact that provided the key evidence that they had created a time crystal.

This rigidity makes time crystals a potential ingredient for clocking complex quantum systems that have inherent defects and are hard to control. They could have applications to future quantum computers, which will also need to be robust. But such applications are still a long way off, especially since the time crystal that Zhang and collaborators produced lasted less than a millisecond.

"This bizarre state of matter results from a complex interplay between many quantum controls at the individual atomic level," says Monroe. "But time crystals can also emerge in certain solid-state devices, so a general understanding of this phenomenon could help bring such systems into future quantum devices."

In the same issue of Nature, a group of researchers from Harvard University, also working with Berkeley’s Yao, reported the creation of a time crystal using just such a solid-state system. Instead of ions, they used natural defects found in diamond to set up their crystal.


From credit card numbers to bank account information, we transmit sensitive digital information over the internet every day. Since the 1990s, though, researchers have known that quantum computers threaten to disrupt the security of these transactions.

That’s because quantum physics predicts that these computers could do some calculations far faster than their conventional counterparts. This would let a quantum computer crack a common internet security system called public key cryptography.

This system lets two computers establish private connections hidden from potential hackers. In public key cryptography, every device hands out copies of its own public key, which is a piece of digital information.  Any other device can use that public key to scramble a message and send it back to the first device. The first device is the only one that has another piece of information, its private key, which it uses to decrypt the message. Two computers can use this method to create a secure channel and send information back and forth.
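That exchange can be made concrete with a textbook RSA example using tiny primes, a deliberately insecure toy for illustration only; real keys are hundreds of digits long. The security rests on the difficulty of factoring n back into p and q, which is exactly the task a quantum computer could do quickly.

```python
# Toy RSA with tiny primes -- illustrative only, never use for real security.
p, q = 61, 53
n = p * q                          # modulus, part of the public key
phi = (p - 1) * (q - 1)
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent: e*d = 1 (mod phi)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (e, n)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(decrypted)                   # 42
```

Recovering d requires knowing phi, and hence the factors p and q of n; here factoring 3233 is trivial, but for full-size keys it is beyond any conventional computer.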

A quantum computer could quickly calculate another device’s private key and read its messages, putting every future communication at risk. But many scientists are studying how quantum physics can fight back and help create safer communication lines.

One promising method is quantum key distribution, which allows two parties to directly establish a secure channel with a single secret key. One way to generate the key is to use pairs of entangled photons—particles of light with a shared quantum connection. The entanglement guarantees that no one else can know the key, and if someone tries to eavesdrop, both parties will be tipped off.
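The bookkeeping of that key-generation step can be sketched classically. The toy below (the name `bbm92_sift` and all details are illustrative) assumes each entangled pair yields perfectly correlated outcomes whenever the two parties happen to measure in the same basis; rounds with mismatched bases are discarded after public comparison. It does not simulate quantum measurement statistics or eavesdropper detection, only the sifting logic.

```python
import random

def bbm92_sift(n_pairs, seed=0):
    """Cartoon of entanglement-based key sifting (illustrative names).
    Matching-basis rounds contribute a shared key bit; others are discarded."""
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis = rng.choice("ZX")        # Alice picks a measurement basis
        b_basis = rng.choice("ZX")        # Bob independently picks one
        outcome = rng.randint(0, 1)       # shared outcome, thanks to entanglement
        if a_basis == b_basis:            # kept after public basis comparison
            alice_key.append(outcome)
            bob_key.append(outcome)
    return alice_key, bob_key

alice, bob = bbm92_sift(1000)
print(len(alice), alice == bob)   # roughly half the rounds survive; keys agree
```

In a real run, the two parties would also sacrifice a random subset of their key bits to check for the outcome disagreements that would reveal an eavesdropper.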

Tobias Huber, a recently arrived JQI Experimental Postdoctoral Fellow, has been investigating how to reliably generate the entangled photons necessary for this secure communication. Huber is a graduate of the University of Innsbruck in Austria, where he was supervised by Gregor Weihs. They have frequently collaborated with JQI Fellow Glenn Solomon, who spent a semester at Innsbruck as a Fulbright Scholar. Over the past couple of years, they have been studying a particular source of entangled photons, called quantum dots.

A quantum dot is a tiny area in a semiconductor, just nanometers wide, that is embedded in another semiconductor. This small region behaves like an artificial atom. Just like in an atom, electrons in a quantum dot occupy certain discrete energy levels. If the quantum dot absorbs a photon of the right color, an electron can jump to a higher energy level. When it does, it leaves behind an open slot at the lower energy, which physicists call a hole. Eventually, the electron will decay to its original energy, emitting a photon and filling in the hole. The intermediate combination of the excited electron and the hole is called an exciton, and two excited electrons and two holes are called a biexciton. A biexciton will decay in a cascade, emitting a pair of photons.

Huber, Weihs, Solomon and several colleagues have developed a way to directly excite biexcitons in quantum dots using a sequence of laser pulses. The pulses make it possible to encode information in the pair of emitted photons, creating a connection between them known as time-bin entanglement. It’s the best type of entanglement for transmitting quantum information through optical fibers because it doesn’t degrade as easily as other types over long distances. Huber and his colleagues are the first to directly produce time-bin entangled photons from quantum dots.

In their latest work, published in Optics Express, they investigated how the presence of material imperfections surrounding the quantum dots influences this entanglement generation. Imperfections have their own electron energy levels and can steal an electron from a dot or donate an electron to fill a hole. Either way, the impurity prevents an exciton from decaying and emitting a photon, decreasing the number of photons that are ultimately released. To combat this loss, the team used a second laser to fill up the electron levels of the impurities and showed that this increased the number of photons released without compromising the entanglement between them.

The team says the new work is a step in the right direction to make quantum dots a viable source of entangled photons. Parametric down-conversion, a competitor that uses crystals to split the energy of one photon into two, occasionally produces two pairs of entangled photons instead of one. This could allow an eavesdropper to read an encrypted message without being detected. The absence of this drawback makes quantum dots an excellent candidate for producing entangled photons for quantum key distribution.

The advent of quantum computing brings new security challenges, but tools like quantum key distribution are taking those challenges head-on. It’s possible that, one day, we could have not only quantum computers, but quantum-secure communication lines, free from prying eyes.


This is part two of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from physicists at JQI and the Condensed Matter Theory Center at the University of Maryland. The second part focuses on the theoretical questions about Weyl materials that Maryland researchers are exploring. Part one, which was published last week, introduced their history and basic physics. If you haven’t read part one, we encourage you to head there first before getting into the details of part two.

The 2015 discovery of a Weyl semimetal—and the Weyl fermions it harbored—provoked a flurry of activity from researchers around the globe. A quick glance at a recent physics journal or the online arXiv preprint server testifies to the topic’s popularity. The arXiv alone has had more than 200 papers on Weyl semimetals posted in 2016.

Researchers at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland have been interested in Weyl physics since before last summer’s discovery, publishing 18 papers on the topic over the past two years. In all, more than a dozen scientists at Maryland have been working to understand the fundamental properties of these curious new materials.

In addition to studying specific topics, researchers are also realizing that the physics of Weyl fermions—particles first studied decades ago in a different setting—might have wider reach. They may account for the low-energy behavior of certain superconductors and other materials, especially those that are strongly correlated—that is, materials in which interactions between electrons cannot be ignored.

"Weyl physics should be abundant in many, many correlated materials," says Pallab Goswami, a theorist and postdoctoral researcher at JQI and CMTC. "If we can understand the exotic thermodynamic, transport and electrodynamic properties of Weyl fermions, we can probably understand more about low temperature physics in general," Goswami says.

Taking a wider approach

Goswami is not only interested in discovering new Weyl semimetals. He also wants to find strongly interacting materials where Weyl semimetal physics can help explain unresolved puzzles. That behavior is often related to the unique magnetic properties of Weyl fermions.

Recently, he and several colleagues examined a family of compounds known as pyrochlore iridates, formed from iridium, oxygen and a rare earth element such as neodymium or praseodymium. While most of these are insulators at low temperatures, the compound with praseodymium is an exception. It remains a metal and, intriguingly, has an anomalous current that flows without any external influence. This current, due to a phenomenon called the Hall effect, appears in other materials, but it is usually driven by an applied magnetic field or the magnetic properties of the material itself. In the praseodymium iridate, though, it appears even without a magnetic field and despite the fact that the compound has no magnetic properties that have been seen by experiment.

Goswami and his colleagues have argued that Weyl fermions can account for this puzzling behavior. They can distort a material’s magnetic landscape, making it look to other particles as if a large magnetic field is there. This effect is hard to spot in the lab, though, due to the challenge of keeping samples at very cold temperatures. The team has suggested how future experiments might confirm the presence of Weyl fermions through precise measurements with a scanning tunneling microscope.

On the surface

Parallel to Goswami’s efforts to expand the applications of Weyl physics, Johannes Hofmann, a former JQI and CMTC theorist who is now at the University of Cambridge in the UK, is diving into the details of Weyl semimetals. Hofmann has studied Weyl semimetals divorced from any real material and predicted a generic behavior of electrons on the surface of a semimetal. It’s a feature that could ultimately find applications in electronics and photonics.

In particular, he studied undulating charge distributions on the surface of semimetals, created by regions with more electrons and regions with fewer. Such charge fluctuations are dynamic, moving back and forth in response to their mutual electrical attraction, and in Weyl semimetals they support waves that move in only one direction.

The charge fluctuations generate electric and magnetic fields just outside the surface. And on the surface, positive and negative regions are packed close together—so close, in fact, that their separation can be much smaller than the wavelength of visible light. Since these fluctuations occur on such a small scale, they can also be used to detect small features in other objects. For instance, bringing a sample of some other material near the surface will modify the distribution of charges in a way that could be measured. Getting the same resolution with light would require high-energy photons that could destroy the object being imaged. Indeed, researchers have already shown that this is a viable imaging technique, as demonstrated in experiments with ordinary metals.

On the surface of Weyl semimetals one-way waves can travel through these charge fluctuations. Ordinary metals, too, can carry waves but require huge magnetic fields to steer them in only one direction. Hofmann showed that in a Weyl semimetal, it’s possible to create these waves without a magnetic field, a fact that could enable applications of the materials to microscopy and lithography. 

Too much disorder?

Although many studies imagine that Weyl materials are perfectly clean, such a situation rarely occurs in real experiments. Contaminants inevitably lodge themselves into the ideal crystal structure of any solid. Consequently, JQI scientists have looked at how disorder—the dirt that prevents samples from behaving like perfect theoretical models—affects the properties of Weyl materials. Their work has settled an argument theorists have been having for years.

One camp thought that weak disorder—dirt that doesn’t cause big changes—was essentially harmless to Weyl semimetals, since tiny wobbles in the material's electrical landscape could safely be ignored. The other camp argued that certain fluctuations, though weak, affect a wide enough area of the landscape that they cannot be ignored.

Settling the dispute took intense numerical study, requiring the use of supercomputing resources at Maryland. "It was very hard to do this," says Jed Pixley, a postdoctoral researcher at JQI and CMTC who finally helped solve the disorder conundrum. "It turns out that the effects of large local fluctuations of the disorder are weak, but they’re there."

Pixley’s calculations found that large regions of weak disorder create a new type of low-energy excitation, in addition to Weyl fermions. These new excitations live around the disordered regions and divert energy away from the Weyl fermion quasiparticles. The upshot is that the quasiparticles have a finite lifetime, instead of the infinite lifetime predicted by previous studies. The result has consequences for the stability of Weyl semimetals in a technical sense, although the lifetime of the quasiparticles is still quite long. In typical experiments, the effects of large areas of disorder would be tough to spot, although experiments on Weyl semimetals are still in their early days.

Research into Weyl materials shows little sign of slowing down. And the broader role that Weyl fermions play in condensed matter physics is still evolving and growing, with many more surprises likely in the future. As more and more experimental groups join the hunt for exotic physics, theoretical investigations, like those of the scientists at JQI and CMTC, will be crucial to identifying new behaviors and suggesting new experiments, steering the study of Weyl physics toward new horizons.


This is part one of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from researchers at JQI and the Condensed Matter Theory Center at the University of Maryland. The first part focuses on the history and basic physics of these materials. Part two focuses on theoretical work at Maryland.

For decades, particle accelerators have grabbed headlines while smashing matter together at faster and faster speeds. But in recent years, alongside the progress in high-energy experiments, another realm of physics has been taking its own exciting strides forward.

That realm, which researchers call condensed matter physics, studies chunks of matter moving decidedly slower than the protons in the LHC. In fact, the materials under study—typically solids or liquids—are usually sitting still. That doesn't make them boring, though. Their calm appearance can often hide exotic physics that arises from their microscopic activity.

"In condensed matter physics, the energy scales are much lower," says Pallab Goswami, a postdoctoral researcher at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland. "We want to go to lower energies and find new phenomena, which is exactly the opposite of what is done in particle physics."

Historically, that's been a fruitful approach. The field has explained the physics of semiconductors—like the silicon that makes computer chips—and many superconductors, which generate the large magnetic fields required for clinical MRI machines.

Over the past decade, that success has continued. In 2004, researchers at the University of Manchester in the UK discovered a way to make single-atom-thick sheets of carbon by sticking Scotch tape onto graphite and peeling it off. It was a shockingly low-tech way to make graphene, a material with stellar electrical properties and incredible strength, and it led quickly to a Nobel Prize in physics in 2010.

A few years later, researchers discovered topological insulators, materials that trap their internal electrons but let charges on the surface flow freely. It’s a behavior that requires sophisticated math to explain—math that earned three researchers a share of the 2016 Nobel Prize in physics for theoretical discoveries that ultimately explain the physics of these and other materials.

In 2012, experimentalists studying the junction between a superconductor and a thin wire spotted evidence for Majorana fermions, particles that behave like uncharged electrons. Originally studied in the context of high-energy physics, these exotic particles never showed up in accelerators, but scientists at JQI predicted that they might make an appearance at much lower energies.

Last year, separate research groups at Princeton University, MIT and the Chinese Academy of Sciences discovered yet another exotic material—a Weyl semimetal—and with it yet another particle: the Weyl fermion. It brought an end to a decades-long search that began in the 1930s and earned acclaim as a top-10 discovery of the year, according to Physics World.

Like graphene, Weyl semimetals have appealing electrical properties and may one day make their way into electronic devices. But, perhaps more intriguingly for theorists, they also share some of the rich physics of topological insulators and have provoked a flurry of new research. Scientists working with JQI Fellow Sankar Das Sarma, the Director of CMTC, have published 18 papers on the subject since 2014.

Das Sarma says that the progress in understanding solid state materials over the past decade has been astonishing, especially the discovery of phenomena researchers once thought were confined to high-energy physics. “It shows how clever nature is, as concepts and laws developed in one area of physics show up in a completely disparate area in unanticipated ways,” he says.

An article next week will explore some of the work on Weyl materials at JQI and CMTC. This week's story will focus on the fundamental physics at play in these unusual materials.

Spotted at long last

Within two weeks last summer, three research groups reported evidence for Weyl semimetals. Teams from the US and China measured the energy of electrons on the surface of tantalum arsenide, a metallic crystal that some had predicted might be a semimetal. By shining light on the material and capturing electrons ejected from the sample, researchers were able to map out a characteristic sign of Weyl semimetals—a band of energies that electrons on the surface inhabit, known as a Fermi arc. It was a feature predicted only for Weyl semimetals.

Much of the stuff on Earth, from wood and glass to copper and water, is not a semimetal. It's either an insulator, which does a bad job of conducting electricity, or a conductor, which lets electrical current flow with ease.

Quantum physics ultimately explains the differences between conductors, insulators, semiconductors and semimetals. The early successes of quantum physics—like explaining the spectrum of light emitted by hydrogen atoms—revolved around the idea that quantum objects have discrete energy levels. For instance, in the case of hydrogen, the single electron orbiting the nucleus can only occupy certain energies. The pattern of light emanating from hot hydrogen gas matches up with the spacing between these levels.

In a solid, which has many, many atoms, electrons still occupy a discrete set of energies. But with so many electrons come many more levels, and those levels tend to bunch together. This leads to a series of energy bands, where electrons can live, and gaps, where they can't. The figure below illustrates this.

Electrons pile into these bands, filling up the allowed energies and skipping the gaps. Depending on where in the band structure the last few electrons sit, a material will have dramatically different electrical behavior. Insulators have an outer band that is completely filled up, with an energy gap to a higher empty band. Metals have their most energetic electrons sitting in a partially filled band, with lots of slightly higher energies to jump to if they are prodded by a voltage from a battery.

A Weyl semimetal is a different beast. There, electrons pile in and completely fill a band, but there is no gap to the higher, unfilled band. Instead, the two touch at isolated points, which are responsible for some interesting properties of Weyl materials.

Quasiparticles lead the charge

We often think of electrons as carrying the current in a wire, but that’s not the whole story. The charge carriers look like electrons, but due to their microscopic interactions they behave like they have a different mass. These effective charge carriers, which have different properties in different materials, are called quasiparticles.

By examining a material's bands and gaps, it's possible to glean some of the properties of these quasiparticles. For a Weyl semimetal, the charge carriers satisfy an equation first studied in 1929 by a German mathematician named Hermann Weyl, and they are now called Weyl fermions.
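The Weyl equation is simple enough to explore numerically. Below is a minimal sketch (an illustration, not taken from any of the papers discussed here) that diagonalizes the two-band Weyl Hamiltonian H(k) = v k·σ, with ħ and the velocity v set to 1, and confirms that the two bands touch only at the node k = 0:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def weyl_bands(k, v=1.0):
    """Energies of the two bands of H(k) = v * k . sigma (hbar = 1)."""
    kx, ky, kz = k
    H = v * (kx * sx + ky * sy + kz * sz)
    return np.linalg.eigvalsh(H)  # eigenvalues are +/- v*|k|

# The bands touch only at the Weyl node k = 0:
print(weyl_bands((0, 0, 0)))    # -> [0. 0.]
# Away from the node, the bands split linearly in |k|:
print(weyl_bands((0.5, 0, 0)))  # -> [-0.5  0.5]
```

The linear, gapless crossing at the node is the band-structure signature described above; the energies grow in proportion to the distance from the node in momentum space.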

But the structure of the bands doesn't capture everything about the material, says Johannes Hofmann, a former postdoctoral researcher at JQI and CMTC who is now at the University of Cambridge. "In a sense, these Weyl materials are very similar to graphene," Hofmann says. "But they are not only described by the band structure. There is a topological structure as well, just as in topological insulators."

Hofmann says that although the single-point crossings in the bands play an important role, they don't tell the whole story. Weyl semimetals also have a topological character, which means that the overall shape of the bands and gaps, as well as the way electrons spread out in space, affect their properties. Topology can account for these other properties by capturing the global differences in these shapes, like the distinction between an untwisted ribbon and a Moebius strip.

The interplay between the topological structure and the properties of Weyl materials is an active area of research. Experiments, though, are still in the earliest stages of sorting out these questions.

A theorist’s dream

Researchers at JQI and elsewhere are studying many of the theoretical details, from the transport properties on the surfaces of Weyl materials to the emergence of new types of order. They are even finding Weyl physics useful in tackling condensed matter quandaries that have long proved intractable.

Jed Pixley, a postdoctoral researcher at CMTC, has studied how Weyl semimetals behave in the presence of disorder. Pixley says that such investigations are crucial if Weyl materials are to find practical applications. "If you are hoping semimetals have these really interesting aspects,” he says, “then things better not change when they get a little dirty."

Please return next week for a sampling of the research into Weyl materials underway at JQI and CMTC. Written by Chris Cesare with illustrations and figures by Sean Kelley.

Read More

When it comes to quantum physics, light and matter are not so different. Under certain circumstances, negatively charged electrons can fall into a coordinated dance that allows them to carry a current through a material laced with imperfections. That motion, which can only occur if electrons are confined to a two-dimensional plane, arises due to a phenomenon known as the quantum Hall effect.

Researchers, led by Mohammad Hafezi, a JQI Fellow and assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, have made the first direct measurement that characterizes this exotic physics in a photonic platform. The research was published online Feb. 22 and featured on the cover of the March 2016 issue of Nature Photonics. These techniques may be extended to more complex systems, such as one in which strong interactions and long-range quantum correlations play a role.

Symmetry and Topology

Physicists use different approaches to classify matter; symmetry is one powerful method. For instance, the microscopic structure of a material like diamond looks the same even while shifting your gaze to a new spot in the crystal. These symmetries – the rotations and translations that leave the microscopic structure the same – predict many of the physical properties of crystals.

Symmetry can actually offer a kind of protection against disruptions. Here, the word protection means that the system (e.g. a quantum state) is robust against changes that do not break the symmetry. Recently, another classification scheme based on topology has gained significant attention. Topology is a property that depends on the global arrangement of particles that make up a system rather than their microscopic details. The excitement surrounding this mathematical concept has been driven by the idea that the topology of a system can offer a stability bubble around interesting and even exotic physics, beyond that of symmetry. Physicists are interested in harnessing states protected by both symmetry and topology because quantum devices must be robust against disturbances that can interfere with their functionality.

The quantum Hall effect is best understood by peering through the lens of topology. In the 1980s, physicists discovered that electrons in some materials behave strangely when subjected to large magnetic fields at extreme cryogenic temperatures. Remarkably, the electrons at the boundary of the material will flow along avenues of travel called ‘edge states’, protected against defects that are most certainly present in the material. Moreover, the conductance--a measure of the current--is quantized. This means that as the magnetic field is ramped up, the conductance does not change smoothly. Instead it stays flat, like a plateau, and then suddenly jumps to a new value. The plateaus occur at precise values that are independent of many of the material’s properties. This hopping behavior is a form of precise quantization and is what gives the quantum Hall effect its great utility, allowing it to provide the modern standard for calibrating resistance in electronics, for instance.
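The plateaus can be written compactly: on each one, the Hall conductance is pinned to a multiple of a ratio of fundamental constants, independent of the sample's size, shape, or impurities,

```latex
\sigma_{xy} = \nu \, \frac{e^{2}}{h},
```

where e is the electron charge, h is Planck's constant, and ν takes integer values (or, in the fractional quantum Hall effect, certain rational values). It is this insensitivity to material details that makes the effect useful as a resistance standard.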

Researchers have engineered quantum Hall behavior in other platforms besides the solid-state realm in which it was originally discovered. Signatures of such physics have been spotted in ultracold atomic gases and photonics, where light travels in fabricated chips. Hafezi and colleagues have led the charge in the photonics field.

The group uses a silicon-based chip that is filled with an array of ring-shaped structures called resonators. The resonators are connected to each other via waveguides (figure). The chip design strictly determines the conditions under which light can travel along the edges rather than through the inner regions. The researchers measure the transmission spectrum, which is the fraction of light that successfully passes through an edge pathway. To circulate unimpeded through the protected edge modes, the light must possess a certain energy. The transmission increases when the light energy meets this criterion. For other parameters, the light will permeate the chip interior or get lost, causing the transmission signal to decrease. The compiled transmission spectrum looks like a set of bright stripes separated by darker regions (see figure). Using such a chip, this group previously collected images of light traveling in edge states, definitively demonstrating the quantum Hall physics for photons.

In this new experiment Hafezi’s team modified their design to directly measure the value of the topology-related property that characterizes the photonic edge states. This measurement is analogous to characterizing the quantized conductance, which was critical to understanding the electron quantum Hall effect. In photonics, however, conductance is not relevant as it pertains to electron-like behavior. Here the significant feature is the winding number, which is related to how light circulates around the chip. Its value equals the number of available edge states and should not change in the face of certain disruptions.

To extract the winding number, the team adds 100-nanometer titanium heaters on a layer above the waveguides. Heat changes the index of refraction, which determines how the light bends as it passes through the waveguides. In this manner, researchers can controllably imprint a phase shift onto the light. Phase can be thought of in terms of a time delay. For instance, when comparing two light waves, the intensity can be the same, but one wave may be shifted in time compared to the other. The two waves overlap when one wave is delayed by a full oscillation cycle—this is called a 2π phase shift.
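The imprinted phase follows a standard relation (the specific lengths and index shifts here are illustrative, not values from the experiment): light of wavelength λ traversing a waveguide segment of length L whose refractive index has been shifted by Δn accumulates an extra phase

```latex
\Delta\varphi = \frac{2\pi \, \Delta n \, L}{\lambda},
```

so a modest heater-induced index change over a sufficiently long path is enough to sweep through the full 2π used in the measurement.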

On the chip, enough heat is applied to imprint a 2π phase shift on the light. The researchers observe an energy shift in the transmission stripes corresponding to light traveling along the edge. Notably, in this chip design, the light can circulate either clockwise (CW) or counterclockwise (CCW), and the two travel pathways do not behave the same (in contrast to an interferometer). When the phase shift is introduced, the CW traveling light hops one direction in the transmission spectrum, and the CCW goes the opposite way. The winding number is the amount by which these edge-state spectral features shift, and it is the photonic analog of the quantized jumps in the electronic conductance.

Sunil Mittal, lead author and postdoctoral researcher, explains one future direction: “So far, our research has been focused on transporting classical [non-quantum] properties of light--mainly the power transmission. It is intriguing to further investigate if this topological system can also achieve robust transport of quantum information, which will have potential applications for on-chip quantum information processing.” 

This text was written by E. Edwards/JQI

Read More

Scientists have created a crystal structure that boosts the interaction between tiny bursts of light and individual electrons, an advance that could be a significant step toward establishing quantum networks in the future.

Today’s networks use electronic circuits to store information and optical fibers to carry it, and quantum networks may benefit from a similar framework. Such networks would transmit qubits – quantum versions of ordinary bits – from place to place and would offer unbreakable security for the transmitted information. But researchers must first develop ways for qubits, which are better at storing information, to interact with individual packets of light called photons, which are better at transporting it. In conventional networks, this task is achieved by electro-optic modulators that use electronic signals to modulate properties of light.

Now, researchers in the group of Edo Waks, a fellow at JQI and an Associate Professor in the Department of Electrical and Computer Engineering at the University of Maryland, have struck upon an interface between photons and single electrons that makes progress toward such a device. By pinning a photon and an electron together in a small space, the electron can quickly change the quantum properties of the photon and vice versa. The research was reported online Feb. 8 in the journal Nature Nanotechnology.

“Our platform has two major advantages over previous work,” says Shuo Sun, a graduate student at JQI and the first author of the paper. “The first is that the electronic qubit is integrated on a chip, which makes the approach very scalable. The second is that the interactions between light and matter are fast. They happen in only a trillionth of a second – 1,000 times faster than previous studies.”


The new interface utilizes a well-studied structure known as a photonic crystal to guide and trap light. These crystals are built from microscopic assemblies of thin semiconductor layers and a grid of carefully drilled holes. By choosing the size and location of the holes, researchers can control the properties of the light traveling through the crystal, even creating a small cavity where photons can get trapped and bounce around.

”These photonic crystals can concentrate light in an extremely small volume, allowing devices to operate at the fundamental quantum limit where a single photon can make a big difference,” says Waks.

The results also rely on previous studies of how small, engineered nanocrystals called quantum dots can manipulate light. These tiny regions behave as artificial atoms and can also trap electrons in a tight space. Prior work from the JQI group showed that quantum dots could alter the properties of many photons and rapidly switch the direction of a beam of light.

The new experiment combines the light-trapping of photonic crystals with the electron-trapping of quantum dots. The group used a photonic crystal punctuated by holes just 72 nanometers wide, but left three holes undrilled in one region of the crystal. This created a defect in the regular grid of holes that acted like a cavity, and only photons with a certain energy could enter and leave.

Inside this cavity, embedded in layers of semiconductors, a quantum dot held one electron. The spin of that electron – a quantum property of the particle that is analogous to the motion of a spinning top – controlled what happened to photons injected into the cavity by a laser. If the spin pointed up, a photon entered the cavity and left it unchanged. But when the spin pointed down, any photon that entered the cavity came out with a reversed polarization – the direction that light’s electric field points. The interaction worked the opposite way, too: A single photon prepared with a certain polarization could flip the electron’s spin.
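The conditional flip described above acts like a textbook controlled gate. The toy model below is an idealized sketch (not the device physics) that writes the switch as a CNOT, with the quantum-dot spin as the control and the photon polarization as the target:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # polarization flip (H <-> V)

# Projectors onto the spin-up and spin-down states
P_up = np.diag([1, 0])
P_dn = np.diag([0, 1])

# Idealized spin-photon switch: the photon polarization is flipped
# only when the quantum-dot spin points down.
# Basis order: |up,H>, |up,V>, |down,H>, |down,V>
U = np.kron(P_up, I2) + np.kron(P_dn, X)

down_H = np.eye(4)[2]
print(U @ down_H)  # |down,H> becomes |down,V>, i.e. [0. 0. 0. 1.]
```

In this idealized form, the same unitary also captures the reverse process: a photon in a polarization superposition entering the cavity can flip the spin, which is what makes the interface a two-way quantum switch.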

Both processes are examples of quantum switches, which modify the qubits stored by the electron and photon in a controlled way. Such switches will be the coin of the realm for proposed future quantum computers and quantum networks.


Those networks could take advantage of the strengths that photons and electrons offer as qubits. In the future, for instance, electrons could be used to store and process quantum information at one location, while photons could shuttle that information between different parts of the network.

Such links could enable the distribution of entanglement, the enigmatic connection that groups of distantly separated qubits can share. And that entanglement could enable other tasks, such as performing distributed quantum computations, teleporting qubits over great distances or establishing secret keys that two parties could use to communicate securely.

Before that, though, Sun says that the light-matter interface he and his colleagues have created must be shown to generate entanglement between the electron and photon qubits, a step that will require more accurate measurements to demonstrate definitively.

“The ultimate goal will be integrating photon creation and routing onto the chip itself,” Sun says. “In that manner we might be able to create more complicated quantum devices and quantum circuits.”

In addition to Waks and Sun, the paper has two additional co-authors: Glenn Solomon, a JQI fellow, and Hyochul Kim, a post-doctoral researcher in the Department of Electrical and Computer Engineering at the University of Maryland.

"Creating a quantum switch" credit: S. Kelley/JQI

Read More

Harnessing quantum systems for information processing will require controlling large numbers of basic building blocks called qubits. The qubits must be isolated, and in most cases cooled such that, among other things, errors in qubit operations do not overwhelm the system, rendering it useless. Led by JQI Fellow Christopher Monroe, physicists have recently demonstrated important steps towards implementing a proposed type of gate, which does not rely on super-cooling their ion qubits. This work, published as an Editor’s Suggestion in Physical Review Letters, implements ultrafast sensing and control of an ion's motion, which is required to realize these hot gates. Notably, this experiment demonstrates thermometry over an unprecedented range of temperatures--from zero-point to room temperature.

Graduate student and first author Kale Johnson explains how this research could be applied, “Atomic clock states found in ions make the most pristine quantum bits, but the speed at which we have been able to access them in a useful way for quantum information processing is slower than it could be. We are changing that by making each operation on the qubit faster while eliminating the need to cool the ion to the ground state after each operation.”

In the experiment the team begins with a single trapped atomic ion. The ion can be thought of as a bar magnet that can be oriented with its north pole ‘up’ or ‘down’ or any combination between the two poles (pointing horizontally along an imaginary equator is up + down).  Physicists can use lasers and microwave radiation to control this orientation. The individual laser pulses are a mere ten picoseconds in length—a time scale that is a tiny fraction of how long it takes for the ion to undergo appreciable motion in the trap. Operating in this regime is precisely what allows researchers to have superior sensing and ultimately control over the ion motion. The speed enables the team to extract the motional behavior of an ion using a technique that works independently of the energy in the motion itself.  In other words, the measurement is equally sensitive to a fast or very slow atom.

The researchers use a method that is based on Ramsey interferometry, named for the Nobel Laureate Norman Ramsey who pioneered it back in 1949. Known then as his “method of separated oscillatory fields,” it is used throughout atomic physics and quantum information science.   

Laser pulses are carefully divided and then reunited to achieve control over the ion’s spin and motion. The researchers call these laser-ion interactions ‘spin-dependent kicks’ (SDK) because each series of specially tailored laser pulses flips the spin, while simultaneously giving the ion a push (this is depicted in the illustration below). With each fast kick, the atom’s quantum wave packet is split into two parts in under three nanoseconds. Those halves are then re-combined at different points in space and time, and the signal from the unique overlap pattern reveals how the population is distributed between the two spin states. In this experimental sequence, that distribution depends on parameters such as the number of SDKs, the time between kicks, and the initial position and speed of the ion. The team repeats this experiment to extract the average motion of the ion, or its effective temperature.
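The logic of a Ramsey measurement can be captured in a few lines. The sketch below is an idealized model with perfect π/2 pulses, not the actual spin-dependent-kick sequence; it computes the spin-up probability as a function of the phase accumulated between the two pulses:

```python
import numpy as np

def ramsey_population(phase):
    """Spin-up probability after an ideal Ramsey sequence:
    pi/2 pulse -> free evolution accumulating `phase` -> pi/2 pulse."""
    return 0.5 * (1.0 + np.cos(phase))

# The ideal fringe has full contrast: the population oscillates
# between 1 (phase 0) and 0 (phase pi) as the phase is scanned.
phases = np.linspace(0.0, 2.0 * np.pi, 9)
print(np.round(ramsey_population(phases), 3))
```

In the experiment, the accumulated phase depends on the ion's motion, so the shape of the measured fringe encodes the motional state; a thermal distribution of motion washes out the contrast in a temperature-dependent way.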


In order to realize proposed two-ion quantum gates that do not require cooling the system into its quantum mechanical ground state, multiple spin dependent kicks must be employed with high accuracy such that errors remain manageable. Here the team was able to clearly demonstrate the necessary high-quality spin dependent kicks. More broadly, this protocol shows that adding ultrafast pulsed laser technology to the ion-trapping toolbox gives physicists ultimate quantum control over what can be a limiting, noise-inducing parameter: the motion.

Read More

The concept of temperature is critical in describing many physical phenomena, such as the transition from one phase of matter to another.  Turn the temperature knob and interesting things can happen.  But other knobs might be just as important for studying some phenomena.  One such knob is chemical potential, a thermodynamic parameter first introduced in the nineteenth century by scientists for keeping track of potential energy absorbed or emitted by a system during chemical reactions.

In these reactions, different atomic species rearranged themselves into new configurations while conserving the overall inventory of atoms. That is, atoms could change their partners, but the total number and identity of the atoms remained invariant. 

Chemical potential fits into a broader pattern of how flows arise. An imbalance in temperature results in a flow of energy. An imbalance in electrical potential results in a flow of charged particles. And an imbalance in chemical potential results in a flow of particles; in particular, an imbalance in chemical potential for light would result in a flow of photons.
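In thermodynamic terms, the chemical potential is the energy cost of adding one more particle to a system while holding entropy and volume fixed:

```latex
\mu = \left( \frac{\partial U}{\partial N} \right)_{S,V}
```

Just as heat flows from high temperature to low, particles flow from regions of high μ to regions of low μ until the two equalize.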

Can the concept of chemical potential apply to light?  At first the answer would seem to be no, since particles of light, photons, are regularly absorbed when they interact with regular matter.  The number of photons present is not preserved.  But recent experiments have shown that under special conditions photon number can be conserved, clearing the way for the use of chemical potential for light. 

Now three JQI scientists offer a more generalized theoretical description of chemical potential (usually denoted by the Greek letter mu) for light and show how mu can be controlled and applied in a number of physics research areas.

A prominent experimental demonstration of chemical potential for light took place at the University of Bonn (*) in 2010.  It consisted of quanta of light (photons) bouncing back and forth inside a reflective cavity filled with dye molecules.  The dye molecules, acting as a tunable energy bath (a parametric bath), would regularly absorb photons (seemingly ruling out the idea of photon number being conserved) but would re-emit the light.  Gradually the light warmed the molecules and the molecules cooled the light until they were all at thermal equilibrium.  This was the first time photons had been successfully “thermalized” in this way.  Furthermore, at still colder temperatures the photons collapsed into a single quantum state; this was the first photonic Bose-Einstein condensate (BEC).

In a paper published in the journal Physical Review B the JQI theorists describe a generic approach to chemical potential for light. They illustrate their ideas by showing how a chemical-potential protocol can be implemented in a microcircuit array. Instead of crisscrossing a single cavity, the photons are set loose in an array of microwave transmission lines. And instead of interacting with a bath of dye molecules, the photons here interact with a network of tuned circuits.

“One likely benefit in using chemical potential as a controllable parameter will be carrying out quantum simulations of actual condensed-matter systems,” said Jacob Taylor, one of the JQI theorists taking part in the new study.  In what some call a prototype for future full-scale quantum computing, quantum simulations use tuned interactions in a small microcircuit setup to arrive at a numerical solution to calculations that (in their complexity) would defeat a normal digital computer.

In the scheme described above, for instance, the photons, carefully put in a superposition of spin states, could serve as qubits. The qubits can be programmed to perform special simulations. The circuits, including the transmission lines, act as the coupling mechanism whereby photons can be up- or down-converted to higher or lower energy by obtaining energy from, or giving energy to, excitations of the circuits.

(*) J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, Nature 468, 545 (2010)

Read More

The quantum Hall effect, discovered in the early 1980s, is a phenomenon that was observed in a two-dimensional gas of electrons existing at the interface between two semiconductor layers. Subject to the severe criteria of very high material purity and very low temperatures, the electrons, when under the influence of a large magnetic field, will organize themselves into an ensemble state featuring remarkable properties.

Many physicists believe that quantum Hall physics is not unique to electrons, and thus it should be possible to observe this behavior elsewhere, such as in a collection of trapped ultracold atoms. Experiments at JQI and elsewhere are being planned to do just that. On the theoretical front, scientists* at JQI and the University of Maryland have also made progress, which they describe in the journal Physical Review Letters. The result, to be summarized here, proposes using quantum matter made from a neutral atomic gas, instead of electrons. In this new design, elusive exotic states that are predicted to occur in certain quantum Hall systems should emerge. These states, known as parafermionic zero modes, may be useful in building robust quantum gates.

For electrons, the hallmark of quantum Hall behavior includes a free circulation of electrical charge around the edge but not in the interior of the sample. This research specifically relates to utilizing the fractional quantum Hall (FQH) effect, which is a many-body phenomenon. In this case, one should not consider just the movement of individual electrons, but rather imagine the collective action of all the electrons into particle-like “quasiparticles.” These entities appear to possess fractional charge, such as 1/3.

How does this relate to zero modes? Zero modes, as an attribute of quantum Hall systems, come into their own in the vicinity of specially tailored defects. Defects are where quasiparticles can be trapped. In previous works, physicists proposed that a superconducting nanowire serve as a defect that snags quasiparticles at either end of the wire. Perhaps the best-known example of a composite particle associated with zero-mode defects is the famed Majorana fermion.

Author David Clarke, a Postdoctoral Research Scholar from the UMD Condensed Matter Theory Center, explains, “Zero modes aren’t particles in the usual sense. They’re not even quasiparticles, but rather a place that a quasiparticle can go and not cost any energy.”

Aside from interest in them for studying fundamental physics, these zero modes might play an important role in quantum computing. This is related to what’s known as topology, which is a sort of global property that can allow for collective phenomena, such as the current of charge around the edge of the sample, to be impervious to the tiny microscopic details of a system. Here the topology endows the FQH system with multiple quantum states with exactly the same energy. The exactness and imperturbability of the energy amid imperfections in the environment makes the FQH system potentially useful for hosting quantum bits. The present report proposes a practical way to harness this predicted topological feature of the FQH system through the appearance of what are known as parafermionic zero-modes.

These strange and wonderful states, which in some ways go beyond Majoranas, first appeared on the scene only a few years ago, and have attracted significant attention. Now dubbed ‘parafermions,’ they were first proposed by Maissam Barkeshli and colleagues at Stanford University. Barkeshli is currently a postdoctoral researcher at Microsoft Station Q and will be coming soon to JQI as a Fellow. Author David Clarke was one of the early pioneers in studying how these states could emerge in a superconducting environment. Because both parafermions and Majoranas are expected to have unconventional behaviors when compared to the typical particles used as qubits, unambiguously observing and controlling them is an important research topic that spans different physics disciplines. From an application standpoint, parafermions are predicted to offer more versatility than Majorana modes when constructing quantum computing resources.

What this team does, for the first time, is to describe in detail how a parafermionic mode could be produced in a gas of cold bosonic atoms. Here the parafermion would appear at both ends of a one-dimensional trench of Bose-Einstein Condensate (BEC) atoms sitting amid a larger two-dimensional formation of cold atoms displaying FQH properties. According to first author and Postdoctoral Researcher Mohammad Maghrebi, “The BEC trench is the defect that does for atoms what the superconducting nanowire did for electrons.”

Some things are different for electrons and neutral atoms. For one thing, electrons undergo the FQH effect only if exposed to high magnetic fields. Neutral atoms have no charge and thus do not react strongly to magnetic fields; researchers must mimic this behavior by exposing the atoms to carefully designed laser pulses, which create a synthetic field environment. JQI Fellow Ian Spielman has led this area of experimental research and is currently performing atom-based studies of quantum Hall physics.

Another author of the PRL piece, JQI Fellow Alexey Gorshkov, explains how the new research paper came about: “Motivated by recent advances in Spielman's lab and (more recently) in other cold atom labs in generating synthetic fields for ultracold neutral atoms, we show how to implement in a cold-atom system the same exotic parafermionic zero modes proposed originally in the context of condensed-matter systems.”

“We argue that these zero modes, while arguably quite difficult to observe in the condensed matter context, can be observed quite naturally in atomic experiments,” says Maghrebi. “The JQI atmosphere of close collaboration and cross-fertilization between atomic physics and condensed matter physics on the one hand and between theory and experiment on the other hand was at the heart of this work.”

“Ultracold atoms play by a slightly different set of rules from the solid state,” says JQI Fellow Jay Sau. “Things which come easy in one are hard in the other. Figuring out the twists in getting a solid-state idea to work for cold atoms is always fun, and the JQI is one of the best places to do it.”

(*)  The PRL paper has five authors, and their affiliations illustrate the complexity of modern physics work.  Mohammad Maghrebi, Sriram Ganeshan, Alexey Gorshkov, and Jay Sau are associated with the Joint Quantum Institute, operated by the University of Maryland and the National Institute of Standards and Technology.  Three of the authors---Ganeshan, Clarke, and Sau---are also associated with the Condensed Matter Theory Center at the University of Maryland physics department.  Finally, Maghrebi and Gorshkov are associated with the Joint Center for Quantum Information and Computer Science (usually abbreviated QuICS), which is, like the JQI, a University of Maryland-NIST joint venture.

Read More

Symmetry permeates nature, from the radial symmetry of flowers to the left-right symmetry of the human body. As such, it provides a natural way of classifying objects by grouping those that share the same symmetry. This is particularly useful for describing transitions between phases of matter. For example, liquid and gas phases have translational symmetry, meaning that, on average, the arrangement of molecules looks the same at every location. In a solid, by contrast, the density of atoms varies periodically from point to point — thus translational symmetry is broken.

In quantum mechanics, symmetry describes more than just the patterns that matter takes — it is used to classify the nature of quantum states. These states can be entangled, exhibiting peculiar connections that cannot be explained without the use of quantum physics. For some entangled states, the symmetry of these connections can offer a kind of protection against disruptions.

Here, the word protection indicates that the system is robust against perturbations that do not break the symmetry. Like an island in the middle of an ocean, there is no direct road leading to a symmetry-protected phase or state. This means that the only way to access the state is to change the symmetry itself. Physicists are interested in exploring these classes of protected states because building a useful quantum device requires its building blocks to be robust against outside disturbances that may interfere with device operations.

Recently, JQI researchers under the direction of Christopher Monroe have used trapped atomic ions to construct a system that could potentially support a type of symmetry-protected quantum state. For this research they used a three-state system, called a qutrit, and demonstrated a proof-of-principle experiment for manipulating and controlling multiple qutrits. The result appeared in Physical Review X, an online open-access journal, and is the first demonstration of using multiple interacting qutrits for doing quantum information operations and quantum simulation of the behavior of real materials.

To date, almost all of the work in quantum information science has focused on manipulating "qubits," or so-called spin-1/2 particles that consist of just two energy levels. In quantum mechanics, multilevel systems are analogous to the concept of "spin," where the number of energy levels corresponds to the number of possible states of spin. This group has used ion spins to explore a variety of topics, such as the physics of quantum magnetism and the transmission speed of quantum information across a spin-crystal. Increasingly, there is interest in moving beyond spin-1/2 to the control and simulation of higher-order spin systems, where the laws of symmetry can be radically altered. “One complication of spin-1 materials is that the added complexity of the levels often makes these systems much more difficult to model or understand. Thus, performing experiments in these higher [spin] dimensional systems may yield insight into difficult-to-calculate problems, and also give theorists some guidance on modeling such systems,” explains Jake Smith, a graduate student in Monroe’s lab and author on the paper.

To engineer a spin-1 system, the researchers electromagnetically trapped a linear crystal of atomic ytterbium (Yb) ions, each atom a few micrometers from the next. Using a magnetic field, internal states of each ion are tailored to represent a qutrit, with a (+) state, (-) state and (0) state denoting the three available energy levels (see figure). With two ions, the team demonstrated the basic techniques necessary for quantum simulation: preparing initial states (placing the ions in certain internal states), observing the state of the system after some evolution, and verifying that the ions are entangled, here with 86% fidelity (fidelity is a measure of how much the experimentally realized state matches the theoretical target state).
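The entanglement fidelity quoted above is the overlap between the experimentally realized state and the theoretical target. A minimal numpy sketch of that calculation, assuming an illustrative maximally entangled two-qutrit target and a simple depolarizing-noise caricature (neither is the experiment's actual state or noise model):

```python
import numpy as np

# Illustrative target: (|++> + |00> + |-->) / sqrt(3), using the
# article's (+, 0, -) qutrit labels mapped to basis indices 0, 1, 2.
d = 3                                  # qutrit dimension
psi = np.zeros(d * d)
for k in range(d):                     # |kk> components
    psi[k * d + k] = 1 / np.sqrt(d)
target = np.outer(psi, psi)            # pure-state density matrix

# Model an imperfect experimental state: the target mixed with a fully
# depolarized background (an assumed noise caricature, not the real one)
p = 0.84                               # assumed preparation quality
rho = p * target + (1 - p) * np.eye(d * d) / (d * d)

# Fidelity of a mixed state with a pure target: F = <psi| rho |psi>
fidelity = float(psi @ rho @ psi)
print(f"fidelity = {fidelity:.3f}")    # close to the 86% quoted above
```

The depolarizing mixture is just a convenient stand-in; any density matrix reconstructed from measurements could be plugged in for `rho`.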

To prepare the system in certain initial states, the team first places the system in its ground state, the lowest energy state in the presence of a large effective magnetic field. The different available spin chain configurations at a particular magnetic field value correspond to different energies. They observed how the spin chain reacted, or evolved, as the amplitude of the magnetic field was lowered. Changing the fields that the ion spins are exposed to causes the spins to readjust in order to remain in the lowest energy configuration.

By adjusting the parameters (here laser amplitudes and frequencies) the team can open up and follow pathways between different energy levels. For most target states this works, but for some, no simple trajectory exists that avoids breaking a symmetry or passing through a phase transition. For instance, when the team added a third ion, they could not smoothly guide the system into its ground state, indicating the possible existence of a state with some additional symmetry protections.

“This result is a step towards investigating quantum phases that have special properties based on the symmetries of the system,” says Smith. Employing these sorts of topological phases may be a way to improve coherence times when doing quantum computation, even in the face of environmental disruptions. Coherence time is how long a state retains its quantum nature. Quantum systems are very sensitive to outside disturbances, and doing useful computation requires maintaining this quantum nature for longer than the time it takes to perform a particular calculation.

Monroe explains, "These symmetry-protected states may be the only way to build a large-scale stable quantum computer in many physical systems, especially in the solid-state.  With the exquisite control afforded by atomic systems such as trapped ions demonstrated here, we hope to study and control how these very subtle symmetry effects might be used for quantum computing, and help guide their implementation in any platform."

To further investigate this protected phase, the researchers next intend to address the problem of creating antisymmetric ground states. Smith continues, “The next steps are to engineer more complicated interactions between the effective spins and implement a way to break the symmetries of the interactions.”

Read More

Rydberg atoms, atoms whose outermost electrons are highly excited but not ionized, might be just the thing for processing quantum information.  These outsized atoms can be sustained for a long time in a quantum superposition condition---a good thing for creating qubits---and they can interact strongly with other such atoms, making them useful for devising the kind of logic gates needed to process information.   Scientists at JQI and at other labs are pursuing this promising research area.

One problem with Rydberg atoms is that they are often difficult to handle.  One approach is to search for special wavelengths---“magic wavelengths”---at which atoms can be trapped and excited into Rydberg states without disturbing them. A new JQI experiment bears out high-precision calculations predicting the existence of specific magic wavelengths.


Named for Swedish physicist Johannes Rydberg, these ballooned-up atoms are made by exciting the outermost electron in certain elements.  Alkali atoms are handy for this purpose since they are hydrogen-like.  That is, all the inner electrons can be lumped together and regarded, along with the atom’s nucleus, as a unified core, with the lone remaining electron lying outside; it’s as if the atom were a heavy version of hydrogen.

The main energy levels of atoms are labeled by their principal quantum number, denoted by the letter n.  For rubidium atoms, the species used in this experiment, the outermost electron starts in an n=5 state.  Laser light was then used to promote the electron into an n=18 state.  Unlike atoms in their ground state, atoms in the n=18 excited state see each other out to distances as large as 700 nm.  Rydberg atoms with higher values of n can interact at even larger separations, up to many microns.  For comparison, the size of an un-excited rubidium atom is less than 1 nm.
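The ballooning of the orbit with n can be estimated with the familiar hydrogen-like scaling, radius ~ a0 · n². The sketch below deliberately ignores rubidium's quantum defects, so the numbers are rough; note also that the orbital radius is far smaller than the 700 nm interaction range quoted above, which comes from the strength of the interaction rather than orbital overlap.

```python
# Crude hydrogen-like estimate of Rydberg orbital size, <r> ~ a0 * n^2.
# Real alkali atoms need a quantum-defect correction (n -> n - delta),
# which this sketch deliberately ignores.
a0_nm = 0.0529          # Bohr radius in nanometers

def orbital_radius_nm(n: int) -> float:
    """Rough characteristic radius of a hydrogen-like state."""
    return a0_nm * n**2

for n in (5, 18, 50):
    print(f"n = {n:2d}: ~{orbital_radius_nm(n):6.1f} nm")
```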

Promoting the atom directly to the 18s state would require an ultraviolet laser, so the researchers decided it was more practical to boost the outer electron to its higher perch in two steps, using two more convenient lasers whose photon energies add up to the total energy difference.


Rb atoms are in the trap in the first place because they have been gathered into a cloud, cooled to temperatures only a few millionths of a degree above absolute zero, and then maintained in position by a special trapping laser beam system.

The trapping process exploits the Stark effect, a phenomenon in which the strong electric field of the confining laser beam alters the energy levels of the atom.  By using a sort of hourglass-shaped beam, the light forms a potential-energy well in which atoms will be trapped.  The atoms will congregate in a tidy bundle in the middle of this optical dipole trap.  The trouble is that the Stark effect, and along with it the trapping influence of the laser beams, depends on the value of n.  In other words, a laser beam good for trapping atoms at one n might not work for other values of n.

Fortunately, at just the right wavelengths, the “magic wavelengths,” the trapping process will confine atoms in both the low-lying n=5 state and in the excited n=18 state.  The theoretical calculations predicting where these wavelengths would be (with a particularly useful one around a value of 1064 nm) and the experimental findings bearing out the predictions were published recently in the journal Physical Review A.

The first author on the paper is NRC Postdoctoral Fellow Elizabeth Goldschmidt.  “We made a compromise, using atoms in a relatively low-n Rydberg state, the 18s state.  We work in this regime because we are interested in interaction lengths commensurate with our optical lattice and because the particular magic wavelength is at a convenient wavelength for our lasers, namely 1064 nm.”  She said that a next round of experiments, in the lab run by Trey Porto and Steve Rolston, will aim for a higher Rydberg level with n greater than 50.

JQI Adjunct Fellow and University of Delaware professor Marianna Safronova helped to produce the magic wavelength predictions.  “To make a prediction,” said Safronova, “you need to know the polarizability---the amount by which the Stark effect will shift the energy level---for the highly-excited n=18 level.  The job for finding magic wavelengths beyond n=18 with our high-precision first-principles approach would be pretty hard.  Agreement of theoretical prediction with experimental measurement gives a great benchmark for high-precision theory.”
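The condition Safronova describes is that the two states' polarizabilities coincide: a wavelength is "magic" where alpha_ground(lambda) = alpha_rydberg(lambda). The sketch below uses invented toy curves purely to illustrate how such a crossing could be located numerically; real predictions require first-principles atomic-structure calculations like those in the paper.

```python
# Toy polarizability curves (invented for illustration only; units and
# shapes are NOT physical) chosen so a crossing falls near 1064 nm.
def alpha_ground(lam_nm: float) -> float:
    return 700.0 / (lam_nm - 790.0)

def alpha_rydberg(lam_nm: float) -> float:
    return 0.5 + 0.002 * lam_nm

def magic_wavelength(lo: float, hi: float, tol: float = 1e-6) -> float:
    """Bisect for the wavelength where the two curves cross."""
    f = lambda lam: alpha_ground(lam) - alpha_rydberg(lam)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:      # sign change in [lo, mid]
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

magic = magic_wavelength(1000.0, 1100.0)
print(f"toy magic wavelength ~ {magic:.0f} nm")
```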

“The most important feature of our paper,” said JQI Fellow and NIST physicist Trey Porto, “is that the theorists have pushed the theoretical limits of calculations of magic wavelengths for highly excited Rydberg atoms, and then verified these calculations experimentally.”

Read More

If you’re designing a new computer, you want it to solve problems as fast as possible. Just how fast is possible is an open question when it comes to quantum computers, but JQI physicists have narrowed the theoretical limits for where that “speed limit” is. The work implies that quantum processors will work more slowly than some research has suggested. 

The work offers a better description of how quickly information can travel within a system built of quantum particles such as a group of individual atoms. Engineers will need to know this to build quantum computers, which will have vastly different designs and be able to solve certain problems much more easily than the computers of today. While the new finding does not give an exact speed for how fast information will be able to travel in these as-yet-unbuilt computers—a longstanding question—it does place a far tighter constraint on where this speed limit could be.

Quantum computers will store data in a particle’s quantum states—one of which is its spin, the property that confers magnetism. A quantum processor could suspend many particles in space in close proximity, and computing would involve moving data from particle to particle. Just as one magnet affects another, the spin of one particle influences its neighbor’s, making quantum data transfer possible, but a big question is just how fast this influence can work.

The team’s findings advance a line of research that stretches back to the 1970s, when scientists discovered a limit on how quickly information could travel if a suspended particle only could communicate directly with its next-door neighbors. Since then, technology advanced to the point where scientists could investigate whether a particle might directly influence others that are more distant, a potential advantage. By 2005, theoretical studies incorporating this idea had increased the speed limit dramatically.

“Those results implied a quantum computer might be able to operate really fast, much faster than anyone had thought possible,” says postdoctoral researcher and lead author Michael Foss-Feig. “But over the next decade, no one saw any evidence that the information could actually travel that quickly.”

Physicists exploring this aspect of the quantum world often line up several particles and watch how fast changing the spin of the first particle affects the one farthest down the line—a bit like standing up a row of dominoes and knocking the first one down to see how long the chain reaction takes. The team looked at years of others’ research and, because the dominoes never seemed to fall as fast as the 2005 prediction suggested, they developed a new mathematical proof that reveals a much tighter limit on how fast quantum information can propagate.

“The tighter a constraint we have, the better, because it means we’ll have more realistic expectations of what quantum computers can do,” says Foss-Feig.

The limit, their proof indicates, is far closer to the speed limits suggested by the 1970s result. The proof addresses the rate at which entanglement propagates across quantum systems. Entanglement—the weird linkage of quantum information between two distant particles—is important, because the more quickly particles grow entangled with one another, the faster they can share data. The 2005 results indicated that even if the interaction strength decays quickly with distance, as a system grows, the time needed for entanglement to propagate through it grows only logarithmically with its size, implying that a system could get entangled very quickly. The team’s work, however, shows that propagation time grows as a power of its size, meaning that while quantum computers may be able to solve problems that ordinary computers find devilishly complex, their processors will not be speed demons.
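The gap between the two bounds can be made concrete with a toy comparison of the scalings described above: a logarithmic propagation time versus a power law. The exponent and constants below are arbitrary illustration choices, not values derived in the papers.

```python
import math

# Entanglement-propagation time vs. system size N under the two
# scalings: 2005-style bound t ~ log(N), tighter bound t ~ N**alpha.
alpha = 0.5          # illustrative exponent, not from the proof

for N in (10, 100, 1_000, 10_000):
    t_log = math.log(N)
    t_power = N**alpha
    print(f"N = {N:6d}:  log bound ~ {t_log:5.1f},  power law ~ {t_power:7.1f}")
```

However the constants are chosen, any power law eventually dwarfs the logarithm, which is why the tighter bound rules out the very fast operation the 2005 result seemed to allow.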

“On the other hand, the findings tell us something important about how entanglement works,” says Foss-Feig. “They could help us understand how to model quantum systems more efficiently.”

This was originally written by C. Boutin for NIST TechBeat, with modifications for JQI made by E. Edwards.

Read More

In quantum mechanics, interactions between particles can give rise to entanglement, which is a strange type of connection that could never be described by a non-quantum, classical theory. These connections, called quantum correlations, are present in entangled systems even if the objects are not physically linked (with wires, for example). Entanglement is at the heart of what distinguishes purely quantum systems from classical ones; it is why they are potentially useful, but it sometimes makes them very difficult to understand.

Physicists are pretty adept at controlling quantum systems and even making certain entangled states. Now JQI researchers*, led by theorist Alexey Gorshkov and experimentalist Christopher Monroe, are putting these skills to work to explore the dynamics of correlated quantum systems. What does it mean for objects to interact locally versus globally? How do local and global interactions translate into larger, increasingly connected networks? How fast can certain entanglement patterns form? These are the kinds of questions that the Monroe and Gorshkov teams are asking. Their recent results investigating how information flows through a quantum many-body system are published this week in the journal Nature (10.1038/nature13450), and in a second paper to appear in Physical Review Letters.

Researchers can engineer a rich selection of interactions in ultracold atom experiments, allowing them to explore the behavior of complex and massively intertwined quantum systems. In the experimental work from Monroe’s group, physicists examined how quickly quantum connections formed in a crystal of eleven ytterbium ions confined in an electromagnetic trap. The researchers used laser beams to implement interactions between the ions. Under these conditions, the system is described by certain types of ‘spin’ models, which are a vital mathematical representation of numerous physical phenomena including magnetism. Here, each atomic ion has isolated internal energy levels that represent the various states of spin.
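Spin models of this kind are commonly written with Ising couplings that fall off as a power law in the ion separation, with a tunable exponent. The sketch below builds such a coupling matrix; the power-law form and parameter values are standard textbook assumptions for illustration, not the paper's exact Hamiltonian.

```python
import numpy as np

def coupling_matrix(n_ions: int, alpha: float, J0: float = 1.0) -> np.ndarray:
    """Ising couplings J_ij = J0 / |i - j|**alpha (zero on the diagonal).

    Large alpha approaches nearest-neighbor behavior; small alpha
    gives the long-range interactions discussed in the text.
    """
    J = np.zeros((n_ions, n_ions))
    for i in range(n_ions):
        for j in range(n_ions):
            if i != j:
                J[i, j] = J0 / abs(i - j)**alpha
    return J

J = coupling_matrix(11, alpha=1.0)     # 11 ions, as in the experiment
print(J[0, 1], J[0, 10])               # nearest vs. farthest coupling
```

Tuning `alpha` in such a model is the theorist's handle on the near-versus-far interaction question the experiments probe.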

In the presence of carefully chosen laser beams the ion spins can influence their neighbors, both near and far. In fact, tuning the strength and form of this spin-spin interaction is a key feature of the design. In Monroe's lab, physicists can study different types of correlated states within a single pristine quantum environment (Click here to learn about how this is possible with a crystal of atomic ions).

To see dynamics the researchers initially prepared the ion spin system in an uncorrelated state. Next, they abruptly turned on a global spin-spin interaction. The system is effectively pushed off-balance by such a fast change and the spins react, evolving under the new conditions. The team took snapshots of the ion spins at different times and observed the speed at which quantum correlations grew.

The spin models themselves do not have an explicitly built-in limit on how fast such information can propagate. The ultimate limit, in both classical and quantum systems, is given by the speed of light. However, decades ago, physicists showed that a slower information speed limit emerges due to some types of spin-spin interactions, similar to sound propagation in mechanical systems. While the limits are better known in the case where spins predominantly influence their closest neighbors, calculating constraints on information propagation in the presence of more extended interactions remains challenging. Intuitively, the more an object interacts with other distant objects, the faster the correlations between distant regions of a network should form. Indeed, the experimental group observes that long-range interactions provide a comparative speed-up for sending information across the ion-spin crystal. In the paper appearing in Physical Review Letters, Gorshkov’s team improves existing theory to much more accurately predict the speed limits for correlation formation, in the presence of interactions ranging from nearest-neighbor to long-range.

Verifying and forming a complete understanding of quantum information propagation is certainly not the end of the story; this also has many profound implications for our understanding of quantum systems more generally. For example, the growth of entanglement, which is a form of information that must obey the bounds described above, is intimately related to the difficulty of modeling quantum systems on a computer. Dr. Michael Foss-Feig explains, “From a theorist’s perspective, the experiments are cool because if you want to do something with a quantum simulator that actually pushes beyond what calculations can tell you, doing dynamics with long-range interacting systems is expected to be a pretty good way to do that. In this case, entanglement can grow to a point that our methods for calculating things about a many-body system break down.”

Theorist Dr. Zhexuan Gong states that in the context of both works, “We are trying to put bounds on how fast correlation and entanglement can form in a generic many-body system. These bounds are very useful because with long-range interactions, our mathematical tools and state-of-the-art computers can hardly succeed at predicting the properties of the system. We would then need to either use these theoretical bounds or a laboratory quantum simulator to tell us what interesting properties a large and complicated network of spins possesses. These bounds will also serve as a guideline on what interaction pattern one should achieve experimentally to greatly speed up information propagation and entanglement generation, both key for building a fast quantum computer or a fast quantum network.”

From the experimental side, Dr. Phil Richerme gives his perspective, “We are trying to build the world’s best experimental platform for evolving the Schrodinger equation [math that describes how properties of a quantum system change in time]. We have this ability to set up the system in a known state and turn the crank and let it evolve and then make measurements at the end. For system sizes not much larger than what we have here, doing this becomes impossible for a conventional computer.” 

This news item was written by E. Edwards/JQI. 

Read More
  • How do you build a large-scale quantum computer?
  • Physicists propose a modular quantum computer architecture that offers scalability to large numbers of qubits
  • February 25, 2014 Quantum Computing and Information Science, PFC

How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria [See Gallery Sidebar for information on these criteria]. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.

To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum computing device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control becomes much more difficult as more and more qubits are added into the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.  

In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.
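One way to see the scaling pressure on a single register: the number of distinct qubit pairs that must be reliably connected grows quadratically, N(N-1)/2, so a register only a few times larger needs many times more connections.

```python
# Pairwise connections in a single register of N qubits: N * (N - 1) / 2.
# This quadratic growth is one reason a monolithic register becomes
# increasingly hard to control as it scales.
def n_pairs(n_qubits: int) -> int:
    return n_qubits * (n_qubits - 1) // 2

for n in (20, 100, 1_000):
    print(f"{n:5d} qubits -> {n_pairs(n):7d} pairwise connections")
```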

Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A (reference below), a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.

One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected. The switch system, which incorporates special micro-electro-mechanical (MEMS) mirrors to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.

The defining feature of this new architecture is that it is modular, meaning that several identical modules composed of smaller registers are connected in a way that is inherently scalable.  Modularity is a common property of complex systems, from social networks to biological function, and will likely be a necessary component of any future large-scale quantum computer. Monroe explains, "This is the only way to imagine scaling to larger quantum systems, by building them in smaller standard units and hooking them together. In this case, we know how to engineer every aspect of the architecture."

In conventional computers, modularity is routinely exploited to realize the massive interconnects required in semiconductor devices, which themselves have been successfully miniaturized and integrated with other electronics and photonics. The first programmable computers were the size of large rooms and used vacuum tubes, and now people have an incredible computer literally at their fingertips. Today’s processors have billions of semiconductor transistors fabricated on chips that are only about a centimeter across.

Similar fabrication techniques are now used to construct computer chip-style ion-traps, sometimes with integrated optics. The modular quantum architecture proposed in this research would not only allow many ion-trap chips to be tied together, but could also be exploited with alternative qubit modules that couple easily to photons such as qubits made from nitrogen vacancy centers in diamond or ultracold atomic gases (the neutral cousin of ion-traps). 

For more on ion traps and other research related to this work, see the related links below.

This article was written by E. Edwards/JQI

Read More

Encoding information using quantum bits—which can be maintained in a superposition of states—is at the heart of quantum computing. Superposition states offer the advantage of massive parallelism compared to conventional computing using digital bits---which can assume only one value at a time.  
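The state-space growth behind that parallelism is easy to quantify: describing n qubits in superposition takes 2**n complex amplitudes, versus a single n-bit value for a classical register. A short numpy illustration:

```python
import numpy as np

def uniform_superposition(n: int) -> np.ndarray:
    """State vector of n qubits, each in the equal superposition
    (|0> + |1>)/sqrt(2): 2**n amplitudes, all equal."""
    dim = 2**n
    return np.full(dim, 1 / np.sqrt(dim))

psi = uniform_superposition(10)
print(len(psi))                        # 1024 amplitudes for 10 qubits
print(float(np.sum(psi**2)))           # probabilities sum to 1.0
```

At n = 50 the vector would already hold about 10**15 amplitudes, which is why classical simulation of quantum registers becomes infeasible so quickly.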

Unfortunately, qubits are fragile; they lose their quantum character through interactions with their environment. A new JQI semiconductor-based qubit design ably addresses this issue of qubit robustness. Unlike previous semiconductor qubit setups, the new device has no need of external, changing magnetic fields. This permits greater control over the qubit.

The enhanced mastery results in new reliability for processing quantum information in semiconductor qubits.  Furthermore, the necessary control features---voltages applied to nanometer-scale electrodes---are directly compatible with existing transistor-based devices. The research was carried out by scientists at JQI, Harvard, the Niels Bohr Institute at the University of Copenhagen, and the University of California at Santa Barbara.  The results are published in two papers---one theoretical, one experimental---in the journal Physical Review Letters.

This new approach expands upon efforts to use individual electron spins which point “up” or “down” to form a natural qubit (Read about "Qubit Design" in Gallery, above).  Instead of using a single electron, the researchers used three electrons, confined in three adjacent quantum dots.  Advances in fabricating arrays of quantum dots have opened new possibilities for realizing robust qubits. In the last few years, scientists have demonstrated control of multiple electrons and their associated spin in such triple quantum dots (Read about "Triple Quantum Dots" in Gallery, above). Quantum dots now have many of the essential characteristics needed for quantum computing, but decoherence and stability remain outstanding issues.

Why is using three electrons better than using one?  In the case of a single electron trapped in quantum dots, the qubit is made using the two possible orientations of an intrinsic angular momentum called spin. The spin is free to point “up (aligned)” or “down (anti-aligned)” with respect to a magnetic field. Here, flipping the state of the qubit required a large, rapidly oscillating magnetic field. Fast changes in magnetic fields, however, give rise to oscillating electric fields---this is exactly what happens in electric generators. This method presents a challenge, because such additional changing electric fields will fundamentally and dynamically modify the qubit system. It is therefore better to exert control over qubits solely with electric fields, eliminating the oscillating magnetic fields.

The researchers do exactly this.  They find a way to use an electric field rather than a magnetic field to flip the qubit from 1 to 0 (or vice versa).  While single-electron spin transitions require a magnetic field (magnetic resonance; think MRI technology), if one adds electrons to the system and constructs the energy levels from combinations of three-spin orientations, the magnetic interaction is no longer necessary.

JQI scientist Jacob Taylor explains further why using three electrons is better than one: "Amazingly, when the three electrons are in constant, weak contact, their interaction hides their individual features from the outside world. This is, in essence, why the three-spin design protects quantum information so well.  Furthermore, even though the information is protected, a well designed 'tickling' of the device at just the right frequency has a dramatic effect on the information, as demonstrated by the quantum gates shown in the experiment."

These joint few-electron states have been studied previously and are called singlet-triplet qubits (graphical explanation in Gallery). However, while the changing magnetic fields were removed in previous work, the proposed logic gates relied on varying the tunneling rate in multiple steps, a cumbersome process that is susceptible to additional noise on the control electrodes.

These two new papers in Physical Review Letters report improvements and simplifications in qubit construction and particularly in qubit manipulation. Instead of requiring sequential changes in the tunneling, qubit manipulation is carried out directly, allowing for easy control of the qubit around two axes. The levels are separated in energy by microwave frequencies (GHz) and can be addressed with oscillating electric fields. Additionally, by adjusting the electrode voltages, the qubit levels are made to be far in energy from other states.
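The resonant addressing described here can be illustrated with a toy two-level model: driven on resonance, a qubit's population oscillates between its two levels at the Rabi frequency, and a pulse of just the right duration (a "pi pulse") flips the qubit. This is a minimal sketch with hypothetical numbers, not the experiment's actual parameters.

```python
import numpy as np

# Minimal sketch (hypothetical parameters, not the experiment's): resonant
# driving of a two-level qubit, analogous to addressing the GHz-spaced
# resonant-exchange qubit levels with oscillating electric fields.
rabi_freq = 2 * np.pi * 50e6   # assumed 50 MHz Rabi frequency, in rad/s

# On resonance (rotating-wave approximation), the excited-state population
# oscillates as sin^2(Omega * t / 2); a pi pulse (t = pi / Omega) flips
# the qubit from 0 to 1.
t_pi = np.pi / rabi_freq
p_after_pi = np.sin(rabi_freq * t_pi / 2) ** 2

print(f"pi-pulse duration: {t_pi * 1e9:.1f} ns")       # → 10.0 ns
print(f"population after pi pulse: {p_after_pi:.3f}")  # → 1.000
```

Rotating the drive phase shifts the axis of this rotation on the Bloch sphere, which is what gives control around two axes.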

Once the researchers establish the electrode voltages, they proceed to the business of performing logic operations using resonant microwaves. What does this mean? The resonant exchange qubit, as the authors refer to it, is embedded in a crystal awash with all the pesky environmental influences that can endanger the integrity of qubits---phonons, other electrons, and interactions with the atoms that compose the crystal.  And yet, as a home for qubits, this environment is actually pretty stable. How stable? The experiments showed a coherence time of 20 microseconds. This is about 10,000 times longer than the reported gate time, which is good news for scaling up both the number of qubits and the number of gates.
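A quick back-of-envelope check of the figures quoted above (treating the 10,000 ratio as exact):

```python
# Back-of-envelope check of the numbers above: a 20-microsecond coherence
# time that is about 10,000 times the gate time implies gates of roughly 2 ns.
coherence_time = 20e-6   # seconds, as reported
ratio = 10_000           # coherence time / gate time, as reported
gate_time = coherence_time / ratio

print(f"implied gate time: {gate_time * 1e9:.1f} ns")  # → 2.0 ns
```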

The JQI theory paper that accompanies the experimental paper proposes a way of elaborating on the basic resonant-exchange qubit design to accommodate two-qubit logic gates. Taylor says: "Working from a few simple concepts, this new approach to using electron spins seems obvious in retrospect. I'm particularly gratified with the ease of implementation. With this new design, we are getting fast, good, cheap quantum bits in an approach that promises enormous economies of scale.  But the two-qubit gate---crucial for the future success of this approach---is the big, open experimental challenge. I'm very curious to see which method ends up working the best in practice."

Charles Marcus, leader of the Niels Bohr Institute contingent for this experiment, agrees that the resonant exchange qubit represents a workable format for reliable qubits.  He gives Taylor credit in this way: "The idea for the resonant exchange qubit came from Jake, developed, I imagine, sitting with Jim Medford in the lab, next to the cryostat.  Jake is the only full-time theorist I know who can run a dilution refrigerator. I think his creativity is stimulated by the thumping sound of vacuum pumps. With the resonant exchange qubit, I see a big step forward for spin qubits. Challenges remain, but I can now see a clear forward path."

Written by Phillip F. Schewe and Emily Edwards


At low light, cats see better than humans. Electronic detectors do even better, but eventually they too become more prone to errors at very low light. The fundamental probabilistic nature of light makes it impossible to perfectly distinguish light from dark at very low intensity.  However, by using quantum mechanics, one can find measurement schemes that can, at least for part of the time, perform measurements which are free of errors, even when the light intensity is very low.

The chief advantage of using such a dilute light beam is to reduce the power requirement. And this in turn means that encrypted data can be sent over longer distances, even up to distant satellites.  Low power and high fidelity in reading data is especially important for transmitting and processing quantum information for secure communications and quantum computation. To facilitate this quantum capability you want a detector that sees well in the (almost) dark. Furthermore, in some secure communications applications it is preferable to occasionally avoid making a decision at all rather than to make an error.

A scheme demonstrated at the Joint Quantum Institute does exactly this. The JQI work, carried out in the lab of Alan Migdall and published in the journal Nature Communications, shows how one category of photo-detection system can make highly accurate readings of incoming information at the single-photon level by allowing the detector in some instances not to give a conclusive answer. Sometimes discretion is the better part of valor.

Quantum Morse Code

Most digital data comes into your home or office in the form of pulsed light, usually encoding a stream of zeros and ones, the equivalent of the 19th century Morse code of dots and dashes.  A more sophisticated data encoding scheme is one that uses not two but four states---0, 1, 2, 3 instead of the customary 0 and 1.  This protocol can be conveniently implemented, for example, by having the four states correspond to four different phases of the light pulse.  However, the phase states representing values of 0, 1, 2, and 3 have some overlap, and this produces ambiguity when you try to determine which state you have received. This overlap, which is inherent in the states, means that your measurement system sometimes gives you the wrong answer.
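The overlap described above can be made concrete with a small sketch. Modeling each weak laser pulse as a coherent state, the overlap between two states falls off as exp(-|α − β|²), so at mean photon numbers below one, the four phase states genuinely blur into each other. The mean photon number here is a hypothetical value chosen for illustration, not the experiment's.

```python
import numpy as np

# Sketch of the four-phase encoding described above: four coherent states
# of equal amplitude alpha, at phases 0, 90, 180, and 270 degrees, standing
# for the symbols 0, 1, 2, 3. The mean photon number is hypothetical.
n_mean = 0.5                       # mean photon number |alpha|^2 (assumed)
alpha = np.sqrt(n_mean)
states = alpha * np.exp(1j * np.pi / 2 * np.arange(4))

# For coherent states, |<beta|alpha>|^2 = exp(-|alpha - beta|^2): nonzero
# overlap means no measurement can distinguish the symbols perfectly.
for k in range(1, 4):
    overlap = np.exp(-abs(states[0] - states[k]) ** 2)
    print(f"overlap of state 0 with state {k}: {overlap:.3f}")
# → 0.368, 0.135, 0.368
```

Adjacent phases (90° apart) overlap more strongly than opposite ones, which is why the decoding error is dominated by nearest-neighbor confusions.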

Migdall and his associates recently achieved the lowest error rate yet for a photodetector deciphering such a four-fold phase encoding of information in a light pulse.  In fact, the error rate was some 4 times lower than what is possible with conventional measurement techniques.  Such low error rates were achieved by implementing measurements for minimum error discrimination, or MED for short.  This measurement is deterministic insofar as it always gives an answer, albeit with some chance of being wrong.

By contrast, one can instead perform measurements that are in principle error free by allowing some inconclusive results and giving up the deterministic character of the measurement outcomes. This probabilistic discrimination scheme, based on quantum mechanics, is called unambiguous state discrimination, or USD, and is beyond the capabilities of conventional measurement schemes.

In their latest result (see paper linked below), JQI scientists implement a USD of such four-fold phase-encoded states by performing measurements able to eliminate all but one possible value for the input state---whether 0, 1, 2, or 3. This has the effect of determining the answer perfectly or not at all.   Alan Migdall compares the earlier minimum error discrimination approach with the unambiguous state discrimination approach: “The former is a technique that always gets an answer albeit with some probability of being mistaken, while the latter is designed to get answers that in principle are never wrong, but at the expense of sometimes getting an answer that is the equivalent of ‘don't know.’ It’s as your mother said, ‘if you can’t say something good it is better to say nothing at all.’”

With USD you make a series of measurements that rule out each state in turn. Then by process of elimination you figure out which state it must be. However, sometimes you obtain an inconclusive result, for example when your measurements eliminate fewer than three of the four possibilities for the input state.
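This elimination logic can be captured in a toy Monte Carlo model (not the actual optical measurement scheme, and with a made-up elimination probability): the decoder answers only when all three wrong states have been ruled out, and otherwise reports "don't know" instead of guessing.

```python
import random

# Toy model of USD-style decoding of a four-state alphabet: each wrong
# state is independently ruled out with probability p_eliminate (assumed).
# Conclusive answers are, by construction, never wrong.
def usd_decode(true_state, rng, p_eliminate=0.8):
    wrong = [s for s in range(4) if s != true_state]
    eliminated = [s for s in wrong if rng.random() < p_eliminate]
    if len(eliminated) == 3:
        return true_state   # all other states ruled out: conclusive
    return None             # inconclusive: "don't know"

trials = 10_000
rng = random.Random(1)
results = [usd_decode(2, rng) for _ in range(trials)]
conclusive = [r for r in results if r is not None]

print(f"conclusive fraction: {len(conclusive) / trials:.3f}")
print(f"errors among conclusive answers: {sum(r != 2 for r in conclusive)}")
```

The conclusive fraction is roughly p³ (about half here), while the error count among conclusive answers is always zero: the trade of determinism for certainty that the article describes.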

The Real World

Measurement systems are not perfect, and ideal USD is not possible. Real-world imperfections produce some errors in the decoded information even in situations where USD appears to work smoothly.  The JQI experiment, which implements USD for four quantum states encoded in pulses with average photon numbers of less than one photon, is robust against real-world imperfections.  In the end, it performs with much lower error than could be achieved by any deterministic measurement, including MED. This advance will be useful for quantum information processing, quantum communications with many states, and fundamental studies of quantum measurements at the low-light level.


The sequence of images that constitute Hollywood movies can be stored handily on physical media such as magnetic tape or compact discs. At the Joint Quantum Institute (*) images can be stored in something as insubstantial as a gas of rubidium atoms. No one expects a vapor to compete with a solid in terms of density of storage. But when the “images” being stored are part of a quantum movie---the coherent sequential input to or output from a quantum computer---then the pinpoint control possible with vapor will be essential.

Last year Paul Lett and his JQI colleagues reported the ability to store a sequence of images (two letters of the alphabet) which were separated in time but overlapping in space within the volume of a gas-filled memory cell. This is random access in time. In a new experiment, by contrast, parts of a single image (spread out across a volume of space) can be stored and later recovered in chunks. Selectively reading out these partial views represents random access memory in space. The earlier storage effort could be called “temporal multiplexing,” while now it can be called “spatial multiplexing.”

This new result is published in the New Journal of Physics (**). The paper includes a movie showing how parts of an image are recalled. The image is of the acronym NIST, standing for National Institute of Standards and Technology, where the experiment was performed.

Images can be stored in the atoms held in a 20-cm-long vapor cell when atoms are selectively excited or manipulated by three fields acting simultaneously: the electric field of a “signal” laser beam bearing the primary image, the electric field of a second “control” laser, and the magnetic field from an external magnet coil whose strength along the length of the vapor cell varies with a well calibrated gradient. The storage method, called gradient echo memory, was described in detail in a previous press release (***).

In the new experiment the image is read out in three distinct parts one after the other by making the “control” beam more complex, delivering its light using additional mirrors and fibers. This serves to excite different parts of the vapor selectively.

The image is held by moving some atoms in the vapor into different stable states. The sections of the image can be read out over a period of microseconds. Portions of the image can also be deleted with what the researchers call an optical eraser.

So, in the JQI optical memory, you can read out any one of the letters N-I-S-T. But eventually with a memory you would like to store many (trillions of) bits of information. “We can’t compete with solid-state memories yet as far as density of storage,” said Paul Lett. Other vapor memories exist, but their image-recovery efficiencies are about 50%, Lett says, while those for the new JQI memory can be as high as 87%.

The more immediate aim of his work is to prepare for manipulating and storing quantum information, a task with special requirements. “With our form of optical memory we are hoping to store and retrieve quantum information and not add so much noise as to destroy it,” Lett adds. “For most other types of memory the noise level is fine for storing classical information, but not for quantum information.”

(*)The Joint Quantum Institute is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.

(**) See reference publication below. 

(***) See press release reporting previous work with movies stored in an atomic vapor under Related JQI Articles.
