
Quantum Computing and Information Science

About

In conventional electronic computers, information is stored and processed in the form of strings of "bits" (binary digits). Each individual bit can have only one of two values: 0 or 1. In a quantum computer, information would be stored in quantum bits, or qubits, each of which, thanks to the nature of superposition, can be 0, 1 or both at once. This parallelism could make some mathematical operations exponentially faster than they would be on a conventional computer. One important future application of quantum computers is the task of factoring the extremely large numbers that serve as the "public keys" in current encryption and data-protection schemes. JQI physicists are investigating promising quantum computing architectures as well as developing methods to control quantum effects that can be exploited to process information in new ways.
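
The superposition and parallelism described above can be sketched in a few lines of NumPy. This is a toy state-vector illustration, not how a real quantum computer is programmed:

```python
import numpy as np

# Computational basis states for a single qubit.
ZERO = np.array([1, 0], dtype=complex)

# The Hadamard gate turns a definite state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ZERO  # now (|0> + |1>)/sqrt(2): "0 and 1 at once"

# Measurement probabilities are the squared amplitudes (the Born rule).
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]

# n qubits span a 2**n-dimensional state space -- the source of the
# parallelism described above. Put 3 qubits each into superposition:
n = 3
register = ZERO
for _ in range(n - 1):
    register = np.kron(register, ZERO)  # start in |000>
for i in range(n):
    # Build the operator that applies H to qubit i and identity elsewhere.
    op = np.array([[1]], dtype=complex)
    for j in range(n):
        op = np.kron(op, H if j == i else np.eye(2))
    register = op @ register
print(register.real)  # 8 equal amplitudes of 1/sqrt(8)
```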

Latest News

From credit card numbers to bank account information, we transmit sensitive digital information over the internet every day. Since the 1990s, though, researchers have known that quantum computers threaten to disrupt the security of these transactions.

That’s because quantum physics predicts that these computers could do some calculations far faster than their conventional counterparts. This would let a quantum computer crack a common internet security system called public key cryptography.

This system lets two computers establish private connections hidden from potential hackers. In public key cryptography, every device hands out copies of its own public key, which is a piece of digital information.  Any other device can use that public key to scramble a message and send it back to the first device. The first device is the only one that has another piece of information, its private key, which it uses to decrypt the message. Two computers can use this method to create a secure channel and send information back and forth.
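
The public/private key dance described above can be made concrete with a toy RSA example. The primes here are tiny and chosen for readability; real keys use primes hundreds of digits long:

```python
# Toy RSA with tiny primes -- illustration only, utterly insecure.
p, q = 61, 53
n = p * q                # 3233: part of the public key
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent (coprime to phi)
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only the private-key holder decrypts
assert recovered == message

# Security rests on the difficulty of factoring n back into p and q.
# A quantum computer running Shor's algorithm could do that factoring
# efficiently -- the threat described above.
print(ciphertext)  # 2790
```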

A quantum computer could quickly calculate another device’s private key and read its messages, putting every future communication at risk. But many scientists are studying how quantum physics can fight back and help create safer communication lines.

One promising method is quantum key distribution, which allows two parties to directly establish a secure channel with a single secret key. One way to generate the key is to use pairs of entangled photons—particles of light with a shared quantum connection. The entanglement guarantees that no one else can know the key, and if someone tries to eavesdrop, both parties will be tipped off.
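
To see why eavesdropping is detectable, here is a toy classical stand-in for entanglement-based key distribution. The function name and the probabilities are illustrative simplifications, not a faithful quantum simulation:

```python
import random

random.seed(0)

def run_qkd(n_pairs, eavesdropper=False):
    """Toy model of entanglement-based key distribution.

    Each entangled pair yields perfectly correlated outcomes when Alice
    and Bob happen to measure in the same basis. An eavesdropper who
    intercepts and re-measures in a randomly chosen basis breaks that
    correlation part of the time, raising the error rate.
    """
    key, errors = [], 0
    for _ in range(n_pairs):
        a_basis = random.randint(0, 1)
        b_basis = random.randint(0, 1)
        outcome = random.randint(0, 1)  # shared entangled outcome
        a_result = b_result = outcome
        if eavesdropper and random.randint(0, 1) != a_basis:
            # Eve measured in the wrong basis: Bob's result is randomized.
            b_result = random.randint(0, 1)
        if a_basis == b_basis:  # keep only the matching-basis rounds
            key.append(a_result)
            if a_result != b_result:
                errors += 1
    return key, errors / max(len(key), 1)

key, qber = run_qkd(2000)
print(qber)            # 0.0 -- clean channel, no errors
_, qber_eve = run_qkd(2000, eavesdropper=True)
print(qber_eve > 0.1)  # True -- the eavesdropper is tipped off by errors
```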

Tobias Huber, a recently arrived JQI Experimental Postdoctoral Fellow, has been investigating how to reliably generate the entangled photons necessary for this secure communication. Huber is a graduate of the University of Innsbruck in Austria, where he was supervised by Gregor Weihs. They have frequently collaborated with JQI Fellow Glenn Solomon, who spent a semester at Innsbruck as a Fulbright Scholar. Over the past couple of years, they have been studying a particular source of entangled photons, called quantum dots.

A quantum dot is a tiny area in a semiconductor, just nanometers wide, that is embedded in another semiconductor. This small region behaves like an artificial atom. Just like in an atom, electrons in a quantum dot occupy certain discrete energy levels. If the quantum dot absorbs a photon of the right color, an electron can jump to a higher energy level. When it does, it leaves behind an open slot at the lower energy, which physicists call a hole. Eventually, the electron will decay to its original energy, emitting a photon and filling in the hole. The intermediate combination of the excited electron and the hole is called an exciton, and two excited electrons and two holes are called a biexciton. A biexciton will decay in a cascade, emitting a pair of photons.

Huber, Weihs, Solomon and several colleagues have developed a way to directly excite biexcitons in quantum dots using a sequence of laser pulses. The pulses make it possible to encode information in the pair of emitted photons, creating a connection between them known as time-bin entanglement. It’s the best type of entanglement for transmitting quantum information through optical fibers because it doesn’t degrade as easily as other types over long distances. Huber and his colleagues are the first to directly produce time-bin entangled photons from quantum dots.

In their latest work, published in Optics Express, they investigated how the presence of material imperfections surrounding the quantum dots influences this entanglement generation. Imperfections have their own electron energy levels and can steal an electron from a dot or donate an electron to fill a hole. Either way, the impurity prevents an exciton from decaying and emitting a photon, decreasing the number of photons that are ultimately released. To combat this loss, the team used a second laser to fill up the electron levels of the impurities and showed that this increased the number of photons released without compromising the entanglement between them.

The team says the new work is a step in the right direction to make quantum dots a viable source of entangled photons. Parametric down-conversion, a competitor that uses crystals to split the energy of one photon into two, occasionally produces two pairs of entangled photons instead of one. This could allow an eavesdropper to read an encrypted message without being detected. The absence of this drawback makes quantum dots an excellent candidate for producing entangled photons for quantum key distribution.

The advent of quantum computing brings new security challenges, but tools like quantum key distribution are taking those challenges head-on. It’s possible that, one day, we could have not only quantum computers, but quantum-secure communication lines, free from prying eyes.

Read More

This is part two of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from physicists at JQI and the Condensed Matter Theory Center at the University of Maryland. The second part focuses on the theoretical questions about Weyl materials that Maryland researchers are exploring. Part one, which was published last week, introduced their history and basic physics. If you haven’t read part one, we encourage you to head there first before getting into the details of part two.

The 2015 discovery of a Weyl semimetal—and the Weyl fermions it harbored—provoked a flurry of activity from researchers around the globe. A quick glance at a recent physics journal or the online arXiv preprint server testifies to the topic’s popularity. The arXiv alone has had more than 200 papers on Weyl semimetals posted in 2016.

Researchers at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland have been interested in Weyl physics since before last summer’s discovery, publishing 18 papers on the topic over the past two years. In all, more than a dozen scientists at Maryland have been working to understand the fundamental properties of these curious new materials.

In addition to studying specific topics, researchers are also realizing that the physics of Weyl fermions—particles first studied decades ago in a different setting—might have wider reach. They may account for the low-energy behavior of certain superconductors and other materials, especially those that are strongly correlated—that is, materials in which interactions between electrons cannot be ignored.

"Weyl physics should be abundant in many, many correlated materials," says Pallab Goswami, a theorist and postdoctoral researcher at JQI and CMTC. "If we can understand the exotic thermodynamic, transport and electrodynamic properties of Weyl fermions, we can probably understand more about low temperature physics in general," Goswami says.

Taking a wider approach

Goswami is not only interested in discovering new Weyl semimetals. He also wants to find strongly interacting materials where Weyl semimetal physics can help explain unresolved puzzles. That behavior is often related to the unique magnetic properties of Weyl fermions.

Recently, he and several colleagues examined a family of compounds known as pyrochlore iridates, formed from iridium, oxygen and a rare earth element such as neodymium or praseodymium. While most of these are insulators at low temperatures, the compound with praseodymium is an exception. It remains a metal and, intriguingly, has an anomalous current that flows without any external influence. This current, due to a phenomenon called the Hall effect, appears in other materials, but it is usually driven by an applied magnetic field or the magnetic properties of the material itself. In the praseodymium iridate, though, it appears even without a magnetic field, even though experiments have detected no magnetic properties in the compound.

Goswami and his colleagues have argued that Weyl fermions can account for this puzzling behavior. They can distort a material’s magnetic landscape, making it look to other particles as if a large magnetic field is there. This effect is hard to spot in the lab, though, due to the challenge of keeping samples at very cold temperatures. The team has suggested how future experiments might confirm the presence of Weyl fermions through precise measurements with a scanning tunneling microscope.

On the surface

Parallel to Goswami’s efforts to expand the applications of Weyl physics, Johannes Hofmann, a former JQI and CMTC theorist who is now at the University of Cambridge in the UK, is diving into the details of Weyl semimetals. Hofmann has studied Weyl semimetals divorced from any real material and predicted a generic behavior that electrons on the surface of a semimetal will have. It’s a feature that could ultimately find applications to electronics and photonics.

In particular, he studied undulating charge distributions on the surface of semimetals, created by regions with more electrons and regions with fewer. Such charge fluctuations are dynamic, moving back and forth in response to their mutual electrical attraction, and in Weyl semimetals they support waves that move in only one direction.

The charge fluctuations generate electric and magnetic fields just outside the surface. And on the surface, positive and negative regions are packed close together—so close, in fact, that their separation can be much smaller than the wavelength of visible light. Since these fluctuations occur on such a small scale, they can also be used to detect small features in other objects. For instance, bringing a sample of some other material near the surface will modify the distribution of charges in a way that could be measured. Getting the same resolution with light would require high-energy photons that could destroy the object being imaged. Indeed, researchers have already demonstrated this imaging technique in experiments with ordinary metals.

On the surface of Weyl semimetals one-way waves can travel through these charge fluctuations. Ordinary metals, too, can carry waves but require huge magnetic fields to steer them in only one direction. Hofmann showed that in a Weyl semimetal, it’s possible to create these waves without a magnetic field, a fact that could enable applications of the materials to microscopy and lithography. 

Too much disorder?

Although many studies imagine that Weyl materials are perfectly clean, such a situation rarely occurs in real experiments. Contaminants inevitably lodge themselves into the ideal crystal structure of any solid. Consequently, JQI scientists have looked at how disorder—the dirt that prevents samples from behaving like perfect theoretical models—affects the properties of Weyl materials. Their work has settled an argument theorists have been having for years.

One camp thought that weak disorder—dirt that doesn’t cause big changes—was essentially harmless to Weyl semimetals, since tiny wobbles in the material's electrical landscape could safely be ignored. The other camp argued that certain fluctuations, though weak, affect a wide enough area of the landscape that they cannot be ignored.

Settling the dispute took intense numerical study, requiring the use of supercomputing resources at Maryland. "It was very hard to do this," says Jed Pixley, a postdoctoral researcher at JQI and CMTC who finally helped solve the disorder conundrum. "It turns out that the effects of large local fluctuations of the disorder are weak, but they’re there."

Pixley’s calculations found that large regions of weak disorder create a new type of low-energy excitation, in addition to Weyl fermions. These new excitations live around the disordered regions and divert energy away from the Weyl fermion quasiparticles. The upshot is that the quasiparticles have a finite lifetime, instead of the infinite lifetime predicted by previous studies. The result has consequences for the stability of Weyl semimetals in a technical sense, although the lifetime of the quasiparticles is still quite long. In typical experiments, the effects of large areas of disorder would be tough to spot, although experiments on Weyl semimetals are still in their early days.

Research into Weyl materials shows little sign of slowing down. And the broader role that Weyl fermions play in condensed matter physics is still evolving and growing, with many more surprises likely in the future. As more and more experimental groups join the hunt for exotic physics, theoretical investigations, like those of the scientists at JQI and CMTC, will be crucial to identifying new behaviors and suggesting new experiments, steering the study of Weyl physics toward new horizons.

Read More

This is part one of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from researchers at JQI and the Condensed Matter Theory Center at the University of Maryland. The first part focuses on the history and basic physics of these materials. Part two focuses on theoretical work at Maryland.

For decades, particle accelerators have grabbed headlines while smashing matter together at faster and faster speeds. But in recent years, alongside the progress in high-energy experiments, another realm of physics has been taking its own exciting strides forward.

That realm, which researchers call condensed matter physics, studies chunks of matter moving decidedly slower than the protons in the LHC. In fact, the materials under study—typically solids or liquids—are usually sitting still. That doesn't make them boring, though. Their calm appearance can often hide exotic physics that arises from their microscopic activity.

"In condensed matter physics, the energy scales are much lower," says Pallab Goswami, a postdoctoral researcher at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland. "We want to go to lower energies and find new phenomena, which is exactly the opposite of what is done in particle physics."

Historically, that's been a fruitful approach. The field has explained the physics of semiconductors—like the silicon that makes computer chips—and many superconductors, which generate the large magnetic fields required for clinical MRI machines.

Over the past decade, that success has continued. In 2004, researchers at the University of Manchester in the UK discovered a way to make single-atom-thick sheets of carbon by sticking Scotch tape onto graphite and peeling it off. It was a shockingly low-tech way to make graphene, a material with stellar electrical properties and incredible strength, and it led quickly to a Nobel Prize in physics in 2010.

A few years later, researchers discovered topological insulators, materials that trap their internal electrons but let charges on the surface flow freely. It’s a behavior that requires sophisticated math to explain—math that earned three researchers a share of the 2016 Nobel Prize in physics for theoretical discoveries that ultimately explain the physics of these and other materials.

In 2012, experimentalists studying the junction between a superconductor and a thin wire spotted evidence for Majorana fermions, particles that behave like uncharged electrons. Originally studied in the context of high-energy physics, these exotic particles never showed up in accelerators, but scientists at JQI predicted that they might make an appearance at much lower energies.

Last year, separate research groups at Princeton University, MIT and the Chinese Academy of Sciences discovered yet another exotic material—a Weyl semimetal—and with it yet another particle: the Weyl fermion. It brought an end to a decades-long search that began in the 1930s and earned acclaim as a top-10 discovery of the year, according to Physics World.

Like graphene, Weyl semimetals have appealing electrical properties and may one day make their way into electronic devices. But, perhaps more intriguingly for theorists, they also share some of the rich physics of topological insulators and have provoked a flurry of new research. Scientists working with JQI Fellow Sankar Das Sarma, the Director of CMTC, have published 18 papers on the subject since 2014.

Das Sarma says that the progress in understanding solid state materials over the past decade has been astonishing, especially the discovery of phenomena researchers once thought were confined to high-energy physics. “It shows how clever nature is, as concepts and laws developed in one area of physics show up in a completely disparate area in unanticipated ways,” he says.

An article next week will explore some of the work on Weyl materials at JQI and CMTC. This week's story will focus on the fundamental physics at play in these unusual materials.

Spotted at long last

Within two weeks last summer, three research groups reported evidence for Weyl semimetals. Teams from the US and China measured the energy of electrons on the surface of tantalum arsenide, a metallic crystal that some had predicted might be a semimetal. By shining light on the material and capturing electrons ejected from the sample, researchers were able to map out a characteristic sign of Weyl semimetals—a band of energies that electrons on the surface inhabit, known as a Fermi arc. It was a feature predicted only for Weyl semimetals.

Much of the stuff on Earth, from wood and glass to copper and water, is not a semimetal. It's either an insulator, which does a bad job of conducting electricity, or a conductor, which lets electrical current flow with ease.

Quantum physics ultimately explains the differences between conductors, insulators, semiconductors and semimetals. The early successes of quantum physics—like explaining the spectrum of light emitted by hydrogen atoms—revolved around the idea that quantum objects have discrete energy levels. For instance, in the case of hydrogen, the single electron orbiting the nucleus can only occupy certain energies. The pattern of light emanating from hot hydrogen gas matches up with the spacing between these levels.

In a solid, which has many, many atoms, electrons still occupy a discrete set of energies. But with so many electrons come many more levels, and those levels tend to bunch together. This leads to a series of energy bands, where electrons can live, and gaps, where they can't. The figure below illustrates this.

Electrons pile into these bands, filling up the allowed energies and skipping the gaps. Depending on where in the band structure the last few electrons sit, a material will have dramatically different electrical behavior. Insulators have an outer band that is completely filled up, with an energy gap to a higher empty band. Metals have their most energetic electrons sitting in a partially filled band, with lots of slightly higher energies to jump to if they are prodded by a voltage from a battery.

A Weyl semimetal is a different beast. There, electrons pile in and completely fill a band, but there is no gap to the higher, unfilled band. Instead, the two touch at isolated points, which are responsible for some interesting properties of Weyl materials.
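
The band-filling picture above can be condensed into a crude classification sketch. The 3 eV insulator cutoff is a conventional rule of thumb, not a figure from the article:

```python
def classify(highest_filled_band_full, gap_to_next_band):
    """Crude band-structure classification following the text.

    gap_to_next_band: energy gap in eV between the highest filled
    states and the next empty ones (0 means the bands touch or overlap).
    """
    if not highest_filled_band_full:
        return "metal"          # partially filled band: easy conduction
    if gap_to_next_band > 3.0:  # rough conventional cutoff, in eV
        return "insulator"
    if gap_to_next_band > 0.0:
        return "semiconductor"
    return "semimetal"          # filled band touching an empty one

print(classify(False, 0.0))  # metal
print(classify(True, 5.5))   # insulator (e.g. diamond)
print(classify(True, 1.1))   # semiconductor (e.g. silicon)
print(classify(True, 0.0))   # semimetal (bands touch at isolated points)
```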

Quasiparticles lead the charge

We often think of electrons as carrying the current in a wire, but that’s not the whole story. The charge carriers look like electrons, but due to their microscopic interactions they behave as if they have a different mass. These effective charge carriers, which have different properties in different materials, are called quasiparticles.

By examining a material's bands and gaps, it's possible to glean some of the properties of these quasiparticles. For a Weyl semimetal, the charge carriers satisfy an equation first studied in 1929 by a German mathematician named Hermann Weyl, and they are now called Weyl fermions.

But the structure of the bands doesn't capture everything about the material, says Johannes Hofmann, a former postdoctoral researcher at JQI and CMTC who is now at the University of Cambridge. "In a sense, these Weyl materials are very similar to graphene," Hofmann says. "But they are not only described by the band structure. There is a topological structure as well, just as in topological insulators."

Hofmann says that although the single-point crossings in the bands play an important role, they don't tell the whole story. Weyl semimetals also have a topological character, which means that the overall shape of the bands and gaps, as well as the way electrons spread out in space, affect their properties. Topology can account for these other properties by capturing the global differences in these shapes, like the distinction between an untwisted ribbon and a Moebius strip.

The interplay between the topological structure and the properties of Weyl materials is an active area of research. Experiments, though, are still in the earliest stages of sorting out these questions.

A theorist’s dream

Researchers at JQI and elsewhere are studying many of the theoretical details, from the transport properties on the surfaces of Weyl materials to the emergence of new types of order. They are even finding Weyl physics useful in tackling condensed matter quandaries that have long proved intractable.

Jed Pixley, a postdoctoral researcher at CMTC, has studied how Weyl semimetals behave in the presence of disorder. Pixley says that such investigations are crucial if Weyl materials are to find practical applications. "If you are hoping semimetals have these really interesting aspects,” he says, “then things better not change when they get a little dirty."

Please return next week for a sampling of the research into Weyl materials underway at JQI and CMTC. Written by Chris Cesare with illustrations and figures by Sean Kelley.

Read More

When it comes to quantum physics, light and matter are not so different. Under certain circumstances, negatively charged electrons can fall into a coordinated dance that allows them to carry a current through a material laced with imperfections. That motion, which can only occur if electrons are confined to a two-dimensional plane, arises due to a phenomenon known as the quantum Hall effect.

Researchers, led by Mohammad Hafezi, a JQI Fellow and assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, have made the first direct measurement that characterizes this exotic physics in a photonic platform. The research was published online Feb. 22 and featured on the cover of the March 2016 issue of Nature Photonics. These techniques may be extended to more complex systems, such as one in which strong interactions and long-range quantum correlations play a role.

Symmetry and Topology

Physicists use different approaches to classify matter; symmetry is one powerful method. For instance, the microscopic structure of a material like diamond looks the same even as you shift your gaze to a new spot in the crystal. These symmetries – the rotations and translations that leave the microscopic structure the same – predict many of the physical properties of crystals.

Symmetry can actually offer a kind of protection against disruptions. Here, the word protection means that the system (e.g. a quantum state) is robust against changes that do not break the symmetry. Recently, another classification scheme based on topology has gained significant attention. Topology is a property that depends on the global arrangement of particles that make up a system rather than their microscopic details. The excitement surrounding this mathematical concept has been driven by the idea that the topology of a system can offer a stability bubble around interesting and even exotic physics, beyond that of symmetry. Physicists are interested in harnessing states protected by both symmetry and topology because quantum devices must be robust against disturbances that can interfere with their functionality.

The quantum Hall effect is best understood by peering through the lens of topology. In the 1980s, physicists discovered that electrons in some materials behave strangely when subjected to large magnetic fields at extreme cryogenic temperatures. Remarkably, the electrons at the boundary of the material will flow along avenues of travel called ‘edge states’, protected against defects that are most certainly present in the material. Moreover, the conductance—a measure of the current—is quantized: as the magnetic field is ramped up, the conductance does not change smoothly. Instead it stays flat, like a plateau, and then suddenly jumps to a new value. The plateaus occur at precise values that are independent of many of the material’s properties. This hopping behavior is a form of precise quantization and is what gives the quantum Hall effect its great utility, allowing it to provide the modern standard for calibrating resistance in electronics, for instance.
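
The plateaus mentioned above sit at integer multiples of a universal conductance quantum, G = ν e²/h. A quick sketch of the arithmetic behind the resistance standard, using the exact SI values of the constants:

```python
# Quantized Hall conductance plateaus: G = nu * e^2 / h, integer nu.
E_CHARGE = 1.602176634e-19  # elementary charge in C (exact, 2019 SI)
PLANCK_H = 6.62607015e-34   # Planck constant in J*s (exact, 2019 SI)

G0 = E_CHARGE**2 / PLANCK_H  # conductance quantum e^2/h, in siemens

for nu in range(1, 4):
    # Each plateau sits at an integer multiple of e^2/h, independent
    # of the sample's material details.
    print(nu, nu * G0)

# The inverse, h/e^2, is the von Klitzing constant used to calibrate
# resistance, as described above.
R_K = 1 / G0
print(round(R_K, 3))  # 25812.807 ohms
```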

Researchers have engineered quantum Hall behavior in other platforms besides the solid-state realm in which it was originally discovered. Signatures of such physics have been spotted in ultracold atomic gases and photonics, where light travels in fabricated chips. Hafezi and colleagues have led the charge in the photonics field.

The group uses a silicon-based chip that is filled with an array of ring-shaped structures called resonators. The resonators are connected to each other via waveguides (figure). The chip design strictly determines the conditions under which light can travel along the edges rather than through the inner regions. The researchers measure the transmission spectrum, which is the fraction of light that successfully passes through an edge pathway. To circulate unimpeded through the protected edge modes, the light must possess a certain energy. The transmission increases when the light energy meets this criterion. For other parameters, the light will permeate the chip interior or get lost, causing the transmission signal to decrease. The compiled transmission spectrum looks like a set of bright stripes separated by darker regions (see figure). Using such a chip, this group previously collected images of light traveling in edge states, definitively demonstrating the quantum Hall physics for photons.

In this new experiment Hafezi’s team modified their design to directly measure the value of the topology-related property that characterizes the photonic edge states. This measurement is analogous to characterizing the quantized conductance, which was critical to understanding the electronic quantum Hall effect. In photonics, however, conductance is not the relevant quantity, since it pertains to electron-like behavior. Here the significant feature is the winding number, which is related to how light circulates around the chip. Its value equals the number of available edge states and should not change in the face of certain disruptions.

To extract the winding number, the team adds 100-nanometer titanium heaters on a layer above the waveguides. Heat changes the index of refraction, that is, how the light bends as it passes through the waveguides. In this manner, researchers can controllably imprint a phase shift onto the light. Phase can be thought of in terms of a time delay. For instance, when comparing two light waves, the intensity can be the same, but one wave may be shifted in time compared to the other. The two waves overlap when one wave is delayed by a full oscillation cycle—this is called a 2π phase shift.
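
The idea that a 2π phase shift is a full-cycle time delay can be checked numerically; a minimal sketch:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 1000)  # time axis, in radians of oscillation
wave = np.cos(t)

# A phase shift delays the wave; a full 2*pi shift is one whole cycle,
# so the shifted wave overlaps the original exactly.
half_shifted = np.cos(t - np.pi)      # delayed by half a cycle
full_shifted = np.cos(t - 2 * np.pi)  # delayed by a full cycle

print(np.allclose(wave, full_shifted))  # True: a 2*pi shift is invisible
print(np.allclose(wave, -half_shifted)) # True: a pi shift flips the sign
```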

On the chip, enough heat is added to imprint a 2π phase shift on the light. The researchers observe an energy shift in the transmission stripes corresponding to light traveling along the edge. Notably, in this chip design, the light can circulate either clockwise (CW) or counterclockwise (CCW), and the two travel pathways do not behave the same (in contrast to an interferometer). When the phase shift is introduced, the CW-traveling light hops one direction in the transmission spectrum, and the CCW light goes the opposite way. The winding number is the distance that these edge-state spectral features move, and it is the direct analog of the quantized jumps in the electronic conductance.

Sunil Mittal, lead author and postdoctoral researcher explains one future direction, “So far, our research has been focused on transporting classical [non-quantum] properties of light--mainly the power transmission. It is intriguing to further investigate if this topological system can also achieve robust transport of quantum information, which will have potential applications for on-chip quantum information processing.” 

This text was written by E. Edwards/JQI

Read More

Scientists have created a crystal structure that boosts the interaction between tiny bursts of light and individual electrons, an advance that could be a significant step toward establishing quantum networks in the future.

Today’s networks use electronic circuits to store information and optical fibers to carry it, and quantum networks may benefit from a similar framework. Such networks would transmit qubits – quantum versions of ordinary bits – from place to place and would offer unbreakable security for the transmitted information. But researchers must first develop ways for qubits, which are better at storing information, to interact with individual packets of light called photons, which are better at transporting it. In conventional networks, electro-optic modulators perform this task, using electronic signals to modulate properties of light.

Now, researchers in the group of Edo Waks, a fellow at JQI and an Associate Professor in the Department of Electrical and Computer Engineering at the University of Maryland, have struck upon an interface between photons and single electrons that makes progress toward such a device. By pinning a photon and an electron together in a small space, the electron can quickly change the quantum properties of the photon and vice versa. The research was reported online Feb. 8 in the journal Nature Nanotechnology.

“Our platform has two major advantages over previous work,” says Shuo Sun, a graduate student at JQI and the first author of the paper. “The first is that the electronic qubit is integrated on a chip, which makes the approach very scalable. The second is that the interactions between light and matter are fast. They happen in only a trillionth of a second – 1,000 times faster than previous studies.”

CONSTRUCTING AN INTERFACE

The new interface utilizes a well-studied structure known as a photonic crystal to guide and trap light. These crystals are built from microscopic assemblies of thin semiconductor layers and a grid of carefully drilled holes. By choosing the size and location of the holes, researchers can control the properties of the light traveling through the crystal, even creating a small cavity where photons can get trapped and bounce around.

“These photonic crystals can concentrate light in an extremely small volume, allowing devices to operate at the fundamental quantum limit where a single photon can make a big difference,” says Waks.

The results also rely on previous studies of how small, engineered nanocrystals called quantum dots can manipulate light. These tiny regions behave as artificial atoms and can also trap electrons in a tight space. Prior work from the JQI group showed that quantum dots could alter the properties of many photons and rapidly switch the direction of a beam of light.

The new experiment combines the light-trapping of photonic crystals with the electron-trapping of quantum dots. The group used a photonic crystal punctuated by holes just 72 nanometers wide, but left three holes undrilled in one region of the crystal. This created a defect in the regular grid of holes that acted like a cavity, which only photons of a certain energy could enter and leave.

Inside this cavity, embedded in layers of semiconductors, a quantum dot held one electron. The spin of that electron – a quantum property of the particle that is analogous to the motion of a spinning top – controlled what happened to photons injected into the cavity by a laser. If the spin pointed up, a photon entered the cavity and left it unchanged. But when the spin pointed down, any photon that entered the cavity came out with a reversed polarization – the direction that light’s electric field points. The interaction worked the opposite way, too: A single photon prepared with a certain polarization could flip the electron’s spin.

Both processes are examples of quantum switches, which modify the qubits stored by the electron and photon in a controlled way. Such switches will be the coin of the realm for proposed future quantum computers and quantum networks.
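The switch’s conditional logic can be sketched in a few lines of linear algebra. This is a minimal illustration, assuming an idealized operation and a basis ordering chosen here for convenience (spin ⊗ polarization); it is not the device’s actual cavity physics.

```python
import numpy as np

# Basis ordering (an illustrative choice): |spin, polarization>,
# with spin: 0 = up, 1 = down; polarization: 0 = H, 1 = V.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # flips H <-> V polarization

up = np.diag([1, 0])    # projector onto spin-up
down = np.diag([0, 1])  # projector onto spin-down

# Spin-up photons pass unchanged; spin-down photons have their polarization reversed.
switch = np.kron(up, I2) + np.kron(down, X)

def apply(spin_state, pol_state):
    """Return the joint state after the switch acts on |spin> ⊗ |pol>."""
    return switch @ np.kron(spin_state, pol_state)

H = np.array([1, 0]); V = np.array([0, 1])
spin_up = np.array([1, 0]); spin_down = np.array([0, 1])

print(apply(spin_up, H))    # stays |up, H>
print(apply(spin_down, H))  # becomes |down, V>
```

In quantum-information language this conditional flip is a controlled-NOT gate, with the electron spin as the control and the photon polarization as the target.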

QUANTUM NETWORKING

Those networks could take advantage of the strengths that photons and electrons offer as qubits. In the future, for instance, electrons could be used to store and process quantum information at one location, while photons could shuttle that information between different parts of the network.

Such links could enable the distribution of entanglement, the enigmatic connection that groups of distantly separated qubits can share. And that entanglement could enable other tasks, such as performing distributed quantum computations, teleporting qubits over great distances or establishing secret keys that two parties could use to communicate securely.

Before that, though, Sun says that the light-matter interface he and his colleagues have built must be shown to create entanglement between the electron and photon qubits, a process that will require more accurate measurements to definitively demonstrate.

“The ultimate goal will be integrating photon creation and routing onto the chip itself,” Sun says. “In that manner we might be able to create more complicated quantum devices and quantum circuits.”

In addition to Waks and Sun, the paper has two additional co-authors: Glenn Solomon, a JQI fellow, and Hyochul Kim, a post-doctoral researcher in the Department of Electrical and Computer Engineering at the University of Maryland.

"Creating a quantum switch" credit: S. Kelley/JQI

Read More

Harnessing quantum systems for information processing will require controlling large numbers of basic building blocks called qubits. The qubits must be isolated and, in most cases, cooled such that, among other things, errors in qubit operations do not overwhelm the system, rendering it useless. Led by JQI Fellow Christopher Monroe, physicists have recently demonstrated important steps towards implementing a proposed type of gate that does not rely on super-cooling the ion qubits. This work, published as an Editor’s Suggestion in Physical Review Letters, implements ultrafast sensing and control of an ion's motion, which is required to realize these hot gates. Notably, this experiment demonstrates thermometry over an unprecedented range of temperatures, from zero-point to room temperature.

Graduate student and first author Kale Johnson explains how this research could be applied, “Atomic clock states found in ions make the most pristine quantum bits, but the speed at which we have been able to access them in a useful way for quantum information processing is slower than it could be. We are changing that by making each operation on the qubit faster while eliminating the need to cool the ion to the ground state after each operation.”

In the experiment the team begins with a single trapped atomic ion. The ion can be thought of as a bar magnet that can be oriented with its north pole ‘up’ or ‘down’ or any combination between the two poles (pointing horizontally along an imaginary equator is up + down).  Physicists can use lasers and microwave radiation to control this orientation. The individual laser pulses are a mere ten picoseconds in length—a time scale that is a tiny fraction of how long it takes for the ion to undergo appreciable motion in the trap. Operating in this regime is precisely what allows researchers to have superior sensing and ultimately control over the ion motion. The speed enables the team to extract the motional behavior of an ion using a technique that works independently of the energy in the motion itself.  In other words, the measurement is equally sensitive to a fast or very slow atom.

The researchers use a method that is based on Ramsey interferometry, named for the Nobel Laureate Norman Ramsey who pioneered it back in 1949. Known then as his “method of separated oscillatory fields,” it is used throughout atomic physics and quantum information science.   

Laser pulses are carefully divided and then reunited to achieve control over the ion’s spin and motion. The researchers call these laser-ion interactions ‘spin-dependent kicks’ (SDK) because each series of specially tailored laser pulses flips the spin, while simultaneously giving the ion a push (this is depicted in the illustration below). With each fast kick, the atom’s quantum wave packet is split into two parts in under three nanoseconds. Those halves are then re-combined at different points in space and time, and the signal from the unique overlap pattern reveals how the population is distributed between the two spin states. In this experimental sequence, that distribution depends on parameters such as the number of SDKs, the time between kicks, and the initial position and speed of the ion. The team repeats this experiment to extract the average motion of the ion, or its effective temperature.
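The interference at the heart of a Ramsey sequence can be sketched with a toy two-level simulation: two π/2 pulses bracketing a free-evolution phase φ. The pulse and phase parameters below are illustrative stand-ins, not the experiment’s actual laser settings.

```python
import numpy as np

def ramsey_population(phi):
    """Population in |1> after: pi/2 pulse, free evolution phase phi, pi/2 pulse."""
    def r_half():
        # Rotation by pi/2 about the x axis of the Bloch sphere
        c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
        return np.array([[c, -1j * s], [-1j * s, c]])
    # Free evolution imprints a relative phase phi between |0> and |1>
    phase = np.diag([1, np.exp(1j * phi)])
    psi = r_half() @ phase @ r_half() @ np.array([1, 0])
    return abs(psi[1])**2

# The fringe: population oscillates as (1 + cos(phi)) / 2
for phi in (0.0, np.pi / 2, np.pi):
    print(phi, ramsey_population(phi))
```

Sweeping φ traces out the fringe P = (1 + cos φ)/2; in the real experiment, the analogous interference between the recombined wave-packet halves is what encodes the ion’s motion.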


In order to realize proposed two-ion quantum gates that do not require cooling the system into its quantum mechanical ground state, multiple spin dependent kicks must be employed with high accuracy such that errors remain manageable. Here the team was able to clearly demonstrate the necessary high-quality spin dependent kicks. More broadly, this protocol shows that adding ultrafast pulsed laser technology to the ion-trapping toolbox gives physicists ultimate quantum control over what can be a limiting, noise-inducing parameter: the motion.

Read More

The concept of temperature is critical in describing many physical phenomena, such as the transition from one phase of matter to another.  Turn the temperature knob and interesting things can happen.  But other knobs might be just as important for studying some phenomena.  One such knob is chemical potential, a thermodynamic parameter first introduced in the nineteenth century by scientists for keeping track of potential energy absorbed or emitted by a system during chemical reactions.

In these reactions different atomic species rearranged themselves into new configurations while conserving the overall inventory of atoms. That is, atoms could change their partners, but the total number and identity of the atoms remained invariant.

Chemical potential is just one of several potentials whose imbalances drive flows. An imbalance in temperature results in a flow of energy. An imbalance in electrical potential results in a flow of charged particles. And an imbalance in chemical potential results in a flow of particles; specifically, an imbalance in chemical potential for light would result in a flow of photons.

Can the concept of chemical potential apply to light? At first the answer would seem to be no, since particles of light, photons, are regularly absorbed when they interact with regular matter. The number of photons present is not preserved. But recent experiments have shown that under special conditions photon number can be conserved, clearing the way for the use of chemical potential for light.

Now three JQI scientists offer a more generalized theoretical description of chemical potential (usually denoted by the Greek letter mu) for light and show how mu can be controlled and applied in a number of physics research areas.

A prominent experimental demonstration of chemical potential for light took place at the University of Bonn (*) in 2010.  It consisted of quanta of light (photons) bouncing back and forth inside a reflective cavity filled with dye molecules.  The dye molecules, acting as a tunable energy bath (a parametric bath), would regularly absorb photons (seemingly ruling out the idea of photon number being conserved) but would re-emit the light.  Gradually the light warmed the molecules and the molecules cooled the light until they were all at thermal equilibrium.  This was the first time photons had been successfully “thermalized” in this way.  Furthermore, at still colder temperatures the photons collapsed into a single quantum state; this was the first photonic Bose-Einstein condensate (BEC).
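Once photon number is conserved and the light thermalizes, the photons fill cavity modes according to the Bose-Einstein distribution with a chemical potential μ. A small sketch follows, using hypothetical mode energies rather than numbers from the Bonn experiment.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19    # one electron-volt in joules

def bose_einstein(energy, mu, T):
    """Mean photon number in a mode: 1 / (exp((E - mu)/kT) - 1), valid for mu < E."""
    return 1.0 / (math.exp((energy - mu) / (k_B * T)) - 1.0)

# A hypothetical cavity mode 0.1 eV above some reference energy, at 300 K.
E = 0.1 * eV
n_low = bose_einstein(E, 0.0, 300.0)        # mu = 0: sparse occupation
n_high = bose_einstein(E, 0.09 * eV, 300.0) # mu close to E: occupation grows
print(n_low, n_high)
```

As μ is pushed up toward the lowest mode energy, the occupation of that mode diverges; this runaway population of a single mode is the onset of photon Bose-Einstein condensation.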

In a paper published in the journal Physical Review B the JQI theorists describe a generic approach to chemical potential for light. They illustrate their ideas by showing how a chemical-potential protocol can be implemented in a microcircuit array. Instead of crisscrossing a single cavity, the photons are set loose in an array of microwave transmission lines. And instead of interacting with a bath of dye molecules, the photons here interact with a network of tuned circuits.

“One likely benefit in using chemical potential as a controllable parameter will be carrying out quantum simulations of actual condensed-matter systems,” said Jacob Taylor, one of the JQI theorists taking part in the new study.  In what some call a prototype for future full-scale quantum computing, quantum simulations use tuned interactions in a small microcircuit setup to arrive at a numerical solution to calculations that (in their complexity) would defeat a normal digital computer.

In the scheme described above, for instance, the photons, carefully put in a superposition of spin states, could serve as qubits. The qubits can be programmed to perform special simulations. The circuits, including the transmission lines, act as the coupling mechanism whereby photons can be up- or down-converted to higher or lower energy, respectively, by obtaining energy from or giving energy to excitations of the circuits.

(*) J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, Nature 468, 545 (2010)

Read More

The quantum Hall effect, discovered in the early 1980s, is a phenomenon that was observed in a two-dimensional gas of electrons existing at the interface between two semiconductor layers. Subject to the severe criteria of very high material purity and very low temperatures, the electrons, when under the influence of a large magnetic field, will organize themselves into an ensemble state featuring remarkable properties.

Many physicists believe that quantum Hall physics is not unique to electrons, and thus it should be possible to observe this behavior elsewhere, such as in a collection of trapped ultracold atoms. Experiments at JQI and elsewhere are being planned to do just that. On the theoretical front, scientists* at JQI and the University of Maryland have also made progress, which they describe in the journal Physical Review Letters. The result proposes using quantum matter made from a neutral atomic gas instead of electrons. In this new design, elusive exotic states that are predicted to occur in certain quantum Hall systems should emerge. These states, known as parafermionic zero modes, may be useful in building robust quantum gates.

For electrons, the hallmark of quantum Hall behavior includes a free circulation of electrical charge around the edge but not in the interior of the sample. This research specifically relates to the fractional quantum Hall (FQH) effect, which is a many-body phenomenon. In this case, one should not consider just the movement of individual electrons, but rather imagine the collective action of all the electrons as particle-like “quasiparticles.” These entities appear to possess fractional charge, such as 1/3 of the electron charge.

How does this relate to zero modes? Zero modes, as an attribute of quantum Hall systems, come into their own in the vicinity of specially tailored defects. Defects are where quasiparticles can be trapped. In previous works, physicists proposed that a superconducting nanowire serve as a defect that snags quasiparticles at either end of the wire. Perhaps the best-known example of a composite particle associated with zero-mode defects is the famed Majorana fermion.

Author David Clarke, a Postdoctoral Research Scholar from the UMD Condensed Matter Theory Center, explains, “Zero modes aren’t particles in the usual sense. They’re not even quasiparticles, but rather a place that a quasiparticle can go and not cost any energy.”

Aside from interest in them for studying fundamental physics, these zero modes might play an important role in quantum computing. This is related to what’s known as topology, which is a sort of global property that can allow for collective phenomena, such as the current of charge around the edge of the sample, to be impervious to the tiny microscopic details of a system. Here the topology endows the FQH system with multiple quantum states with exactly the same energy. The exactness and imperturbability of the energy amid imperfections in the environment makes the FQH system potentially useful for hosting quantum bits. The present report proposes a practical way to harness this predicted topological feature of the FQH system through the appearance of what are known as parafermionic zero-modes.

These strange and wonderful states, which in some ways go beyond Majoranas, first appeared on the scene only a few years ago, and have attracted significant attention. Now dubbed ‘parafermions,’ they were first proposed by Maissam Barkeshli and colleagues at Stanford University. Barkeshli is currently a postdoctoral researcher at Microsoft Station Q and will be coming soon to JQI as a Fellow. Author David Clarke was one of the early pioneers in studying how these states could emerge in a superconducting environment. Because both parafermions and Majoranas are expected to have unconventional behaviors when compared to the typical particles used as qubits, unambiguously observing and controlling them is an important research topic that spans different physics disciplines. From an application standpoint, parafermions are predicted to offer more versatility than Majorana modes when constructing quantum computing resources.

What this team does, for the first time, is to describe in detail how a parafermionic mode could be produced in a gas of cold bosonic atoms. Here the parafermion would appear at both ends of a one-dimensional trench of Bose-Einstein Condensate (BEC) atoms sitting amid a larger two-dimensional formation of cold atoms displaying FQH properties. According to first author and Postdoctoral Researcher Mohammad Maghrebi, “The BEC trench is the defect that does for atoms what the superconducting nanowire did for electrons.”

Some things are different for electrons and neutral atoms. For one thing, electrons undergo the FQH effect only if exposed to high magnetic fields. Neutral atoms have no charge and thus do not react strongly to magnetic fields; researchers must mimic this behavior by exposing the atoms to carefully designed laser pulses, which create a synthetic field environment. JQI Fellow Ian Spielman has led this area of experimental research and is currently performing atom-based studies of quantum Hall physics.

Another author of the PRL piece, JQI Fellow Alexey Gorshkov, explains how the new research paper came about: “Motivated by recent advances in Spielman's lab and (more recently) in other cold atom labs in generating synthetic fields for ultracold neutral atoms, we show how to implement in a cold-atom system the same exotic parafermionic zero modes proposed originally in the context of condensed-matter systems.”

“We argue that these zero modes, while arguably quite difficult to observe in the condensed matter context, can be observed quite naturally in atomic experiments,” says Maghrebi. “The JQI atmosphere of close collaboration and cross-fertilization between atomic physics and condensed matter physics on the one hand and between theory and experiment on the other hand was at the heart of this work.”

“Ultracold atoms play by a slightly different set of rules from the solid state,” says JQI Fellow Jay Sau. “Things which come easy in one are hard in the other. Figuring out the twists in getting a solid-state idea to work for cold atoms is always fun, and the JQI is one of the best places to do it.”

(*) The PRL paper has five authors, and their affiliations illustrate the complexity of modern physics work. Mohammad Maghrebi, Sriram Ganeshan, Alexey Gorshkov, and Jay Sau are associated with the Joint Quantum Institute, operated by the University of Maryland and the National Institute of Standards and Technology. Three of the authors – Ganeshan, Clarke, and Sau – are also associated with the Condensed Matter Theory Center at the University of Maryland physics department. Finally, Maghrebi and Gorshkov are associated with the Joint Center for Quantum Information and Computer Science (usually abbreviated QuICS), which is, like the JQI, a University of Maryland-NIST joint venture.

Read More

Symmetry permeates nature, from the radial symmetry of flowers to the left-right symmetry of the human body. As such, it provides a natural way of classifying objects by grouping those that share the same symmetry. This is particularly useful for describing transitions between phases of matter. For example, liquid and gas phases have translational symmetry, meaning that on average the arrangement of molecules looks the same from any location. In a solid, by contrast, the density of atoms varies periodically from point to point — thus translational symmetry is broken.

In quantum mechanics, symmetry describes more than just the patterns that matter takes — it is used to classify the nature of quantum states. These states can be entangled, exhibiting peculiar connections that cannot be explained without the use of quantum physics. For some entangled states, the symmetry of these connections can offer a kind of protection against disruptions.

Here, the word protection indicates that the system is robust against changes that do not break the symmetry. Like an island in the middle of an ocean, a symmetry-protected phase has no direct road leading to it: the only way to access the state is to change the symmetry itself. Physicists are interested in exploring these classes of protected states because building a useful quantum device requires its building blocks to be robust against outside disturbances that may interfere with device operations.

Recently, JQI researchers under the direction of Christopher Monroe have used trapped atomic ions to construct a system that could potentially support a type of symmetry-protected quantum state. For this research they used a three-state system, called a qutrit, and demonstrated a proof-of-principle experiment for manipulating and controlling multiple qutrits. The result appeared in Physical Review X, an online open-access journal, and is the first demonstration of using multiple interacting qutrits for doing quantum information operations and quantum simulation of the behavior of real materials.

To date, almost all of the work in quantum information science has focused on manipulating "qubits," or so-called spin-1/2 particles that consist of just two energy levels. In quantum mechanics, multilevel systems are analogous to the concept of "spin," where the number of energy levels corresponds to the number of possible spin states. This group has used ion spins to explore a variety of topics, such as the physics of quantum magnetism and the transmission speed of quantum information across a spin-crystal. Increasingly, there is interest in moving beyond spin-1/2 to the control and simulation of higher-order spin systems, where the laws of symmetry can be radically altered. “One complication of spin-1 materials is that the added complexity of the levels often makes these systems much more difficult to model or understand. Thus, performing experiments in these higher [spin] dimensional systems may yield insight into difficult-to-calculate problems, and also give theorists some guidance on modeling such systems,” explains Jake Smith, a graduate student in Monroe’s lab and author on the paper.

To engineer a spin-1 system, the researchers electromagnetically trapped a linear crystal of atomic ytterbium (Yb) ions, each atom a few micrometers from the next. Using a magnetic field, internal states of each ion are tailored to represent a qutrit, with a (+) state, (-) state and (0) state denoting the three available energy levels (see figure). With two ions, the team demonstrated the basic techniques necessary for quantum simulation: preparing initial states (placing the ions in certain internal states), observing the state of the system after some evolution, and verifying that the ions are entangled, here with 86% fidelity (fidelity is a measure of how much the experimentally realized state matches the theoretical target state).
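The 86% entanglement fidelity quoted above is, in the usual definition, the overlap F = ⟨ψ|ρ|ψ⟩ between the measured density matrix ρ and the target state |ψ⟩. Below is a minimal sketch for two qutrits, using a textbook depolarizing noise model as a stand-in for the experiment’s actual noise.

```python
import numpy as np

d = 3        # qutrit dimension
dim = d * d  # two-qutrit Hilbert space

# Target: maximally entangled two-qutrit state (|00> + |11> + |22>) / sqrt(3)
psi = np.zeros(dim)
for j in range(d):
    psi[j * d + j] = 1 / np.sqrt(d)

def fidelity(rho, target):
    """Overlap F = <target| rho |target> of a density matrix with a pure state."""
    return float(np.real(target.conj() @ rho @ target))

# A depolarized version of the target: the state with probability p,
# the maximally mixed state otherwise (hypothetical noise level).
p = 0.85
rho = p * np.outer(psi, psi) + (1 - p) * np.eye(dim) / dim

print(round(fidelity(rho, psi), 3))
```

A perfect preparation would give F = 1; the mixed-state admixture pulls the fidelity down toward 1/9, the value for a completely random two-qutrit state.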

To prepare the system in certain initial states, the team first lowers the system into its ground state, the lowest energy state in the presence of a large effective magnetic field. The different available spin chain configurations at a particular magnetic field value correspond to different energies. They observed how the spin chain reacted or evolved as the amplitude of the magnetic field was lowered. Changing the fields that the ions spins are exposed to causes the spins to readjust in order to remain in the lowest energy configuration.  

By adjusting the parameters (here laser amplitudes and frequencies) the team can open up and follow pathways between different energy levels. For some target states, however, a simple trajectory that doesn’t break symmetries or pass through a phase transition does not exist. For instance, when the team added a third ion, they could not smoothly guide the system into its ground state, indicating the possible existence of a state with some additional symmetry protections.

“This result is a step towards investigating quantum phases that have special properties based on the symmetries of the system,” says Smith. Employing these sorts of topological phases may be a way to improve coherence times when doing quantum computation, even in the face of environmental disruptions. Coherence time is how long a state retains its quantum nature. Quantum systems are very sensitive to outside disturbances, and doing useful computation requires maintaining this quantum nature for longer than the time it takes to perform a particular calculation.

Monroe explains, "These symmetry-protected states may be the only way to build a large-scale stable quantum computer in many physical systems, especially in the solid-state.  With the exquisite control afforded atomic systems such as trapped ions demonstrated here, we hope to study and control how these very subtle symmetry effects might be used for quantum computing, and help guide their implementation in any platform."

To further investigate this protected phase, the researchers next intend to address the problem of creating antisymmetric ground states. Smith continues, “The next steps are to engineer more complicated interactions between the effective spins and implement a way to break the symmetries of the interactions.”

Read More

Rydberg atoms, atoms whose outermost electrons are highly excited but not ionized, might be just the thing for processing quantum information. These outsized atoms can be sustained for a long time in a quantum superposition state – a good thing for creating qubits – and they can interact strongly with other such atoms, making them useful for devising the kind of logic gates needed to process information. Scientists at JQI and at other labs are pursuing this promising research area.

One problem with Rydberg atoms is that they are often difficult to handle. One approach is to search for special wavelengths – “magic wavelengths” – at which atoms can be trapped and excited into Rydberg states without being disturbed. A new JQI experiment bears out high-precision calculations predicting the existence of specific magic wavelengths.

RYDBERG ATOMS

Named for Swedish physicist Johannes Rydberg, these ballooned-up atoms are made by exciting the outermost electron in certain elements. Alkali atoms are handy for this purpose since they are hydrogen-like. That is, all the inner electrons can be lumped together and regarded, along with the atom’s nucleus, as a unified core, with the lone remaining electron lying outside; it’s as if the atom were a heavy version of hydrogen.

The main energy levels of atoms are labeled by their principal quantum number, denoted by the letter n. For rubidium atoms, the species used in this experiment, the outermost electron starts in an n=5 state. Laser light was then used to promote the electron into an n=18 state. Unlike atoms in their ground state, atoms in the n=18 excited state see each other out to distances as large as 700 nm. Rydberg atoms with higher values of n can interact at even larger separations, up to many microns. For comparison, the size of an un-excited rubidium atom is less than 1 nm.
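The growth in atomic size with n follows the hydrogen-like scaling r ≈ n²a₀. A quick order-of-magnitude sketch follows; it ignores rubidium’s quantum defect, which substantially shrinks the effective n of low-lying alkali states, so the numbers are rough illustrations only.

```python
a0_nm = 0.0529177  # Bohr radius in nanometers

def orbit_radius_nm(n):
    """Characteristic Rydberg orbital radius, r ~ n^2 * a0, in nanometers."""
    return n**2 * a0_nm

r5 = orbit_radius_nm(5)
r18 = orbit_radius_nm(18)
print(f"n=5:  ~{r5:.2f} nm")
print(f"n=18: ~{r18:.1f} nm")
# The orbit grows by a factor of (18/5)^2 ~ 13; the van der Waals interaction
# strength grows far faster still (roughly as n^11), which is why n=18 atoms
# can notice one another at separations of hundreds of nanometers.
```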

Promoting the atom directly to the 18s state would require an ultraviolet laser, so the researchers found it more practical to boost the outer electron to its higher perch in two steps, using two more convenient lasers whose photon energies add up to the total energy difference.
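Because photon energies (E = hc/λ) add, the two-step excitation satisfies 1/λ_eff = 1/λ₁ + 1/λ₂. A quick sketch with hypothetical wavelengths, not necessarily the ones used in the experiment:

```python
def combined_wavelength_nm(lam1_nm, lam2_nm):
    """Equivalent single-photon wavelength for a two-photon transition:
    1/lam_eff = 1/lam1 + 1/lam2 (energies add, so inverse wavelengths add)."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

# Two convenient visible/near-IR lasers together bridge a UV-scale energy gap:
print(combined_wavelength_nm(780.0, 480.0))  # ~297 nm, in the ultraviolet
```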

DIPOLE TRAP AND STARK EFFECT

Rb atoms are in the trap in the first place because they have been gathered into a cloud, cooled to temperatures only a few millionths of a degree above absolute zero, and then maintained in position by a special trapping laser beam system.

The trapping process exploits the Stark effect, a phenomenon in which the strong electric field of the confining laser beam alters the energy levels of the atom.  By using a sort of hourglass-shaped beam, the light forms a potential-energy well in which atoms will be trapped.  The atoms will congregate in a tidy bundle in the middle of this optical dipole trap.  The trouble is that the Stark effect, and along with it the trapping influence of the laser beams, depends on the value of n.  In other words, a laser beam good for trapping atoms at one n might not work for other values of n.

Fortunately, at just the right wavelengths – the “magic wavelengths” – the trapping process will confine atoms in both the low-lying n=5 state and in the excited n=18 state. The theoretical calculations predicting where these wavelengths would be (with a particularly useful one around a value of 1064 nm) and the experimental findings bearing out the predictions were published recently in the journal Physical Review A.
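A magic wavelength is one where the dynamic polarizabilities of the two states, and hence their Stark shifts and trap depths, are equal. The toy model below finds such a crossing by bisection; the resonance positions and strengths are invented for illustration, whereas real predictions sum over many transitions with precisely computed matrix elements.

```python
def toy_alpha(omega, resonances):
    """Toy dynamic polarizability: a sum of terms f / (w0^2 - omega^2)
    over (w0, f) resonance pairs (all in arbitrary units)."""
    return sum(f / (w0**2 - omega**2) for w0, f in resonances)

# Hypothetical resonance structure for the two states (arbitrary units):
ground = [(2.0, 1.0)]                   # one strong resonance
excited = [(10.0, 20.0), (3.0, 0.5)]    # far-off background plus a nearer line

def crossing(lo, hi, tol=1e-9):
    """Bisect for the frequency where the two polarizabilities are equal."""
    diff = lambda w: toy_alpha(w, ground) - toy_alpha(w, excited)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w_magic = crossing(0.0, 1.0)
print(w_magic)  # at this frequency both states feel the same trapping potential
```

At the crossing, a trap laser shifts both levels identically, so atoms stay confined whether they are in the ground or the Rydberg state.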

The first author on the paper is NRC Postdoctoral Fellow Elizabeth Goldschmidt. “We made a compromise, using atoms in a relatively low-n Rydberg state, the 18s state. We work in this regime because we are interested in interaction lengths commensurate with our optical lattice and because the particular magic wavelength is at a convenient wavelength for our lasers, namely 1064 nm.” She said that a next round of experiments, in the lab run by Trey Porto and Steve Rolston, will aim for a higher Rydberg level, with n greater than 50.

JQI Adjunct Fellow and University of Delaware professor Marianna Safronova helped to produce the magic wavelength predictions. “To make a prediction,” said Safronova, “you need to know the polarizability – the amount by which the Stark effect will shift the energy level – for the highly excited n=18 level. The job of finding magic wavelengths beyond n=18 with our high-precision first-principles approach would be pretty hard. Agreement of theoretical prediction with experimental measurement gives a great benchmark for high-precision theory.”

“The most important feature of our paper,” said JQI Fellow and NIST physicist Trey Porto, “is that the theorists have pushed the theoretical limits of calculations of magic wavelengths for highly excited Rydberg atoms, and then verified these calculations experimentally.”

Read More

If you’re designing a new computer, you want it to solve problems as fast as possible. Just how fast is possible is an open question when it comes to quantum computers, but JQI physicists have narrowed the theoretical limits for where that “speed limit” is. The work implies that quantum processors will work more slowly than some research has suggested. 

The work offers a better description of how quickly information can travel within a system built of quantum particles such as a group of individual atoms. Engineers will need to know this to build quantum computers, which will have vastly different designs and be able to solve certain problems much more easily than the computers of today. While the new finding does not give an exact speed for how fast information will be able to travel in these as-yet-unbuilt computers—a longstanding question—it does place a far tighter constraint on where this speed limit could be.

Quantum computers will store data in a particle’s quantum states—one of which is its spin, the property that confers magnetism. A quantum processor could suspend many particles in space in close proximity, and computing would involve moving data from particle to particle. Just as one magnet affects another, the spin of one particle influences its neighbor’s, making quantum data transfer possible, but a big question is just how fast this influence can work.

The team’s findings advance a line of research that stretches back to the 1970s, when scientists discovered a limit on how quickly information could travel if a suspended particle only could communicate directly with its next-door neighbors. Since then, technology advanced to the point where scientists could investigate whether a particle might directly influence others that are more distant, a potential advantage. By 2005, theoretical studies incorporating this idea had increased the speed limit dramatically.

“Those results implied a quantum computer might be able to operate really fast, much faster than anyone had thought possible,” says postdoctoral researcher and lead author Michael Foss-Feig. “But over the next decade, no one saw any evidence that the information could actually travel that quickly.”

Physicists exploring this aspect of the quantum world often line up several particles and watch how fast changing the spin of the first particle affects the one farthest down the line—a bit like standing up a row of dominoes and knocking the first one down to see how fast the chain reaction takes. The team looked at years of others’ research and, because the dominoes never seemed to fall as fast as the 2005 prediction suggested, they developed a new mathematical proof that reveals a much tighter limit on how fast quantum information can propagate.

“The tighter a constraint we have, the better, because it means we’ll have more realistic expectations of what quantum computers can do,” says Foss-Feig.

The limit, their proof indicates, is far closer to the speed limits suggested by the 1970s result. The proof addresses the rate at which entanglement propagates across quantum systems. Entanglement—the weird linkage of quantum information between two distant particles—is important, because the more quickly particles grow entangled with one another, the faster they can share data. The 2005 results indicated that even if the interaction strength decays quickly with distance, as a system grows, the time needed for entanglement to propagate through it grows only logarithmically with its size, implying that a system could get entangled very quickly. The team’s work, however, shows that propagation time grows as a power of its size, meaning that while quantum computers may be able to solve problems that ordinary computers find devilishly complex, their processors will not be speed demons.
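As a rough back-of-the-envelope comparison, the three scalings discussed above can be sketched in a few lines of Python. The exponent below is purely illustrative, not a value taken from the proof:

```python
import math

def t_log(n):
    """2005-style bound: propagation time grows only logarithmically with size."""
    return math.log(n)

def t_power(n, alpha=0.5):
    """Tightened bound: propagation time grows as a power of size (alpha is illustrative)."""
    return n ** alpha

def t_linear(n):
    """Nearest-neighbor (1970s) picture: propagation time grows linearly with size."""
    return n

# For large systems, the power-law bound sits between the two older answers
for n in (10, 1000, 100000):
    print(n, round(t_log(n), 1), round(t_power(n), 1), t_linear(n))
```

Even for modest system sizes the gap between a logarithm and a power law is enormous, which is why the tighter bound matters for realistic expectations of quantum processors.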

“On the other hand, the findings tell us something important about how entanglement works,” says Foss-Feig. “They could help us understand how to model quantum systems more efficiently.”

This was originally written by C. Boutin for NIST TechBeat, with modifications for JQI made by E. Edwards.

Read More

In quantum mechanics, interactions between particles can give rise to entanglement, which is a strange type of connection that could never be described by a non-quantum, classical theory. These connections, called quantum correlations, are present in entangled systems even if the objects are not physically linked (with wires, for example). Entanglement is at the heart of what distinguishes purely quantum systems from classical ones; it is why they are potentially useful, but it sometimes makes them very difficult to understand.

Physicists are pretty adept at controlling quantum systems and even making certain entangled states. Now JQI researchers*, led by theorist Alexey Gorshkov and experimentalist Christopher Monroe, are putting these skills to work to explore the dynamics of correlated quantum systems. What does it mean for objects to interact locally versus globally? How do local and global interactions translate into larger, increasingly connected networks? How fast can certain entanglement patterns form? These are the kinds of questions that the Monroe and Gorshkov teams are asking. Their recent results investigating how information flows through a quantum many-body system are published this week in the journal Nature (10.1038/nature13450), and in a second paper to appear in Physical Review Letters.

Researchers can engineer a rich selection of interactions in ultracold atom experiments, allowing them to explore the behavior of complex and massively intertwined quantum systems. In the experimental work from Monroe’s group, physicists examined how quickly quantum connections formed in a crystal of eleven ytterbium ions confined in an electromagnetic trap. The researchers used laser beams to implement interactions between the ions. Under these conditions, the system is described by certain types of ‘spin’ models, which are a vital mathematical representation of numerous physical phenomena including magnetism. Here, each atomic ion has isolated internal energy levels that represent the various states of spin.

In the presence of carefully chosen laser beams the ion spins can influence their neighbors, both near and far. In fact, tuning the strength and form of this spin-spin interaction is a key feature of the design. In Monroe's lab, physicists can study different types of correlated states within a single pristine quantum environment (Click here to learn about how this is possible with a crystal of atomic ions).

To see dynamics, the researchers initially prepared the ion spin system in an uncorrelated state. Next, they abruptly turned on a global spin-spin interaction. Such a sudden change effectively pushes the system off-balance, and the spins react, evolving under the new conditions. The team took snapshots of the ion spins at different times and observed the speed at which quantum correlations grew.

The spin models themselves do not have an explicitly built-in limit on how fast such information can propagate. The ultimate limit, in both classical and quantum systems, is given by the speed of light. However, decades ago, physicists showed that a slower information speed limit emerges due to some types of spin-spin interactions, similar to sound propagation in mechanical systems. While the limits are better known in the case where spins predominantly influence their closest neighbors, calculating constraints on information propagation in the presence of more extended interactions remains challenging. Intuitively, the more an object interacts with other distant objects, the faster the correlations between distant regions of a network should form. Indeed, the experimental group observes that long-range interactions provide a comparative speed-up for sending information across the ion-spin crystal. In the paper appearing in Physical Review Letters, Gorshkov’s team improves existing theory to much more accurately predict the speed limits for correlation formation, in the presence of interactions ranging from nearest-neighbor to long-range.
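A toy model (not the teams' actual calculation) captures the intuition behind that speed-up: if a hop of distance d takes a time that grows as d raised to some power, then long hops become a shortcut, and a signal arrives sooner than it would hopping neighbor to neighbor:

```python
def arrival_time(n, alpha):
    """Toy cost model: a single hop of distance d costs d**alpha time units.
    Returns the earliest time a signal starting at site 0 can reach site n,
    found by dynamic programming over all possible hop sequences."""
    t = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        t[j] = min(t[i] + (j - i) ** alpha for i in range(j))
    return t[n]

# Neighbor-to-neighbor-like cost (alpha = 1): time grows with distance.
# Long-range-friendly cost (alpha = 0.5): one direct hop wins.
print(arrival_time(100, 1.0), arrival_time(100, 0.5))
```

With alpha = 1 the signal needs 100 time units to cross 100 sites, while with alpha = 0.5 a single long hop gets it there in 10 — a cartoon of how long-range interactions accelerate correlation spreading across the ion crystal.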

Verifying and forming a complete understanding of quantum information propagation is certainly not the end of the story; this also has many profound implications for our understanding of quantum systems more generally. For example, the growth of entanglement, which is a form of information that must obey the bounds described above, is intimately related to the difficulty of modeling quantum systems on a computer. Dr. Michael Foss-Feig explains, “From a theorist’s perspective, the experiments are cool because if you want to do something with a quantum simulator that actually pushes beyond what calculations can tell you, doing dynamics with long-range interacting systems is expected to be a pretty good way to do that. In this case, entanglement can grow to a point that our methods for calculating things about a many-body system break down.”

Theorist Dr. Zhexuan Gong states that in the context of both works, “We are trying to put bounds on how fast correlation and entanglement can form in a generic many-body system. These bounds are very useful because with long-range interactions, our mathematical tools and state-of-the-art computers can hardly succeed at predicting the properties of the system. We would then need to either use these theoretical bounds or a laboratory quantum simulator to tell us what interesting properties a large and complicated network of spins possesses. These bounds will also serve as a guideline on what interaction pattern one should achieve experimentally to greatly speed up information propagation and entanglement generation, both key for building a fast quantum computer or a fast quantum network.”

From the experimental side, Dr. Phil Richerme gives his perspective, “We are trying to build the world’s best experimental platform for evolving the Schrodinger equation [math that describes how properties of a quantum system change in time]. We have this ability to set up the system in a known state and turn the crank and let it evolve and then make measurements at the end. For system sizes not much larger than what we have here, doing this becomes impossible for a conventional computer.” 

This news item was written by E. Edwards/JQI. 

Read More
  • How do you build a large-scale quantum computer?
  • Physicists propose a modular quantum computer architecture that offers scalability to large numbers of qubits
  • February 25, 2014

How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria [See Gallery Sidebar for information on these criteria]. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.

To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum computing device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control become much more difficult as more and more qubits are added into the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.

In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.

Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A (reference below), a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.

One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected. The switch system, which incorporates special micro-electromechanical mirrors (MEMS) to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.
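At the bookkeeping level, what such a reconfigurable switch does can be sketched in a few lines. The class and method names below are invented for illustration; the point is simply that arbitrary pairs of modules can be connected and released on demand:

```python
class OpticalSwitch:
    """Toy model of a reconfigurable photonic crossbar (hypothetical names).

    Any two ion-trap modules can be paired on demand; a pairing
    occupies both ports until it is released."""

    def __init__(self, n_modules):
        self.n = n_modules
        self.partner = {}  # module -> module it is currently linked to

    def connect(self, a, b):
        """Route light between modules a and b, enabling entanglement between them."""
        if a in self.partner or b in self.partner:
            raise ValueError("port already in use")
        self.partner[a] = b
        self.partner[b] = a

    def disconnect(self, a):
        """Release the link involving module a, freeing both ports."""
        b = self.partner.pop(a)
        self.partner.pop(b)

# Any pair can be joined, used, and released -- no fixed wiring between modules
sw = OpticalSwitch(4)
sw.connect(0, 3)
sw.disconnect(0)
```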

The defining feature of this new architecture is that it is modular, meaning that several identical modules composed of smaller registers are connected in a way that is inherently scalable. Modularity is a common property of complex systems, from social networks to biological function, and will likely be a necessary component of any future large-scale quantum computer. Monroe explains, "This is the only way to imagine scaling to larger quantum systems, by building them in smaller standard units and hooking them together. In this case, we know how to engineer every aspect of the architecture."

In conventional computers, modularity is routinely exploited to realize the massive interconnects required in semiconductor devices, which themselves have been successfully miniaturized and integrated with other electronics and photonics. The first programmable computers were the size of large rooms and used vacuum tubes, and now people have an incredible computer literally at their fingertips. Today’s processors have billions of semiconductor transistors fabricated on chips that are only about a centimeter across.

Similar fabrication techniques are now used to construct computer chip-style ion-traps, sometimes with integrated optics. The modular quantum architecture proposed in this research would not only allow many ion-trap chips to be tied together, but could also be exploited with alternative qubit modules that couple easily to photons such as qubits made from nitrogen vacancy centers in diamond or ultracold atomic gases (the neutral cousin of ion-traps). 

For more on ion traps and other research related to this work, see related links below and also visit iontrap.umd.edu

This article was written by E. Edwards/JQI

Read More

Encoding information using quantum bits---which can be maintained in a superposition of states---is at the heart of quantum computing. Superposition states offer the advantage of massive parallelism compared to conventional computing using digital bits---which can assume only one value at a time.
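The bookkeeping behind that claim is simple to state: describing a general superposition of n qubits classically takes 2^n complex amplitudes, a count that explodes quickly:

```python
def n_amplitudes(n_qubits):
    # A classical description of a general n-qubit superposition
    # requires 2**n complex amplitudes -- the source of the
    # "massive parallelism" mentioned above.
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(n, n_amplitudes(n))
```

Fifty qubits already correspond to about 10^15 amplitudes, beyond what an ordinary computer can comfortably track.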

Unfortunately, qubits are fragile; their delicate quantum states decay through interactions with their environment. A new JQI semiconductor-based qubit design ably addresses this issue of qubit robustness. Unlike previous semiconductor qubit setups, the new device has no need of external, changing magnetic fields. This permits greater control over the qubit.

The enhanced mastery results in new reliability for processing quantum information in semiconductor qubits.  Furthermore, the necessary control features---voltages applied to nanometer-scale electrodes---are directly compatible with existing transistor-based devices. The research was carried out by scientists at JQI, Harvard, the Niels Bohr Institute at the University of Copenhagen, and the University of California at Santa Barbara.  The results are published in two papers---one theoretical, one experimental---in the journal Physical Review Letters.

This new approach expands upon efforts to use individual electron spins which point “up” or “down” to form a natural qubit (Read about "Qubit Design" in Gallery, above). Instead of using a single electron the researchers used three electrons, confined in three adjacent quantum dots. Advances in fabricating arrays of quantum dots have opened new possibilities for realizing robust qubits. In the last few years, scientists have demonstrated control of multiple electrons and their associated spin in such triple quantum dots (Read about "Triple Quantum Dots" in Gallery, above). Quantum dots now have many of the essential characteristics needed for quantum computing, but decoherence and stability remain outstanding issues.

Why is using three electrons better than using one?  In the case of a single electron trapped in quantum dots, the qubit is made using the two possible orientations of an intrinsic angular momentum called spin. The spin is free to point “up (aligned)” or “down (anti-aligned)” with respect to a magnetic field. Here, flipping the state of the qubit required a large, rapidly oscillating magnetic field. Fast changes in magnetic fields, however, give rise to oscillating electric fields---this is exactly what happens in electric generators. This method presents a challenge, because such additional changing electric fields will fundamentally and dynamically modify the qubit system. It is therefore better to exert control over qubits solely with electric fields, eliminating the oscillating magnetic fields.

The researchers do exactly this. They find a way to use an electric field rather than a magnetic field to flip the qubit from 1 to 0 (or vice versa). While single electron spin transitions require a magnetic field (magnetic resonance---think MRI technology), if one adds electrons to the system and constructs the energy levels based on combinations of three-spin orientations, the magnetic interaction is no longer necessary.

JQI scientist Jacob Taylor explains further why using three electrons is better than one: "Amazingly, when the three electrons are in constant, weak contact, their interaction hides their individual features from the outside world. This is, in essence, why the three-spin design protects quantum information so well.  Furthermore, even though the information is protected, a well designed 'tickling' of the device at just the right frequency has a dramatic effect on the information, as demonstrated by the quantum gates shown in the experiment."

These joint few-electron states have been studied previously and are called singlet-triplet qubits (graphical explanation in Gallery). However, while the changing magnetic fields were removed in previous works, the proposed logic gates relied on varying the tunneling rate in multiple steps, a cumbersome process that is susceptible to additional noise on the control electrodes.

These two new papers in Physical Review Letters report improvements and simplifications in qubit construction and particularly in qubit manipulation. Instead of requiring sequential changes in the tunneling, qubit manipulation is carried out directly, allowing for easy control of the qubit around two axes. The levels are separated in energy by microwave frequencies (GHz) and can be addressed with oscillating electric fields. Additionally, by adjusting the electrode voltages, the qubit levels are made to be far in energy from other states.

Once the researchers establish the electrode voltages, they proceed to the business of performing logic operations using resonant microwaves. What does this mean? The resonant exchange qubit, as the authors refer to it, is embedded in a crystal awash with all the pesky environmental influences that can endanger the integrity of qubits---phonons, other electrons, and interactions with the atoms that compose the crystal. And yet, as a home for qubits, this environment is actually pretty stable. How stable? The experiments showed a coherence time of 20 microseconds. This is about 10,000 times longer than their reported gate time, which is good news for scaling up both the number of qubits and the number of gates.
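The gate time implied by those two reported numbers is a one-line back-of-envelope calculation:

```python
coherence_time = 20e-6   # seconds: the reported coherence time
ratio = 10_000           # reported ratio of coherence time to gate time

# Implied time for a single gate operation
gate_time = coherence_time / ratio
print(gate_time)  # about 2 nanoseconds per gate
```

In other words, roughly ten thousand gate operations fit inside one coherence window, which is the headroom needed for useful computation.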

The JQI theory paper that accompanies the paper devoted to experimental results proposes a way of elaborating on the basic resonant-exchange qubit design to accommodate two-qubit logic gates. Jake Taylor says, "Working from a few simple concepts, this new approach to using electron spins seems obvious in retrospect. I'm particularly gratified with the ease of implementation. With this new design, we are getting fast, good, cheap quantum bits in an approach that promises enormous economies of scale. But the two-qubit gate---crucial for the future success of this approach---is the big, open experimental challenge. I'm very curious to see which method ends up working the best in practice."

Charles Marcus, leader of the Niels Bohr Institute contingent for this experiment, agrees that the resonant exchange qubit represents a workable format for reliable qubits.  He gives Taylor credit in this way: "The idea for the resonant exchange qubit came from Jake, developed, I imagine, sitting with Jim Medford in the lab, next to the cryostat.  Jake is the only full-time theorist I know who can run a dilution refrigerator. I think his creativity is stimulated by the thumping sound of vacuum pumps. With the resonant exchange qubit, I see a big step forward for spin qubits. Challenges remain, but I can now see a clear forward path."

Written by Phillip F. Schewe and Emily Edwards

Read More

At low light, cats see better than humans. Electronic detectors do even better, but eventually they too become more prone to errors at very low light. The fundamental probabilistic nature of light makes it impossible to perfectly distinguish light from dark at very low intensity. However, by using quantum mechanics, one can find measurement schemes that, at least part of the time, are free of errors, even when the light intensity is very low.

The chief advantage of using such a dilute light beam is to reduce the power requirement. And this in turn means that encrypted data can be sent over longer distances, even up to distant satellites.  Low power and high fidelity in reading data is especially important for transmitting and processing quantum information for secure communications and quantum computation. To facilitate this quantum capability you want a detector that sees well in the (almost) dark. Furthermore, in some secure communications applications it is preferable to occasionally avoid making a decision at all rather than to make an error.

A scheme demonstrated at the Joint Quantum Institute does exactly this. The JQI work, carried out in the lab of Alan Migdall and published in the journal Nature Communications, shows how one category of photo-detection system can make highly accurate readings of incoming information at the single-photon level by allowing the detector in some instances not to give a conclusive answer. Sometimes discretion is the better part of valor.

Quantum Morse Code

Most digital data comes into your home or office in the form of pulsed light, usually encoding a stream of zeros and ones, the equivalent of the 19th century Morse code of dots and dashes.  A more sophisticated data encoding scheme is one that uses not two but four states---0, 1, 2, 3 instead of the customary 0 and 1.  This protocol can be conveniently implemented, for example, by having the four states correspond to four different phases of the light pulse.  However, the phase states representing values of 0, 1, 2, and 3 have some overlap, and this produces ambiguity when you try to determine which state you have received. This overlap, which is inherent in the states, means that your measurement system sometimes gives you the wrong answer.
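For weak laser pulses, the four phase states are overlapping coherent states, and the size of the overlap can be computed directly from a standard quantum-optics formula. A small sketch, assuming pulses with a mean photon number of 0.5 (an illustrative value, not one taken from the experiment):

```python
import math

def overlap(n_mean, phase):
    """|<a|b>|^2 for coherent states b = a * e^{i*phase} with |a|^2 = n_mean.
    Standard result: exp(-|a - b|^2) = exp(-2 * n_mean * (1 - cos(phase)))."""
    return math.exp(-2 * n_mean * (1 - math.cos(phase)))

# Overlap of the "0" state with the states encoding 1, 2, and 3
for k in range(1, 4):
    print(k, overlap(0.5, k * math.pi / 2))
```

Because none of these overlaps is zero, no measurement can always tell the four states apart with certainty — the ambiguity described above is built into the states themselves, not into any detector imperfection.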

Migdall and his associates recently achieved the lowest error rate yet for a photodetector deciphering such a four-fold phase encoding of information in a light pulse.  In fact, the error rate was some 4 times lower than what is possible with conventional measurement techniques.  Such low error rates were achieved by implementing measurements for minimum error discrimination, or MED for short.  This measurement is deterministic insofar as it always gives an answer, albeit with some chance of being wrong.

By contrast, one can instead perform measurements that are in principle error free by allowing some inconclusive results and giving up the deterministic character of the measurement outcomes. This probabilistic discrimination scheme, based on quantum mechanics, is called unambiguous state discrimination, or USD, and is beyond the capabilities of conventional measurement schemes.

In their latest result (see paper linked below), JQI scientists implement a USD of such four-fold phase-encoded states by performing measurements able to eliminate all but one possible value for the input state---whether 0, 1, 2, or 3. This has the effect of determining the answer perfectly or not at all.   Alan Migdall compares the earlier minimum error discrimination approach with the unambiguous state discrimination approach: “The former is a technique that always gets an answer albeit with some probability of being mistaken, while the latter is designed to get answers that in principle are never wrong, but at the expense of sometimes getting an answer that is the equivalent of ‘don't know.’ It’s as your mother said, ‘if you can’t say something good it is better to say nothing at all.’”

With USD you make a series of measurements that rule out each state in turn. Then by process of elimination you figure out which state it must be. However, sometimes you obtain an inconclusive result, for example when your measurements eliminate less than three of the four possibilities for the input state.
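This eliminate-or-abstain logic can be caricatured in a few lines of Python. The elimination probability below is invented purely for illustration; the point is that the routine is never wrong, only sometimes silent:

```python
import random

def usd_measure(true_state, n_states=4, p_eliminate=0.9, rng=random):
    """Toy unambiguous state discrimination.

    Each round tries to rule out one wrong candidate, succeeding with
    probability p_eliminate (a made-up number). Returns the state only
    if every other candidate was eliminated; otherwise returns None
    (an inconclusive "don't know"). It never returns a wrong answer."""
    candidates = set(range(n_states))
    for s in range(n_states):
        if s == true_state:
            continue  # physics guarantees the true state is never eliminated
        if rng.random() < p_eliminate:
            candidates.discard(s)
    if candidates == {true_state}:
        return true_state
    return None

# Repeated trials give either the correct answer or "don't know" -- never an error
rng = random.Random(0)
outcomes = [usd_measure(2, rng=rng) for _ in range(20)]
print(outcomes)
```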

The Real World

Measurement systems are not perfect, and ideal USD is not possible. Real-world imperfections produce some errors in the decoded information even in situations where USD appears to work smoothly.  The JQI experiment, which implements USD for four quantum states encoded in pulses with average photon numbers of less than one photon, is robust against real-world imperfections.  At the end, it performs with much lower errors than what could be achieved by any deterministic measurement, including MED. This advance will be useful for quantum information processing, quantum communications with many states, and fundamental studies of quantum measurements at the low-light level.

Read More
[Figure 1: Optical RAM]

The sequence of images that constitute Hollywood movies can be stored handily on physical media such as magnetic tape or compact discs. At the Joint Quantum Institute (*) images can be stored in something as insubstantial as a gas of rubidium atoms. No one expects a vapor to compete with a solid in terms of density of storage. But when the “images” being stored are part of a quantum movie---the coherent sequential input to or output from a quantum computer---then the pinpoint control possible with vapor will be essential.

Last year Paul Lett and his JQI colleagues reported the ability to store a sequence of images (two letters of the alphabet) which were separated in time but overlapping in space within the volume of a gas-filled memory cell. This is random access in time. In a new experiment, by contrast, parts of a single image (spread out across a volume of space) can be stored and later recovered in chunks. Selectively reading out these partial views represents random access memory in space. The earlier storage effort could be called “temporal multiplexing,” while now it can be called “spatial multiplexing.”

This new result is published in the New Journal of Physics (**). The paper includes a movie showing how parts of an image are recalled. The image is of the acronym NIST, standing for National Institute of Standards and Technology, where the experiment was performed.

Images can be stored in the atoms held in a 20-cm-long vapor cell when atoms are selectively excited or manipulated by three fields acting simultaneously: the electric field of a “signal” laser beam bearing the primary image, the electric field of a second “control” laser, and the magnetic field from an external magnet coil whose strength along the length of the vapor cell varies with a well calibrated gradient. The storage method, called gradient echo memory, was described in detail in a previous press release (***).

In the new experiment the image is read out in three distinct parts one after the other by making the “control” beam more complex, delivering its light using additional mirrors and fibers. This serves to excite different parts of the vapor selectively.

The image is held by moving some atoms in the vapor into different stable states. The sections of the image can be read out over a period of microseconds. Portions of the image can also be deleted with what the researchers call an optical eraser.

So, in the JQI optical memory, you can read out any one of the letters N-I-S-T. But eventually with a memory you would like to store many trillions of bits of information. “We can’t compete with solid-state memories yet as far as density of storage,” said Paul Lett. Other vapor memories exist, but their image-recovery efficiencies are about 50%, Lett says, while those for the new JQI memory can be as high as 87%.

The more immediate aim of his work is to prepare for manipulating and storing quantum information, a task with special requirements. “With our form of optical memory we are hoping to store and retrieve quantum information and not add so much noise as to destroy it,” Lett adds. “For most other types of memory the noise level is fine for storing classical information, but not for quantum information.”

(*)The Joint Quantum Institute is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.

(**) See reference publication below. 

(***) See press release reporting previous work with movies stored in an atomic vapor under Related JQI Articles.

Read More
