
Quantum Many-Body Physics


Physicists use theoretical and experimental techniques to develop explanations of the goings-on in nature. Somewhat surprisingly, many phenomena, such as electrical conduction, can be explained through relatively simple mathematical pictures: models that were constructed well before the advent of modern computation. And then there are things in nature that push even the limits of high-performance computing and sophisticated experimental tools. Computers particularly struggle to simulate systems made of numerous particles, or many bodies, interacting with each other through multiple competing pathways (e.g., charge and motion). Yet some of the most intriguing physics happens when individual particle behaviors give way to emergent collective properties. In the quest to better explain and even harness the strange and amazing behaviors of interacting quantum systems, JQI physicists use experimental and theoretical tools to study the complexities of many-body physics, with an emphasis on topics such as entanglement and topology.

Latest News


These days, movies and video games render increasingly realistic 3-D images on 2-D screens, giving viewers the illusion of gazing into another world. For many physicists, though, keeping things flat is far more interesting.

One reason is that flat landscapes can unlock new movement patterns in the quantum world of atoms and electrons. For instance, shedding the third dimension enables an entirely new class of particles to emerge—particles that don’t fit neatly into the two classes, bosons and fermions, provided by nature. These new particles, known as anyons, change in novel ways when they swap places, a feat that could one day power a special breed of quantum computer.

But anyons and the conditions that produce them have been exceedingly hard to spot in experiments. In a pair of papers published this week in Physical Review Letters, JQI Fellow Alexey Gorshkov and several collaborators proposed new ways of studying this unusual flat physics, suggesting that small numbers of constrained atoms could act as stand-ins for the finicky electrons first predicted to exhibit low-dimensional quirks.

"These two papers add to the growing literature demonstrating the promise of cold atoms for studying exotic physics in general and anyons in particular," Gorshkov says. "Coupled with recent advances in cold atom experiments—including by the group of Ian Spielman at JQI—this work hints at exciting experimental demonstrations that might be just around the corner."

In the first paper, which was selected as an Editors’ Suggestion, Gorshkov and colleagues proposed looking for a new experimental signature of anyons—one that might be visible in a small collection of atoms hopping around in a 1-D grid. Previous work suggested that such systems might simulate the swapping behavior of anyons, but researchers only knew of ways to spot the effects at extremely cold temperatures. Instead, Fangli Liu, a graduate student at JQI, along with Gorshkov and other collaborators, found a way to detect the presence of anyons without needing such frigid climes.

Ordinarily, atoms spread out symmetrically over time in a 1-D grid, but anyons will generally favor the left over the right or vice versa. The researchers argued that straightforward changes to the laser used to create the grid would make the atoms hop less like themselves and more like anyons. By measuring the way that the number of atoms at different locations changes over time, it would then be possible to spot the asymmetry expected from anyons. Furthermore, adjusting the laser would make it easy to switch the favored direction in the experiment.
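The contrast between symmetric and asymmetric spreading has a compact numerical illustration. The sketch below is my own construction, not the authors' code: it uses a standard "anyon-Hubbard" lattice model, in which a density-dependent hopping phase theta turns ordinary bosons (theta = 0) into pseudo-anyons, and evolves two particles released from the central site of a small chain by exact diagonalization. The parameter values and the particular phase convention are illustrative assumptions.

```python
import itertools
import numpy as np

# Sketch of an "anyon-Hubbard" model for two bosons on an open chain:
#   H = -J * sum_j (b_j^dag e^{i*theta*n_j} b_{j+1} + h.c.)
#       + (U/2) * sum_j n_j (n_j - 1)
# The statistics angle theta = 0 gives ordinary bosons; theta != 0 gives
# pseudo-anyons whose density spreads asymmetrically. Parameters are
# illustrative, not taken from the paper.

L = 11           # lattice sites (odd, so there is a central site)
J, U = 1.0, 2.0  # hopping strength and on-site interaction

# Occupation-number basis for exactly two particles (max 2 per site).
basis = [n for n in itertools.product(range(3), repeat=L) if sum(n) == 2]
index = {n: k for k, n in enumerate(basis)}
dim = len(basis)

def hamiltonian(theta):
    H = np.zeros((dim, dim), dtype=complex)
    for k, n in enumerate(basis):
        # hopping j+1 -> j with density-dependent phase e^{i*theta*n_j}
        for j in range(L - 1):
            if n[j + 1] > 0:
                m = list(n)
                amp = np.sqrt(m[j + 1])            # apply b_{j+1}
                m[j + 1] -= 1
                amp *= np.exp(1j * theta * m[j])   # phase from occupation n_j
                amp *= np.sqrt(m[j] + 1)           # apply b_j^dag
                m[j] += 1
                H[index[tuple(m)], k] += -J * amp
    H += H.conj().T                                # reverse hops (h.c.)
    H += np.diag([0.5 * U * sum(m * (m - 1) for m in n) for n in basis])
    return H

def density(theta, t):
    """Evolve two particles from the central site; return <n_j> at time t."""
    E, V = np.linalg.eigh(hamiltonian(theta))
    start = [0] * L
    start[L // 2] = 2                              # both particles at center
    psi0 = np.zeros(dim, dtype=complex)
    psi0[index[tuple(start)]] = 1.0
    psi = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    return np.array([sum(abs(psi[k]) ** 2 * n[j] for k, n in enumerate(basis))
                     for j in range(L)])

def asymmetry(theta, t=2.0):
    nj = density(theta, t)
    c = L // 2
    return nj[c + 1:].sum() - nj[:c].sum()         # right minus left of center
```

Running `asymmetry(0.0)` gives zero to numerical precision (bosons expand symmetrically), while `asymmetry(np.pi / 2)` is nonzero: the density imbalance the researchers proposed measuring.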

"The motivation was to use something that didn’t require extremely cold temperatures to probe the anyons," says Liu, the lead author of the paper. "The hope is that maybe some similar ideas can be used in more general settings, like looking for related asymmetries in two dimensions."

In the second paper, Gorshkov and a separate group of collaborators found theoretical evidence for a new state of matter closely related to a Laughlin liquid, the prototypical example of a substance with topological order. In a Laughlin liquid, particles—originally electrons—find elaborate ways of avoiding one another, leading to the emergence of anyons that carry only a fraction of the electric charge held by an electron.

"Anyons are pretty much still theoretical constructs," says Tobias Grass, a postdoctoral researcher at JQI and the lead author of the second paper, "and experiments have yet to conclusively demonstrate them."

Although fractional charges have been observed in experiments with electrons, many of their other predicted properties have remained unmeasurable. This makes it hard to search for other interesting behavior or to study Laughlin liquids more closely. Grass, Gorshkov and their colleagues suggested a way to manipulate the interactions between a handful of atoms and discovered a new state of matter that mixes characteristics of the Laughlin liquid and a less exotic crystal phase.

The atoms in this new state avoid one another in a similar way as electrons in a Laughlin liquid, and they also fall into a regular pattern like in a crystal—albeit in a strange way, with only half of an atom occupying each crystal site. It’s a unique mix of crystal symmetry and more complex topological order—a combination that has received little prior study.

"The idea that you have a bosonic or fermionic system, and then from interactions there emerges completely different physics—that’s only possible in lower dimensions," Grass says. "Having an experimental demonstration of any of these phases is just interesting from a fundamental perspective."

Story by Chris Cesare

Read More

A powerful engine roils deep beneath our feet, converting energy in the Earth’s core into magnetic fields that shield us from the solar wind. Similar engines drive the magnetic activity of the sun, other stars and even other planets—all of which create magnetic fields that reinforce themselves and feed back into the engines to keep them running.

Much about these engines, which scientists refer to as dynamos, remains unknown. That’s partly because the math behind them is doubly difficult, combining the complex equations of fluid motion with the equations that govern how electric and magnetic fields bend, twist, interact and propagate. But it’s also because lab-bound dynamos, which attempt to mimic the astrophysical versions, are expensive, dangerous and do not yet reliably produce the signature self-sustaining magnetic fields of real dynamos.

Now, Victor Galitski, a Fellow of the Joint Quantum Institute (JQI), in collaboration with two other scientists, has proposed a radical new approach to studying dynamos, one that could be simpler and safer. The proposal, which was published Oct. 25 in Physical Review Letters, suggests harnessing the electrons in a centimeter-sized chunk of solid matter to emulate the fluid flows in ordinary dynamos.

If such an experiment is successful, it might be possible for researchers in the future to study the Earth’s dynamo more closely—and maybe even learn more about the magnetic field flips that happen every 100,000 years or so. "The dynamics of the Earth’s dynamo are not well understood, and neither are the dynamics of these flips," says Galitski, who is also a physics professor at the University of Maryland. "If we had experiments that could reproduce some aspects of that dynamo, that would be very important."

Such experiments wouldn’t be possible but for the fact that electrons, which carry current through a material, can sometimes be thought of as a fluid. They flow from high potential to low potential, just like water down a hill, and they can flow at different speeds. The trick to spotting the dynamo effect in an electron fluid is getting them to flow fast enough without melting the material.

"People haven’t really thought about doing these experiments in solids with electron fluids," Galitski says. "In this work we don’t imagine having a huge system, but we do think it’s possible to induce very fast flows."

Those fast flows would be interesting in their own right, Galitski says, but they are especially important for realizing the dynamo effect in the lab. Despite the many lingering unknowns about dynamos, it seems that turbulence plays a crucial role in their creation. This is likely because turbulence, which leads to chaotic fluid motion, can jostle the magnetic field loose from the rest of the fluid, causing it to twist and bend on top of itself and increase its strength.

But turbulence only arises for very fast flows—like the air rushing over the wing of an airplane—or for flows over very large scales—like the liquid metal in the Earth’s core or the plasma shell of the sun. To create a dynamo using a small piece of solid matter, the electrons would need to move at speeds never before seen, even in materials known for having highly mobile electrons.
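The trade-off between speed and size can be made concrete with two standard dimensionless numbers from fluid dynamics and magnetohydrodynamics. These are textbook definitions, not equations taken from the paper:

```latex
% Reynolds number (onset of turbulence) and magnetic Reynolds number
% (onset of dynamo action):
%   v = flow speed, L = system size, \nu = kinematic viscosity,
%   \mu_0 \sigma = inverse magnetic diffusivity of the medium.
\mathrm{Re} = \frac{v L}{\nu}, \qquad \mathrm{Rm} = \mu_0 \sigma \, v L .
```

Turbulence requires Re far above unity, and self-sustaining dynamos typically require Rm above a critical value of order ten to a hundred. Both numbers scale with the product of speed and size, so shrinking the system from planetary to centimeter scales must be paid for with enormous flow speeds.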

Galitski and his collaborators think that a material called a Weyl semimetal may be able to host an electron fluid flowing at more than a kilometer per second—potentially fast enough to generate the turbulence necessary to bootstrap a dynamo. These materials have received broad attention in recent years due to their unusual characteristics, including anomalous currents that arise in the presence of magnetic fields and that may reduce the speed required for turbulence to emerge.

"It might seem that turbulence isn’t particularly extraordinary," says Sergey Syzranov, a co-author and former JQI postdoctoral researcher who is now an assistant professor of physics at the University of California, Santa Cruz. "But in solids it has never been demonstrated to our knowledge. A major achievement of our work is that turbulence is realistic in some solid-state materials."

The authors say that it’s not yet clear how best to kickstart a dynamo on a small sliver of Weyl semimetal. It may be as simple as physically rotating the material. Or it could require pulsing an electric or magnetic field. Either way, Galitski says, the experimental signature would be a totally nonmagnetic system spontaneously forming a magnetic field. "Controlled experiments like these with turbulence in electrons are totally unheard of," Galitski says. "I can’t really say what will come out of it, but it could be really interesting."

Mehdi Kargarian, a former JQI postdoctoral researcher who is now an assistant professor of physics at the Sharif University of Technology, was also a co-author of the new paper.

Story by Chris Cesare

Read More

The smallest amount of light you can have is one photon, so dim that it’s pretty much invisible to humans. While imperceptible, these tiny blips of energy are useful for carrying quantum information around. Ideally, every quantum courier would be the same, but there isn’t a straightforward way to produce a stream of identical photons. This is particularly challenging when individual photons come from fabricated chips. 

Now, researchers at the Joint Quantum Institute (JQI) have demonstrated a new approach that enables different devices to repeatedly emit nearly identical single photons. The team, led by JQI Fellow Mohammad Hafezi, made a silicon chip that guides light around the device’s edge, where it is inherently protected against disruptions. Previously, Hafezi and colleagues showed that this design can reduce the likelihood of optical signal degradation. In a paper published online on Sept. 10 in Nature, the team explains that the same physics which protects the light along the chip’s edge also ensures reliable photon production.

Single photons, which are an example of quantum light, are more than just really dim light. This distinction has a lot to do with where the light comes from. “Pretty much all of the light we encounter in our everyday lives is packed with photons,” says Elizabeth Goldschmidt, a researcher at the US Army Research Laboratory and co-author on the study. “But unlike a light bulb, there are some sources that actually emit light, one photon at a time, and this can only be described by quantum physics,” adds Goldschmidt.

Many researchers are working on building reliable quantum light emitters so that they can isolate and control the quantum properties of single photons. Goldschmidt explains that such light sources will likely be important for future quantum information devices as well as further understanding the mysteries of quantum physics. “Modern communications relies heavily on non-quantum light,” says Goldschmidt. “Similarly, many of us believe that single photons are going to be required for any kind of quantum communication application out there.”

Scientists can generate quantum light using a natural color-changing process that occurs when a beam of light passes through certain materials. In this experiment the team used silicon, a common industrial choice for guiding light, to convert infrared laser light into pairs of different-colored single photons.

They injected light into a chip containing an array of minuscule silicon loops. Under the microscope, the loops look like linked-up glassy racetracks. The light circulates around each loop thousands of times before moving on to a neighboring loop. Stretched out, the light’s path would be several centimeters long, but the loops make it possible to fit the journey in a space that is about 500 times smaller. The relatively long journey is necessary to get many pairs of single photons out of the silicon chip.

Such loop arrays are routinely used as single photon sources, but small differences between chips will cause the photon colors to vary from one device to the next. Even within a single device, random defects in the material may reduce the average photon quality. This is a problem for quantum information applications where researchers need the photons to be as close to identical as possible.

The team circumvented this issue by arranging the loops in a way that always allows the light to travel undisturbed around the edge of the chip, even if fabrication defects are present. This design not only shields the light from disruptions—it  also restricts how single photons form within those edge channels. The loop layout essentially forces each photon pair to be nearly identical to the next, regardless of microscopic differences among the rings. The central part of the chip does not contain protected routes, and so any photons created in those areas are affected by material defects.

The researchers compared their chips to ones without any protected routes. They collected pairs of photons from the different chips, counting the number emitted and noting their color. They observed that their quantum light source reliably produced high quality, single-color photons time and again, whereas the conventional chip’s output was more unpredictable.

“We initially thought that we would need to be more careful with the design, and that the photons would be more sensitive to our chip’s fabrication process,” says Sunil Mittal, a JQI postdoctoral researcher and lead author on the new study. “But, astonishingly, photons generated in these shielded edge channels are always nearly identical, regardless of how bad the chips are.”

Mittal adds that this device has one additional advantage over other single photon sources. “Our chip works at room temperature. I don’t have to cool it down to cryogenic temperatures like other quantum light sources, making it a comparatively very simple setup.”

The team says that this finding could open up a new avenue of research, which unites quantum light with photonic devices having built-in protective features. “Physicists have only recently realized that shielded pathways fundamentally alter the way that photons interact with matter,” says Mittal. “This could have implications for a variety of fields where light-matter interactions play a role, including quantum information science and optoelectronic technology.”

Written by D. Genkina and E. Edwards

Author affiliations 

Sunil Mittal is a postdoctoral researcher at the Joint Quantum Institute and Institute for Research in Electronics and Applied Physics (IREAP) at the University of Maryland (UMD).

Elizabeth Goldschmidt is a physicist at the US Army Research Laboratory.

Mohammad Hafezi is a Fellow of the Joint Quantum Institute and an Associate Professor in the UMD Department of Electrical and Computer Engineering, Department of Physics, and IREAP.

Read More

Magnets, whether in the form of a bar, horseshoe or electromagnet, always have two poles. If you break a magnet in half, you’ll end up with two new magnets, each with its own magnetic north and south.

But some physics theories predict the existence of single-pole magnets—a situation akin to electric charges, which come in either positive or negative chunks. One particular incarnation—called the Yang monopole after its discoverer—was originally predicted in the context of high-energy physics, but it has never been observed. 

Now, a team at JQI led by postdoctoral researcher Seiji Sugawa and JQI Fellow Ian Spielman has succeeded in emulating a Yang monopole with an ultracold gas of rubidium atoms. The result, which provides another example of using cold quantum gases to simulate other areas of physics, was reported in the June 29 issue of Science.

"This new result links together ideas born in high-energy physics—the Yang monopole—with concepts in condensed matter physics—topological phase transitions—and realizes them in the atomic physics laboratory," Spielman says.

To detect the Yang monopoles in their quantum gas, Spielman, Sugawa and coworkers manipulated the internal compass needles that all atoms carry—a quantum property called spin—using radio waves and microwaves to rotate the needles in specific ways. By cycling the atoms among four different spin orientations, researchers were able to send the atoms on a journey through “spin space” and bring them back to where they started—very much like a traveler on the Earth’s surface taking a trip around the globe (but in four dimensions instead of the globe’s two).

The team measured the orientation of the atoms’ spins after they completed their journey and compared the result to their initial orientations. They found that the atoms’ spins didn’t return to where they started, a discrepancy that can arise during a trip through curved space. In this case, the size and direction of the deflection matched predictions for the curvature created by a Yang monopole.

To test that the deflections were indeed due to the monopole and not another source, researchers sent the atoms on a different journey, one that attempted to avoid the space-bending singularity created by the monopole. On this new path, the atoms no longer felt an overall tug from the curvature, a strong indication that they had exited the monopole’s realm of influence.

Turning the monopole’s effects on and off depends only on the big-picture shape of the paths that the atoms take and not on any small wiggles along the way—an indication that the effect is topological. The paths either enclose a monopole or they don’t, and this provides a topological feature that could lead to new types of quantum charge pumps, Spielman says.

Story by Chris Cesare

Read More
  • Atoms may hum a tune from grand cosmic symphony
  • An expanding cloud of atoms could offer insight into unanswered cosmological questions.
  • April 19, 2018

Researchers playing with a cloud of ultracold atoms uncovered behavior that bears a striking resemblance to the universe in microcosm. Their work, which forges new connections between atomic physics and the sudden expansion of the early universe, was published April 19 in Physical Review X and featured in Physics.

"From the atomic physics perspective, the experiment is beautifully described by existing theory," says Stephen Eckel, an atomic physicist at the National Institute of Standards and Technology (NIST) and the lead author of the new paper. "But even more striking is how that theory connects with cosmology."

In several sets of experiments, Eckel and his colleagues rapidly expanded the size of a doughnut-shaped cloud of atoms, taking snapshots during the process. The growth happens so fast that the cloud is left humming, and a related hum may have appeared on cosmic scales during the rapid expansion of the early universe—an epoch that cosmologists refer to as the period of inflation.

The work brought together experts in atomic physics and gravity, and the authors say it is a testament to the versatility of the Bose-Einstein condensate (BEC)—an ultracold cloud of atoms that can be described as a single quantum object—as a platform for testing ideas from other areas of physics.

"Maybe this will one day inform future models of cosmology," Eckel says. "Or vice versa. Maybe there will be a model of cosmology that’s difficult to solve but that you could simulate using a cold atomic gas."

It’s not the first time that researchers have connected BECs and cosmology. Prior studies mimicked black holes and searched for analogs of the radiation predicted to pour forth from their shadowy boundaries. The new experiments focus instead on the BEC’s response to a rapid expansion, a process that suggests several analogies to what may have happened during the period of inflation.

The first and most direct analogy involves the way that waves travel through an expanding medium. Such a situation doesn’t arise often in physics, but it happened during inflation on a grand scale. During that expansion, space itself stretched any waves to much larger sizes and stole energy from them through a process known as Hubble friction.

In one set of experiments, researchers spotted analogous features in their cloud of atoms. They imprinted a sound wave onto their cloud—alternating regions of more atoms and fewer atoms around the ring, like a wave in the early universe—and watched it disperse during expansion. Unsurprisingly, the sound wave stretched out, but its amplitude also decreased. The math revealed that this damping looked just like Hubble friction, and the behavior was captured well by calculations and numerical simulations.
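For reference, "Hubble friction" is the damping term in the standard cosmological mode equation. The schematic form below is the textbook result for a mode of comoving wavenumber $k$ in a universe with scale factor $a(t)$, not an equation taken from the paper:

```latex
% Textbook mode equation in an expanding universe (schematic):
% the \dot{a}/a term acts as a friction coefficient on the mode amplitude.
\ddot{\chi}_k + 3\,\frac{\dot{a}}{a}\,\dot{\chi}_k + \frac{k^2}{a^2}\,\chi_k = 0,
\qquad H \equiv \frac{\dot{a}}{a} .
```

In the ring experiment, the role of the scale factor is played by the growing ring radius, so a stretched sound wave both drops in pitch (the $k/a$ factor) and loses amplitude (the friction term); the precise numerical coefficients depend on the trap geometry.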

"It's like we're hitting the BEC with a hammer," says Gretchen Campbell, the NIST co-director of the Joint Quantum Institute (JQI) and a coauthor of the paper, "and it’s sort of shocking to me that these simulations so nicely replicate what's going on."

In a second set of experiments, the team uncovered another, more speculative analogy. For these tests they left the BEC free of any sound waves but provoked the same expansion, watching the BEC slosh back and forth until it relaxed.

In a way, that relaxation also resembled inflation. Some of the energy that drove the expansion of the universe ultimately ended up creating all of the matter and light around us. And although there are many theories for how this happened, cosmologists aren’t exactly sure how that leftover energy got converted into all the stuff we see today.

In the BEC, the energy of the expansion was quickly transferred to things like sound waves traveling around the ring. Some early guesses for why this was happening looked promising, but they fell short of predicting the energy transfer accurately. So the team turned to numerical simulations that could capture a more complete picture of the physics.

What emerged was a complicated account of the energy conversion: After the expansion stopped, atoms at the outer edge of the ring hit their new, expanded boundary and got reflected back toward the center of the cloud. There, they interfered with atoms still traveling outward, creating a zone in the middle where almost no atoms could live. Atoms on either side of this inhospitable area had mismatched quantum properties, like two neighboring clocks that are out of sync.

The situation was highly unstable and eventually collapsed, leading to the creation of vortices throughout the cloud. These vortices, or little quantum whirlpools, would break apart and generate sound waves that ran around the ring, like the particles and radiation left over after inflation. Some vortices even escaped from the edge of the BEC, creating an imbalance that left the cloud rotating.

Unlike the analogy to Hubble friction, the complicated story of how sloshing atoms can create dozens of quantum whirlpools may bear no resemblance to what goes on during and after inflation. But Ted Jacobson, a coauthor of the new paper and a physics professor at the University of Maryland specializing in black holes, says that his interaction with atomic physicists yielded benefits outside these technical results.

"What I learned from them, and from thinking so much about an experiment like that, are new ways to think about the cosmology problem," Jacobson says. "And they learned to think about aspects of the BEC that they would never have thought about before. Whether those are useful or important remains to be seen, but it was certainly stimulating."

Eckel echoes the same thought. "Ted got me to think about the processes in BECs differently," he says, "and any time you approach a problem and you can see it from a different perspective, it gives you a better chance of actually solving that problem."

Future experiments may study the complicated transfer of energy during expansion more closely, or even search for further cosmological analogies. "The nice thing is that from these results, we now know how to design experiments in the future to target the different effects that we hope to see," Campbell says. "And as theorists come up with models, it does give us a testbed where we could actually study those models and see what happens."

The new paper included contributions from two coauthors not mentioned in the text: Avinash Kumar, a graduate student at JQI; and Ian Spielman, a JQI Fellow and NIST physicist.

Story by Chris Cesare

Read More

In the latest experiment of its kind, researchers have captured the most compelling evidence to date that unusual particles lurk inside a special kind of superconductor. The result, which confirms theoretical predictions first made nearly a decade ago at the Joint Quantum Institute (JQI) and the University of Maryland (UMD), will be published in the April 5 issue of Nature.

The stowaways, dubbed Majorana quasiparticles, are different from ordinary matter like electrons or quarks—the stuff that makes up the elements of the periodic table. Unlike those particles, which as far as physicists know can’t be broken down into more basic pieces, Majorana quasiparticles arise from coordinated patterns of many atoms and electrons and only appear under special conditions. They are endowed with unique features that may allow them to form the backbone of one type of quantum computer, and researchers have been chasing after them for years.

The latest result is the most tantalizing yet for Majorana hunters, confirming many theoretical predictions and laying the groundwork for more refined experiments in the future. In the new work, researchers measured the electrical current passing through an ultra-thin semiconductor connected to a strip of superconducting aluminum—a recipe that transforms the whole combination into a special kind of superconductor.

Experiments of this type expose the nanowire to a strong magnet, which unlocks an extra way for electrons in the wire to organize themselves at low temperatures. With this additional arrangement the wire is predicted to host a Majorana quasiparticle, and experimenters can look for its presence by carefully measuring the wire’s electrical response. 

The new experiment was conducted by researchers from QuTech at the Technical University of Delft in the Netherlands and Microsoft Research, with samples of the hybrid material prepared at the University of California, Santa Barbara and Eindhoven University of Technology in the Netherlands. Experimenters compared their results to theoretical calculations by JQI Fellow Sankar Das Sarma and JQI graduate student Chun-Xiao Liu.

The same group at Delft saw hints of a Majorana in 2012, but the measured electrical effect wasn’t as big as theory had predicted. Now the full effect has been observed, and it persists even when experimenters jiggle the strength of magnetic or electric fields—a robustness that provides even stronger evidence that the experiment has captured a Majorana, as predicted in careful theoretical simulations by Liu.

"We have come a long way from the theoretical recipe in 2010 for how to create Majorana particles in semiconductor-superconductor hybrid systems," says Das Sarma, a coauthor of the paper who is also the director of the Condensed Matter Theory Center at UMD. "But there is still some way to go before we can declare total victory in our search for these strange particles."

The success comes after years of refinements in the way that researchers assemble the nanowires, leading to cleaner contact between the semiconductor wire and the aluminum strip. During the same time, theorists have gained insight into the possible experimental signatures of Majoranas—work that was pioneered by Das Sarma and several collaborators at UMD.

Theory meets experiment

The quest to find Majorana quasiparticles in thin quantum wires began in 2001, spurred by Alexei Kitaev, then a physicist at Microsoft Research. Kitaev, who is now at the California Institute of Technology in Pasadena, concocted a relatively simple but unrealistic system that could theoretically harbor a Majorana. But this imaginary wire required a specific kind of superconductivity not available off-the-shelf from nature, and others soon began looking for ways to imitate Kitaev’s contraption by mixing and matching available materials.

One challenge was figuring out how to get superconductors, which usually go about their business with an even number of electrons—two, four, six, etc.—to also allow an odd number of electrons, a situation that is normally unstable and requires extra energy to maintain. The odd number is necessary because Majorana quasiparticles are unabashed oddballs: They only show up in the coordinated behavior of an odd number of electrons. 

In 2010, almost a decade after Kitaev’s original paper, Das Sarma, JQI Fellow Jay Deep Sau and JQI postdoctoral researcher Roman Lutchyn, along with a second group of researchers, struck upon a method to create these special superconductors, and it has driven the experimental search ever since. They suggested combining a certain kind of semiconductor with an ordinary superconductor and measuring the current through the whole thing. They predicted that the combination of the two materials, along with a strong magnetic field, would unlock the Majorana arrangement and yield Kitaev’s special material.

They also predicted that a Majorana could reveal itself in the way current flows through such a nanowire. If you connect an ordinary semiconductor to a metal wire and a battery, electrons usually have some chance of hopping off the wire onto the semiconductor and some chance of being rebuffed—the details depend on the electrons and the makeup of the material. But if you instead use one of Kitaev’s nanowires, something completely different happens. The electron always gets perfectly reflected back into the wire, but it’s no longer an electron. It becomes what scientists call a hole—basically a spot in the metal that’s missing an electron—and it carries a positive charge back in the opposite direction.

Physics demands that the current across the interface be conserved, which means that two electrons must end up in the superconductor to balance out the positive charge heading in the other direction. The strange thing is that this process, which physicists call perfect Andreev reflection, happens even when electrons in the metal receive no push toward the boundary—that is, even when they aren’t hooked up to a battery of sorts. This is related to the fact that a Majorana is its own antiparticle, meaning that it doesn’t cost any energy to create a pair of Majoranas in the nanowire. The Majorana arrangement gives the two electrons some extra room to maneuver and allows them to traverse the nanowire as a quantized pair—that is, exactly two at a time. 

"It is the existence of Majoranas that gives rise to this quantized differential conductance," says Liu, who ran numerical simulations to predict the results of the experiments on UMD’s Deepthought2 supercomputer cluster. "And such a quantization should even be robust to small changes in experimental parameters, as the real experiment shows."
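For reference, the quantized value Liu describes is, in the standard theory of perfect Andreev reflection off a Majorana zero mode, a zero-bias conductance of 2e²/h. This is textbook theory rather than a figure quoted in the article; a quick check of the number:

```python
# Zero-bias conductance for perfect Andreev reflection at a Majorana
# zero mode: G = 2e^2/h (a standard theoretical value).
e = 1.602176634e-19  # elementary charge in coulombs (exact SI value)
h = 6.62607015e-34   # Planck constant in joule-seconds (exact SI value)

G = 2 * e**2 / h     # conductance in siemens
print(f"G = 2e^2/h = {G:.4e} S (about {G * 1e6:.1f} microsiemens)")
```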

Scientists refer to this style of experiment as tunneling spectroscopy because electrons are taking a quantum route through the nanowire to the other side. It has been the focus of recent efforts to capture Majoranas, but there are other tests that could more directly reveal the exotic properties of the particles—tests that would fully confirm that the Majoranas are really there. 

"This experiment is a big step forward in our search for these exotic and elusive Majorana particles, showing the great advance made in the materials improvement over the last five years," Das Sarma says. "I am convinced that these strange particles exist in these nanowires, but only a non-local measurement establishing the underlying physics can make the evidence definitive."

By Chris Cesare

Read More

Optical highways for light are at the heart of modern communications. But when it comes to guiding individual blips of light called photons, reliable transit is far less common. Now, a collaboration of researchers from the Joint Quantum Institute (JQI), led by JQI Fellows Mohammad Hafezi and Edo Waks, has created a photonic chip that both generates single photons and steers them around. The device, described in the Feb. 9 issue of Science, features a way for the quantum light to move seamlessly, unaffected by certain obstacles.


"This design incorporates well-known ideas that protect the flow of current in certain electrical devices," says Hafezi. "Here, we create an analogous environment for photons, one that protects the integrity of quantum light, even in the presence of certain defects."

The chip starts with a photonic crystal, an established, versatile technology used to create roadways for light. Photonic crystals are made by punching a pattern of holes through a sheet of semiconductor. To a photon, the repeated hole pattern looks very much like a real crystal made from a grid of atoms. Researchers use different hole patterns to change the way that light bends and bounces through the crystal. For instance, they can modify the hole sizes and separations to make restricted lanes of travel that allow certain colors of light to pass while prohibiting others.

Sometimes, even in these carefully fabricated devices, there are flaws that alter the light’s intended route, causing it to detour into an unexpected direction. But rather than ridding their chips of every flaw, the JQI team mitigates this issue by rethinking the crystal’s hole shapes and crystal pattern. In the new chip, they etch out thousands of triangular holes in an array that resembles a bee’s honeycomb. Along the center of the device they shift the spacing of the holes, which opens a different kind of travel lane for the light. Previously, these researchers predicted that photons moving along that line of shifted holes should be impervious to certain defects because of the overall crystal structure, or topology. Whether the lane is a switchback road or a straight shot, the light’s path from origin to destination should be assured, regardless of the details of the road.

The light comes from small flecks of semiconductor—dubbed quantum emitters—embedded into the photonic crystal. Researchers can use lasers to prod this material into releasing single photons. Each emitter can gain energy by absorbing laser photons and lose energy by later spitting out those photons, one at a time. Photons coming from the two most energetic states of a single emitter are different colors and rotate in opposite directions. For this experiment, the team uses photons from an emitter found near the chip’s center.

The team tested the capabilities of the chip by first changing a quantum emitter from its lowest energy state to one of its two higher energy states. Upon relaxing back down, the emitter pops out a photon into the nearby travel lane. They continued this process many times, using photons from the two higher energy states. They saw that photons emitted from the two states preferred to travel in opposite directions, which was evidence of the underlying crystal topology.

To confirm that the design could indeed offer protected lanes of traffic for single photons, the team created a 60 degree turn in the hole pattern. In typical photonic crystals, without built-in protective features, such a kink would likely cause some of the light to reflect backwards or scatter elsewhere. In this new chip, topology protected the photons and allowed them to continue on their way unhindered.

“On the internet, information moves around in packets of light containing many photons, and losing a few doesn’t hurt you too much,” says co-author Sabyasachi Barik, a graduate student at JQI. “In quantum information processing, we need to protect each individual photon and make sure it doesn’t get lost along the way. Our work can alleviate some forms of loss, even when the device is not completely perfect.”

The design is flexible, and could allow researchers to systematically assemble pathways for single photons, says Waks. "Such a modular approach may lead to new types of optical devices and enable tailored interactions between quantum light emitters or other kinds of matter."

Written by E. Edwards

*Mohammad Hafezi is an Associate Professor in the University of Maryland (UMD) Departments of Electrical and Computer Engineering and Physics. Edo Waks is a Professor in the UMD Department of Electrical and Computer Engineering.

Read More

A team of researchers has devised a simple way to tune a hallmark quantum effect in graphene—the material formed from a single layer of carbon atoms—by bathing it in light. Their theoretical work, which was published recently in Physical Review Letters, suggests a way to realize novel quantum behavior that was previously predicted but has so far remained inaccessible in experiments.

"Our idea is to use light to engineer these materials in place," says Tobias Grass, a postdoctoral researcher at the Joint Quantum Institute (JQI) and a co-author of the paper. "The big advantage of light is its flexibility. It’s like having a knob that can change the physics in your sample."

The proposal suggests a method to alter a physical effect that occurs in flat materials held at very low temperatures and subjected to extremely strong magnets—at least a thousand times stronger than a fridge magnet. Under these circumstances, electrons zipping around on a two-dimensional landscape start to behave in an unusual way. Instead of continuously flowing through the material, they get locked into tight circular orbits of particular sizes and energies, barely straying from their spots. Only a certain number of electrons can occupy each orbit. When orbits are partially filled—which gives electrons some room to breathe—it activates new kinds of interactions between the charged particles and leads to a complex quantum dance.
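The "tight circular orbits of particular sizes and energies" described above are Landau levels; the standard textbook relations behind them (general results, not numbers from this paper) are:

```latex
% Landau levels for electrons of mass m in a perpendicular field B:
E_n = \hbar\,\omega_c\left(n + \tfrac{1}{2}\right),\qquad
\omega_c = \frac{eB}{m},\qquad n = 0, 1, 2, \ldots
% Each level holds eB/h electrons per unit area, so the filling factor
\nu = \frac{n_e\,h}{eB}
% measures how full the orbits are; the fractional quantum Hall effect
% appears at certain fractional values of \nu. (In graphene, the linear
% dispersion changes the level spacing to E_n \propto \sqrt{nB}, but
% the filling-factor picture is the same.)
```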

Electrons carry out this choreography—known as the fractional quantum Hall effect—in graphene. Interestingly, tuning the interactions between electrons can coax them into different quantum Hall dance patterns, but it requires a stronger magnet or an entirely different sample—sometimes with two layers of graphene stacked together.

The new work, which is a collaboration between researchers at JQI and the City College of New York, proposes using laser light to circumvent some of these experimental challenges and even create novel quantum dances. The light can prod electrons into jumping between orbits of different energies. As a result, the interactions between the electrons change and lead to a different dance pattern, including some that have never been seen before in experiments. The intensity and frequency of the light alter the number of electrons in specific orbits, providing an easy way to control the electrons’ performance. "Such a light-matter interaction results in some models that have previously been studied theoretically," says Mohammad Hafezi, a JQI Fellow and an author of the paper. "But no experimental scheme was proposed to implement them."

Unlocking those theoretical dances may reveal novel quantum behavior. Some may even spawn exotic quantum particles that could collaborate to remain protected from noise—a tantalizing idea that could be useful in the quest to build robust quantum computers.

Written by Nina Beier

Read More

If you holler at someone across your yard, the sound travels on the bustling movement of air molecules. But over long distances your voice needs help to reach its destination—help provided by a telephone or the Internet. Atoms don’t yell, but they can share information through light. And they also need help connecting over long distances.

Now, researchers at the Joint Quantum Institute (JQI) have shown that nanofibers can provide a link between far-flung atoms, serving as a light bridge between them. Their research, which was conducted in collaboration with the Army Research Lab and the National Autonomous University of Mexico, was published last week in Nature Communications. The new technique could eventually provide secure communication channels between distant atoms, molecules or even quantum dots.

An excited atom—that is, one with some extra energy—emits light when it loses energy. Usually atoms spit this light out in random directions and at different times. But this random process can be tamed if excited atoms are bunched up close together. In that case, atoms can sync up their light emissions, like the rhythmic clapping of an appreciative audience. However, this synchronization effect, which is caused by light of different atoms joining together, doesn’t reach very far because the strength of light weakens drastically over short distances. While your neighbor might hear you yelling over several meters, atoms need to be really close to interact with each other—typically closer than one micron, which is a hundred times smaller than the width of a human hair.

Now, physicists have extended the range over which atoms can synchronize their light emission by using an optical nanofiber. In an experiment, the researchers immerse a nanofiber in a cloud of cold rubidium atoms and excite the atoms with a laser beam. As atoms in the cloud move around, they sometimes get very close to the fiber. If an atom emits light near the fiber, the glass thread can capture the light and pipe it to another atom, even if the atoms are far apart.

The team observed a group of atoms emitting light pulses at different rates than their ordinary, unsynchronized selves—one signature of these far-reaching interactions. The effect persisted even when physicists cleaved the atomic cloud in two so that atoms in separate clouds could only connect through the fiber, and not through other atoms in the cloud.

The atoms in this experiment are separated by only a distance comparable to the thickness of a few sheets of paper, but the authors say that longer distances—meters or even kilometers—should be doable. “We have shown that optical nanofibers are excellent for connecting atoms that are quite far apart—if the atoms were the size of people, it would be a distance of more than 300 kilometers,” says Pablo Solano, the lead author of the paper and a former JQI graduate student.* “The question now is not whether the atoms interact, but how far can we push their optical-fiber-mediated connections.” On the scale of atoms, even a few meters is an enormous distance. But the authors say that a combination of optical nanofibers and regular fiber optics—technologies already deployed for long-distance phone calls, cable TV and the Internet—could extend the range of these atomic connections even farther.

* Solano is now a postdoctoral researcher at the MIT-Harvard Center for Ultracold Atoms.

Written by N. Beier

Read More

Two independent teams of scientists, including one from the Joint Quantum Institute, have used more than 50 interacting atomic qubits to mimic magnetic quantum matter, blowing past the complexity of previous demonstrations. The results appear in this week’s issue of Nature.

As the basis for its quantum simulation, the JQI team deploys up to 53 individual ytterbium ions—charged atoms trapped in place by gold-coated and razor-sharp electrodes. A complementary design by Harvard and MIT researchers uses 51 uncharged rubidium atoms confined by an array of laser beams. With so many qubits these quantum simulators are on the cusp of exploring physics that is unreachable by even the fastest modern supercomputers. And adding even more qubits is just a matter of lassoing more atoms into the mix.

“Each ion qubit is a stable atomic clock that can be perfectly replicated,” says JQI Fellow Christopher Monroe*, who is also the co-founder and chief scientist at the startup IonQ Inc. “They are effectively wired together with external laser beams. This means that the same device can be reprogrammed and reconfigured, from the outside, to adapt to any type of quantum simulation or future quantum computer application that comes up.” Monroe has been one of the early pioneers in quantum computing and his research group’s quantum simulator is part of a blueprint for a general-purpose quantum computer.

Quantum hardware for a quantum problem

While modern, transistor-driven computers are great for crunching their way through many problems, they can screech to a halt when dealing with more than 20 interacting quantum objects. That’s certainly the case for quantum magnetism, in which the interactions can lead to magnetic alignment or to a jumble of competing interests at the quantum scale.

“What makes this problem hard is that each magnet interacts with all the other magnets,” says research scientist Zhexuan Gong, lead theorist and co-author on the study. “With the 53 interacting quantum magnets in this experiment, there are over a quadrillion possible magnet configurations, and this number doubles with each additional magnet. Simulating this large-scale problem on a conventional computer is extremely challenging, if at all possible.”
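The arithmetic behind Gong's "quadrillion" is simple exponential growth, which is exactly why classical simulation stalls; a quick sketch:

```python
# Why 53 magnets overwhelm a classical computer: each spin-1/2 magnet
# points up or down, so listing the joint configurations takes 2**N
# entries (and a quantum state needs that many complex amplitudes).
for n in (20, 53, 54):
    print(f"{n} magnets -> {2**n:,} configurations")

# 2**53 is about 9e15 -- "over a quadrillion" -- and each added
# magnet doubles the count.
```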

When these calculations hit a wall, a quantum simulator may help scientists push the envelope on difficult problems. This is a restricted type of quantum computer that uses qubits to mimic complex quantum matter. Qubits are isolated and well-controlled quantum systems that can be in a combination of two or more states at once. Qubits come in different forms, and atoms—the versatile building blocks of everything—are one of the leading choices for making qubits. In recent years, scientists have controlled 10 to 20 atomic qubits in small-scale quantum simulations.

Currently, tech industry behemoths, startups and university researchers are in a fierce race to build prototype quantum computers that can control even more qubits. But qubits are delicate and must stay isolated from the environment to protect the device’s quantum nature. With each added qubit this protection becomes more difficult, especially if qubits are not identical from the start, as is the case with fabricated circuits. This is one reason that atoms are an attractive choice that can dramatically simplify the process of scaling up to large-scale quantum machinery.

An atomic advantage

Unlike the integrated circuitry of modern computers, atomic qubits reside inside of a room-temperature vacuum chamber that maintains a pressure similar to outer space. This isolation is necessary to keep the destructive environment at bay, and it allows the scientists to precisely control the atomic qubits with a highly engineered network of lasers, lenses, mirrors, optical fibers and electrical circuitry.

“The principles of quantum computing differ radically from those of conventional computing, so there’s no reason to expect that these two technologies will look anything alike,” says Monroe.

In the 53-qubit simulator, the ion qubits are made from atoms that all have the same electrical charge and therefore repel one another. But as they push each other away, an electric field generated by a trap forces them back together. The two effects balance each other, and the ions line up single file. Physicists leverage the inherent repulsion to create deliberate ion-to-ion interactions, which are necessary for simulating interacting quantum matter.

The quantum simulation begins with a laser pulse that commands all the qubits into the same state. Then, a second set of laser beams interacts with the ion qubits, forcing them to act like tiny magnets, each having a north and south pole. The team does this second step suddenly, which jars the qubits into action. They feel torn between two choices, or phases, of quantum matter. As magnets, they can either align their poles with their neighbors to form a ferromagnet or point in random directions yielding no magnetization. The physicists can change the relative strengths of the laser beams and observe which phase wins out under different laser conditions.
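The tug-of-war described above, spin-spin coupling versus a transverse field, is the transverse-field Ising model that these ion simulators realize. A minimal numerical sketch of that competition follows; the nearest-neighbor couplings and parameter values are illustrative, not the experiment's long-range interactions:

```python
import numpy as np

# Toy transverse-field Ising model: strong coupling J aligns neighboring
# spins (a ferromagnet); a strong transverse field B leaves them
# unmagnetized. Parameters here are illustrative only.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(single, site, n):
    """Embed a one-spin operator at position `site` in an n-spin chain."""
    out = np.ones((1, 1))
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def ising_hamiltonian(n, J, B):
    """H = -J * sum_i sx_i sx_{i+1}  -  B * sum_i sz_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * site_op(sx, i, n) @ site_op(sx, i + 1, n)
    for i in range(n):
        H -= B * site_op(sz, i, n)
    return H

n = 3
vals, vecs = np.linalg.eigh(ising_hamiltonian(n, J=1.0, B=0.05))
ground = vecs[:, 0]  # eigh sorts eigenvalues; column 0 is the ground state

# At weak field the neighboring spins align along x, so this
# correlation comes out close to 1 (magnetic order wins).
corr = ground @ (site_op(sx, 0, n) @ site_op(sx, 1, n)) @ ground
print(f"<sx_0 sx_1> at weak transverse field: {corr:.3f}")
```

Brute-force diagonalization like this is exactly what stops working beyond a few dozen spins, which is the gap the ion simulator fills.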

The entire simulation takes only a few milliseconds. By repeating the process many times and measuring the resulting states at different points during the simulation, the team can see the process as it unfolds from start to finish. The researchers observe how the qubit magnets organize as different phases form, dynamics that the authors say are nearly impossible to calculate using conventional means when there are so many interactions.

This quantum simulator is suitable for probing magnetic matter and related problems. But other kinds of calculations may need a more general quantum computer with arbitrarily programmable interactions in order to get a boost.

“Quantum simulations are widely believed to be one of the first useful applications of quantum computers,” says Alexey Gorshkov**, JQI Fellow and co-author of the study. “After perfecting these quantum simulators, we can then implement quantum circuits and eventually quantum-connect many such ion chains together to build a full-scale quantum computer with a much wider domain of applications.”

As they look to add even more qubits, the team believes that its simulator will embark on more computationally challenging terrain, beyond magnetism. “We are continuing to refine our system, and we think that soon, we will be able to control 100 ion qubits, or more,” says Jiehang Zhang, the study’s lead author and postdoctoral researcher. “At that point, we can potentially explore difficult problems in quantum chemistry or materials design.”

Written by E. Edwards

* Christopher Monroe is a fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. He is a Distinguished University Professor and Bice Sechi-Zorn Professor in the UMD Physics Department.

**Alexey Gorshkov is a fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. He is an Adjunct Professor in the UMD Physics Department.

Read More

Given enough time, a forgotten cup of coffee will lose its appeal and cool to room temperature. One way of telling this tepid tale involves a stupendous number of coffee molecules colliding like billiard balls with themselves and colder molecules in the air above. Those constant collisions siphon energy away from the coffee, bit by bit, in a process that physicists call thermalization.

But this story doesn’t mention quantum physics, and scientists think that thermalization must ultimately have a precursor at the quantum level. Recently, scientists have sketched out some of the ways that small quantum systems thermalize, sometimes even when they are almost completely isolated.

Last week, in Science Advances, a team of researchers from JQI and Indiana University reported finding a new kind of effect on the road to thermalization—one in which a chain of up to 22 trapped ions, all initially with their quantum spins aligned, can retain a memory of a flipped spin long after it begins to roam through the chain.

Unlike previous results in which imperfections trapped such flips near their starting spot, the memory in this experiment comes from the long-range communication of the ions and confirms a theoretical prediction by two of the paper’s authors.

Read More
Neural networks take on quantum entanglement
Techniques that enable computers to learn can also describe complex quantum systems.
June 12, 2017 | Quantum Many-Body Physics

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
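The two-layer network described above is a restricted Boltzmann machine, the style of network studied in this line of work. A minimal sketch of how "mathematically forgetting the hidden neurons" yields a compact amplitude for each spin configuration; all parameter values below are illustrative, not from the paper:

```python
import numpy as np

# Restricted-Boltzmann-machine wavefunction sketch: summing out the
# hidden neurons turns the network into a closed-form amplitude,
#   psi(s) = exp(a.s) * prod_j 2*cosh(b_j + sum_i W_ij * s_i),
# with only O(n_visible * n_hidden) parameters for 2**n_visible states.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2                        # 4 spins, 2 hidden neurons

a = rng.normal(size=n_visible) * 0.1              # visible biases
b = rng.normal(size=n_hidden) * 0.1               # hidden biases
W = rng.normal(size=(n_visible, n_hidden)) * 0.1  # visible-hidden couplings

def amplitude(s):
    """Unnormalized amplitude for one spin configuration s (entries +/-1)."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

s = np.array([1, -1, 1, -1])
print(f"psi({s.tolist()}) = {amplitude(s):.4f}")
```

Each hidden neuron contributes one cosh factor, which is how a small number of connections can encode correlations (including entanglement) among all the visible spins.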

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, concerns a property that many condensed matter physicists actively study.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

By Chris Cesare

Read More

Researchers have found that a small stretch is enough to unleash the exotic electrical properties of a newly discovered topological insulator, unshackling a behavior previously locked away at cryogenic temperatures.

The compound, called samarium hexaboride, has been studied for decades. But recently it has enjoyed a surge of renewed interest as scientists first predicted and then discovered that it was a new type of topological insulator—a material that banishes electrical currents from its interior and forces them to travel along its periphery. That behavior only emerges at around 4 degrees above absolute zero, though, thwarting potential applications.

Now, experimentalists at the University of California, Irvine (UCI), working with JQI Fellow Victor Galitski and former JQI postdoctoral researcher Maxim Dzero (now at Kent State University), have found a way to activate samarium hexaboride’s cryogenic behavior at much higher temperatures. By stretching small crystals of the metal by less than a percent, the team was able to spot the signature surface currents of a topological insulator at 240 K (minus 33 C)—nearly room temperature and, in any case, a far cry from 4 K. The currents even persisted once the strain was removed.

Their technique, which was recently reported in Nature Materials, uses piezoelectric elements that bend when they are fed with an electric current. By suspending a sample of samarium hexaboride between two titanium supports and pulling on one side, researchers could measure the crystal’s electrical properties for different temperatures and amounts of stretch.

Last year, Galitski partnered with the same experimental group at UCI and discovered a potential application for samarium hexaboride’s unusual surface currents. They found that holding a small crystal at a fixed voltage could produce oscillating currents on its surface. Such tick-tock signals are at the heart of modern digital electronics, but they typically require clocks that are much larger than the micron-sized crystals.

The new result might make such applications more likely, and it could even be achieved without any piezo elements. It may be possible to grow samarium hexaboride as a thin film on top of another material that would naturally cause it to stretch, the researchers say.

Read More

Consider, for a moment, the humble puddle of water. If you dive down to nearly the scale of molecules, it will be hard to tell one spot in the puddle from any other. You can shift your gaze to the left or right, or tilt your head, and the microscopic bustle will be identical—a situation that physicists call highly symmetric.

That all changes abruptly when the puddle freezes. In contrast to liquid water, ice is a crystal, and it gains a spontaneous rigid structure as the temperature drops. Freezing fastens neighboring water molecules together in a regular pattern, and a simple tilt of the head now creates a kaleidoscopic change.

In 2012, Nobel-prize winning physicist Frank Wilczek, a professor at the Massachusetts Institute of Technology, proposed something that sounds pretty strange. It might be possible, Wilczek argued, to create crystals that are arranged in time instead of space. The suggestion prompted years of false starts and negative results that ruled out some of the most obvious places to look for these newly named time crystals.

Now, five years after the first proposal, a team of researchers led by physicists at the Joint Quantum Institute and the University of Maryland has created the world's first time crystal using a chain of atomic ions. The result, which finally brings Wilczek's exotic idea to life, was reported in Nature on March 9.

Much like freezing destroys the symmetry of liquid water, a time crystal disturbs a regularity in time. This is somewhat surprising, says lead author and JQI postdoctoral researcher Jiehang Zhang, since nature usually responds in sync to things that change in time. "The earth rotates around the sun once a year, and the seasons have the same period," Zhang says. "That’s what you would naturally expect."

A time crystal doesn't follow this lead, instead responding at a slower frequency—like a bell struck once a second that rings every other second. The atomic ions in the Maryland experiment, which researchers manipulated using laser pulses, responded exactly half as fast as the sequence of pulses that drove them.

Zhang, JQI Fellow Christopher Monroe and a group of experimentalists at UMD teamed up with a theory group at the University of California, Berkeley to create their time crystal. The Berkeley group, led by physicist Norman Yao, had previously proposed a way to create time crystals in the lab. For a chain of atomic ions, the challenge came down to finding the right sequence of laser pulses, along with assembling the sea of mirrors and lenses that ensured the lasers impinged on the ions in the right way.

To create their time crystal, researchers activated three types of laser-driven behavior in a chain of ten ytterbium ions. First, each ion was bombarded with its own individual laser beam, flipping an internal quantum property called spin by roughly 180 degrees with each pulse. Second, the ions were induced to interact with each other, coupling their internal spins together like two neighboring magnets. Finally, random disorder—essentially noise—was sprinkled onto each ion, a feature known from previous experiments to prevent the spins from jostling and heating up the chain.

Altogether, this sequence twisted around the ions' spins, and researchers kept track of the orientation of each spin after many repetitions of the sequence. When all three laser-driven behaviors were turned on, the spins of each ion synced up, and they would rhythmically return to their original direction at half the speed of the laser sequence.
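A stripped-down illustration of the "half the speed" response follows. This is a toy model, not the experiment: it only shows why near-180-degree flips produce a period-doubled signal, whereas the experiment's key point is that interactions and disorder make that rhythm rigid:

```python
import math

# Toy illustration of period doubling: a pulse that rotates a single
# spin by roughly 180 degrees returns the spin to its starting
# orientation only every TWO pulses -- half the drive frequency.
angle_per_pulse = math.pi * 0.97   # a deliberately imperfect ~180-degree flip
phase = 0.0
orientations = []
for pulse in range(8):
    phase = (phase + angle_per_pulse) % (2 * math.pi)
    orientations.append(math.cos(phase))  # projection onto the start axis

# After odd pulses the spin points (nearly) opposite its start; after
# even pulses it is (nearly) back where it began.
print([round(x, 2) for x in orientations])
```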

But a time crystal is more than mere repetition, and this rhythm alone would not be enough to claim one had been created, Zhang says. A crystal also needs to be rigid. "If you put a bunch of billiard balls on a pool table separated by exactly 10 centimeters, is that a crystal?" Zhang says. "Not really, because if you shake the table a little bit it will fall apart."

Zhang and his colleagues demonstrated that their ions had this rigidity by attempting to artificially "melt" the time crystal. By modifying one of the laser pulses—essentially shaking the table—they observed that the rhythm remained stable, up to a point. Past a certain amount of heating, the time crystal dissolved away, just as an ice cube can melt back into a small puddle of water. But with weak shaking, it remained stable, a fact that provided the key evidence that they had created a time crystal.

This rigidity makes time crystals a potential ingredient for clocking complex quantum systems that have inherent defects and are hard to control. They could have applications to future quantum computers, which will also need to be robust. But such applications are still a long way off, especially since the time crystal that Zhang and collaborators produced lasted less than a millisecond.

"This bizarre state of matter results from a complex interplay between many quantum controls at the individual atomic level," says Monroe. "But time crystals can also emerge in certain solid-state devices, so a general understanding of this phenomenon could help bring such systems into future quantum devices."

In the same issue of Nature, a group of researchers from Harvard University, also working with Berkeley’s Yao, reported the creation of a time crystal using just such a solid-state system. Instead of ions, they used natural defects found in diamond to set up their crystal.

Destabilized solitons perform a disappearing act
In the presence of impurities, dark solitons accelerate and vanish from sight
February 24, 2017 | Quantum Many-Body Physics, PFC

When your heart beats, blood courses through your veins in waves of pressure. These pressure waves manifest as your pulse, a regular rhythm unperturbed by the complex internal structure of the body. Scientists call such robust waves solitons, and in many ways they behave more like discrete particles than waves. Soliton theory may aid in the understanding of tsunamis, which—unlike other water waves—can sustain themselves over vast oceanic distances.

Solitons can arise in the quantum world as well. At most temperatures, gas atoms bounce around like billiard balls, colliding with each other and rocketing off into random directions. Near absolute zero, however, certain kinds of atoms suddenly start behaving according to the very different rules of quantum mechanics, and begin a kind of coordinated dance. Under pristine conditions, solitons can emerge inside these ultracold quantum fluids, surviving for several seconds.

Curious about how solitons behave in less than pristine conditions, scientists at NIST’s Physical Measurement Laboratory, in collaboration with researchers at the Joint Quantum Institute (JQI), have added some stress to a soliton’s life. They began by cooling down a cloud of rubidium atoms. Right before the gas became a homogenous quantum fluid, a radio-frequency magnetic field coaxed a handful of these atoms into retaining their classical, billiard ball-like state. Those atoms are, in effect, impurities in the atomic mix. The scientists then used laser light to push apart atoms in one region of the fluid, creating a solitary wave of low density—a “dark” soliton.

In the absence of impurities, this low-density region stably pulses through the ultracold fluid. But when atomic impurities are present, the dark soliton behaves as if it were a heavy particle, with lightweight impurity atoms bouncing off of it. These collisions make the dark soliton’s movement more random. This effect is reminiscent of Einstein’s 1905 predictions about randomized particle movement, dubbed Brownian motion.  

Guided by this framework, the scientists also expected the impurities to act like friction and slow down the soliton. But surprisingly, the dark soliton did not completely follow Einstein's rules. Instead of dragging down the soliton, collisions accelerated it to a point of destabilization. The soliton's speed limit is set by the speed of sound in the quantum fluid; upon exceeding that limit, the soliton exploded into a puff of sound waves.

This behavior made sense only after the researchers changed their mathematical perspective and treated the soliton as though it had a negative mass—a quirky phenomenon that arises in certain collective behaviors of many-particle systems. Here the negative mass is tied to the soliton's darkness: it is a dip in the quantum fluid rather than a tall, tsunami-like pulse. Particles with negative mass respond to friction forces in the opposite way from their ordinary cousins, speeding up instead of slowing down.
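The sign flip described here can be sketched in a few lines of Python (illustrative numbers, not parameters from the experiment): a drag force divided by a negative effective mass produces acceleration rather than deceleration.

```python
# Minimal sketch of drag acting on positive vs. negative effective mass.
# A drag force F = -gamma * v slows an ordinary particle, but because
# a = F / m, the same force SPEEDS UP a particle with negative mass.
# (All parameter values below are illustrative.)

def evolve(m, v0=0.1, gamma=0.5, dt=0.01, steps=200):
    """Euler-integrate dv/dt = -gamma * v / m and return the final speed."""
    v = v0
    for _ in range(steps):
        v += (-gamma * v / m) * dt
    return v

v_pos = evolve(m=+1.0)  # ordinary particle: speed decays toward zero
v_neg = evolve(m=-1.0)  # negative-mass case: speed grows instead

print(v_pos < 0.1 < v_neg)  # True
```

The runaway growth in the negative-mass case mirrors the soliton accelerating until it hits the fluid's speed of sound and destabilizes.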

"All those assumptions about Brownian motion ended up going out the window—none of it applied,” says Hilary Hurst, a graduate student at JQI and lead theorist on the paper. "But at the end we had a theory that described this behavior very well, which is really nice.”

Lauren Aycock, lead author on the paper, lauded what she saw as particularly strong feedback between theory and experiment, adding that “it’s satisfying to have this kind of successful collaboration, where measurement informs theory, which then explains experimental results.”

Solitons in the land of ultracold atoms are intriguing, say Aycock and Hurst, because they are as close as you can get to observing the interface between quantum effects and the ordinary physics of everyday life. Experiments like this may help answer a deep physics riddle: where is the boundary between classical and quantum? In addition, this result may cast light on a similar problem with solitons in optical fibers, where random noise can disrupt the precise timing needed for communication over long distances.


When is a traffic jam not a traffic jam? When it's a quantum traffic jam, of course. Only in quantum physics can traffic be standing still and moving at the same time.

A new theoretical paper from scientists at the National Institute of Standards and Technology (NIST) and the University of Maryland suggests that intentionally creating just such a traffic jam out of a ring of several thousand ultracold atoms could enable precise measurements of motion. If implemented with the right experimental setup, the atoms could provide a measurement of gravity, possibly even at distances as short as 10 micrometers—about a tenth of a human hair's width.

While the authors stress that a great deal of work remains to show that such a measurement would be attainable, the potential payoff would be a clarification of gravity's pull at very short length scales. Any anomalies could provide major clues about gravity's behavior, including why our universe appears to be expanding at an accelerating rate.

In addition to potentially answering deep fundamental questions, these atom rings may have practical applications, too. They could lead to motion sensors far more precise than previously possible, or serve as switches for quantum computers, with 0 represented by atomic gridlock and 1 by moving atom traffic.

The authors of the paper are affiliated with the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science, both of which are partnerships between NIST and the University of Maryland.

Over the past two decades, physicists have explored an exotic state of matter called a Bose-Einstein condensate (BEC), which exists when atoms overlap one another at frigid temperatures a smidgen of a degree away from absolute zero. Under these conditions, a tiny cloud of atoms can essentially become one large quantum “superatom,” allowing scientists to explore potentially useful properties like superconductivity and superfluidity more easily.

Theoretical physicists Stephen Ragole and Jake Taylor, the paper’s authors, have now suggested that a variation on the BEC idea could be used to sense rotation or even explore gravity over short distances, where other forces such as electromagnetism generally overwhelm gravity's effects. The idea is to use laser beams—already commonly used to manipulate cold atoms—to string together a few thousand atoms into a ring 10 to 20 micrometers in diameter.

Once the ring is formed, the lasers would gently stir it into motion, making the atoms circulate around it like cars traveling one after another down a single-lane beltway. And just as car tires spin as they travel along the pavement, the atoms' properties would pick up the influence of the world around them—including the effects of gravity from masses just a few micrometers away.

The ring would take advantage of one of quantum mechanics' counterintuitive behaviors to help scientists actually measure what its atoms pick up about gravity. The lasers could stir the atoms into what is called a "superposition," meaning, in effect, they would be both circulating about the ring and simultaneously at a standstill. This superposition of flow and gridlock would help maintain the relationships among the ring's atoms for a few crucial milliseconds after removing their laser constraints, enough time to measure their properties before they scatter.

Not only might this quantum traffic jam overcome a difficult gravity measurement challenge, but it might help physicists discard some of the many competing theories about the universe—potentially helping clear up a longstanding traffic jam of ideas.

One of the great mysteries of the cosmos, for example, is why it is expanding at an apparently accelerating rate. Physicists have suggested an outward force, dubbed “dark energy,” causes this expansion, but they have yet to discover its origin. One among many theories is that in the vacuum of space, short-lived virtual particles constantly appear and wink out of existence, and their mutual repulsion creates dark energy's effects. While it's a reasonable enough explanation on some levels, physicists calculate that these particles would create so much repulsive force that it would immediately blow the universe apart. So how can they reconcile observations with the virtual particle idea?

"One possibility is that the basic fabric of space-time only responds to virtual particles that are more than a few micrometers apart," Taylor said, "and that's just the sort of separation we could explore with this ring of cold atoms. So if it turns out you can ignore the effect of particles that operate over these short length scales, you can account for a lot of this unobserved repulsive energy. It would be there, it just wouldn't be affecting anything on a cosmic scale."

The research appears in the journal Physical Review Letters.

This story, originally published as a news item by NIST, was written by Chad T. Boutin.


This is part two of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from physicists at JQI and the Condensed Matter Theory Center at the University of Maryland. The second part focuses on the theoretical questions about Weyl materials that Maryland researchers are exploring. Part one, which was published last week, introduced their history and basic physics. If you haven’t read part one, we encourage you to head there first before getting into the details of part two.

The 2015 discovery of a Weyl semimetal—and the Weyl fermions it harbored—provoked a flurry of activity from researchers around the globe. A quick glance at a recent physics journal or the online arXiv preprint server testifies to the topic’s popularity. The arXiv alone has had more than 200 papers on Weyl semimetals posted in 2016.

Researchers at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland have been interested in Weyl physics since before last summer’s discovery, publishing 18 papers on the topic over the past two years. In all, more than a dozen scientists at Maryland have been working to understand the fundamental properties of these curious new materials.

In addition to studying specific topics, researchers are also realizing that the physics of Weyl fermions—particles first studied decades ago in a different setting—might have wider reach. They may account for the low-energy behavior of certain superconductors and other materials, especially those that are strongly correlated—that is, materials in which interactions between electrons cannot be ignored.

"Weyl physics should be abundant in many, many correlated materials," says Pallab Goswami, a theorist and postdoctoral researcher at JQI and CMTC. "If we can understand the exotic thermodynamic, transport and electrodynamic properties of Weyl fermions, we can probably understand more about low temperature physics in general," Goswami says.

Taking a wider approach

Goswami is not only interested in discovering new Weyl semimetals. He also wants to find strongly interacting materials where Weyl semimetal physics can help explain unresolved puzzles. That behavior is often related to the unique magnetic properties of Weyl fermions.

Recently, he and several colleagues examined a family of compounds known as pyrochlore iridates, formed from iridium, oxygen and a rare earth element such as neodymium or praseodymium. While most of these are insulators at low temperatures, the compound with praseodymium is an exception. It remains a metal and, intriguingly, has an anomalous current that flows without any external influence. This current, due to a phenomenon called the Hall effect, appears in other materials, but it is usually driven by an applied magnetic field or the magnetic properties of the material itself. In the praseodymium iridate, though, it appears even without a magnetic field, and even though experiments have detected no magnetic order in the compound.

Goswami and his colleagues have argued that Weyl fermions can account for this puzzling behavior. They can distort a material’s magnetic landscape, making it look to other particles as if a large magnetic field is there. This effect is hard to spot in the lab, though, due to the challenge of keeping samples at very cold temperatures. The team has suggested how future experiments might confirm the presence of Weyl fermions through precise measurements with a scanning tunneling microscope.

On the surface

Parallel to Goswami’s efforts to expand the applications of Weyl physics, Johannes Hofmann, a former JQI and CMTC theorist who is now at the University of Cambridge in the UK, is diving into the details of Weyl semimetals. Hofmann has studied Weyl semimetals divorced from any real material and predicted a generic behavior that electrons on the surface of a semimetal will have. It’s a feature that could ultimately find applications to electronics and photonics.

In particular, he studied undulating charge distributions on the surface of semimetals, created by regions with more electrons and regions with fewer. Such charge fluctuations are dynamic, moving back and forth in response to their mutual electrical attraction, and in Weyl semimetals they support waves that move in only one direction.

The charge fluctuations generate electric and magnetic fields just outside the surface. And on the surface, positive and negative regions are packed close together—so close, in fact, that their separation can be much smaller than the wavelength of visible light. Since these fluctuations occur on such a small scale, they can also be used to detect small features in other objects. For instance, bringing a sample of some other material near the surface will modify the distribution of charges in a way that could be measured. Getting the same resolution with light would require high-energy photons that could destroy the object being imaged. Indeed, researchers have already shown that this is a viable imaging technique, as demonstrated in experiments with ordinary metals.

On the surface of Weyl semimetals one-way waves can travel through these charge fluctuations. Ordinary metals, too, can carry waves but require huge magnetic fields to steer them in only one direction. Hofmann showed that in a Weyl semimetal, it’s possible to create these waves without a magnetic field, a fact that could enable applications of the materials to microscopy and lithography. 

Too much disorder?

Although many studies imagine that Weyl materials are perfectly clean, such a situation rarely occurs in real experiments. Contaminants inevitably lodge themselves into the ideal crystal structure of any solid. Consequently, JQI scientists have looked at how disorder—the dirt that prevents samples from behaving like perfect theoretical models—affects the properties of Weyl materials. Their work has settled an argument theorists have been having for years.

One camp thought that weak disorder—dirt that doesn’t cause big changes—was essentially harmless to Weyl semimetals, since tiny wobbles in the material's electrical landscape could safely be ignored. The other camp argued that certain fluctuations, though weak, affect a wide enough area of the landscape that they cannot be ignored.

Settling the dispute took intense numerical study, requiring the use of supercomputing resources at Maryland. "It was very hard to do this," says Jed Pixley, a postdoctoral researcher at JQI and CMTC who finally helped solve the disorder conundrum. "It turns out that the effects of large local fluctuations of the disorder are weak, but they’re there."

Pixley’s calculations found that large regions of weak disorder create a new type of low-energy excitation, in addition to Weyl fermions. These new excitations live around the disordered regions and divert energy away from the Weyl fermion quasiparticles. The upshot is that the quasiparticles have a finite lifetime, instead of the infinite lifetime predicted by previous studies. The result has consequences for the stability of Weyl semimetals in a technical sense, although the lifetime of the quasiparticles is still quite long. In typical experiments, the effects of large areas of disorder would be tough to spot, although experiments on Weyl semimetals are still in their early days.

Research into Weyl materials shows little sign of slowing down. And the broader role that Weyl fermions play in condensed matter physics is still evolving and growing, with many more surprises likely in the future. As more and more experimental groups join the hunt for exotic physics, theoretical investigations, like those of the scientists at JQI and CMTC, will be crucial to identifying new behaviors and suggesting new experiments, steering the study of Weyl physics toward new horizons.


This is part one of a two-part series on Weyl semimetals and Weyl fermions, newly discovered materials and particles that have drawn great interest from researchers at JQI and the Condensed Matter Theory Center at the University of Maryland. The first part focuses on the history and basic physics of these materials. Part two focuses on theoretical work at Maryland.

For decades, particle accelerators have grabbed headlines while smashing matter together at faster and faster speeds. But in recent years, alongside the progress in high-energy experiments, another realm of physics has been taking its own exciting strides forward.

That realm, which researchers call condensed matter physics, studies chunks of matter moving decidedly slower than the protons in the LHC. In fact, the materials under study—typically solids or liquids—are usually sitting still. That doesn't make them boring, though. Their calm appearance can often hide exotic physics that arises from their microscopic activity.

"In condensed matter physics, the energy scales are much lower," says Pallab Goswami, a postdoctoral researcher at JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland. "We want to go to lower energies and find new phenomena, which is exactly the opposite of what is done in particle physics."

Historically, that's been a fruitful approach. The field has explained the physics of semiconductors—like the silicon that makes computer chips—and many superconductors, which generate the large magnetic fields required for clinical MRI machines.

Over the past decade, that success has continued. In 2004, researchers at the University of Manchester in the UK discovered a way to make single-atom-thick sheets of carbon by sticking Scotch tape onto graphite and peeling it off. It was a shockingly low-tech way to make graphene, a material with stellar electrical properties and incredible strength, and it led quickly to a Nobel Prize in physics in 2010.

A few years later, researchers discovered topological insulators, materials that trap their internal electrons but let charges on the surface flow freely. It’s a behavior that requires sophisticated math to explain—math that earned three researchers a share of the 2016 Nobel Prize in physics for theoretical discoveries that ultimately explain the physics of these and other materials.

In 2012, experimentalists studying the junction between a superconductor and a thin wire spotted evidence for Majorana fermions, particles that behave like uncharged electrons. Originally studied in the context of high-energy physics, these exotic particles never showed up in accelerators, but scientists at JQI predicted that they might make an appearance at much lower energies.

Last year, separate research groups at Princeton University, MIT and the Chinese Academy of Sciences discovered yet another exotic material—a Weyl semimetal—and with it yet another particle: the Weyl fermion. It brought an end to a decades-long search that began in the 1930s and earned acclaim as a top-10 discovery of the year, according to Physics World.

Like graphene, Weyl semimetals have appealing electrical properties and may one day make their way into electronic devices. But, perhaps more intriguingly for theorists, they also share some of the rich physics of topological insulators and have provoked a flurry of new research. Scientists working with JQI Fellow Sankar Das Sarma, the Director of CMTC, have published 18 papers on the subject since 2014.

Das Sarma says that the progress in understanding solid state materials over the past decade has been astonishing, especially the discovery of phenomena researchers once thought were confined to high-energy physics. “It shows how clever nature is, as concepts and laws developed in one area of physics show up in a completely disparate area in unanticipated ways,” he says.

An article next week will explore some of the work on Weyl materials at JQI and CMTC. This week's story will focus on the fundamental physics at play in these unusual materials.

Spotted at long last

Within two weeks last summer, three research groups reported evidence for Weyl semimetals. Teams from the US and China measured the energy of electrons on the surface of tantalum arsenide, a metallic crystal that some had predicted might be a semimetal. By shining light on the material and capturing electrons ejected from the sample, researchers were able to map out a characteristic sign of Weyl semimetals—a band of energies that electrons on the surface inhabit, known as a Fermi arc. It was a feature predicted only for Weyl semimetals.

Much of the stuff on Earth, from wood and glass to copper and water, is not a semimetal. It's either an insulator, which does a bad job of conducting electricity, or a conductor, which lets electrical current flow with ease.

Quantum physics ultimately explains the differences between conductors, insulators, semiconductors and semimetals. The early successes of quantum physics—like explaining the spectrum of light emitted by hydrogen atoms—revolved around the idea that quantum objects have discrete energy levels. For instance, in the case of hydrogen, the single electron orbiting the nucleus can only occupy certain energies. The pattern of light emanating from hot hydrogen gas matches up with the spacing between these levels.
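The hydrogen example can be made concrete with the standard Bohr energy-level formula, a textbook result included here for illustration:

```latex
% Bohr energy levels of hydrogen and the resulting photon energies
E_{n} = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots
\qquad
E_{\text{photon}} = E_{n_i} - E_{n_f}
  = 13.6\ \text{eV}\left(\frac{1}{n_f^{2}} - \frac{1}{n_i^{2}}\right)
```

Each emitted photon carries away exactly the energy difference between two such levels, which is why hot hydrogen gas glows at a discrete set of wavelengths rather than a continuous spectrum.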

In a solid, which has many, many atoms, electrons still occupy a discrete set of energies. But with so many electrons come many more levels, and those levels tend to bunch together. This leads to a series of energy bands, where electrons can live, and gaps, where they can't. The figure below illustrates this.

Electrons pile into these bands, filling up the allowed energies and skipping the gaps. Depending on where in the band structure the last few electrons sit, a material will have dramatically different electrical behavior. Insulators have an outer band that is completely filled up, with an energy gap to a higher empty band. Metals have their most energetic electrons sitting in a partially filled band, with lots of slightly higher energies to jump to if they are prodded by a voltage from a battery.

A Weyl semimetal is a different beast. There, electrons pile in and completely fill a band, but there is no gap to the higher, unfilled band. Instead, the two touch at isolated points, which are responsible for some interesting properties of Weyl materials.
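This band touching can be sketched with the standard two-band model of a single Weyl node, H(k) = v k·σ, where σ are the Pauli matrices. The velocity v and the sample momenta below are illustrative:

```python
import numpy as np

# Minimal two-band model of a single Weyl node: H(k) = v * k . sigma.
# Its two energy bands are E(k) = +/- v|k|, so the gap between them
# closes only at the isolated point k = 0 -- the band touching that
# defines a Weyl semimetal.

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

def gap(k, v=1.0):
    """Energy gap between the two bands at momentum k = (kx, ky, kz)."""
    H = v * (k[0] * sx + k[1] * sy + k[2] * sz)
    E = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return E[1] - E[0]

print(gap([0.0, 0.0, 0.0]))  # 0.0 -> the bands touch at the node
print(gap([0.3, 0.0, 0.0]))  # 0.6 -> a gap of 2*v*|k| away from it
```

Away from the node the two bands separate linearly in every direction, which is why the touching is confined to isolated points rather than lines or surfaces.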

Quasiparticles lead the charge

We often think of electrons as carrying the current in a wire, but that’s not the whole story. The charge carriers look like electrons, but due to their microscopic interactions they behave like they have a different mass. These effective charge carriers, which have different properties in different materials, are called quasiparticles.

By examining a material's bands and gaps, it's possible to glean some of the properties of these quasiparticles. For a Weyl semimetal, the charge carriers satisfy an equation first studied in 1929 by a German mathematician named Hermann Weyl, and they are now called Weyl fermions.

But the structure of the bands doesn't capture everything about the material, says Johannes Hofmann, a former postdoctoral researcher at JQI and CMTC who is now at the University of Cambridge. "In a sense, these Weyl materials are very similar to graphene," Hofmann says. "But they are not only described by the band structure. There is a topological structure as well, just as in topological insulators."

Hofmann says that although the single-point crossings in the bands play an important role, they don't tell the whole story. Weyl semimetals also have a topological character, which means that the overall shape of the bands and gaps, as well as the way electrons spread out in space, affect their properties. Topology can account for these other properties by capturing the global differences in these shapes, like the distinction between an untwisted ribbon and a Moebius strip.

The interplay between the topological structure and the properties of Weyl materials is an active area of research. Experiments, though, are still in the earliest stages of sorting out these questions.

A theorist’s dream

Researchers at JQI and elsewhere are studying many of the theoretical details, from the transport properties on the surfaces of Weyl materials to the emergence of new types of order. They are even finding Weyl physics useful in tackling condensed matter quandaries that have long proved intractable.

Jed Pixley, a postdoctoral researcher at CMTC, has studied how Weyl semimetals behave in the presence of disorder. Pixley says that such investigations are crucial if Weyl materials are to find practical applications. "If you are hoping semimetals have these really interesting aspects,” he says, “then things better not change when they get a little dirty."

Please return next week for a sampling of the research into Weyl materials underway at JQI and CMTC. Written by Chris Cesare with illustrations and figures by Sean Kelley.


Theoretical physicists studying the behavior of ultracold atoms have discovered a new source of friction, dispensing with a century-old paradox in the process. Their prediction, which experimenters may soon try to verify, was reported recently in Physical Review Letters.

The friction afflicts certain arrangements of atoms in a Bose-Einstein Condensate (BEC), a quantum state of matter in which the atoms behave in lockstep. In this state, well-tuned magnetic fields can cause the atoms to attract one another and even bunch together, forming a single composite particle known as a soliton.

Solitons appear in many areas of physics and are exceptionally stable. They can travel freely, without losing energy or dispersing, allowing theorists to treat them like everyday, non-quantum objects. Solitons composed of photons—rather than atoms—are even used for communication over optical fibers.

Studying the theoretical properties of solitons can be a fruitful avenue of research, notes Dmitry Efimkin, the lead author of the paper and a former JQI postdoctoral researcher now at the University of Texas at Austin. “Friction is very fundamental, and quantum mechanics is now quite a well-tested theory,” Efimkin says. “This work investigates the problem of quantum friction for solitons and marries these two fundamental areas of research.”

Efimkin, along with JQI Fellow Victor Galitski and Johannes Hofmann, a physicist at the University of Cambridge, sought to answer a basic question about soliton BECs: Does an idealized model of a soliton have any intrinsic friction?

Prior studies seemed to say no. Friction arising from billiard-ball-like collisions between a soliton and stray quantum particles was a possibility, but the mathematics prohibited it. For a long time, then, theorists believed that the soliton moved through its cloudy quantum surroundings essentially untouched.

But those prior studies did not give the problem a full quantum consideration, Hofmann says. “The new work sets up a rigorous quantum-mechanical treatment of the system,” he says, adding that this theoretical approach is what revealed the new frictional force.

The friction is familiar from a very different branch of physics. When a charged particle, such as an electron, is accelerated, it emits radiation. A long-known consequence is that the electron experiences a friction force as it accelerates, caused by the recoil from the radiation it releases.

Unlike friction such as air resistance, which is proportional to speed, this force depends on the jerk—the rate at which the electron's acceleration is changing. Intriguingly, the same frictional force appears in the quantum treatment of the soliton, with the soliton's absorption and emission of quantum quasiparticles playing the role of the electron's emission of radiation.
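For context, the textbook form of this radiation-reaction force—the Abraham–Lorentz force, written here in SI units—makes the jerk dependence explicit, in contrast to an ordinary velocity-proportional drag:

```latex
% Radiation reaction on a charge q (Abraham--Lorentz force) vs. ordinary drag
\mathbf{F}_{\mathrm{rad}} = \frac{q^{2}}{6\pi\varepsilon_{0}c^{3}}\,\dot{\mathbf{a}}
\qquad \text{versus} \qquad
\mathbf{F}_{\mathrm{drag}} = -b\,\mathbf{v}
```

Because the force depends on the time derivative of the acceleration, the equation of motion is third order in time, which is the mathematical root of the causality puzzle the paragraphs below describe.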

At the heart of this frictional force, however, lurks a problem. Including it in the equations describing the soliton’s motion—or an accelerated electron’s—reveals that the motion in the present depends on events in the future, a result that inverts the standard concept of causality. It’s a situation that has puzzled physicists for decades.

The team tracked down the origin of these time-bending predictions and dispensed with the paradox. The problem arises from a step in the calculation that assumes the friction force only depends on the current state of the soliton. If, instead, it also depends on the soliton’s past trajectory, the paradox disappears.

Including this dependence on the soliton’s history leads to nearly the same equations governing its motion, and those equations still include the new friction. It’s as if the quantum background retains a memory of the soliton’s path.

Hofmann says that BECs provide a pristine system to search for the friction. Experimenters can apply lasers that set the atomic soliton in motion, much like a marble rolling around a bowl—although the bowl is tightly squeezed in one dimension. Observing the frequency and amplitude of this motion, as well as how it changes over time, could reveal the friction’s signature. “Using some typical experimental parameters, we think that the magnitude of this force is large enough to be observable in current experiments,” Hofmann says.

Infographic credit: S. Kelley/NIST and C. Cesare/JQI


Nature doesn’t have the best memory. If you fill a box with air and divide it in half with a barrier, it’s easy to tell molecules on the left from molecules on the right. But after removing the barrier and waiting a short while, the molecules get mixed together, and it becomes impossible to tell where a given molecule started. The air-in-a-box system loses any memory of its initial conditions.

The universe has been forgetting its own initial state since the Big Bang, a fact linked to the unrelenting forward march of time. Systems that forget where they started are said to have thermalized, since it is often—but not always—an exchange of heat and energy with some other system that causes the memory loss. For example, a melting ice cube forgets its orderly arrangement of water molecules when heat from its surroundings splits the cube’s crystal bonds. In some sense, the initial information about the ice cube—the structure of the crystal, the distance between molecules, etc.—leaks away.

The opposite case is localization, where information about the initial arrangement sticks around. Such a situation is rare, like an ice cube that never melts, but one example is Anderson localization, in which particles or waves in a crystal are trapped near impurities. They tend to bounce off defects in the crystal and scatter in random directions, yielding no net movement. If there are enough impurities in a region, the particles or waves never escape.
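Anderson localization can be illustrated with a minimal numerical sketch: a quantum particle hopping on a one-dimensional chain of sites, with random site energies standing in for the crystal impurities. All parameters here are illustrative choices, not values from any experiment.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 201          # lattice sites
J = 1.0          # hopping amplitude between neighboring sites
W = 4.0          # disorder strength (spread of random site energies)
t = 30.0         # evolution time, in units of 1/J

def spread_after_evolution(disorder_strength):
    """Evolve a particle starting at the center site; return its RMS spread."""
    onsite = disorder_strength * (rng.random(N) - 0.5)   # random impurity energies
    H = np.diag(onsite) - J * (np.eye(N, k=1) + np.eye(N, k=-1))
    vals, vecs = np.linalg.eigh(H)
    psi0 = np.zeros(N)
    psi0[N // 2] = 1.0
    # |psi(t)> = V exp(-i E t) V^dagger |psi(0)>
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
    prob = np.abs(psi_t) ** 2
    sites = np.arange(N)
    mean = (prob * sites).sum()
    return np.sqrt((prob * (sites - mean) ** 2).sum())

clean = spread_after_evolution(0.0)   # no impurities: the wavepacket spreads freely
dirty = spread_after_evolution(W)     # strong disorder: the wavepacket stays pinned

print(f"clean spread:      {clean:.1f} sites")
print(f"disordered spread: {dirty:.1f} sites")
```

In the clean chain the particle spreads across dozens of sites, while with disorder it remains trapped within a few sites of where it started, never escaping.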

Since the discovery of Anderson localization in 1958, it has been an open question whether interacting collections of quantum particles can also localize, a phenomenon known as many-body localization. Now, researchers working with JQI and QuICS Fellow Christopher Monroe have directly observed this localization in a system of 10 interacting ions, trapped and zapped by electric fields and lasers. Their findings are one of the first direct observations of many-body localization in a quantum system, and they open up the possibility of studying the phenomenon with more ions. The results were published June 6 in Nature Physics.

Although it is possible to simulate the behavior of 10 ions with an ordinary computer, the experiment is an important step toward studying many-body localization with dozens of ions. At that point, an accurate simulation is out of reach for even the fastest of today’s machines. It could also shed light on a question that has vexed physicists since the early days of the quantum theory: When does a given quantum system thermalize?

“The transition of quantum systems from thermalized to localized represents a boundary between states governed at long times by quantum mechanics and ones that follow classical physics,” says Jake Smith, a graduate student at JQI and the first author of the paper. “It’s important to know if a given quantum system will thermalize because if it does you can use techniques from classical physics to predict its long-time behavior. That doesn’t require full knowledge of the system and is easier than making predictions with quantum mechanics.”

Smith and his colleagues searched for signs of localization by introducing some disorder into a chain of 10 ytterbium ions. The suspicion was that this disorder would act like the crystal impurities responsible for Anderson localization, preventing information about each ion from dispersing throughout the chain.

In this case, the information is the initial state of each ion’s quantum spin, a property that makes it act like a tiny magnet. This spin can point up, down or anywhere in between, and the ion can absorb and emit photons from a laser, changing the direction its spin points. Lasers with the right color and power also let spins interact with each other in a controllable way. By applying the right lasers, researchers could dial in the strength of this interaction, as well as how far it reached. This controllability is one of the major advantages of studying many-body localization with trapped ions.

By focusing a powerful laser to a diameter of just over a micron, the team also applied a random shift to the magnetic environment of each spin, creating the necessary disorder. Then, they tuned the strength of the interactions relative to the size of this disorder and traced the emergence of localization. They performed many experiments with random amounts of disorder, preparing each spin to point either up or down and then measuring all of the spins after a certain amount of time to see where they pointed.

Without disorder, the spins rapidly lost any signature of their initial direction. At the end of the disorder-free runs, each spin was just as likely to point up as it was to point down. However, as they cranked up the disorder, the spins started to retain information about their initial state. If a spin started out pointing up, it was more likely to be measured pointing up, and the same was true for spins initially pointing down. This local memory of the initial condition even persisted through the time it took for several spin interactions to occur.

“This work is a major advance in quantum simulation as our platform can be scaled to dozens of ions, where detailed modeling becomes impossible due to the complexity of many-body quantum states,” Smith says. “Moreover, the high degree of control in our experiment opens the possibility of using these many-body localized states as potential quantum information memories.”

In addition to Smith and Monroe, several other authors contributed to the paper: Aaron Lee, a graduate student at JQI; Phil Richerme, an assistant professor at Indiana University; Brian Neyenhuis, a postdoctoral researcher at JQI; Paul Hess, a postdoctoral researcher at JQI; Philipp Hauke, a postdoctoral researcher at the University of Innsbruck; Markus Heyl, a researcher at the Technical University of Munich; and David Huse, a professor at Princeton University.

Read More

The swirling field of a magnet—rendered visible by a sprinkling of iron filings—emerges from the microscopic behavior of atoms and their electrons. In permanent magnets, neighboring atoms align and lock into place to create inseparable north and south poles. For other materials, magnetism can be induced by a field strong enough to coax atoms into alignment.

In both cases, atoms are typically arranged in the rigid structure of a solid, glued into a grid and prevented from moving. But the team of JQI Fellow Ian Spielman has been studying the magnetic properties of systems whose tiny constituents are free to roam around—a phenomenon called “itinerant magnetism." 

“When we think of magnets, we usually think of some lattice,” says graduate student Ana Valdés-Curiel. Now, in a new experiment, Valdés-Curiel and her colleagues have seen the signatures of itinerant magnetism arise in a cold cloud of rubidium atoms.

The team mapped out the magnetic properties of their atomic cloud, probing the transition between unmagnetized and magnetized phases. Using interfering lasers, the researchers dialed in magnetic fields and observed the atoms’ responses. The experiment, which was the first to directly observe magnetic properties that result from the particles’ motion, was reported March 30 in Nature Communications.

Physicists often study phase transitions, which illuminate the large-scale consequences of microscopic behavior. For instance, liquid water looks very different after it’s frozen or boiled—a result of how temperature affects the motion of atoms. Similarly, permanent magnets lose their magnetic properties when they heat up; the energy that atoms draw from their hot surroundings can overwhelm the bonds that keep their tiny magnetic poles aligned.

To explore magnetism in a cloud of rubidium, the JQI researchers first cooled their atoms down. Because the atoms were so cold, they inhabited only three low-lying quantum energy levels, or states. These states, labeled by a quantum property called spin, interacted differently with magnetic fields. Two of the three states tried to align with or against an applied field, while the third completely ignored it.

The team illuminated the cloud with two interfering lasers—similar to previous experiments in which neutral atoms were made to act like they had a charge—and created an effective magnetic field in the twisting shape of a helix. They then varied the intensity and frequency of the lasers and observed how the atoms responded. After tuning their lasers and allowing the atoms to settle into stable states, the researchers let them fall and observed how the different spin states separated. This allowed them to measure the fraction of atoms that were in magnetic states, a signature of how magnetized the cloud had become.

Three distinct phases emerged, corresponding to different settings for the two laser parameters. When one laser’s frequency was shifted higher and both lasers had relatively low intensities, the atoms sat in their non-magnetic state, unperturbed by the fields. As the frequency shift was turned down and eventually flipped—so that the second laser’s frequency was higher—atoms preferred to fall into one of the two magnetic states, leading to an increase in the atoms’ motion. Atoms even grouped together by state, leading to magnetic domains similar to those that appear in ordinary magnets.

But this magnetic ordering collapsed when the laser intensity was ramped up. In that case, the atoms occupied all three states, but they didn’t bunch into aligned patches as in the magnetized case. Instead, their spins pointed along the local direction of the effective magnetic field created by the lasers.

By scanning many parameter paths, the researchers mapped the magnetic and non-magnetic phases of the rubidium cloud, and they found that the experimental results closely matched theoretical predictions about how rubidium atoms would behave.

The results here open the door to a more detailed study of the magnetic phase transitions in neutral atoms, as well as experiments that study the interactions between mobile magnetic particles. “This experiment is the first example of a magnetic system where the motion of the particles—here atoms—was essential for the magnetic physics,” Spielman says. "Our measurements of spin-orbit coupled bosons pave the way for similar experiments with fermions—mimicking electrons in materials—that might one day help to create new types of magnetic materials."

Infographic credit: S. Kelley/JQI

Read More
  • Rogue rubidium leads to atomic anomaly
  • Unexpected high-energy atoms illuminate the physics of potential quantum processors
  • March 16, 2016 Quantum Many-Body Physics, PFC

The behavior of a few rubidium atoms in a cloud of 40,000 hardly seems important. But a handful of the tiny particles with the wrong energy may cause a cascade of effects that could impact future quantum computers.

Some proposals for quantum devices use Rydberg atoms—atoms with highly excited electrons that roam far from the nucleus—because they interact strongly with each other and offer easy handles for controlling their individual and collective behavior. Rubidium is one of the most popular elements for experimenting with Rydberg physics.

Now, a team of researchers led by JQI Fellows Trey Porto, Steven Rolston and Alexey Gorshkov have discovered an unwanted side effect of trying to manipulate strongly interacting rubidium atoms: When they used lasers to drive some of the atoms into Rydberg states, they excited a much larger fraction than expected. The creation of too many of these high-energy atoms may result from overlooked “contaminant” states and could be problematic for proposals that rely on the controlled manipulation of Rydberg atoms to create quantum computers. The new results were published online March 16 in Physical Review Letters.

“Rydberg atoms interact strongly, which is good for quantum applications,” Porto says. “However, such strong interactions with atoms in the wrong state can impact their usefulness, and our work takes a closer look at this Rydberg physics.”

Playing with atoms

Manipulating atoms is a delicate game. That’s because electrons, bound by their charge to protons in the nucleus, orbit with discrete energy levels determined by quantum physics. Electrons can ascend or descend through these levels as if on an energy ladder, but only when an atom absorbs or emits a packet of light, known as a photon, that is tuned to the energy difference between rungs.

For a single atom, the spacing between energy levels is sharply defined, and photons with slightly different energies—those that don’t quite match the gap between rungs—will excite an atom weakly or not at all. But when many atoms interact, the definite spacing gets smeared out—an example of an effect called broadening. This broadening allows a wider band of energies to excite the atoms, which is precisely the issue that the researchers ran into.
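As a rough sketch of why broadening matters, the snippet below models the excitation probability with a simple Lorentzian lineshape (an idealization; the actual broadening mechanisms are more complicated and the numbers are arbitrary). A laser detuned five natural linewidths from resonance barely excites a narrow line but strongly excites a line broadened tenfold.

```python
def excitation_probability(detuning, linewidth):
    """Relative excitation probability for a Lorentzian line of given width."""
    return (linewidth / 2) ** 2 / (detuning ** 2 + (linewidth / 2) ** 2)

gamma0 = 1.0          # single-atom linewidth, arbitrary units
delta = 5.0 * gamma0  # laser detuned five natural linewidths from resonance

narrow = excitation_probability(delta, gamma0)        # sharp, single-atom line
broadened = excitation_probability(delta, 10 * gamma0)  # interaction-broadened line

print(f"detuned drive, narrow line:    {narrow:.4f}")
print(f"detuned drive, broadened line: {broadened:.4f}")
```

The broadened line responds dozens of times more strongly to the same off-resonant light, which is the qualitative mechanism behind the excess Rydberg atoms.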

The initial discovery was an accident. “I kept changing the energy and intensity of our lasers, and I just kept seeing the same thing,” says Elizabeth Goldschmidt, a post-doctoral researcher at JQI and the first author of the paper. “You just can’t beat it.”

In an experiment to learn more about this broadening, Goldschmidt and her colleagues began by cooling a cloud of rubidium atoms and trapping them in a 3D grid using interfering beams of light. Then, they used lasers to bathe the entire cloud in photons whose energies could bridge the gap between particular low- and high-energy rungs. Even when varying the laser intensity and the density of atoms over many experiments, the researchers found that they continued to create more Rydberg atoms than expected, an indication that the atomic transition had been broadened.

A possible explanation

The team looked to causes of broadening typically encountered in the lab, such as short-range interactions between atoms or imperfections in laser beams, to explain what they were seeing. But nothing captured the magnitude of the effect.

Although the experiment did not provide direct evidence for the cause, the team suspects that a small fraction of atoms in other Rydberg levels contaminated the excitation process by interacting with clean atoms and broadening their transitions. The first few contaminants, created by stray photons in the environment, led to additional excited atoms and more contaminants, a runaway process that quickly produced many more excited atoms.

It’s as if the atoms are sampling different shifts to their energy levels because of the changing configuration of these unavoidable contaminants, Goldschmidt says. This causes atoms that wouldn’t otherwise get excited to absorb photons and hop up to the higher energy level, creating more Rydberg atoms.

Such broadening is a challenge for some Rydberg atom-based proposals. Many proposals call for using Rydberg atoms trapped in a lattice to create quantum computers or general purpose quantum simulators that could be programmed to mimic complex physics ordinarily too hard to study in a lab. Rydberg atoms are a favorite platform because they have strong interactions and they don’t need to be right next to each other to interact.

But the broadening discovered here prompts a closer look at these proposals. Some that don’t use Rydberg states directly, but instead use weakly excited Rydberg states to gain some of their advantages and avoid some drawbacks, could also face challenges. “Even with weak lasers that barely excite to the Rydberg state, you still get these contaminants,” Goldschmidt says. “A better understanding of this broadening is important for trying to build Rydberg-based devices for quantum information processing.”

Infographic credit: S. Kelley/JQI

Read More

When it comes to quantum physics, light and matter are not so different. Under certain circumstances, negatively charged electrons can fall into a coordinated dance that allows them to carry a current through a material laced with imperfections. That motion, which can only occur if electrons are confined to a two-dimensional plane, arises due to a phenomenon known as the quantum Hall effect.

Researchers, led by Mohammad Hafezi, a JQI Fellow and assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, have made the first direct measurement that characterizes this exotic physics in a photonic platform. The research was published online Feb. 22 and featured on the cover of the March 2016 issue of Nature Photonics. These techniques may be extended to more complex systems, such as one in which strong interactions and long-range quantum correlations play a role.

Symmetry and Topology

Physicists use different approaches to classify matter; symmetry is one powerful method. For instance, the microscopic structure of a material like diamond looks the same even while shifting your gaze to a new spot in the crystal. These symmetries – the rotations and translations that leave the microscopic structure the same – predict many of the physical properties of crystals.

Symmetry can actually offer a kind of protection against disruptions. Here, the word protection means that the system (e.g. a quantum state) is robust against changes that do not break the symmetry. Recently, another classification scheme based on topology has gained significant attention. Topology is a property that depends on the global arrangement of particles that make up a system rather than their microscopic details. The excitement surrounding this mathematical concept has been driven by the idea that the topology of a system can offer a stability bubble around interesting and even exotic physics, beyond that of symmetry. Physicists are interested in harnessing states protected by both symmetry and topology because quantum devices must be robust against disturbances that can interfere with their functionality.

The quantum Hall effect is best understood by peering through the lens of topology. In the 1980s, physicists discovered that electrons in some materials behave strangely when subjected to large magnetic fields at extreme cryogenic temperatures. Remarkably, the electrons at the boundary of the material will flow along avenues of travel called ‘edge states’, protected against defects that are most certainly present in the material. Moreover, the conductance--a measure of the current--is quantized. This means that as the magnetic field is ramped up, the conductance does not change smoothly. Instead it stays flat, like a plateau, and then suddenly jumps to a new value. The plateaus occur at precise values that are independent of many of the material’s properties. This hopping behavior is a form of precise quantization and is what gives the quantum Hall effect its great utility, allowing it to provide the modern standard for calibrating resistance in electronics, for instance.

Researchers have engineered quantum Hall behavior in other platforms besides the solid-state realm in which it was originally discovered. Signatures of such physics have been spotted in ultracold atomic gases and photonics, where light travels in fabricated chips. Hafezi and colleagues have led the charge in the photonics field.

The group uses a silicon-based chip that is filled with an array of ring-shaped structures called resonators. The resonators are connected to each other via waveguides (figure). The chip design strictly determines the conditions under which light can travel along the edges rather than through the inner regions. The researchers measure the transmission spectrum, which is the fraction of light that successfully passes through an edge pathway. To circulate unimpeded through the protected edge modes, the light must possess a certain energy. The transmission increases when the light energy matches this criterion. For other parameters, the light will permeate the chip interior or get lost, causing the transmission signal to decrease. The compiled transmission spectrum looks like a set of bright stripes separated by darker regions (see figure). Using such a chip, this group previously collected images of light traveling in edge states, definitively demonstrating the quantum Hall physics for photons.

In this new experiment Hafezi’s team modified their design to directly measure the value of the topology-related property that characterizes the photonic edge states. This measurement is analogous to characterizing the quantized conductance, which was critical to understanding the electronic quantum Hall effect. In photonics, however, conductance is not relevant as it pertains to electron-like behavior. Here the significant feature is the winding number, which is related to how light circulates around the chip. Its value equals the number of available edge states and should not change in the face of certain disruptions.

To extract the winding number, the team adds 100 nanometer titanium heaters on a layer above the waveguides. Heat changes the index of refraction, namely how the light bends as it passes through the waveguides. In this manner, researchers can controllably imprint a phase shift onto the light. Phase can be thought of in terms of a time delay. For instance, when comparing two light waves, the intensity can be the same, but one wave may be shifted in time compared to the other. The two waves overlap when one wave is delayed by a full oscillation cycle—this is called a 2π phase shift.
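The idea of a 2π phase shift can be checked with a few lines of code: a wave delayed by one full oscillation cycle lies exactly on top of the original, while a partial-cycle delay does not.

```python
import numpy as np

t = np.linspace(0.0, 4.0, 1001)           # four oscillation periods
wave = np.sin(2 * np.pi * t)
quarter = np.sin(2 * np.pi * (t - 0.25))  # delayed by a quarter cycle
full = np.sin(2 * np.pi * (t - 1.0))      # delayed by one full cycle: a 2*pi shift

mismatch_quarter = np.max(np.abs(wave - quarter))
mismatch_full = np.max(np.abs(wave - full))
print(f"quarter-cycle delay, max mismatch: {mismatch_quarter:.3f}")
print(f"full-cycle delay, max mismatch:    {mismatch_full:.2e}")
```

The full-cycle mismatch is zero to numerical precision, confirming that a 2π shift returns the wave to itself.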

On the chip, enough heat is applied to add a 2π phase shift to the light. The researchers observe an energy shift in the transmission stripes corresponding to light traveling along the edge. Notably, in this chip design, the light can circulate either clockwise (CW) or counterclockwise (CCW), and the two travel pathways do not behave the same (in contrast to an interferometer). When the phase shift is introduced, the CW traveling light hops one direction in the transmission spectrum, and the CCW goes the opposite way. The winding number is the amount by which these edge-state spectral features move, and it is exactly analogous to the quantized jumps in the electronic conductance.

Sunil Mittal, lead author and postdoctoral researcher explains one future direction, “So far, our research has been focused on transporting classical [non-quantum] properties of light--mainly the power transmission. It is intriguing to further investigate if this topological system can also achieve robust transport of quantum information, which will have potential applications for on-chip quantum information processing.” 

This text was written by E. Edwards/JQI

Read More

Particles can be classified as bosons or fermions. A defining characteristic of a boson is its ability to pile into a single quantum state with other bosons. Fermions are not allowed to do this. One broad impact of fermionic anti-social behavior is that it allows for carbon-based life forms, like us, to exist. If the universe were solely made from bosons, life would certainly not look like it does. Recently, JQI theorists* have proposed an elegant method for achieving transmutation--that is, making bosons act like fermions. This work was published in the journal Physical Review Letters.

This transmutation is an example of emergent behavior, specifically what’s known as quasiparticle excitations—one of the concepts that make condensed matter systems so interesting. Particles by themselves have mostly well-defined characteristics, but en masse, can work together such that completely distinctive, even exotic phenomena appear. Typically collective behaviors are difficult to study because the large numbers of real particles and all of their interactions are computationally challenging and in many cases prohibitive. JQI Fellow Victor Galitski explains, “The whole idea of emergent excitations is that the quasiparticles are fundamentally different from the actual individual particles. But this actually doesn’t happen that often.” In this case, it turns out that the boson-to-fermion transmutation leads to an interesting phase of matter. Galitski continues, “Here, the bosons don’t condense--they instead form a state without long-range order. This is an example of a long-sought-after state of matter called a Bose liquid.”

In this research, the authors propose a method for realizing and observing such unusual excitations--here the fermionic quasiparticles. The experiment harnesses the strengths of atom-optical systems, such as using bosons (which are easier to work with), a relatively simple lattice geometry (made from lasers that are the workhorses of atomic physics), and established measurement techniques. Galitski continues, “In some sense this was motivated by an experiment where researchers shook a one-dimensional lattice, and it appears that the experiment we propose here is not beyond the capabilities of current work.”

Here, the central technique also involves taking an optical lattice made from laser light and shaking it back-and-forth. An atom-optical lattice system, analogous to a crystal, has a periodic structure. Laser beams criss-cross to form standing waves of light that resemble an egg carton. Atoms interact with the light such that they are drawn to the valleys of the egg carton.  Like a true solid, this system has an accompanying band-structure, which describes the allowed energies that atoms within the lattice can take on. Without the lattice present, trapped ultracold bosons form a state of matter called a Bose-Einstein condensate. Not much changes when a typical optical lattice is turned on--the bosons will still collect into the lowest energy state and still be in this condensate form. For the simplest lattice configurations, this state corresponds to a single point on a nearly-flattened parabola in the band structure. This configuration is actually the starting point of many atomic physics experiments. Physicists are interested in modifying the energy bands to perhaps uncover more complex phases of matter. To do this, lattice properties must be altered.

In this work, the authors seek to achieve transmutation, and are among those that have previously shown that one way to accomplish this is to construct a lattice whose band structure looks like a moat. (The word moat here means what it did in medieval times--a trench around a structure.)

Lead author and postdoctoral researcher Tigran Sedrakyan explains the significance of the moat, "The moat is instrumental in achieving this statistical transmutation because it appears that the fermions in a moat-band may actually have lower energy than condensed bosons have, enforcing the constituent bosons to transmute."

It turns out that getting the requisite moat to appear has not been so easy. Surprisingly, in this new work, the team found that if, instead of modifying the lattice geometry itself, they take a simple two-dimensional lattice and shake it back and forth, then a moat appears in what was otherwise an unremarkable, almost flat band structure. The rate of shaking is specially chosen such that the bands undergo this transformation.

The particles themselves do not actually change from bosons to fermions. What’s happening is that the environment of the lattice is modifying the bosonic behavior. When the lattice is quivering periodically at a specially determined frequency, the bosons act as if they are governed by fermionic statistics. In the new band structure, the bosons do not form a condensate.

*Research affiliations: Victor Galitski is a JQI Fellow and also a member of UMD’s Condensed Matter Theory Center (CMTC). Tigran Sedrakyan is a postdoctoral researcher split between the Physics Frontier Center at JQI and the University of Minnesota. Alex Kamenev is also at University of Minnesota.

Written by E. Edwards/JQI

Read More

For many years rubidium has been a workhorse in the investigation of ultracold atoms. Now JQI scientists are using Rb to cool another species, ytterbium, an element prized for its possible use in advanced optical clocks and in studying basic quantum phenomena. Yb is useful in another way as well: it comes in numerous isotopes, some bosonic and some fermionic.

Yb-171 has proven satisfactorily amenable to cooling in the atom trap lab of Steve Rolston and Trey Porto. First, Rb-87 atoms are loaded into a magneto-optic trap---an enclosure where magnetic fields and laser beams are used to confine atoms---and then cooled until they form a Bose-Einstein condensate (BEC). Slow-moving Yb atoms, in contact with the Rb atoms, are cooled right along with them, shedding their excess energy by slightly warming the colder Rb atoms.

In this way Yb atoms can be chilled to a state of quantum degeneracy, a condition in which the Yb atoms in the vapor reach their lowest possible energy configuration. It’s important to mention here that the Rb atoms are bosons: they can all occupy a single common quantum state, namely the BEC. The Yb atoms, by contrast, are fermions: quantum interactions preclude their existing side-by-side in a single quantum state. Instead they must occupy a ladder of energy states, from the lowest level on up.

“Yb atoms have been chilled to a state of quantum degeneracy before,” said Varun Vaidya, one of the JQI researchers, “but mostly by way of evaporative cooling, a process that depletes up to 99.9% of the atoms. Yb-171 atoms, however, cannot be cooled this way since they don’t interact with each other.”

Yb-171 cooling down to degeneracy was achieved in one earlier experiment, Vaidya said, by cooling it with Yb-173 atoms. But this left a Yb-171 sample of only a few thousand atoms. In the JQI experiment, the complement of Yb-171 atoms remains ample: up to a quarter million atoms.

What can one do with Yb atoms chilled in this way? One thing is to insert them into an optical lattice, a sort of extra constraint inside the atom trap consisting of criss-crossing laser beams. The beams, establishing a pattern of local electric field minima and maxima, hold the atoms in a regular lattice the way eggs are held in a crate. The Yb atoms can then be selectively excited into higher energy states.

Vaidya, the lead author on a JQI paper published as an "Editor's Suggestion" in a recent issue of Physical Review A, cites two examples of what could come from excited Yb atoms in a lattice. One would be the creation of polarons, quasi-particles consisting of a Yb atom surrounded by several Rb atoms drawn inwards by a weak force called the Van der Waals interaction. Another possibility is that the Yb atoms could be excited to a higher energy level and then quickly stimulated to refund that energy, falling back to their starting state. The energy refund would come not in the form of a quantum of light (a photon) as in a laser, but in the form of a quantum of acoustic energy (a phonon). Thus you would get a sort of phononic laser.

Read More
  • At the edge of a quantum gas
  • JQI physicists observe skipping orbits in the quantum Hall regime
  • September 29, 2015 Quantum Many-Body Physics, PFC

From NIST-PML--JQI scientists have achieved a major milestone in simulating the dynamics of condensed-matter systems – such as the behavior of charged particles in semiconductors and other materials – through manipulation of carefully controlled quantum-mechanical models.

Going beyond their pioneering experiments in 2009 (the creation of “artificial magnetism”), the team has created a model system in which electrically neutral atoms are coaxed into performing just as electrons arrayed in a two-dimensional sheet do when they are exposed to a strong magnetic field.

The scientists then showed for the first time that it is possible to tune the model system such that the atoms (acting as electron surrogates) replicate the signature “edge state” behavior of real electrons in the quantum Hall effect (QHE), a phenomenon which forms the basis for the international standard of electrical resistance.* The researchers report their work in the 25 September issue of the journal Science.

“This whole line of research enables experiments in which charge-neutral particles behave as if they were charged particles in a magnetic field,” said JQI Fellow Ian Spielman, who heads the research team at NIST’s Physical Measurement Laboratory.

“To deepen our understanding of many-body physics or condensed-matter-like physics – where the electron has charge and many important phenomena depend on that charge – we explore experimental systems in which the components have no electrical charge, but act in ways that are functionally equivalent and can be described by the same equations,” Spielman said.

Such quantum simulators are increasingly important, as electronic components and related devices shrink to the point where their operation grows increasingly dependent on quantum-mechanical effects, and as researchers struggle to understand the fundamental physics of how charges travel through atomic lattices in superconductors or in materials of interest for eventual quantum information processing.

Quantum effects are extremely difficult to investigate at the fundamental level in conventional materials. Not only is it hard to control the numerous variables involved, but there are inevitably defects and irregularities in even carefully prepared experimental samples. Quantum simulators, however, offer precise control of system variables and yield insights into the performance of real materials as well as revealing new kinds of interacting quantum-mechanical systems.

“What we want to do is to realize systems that cannot be realized in a condensed-matter setting,” Spielman said. “There are potentially really interesting, many-body physical systems that can’t be deployed in other settings.”

To do so, the scientists created a Bose-Einstein condensate (BEC, in which ultracold atoms join in a single uniform, low-energy quantum state) of a few hundred thousand rubidium atoms and used two intersecting lasers to arrange the atoms into a lattice pattern. 

Then a second pair of lasers, each set to a slightly different wavelength, was trained on the lattice, creating "artificial magnetism" — that is, causing the electrically neutral atoms to mimic negatively charged electrons in a real applied magnetic field.

Depending on the tuning of the laser beams, the atoms were placed in one of three different quantum states representing electrons that were either in the middle of, or at opposite edges of, a two-dimensional lattice.

By adjusting the properties of the laser beams, the team produced dynamics characteristic of real materials exhibiting the QHE. Specifically, as would be expected of electrons, atoms in the bulk interior of the lattice behaved like insulators. But those at the lattice edges exhibited a distinctive "skipping" motion.

In a real QHE system, each individual electron responds to an applied magnetic field by revolving in a circular (cyclotron) orbit. Near the center of the material, electrons complete their orbits uninterrupted. That blocks conduction in the system’s interior, making it a “bulk insulator.” But at the edges of a QHE system, the electrons can only complete part of an orbit before hitting the edge (which acts like a hard wall) and reflecting off. This causes electrons to skip along the edges, carrying current.
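This bulk-versus-edge picture is simple enough to sketch numerically. Below is a minimal classical cartoon (not the team's quantum simulation): a particle in a uniform out-of-plane magnetic field, with all constants set to 1 and a hard wall at y = 0 standing in for the sample edge. A particle launched in the bulk traces closed circles and goes nowhere on average, while one launched at the wall skips along it and drifts steadily.

```python
import numpy as np

def trajectory(y0, steps, dt=1e-3):
    """Classical particle in a uniform out-of-plane field (q = m = B = 1),
    launched at (0, y0) with velocity (1, 0). A hard wall at y = 0
    specularly reflects the particle (v_y -> -v_y)."""
    x, y, vx, vy = 0.0, y0, 1.0, 0.0
    xs = []
    for _ in range(steps):
        # Lorentz force for B along z: a = (v_y, -v_x); semi-implicit Euler
        vx += vy * dt
        vy -= vx * dt
        x += vx * dt
        y += vy * dt
        if y < 0.0:            # bounce off the edge
            y, vy = -y, -vy
        xs.append(x)
    return np.array(xs)

period = int(round(2 * np.pi / 1e-3))  # steps per cyclotron orbit (omega = 1)

bulk = trajectory(5.0, 5 * period)     # orbit far from the wall: closed circles
edge = trajectory(0.0, 5 * period)     # orbit at the wall: skipping motion

print(f"bulk drift after 5 periods: {bulk[-1]:+.3f}")
print(f"edge drift after 5 periods: {edge[-1]:+.3f}")
```

The bulk particle's net displacement stays near zero, while the skipping edge particle drifts steadily along the wall, carrying "current" in one direction.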

Remarkably, the simulation's electrically neutral rubidium atoms behaved in exactly the same way: localized edge states formed in the atomic lattice and atoms skipped along the edge. Moreover, the researchers showed that by tuning the laser beams – that is, modifying the artificial magnetic field – they could precisely control whether the largest concentration of atoms was on one edge, the opposite edge, or in the center of the lattice.

“Generating these sorts of dynamical effects was beyond our abilities back in 2009, when we published our first paper on artificial magnetism,” Spielman said. “The field strength turned out to be too weak. In the new work, we were able to approach the high-field limit, which greatly expands the range of effects we can engineer, including new kinds of interactions relevant to condensed-matter physics.”

* The Hall effect occurs when a current traveling in a two-dimensional plane moves through a magnetic field applied perpendicular to the plane. As electrons interact with the field, each begins to revolve in a circular (cyclotron) orbit. That collective motion causes them to migrate and cluster on one edge of the plane, creating a concentration of negative charge. As a result, a voltage forms across the conductor, with an associated resistance from edge to edge.

Much closer, detailed examination reveals the quantum Hall effect (QHE): The resistance is exactly quantized across the plane; that is, it occurs only at specific discrete allowed values which are known to extreme precision. That precision makes QHE the international standard for resistance.
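That quantization can be made concrete directly from the defining constants. A quick sketch using the exact SI (2019 redefinition) values of the Planck constant and the elementary charge reproduces the von Klitzing constant and the integer-plateau resistances:

```python
# Exact SI (2019 redefinition) values of the fundamental constants
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

R_K = h / e**2        # von Klitzing constant: the n = 1 quantum Hall plateau
print(f"R_K = h/e^2 = {R_K:.3f} ohm")   # ~25812.807 ohm

# Integer QHE plateaus appear at R_K / n
for n in (1, 2, 3, 4):
    print(f"  n = {n}: R = {R_K / n:10.3f} ohm")
```

These discrete values, reproducible to extreme precision in the lab, are what make the QHE suitable as a resistance standard.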

An additional distinctive property of QHE systems is that they are “bulk insulators” that allow current to travel only along their edges.

This article was written by C. Suplee. Modifications to the original article were made, with permission, by E. Edwards. Animation made by S. Kelley/PML/JQI.

Read More

The quantum Hall effect, discovered in the early 1980s, is a phenomenon that was observed in a two-dimensional gas of electrons existing at the interface between two semiconductor layers. Subject to the severe criteria of very high material purity and very low temperatures, the electrons, when under the influence of a large magnetic field, will organize themselves into an ensemble state featuring remarkable properties.

Many physicists believe that quantum Hall physics is not unique to electrons, and thus it should be possible to observe this behavior elsewhere, such as in a collection of trapped ultracold atoms. Experiments at JQI and elsewhere are being planned to do just that. On the theoretical front, scientists* at JQI and the University of Maryland have also made progress, which they describe in the journal Physical Review Letters. The result, summarized here, proposes using quantum matter made from a neutral atomic gas, instead of electrons. In this new design, elusive exotic states that are predicted to occur in certain quantum Hall systems should emerge. These states, known as parafermionic zero modes, may be useful in building robust quantum gates.

For electrons, the hallmark of quantum Hall behavior includes a free circulation of electrical charge around the edge but not in the interior of the sample. This research specifically relates to utilizing the fractional quantum Hall (FQH) effect, which is a many-body phenomenon. In this case, one should not consider just the movement of individual electrons, but rather imagine the collective action of all the electrons into particle-like “quasiparticles.” These entities appear to possess fractional charge, such as 1/3.

How does this relate to zero modes? Zero modes, as an attribute of quantum Hall systems, come into their own in the vicinity of specially tailored defects. Defects are where quasiparticles can be trapped. In previous works, physicists proposed that a superconducting nanowire serve as a defect that snags quasiparticles at either end of the wire. Perhaps the best-known example of a composite particle associated with zero-mode defects is the famed Majorana fermion.

Author David Clarke, a Postdoctoral Research Scholar from the UMD Condensed Matter Theory Center, explains, “Zero modes aren’t particles in the usual sense. They’re not even quasiparticles, but rather a place that a quasiparticle can go and not cost any energy.”

Aside from interest in them for studying fundamental physics, these zero modes might play an important role in quantum computing. This is related to what’s known as topology, which is a sort of global property that can allow for collective phenomena, such as the current of charge around the edge of the sample, to be impervious to the tiny microscopic details of a system. Here the topology endows the FQH system with multiple quantum states with exactly the same energy. The exactness and imperturbability of the energy amid imperfections in the environment makes the FQH system potentially useful for hosting quantum bits. The present report proposes a practical way to harness this predicted topological feature of the FQH system through the appearance of what are known as parafermionic zero-modes.

These strange and wonderful states, which in some ways go beyond Majoranas, first appeared on the scene only a few years ago, and have attracted significant attention. Now dubbed ‘parafermions,’ they were first proposed by Maissam Barkeshli and colleagues at Stanford University. Barkeshli is currently a postdoctoral researcher at Microsoft Station Q and will be coming soon to JQI as a Fellow. Author David Clarke was one of the early pioneers in studying how these states could emerge in a superconducting environment. Because both parafermions and Majoranas are expected to have unconventional behaviors when compared to the typical particles used as qubits, unambiguously observing and controlling them is an important research topic that spans different physics disciplines. From an application standpoint, parafermions are predicted to offer more versatility than Majorana modes when constructing quantum computing resources.

What this team does, for the first time, is to describe in detail how a parafermionic mode could be produced in a gas of cold bosonic atoms. Here the parafermion would appear at both ends of a one-dimensional trench of Bose-Einstein Condensate (BEC) atoms sitting amid a larger two-dimensional formation of cold atoms displaying FQH properties. According to first author and Postdoctoral Researcher Mohammad Maghrebi, “The BEC trench is the defect that does for atoms what the superconducting nanowire did for electrons.”

Some things are different for electrons and neutral atoms. For one thing, electrons undergo the FQH effect only if exposed to high magnetic fields. Neutral atoms have no charge and thus do not react strongly to magnetic fields; researchers must mimic this behavior by exposing the atoms to carefully designed laser pulses, which create a synthetic field environment. JQI Fellow Ian Spielman has led this area of experimental research and is currently performing atom-based studies of quantum Hall physics.

Another author of the PRL piece, JQI Fellow Alexey Gorshkov, explains how the new research paper came about: “Motivated by recent advances in Spielman's lab and (more recently) in other cold atom labs in generating synthetic fields for ultracold neutral atoms, we show how to implement in a cold-atom system the same exotic parafermionic zero modes proposed originally in the context of condensed-matter systems.”

“We argue that these zero modes, while arguably quite difficult to observe in the condensed matter context, can be observed quite naturally in atomic experiments,” says Maghrebi. “The JQI atmosphere of close collaboration and cross-fertilization between atomic physics and condensed matter physics on the one hand and between theory and experiment on the other hand was at the heart of this work.”

“Ultracold atoms play by a slightly different set of rules from the solid state,” says JQI Fellow Jay Sau. “Things which come easy in one are hard in the other. Figuring out the twists in getting a solid state idea to work for cold atoms is always fun, and the JQI is one of the best places to do it.”

(*) The PRL paper has five authors, and their affiliations illustrate the complexity of modern physics work. Mohammad Maghrebi, Sriram Ganeshan, Alexey Gorshkov, and Jay Sau are associated with the Joint Quantum Institute, operated by the University of Maryland and the National Institute of Standards and Technology. Three of the authors---Ganeshan, Clarke, and Sau---are also associated with the Condensed Matter Theory Center at the University of Maryland physics department. Finally, Maghrebi and Gorshkov are associated with the Joint Center for Quantum Information and Computer Science (usually abbreviated QuICS), which is, like the JQI, a University of Maryland-NIST joint venture.

Read More

From NIST TechBeat--It’s not lightsaber time, not yet. But a team including theoretical physicists from JQI and NIST has taken another step toward building objects out of photons, and the findings hint that weightless particles of light can be joined into a sort of “molecule” with its own peculiar force. Researchers show that two photons, depicted in this artist’s conception as waves (left and right), can be locked together at a short distance. Under certain conditions, the photons can form a state resembling a two-atom molecule, represented as the blue dumbbell shape at center.

The findings build on previous research that several team members contributed to before joining JQI and NIST. In 2013, collaborators from Harvard, Caltech and MIT found a way to bind two photons together so that one would sit right atop the other, superimposed as they travel. Their experimental demonstration was considered a breakthrough, because no one had ever constructed anything by combining individual photons—inspiring some to imagine that real-life lightsabers were just around the corner.

Now, in a paper forthcoming in Physical Review Letters, the team has shown theoretically that by tweaking a few parameters of the binding process, photons could travel side by side, a specific distance from each other. The arrangement is akin to the way that two hydrogen atoms sit next to each other in a hydrogen molecule.

“It’s not a molecule per se, but you can imagine it as having a similar kind of structure,” says JQI Fellow Alexey Gorshkov. “We’re learning how to build complex states of light that, in turn, can be built into more complex objects. This is the first time anyone has shown how to bind two photons a finite distance apart."

While the new findings appear to be a step in the right direction—if we can build a molecule of light, why not a sword?—Gorshkov says he is not optimistic that Jedi Knights will be lining up at NIST’s gift shop anytime soon. The main reason is that binding photons requires extreme conditions difficult to produce with a roomful of lab equipment, let alone fit into a sword’s handle. Still, there are plenty of other reasons to make molecular light—humbler than lightsabers, but useful nonetheless.

“Lots of modern technologies are based on light, from communication technology to high-definition imaging,” Gorshkov says. “Many of them would be greatly improved if we could engineer interactions between photons.” For example, engineers need a way to precisely calibrate light sensors, and Gorshkov says the findings could make it far easier to create a “standard candle” that shines a precise number of photons at a detector. Perhaps more significant to industry, binding and entangling photons could allow computers to use photons as information processors, a job that electronic switches in your computer do today.

Not only would this provide a new basis for creating computer technology, but it also could result in substantial energy savings. Phone messages and other data that currently travel as light beams through fiber optic cables have to be converted into electrons for processing—an inefficient step that wastes a great deal of electricity. If both the transport and the processing of the data could be done with photons directly, it could reduce these energy losses. Gorshkov says it will be important to test the new theory in practice for these and other potential benefits.

“It’s a cool new way to study photons,” he says. “They’re massless and fly at the speed of light. Slowing them down and binding them may show us other things we didn’t know about them before.”

This news item was written by Chad Boutin, NIST.

M.F. Maghrebi, M.J. Gullans, P. Bienias, S. Choi, I. Martin, O. Firstenberg, M.D. Lukin, H.P. Büchler, and A.V. Gorshkov. Coulomb Bound States of Strongly Interacting Photons. Physical Review Letters, forthcoming September 2015.

Read More

Physicists use theoretical and experimental techniques to develop explanations of the goings-on in nature. Somewhat surprisingly, many phenomena such as electrical conduction can be explained through relatively simplified mathematical pictures — models that were constructed well before the advent of modern computation. And then there are things in nature that push even the limits of high performance computing and sophisticated experimental tools. Computers particularly struggle at simulating systems made of numerous particles interacting with each other through multiple competing pathways (e.g. charge and motion). Yet, some of the most intriguing physics happens when the individual particle behaviors give way to emergent collective properties. One such example is high-temperature superconductivity, where the underlying mechanism is still under debate.

In the quest to better explain and even harness the strange and amazing behaviors of interacting quantum systems, well-characterized and controllable atomic gases have emerged as a tool for emulating the behavior of solids. This is because physicists can use lasers to force atoms in dilute quantum gases to act, in many ways, like electrons in solids. The hope is that studying the same physics in the atom-laser system will help scientists understand the inner workings of different exotic materials.

JQI physicists, led by Trey Porto, are interested in quantum magnetic ordering, which is believed to be intimately related to high-temperature superconductivity and also has significance in other massively connected quantum systems. Recently, the group studied the magnetic and motional dynamics of atoms in a specially designed laser-based lattice that looks like a checkerboard. Their work was published in the journal Science.

To engineer a system that might behave like a real chunk of material, the team loads an ultracold gas of around 1000 rubidium atoms into a two-dimensional optical lattice, which is a periodic array of valleys and hills created by intersecting beams of laser light. The atoms are analogous to the electrons in a solid; the lattice geometry is defined by the pattern of laser light. The depth of the lattice can be precisely adjusted to allow for certain types of atom behaviors. For this research, atoms can move between the lattice sites via quantum tunneling — this is the motional component. Second, each atom can be thought of as having an orientation similar to that of a magnet*, pointing either ‘up’ or ‘down.’ Like magnets, the atoms, under certain conditions, can interact to form ordered arrangements in the lattice (e.g. up-down-up-down…).

However, observing the emergence of such magnetic states has been challenging because these particular phases of matter are only revealed when the atoms have extraordinarily low energy, characterized here by an effective temperature. A typical ultracold atomic gas sits at around 10-100 nanokelvin. To glimpse magnetic order in an optical lattice, the atoms need to be at picokelvin temperatures, 10 to 1000 times colder.

Porto’s experiment takes a different approach — the researchers work in an out-of-equilibrium regime where magnetic dynamics can be induced at higher, more amenable temperatures. While the daunting picokelvin criterion remains, the authors believe that this methodology will open up new pathways for achieving and studying quantum magnetic matter in lower temperature, equilibrated systems.

The team uses a novel ‘checkerboard’ lattice, in which they have exquisite control over the different sites; this gives them the ability to study both the motional and magnetic behavior of the atoms. The lattice is constructed from two controllable sublattices, denoted A and B, which together form an array of ‘double-wells’ (see above graphic). The researchers can create desired magnetic order by altering the relative depth of sublattice B with respect to sublattice A. This same mechanism is also employed to drive dynamics between lattice wells. In the beginning of the experiment, all atom magnets are initialized in the ‘up’ orientation and are trapped in a two-dimensional uniformly deep optical lattice. The second step is to flip the atoms in the B lattice to ‘down’; this makes the system antiferromagnetically ordered and out-of-equilibrium. Next, the depth of sublattice B is suddenly changed in a maneuver called a ‘quench.’ Essentially, the quench kicks the system, initiating dynamics across the lattice. The atom magnets flip up and down and tunnel between sites as they relax to a configuration that is, in this case, no longer magnetized.
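The quench sequence lends itself to a small numerical caricature. The sketch below is emphatically not the bosonic t-J model of the experiment; it evolves a one-dimensional XX spin chain (exactly solvable via free fermions, with 'up' mapped to an occupied site) simply to show how a perfect up-down-up-down pattern, once tunneling is switched on, relaxes to a state with no staggered order:

```python
import numpy as np

# Toy quench: a Neel (up-down-up-down) pattern relaxing under an XX chain,
# which maps to free fermions ('up' = occupied site). A stand-in for the
# experiment's far richer double-well lattice dynamics.
N = 40        # lattice sites
J = 1.0       # tunneling strength after the quench

# Single-particle hopping Hamiltonian of the post-quench lattice
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -J

# Initial correlations <c_i^dag c_j> for the Neel state: every other site occupied
C0 = np.diag([1.0 if i % 2 == 0 else 0.0 for i in range(N)])

evals, V = np.linalg.eigh(H)
signs = np.array([(-1) ** i for i in range(N)])

def staggered_magnetization(t):
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T   # e^{-iHt}
    C = U.conj().T @ C0 @ U                                  # evolved correlations
    return float((signs @ (np.diag(C).real - 0.5)) / N)

m0 = staggered_magnetization(0.0)      # fully ordered pattern
m_late = staggered_magnetization(5.0)  # order washed out by tunneling
print(f"staggered magnetization: t=0 -> {m0:.3f}, t=5/J -> {m_late:.3f}")
```

The staggered magnetization starts at its maximal value of 0.5 and decays toward zero as the particles tunnel and dephase, mirroring the loss of antiferromagnetic order seen after the quench.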

This experiment in some ways gets at the potential power of quantum simulations, even independent of the material science applications. Porto explains, “Here is an atom-lattice system that is challenging to calculate accurately. Yet being able to demonstrate precise control over different competing parameters in such a system and also watch it evolve over time may yield insights into the underlying physics.”

Lead author Roger Brown, now a National Research Council Postdoctoral Fellow at NIST in Boulder, continues, “Our relatively extended system in two dimensions poses an interesting theoretical challenge because numerically exact techniques are not available and traditional analytical theories require approximations. Thus experimental observations may be useful for choosing appropriate theories.”

*Physicists use mathematical models, such as the bosonic t-J model studied here, to understand quantum magnets. Thus for clarity in this news item the atoms are called “magnets.” In the language of the Science article, they are called “spins”.

Written by E. Edwards/JQI 

Read More

If you’re designing a new computer, you want it to solve problems as fast as possible. Just how fast is possible is an open question when it comes to quantum computers, but JQI physicists have narrowed the theoretical limits for where that “speed limit” is. The work implies that quantum processors will work more slowly than some research has suggested. 

The work offers a better description of how quickly information can travel within a system built of quantum particles such as a group of individual atoms. Engineers will need to know this to build quantum computers, which will have vastly different designs and be able to solve certain problems much more easily than the computers of today. While the new finding does not give an exact speed for how fast information will be able to travel in these as-yet-unbuilt computers—a longstanding question—it does place a far tighter constraint on where this speed limit could be.

Quantum computers will store data in a particle’s quantum states—one of which is its spin, the property that confers magnetism. A quantum processor could suspend many particles in space in close proximity, and computing would involve moving data from particle to particle. Just as one magnet affects another, the spin of one particle influences its neighbor’s, making quantum data transfer possible, but a big question is just how fast this influence can work.

The team’s findings advance a line of research that stretches back to the 1970s, when scientists discovered a limit on how quickly information could travel if a suspended particle could only communicate directly with its next-door neighbors. Since then, technology has advanced to the point where scientists could investigate whether a particle might directly influence others that are more distant, a potential advantage. By 2005, theoretical studies incorporating this idea had increased the speed limit dramatically.

“Those results implied a quantum computer might be able to operate really fast, much faster than anyone had thought possible,” says postdoctoral researcher and lead author Michael Foss-Feig. “But over the next decade, no one saw any evidence that the information could actually travel that quickly.”

Physicists exploring this aspect of the quantum world often line up several particles and watch how fast changing the spin of the first particle affects the one farthest down the line—a bit like standing up a row of dominoes and knocking the first one down to see how long the chain reaction takes. The team looked at years of others’ research and, because the dominoes never seemed to fall as fast as the 2005 prediction suggested, they developed a new mathematical proof that reveals a much tighter limit on how fast quantum information can propagate.

“The tighter a constraint we have, the better, because it means we’ll have more realistic expectations of what quantum computers can do,” says Foss-Feig.

The limit, their proof indicates, is far closer to the speed limits suggested by the 1970s result. The proof addresses the rate at which entanglement propagates across quantum systems. Entanglement—the weird linkage of quantum information between two distant particles—is important, because the more quickly particles grow entangled with one another, the faster they can share data. The 2005 results indicated that even if the interaction strength decays quickly with distance, as a system grows, the time needed for entanglement to propagate through it grows only logarithmically with its size, implying that a system could get entangled very quickly. The team’s work, however, shows that propagation time grows as a power of its size, meaning that while quantum computers may be able to solve problems that ordinary computers find devilishly complex, their processors will not be speed demons.
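The practical gap between the two scalings is stark even at modest system sizes. A toy comparison makes the point (the exponent 0.5 below is an arbitrary illustrative choice, not a value from the paper):

```python
import math

# Compare a 2005-style logarithmic bound on the entanglement propagation time
# with a power-law bound t ~ L**alpha. alpha = 0.5 is purely illustrative.
alpha = 0.5

for L in (10, 1_000, 100_000):
    print(f"L = {L:>7}:  log L = {math.log(L):6.2f}   L^{alpha} = {L ** alpha:9.2f}")
```

As the system size L grows, a power-law propagation time outpaces a logarithmic one by an ever-widening margin, which is why the tighter bound implies slower quantum processors.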

“On the other hand, the findings tell us something important about how entanglement works,” says Foss-Feig. “They could help us understand how to model quantum systems more efficiently.”

This was originally written by C. Boutin for NIST TechBeat, with modifications for JQI made by E. Edwards.

Read More
spin-hall bosons

Every electrical device, from a simple lightbulb to the latest microchips, is enabled by the movement of electrical charge, or current. The nascent field of ‘spintronics’ taps into a different electronic attribute, an intrinsic quantum property known as spin, and may yield devices that operate on the basis of spin-transport.

Atom-optical lattice systems offer a clean, well-controlled way to study the manipulation and movement of spins because researchers can create particle configurations analogous to crystalline order in materials. JQI/CMTC* theorists Xiaopeng Li, Stefan Natu, and Sankar Das Sarma, in collaboration with Arun Paramekanti from the University of Toronto, have been developing a model for what happens when ultracold atomic spins are trapped in an optical lattice structure with a “double-valley” feature, where the repeating unit resembles the letter “W”. This new theory result, recently published in the multidisciplinary journal Nature Communications, opens up a novel path for generating what’s known as the spin Hall effect, an important example of spin-transport.

Behavior in double-valley lattices has previously been studied for a collection of atoms that all have the same value of spin. In this new work, theorists consider two-component atoms--here, each atom's spin state can be either “spin-up” or “spin-down.” Atomic spins in a lattice can be thought of as an array of tiny bar magnets. In the single-component case, the atom-magnets are all oriented in the same direction in the lattice, and they can have a tendency to favor only one of the wells of the double-valley. In the two-component system studied here, each atom-magnet can have its north pole pointed either up or down, with respect to a particular magnetic field. Adding this kind of freedom to the model leads to some very curious behavior – the atoms spontaneously separate, with the spin-up atoms collecting into one well of the double-valley, and spin-down atoms in the other. Theorists have dubbed this new state a “spontaneous chiral spin superfluid.”

This sort of spin-dependent organization is of great interest to researchers, who could employ it to study the spin Hall effect, analogous to the Hall effect for electrons. Normally, this effect is seen as a result of spin-orbit coupling, or the association of an atom’s spin with its motion. In fact, this has long been the approach for producing a spin Hall effect – apply a current to a material with spin-orbit coupling, and the spins will gather at the edges according to their orientation. Ian Spielman’s group at the JQI has pioneered laser-based methods for realizing both spin-orbit coupling and the spin Hall effect phenomenon in ultracold atomic gases. In contrast, for the superfluid studied here, spin-sorting is not the result of an applied field or asymmetric feature of the system, but rather emerges spontaneously. It turns out this behavior is driven by tiny, random quantum fluctuations, in a paradoxical phenomenon known as quantum order by disorder.

Quantum order by disorder

Generally speaking, the transition from disorder to order is a familiar one. Consider water freezing into ice: this is a disordered system, a liquid, transitioning to a more ordered one, a solid. This phase transformation happens because the molecules become limited in their degrees of freedom, or the ability to move in different ways. Conversely, adding noise to a system, such as heating an ice cube, generally leads to a more disordered state, a pool of water. Amazingly, though, noise or fluctuations can sometimes drive a system into a more ordered state.

The theory team showed that this is indeed the case for certain kinds of atoms loaded into a double-valley optical lattice. While this system is a quiet, mostly non-thermal environment, noise still lurks in the form of quantum mechanical fluctuations. In this system, the spin-up and spin-down atoms can potentially be configured in four different, but energetically identical, arrangements. This is known as degeneracy and can be indicative of the amount of order in a system--the more equal-energy states, the more disordered the system. It turns out that these arrangements have different amounts of quantum noise, and these fluctuations play a crucial role. Surprisingly, the quantum fluctuations break up the degeneracy, thus restoring order.

What’s the upshot? In this system, the resulting lowest energy configuration--a chiral spin superfluid--is preferred independent of the type of double-valley lattice geometry, indicating a type of universal behavior. With this in mind, the theorists examine a number of lattice structures where this phenomenon might be realized. For instance, if the fluid is placed into a hexagonal lattice configuration, similar to the structure of graphene, they expect the characteristic spin currents of the spin Hall effect to emerge, as depicted in the graphic, above. In the publication, the team points out that optical lattice systems are a flexible, pristine platform for examining the effect of these tiny variations in quantum fluctuations, which are often masked in real materials. Outside of exploring novel forms of matter like the one found here, research into spin and atom manipulation has applications in emerging electronic-like technologies, such as spintronics, valleytronics and atomtronics.

*JQI (Joint Quantum Institute) and CMTC (Condensed Matter Theory Center)

This news item was written by S. Kelley, JQI.

Read More

From NIST Techbeat1

JQI Researchers at the National Institute of Standards and Technology (NIST) have reported* the first observation of the "spin Hall effect" in a Bose-Einstein condensate (BEC), a cloud of ultracold atoms acting as a single quantum object. As one consequence, they made the atoms, which spin like a child's top, skew to one side or the other, by an amount dependent on the spin direction. Besides offering new insight into the quantum mechanical world, they say the phenomenon is a step toward applications in "atomtronics"—the use of ultracold atoms as circuit components.

A quantum circuit might use spins, described as "up" or "down," as signals, in a way analogous to how electric charge can represent ones and zeros in conventional computers. Quantum devices, however, can process information in ways that are difficult or impossible for conventional devices. Finding ways to manipulate spin is a major research effort among quantum scientists, and the team's results may help the spin Hall effect become a good tool for the job.

The spin Hall effect is seen in electrons and other quantum particles when their motion depends on their magnetic orientation, or "spin." Previously, the spin Hall effect has been observed in electrons confined to a two-dimensional semiconductor strip, and in photons, but never before in a BEC.

The team used several sets of lasers to trap rubidium atoms in a tiny cloud, about 10 micrometers on a side, inside a vacuum chamber and then cool the atoms to a few billionths of a degree above absolute zero. Under these conditions, the atoms change from an ordinary gas to an exotic state of matter called a BEC, in which the atoms all behave identically and occupy the lowest energy state of the system. Then, the NIST team employed another laser to gently push the BEC, allowing them to observe the spin Hall effect at work. 

Spin is roughly analogous to the rotation of a top, and if the top is gently pushed straight forward, it will eventually tend to curve either to the right or left, depending on which way it is spinning. Similarly, subject to the spin Hall effect, a quantum object spinning one way will, when pushed, curve off to one side, while if it spins the other way, it will curve to the other. The BEC followed this sort of curved path after the laser pushed it. 
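The top analogy translates directly into a toy equation of motion: give the sideways "force" a sign set by the spin, and the two spin states peel off in opposite directions. Below is a minimal classical sketch (all constants set to 1, sign conventions arbitrary; this is not the two-line quantum description Beeler refers to):

```python
def transverse_deflection(spin, steps=2000, dt=1e-3):
    """Toy spin-dependent Lorentz-like force a = spin * (v_y, -v_x):
    a particle pushed along +x curves off sideways, with the direction
    of the curve set by spin = +1 or -1."""
    vx, vy, y = 1.0, 0.0, 0.0
    for _ in range(steps):
        vx, vy = vx + spin * vy * dt, vy - spin * vx * dt
        y += vy * dt
    return y

up, down = transverse_deflection(+1), transverse_deflection(-1)
print(f"spin = +1 deflection: y = {up:+.3f}")
print(f"spin = -1 deflection: y = {down:+.3f}")
```

The two deflections are equal and opposite: pushed the same way, opposite spins sort themselves to opposite sides, which is the essence of the spin Hall effect.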

"This effect has been observed in solids before, but in solids there are other things happening that make it difficult to distinguish what the spin Hall effect is doing," says lead author Matthew Beeler, who just completed a postdoctoral fellowship at NIST. "The good thing about seeing it in the BEC is that we've got a simple system whose properties we can explain in just two lines of equations. It means we can disentangle the spin Hall effect from the background and explore it more easily." 

Conceptually, the laser/BEC setup can be thought of as an atom spin transistor—an atomtronic device—that can manipulate spin "currents" just as a conventional electronic transistor manipulates electrical current. (see Illustration)

Beeler says that it is unlikely to be a practical way to build a logic gate for a working quantum computer, though. For now, he says, their new window into the spin Hall effect is good for researchers, who have wanted an easier way to understand complex systems where the effect appears. It also might provide insight into how data can be represented and moved from place to place in atomtronic circuits.

1This story was written by Chad Boutin for NIST Techbeat. It was modified for JQI by E. Edwards, with permission.

Read More