
Quantum Control, Measurement, and Sensing

About

Creating quantum states on demand and controlling them is a critical component of developing practical quantum-based devices. Subsequent measurement of such states is also a challenge because, by definition, quantum superpositions collapse upon interaction, whether through intentional measurement or outside disturbances. Notably, the instability of a quantum state can also be used advantageously to create sensors. Quantum systems can be calibrated such that exposure to certain changing environmental conditions will force a switch from one quantum state to another. In some cases, a quantum phase of matter can abruptly change to a non-quantum phase of matter. Alterations to a quantum system can be monitored and detected, giving physicists information about the environment itself. JQI physicists are researching the many facets of quantum measurement and control, which have applications in areas such as precision spectroscopy and sensing.

Latest News


When is a traffic jam not a traffic jam? When it's a quantum traffic jam, of course. Only in quantum physics can traffic be standing still and moving at the same time.

A new theoretical paper from scientists at the National Institute of Standards and Technology (NIST) and the University of Maryland suggests that intentionally creating just such a traffic jam out of a ring of several thousand ultracold atoms could enable precise measurements of motion. If implemented with the right experimental setup, the atoms could provide a measurement of gravity, possibly even at distances as short as 10 micrometers—about a tenth of a human hair's width.

While the authors stress that a great deal of work remains to show that such a measurement would be attainable, the potential payoff would be a clarification of gravity's pull at very short length scales. Any anomalies found there could provide major clues about gravity’s behavior, including why our universe appears to be expanding at an accelerating rate.

In addition to potentially answering deep fundamental questions, these atom rings may have practical applications, too. They could lead to motion sensors far more precise than previously possible, or serve as switches for quantum computers, with 0 represented by atomic gridlock and 1 by moving atom traffic.

The authors of the paper are affiliated with the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science, both of which are partnerships between NIST and the University of Maryland.

Over the past two decades, physicists have explored an exotic state of matter called a Bose-Einstein condensate (BEC), which exists when atoms overlap one another at frigid temperatures a smidgen of a degree away from absolute zero. Under these conditions, a tiny cloud of atoms can essentially become one large quantum “superatom,” allowing scientists to explore potentially useful properties like superconductivity and superfluidity more easily.

Theoretical physicists Stephen Ragole and Jake Taylor, the paper’s authors, have now suggested that a variation on the BEC idea could be used to sense rotation or even explore gravity over short distances, where other forces such as electromagnetism generally overwhelm gravity's effects. The idea is to use laser beams—already commonly used to manipulate cold atoms—to string together a few thousand atoms into a ring 10 to 20 micrometers in diameter.

Once the ring is formed, the lasers would gently stir it into motion, making the atoms circulate around it like cars traveling one after another down a single-lane beltway. And just as car tires spin as they travel along the pavement, the atoms' properties would pick up the influence of the world around them—including the effects of gravity from masses just a few micrometers away.

The ring would take advantage of one of quantum mechanics' counterintuitive behaviors to help scientists actually measure what its atoms pick up about gravity. The lasers could stir the atoms into what is called a "superposition," meaning, in effect, they would be both circulating about the ring and simultaneously at a standstill. This superposition of flow and gridlock would help maintain the relationships among the ring's atoms for a few crucial milliseconds after removing their laser constraints, enough time to measure their properties before they scatter.

Not only might this quantum traffic jam overcome a difficult gravity measurement challenge, but it might help physicists discard some of the many competing theories about the universe—potentially helping clear up a longstanding traffic jam of ideas.

One of the great mysteries of the cosmos, for example, is why it is expanding at an apparently accelerating rate. Physicists have suggested an outward force, dubbed “dark energy,” causes this expansion, but they have yet to discover its origin. One among many theories is that in the vacuum of space, short-lived virtual particles constantly appear and wink out of existence, and their mutual repulsion creates dark energy's effects. While it's a reasonable enough explanation on some levels, physicists calculate that these particles would create so much repulsive force that it would immediately blow the universe apart. So how can they reconcile observations with the virtual particle idea?

"One possibility is that the basic fabric of space-time only responds to virtual particles that are more than a few micrometers apart," Taylor said, "and that's just the sort of separation we could explore with this ring of cold atoms. So if it turns out you can ignore the effect of particles that operate over these short length scales, you can account for a lot of this unobserved repulsive energy. It would be there, it just wouldn't be affecting anything on a cosmic scale."

The research appears in the journal Physical Review Letters.

This story, originally published as a news item by NIST, was written by Chad T. Boutin.

Read More

From credit card numbers to bank account information, we transmit sensitive digital information over the internet every day. Since the 1990s, though, researchers have known that quantum computers threaten to disrupt the security of these transactions.

That’s because quantum physics predicts that these computers could do some calculations far faster than their conventional counterparts. This would let a quantum computer crack a common internet security system called public key cryptography.

This system lets two computers establish private connections hidden from potential hackers. In public key cryptography, every device hands out copies of its own public key, which is a piece of digital information.  Any other device can use that public key to scramble a message and send it back to the first device. The first device is the only one that has another piece of information, its private key, which it uses to decrypt the message. Two computers can use this method to create a secure channel and send information back and forth.
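
To make the exchange concrete, here is a minimal toy sketch in Python of textbook RSA, one common public key scheme. The tiny primes and the message value are purely illustrative; real keys use primes hundreds of digits long.

```python
# Toy RSA illustration of public key cryptography (NOT secure: the tiny
# primes here are for clarity only; real keys use enormous primes).

p, q = 61, 53            # two secret primes
n = p * q                # modulus; (n, e) below form the public key
phi = (p - 1) * (q - 1)  # Euler's totient of n, kept secret
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

message = 42                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)  # only the private-key holder can decrypt
assert decrypted == message

# A quantum computer running Shor's algorithm could factor n back into p and
# q, reconstruct d, and read every message encrypted under this public key.
print(f"public key (n={n}, e={e}); ciphertext {ciphertext} -> {decrypted}")
```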

A quantum computer could quickly calculate another device’s private key and read its messages, putting every future communication at risk. But many scientists are studying how quantum physics can fight back and help create safer communication lines.

One promising method is quantum key distribution, which allows two parties to directly establish a secure channel with a single secret key. One way to generate the key is to use pairs of entangled photons—particles of light with a shared quantum connection. The entanglement guarantees that no one else can know the key, and if someone tries to eavesdrop, both parties will be tipped off.

Tobias Huber, a recently arrived JQI Experimental Postdoctoral Fellow, has been investigating how to reliably generate the entangled photons necessary for this secure communication. Huber is a graduate of the University of Innsbruck in Austria, where he was supervised by Gregor Weihs. They have frequently collaborated with JQI Fellow Glenn Solomon, who spent a semester at Innsbruck as a Fulbright Scholar. Over the past couple of years, they have been studying a particular source of entangled photons, called quantum dots.

A quantum dot is a tiny area in a semiconductor, just nanometers wide, that is embedded in another semiconductor. This small region behaves like an artificial atom. Just like in an atom, electrons in a quantum dot occupy certain discrete energy levels. If the quantum dot absorbs a photon of the right color, an electron can jump to a higher energy level. When it does, it leaves behind an open slot at the lower energy, which physicists call a hole. Eventually, the electron will decay to its original energy, emitting a photon and filling in the hole. The intermediate combination of the excited electron and the hole is called an exciton, and two excited electrons and two holes are called a biexciton. A biexciton will decay in a cascade, emitting a pair of photons.
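
Schematically, the cascade is a two-step decay; the labels below for the biexciton, exciton, and ground state are generic shorthand rather than notation from the paper.

```latex
% Biexciton--exciton cascade: two photons emitted in sequence.
|XX\rangle \;\xrightarrow{\ \text{photon 1}\ }\; |X\rangle
           \;\xrightarrow{\ \text{photon 2}\ }\; |g\rangle
```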

Huber, Weihs, Solomon and several colleagues have developed a way to directly excite biexcitons in quantum dots using a sequence of laser pulses. The pulses make it possible to encode information in the pair of emitted photons, creating a connection between them known as time-bin entanglement. It’s the best type of entanglement for transmitting quantum information through optical fibers because it doesn’t degrade as easily as other types over long distances. Huber and his colleagues are the first to directly produce time-bin entangled photons from quantum dots.

In their latest work, published in Optics Express, they investigated how the presence of material imperfections surrounding the quantum dots influences this entanglement generation. Imperfections have their own electron energy levels and can steal an electron from a dot or donate an electron to fill a hole. Either way, the impurity prevents an exciton from decaying and emitting a photon, decreasing the number of photons that are ultimately released. To combat this loss, the team used a second laser to fill up the electron levels of the impurities and showed that this increased the number of photons released without compromising the entanglement between them.

The team says the new work is a step in the right direction to make quantum dots a viable source of entangled photons. Parametric down-conversion, a competitor that uses crystals to split the energy of one photon into two, occasionally produces two pairs of entangled photons instead of one. This could allow an eavesdropper to read an encrypted message without being detected. The absence of this drawback makes quantum dots an excellent candidate for producing entangled photons for quantum key distribution.

The advent of quantum computing brings new security challenges, but tools like quantum key distribution are taking those challenges head-on. It’s possible that, one day, we could have not only quantum computers, but quantum-secure communication lines, free from prying eyes.

Read More

For the first time, a team including scientists from the National Institute of Standards and Technology (NIST) and JQI has used neutron beams to create holograms of large solid objects, revealing their interior details in ways that ordinary holograms do not.

Holograms—flat images that look like three-dimensional objects—owe their striking look to interfering waves. Both matter and light behave like waves at the smallest scales, and just like water waves traveling on the surface of a pond, waves of matter or light can combine to create information-rich interference patterns.

Illuminating an object with a laser can create an optical hologram. But instead of merely photographing the light reflected from the object, a hologram records how the reflected light waves interfere with each other. The resulting patterns, based on the waves’ phase differences, or relative positions of their peaks and valleys, contain far more information about an object’s appearance than a simple photo. Generally, though, they don’t reveal much about its interior.

It’s the interior that neutron scientists explore. Neutrons are great at penetrating metals and many other solid things, making neutron beams useful for scientists who create a new substance and want to investigate its properties. But neutrons have limitations, too. Neutron beams typically probe average properties—fine for objects with repeating structures like a crystal, but not as good for spotting fine-grained details.

But what if we could have the best of both worlds? New research has found a way.

Previous work performed at the NIST Center for Neutron Research (NCNR) involved shooting neutrons through a cylinder of aluminum that had a tiny “spiral staircase” carved into one of its circular faces. The cylinder’s shape imparted a twist to the whole passing beam, but the beam’s individual neutrons also collected individual twists depending on the section of the cylinder that they passed through: the thicker the section, the greater the twist. Researchers realized this was the information needed to create holograms of objects’ innards, and the new paper details their method.

The discovery won’t change anything about interstellar chess games, but it adds to the palette of techniques scientists have to explore solid materials. The team has shown that all it takes is a beam of neutrons and an interferometer—a detector that measures interference patterns—to create direct visual representations of an object and reveal details about specific points within it.

"Other techniques measure small features as well, only they are limited to measuring surface properties," says team member Michael Huber of NIST’s Physical Measurement Laboratory. "This might be a more prudent technique for measuring small, 10-micron size structures and buried interfaces inside the bulk of the material."

The research was a multi-institutional collaboration that included scientists from NIST and JQI, as well as North Carolina State University and Canada’s University of Waterloo.

Story by Chad Boutin. The original story, along with several animations, was posted at NIST's news site.

Read More

Scientists have created a crystal structure that boosts the interaction between tiny bursts of light and individual electrons, an advance that could be a significant step toward establishing quantum networks in the future.

Today’s networks use electronic circuits to store information and optical fibers to carry it, and quantum networks may benefit from a similar framework. Such networks would transmit qubits – quantum versions of ordinary bits – from place to place and would offer unbreakable security for the transmitted information. But researchers must first develop ways for qubits, which are better at storing information, to interact with individual packets of light called photons, which are better at transporting it – a task achieved in conventional networks by electro-optic modulators that use electronic signals to modulate properties of light.

Now, researchers in the group of Edo Waks, a fellow at JQI and an Associate Professor in the Department of Electrical and Computer Engineering at the University of Maryland, have struck upon an interface between photons and single electrons that makes progress toward such a device. By pinning a photon and an electron together in a small space, the electron can quickly change the quantum properties of the photon and vice versa. The research was reported online Feb. 8 in the journal Nature Nanotechnology.

“Our platform has two major advantages over previous work,” says Shuo Sun, a graduate student at JQI and the first author of the paper. “The first is that the electronic qubit is integrated on a chip, which makes the approach very scalable. The second is that the interactions between light and matter are fast. They happen in only a trillionth of a second – 1,000 times faster than previous studies.”

CONSTRUCTING AN INTERFACE

The new interface utilizes a well-studied structure known as a photonic crystal to guide and trap light. These crystals are built from microscopic assemblies of thin semiconductor layers and a grid of carefully drilled holes. By choosing the size and location of the holes, researchers can control the properties of the light traveling through the crystal, even creating a small cavity where photons can get trapped and bounce around.

“These photonic crystals can concentrate light in an extremely small volume, allowing devices to operate at the fundamental quantum limit where a single photon can make a big difference,” says Waks.

The results also rely on previous studies of how small, engineered nanocrystals called quantum dots can manipulate light. These tiny regions behave as artificial atoms and can also trap electrons in a tight space. Prior work from the JQI group showed that quantum dots could alter the properties of many photons and rapidly switch the direction of a beam of light.

The new experiment combines the light-trapping of photonic crystals with the electron-trapping of quantum dots. The group used a photonic crystal punctuated by holes just 72 nanometers wide, but left three holes undrilled in one region of the crystal. This created a defect in the regular grid of holes that acted like a cavity, and only photons with a certain energy could enter and leave.

Inside this cavity, embedded in layers of semiconductors, a quantum dot held one electron. The spin of that electron – a quantum property of the particle that is analogous to the motion of a spinning top – controlled what happened to photons injected into the cavity by a laser. If the spin pointed up, a photon entered the cavity and left it unchanged. But when the spin pointed down, any photon that entered the cavity came out with a reversed polarization – the direction that light’s electric field points. The interaction worked the opposite way, too: A single photon prepared with a certain polarization could flip the electron’s spin.
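
In the language of quantum logic, this behavior is a conditional polarization flip. Written schematically (the basis labels and phase conventions here are illustrative; the paper's exact transformation may differ):

```latex
% Spin-photon interface as a conditional flip of photon polarization
% (H, V = horizontal, vertical polarization; arrows = electron spin).
\begin{align*}
|\!\uparrow\rangle |H\rangle &\;\longrightarrow\; |\!\uparrow\rangle |H\rangle
  && \text{spin up: photon leaves unchanged}\\
|\!\downarrow\rangle |H\rangle &\;\longrightarrow\; |\!\downarrow\rangle |V\rangle
  && \text{spin down: polarization reversed}
\end{align*}
```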

Both processes are examples of quantum switches, which modify the qubits stored by the electron and photon in a controlled way. Such switches will be the coin of the realm for proposed future quantum computers and quantum networks.

QUANTUM NETWORKING

Those networks could take advantage of the strengths that photons and electrons offer as qubits. In the future, for instance, electrons could be used to store and process quantum information at one location, while photons could shuttle that information between different parts of the network.

Such links could enable the distribution of entanglement, the enigmatic connection that groups of distantly separated qubits can share. And that entanglement could enable other tasks, such as performing distributed quantum computations, teleporting qubits over great distances or establishing secret keys that two parties could use to communicate securely.

Before that, though, Sun says that the light-matter interface he and his colleagues have created must first be used to create entanglement between the electron and photon qubits, a process that will require more accurate measurements to demonstrate definitively.

“The ultimate goal will be integrating photon creation and routing onto the chip itself,” Sun says. “In that manner we might be able to create more complicated quantum devices and quantum circuits.”

In addition to Waks and Sun, the paper has two additional co-authors: Glenn Solomon, a JQI fellow, and Hyochul Kim, a post-doctoral researcher in the Department of Electrical and Computer Engineering at the University of Maryland.

"Creating a quantum switch" credit: S. Kelley/JQI

Read More

Harnessing quantum systems for information processing will require controlling large numbers of basic building blocks called qubits. The qubits must be isolated, and in most cases cooled such that, among other things, errors in qubit operations do not overwhelm the system, rendering it useless. Led by JQI Fellow Christopher Monroe, physicists have recently demonstrated important steps towards implementing a proposed type of gate that does not rely on super-cooling the ion qubits. This work, published as an Editor’s Suggestion in Physical Review Letters, implements ultrafast sensing and control of an ion's motion, which is required to realize these hot gates. Notably, this experiment demonstrates thermometry over an unprecedented range of temperatures, from the zero-point regime up to room temperature.

Graduate student and first author Kale Johnson explains how this research could be applied, “Atomic clock states found in ions make the most pristine quantum bits, but the speed at which we have been able to access them in a useful way for quantum information processing is slower than it could be. We are changing that by making each operation on the qubit faster while eliminating the need to cool the ion to the ground state after each operation.”

In the experiment the team begins with a single trapped atomic ion. The ion can be thought of as a bar magnet that can be oriented with its north pole ‘up’ or ‘down’ or any combination between the two poles (pointing horizontally along an imaginary equator is up + down).  Physicists can use lasers and microwave radiation to control this orientation. The individual laser pulses are a mere ten picoseconds in length—a time scale that is a tiny fraction of how long it takes for the ion to undergo appreciable motion in the trap. Operating in this regime is precisely what allows researchers to have superior sensing and ultimately control over the ion motion. The speed enables the team to extract the motional behavior of an ion using a technique that works independently of the energy in the motion itself.  In other words, the measurement is equally sensitive to a fast or very slow atom.

The researchers use a method that is based on Ramsey interferometry, named for the Nobel Laureate Norman Ramsey who pioneered it back in 1949. Known then as his “method of separated oscillatory fields,” it is used throughout atomic physics and quantum information science.   

Laser pulses are carefully divided and then reunited to achieve control over the ion’s spin and motion. The researchers call these laser-ion interactions ‘spin-dependent kicks’ (SDK) because each series of specially tailored laser pulses flips the spin, while simultaneously giving the ion a push (this is depicted in the illustration below). With each fast kick, the atom’s quantum wave packet is split into two parts in under three nanoseconds. Those halves are then re-combined at different points in space and time, and the signal from the unique overlap pattern reveals how the population is distributed between the two spin states. In this experimental sequence, that distribution depends on parameters such as the number of SDKs, the time between kicks, and the initial position and speed of the ion. The team repeats this experiment to extract the average motion of the ion, or its effective temperature.
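
The logic of the thermometry can be sketched numerically: thermal motion randomizes the phase between the two recombined halves of the wave packet, washing out the interference, and the decay of the fringe contrast encodes the temperature. The toy model below assumes a Gaussian velocity distribution and illustrative numbers (a mass near that of a ytterbium ion, an assumed optical wavevector); it is not the experiment's actual analysis.

```python
import numpy as np

# Toy model: interference contrast vs. delay for a thermally moving ion.
# A velocity v imprints a phase k*v*t between the recombined wave packets;
# averaging over a Gaussian spread sigma_v damps the fringe contrast as
# exp(-(k*sigma_v*t)^2 / 2).

kB = 1.38e-23            # Boltzmann constant (J/K)
m = 2.9e-25              # mass of a Yb+ ion (kg), illustrative
k = 2 * np.pi / 355e-9   # effective optical wavevector (1/m), assumed
T_true = 1e-3            # "unknown" temperature to recover (K)

sigma_v = np.sqrt(kB * T_true / m)
t = np.linspace(0, 5e-6, 200)                    # delay between kicks (s)
contrast = np.exp(-(k * sigma_v * t) ** 2 / 2)   # ideal fringe contrast

# Invert the decay: at the 1/sqrt(e) point, k * sigma_v * t = 1.
t_c = t[np.argmin(np.abs(contrast - np.exp(-0.5)))]
T_est = m / (kB * (k * t_c) ** 2)
print(f"true T = {T_true*1e3:.2f} mK, recovered T = {T_est*1e3:.2f} mK")
```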


In order to realize proposed two-ion quantum gates that do not require cooling the system into its quantum mechanical ground state, multiple spin-dependent kicks must be employed with high accuracy such that errors remain manageable. Here the team was able to clearly demonstrate the necessary high-quality spin-dependent kicks. More broadly, this protocol shows that adding ultrafast pulsed laser technology to the ion-trapping toolbox gives physicists ultimate quantum control over what can be a limiting, noise-inducing parameter: the motion.

Read More

The concept of temperature is critical in describing many physical phenomena, such as the transition from one phase of matter to another.  Turn the temperature knob and interesting things can happen.  But other knobs might be just as important for studying some phenomena.  One such knob is chemical potential, a thermodynamic parameter first introduced in the nineteenth century by scientists for keeping track of potential energy absorbed or emitted by a system during chemical reactions.

In these reactions, different atomic species rearranged themselves into new configurations while conserving the overall inventory of atoms. That is, atoms could change their partners, but the total number and identity of the atoms remained invariant.

Chemical potential is just one of many such thermodynamic knobs, and an imbalance in any of them drives a flow. An imbalance in temperature results in a flow of energy. An imbalance in electrical potential results in a flow of charged particles. Likewise, an imbalance in chemical potential results in a flow of particles; specifically, an imbalance in the chemical potential for light would result in a flow of photons.
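
In linear-response shorthand, each imbalance drives its conjugate flow; this schematic summary is standard textbook material, added here for orientation, not the paper's formalism.

```latex
% Gradients of potentials drive flows (schematic linear response):
\mathbf{J}_{\text{energy}} \propto -\nabla T, \qquad
\mathbf{J}_{\text{charge}} \propto -\nabla V, \qquad
\mathbf{J}_{\text{particle}} \propto -\nabla \mu
```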

Can the concept of chemical potential apply to light? At first the answer would seem to be no, since particles of light, photons, are regularly absorbed when they interact with regular matter. The number of photons present is not preserved. But recent experiments have shown that under special conditions photon number can be conserved, clearing the way for the use of a chemical potential for light.

Now three JQI scientists offer a more generalized theoretical description of chemical potential (usually denoted by the Greek letter mu) for light and show how mu can be controlled and applied in a number of physics research areas.

A prominent experimental demonstration of chemical potential for light took place at the University of Bonn (*) in 2010.  It consisted of quanta of light (photons) bouncing back and forth inside a reflective cavity filled with dye molecules.  The dye molecules, acting as a tunable energy bath (a parametric bath), would regularly absorb photons (seemingly ruling out the idea of photon number being conserved) but would re-emit the light.  Gradually the light warmed the molecules and the molecules cooled the light until they were all at thermal equilibrium.  This was the first time photons had been successfully “thermalized” in this way.  Furthermore, at still colder temperatures the photons collapsed into a single quantum state; this was the first photonic Bose-Einstein condensate (BEC).

In a paper published in the journal Physical Review B, the JQI theorists describe a generic approach to chemical potential for light. They illustrate their ideas by showing how a chemical-potential protocol can be implemented in a microcircuit array. Instead of crisscrossing a single cavity, the photons are set loose in an array of microwave transmission lines. And instead of interacting with a bath of dye molecules, the photons here interact with a network of tuned circuits.

“One likely benefit in using chemical potential as a controllable parameter will be carrying out quantum simulations of actual condensed-matter systems,” said Jacob Taylor, one of the JQI theorists taking part in the new study.  In what some call a prototype for future full-scale quantum computing, quantum simulations use tuned interactions in a small microcircuit setup to arrive at a numerical solution to calculations that (in their complexity) would defeat a normal digital computer.

In the scheme described above, for instance, the photons, carefully put in a superposition of spin states, could serve as qubits. The qubits can be programmed to perform special simulations. The circuits, including the transmission lines, act as the coupling mechanism whereby photons can be up- or down-converted to higher or lower energy by obtaining energy from, or giving energy to, excitations of the circuits.

(*) J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, Nature 468, 545 (2010)

Read More

From NIST-PML — Precise measurements of optical power enable activities from fiber-optic communications to laser manufacturing and biomedical imaging — anything requiring a reliable source of light. This situation calls for light-measuring (radiometric) standards that can operate over a wide range of power levels.

Currently, however, different methods for calibrating optical power measurements are required for different light levels. At high levels, existing radiometric standards employ analog detectors, diodes that generate a current proportional to the incident light intensity, but become imprecise at low levels. Low-power detectors, by contrast, very accurately measure discrete (usually very small) numbers of photons, but cannot handle light at higher levels. Because of the incommensurate scales and incompatible technologies, comparison between the two kinds of measurements isn't easy, resulting in long calibration chains to span the difference.

Linking standards for widely different powers requires extending the dynamic range of detection to cover the region between the two measurement regimes. There are two options for bridging this gap: a "top-down" approach using analog detectors and a "bottom-up" method that starts with counting individual photons.

Exploring the second option, a team of JQI scientists, along with colleagues from NIST's Physical Measurement Laboratory (PML), has demonstrated a technique for extending the range of photon-counting detectors by employing optical attenuators, devices that block controlled fractions of incoming light. The results, recently published in Optics Express, could lead to improved standards to cover a much wider range of optical power.

The benefit of anchoring standards to detectors capable of counting single photons is a matter of precision, explains team member Boris Glebov.

“Measuring frequency of light is probably the most precise measurement science can perform," Glebov says. "Thus, if you have a way to link power measurements to photon counts and frequency measurements, the possible precision is incredibly high."

Knowing the energy of each photon (a function of its frequency) and the number of incident photons enables an extremely accurate determination of optical power. This is because photon counting has inherently low uncertainty, says JQI fellow and Quantum Optics Group leader Alan Migdall.

“A single-photon detection scheme means we are counting discrete things, so in principle the error goes away," Migdall says. "Either we have a count or we don’t.”

Over the past few years, Migdall's group has focused considerable effort on developing better ways to count individual photons. In particular, they have worked on improving the performance of a superconducting detector called a transition edge sensor (TES), using devices developed and produced by Sae Woo Nam and colleagues from PML's Quantum Electronics and Photonics Division at NIST's Boulder, Colo., campus.

A transition edge sensor contains a tiny superconducting circuit. When a photon strikes the superconductor, its energy is absorbed in the form of heat. The rise in temperature causes an increase in electrical resistance and a corresponding drop in current, which registers in the detector electronics. The devices provide excellent photon number-resolving capabilities and can operate over a wide range of frequencies, from radio waves to gamma rays. However, the number of photons has been limited, typically to about 20 photons or fewer. In order to use TES devices at higher optical power levels, the operating range needs to be extended.

Previously, the scientists approached this problem by modeling the relaxation time (the time it takes for the sensor to cool down after absorbing photons) and developing certain algorithms for better processing the output signal from the device. This has enabled them to extend the devices' sensitivity range to as high as 6,000,000 photons in a single pulse.

To extend it even further, the scientists devised a method in which a TES is used to calibrate its own input attenuator. This device provides variable optical attenuation — the selective reduction in the light power that passes through it. Controlled attenuation of high-power light merged with a photon-counting detector could connect the high precision offered by photon-counting measurements to measurements made at higher illumination levels.

To perform the calibration, pulsed laser light is directed through a variable attenuator, which is gradually stepped through a series of attenuation values. The resulting signals from the TES are processed by an improved version of the group's algorithm*, enabling accurate statistical determinations of the photon number at each value. Comparing the change in the measured photon number as the input attenuator is adjusted allows the attenuator to be calibrated in place. Significantly, the approach doesn't require knowledge of the power of the light source, which means no external calibration is necessary.
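
The power-independence of this self-calibration is easy to see in a toy simulation: with Poissonian pulses, the ratio of mean detected photon numbers at two attenuator settings equals the ratio of transmissions, whatever the source power. The sketch below is a minimal stand-in, not the group's PIKA analysis, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

n_bar = 40.0                           # unknown source mean photon number;
                                       # never used by the calibration itself
transmissions = [1.0, 0.5, 0.1, 0.02]  # attenuator settings to calibrate
pulses = 100_000

# TES-style photon counting: each pulse yields a Poisson-distributed photon
# number at the attenuated mean (efficiency and dark counts ignored here).
means = [rng.poisson(T * n_bar, size=pulses).mean() for T in transmissions]

# Attenuation relative to the first setting is a ratio of measured means,
# independent of the unknown source power n_bar.
for T, m in zip(transmissions, means):
    print(f"true transmission {T:.3f} -> estimated {m / means[0]:.3f}")
```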

Since the ratio by which an attenuator reduces the power of a signal is independent of input power (up to some limit), measurements of attenuation made at the few-photon level should agree with those made at much higher intensities. To confirm this, the researchers compared the values obtained with a TES to those obtained with a conventional analog power meter. In every case, the measurements agreed within a small statistical uncertainty.

“Even though we calibrate at the few-photon level, these attenuators can be used at higher powers, extending the utility of a TES well beyond its own operational range,” Glebov says. This means a TES-calibrated attenuator can be used to compare detectors, regardless of the optical power they are designed to operate at. In essence, the low uncertainty now associated with the calibrated attenuator can be transferred to other devices, enabling comparisons between standards through relative measurements.

A TES could also be used to calibrate a series of attenuators with only a small increase in combined uncertainty, enabling an even larger range of operation. The ability to dynamically extend the operating range of a TES in situ — without reliance on external standards or needing to reset optical components — could prove useful in situations that by necessity operate at the few-photon level, for instance in quantum key distribution. Aside from extending the operating range of TES detectors, improved determination of optical attenuation could help when characterizing materials that react differently to high- and low-light levels or with samples that can survive only low-light levels, for instance when analyzing sensitive biological samples.

* The Poisson-influenced K-Means algorithm (PIKA). This is a modified version of the K-means clustering algorithm, a popular method of performing data cluster analysis.

Read More

Experimental quantum physics often resides in the coldest regimes found in the universe, where the lack of large thermal disturbances allows quantum effects to flourish. A key ingredient to these experiments is being able to measure just how cold the system of interest is. Laboratories that produce ultracold gas clouds have a simple and reliable method to do this: take pictures! The temperature of a gas depends on the spread of velocities among the particles, namely the size of the difference between the slowest- and the fastest-moving particles. If all the atoms evolve for the same amount of time, the velocity distribution gets imprinted in the positions of the atoms. This is analogous to a marathon where all the runners start together: you cannot immediately tell who is fastest, but after some time you can discern by eye who is faster or slower based on their location.
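
The picture-taking method rests on a simple formula: after release from the trap, the cloud width grows as sigma(t)^2 = sigma_0^2 + (k_B T / m) t^2, so fitting widths from a few snapshots yields the temperature. A minimal sketch, with made-up numbers for a rubidium cloud:

```python
import numpy as np

# Time-of-flight thermometry: sigma(t)^2 = sigma0^2 + (kB*T/m) * t^2.
# Illustrative numbers only.

kB = 1.38e-23                   # Boltzmann constant (J/K)
m = 1.44e-25                    # mass of a rubidium-87 atom (kg)
T_true = 20e-6                  # 20 microkelvin cloud to "measure"
sigma0 = 50e-6                  # initial cloud width (m)

t = np.array([2, 6, 10, 14, 18]) * 1e-3           # expansion times (s)
sigma_sq = sigma0**2 + (kB * T_true / m) * t**2   # widths from the snapshots

# Linear fit of sigma^2 against t^2: the slope is kB*T/m.
slope, _ = np.polyfit(t**2, sigma_sq, 1)
print(f"recovered temperature: {slope * m / kB * 1e6:.1f} microkelvin")
```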

In some experiments, however, the cloud is so well-hidden that snapshots are near impossible. A new technique, developed by JQI researchers and published in Physical Review A as an Editor’s Suggestion, circumvents this issue by inserting an optical nanofiber (ONF) into a cold atomic cloud.

ONFs are like the normal optical fibers that form the global telecommunications network, except that they are much thinner – only a few hundred nanometers in diameter (about 1/200th the width of a human hair). This small size allows ONFs to be integrated with another, much larger system without disturbing it. Moreover, light can actually couple into the ONF through its so-called evanescent field. When an electromagnetic field, like laser light, cannot propagate from one medium (e.g. air or glass) to another, it does not just reflect or disappear at the interface. The field must be continuous, so it gradually turns off as it extends into the new medium; this spatially decaying field is called the evanescent field. Evanescent fields occur in nature, such as when an ocean wave breaks on the shore and the water seeps only so far into the sand. Notably, due to its narrow size, light traveling down an ONF has a significant fraction of its energy residing outside the fiber in the form of an evanescent field. Additionally, the laws of physics do not forbid the reverse process, so light that originates outside the ONF can couple back into and propagate along the ONF.

In this experiment, laser-cooled atoms slowly move around the ONF and “blink” randomly as they absorb and reemit photons from a laser. The probability of such a photon coupling into the ONF depends directly on the amplitude of the evanescent field, and hence on the position of the atom relative to the fiber. Once a photon enters the ONF, it travels down the optical fiber and is recorded with a sensitive single-photon detector as a “click.” Tallying up how many times two clicks occur in different time windows gives the authors a picture of how the atoms move near the ONF. The width of the resulting signal is a measure of the average amount of time the atoms interact with the ONF, so that narrower (wider) signals correspond to faster (slower) atoms. Using these times, the authors were able to calculate the temperature of the cloud. When they compared it to the well-known method of taking pictures, they found good agreement.
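
The tallying step amounts to histogramming the delays between pairs of clicks: pairs produced during the same atom transit pile up at short delays, and the width of that pile-up tracks the transit time, so hotter, faster atoms give a narrower peak. Below is a schematic version of that analysis on synthetic click times; the transit time, rates, and the crude width estimate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector record: bursts of clicks while an atom transits the
# evanescent field (timescale tau), spread over one second of data.
tau = 5e-6                                   # assumed transit time (s)
arrivals = rng.uniform(0, 1.0, size=2000)    # atom arrival times (s)
clicks = np.concatenate(
    [t0 + rng.exponential(tau, size=rng.poisson(3)) for t0 in arrivals])
clicks.sort()

# Histogram of delays between click pairs within a 50 us window.
max_lag = 50e-6
diffs = []
for i, t0 in enumerate(clicks):
    j = i + 1
    while j < len(clicks) and clicks[j] - t0 < max_lag:
        diffs.append(clicks[j] - t0)
        j += 1
hist, edges = np.histogram(diffs, bins=100, range=(0, max_lag))

# Crude estimate of the correlation timescale: the delay where the histogram
# first falls to half its peak (~ tau up to a factor of order one).
width = edges[np.argmax(hist < hist.max() / 2)]
print(f"assumed transit time {tau*1e6:.1f} us, half-max delay {width*1e6:.1f} us")
```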

This technique could be applied to systems where access for traditional imaging systems is limited or even impossible, such as in some types of hybrid quantum systems. One example would be experiments that seek to trap a cloud of rubidium atoms near a superconducting device, all housed within a dilution refrigerator. Operation of the dilution refrigerator requires careful shielding of optical and thermal radiation, preventing the use of the standard imaging temperature measurement. Additionally, other types of nanophotonic systems that use evanescent waves to link to atoms may also benefit from this type of thermometry.

This research summary was written by authors J. Grover and P. Solano. 

Read More

Optical fibers are hair-like threads of glass used to guide light. Fibers of exceptional purity have proved an excellent way of sending information over long distances and are the foundation of modern telecommunication systems. Transmission relies on what’s called total internal reflection, wherein the light propagates by effectively bouncing back and forth off of the fiber’s internal surface. Though the word “total” implies light remains entirely trapped in the fiber, the laws of physics dictate that some of the light, in the form of what’s called an evanescent field, also exists outside of the fiber. In telecommunications, the fiber core is more than ten times larger than the wavelength of the light passing through it. In this case, the evanescent fields are weak and vanish rapidly away from the fiber. Nanofibers have a diameter smaller than the wavelength of the guided light. Here, the light field cannot fit entirely inside the nanofiber, yielding a significant enhancement in the evanescent fields outside of the core. This allows the light to trap atoms (or other particles) near the surface of a nanofiber.
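
How many modes a fiber guides is set by the dimensionless V-number, V = (2 pi a / lambda) sqrt(n_core^2 - n_clad^2), where a is the fiber radius; only the fundamental mode propagates when V < 2.405. A quick calculation for an air-clad silica nanofiber, using nominal values (780 nm light and the refractive index of silica) rather than numbers from the paper:

```python
import numpy as np

def v_number(radius, wavelength, n_core=1.454, n_clad=1.0):
    """V-number of an air-clad silica nanofiber; V < 2.405 means the
    fiber is single-mode (2.405 is the first zero of the Bessel J0)."""
    return 2 * np.pi * radius / wavelength * np.sqrt(n_core**2 - n_clad**2)

wavelength = 780e-9  # rubidium D2 line, common in nanofiber work (assumed)

for radius in (250e-9, 370e-9):
    V = v_number(radius, wavelength)
    regime = "single-mode" if V < 2.405 else "carries higher-order modes"
    print(f"radius {radius*1e9:.0f} nm: V = {V:.2f} ({regime})")
```

At the 370-nanometer radius mentioned later in this article, V sits above the single-mode cutoff at this assumed wavelength, consistent with a fiber engineered to carry the higher-order modes discussed here.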

JQI researchers in collaboration with scientists from the Naval Research Laboratory have developed a new technique for visualizing light propagation through an optical nanofiber, detailed in a recent Optica paper. The result is a non-invasive measurement of the fiber size and shape and a real-time view of how light fields evolve along the nanofiber. Direct measurement of the fields in and around an optical nanofiber offers insight into how light propagates in these systems and paves the way for engineering customized evanescent atom traps.  

In this work, researchers use a sensitive camera to collect light from what’s known as Rayleigh scattering, demonstrating the first in-situ measurements of light moving through an optical nanofiber. Rayleigh scattering happens when light bounces, or scatters, off of particles much smaller than the wavelength of the light. In fibers, these particles can be impurities or density fluctuations in the glass, and the light scattered from them is ejected from the fiber. This allows one to view the propagating light from the side, in much the same way as one can see a beam of sunlight through fog. Importantly, the amount of light ejected depends on the polarization, or the orientation of oscillation of the light, and on the intensity of the field at each point, which means that capturing this light is a way to view the field.

The researchers here are interested in understanding the propagation of the field when the light waves are composed of what are known as higher-order modes. Instead of having a uniform spatial profile, like that of a laser pointer, these modes can look like a doughnut, cloverleaf, or another more complicated pattern. Higher-order modes offer some advantages over the lowest-order or “fundamental” mode. Due to their complexity, the evanescent field can have comparatively more light intensity in the region of interest — locally just outside the fiber. These higher-order modes can also be used to make different types of optical patterns. Nanofibers aren’t yet standardized, and thus careful and complete characterization of both the fiber and the light passing through them is a necessary step towards making them a more practical and adaptable tool for research applications.

This research team had previously developed techniques for controlling the fiber manufacture process in order to support extremely pure higher-order modes. Mode quality depends on things like the width of the fiber core and how this width changes over the length of the fiber. Small deviations in the fiber diameter and other imperfections can cause undesirable combinations and the potential loss of certain modes. By analyzing how the transmitted light changes as the fiber is stretched into a nanofiber, they could infer how the modes change while propagating through the fiber. However, until now there was no way to directly measure the intensity of the field along the fiber, which would offer far more insight and control over how the evanescent fields are shaped at the location of the trapped atoms. This could be useful for analyzing fibers where the propagation conditions change multiple times, or in the case where a fiber undergoes strain or bending during use.

By collecting images of the Rayleigh scattering, the scientists can directly see how the field changes throughout a nanofiber and also the effects of changing the pattern of light injected into the fiber. In addition, the team was able to use the imaging information to feedback to the system and create desired combinations of modes in the nanofiber — demonstrating a high level of control. The same technique can be used to measure the profile and width of the fiber itself. In this case, they were able to estimate a fiber radius of 370 nanometers and variations in the waist down to 3 nm. Notably, this type of visualization is done in-situ with relatively standard optics and does not require destroying the fiber integrity with the special coatings that are necessary when using a scanning electron microscope. This also means these characterizing measurements can be used to optimize the fields that interact with atoms during experiments. “An advantage of this technique is that it can be applied to fibers that are already installed in an apparatus,” explains Fredrik Fatemi, a research physicist at the Naval Research Laboratory and author on the paper: “One could even probe fibers or other nanophotonic structures designed for fundamental modes by using shorter optical wavelengths.”

To further refine this approach, the researchers plan to modify the optics in order to capture the entire length of the nanofiber in a single image. Currently, the images are made by stitching several high-resolution images together, as in the image seen above.  

Read More

From NIST TechBeat

In another advance at the far frontiers of timekeeping by National Institute of Standards and Technology (NIST) researchers, the latest modification of a record-setting strontium atomic clock has achieved precision and stability levels that now mean the clock would neither gain nor lose one second in some 15 billion years*—roughly the age of the universe. Precision timekeeping has broad potential impacts on advanced communications, positioning technologies (such as GPS) and many other technologies. Besides keeping future technologies on schedule, the clock has potential applications that go well beyond simply marking time. Examples include a sensitive altimeter based on changes in gravity and experiments that explore quantum correlations between atoms.

As described in Nature Communications,** the experimental strontium lattice clock at JILA, a joint institute of NIST and the University of Colorado Boulder, is now more than three times as precise as it was last year, when it set the previous world record.*** Precision refers to how closely the clock approaches the true resonant frequency at which the strontium atoms oscillate between two electronic energy levels. The clock's stability—how closely each tick matches every other tick—also has been improved by almost 50 percent, another world record.

The JILA clock is now good enough to measure tiny changes in the passage of time and the force of gravity at slightly different heights. Einstein predicted these effects in his theories of relativity, which mean, among other things, that clocks tick faster at higher elevations. Many scientists have demonstrated this, but with less sensitive techniques.****

"Our performance means that we can measure the gravitational shift when you raise the clock just 2 centimeters on the Earth's surface," JILA/NIST Fellow Jun Ye says. "I think we are getting really close to being useful for relativistic geodesy."

Relativistic geodesy is the idea of using a network of clocks as gravity sensors to make precise 3D measurements of the shape of the Earth. Ye agrees with other experts that, when clocks can detect a gravitational shift at 1 centimeter differences in height—just a tad better than current performance—they could be used to achieve more frequent geodetic updates than are possible with conventional technologies such as tidal gauges and gravimeters.

In the JILA/NIST clock, a few thousand atoms of strontium are held in an "optical lattice," a 30-by-30 micrometer column of about 400 pancake-shaped regions formed by intense laser light. JILA and NIST scientists detect strontium's "ticks" (430 trillion per second) by bathing the atoms in very stable red laser light at the exact frequency that prompts the switch between energy levels.

The JILA group made the latest improvements with the help of researchers at NIST's Maryland headquarters and the Joint Quantum Institute (JQI). Those researchers contributed improved measurements and calculations to reduce clock errors related to heat from the surrounding environment, called blackbody radiation. The electric field associated with the blackbody radiation alters the atoms' response to laser light, adding uncertainty to the measurement if not controlled.

To help measure and maintain the atoms' thermal environment, NIST's Wes Tew and Greg Strouse calibrated two platinum resistance thermometers, which were installed in the clock's vacuum chamber in Colorado. Researchers also built a radiation shield to surround the atom chamber, which allowed clock operation at room temperature rather than much colder, cryogenic temperatures.

"The clock operates at normal room temperature," Ye notes. "This is actually one of the strongest points of our approach, in that we can operate the clock in a simple and normal configuration while keeping the blackbody radiation shift uncertainty at a minimum."

In addition, JQI theorist Marianna Safronova used the quantum theory of atomic structure to calculate the frequency shift due to blackbody radiation, enabling the JILA team to better correct for the error.

Overall, the clock's improved performance tracks NIST scientists' expectations for this area of research, as described in "A New Era in Atomic Clocks" at www.nist.gov/pml/div688/2013_1_17_newera_atomicclocks.cfm. The JILA research is supported by NIST, the Defense Advanced Research Projects Agency and the National Science Foundation.

* For this figure, NIST converts an atomic clock's systematic or fractional total uncertainty to an error expressed as 1 second accumulated over a certain minimum length of time. That is calculated by dividing 1 by the clock's systematic uncertainty, and then dividing that result by the number of seconds in a year (31.5 million) to find the approximate minimum number of years it would take to accumulate 1 full second of error. The JILA clock has reached a higher level of precision (smaller uncertainty) than any other clock.
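
That arithmetic is easy to reproduce for the 2 × 10⁻¹⁸ fractional uncertainty reported in the paper:

```python
# Convert a clock's fractional uncertainty into years per second of error.
fractional_uncertainty = 2e-18   # from the Nature Communications paper
seconds_per_year = 31.5e6        # the round value used in NIST's footnote

seconds_of_running = 1 / fractional_uncertainty   # seconds to gain/lose 1 s
years = seconds_of_running / seconds_per_year
print(f"about {years / 1e9:.1f} billion years per second of error")
# -> about 15.9 billion years, the "some 15 billion years" quoted above
```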

** T.L. Nicholson, S.L. Campbell, R.B. Hutson, G.E. Marti, B.J. Bloom, R.L. McNally, W. Zhang, M.D. Barrett, M.S. Safronova, G.F. Strouse, W.L. Tew and J. Ye. “Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty.” Nature Communications. April 21, 2015.

**** Another NIST group demonstrated this effect by raising the quantum logic clock, based on a single aluminum ion, about 1 foot. See 2010 NIST news release, "NIST Pair of Aluminum Atomic Clocks Reveal Einstein's Relativity at a Personal Scale," at www.nist.gov/public_affairs/releases/aluminum-atomic-clock_092310.cfm.

Read More

For University of Maryland researchers, the last year has marked a series of new discoveries and innovations. UMD will honor nine nominees for the most promising new inventions at the Celebration of Innovation and Partnerships event on April 29, 2015. UMD’s Office of Technology Commercialization, part of the Division of Research, received a total of 187 disclosures in 2014. The nine nominees for Invention of the Year were selected based on their potential impact on science, society and the open market. Winners will be announced in three categories: life sciences, physical sciences and information sciences.

A single-photon detection system developed at NIST by researchers from JQI and the Jet Propulsion Laboratory at Caltech was among the nominees. The co-inventors are:

Alessandro Restelli, JQI-UMD

Josh Bienfang, JQI-NIST

Alan Migdall, JQI-NIST

William Farr, Jet Propulsion Lab

The group developed a single-photon avalanche diode (SPAD) detection system that is so sensitive that it detects photons that arrive at times well before a readout gate is applied, thus increasing the system’s detection duty cycle. This invention represents a new mode of operation for SPADs, similar to charge-coupled devices (CCD), in which single-photon signals may be accumulated within the detector and read out some time later. This increases the duration of time during which the detector is sensitive to single-photon signals. This new mode of operation will expand the usefulness of SPADs in the areas of Light Detection and Ranging (LIDAR) and quantum cryptography. 

Source: CMNS with modifications for JQI website made by E. Edwards

Read More

The word “defect” doesn’t usually have a good connotation, often indicating failure. But for physicists, one common defect known as a nitrogen-vacancy center (NV center) has applications in both quantum information processing and ultra-sensitive magnetometry, the measurement of exceedingly faint magnetic fields. In an experiment recently published in Science, JQI Fellow Vladimir Manucharyan and colleagues at Harvard University used NV centers in diamond to sense the properties of magnetic field noise tens of nanometers away from conducting silver samples.

Diamond, which is a vast array of carbon atoms, can contain a wide variety of defects. An NV center defect is formed when a nitrogen atom substitutes for a carbon atom and is adjacent to a vacancy, or missing carbon atom, in the lattice. NV centers have discrete, atom-like energy levels that can be probed using green laser light. Like atomic systems, the NV centers can be used as a qubit. In this experiment, physicists harness the sensitivity of these isolated quantum systems to characterize electron motion.

A conductive silver sample is deposited onto a diamond substrate that contains NV centers. While these defects can occur naturally, the team here purposefully creates them approximately 15 nanometers away from the silver layer. At temperatures above absolute zero, the electrons inside of the silver layer (or any conductor) bounce around and generate random currents, a phenomenon known as Johnson noise. Since electrons are charged particles, their motion results in fluctuating magnetic fields, which extend outside of the conductor.

Typically, changing magnetic fields can wreak all sorts of havoc, including for nearby NV centers. Here, each NV center is used as a sensor that can be thought of as switching between two states, 1 and 0. The sensor can be calibrated in the presence of a constant magnetic field such that it is in state 1. If the sensor experiences an oscillating magnetic field, it switches to state 0. There is one more important feature of this sensor: it can detect magnetic field strength as well. For weak magnetic field fluctuations, the NV sensor will slowly decay to state 0; for stronger fluctuations, it will decay much faster from 1 to 0. By measuring these different decay times, physicists can precisely characterize the fluctuating magnetic fields, which tells them about the electron behavior at a very small length scale. Like any good sensor, the NV centers are almost completely non-invasive—their read-out with laser light does not disturb the sample they are sensing.
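
In practice, that read-out amounts to fitting an exponential decay of the state-1 population and extracting the relaxation time, with shorter times signaling stronger field fluctuations. Here is a schematic of such a fit on synthetic data; the relaxation time and shot counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# NV relaxometry in miniature: the probability of remaining in state 1
# decays as exp(-t/T1); stronger magnetic fluctuations give a shorter T1.
T1_true = 2e-3                       # relaxation time to recover (s)
t = np.linspace(0.2e-3, 8e-3, 25)    # wait times between prepare and read out
shots = 2000                         # repetitions per wait time

p1 = np.exp(-t / T1_true)
measured = rng.binomial(shots, p1) / shots   # optical-readout estimate of p1

# Fit log(p1) against t: the slope is -1/T1.
slope, _ = np.polyfit(t, np.log(measured), 1)
print(f"true T1 = {T1_true*1e3:.1f} ms, fitted T1 = {-1/slope*1e3:.1f} ms")
```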

The team studied the scaling of the magnetic noise with different parameters, such as temperature and distance from the silver surface, and found excellent agreement with theoretical predictions. In addition, by changing the nature of the silver sample from “polycrystalline” to “single-crystalline,” they were able to observe a dramatic difference in the behavioral trends of the magnetic field noise, particularly as the sample was cooled. In polycrystalline samples, atoms are not arranged in a regular repeating lattice over long distances, so electrons don’t travel very far, roughly 10 nanometers or less, before scattering off an obstacle. These frequent collisions are the main source of field noise in polycrystalline materials. In contrast, a single crystal is uniform at these length scales, and electrons can travel over 100 times farther. The electron movement, and the corresponding magnetic field noise, from the single silver crystal departs from the so-called Ohmic predictions of the polycrystalline case, and the team was able to explore both of these extremes non-invasively.

These results demonstrate that single NV centers can be used to directly study electron behavior inside of a conductive material on the nanometer length scale. Notably, this technique does not require electrical leads, applied voltages, or even physical contact with the sample of interest, thus enabling the measurement of much smaller or more fragile samples. Future applications of this technique include the study of complex condensed matter phenomena, as well as metrology for commercial materials science.

This was written by E. Edwards in collaboration with S. Kolkowitz and V. Manucharyan.

Read More

The 2014 chemistry Nobel Prize recognized important microscopy research that enabled greatly improved spatial resolution. This innovation, resulting in nanometer resolution, was made possible by making the source (the emitter) of the illumination quite small and by moving it quite close to the object being imaged. One problem with this approach is that in such proximity, the emitter and object can interact with each other, blurring the resulting image. Now, a new JQI study has shown how to sharpen nanoscale microscopy (nanoscopy) even more by better locating the exact position of the light source.

DIFFRACTION LIMIT

Traditional microscopy is limited by the diffraction of light around objects. That is, when a light wave from the source strikes the object, the wave will scatter somewhat. This scattering limits the spatial resolution of a conventional microscope to no better than about one-half the wavelength of the light being used. For visible light, diffraction limits the resolution to no better than a few hundred nanometers.
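
Stated as a formula, this is the Abbe diffraction limit; the numerical aperture value below is an idealized best case, included only to make the few-hundred-nanometer figure concrete.

```latex
% Abbe limit on the smallest resolvable feature d, for wavelength
% \lambda and numerical aperture NA (NA \approx 1 is a best case):
d = \frac{\lambda}{2\,\mathrm{NA}}
\qquad\Rightarrow\qquad
d \approx \frac{500\ \mathrm{nm}}{2 \times 1} = 250\ \mathrm{nm}
```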

How, then, can microscopy using visible light attain a resolution down to several nanometers? By using tiny light sources that are no larger than a few nanometers in diameter. Examples of these types of light sources are fluorescent molecules, nanoparticles, and quantum dots. The JQI work uses quantum dots, which are tiny crystals of a semiconductor material that can emit single photons of light. If such tiny light sources are close enough to the object meant to be mapped or imaged, nanometer-scale features can be resolved. This type of microscopy, called “super-resolution imaging,” surmounts the standard diffraction limit.

IMAGE-DIPOLE DISTORTIONS

JQI fellow Edo Waks and his colleagues have performed nanoscopic mappings of the electromagnetic field profile around silver nanowires by positioning quantum dots (the emitters) nearby (previous work summarized in an earlier press release). They discovered that sub-wavelength imaging suffered from a fundamental problem, namely that an “image dipole” induced in the surface of the nanowire was distorting knowledge of the quantum dot’s true position. This uncertainty in the position of the quantum dot translates directly into a distortion of the electromagnetic field measurement of the object.

The distortion results from the fact that an electric charge positioned near a metallic surface produces an electric field as if a ghostly negative charge were located as far beneath the surface as the original charge is above it. This is analogous to the image you see when looking at yourself in a mirror: the image appears to be as far behind the mirror as you are in front of it. The quantum dot does not have a net electrical charge, but it does have a net electrical dipole, a slight displacement of positive and negative charge within the dot.

Thus, when the dot approaches the wire, the wire develops an “image” electrical dipole whose emission can interfere with the dot’s own emission. Since the measured light from the dot is the substance of the imaging process, light coming from the “image dipole” can interfere with light coming directly from the dot. This distorts the perceived position of the dot by an amount 10 times larger than the expected spatial accuracy of the imaging technique (as if the nanowire were acting as a sort of funhouse mirror).

The JQI experiment successfully measured the image-dipole effect and showed that it can be corrected under appropriate circumstances. The resulting work provides a more accurate map of the electromagnetic fields surrounding the nanowire.

The JQI scientists published their results in the journal Nature Communications.

Lead author Chad Ropp (now a postdoctoral fellow at the University of California, Berkeley) says that the main goal of the experiment was to produce better super-resolution imaging: “Any time you use a nanoscale emitter to perform super-resolution imaging near a metal or high-dielectric structure, image-dipole effects can cause errors. Because these effects can distort the measurement of the nano-emitter’s position, they are important to consider for any type of super-resolved imaging that performs spatial mapping.”

“Historically scientists have assumed negligible errors in the accuracy of super-resolved imaging,” says Ropp.  “What we are showing here is that there are indeed substantial inaccuracies and we describe a procedure on how to correct for them."

Read More

Measuring faint magnetic fields is a trillion-dollar business.  Gigabytes of data, stored and quickly retrieved from chips the size of a coin, are at the heart of consumer electronics.   Even higher data densities can be achieved by enhancing magnetic detection sensitivity---perhaps down to nano-tesla levels.

Greater magnetic sensitivity is also useful in many scientific areas, such as the identification of biomolecules like DNA or viruses. This research must often take place in a warm, wet environment, where clean conditions or low temperatures are not possible. JQI scientists address this concern with a diamond sensor that operates in a fluid environment. The sensor makes magnetic maps (with 17 micro-tesla sensitivity) of small particles (stand-ins for actual biomolecules) with a spatial resolution of about 50 nm. This is probably the most sensitive magnetic measurement conducted at room temperature in microfluidics.

The results of the new experiment, conducted by JQI scientist Edo Waks (a professor at the University of Maryland) and his associates, appear in the journal Nano Letters.

DIAMOND NV CENTERS

At the heart of the sensor is a tiny diamond nano-crystal.  This diamond, when brought close to a magnetic particle while simultaneously being bathed in laser light and a subtle microwave signal, will fluoresce in a manner proportional to the strength of the particle’s own magnetic field.  Thus light from the diamond is used to map magnetism.

How does the diamond work and how is the particle maneuvered near enough to the diamond to be scanned?

The diamond nanocrystal is made by the same process used to form synthetic diamonds, called chemical vapor deposition. Some of the diamonds have tiny imperfections, including the occasional nitrogen atom substituting for a carbon atom. Sometimes a carbon atom is missing altogether from the otherwise tightly-coordinated diamond structure. In those cases where the nitrogen (N) and the vacancy (V) are next to each other, an interesting optical effect can occur. The NV combination acts as a sort of artificial atom called an NV color center. If prompted by the right kind of green laser, the NV center will shine. That is, it will absorb green laser light and emit red light, one photon at a time.

The NV emission rate can be altered in the presence of magnetic fields at the microscopic level. For this to happen, though, the internal energy levels of the NV center have to be just right, and this comes about when the center is exposed to signals from the radio-frequency source (shown at the edge of the figure) and the fields emitted by the nearby magnetic particle itself.

The particle floats in a shallow lake of de-ionized-water-based solution in a setup called a microfluidic chip. The diamond is attached firmly to the bottom of this lake. The particle moves, and is steered around the chip, when electrodes positioned in the channels coax ions in the liquid into forming gentle currents. Like a ship sailing to Europe with the help of the Gulf Stream, the particle rides these currents with sub-micron control. The particle can even be maneuvered in the vertical direction by an external magnetic coil (not shown in the drawing).

“We plan to use multiple diamonds in order to do complex vectorial magnetic analysis,” says graduate student Kangmook Lim, the lead author on the publication. “We will also use floating diamonds instead of stationary ones, which would be very useful for scanning the nano-magnetism of biological samples.”

REFERENCE PUBLICATION: “Scanning localized magnetic fields in a microfluidic device with a single nitrogen vacancy center,” Kangmook Lim, Chad Ropp, Benjamin Shapiro, Jacob M. Taylor, and Edo Waks, Nano Letters, 5 February 2015; http://pubs.acs.org/doi/abs/10.1021/nl503280u

Read More

A new experiment conducted at the University of California at Berkeley used quantum information techniques for a precision test of a cornerstone principle of physics, namely Lorentz invariance. This precept holds that the results of a physics experiment do not depend on its absolute spatial orientation. The work uses quantum-correlated electrons within a pair of calcium ions to look for shifts in quantum energy levels with unprecedented sensitivity. JQI Adjunct Fellow and University of Delaware professor Marianna Safronova, who contributed a theoretical analysis of the data, said that the experiment was able to probe Lorentz symmetry violation at a level comparable to the ratio of the electroweak and Planck energy scales. These correspond, respectively, to the energy scale at which the electromagnetic and weak forces become comparable in strength, and the scale at which gravity becomes comparable in strength to the other fundamental forces.

Lorentz symmetry is fundamental to both the standard model of particle physics and general relativity. However, theoretical efforts aimed at unifying gravity with the other fundamental interactions suggest that Lorentz invariance may not be an exact symmetry. Moreover, it may be possible to detect minuscule Lorentz-violating effects at experimentally accessible energy scales. Thus, Lorentz symmetry tests such as the one carried out at Berkeley may provide a low-energy window into possible theories beyond the standard model and general relativity. Safronova notes that tabletop experiments such as the Berkeley effort complement direct searches for new physics conducted at high-energy labs such as the Large Hadron Collider.

The Berkeley experiment did not detect any telltale shifts in energy levels. However, the importance of probing the validity of the Lorentz principle is so great that even a null result at high sensitivity is notable. The scientists used the data to establish that no shifts in the behavior of electrons (and hence no evidence of Lorentz-violating effects) were observed at a sensitivity of one part in 10^18. This is some 100 times better than the best previous measurements. The experiment also tightened, by a factor of five, the limits confirming that the speed of light is isotropic (equal in all directions). All these results of the Berkeley experiment are published in the 29 January issue of Nature magazine.

The authors include the UC Berkeley team of Hartmut Häffner, Thaned Pruttivarasin (now at the Quantum Metrology Laboratory, RIKEN, Japan), Michael Ramm, and Michael Hohensee (now at the Lawrence Livermore National Lab); Sergey Porsev of the University of Delaware and PNPI, Russia; Ilya Tupitsyn of St. Petersburg State University in Russia; and Marianna Safronova of JQI and the University of Delaware.

 

MICHELSON-MORLEY EXPERIMENT

The Berkeley experiment imposed stringent limits on Lorentz symmetry violation in much the same way that the classic experiment conducted by Albert A. Michelson and Edward W. Morley in 1887 ruled out the existence of subtle “aether” fields. In those years scientists supposed that light waves, like all then-known waves, had to propagate through an underlying medium---as ocean waves roll through water and sound waves are pressure waves moving through air. To test for the aether, Michelson and Morley broke a pulse of light into two parts, which then took equal but perpendicular paths. Reflected from mirrors, these pulses were recombined to form an interference pattern. The apparatus (riding along with the Earth in its orbit) moving through the presumed stationary aether would impose a slightly different pathway on the two beams. This in turn would shift the interference pattern, heralding the aether. No shift was discovered, supporting Albert Einstein’s later assertion that the aether does not exist.

The Berkeley experiment does the same thing for electron waves, with an apparatus that rotates along with the Earth in its daily rotation. According to modern field theory, all particles, including atoms and the atoms’ own constituents such as electrons, can be thought of as fields, variations in the likelihood of quantum energy being present at various places and times. Lorentz invariance might, in principle, have a different standing for each of the known fields. The Berkeley experiment may be interpreted as a test of Lorentz invariance for either electrons or light.

 

THE NEW MICHELSON-MORLEY

The electrons in question are the outer electrons of calcium ions held in an electric trap called a Paul trap. In quantum information experiments, energy levels of calcium ions serve as the basis for quantum bits, or qubits, as depicted in Figure 1a.

Such atomic qubits can be manipulated with laser beams into residing in a superposition of two discrete levels simultaneously. In the trap, two ions 16 microns apart are exposed to a static magnetic field. Through a phenomenon called the Zeeman effect, the magnetic field causes the internal quantum levels of the ions to split into finely spaced sub-levels. These subsidiary levels are designated by the possible z-components of the magnetic moment, labelled mJ. For a calcium ion the outermost electron is in a state called D5/2, meaning that the electron is in a D orbital (its probability distribution is highest along a multi-lobed surface) and the total electron angular momentum J has a value of 5/2 units. Turning on a magnetic field splits what was a single quantum energy level into six sub-states, identified by the component of the electron’s magnetic moment along the direction singled out by the external magnetic field, as shown in Figure 1b. The six values are designated as +5/2, +3/2, +1/2, -1/2, -3/2, and -5/2.
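
The splitting follows the familiar linear Zeeman formula E(mJ) = gJ·μB·B·mJ. A minimal sketch, in which the g-factor and the field strength are illustrative assumptions rather than the experiment's actual parameters:

```python
from scipy.constants import h, physical_constants

mu_B = physical_constants["Bohr magneton"][0]   # J/T

g_J = 1.2    # Lande g-factor, roughly right for the Ca+ D5/2 level (assumed)
B = 4e-4     # magnetic field in tesla (illustrative value)

# The six Zeeman sublevels of a J = 5/2 state
for mJ in (-5/2, -3/2, -1/2, 1/2, 3/2, 5/2):
    E = g_J * mu_B * B * mJ                     # energy shift in joules
    print(f"mJ = {mJ:+.1f}: shift = {E / h / 1e6:+.3f} MHz")
```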

One of the biggest potential sources of uncertainty in the measurement of the transition between the two states is introduced by tiny fluctuations in the strength of the external magnetic field. Since the energy spacing of the sub-levels is directly proportional to the magnetic field, any noise in the field translates directly into noise, or uncertainty, in the measured energy levels. To counteract magnetic noise and to make their measurements as sensitive as possible, the physicists use not a single ion but two correlated ions, in a quantum superposition of the (-1/2, +1/2) and (-5/2, +5/2) states, as shown in Figure 2.

By using two quantum-correlated ions, magnetic noise can be minimized, since fluctuations produce cancelling effects on such pairings of qubit configurations. The construction of such “decoherence-free states” was originally developed for quantum information applications.

As a result, the electron wavefunction (denoted by the Greek letter psi) of the two-ion object (or wave packet) is in a superposition of ±1/2 and ±5/2 states. This condition is allowed to last for about 93 milliseconds. During this time, because the two parts have different spatial orientations, the 1/2 part and the 5/2 part of the wave packet will evolve differently if a spatially-anisotropic, Lorentz-violating effect is at work. This in turn will subtly alter the nature of the superposition of the two-ion state. That fact, coupled with the motion of the atom trap through space as the Earth rotates over its daily period, would create an interference effect in measurements of the status of the ions if a spatial-orientation-dependent force were present. The lack of any observed interference suggested that there was no Lorentz-violating effect, at a sensitivity down to a level of 11 millihertz (times Planck’s constant). This translates into a Lorentz-violation sensitivity of one part in 10^18.

Indiana University physicist Alan Kostelecky, an expert on the subject of Lorentz violations, had this to say of the experiment in his “News and Views” article published in the same issue of Nature: “This represents a milestone sensitivity, because it is smaller than the dimensionless ratio of about 10^-17 between the strengths of the electroweak and gravitational forces that could naturally be expected to govern violations of Lorentz invariance arising in unified theories of quantum physics and gravity. The authors' experiment is thus the first to delve into this realm of sensitivity for electrons.”

Could the Lorentz-violating effects be there after all, but at a harder-to-measure magnitude? Scientists are compelled to keep looking, as such an observation would be unambiguous evidence for new physics. “We believe we can achieve much higher sensitivity, maybe by a factor of 10,000,” said Safronova, who is an Adjunct Fellow of the Joint Quantum Institute and a physics professor at the University of Delaware. “This could happen by using ions that are more sensitive to Lorentz violation, such as Yb+ or certain highly-charged ions, and by recording measurements over a longer period.” JQI’s Christopher Monroe is using trapped Yb+ ions for his forefront quantum information research.

Read More

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

Researchers try to meet the inexorable demands made on the telecommunications grid by improving its various components. In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to accuracy at the receiving end of the transmission process. They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector. They did this not by employing passive detection of incoming light pulses; instead, the light is split up and measured numerous times.

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals, we can lessen the amount of power needed to send signals reliably,” says Migdall. “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations. Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent along a fiber. The information can be coded in several ways. Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude. Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave. On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

Because the carrier wave is coherent---for laser light this means a predictable set of crests and troughs along the wave---a more sophisticated form of encoding data can be used. In phase modulation (PM), data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data. How are light waves delayed? Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, in Migdall’s experiment the waves are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, delayed by two-fourths of a cycle, and delayed by three-fourths of a cycle. The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2). The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment. The angle around the graph corresponds to the signal’s phase delay.
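
Written out explicitly, the four states are one amplitude at four phase angles. A minimal sketch (the mean photon number is an arbitrary example value):

```python
import numpy as np

n_mean = 4.0                      # mean photon number (example value)
amplitude = np.sqrt(n_mean)       # radius of each constellation point

# Phase delays of 0, 1/4, 2/4, and 3/4 of a cycle for states 1 through 4
for state, cycles in enumerate((0, 0.25, 0.5, 0.75), start=1):
    point = amplitude * np.exp(2j * np.pi * cycles)   # position on the circle
    print(f"state {state}: {cycles:4.2f} cycle delay ->"
          f" {point.real:+.2f} {point.imag:+.2f}i")
```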

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

In general the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse), goes up. This principle is illustrated in the right panel of the figure, where the clouds are farther apart than in the left panel, meaning there is less chance of mistaken readings. More intense beams require more power, but the payoff is less overlap between the uncertainty clouds.

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1. There the waves are combined with a reference pulse. If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing. This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, the phase of the reference light provides that answer … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires---an eventuality called a “dark count.” Still another possible glitch: because of optics imperfections, even with a correct reference-phase setting the destructive interference might be incomplete, allowing some light to hit the detector.

The scheme handles these real-world problems by testing a portion of the incoming pulse and using the result to determine the most probable identity of the incoming state. Using that new knowledge, the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimation of the incoming signal’s true state gets better and better. In other words, the probability of being wrong decreases.
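
A toy simulation can make this “20 questions” loop concrete. The sketch below assumes idealized hardware (perfect detector efficiency, no dark counts) and plain Poisson photon statistics, with the staging and photon numbers chosen for illustration; it follows the spirit of the scheme rather than the experiment's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
phases = np.exp(2j * np.pi * np.arange(4) / 4)   # the four possible signal phases

def adaptive_receiver(true_state, n_mean=4.0, stages=10):
    """Split the pulse into stages; in each stage, set the reference so the
    current best guess would interfere to darkness, count photons, and
    update the guess with Bayes' rule."""
    alphas = np.sqrt(n_mean) * phases            # the four candidate pulses
    posterior = np.full(4, 0.25)                 # start completely undecided
    for _ in range(stages):
        guess = np.argmax(posterior)
        # mean photon number reaching the detector in this slice, per hypothesis
        means = np.abs(alphas - alphas[guess]) ** 2 / stages
        clicks = rng.poisson(means[true_state])  # what the detector reports
        # Poisson likelihood of that count under each hypothesis (k! cancels)
        posterior = posterior * np.exp(-means) * means ** clicks
        posterior = posterior / posterior.sum()
    return np.argmax(posterior)

trials = 2000
errors = sum(adaptive_receiver(s) != s for s in rng.integers(4, size=trials))
print("observed error rate:", errors / trials)
```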

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can determine the actual error rates. Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons. (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.)

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment of two years ago. But now the error curve falls even further below the limit, and does so for a wider range of photon numbers than in the earlier experiment. The difference in the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present at each detection. This allows the error rates to improve greatly.

For example, at a mean photon number of 4, the expected error rate of this scheme (how often one gets a false reading) is about 5%. By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what is called a “quantum receiver.” Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum. The difference here is that single-photon detection and an adaptive measurement strategy are used. A stable reference pulse is required; in the current implementation, that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

Read More
Thermal interference

Observing the quantum behavior of light is a big part of Alan Migdall’s research at the Joint Quantum Institute.  Many of his experiments depend on observing light in the form of photons---the particle complement of light waves---and sometimes only one photon at a time, using “smart” detectors that can count the number of individual photons in a pulse.  Furthermore, to observe quantum effects, it is normally necessary to use a beam of coherent light, light for which knowing the phase or intensity for one part of the beam allows you to know things about distant parts of the same beam.

In a new experiment, however, Migdall and his JQI colleagues use incoherent light, in which the light is a jumble of waves. And they use what Migdall calls “stupid” detectors that, when counting the number of photons in a light pulse, can really only count up to zero: anything more than zero befuddles these detectors and registers merely as a number known only to be more than zero.

Basically the surprising result is this: using incoherent light (with a wavelength of 800 nm) sent through a double-slit baffle, the JQI scientists obtain an interference pattern with fringes (the characteristic series of dark and light stripes denoting respectively destructive and constructive interference) as narrow as 30 nm.

This represents a new extreme in the degree to which sub-wavelength interference (to be defined below) has been pushed using thermal light and small-photon-number light detection.  The physicists were surprised that they could so easily obtain such a sharp interference effect using standard light detectors.  The importance of achieving sub-wavelength imaging is underscored by the awarding of the 2014 Nobel Prize for chemistry to scientists who had done just that.

The results of Migdall’s new work appear in the journal Applied Physics Letters. Achieving this kind of sharp interference pattern could be valuable for performing a variety of high-precision physics and astronomy measurements.

BEATING THE DIFFRACTION LIMIT

When they pass through a hole or past a material edge, light waves will diffract---that is, a portion of the light will fan out as if the edge were a source of waves itself. This diffraction limits the sharpness of any imaging performed with the light. Indeed, the diffraction limit is one of the traditional features of classical optical science, dating back to the mid-19th century. What this principle says is that, in using light of a certain wavelength (denoted by the Greek letter lambda), an object can in general be imaged with a spatial resolution no finer than roughly lambda. One can improve resolution somewhat by increasing lens diameters, but unless you can switch to light of shorter lambda, you are stuck with the imaging resolution you’ve got. And since the wavelengths of visible light span only about a factor of 2, gaining much resolution by switching wavelengths requires exotic sources and optics.

The advent of quantum optics and the use of “nonclassical light” dodged the diffraction limit. It did this, in certain special circumstances, by considering light as consisting of particles and using the correlations between those particles.

The JQI experiment starts out with a laser beam, but it purposely degrades the coherence of the light by sending it through a moving disk of ground glass.  Thereafter the light waves propagating toward the measuring apparatus downstream originate from a number of places across the profile of the rough disk and are no longer coordinated in space and time (in contrast to laser light).  Experiments more than a decade ago, however, showed that “thermal” light (not unlike the light emitted haphazardly by an incandescent bulb) made this way, while incoherent over long times, is coherent for times shorter than some value easily controlled by the speed of the rotating ground glass disk.

Why should the JQI researchers use such thermal light if laser light is available?  Because in many measurement environments (such as light coming from astronomical sources) coherent light is not available, and one would nevertheless like to make sharp imaging or interference patterns.  And why use “stupid” detectors?  Because they are cheaper to use. 

THE EXPERIMENT

In the case of coherent light, a coordinated train of waves approaches a baffle with two openings (figure, top). The light waves passing through will interfere, creating a characteristic pattern as recorded by a detector, which is moved back and forth to record the arrival of light at various points. The interference of coherent light yields a fixed pattern (right top in the figure). By contrast, incoherent light waves, when they pass through the slits, will also interfere (lower left), but will not create a fixed pattern. Instead the pattern will change from moment to moment.

In the JQI experiment, the waves coming through the slits meet a beam splitter, a thin layer of material that reflects roughly half the waves at an angle of 90 degrees and transmits the other half straight ahead. Each of these two portions of light strikes a movable detector that scans sideways across the pattern. If the detectors could record a whole pattern, they would show that the pattern changes from moment to moment. Adding up all these patterns washes out the result. That is, no fringes would appear.

Things are different if you record not just the instantaneous interference pattern but rather a correlation between the two movable detectors.  Correlation, in this case, means answering this question: when detector 1 observes light at a coordinate x1 how often does detector 2 observe light at a coordinate x2?

Plotting such a set of correlations between the two detectors does result in an interference-like pattern, but it is important to remember that this is not a pattern of light and dark regions.  Instead, it is a higher order effect that tells you the probability of finding light “here” given that you found it “over there.”  Because scientists want to record those correlations over a range of separations between “here” and “over there” that includes separations that pass through zero, there is a problem. If the two locations are too close, the detectors would run into each other.

To avoid that, a simple partially silvered mirror, commonly called a beam splitter, effectively makes two copies of the light field. That way the two detectors can simultaneously sample the light from virtual positions that can be as close as desired, and can even pass through each other.
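
A toy numerical model shows why fringes that wash out of the average intensity survive in the correlations. The sketch below models the pseudo-thermal source as a random relative phase between the two slits in each coherence-time snapshot; it reproduces this wash-out-and-recovery effect, though not the sub-wavelength narrowing reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)   # detector position, in units of the fringe spacing

def snapshot():
    """One coherence-time snapshot: the slits carry a random relative phase."""
    phi = rng.uniform(0, 2 * np.pi)
    return 2 + 2 * np.cos(2 * np.pi * x + phi)    # instantaneous fringe pattern

shots = np.array([snapshot() for _ in range(5000)])

mean_intensity = shots.mean(axis=0)           # averaging washes the fringes out
correlation = shots.T @ shots / len(shots)    # <I(x1) I(x2)> over all snapshots

print("fringe contrast in the mean intensity:", round(float(mean_intensity.std()), 3))
print("fringe contrast in the correlations:  ", round(float(correlation.std()), 3))
```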

And what about the use of stupid detectors, those for which each “click” denoting an arrival tells us only that more than zero photons have arrived? Here the time structure of the incoming light pulse becomes important in clarifying the measurement. If we look at a short enough time, we can arrange that the probability of more than one photon is very low, so a click tells us with good accuracy that indeed just one photon has arrived. And if we design the light so that its coherence time is longer than the recovery time of our stupid detectors, the detector can tell us that a specific number of photons were recorded, perhaps 3 or 10, not just the uninformative “more than zero” answer. “In this way, we get dumb detectors to act in a smart way,” says Migdall.

This improved counting of the number of photons, or equivalently of the intensity of the light at various places on the measuring screen, ensures that the set of correlations between the two detectors does result in an interference-like pattern in those correlations. Not only that, but the fringes of this correlation pattern---the distance between successive peaks---can be as small as 30 nm.

So while seeing an interference pattern could not be accomplished with dumb detectors, it could be accomplished by engineering the properties of the light source to accommodate the lack of ability of the detectors and then accumulating a pattern of correlation between two detectors.

Considering that the incoming light has a wavelength of 800 nm, the pattern is sharper by a factor of 20 or more than what you would expect if the diffraction limit were at work. The fact that the light used is thermal in nature, and not coherent, makes the achievement more striking.

This correlation method is not the same as imaging an object.  But the ease and the degree to which the conventional diffraction resolution limit could be surmounted will certainly encourage a look for specific applications that might take advantage of that remarkable feature. 

Read More
Superfluid interference

In certain exotic situations, a collection of atoms can transition to a superfluid state, flouting the normal rules of liquid behavior. Unlike a normal, viscous fluid, the atoms in a superfluid flow unhindered by friction. This remarkable free motion is similar to the movement of electron pairs in a superconductor, the prefix ‘super’ in both cases describing the phenomenon of resistanceless flow. Harnessing this effect is of particular interest in the field of atomtronics, since superfluid atom circuits can recreate the functionality of superconductor circuits, with atoms zipping about instead of electrons. Now, JQI scientists have added an important technique to the atomtronics arsenal, a method for analyzing a superfluid circuit component known as a ‘weak link’. The result, detailed in the online journal Physical Review X, is the first direct measurement of the current-phase relationship of a weak link in a cold atom system.

“What we have done is invented a way to characterize a particular circuit element [in a superfluid atomtronic circuit],” says Stephen Eckel, lead author of the paper. “This is similar to characterizing a component in an ordinary electrical circuit, where one measures the current that flows through the component vs. the voltage across it.”

Properly designing an electronic circuit means knowing how each component in the circuit affects the flow of electrons. Otherwise, your circuit won’t function as expected, and in the worst case will torch your components into uselessness. This is similar to the plumbing in a house, where the shower, sink, toilet, and so on all need the proper amount of water and water pressure to operate. Measuring the current-voltage relationship, or how the flow of current changes based on a voltage change, is an important way to characterize a circuit element. For instance, a resistor will have a different current-voltage relationship than a diode or capacitor. In a superfluid atom circuit, the analogous measurement of interest is the current-phase relationship, basically how a particular atomtronic element changes the flow of atoms.

Interferometric Investigations

The experiment, which took place at a JQI lab on the NIST-Gaithersburg campus, involves cooling roughly 800,000 sodium atoms down to an extremely low temperature, around a decidedly chilly hundred billionths of a degree above absolute zero. At these temperatures, the atoms behave as matter waves, overlapping to form something called a Bose-Einstein condensate (BEC). The scientists confine the condensate between a sheet-like horizontal laser and a target-shaped vertical laser. This creates two distinct clouds, the inner one shaped like a disc and the outer one shaped like a ring. The scientists then apply another laser to the outer condensate, slicing the ring vertically. This laser imparts a repulsive force to the atoms, driving them apart and creating a low-density region known as a weak link (related article on this group's research set-up).

The weak link used in the experiment is like the thin neck between reservoirs of sand in an hourglass, constricting the flow of atoms across it. Naturally, you might expect that a constriction would create resistance. Consider pouring syrup through a straw instead of a bottle -- this would be a very impractical method of syrup delivery. However, due to the special properties of the weak link, the atoms can flow freely across the link, preserving superfluidity. This doesn’t mean the link has no influence: when rotated around the ring, the weak link acts kind of like a laser ‘spoon’, ‘stirring’ the atoms and driving an atom current.

After stirring the ring of atoms, the scientists turn off all the lasers, allowing the two BECs to expand towards each other. Like ripples on a pond, these clouds interfere both constructively and destructively, forming intensity peaks and valleys. The researchers can use the resulting interference pattern to discern features of the system, a process called interferometry.

Gleaning useful data from an interference pattern means having a reference wave. In this case, the inner BEC serves as a phase reference. A way to think of phase is in the arrival of a new day. A person who lives on the other side of the planet from you experiences a new day at the same frequency as you do, once every 24 hours. However, the arrival of the day is offset in time, that is to say there is a phase difference between your day and the other person's day.

As the two BECs interfere, the position of the interference fringes (peaks in the wave) depends on the relative phase between the two condensates. If a current is present in the outer ring-shaped BEC, the relative phase is changing as a function of the position of the ring, and the interference fringes assume a spiral pattern. By tracing a single arm of the spiral a full 360 degrees and measuring the radial difference between the beginning and end of the trace, the researchers can extract the magnitude of the superfluid current present in the ring.
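
In a toy picture, assuming idealized fringes with a fixed radial spacing, the winding number (and hence the current) falls straight out of that radial shift:

```python
import numpy as np

fringe_spacing = 1.0    # radial distance between fringes (arbitrary units)
n_winding = 2           # number of circulation quanta in the ring (toy value)

# A fringe arm sits where the phase 2*pi*r/spacing - n*theta stays constant.
theta = np.linspace(0, 2 * np.pi, 100)
r_arm = n_winding * fringe_spacing * theta / (2 * np.pi)   # trace one spiral arm

radial_shift = r_arm[-1] - r_arm[0]     # change in radius after a full turn
print("inferred winding number:", radial_shift / fringe_spacing)   # recovers n
```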

They now know the current, so what about the phase across the weak link? The same interferometry process can be applied to the two sides of the weak link, again yielding a phase difference. When coupled with the measured current, the scientists now have a measure of how much current flows through the weak link as a function of the phase difference across the link, the current-phase relationship. For their system, the group found this dependence to be roughly linear (in agreement with their model).

A different scenario, where the weak link has a smaller profile, might produce a different current response, one where non-linear effects play a larger role. Extending the same methods makes it possible to characterize these weak links as well, and could be used to verify a type of weak link called a Josephson junction, an important superconducting element, in a cold atom system. Characterizing the current-phase relationship of other atomtronic components should also be possible, broadening the capabilities of researchers to analyze and design new atomtronic systems.

This same lab, led by JQI fellow Gretchen Campbell, had recently employed a weak link to demonstrate hysteresis, an important property of many electronic systems, in a cold atom circuit. Better characterizing the weak link itself may help realize more complex circuits.  “We’re very excited about this technique,” Campbell says, “and hope that it will help us to design and understand more complicated systems in the future."

This article was written by S. Kelley/JQI.

Read More

Atomtronics is an emerging technology whereby physicists use ensembles of atoms to build analogs to electronic circuit elements. Modern electronics relies on utilizing the charge properties of the electron. Using lasers and magnetic fields, atomic systems can be engineered to have behavior analogous to that of electrons, making them an exciting platform for studying and generating alternatives to charge-based electronics.

Using a superfluid atomtronic circuit, JQI physicists, led by Gretchen Campbell, have demonstrated a tool that is critical to electronics: hysteresis. This is the first time that hysteresis has been observed in an ultracold atomic gas. This research is published in the February 13 issue of Nature magazine, whose cover features an artistic impression of the atomtronic system.

Lead author Stephen Eckel explains, “Hysteresis is ubiquitous in electronics. For example, this effect is used in writing information to hard drives as well as other memory devices. It’s also used in certain types of sensors and in noise filters such as the Schmitt trigger.” Here is an example demonstrating how this common trigger is employed to provide hysteresis. Consider an air-conditioning thermostat, which contains a switch to regulate a fan. The user sets a desired temperature. When the room air exceeds this temperature, a fan switches on to cool the room. When does the fan know to turn off? The fan actually brings the temperature down to a different, lower set-point before turning off. This mismatch between the turn-on and turn-off temperature set-points is an example of hysteresis, and it prevents rapid switching of the fan, which would be highly inefficient.
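
That thermostat logic is easy to write down. A minimal sketch, with made-up set-points and temperature readings:

```python
def thermostat_step(temp, fan_on, turn_on=25.0, turn_off=23.0):
    """Hysteretic (Schmitt-trigger-style) control: the fan switches on above
    one set-point but only switches off below a lower one."""
    if not fan_on and temp > turn_on:
        return True
    if fan_on and temp < turn_off:
        return False
    return fan_on              # between the set-points: keep the current state

fan = False
for temp in (24.0, 25.5, 24.0, 23.5, 22.5):    # made-up temperature readings
    fan = thermostat_step(temp, fan)
    print(f"T = {temp:.1f} C -> fan {'on' if fan else 'off'}")
```

At 24.0 degrees the fan stays off on the way up but stays on on the way down: the gap between the two set-points is the hysteresis.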

In the above example, the hysteresis is programmed into the electronic circuit. In this research, physicists observed hysteresis that is an inherent natural property of a quantum fluid. 400,000 sodium atoms are cooled to condensation, forming a type of quantum matter called a Bose-Einstein condensate (BEC), which has a temperature around 0.000000100 Kelvin (0 Kelvin is absolute zero). The atoms reside in a doughnut-shaped trap that is only marginally bigger than a human red blood cell. A focused laser beam intersects the ring trap and is used to stir the quantum fluid around the ring.

While BECs are made from a dilute gas of atoms less dense than air, they have unusual collective properties, making them more like a fluid—or in this case a superfluid.  What does this mean? First discovered in liquid helium in 1937, this form of matter, under some conditions, can flow persistently, undeterred by friction. A consequence of this behavior is that the fluid flow or rotational velocity around the team’s ring trap is quantized, meaning it can only spin at certain specific speeds. This is unlike a non-quantum (classical) system, where its rotation can vary continuously and the viscosity of the fluid plays a substantial role.

Because of the characteristic lack of viscosity in a superfluid, stirring this system induces drastically different behavior. Here, physicists stir the quantum fluid, yet the fluid does not speed up continuously. At a critical stir-rate, the fluid jumps from having no rotation to rotating at a fixed velocity. The stable velocities are multiples of a quantity that is determined by the trap size and the atomic mass.
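
That quantity is the circulation quantum: requiring a whole number of matter-wave wavelengths around the ring gives allowed flow speeds v = n·ħ/(m·R). A minimal sketch, assuming an illustrative ring radius rather than the experiment's exact geometry:

```python
from scipy.constants import hbar, atomic_mass

m_Na = 23 * atomic_mass      # sodium-23 mass in kg (approximate)
R = 10e-6                    # ring radius in meters (illustrative value)

# Allowed flow speeds: an integer number of circulation quanta
for n in range(4):
    v = n * hbar / (m_Na * R)
    print(f"n = {n}: v = {v * 1e3:.3f} mm/s")
```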

This same laboratory has previously demonstrated persistent currents and this quantized velocity behavior in superfluid atomic gases. Now the researchers have explored what happens when they try to stop the rotation, or reverse the system back to its initial velocity state. Without hysteresis, they could achieve this by reducing the stir-rate back below the critical value, causing the rotation to cease. In fact, they observe that they have to go far below the critical stir-rate, and in some cases reverse the direction of stirring, to see the fluid return to the lower quantum velocity state.

Controlling this hysteresis opens up new possibilities for building a practical atomtronic device. For instance, there are specialized superconducting electronic circuits that are precisely controlled by magnetic fields and, in turn, small magnetic fields affect the behavior of the circuit itself. Thus, these devices, called SQUIDs (superconducting quantum interference devices), are used as magnetic field sensors. “Our current circuit is analogous to a specific kind of SQUID called an RF-SQUID,” says Campbell. “In our atomtronic version of the SQUID, the focused laser beam induces rotation when the speed of the laser beam ‘spoon’ hits a critical value. We can control where that transition occurs by varying the properties of the ‘spoon’. Thus, the atomtronic circuit could be used as an inertial sensor.”

This two-velocity state quantum system has the ingredients for making a qubit. However, this idea has some significant obstacles to overcome before it could be a viable choice. Atomtronics is a young technology and physicists are still trying to understand these systems and their potential. One current focus for Campbell’s team includes exploring the properties and capabilities of the novel device by adding complexities such as a second ring.

This research was supported by the NSF Physics Frontier Center at JQI. 

This article was written by E. Edwards/JQI

Read More
