QM, Wave-Particle Duality and the Delayed-Choice Experiments


1.0 - The Wave-Nature of Reality

If one draws a long cable tightly between two distant posts, and then "disturbs" the cable near one post by striking it with, say, a baseball bat, a "wave" will be seen to propagate along the cable, eventually to the distant post where (say) a bell hanging on the cable signals the arrival of the wave. What "material thing" travelled the length of the cable, to ring the bell? Not a single molecule of the cable made that journey. It was only the disturbance in the "field" (the cable, as a medium capable of supporting wave propagation) that traversed the distance (See figure 1). That wave carried the energy of your strike, just as surely as if you had thrown a rock at the bell from a distance.

[Figure 1]

If you were to actually conduct such an experiment, you should notice that striking the cable harder will cause a more "violent" pulse to travel, and ring the bell louder - but will NOT make the journey any faster. The rate at which a wave propagates through a medium is a property of the medium, not of the driving force. One could make the pulse travel faster by altering the cable material, or by increasing the tension in the cable, for instance.
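This medium-dependence is captured by the standard formula for a stretched string or cable, v = sqrt(T/u), where T is the tension and u is the mass per unit length. A minimal sketch (the tension and density figures are invented for illustration):

```python
import math

def wave_speed(tension_newtons, mass_per_meter_kg):
    # Speed of a transverse pulse on a stretched cable: v = sqrt(T / u).
    # Note that the striking force appears nowhere in this formula -
    # the speed is a property of the medium alone.
    return math.sqrt(tension_newtons / mass_per_meter_kg)

v1 = wave_speed(100.0, 0.25)  # 100 N tension, 0.25 kg/m cable -> 20 m/s
v2 = wave_speed(200.0, 0.25)  # doubling the tension multiplies v by sqrt(2)
```

Doubling the tension raises the speed by a factor of sqrt(2) (about 41%), while striking harder changes only the amplitude of the pulse, never its speed.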

Likewise, the frequency of wave oscillations will not affect propagation speed. In air, both low notes and high notes travel with equal speed. In the electromagnetic (EM) field, red light and blue light (and for that matter, radio waves and gamma rays) all propagate through the vacuum at the same "speed of light".

[Figure 2]

What would happen if, at the moment you strike the cable at one end, someone else briefly "shakes" the cable at the other end? What will happen when your "strike" pulse collides with the opposing "shake" pulse (See figure 2)? The answer is, they never really "collide". Rather, they "coincide" for a time, and the cable (the medium) will exhibit the sum of their respective disturbance contributions - but they pass right through one another and emerge on their way, unchanged by the encounter. Each pulse is merely adding or subtracting fluctuation to the medium through which it propagates, and is unaware that the state of that medium may already be perturbed by other activity.

Figure 3 provides some terminology common in describing a propagating waveform, especially when driven continuously by a source. Every wave propagation is an oscillation in a medium, above and below that medium's "neutral" state. Like a pendulum that must swing both ways, the medium responds to a disturbance by oscillating between regions that are alternatingly higher (crests) or lower (troughs) than its undisturbed state (neutral).

[Figure 3]

The wavelength of a wave is the distance between identical "phase points" in successive waves, such as the distance between adjacent crests or adjacent troughs. The "frequency" of a wave is the number of wavelengths that pass a given point in 1 second. If a medium, such as the water in a pond, allows waves to propagate at (say) 50 inches per second, and a pebble tossed into the pond creates ripples with a wavelength of 2 inches, then 25 ripples will pass a given point in 1 second, and we say that the frequency of the wave is 25 "cycles per second", or 25 "hertz".
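The pond arithmetic above can be written as a one-line relation, f = v / wavelength:

```python
def frequency_hz(speed, wavelength):
    # f = v / wavelength: the number of full wavelengths
    # that pass a given point each second
    return speed / wavelength

# The pond example from the text: ripples at 50 in/s, 2 in apart
f = frequency_hz(50.0, 2.0)  # -> 25.0 cycles per second (hertz)
```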

In air at about sea level, sound travels at about 300 meters per second (roughly 1000 kilometers per hour). [Note, you can use this fact to estimate the distance to a lightning strike - just count off the number of seconds between the visible flash and the subsequent sound of thunder; every 3 seconds of delay corresponds to roughly 1 kilometer of distance.] When you are in a room with others and you snap your fingers, people in all directions hear the sound, as it reaches them at 300 meters per second. Of course, no molecule of air travelled that distance at that speed, but the snap of your fingers caused a spherical pressure wave to travel outward at that rate.

[Figure 4]

Figure 4 depicts a cross-section of the air in a room, as a tuning fork emits a continuous tone at a fixed frequency. At sea level, the nominal air pressure is about 14.7 pounds per square inch, and would be represented by a gray color in this image. As the tuning fork sounds, rings (spheres) of alternating high pressure (toward white) and low pressure (toward black) ripple outward. Note: If the tuning fork is ringing at "A 440" (middle "A" on a piano), how far apart are the successive wavefronts? Answer: from freq = speed/wavelength, we get wavelength = speed/freq = 300/440 = 0.68 meters. If you could "freeze time", and walk toward the tuning fork with a meter that measures air pressure, you would notice it slightly rise and fall every 0.68 meters.

Interference Fringes

In a physics auditorium, the students sat quietly as the instructor placed a tuning fork on a stand on the stage and struck it. The tone of the tuning fork could be heard clearly throughout the room. The instructor then placed a second, identical tuning fork on a stand about 8 feet to the left of the first one, and struck it as well. He then asked the members of the class to raise their hands if they thought the sound had gotten louder - and about half the class raised their hands. The other half of the class claimed that the sound had actually gotten quieter, or had disappeared entirely! The instructor then asked the class to get up and move around the room. Sure enough, there were regions where the sound was loud, and others where it faded almost to silence.

How is this possible?

[Figure 5]

Figure 5 demonstrates this effect. This image was produced by adding together two separate images, each representing sinusoidal oscillations of pressure at the same wavelength. Consider that you are standing anywhere on the lower green line. At any such spot, your distance to each of the two sources is identical. For instance, at the very moment the "57th" crest from source A reaches your location, so has the "57th" crest from source B, so the two crests add to form a greater crest. Likewise, the troughs from each wave always meet together, forming a lower trough. Hence at your location, the waves occur "in-sync", there are strong pressure oscillations, and you hear the tone clearly. Now consider the upper green line. Anywhere on this line, you will also hear the tone clearly. Your position is such that your distance to source A is less than the distance to source B, but by exactly 1 full wavelength. Hence (for instance) the "57th" crest from source A is coinciding with the "56th" crest from source B.

In contrast, when you stand anywhere on the red line, your distance to source A is exactly 1/2 a wavelength shorter than from source B. Whenever you are receiving a crest from source A, you are receiving a trough from source B, and vice-versa. These cancel one another, and you experience no pressure oscillations (and hence, no sound) from the two sources.

In figure 6a, we show two waveforms that are "in-sync"; their sum demonstrates what is called "constructive interference". In figure 6b, the second waveform has been shifted by half a wavelength, and thus appears "inverted", and the sum exhibits what is called "destructive interference". In essence, the two waves cancel out one another.

[Figure 6]
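The sums shown in figures 6a and 6b are easy to reproduce numerically; a minimal sketch (the sample count and amplitudes are arbitrary):

```python
import math

N = 1000
xs = [2 * math.pi * i / N for i in range(N)]

wave_a = [math.sin(x) for x in xs]
in_sync = [math.sin(x) for x in xs]             # same phase as wave_a
inverted = [math.sin(x + math.pi) for x in xs]  # shifted by half a wavelength

constructive = [a + b for a, b in zip(wave_a, in_sync)]
destructive = [a + b for a, b in zip(wave_a, inverted)]

peak_constructive = max(abs(s) for s in constructive)  # ~2.0: crests reinforce
peak_destructive = max(abs(s) for s in destructive)    # ~0.0: crests meet troughs
```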

This is the principle exploited in "noise-cancelling" headphones (see figure 7). As unwanted sound reaches the exterior of the headphone, a tiny microphone "hears" that sound before it reaches your ear. Since electricity travels much faster than sound (and electronic microprocessors can calculate sufficiently fast), that sound is inverted by a processor and then "played back" into the sound-stream through a tiny speaker, at precisely the point where it serves to cancel the unwanted sound.

[Figure 7]
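The core of the cancellation step is just sign-inversion and addition. A toy sketch (real headphones process a continuous sampled stream and must compensate for the microphone-to-ear delay; the samples here are invented):

```python
# Pressure samples of the unwanted sound, as "heard" by the exterior microphone
noise = [0.8, -0.3, 0.5, -0.9, 0.1]

# The processor inverts each sample ...
anti_noise = [-s for s in noise]

# ... and plays it back into the sound-stream, where the two waveforms add
residual = [n + a for n, a in zip(noise, anti_noise)]  # all zeros: silence
```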

Wave Diffraction

Another important principle of wave behavior is Wave Diffraction. This principle is illustrated in figure 8 below.

[Figure 8]

Imagine a very large pool of water and a pointwise source of oscillations. Circular wavefronts travel out in all directions. Follow a wavefront in one direction, and over time the wavefront appears straighter and straighter, until it seems to be a linear wavefront, like an ocean swell headed for the beach. Suppose this linear wavefront is approaching a concrete wall erected across the pool, able to block the wave, except for a narrow gap in the wall. What happens to the portion of the wavefront that passes through the gap? It does not continue on as a narrow disturbance, but rather acts as if that gap is the source of a completely new pointwise disturbance, and what emerges beyond the wall is a new circular wavefront, concentric to the location of the gap.

A Dutch mathematician named Christiaan Huygens (1629-1695) was able to provide a mathematical explanation for this "bending around a corner" behavior of a wavefront, by showing that even a linear wavefront is identically explained as the confluence, at every point, of new circular wavefronts that all superpose and add so as to maintain the overall linear form of the original wavefront. This is now known as the Huygens-Fresnel principle.

Resonance and Harmonics

Up to this point, every wave-related property we have described has involved wave propagation of arbitrary wavelength and frequency. For instance, you can smoothly adjust the frequency (musical tone) of a string on a (fretless) slide guitar, and the resulting air-pressure fluctuations will accommodate that sound. There are infinitely many tones (frequencies) between C and C-sharp, even if the human ear cannot distinguish very small differences. But when a wave-behavior is kept in a bound system, such as in a string of fixed length and tension, or the vibrations of air in a closed chamber, a curious discrete-integer behavior arises. This phenomenon is known as Harmonic Resonance.

[Harmonics-1]

See the figure labeled Harmonics-1. Two people, holding opposite ends of a jumprope, can make it "oscillate" in one large loop, or in two loops, or three, etc. But they cannot make it oscillate in anything "between" these values, so they can only obtain certain discrete wavelengths. The largest wavelength is called the "fundamental" resonance of the system, and has the lowest associated energy. Subdividing this fundamental into halves, thirds, quarters, fifths, ... produces frequencies that are 2, 3, 4, 5, ... times the fundamental frequency, and these are called "harmonics" of the fundamental frequency.

[Harmonics-2]

In the figure labeled Harmonics-2, we depict an analogous situation when sound reverberates between opposing walls in a small room. Each image illustrates regions of compression, and of rarefaction, of the air molecules as certain tones resonate in the room, and a sine graph indicates the range of compression (positive) or rarefaction (negative) that occurs. The small black arrows indicate the direction of motion of the air molecules as they "bounce away" from regions of high pressure, and the large blue arrows indicate the oscillation of the system between opposing states. Note how, just as with the rope in the previous figure, these harmonics (1, 2, 3, ... n, ...) are the only divisions capable of resonance - tones in between these will not resonate and will fade quickly.

This is what happens in the air, when you "sing in the shower" and you hit certain notes that resonate very loudly - you have found a note whose half-wavelength fits between the opposing walls exactly 1, or 2 or 3 or some other whole-number "n" times. Any notes in between these will not resonate, and will die out quickly, but these will tend to last and can be kept going with very little volume on your part.
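Numerically, the resonant tones between two rigid walls a distance L apart are the frequencies whose half-wavelength divides the gap a whole number of times: f_n = n * v / (2L). A sketch using the text's round figure of 300 m/s for the speed of sound (the wall spacing is invented):

```python
SPEED_OF_SOUND = 300.0  # m/s, the round value used earlier in the text

def resonant_frequencies(wall_distance_m, n_harmonics):
    # f_n = n * v / (2 * L): only whole-number divisions of the gap resonate
    return [n * SPEED_OF_SOUND / (2.0 * wall_distance_m)
            for n in range(1, n_harmonics + 1)]

# A shower stall with walls 2.5 m apart:
freqs = resonant_frequencies(2.5, 4)  # -> [60.0, 120.0, 180.0, 240.0]
```

A note at, say, 90 Hz falls between the first two harmonics and dies away quickly; a note at 60 Hz or 120 Hz rings on.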

This curious ability of a continuous wave-phenomenon to give rise, in a bounded system, to discrete (non-continuous or "stepwise") integer behavior is (in my opinion) at the root of the "discrete particle" phenomenon (indeed, the "discrete event" phenomenon) we tend to distinguish from waves.


The Wave-Nature of Light

Around 1801, the Englishman Thomas Young performed certain experiments with light that demonstrated its wave-nature. The following series of illustrations yields the gist of these experiments.

In figure 9a, we depict a box with a rectangular hole on the front, and unexposed photographic film on the opposing inside surface. Assume this experiment is conducted "in the dark" (and imagine that we can "see" through the top of the box, even though it is closed). From a distance, one shines a light at the face of the box. Quite naturally, the face of the box casts a shadow on the rear surface, except for the rectangular hole.

[Figure 9a]

As the hole is made sufficiently narrow, a "slit", a different phenomenon appears. Rather than seeing a sharp and well-defined "slit" of light on the film, what appears is a broad "fuzzy band" of light (Figure 9b). (Of course, it will take longer to expose the film, due to the narrowness of the slit.) This broad band of light suggests that light acts as a wave phenomenon, and exhibits diffraction as explained earlier in figure 8.

[Figure 9b]
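The broad fuzzy band has a standard quantitative description (the Fraunhofer single-slit result): the relative intensity at angle theta is (sin b / b)^2 with b = pi * a * sin(theta) / lambda, where a is the slit width. A sketch with illustrative numbers:

```python
import math

def slit_intensity(theta_rad, slit_width_m, wavelength_m):
    # Fraunhofer single-slit diffraction: I/I0 = (sin b / b)^2,
    # where b = pi * a * sin(theta) / lambda
    b = math.pi * slit_width_m * math.sin(theta_rad) / wavelength_m
    if b == 0.0:
        return 1.0  # the central maximum
    return (math.sin(b) / b) ** 2

# A 2-micron slit lit with 500 nm light: the intensity falls off gently,
# spreading the light into a broad band rather than a sharp slit image.
center = slit_intensity(0.0, 2e-6, 500e-9)
off_axis = slit_intensity(math.radians(5), 2e-6, 500e-9)  # still well-lit
```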

Further confirming evidence of light's wave-nature came when Young employed two narrow slits, side by side, in what is called the "double-slit experiment" (see figure 9c below). What appears on the film are interference fringes, alternating regions of light and dark, in exact analogy to the earlier experiment described in figure 5 above, regarding interference from two sources of sound of the same frequency.

[Figure 9c]

It may seem obvious, but it is very important to note that if one were to cover one of the double-slits with opaque tape, expose the film for a while, then cover the remaining slit and uncover the first and expose the film further, NO fringes are formed. All that one obtains is the ordinary "sum" of two broad "fuzzy bands", slightly displaced, which adds to form a single, slightly larger fuzzy band. Clearly, BOTH slits must be open at the same time in order for wave interference to occur.

In order to obtain the sharpest fringe images, one must try to use monochromatic light (single color, hence a single, or narrow, frequency range), since different light frequencies will cause broader or narrower fringe spacing. Figures 10a and 10b below demonstrate how source frequency affects fringe spacing.

[Figure 10a]

[Figure 10b]
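In the usual small-angle approximation, adjacent bright fringes on the film are separated by lambda * L / d, where L is the slit-to-film distance and d is the slit separation. A sketch (all dimensions invented for illustration):

```python
def fringe_spacing_m(wavelength_m, screen_distance_m, slit_separation_m):
    # small-angle double-slit result: spacing = lambda * L / d
    return wavelength_m * screen_distance_m / slit_separation_m

# Slits 0.1 mm apart, film 1 m behind them:
red = fringe_spacing_m(700e-9, 1.0, 1e-4)   # ~7.0 mm between bright fringes
blue = fringe_spacing_m(450e-9, 1.0, 1e-4)  # ~4.5 mm: blue fringes sit closer
```

Run backwards (lambda = d * spacing / L), this relation is essentially how Young could infer the wavelengths of the different colors of light from measurable distances.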

If you find a pool of very still water, and simultaneously drop two pebbles into the water, several feet apart, you can actually see the concentric ripples from each point produce the interference shown above, with regions of ripples sandwiched between regions of calm. However, one cannot easily "see" light as it travels, but only when it reflects off of objects and is caught by your eyes, so one must infer the presence of interference by the appearance of light and dark bands on the film at the back of the box. Thomas Young, considering the distance from the slits to the back of the box, and the spacing between fringes, was able to infer the actual wavelengths of different colors of light to good accuracy, even though these wavelengths are just a few ten-millionths of a meter.

We can understand how a wave can propagate along a rope, or across water, and that sound is the propagation of air-pressure oscillations, but of what is light an oscillation? We know that light can travel perfectly well in a vacuum, so the oscillations are not of any "material" per se. Rather, all of space is said to be permeated with what is called the Electro-Magnetic (EM) field. This does not really "explain" much in terms of substance, but it can be shown that at any point where an electric charge is made to oscillate, electrically-charged particles further away will respond precisely as if the oscillating charge had induced "waves" of field-strength to propagate outward. Furthermore, magnetic field strength will be forced to oscillate perpendicular to the direction of charge oscillation. This works in both directions - in that a rapidly oscillating magnet will induce both propagating magnetic, and electric field fluctuations.

This principle is exploited in electrical generators and motors, wherein bundles of wire coils are made to sweep past fixed magnets, causing electric current to be induced in the wires (a generator), or current is supplied to such coils, inducing a magnetic field that reacts to the fixed magnets and spins (a motor).

If the electrical polarity in a wire is alternated quickly enough, it will begin to propagate EM waves at that frequency. This is how a radio antenna "broadcasts". But whether it is radio frequency, microwave, visible light, UV, x-ray or gamma-ray, all are simply different "notes" in the spectrum of EM field oscillations, and no different "in substance" than the high and low notes on a piano are a different substance.


The Particle-Nature of Light

In 1905, Albert Einstein provided the first satisfactory explanation for a very different experiment with light, although the results were at odds with his conception of the world as having "smooth, continuous" fields as its fundamental description. This experiment involves what is called the "photoelectric effect". In order to understand the issues involved, we need a short digression on electrostatics.

Electrostatics

From times so ancient that no one (to my knowledge) is credited, a curious phenomenon was discovered. If you took two small balls of a light, dry material (traditionally "pith", the dried inner pulp of certain plant stems, although today styrofoam would suffice), hung them together from threads, and touched them with a rubber rod that had been rubbed with fur, the balls would suddenly spring away from the rod, and from each other. Clearly, they were now repelled by something about that rod, and whatever it was was also transferred to them. The very same phenomenon would occur if you used a glass rod rubbed with silk (See the figure labeled "Electrostatics-1"). That is, it appeared to be the very same phenomenon, except for one most-curious fact - if you brought the rubber rod toward the balls that had been activated by the glass rod, or vice-versa, they would be attracted to that rod.

[Electrostatics-1]

The same - only different ... what was going on?

Almost all of the material, the "matter", of which we are made, and with which we come into contact, is made of "atoms". Without going into details of atomic structure, it will suffice to know that atoms tend to have as many electrons (negative charges) toward their exterior as they have protons (positive charges) deep in their interior. Therefore, most materials remain electrostatically neutral, possessing roughly equal amounts of positive and negative electric charge.

In the figure labeled "Electrostatics-2" I depict two plates of material. I have dispensed with representing the "atoms" themselves, and simply represent the materials as having (initially) equal charge. Because of this, the electromagnetic field between them has no electrostatic "gradient" - it is flat. If a free electron (unbound from any atom) is released between them, it has no tendency to be pushed or pulled in any direction - it feels no force, just as a marble laid upon a level tabletop has no tendency to roll.

[Electrostatics-2]

If the charge on the plates is made unequal, as under the electrostatic pressure of a battery, a field-gradient is produced, and the electron will accelerate away from the region of excess negative charge, and toward the region having a relative excess of positive charge. This gradient is analogous to "tipping the table", causing the marble to roll "downhill". (NOTE: If we were to release a "free proton" between these plates, it would experience the gradient in the opposite direction, and flow toward the negative plate.)
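The "tipped table" can be quantified: the field gradient between the plates is E = V/d, the force on the electron is F = qE, and the acceleration is a = F/m. A sketch with invented plate values:

```python
ELECTRON_CHARGE = 1.602e-19  # coulombs
ELECTRON_MASS = 9.109e-31    # kg

def electron_acceleration(volts, plate_gap_m):
    field = volts / plate_gap_m       # gradient E = V/d, in volts per meter
    force = ELECTRON_CHARGE * field   # F = qE, in newtons
    return force / ELECTRON_MASS      # a = F/m, in m/s^2

# A 9 V battery across a 1 cm gap:
a = electron_acceleration(9.0, 0.01)  # ~1.6e14 m/s^2
```

Because the electron's mass is so tiny, even a modest battery produces a staggering acceleration.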

Notice that I refer to "free" electrons or protons. If the voltage in the battery were raised high enough, or the plates placed more closely together, even the "bound" electrons on the left plate could break their bonds with their atoms, and jump across the gap (arc). This represents an "electric circuit". Of all materials, metals in particular have very loosely-bound outermost electrons. In a long piece of metal wire, if you force an excess of electrons onto one end, they will easily "bump" neighboring electrons into the next atoms, and those electrons will bump their neighbors along, and very quickly (at about 75% of the speed of light, in fact) this "crowd-wave" will be felt at the other end of the wire, even though the actual electrons migrate down the wire at typically less than a millimeter per second.
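The contrast between the signal speed and the electrons' own migration ("drift") speed can be estimated from v = I / (n * q * A), using standard values for copper (the current and wire size here are invented for illustration):

```python
FREE_ELECTRONS_PER_M3 = 8.5e28  # standard free-electron density for copper
ELECTRON_CHARGE = 1.602e-19     # coulombs

def drift_speed_m_per_s(current_amps, cross_section_m2):
    # v = I / (n * q * A): how fast the electrons themselves migrate
    return current_amps / (FREE_ELECTRONS_PER_M3 * ELECTRON_CHARGE * cross_section_m2)

# 1 ampere through a 1 mm^2 copper wire:
v = drift_speed_m_per_s(1.0, 1e-6)  # ~7e-5 m/s
mm_per_hour = v * 3600 * 1000       # a crawl of a few hundred mm per hour
```

Meanwhile the "crowd-wave" signal races down the same wire at a substantial fraction of the speed of light.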

This should suffice for our "digression" on electrostatics. Back to the "photo-electric effect" ...

The Photoelectric Effect

The equipment to demonstrate the photoelectric effect is straightforward - a simple electric circuit powered by a battery, an ammeter to measure the electric current (essentially, the number of electrons that flow past a point per second), and an evacuated glass tube into which opposing ends of the circuit protrude. These ends do not touch, but are separated by a specific distance, and (hence) represent a "break" in the circuit - electrons cannot flow through the circuit unless they can "arc" across the gap in the vacuum tube.

The voltage supplied by the battery is intentionally insufficient to cause electrons to "jump" across the gap between these electrodes. The electrons crowd together at the plate-shaped "cathode", but do not possess sufficient energy to break free of the bonds to their atoms and fly across to the positively-charged anode. This is depicted in the upper illustration of the figure labeled "Photoelectric-1".

[photoelectric-1]

When red light, at low intensity, is shone upon the surface of gathered electrons, the current meter detects a current flow. The electrons have absorbed sufficient energy from the light to rise above the level needed to break free of the surface, and with enough kinetic energy to flow to the anode and "complete" the circuit. Moreover, if one increases the intensity of the red light (makes it brighter), the number of electrons making the journey per second increases.

[photoelectric-2]

If one raises the energy needed for escape - say, by applying a small opposing ("retarding") voltage across the gap, or by using a metal that binds its electrons more tightly - the red light is not able to provide ANY of the electrons enough energy to make current flow, no matter how intense the red light is made (see the figure labeled "Photoelectric-2"). However, if blue light, even of low intensity, is used, the electrons receive sufficient energy to escape and cross the gap.

These results suggested that light, when interacting with "matter", acted as if it were delivered in specific "packets", to specific locations. Packets of red light contained less energy than packets of blue light. If the energy imparted by a red packet was enough to excite an electron to bridge a certain gap, the electron could "absorb" that energy and make the journey. If the required energy were larger, and a red packet did not represent sufficient energy, increasing the number of red packets would not help - you would be providing "more total energy" to the surface, but that energy would simply be reflected or dissipated - it could not be absorbed by the electrons because no single packet provided enough energy to cause a change of state.

Put in simplest terms, there was no way for an electron to "accept" a packet too weak to complete a state change, and retain that energy while saying "may I have another?" Each packet-absorption was an all-or-nothing event, and if insufficient to boost its would-be recipient to the "next discrete level", it was a "nothing" event.

These "packets" of light energy were subsequently named "photons".
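The all-or-nothing bookkeeping is simple arithmetic once each packet's energy is known: E = h * f = h * c / lambda. A sketch comparing red and blue packets against the escape energy ("work function") of sodium, roughly 2.3 eV (the choice of metal is illustrative):

```python
PLANCK_H = 6.626e-34   # joule-seconds
LIGHT_SPEED = 3.0e8    # m/s
EV = 1.602e-19         # joules per electron-volt

def packet_energy_ev(wavelength_m):
    # one packet's worth of energy: E = h * c / lambda
    return PLANCK_H * LIGHT_SPEED / wavelength_m / EV

WORK_FUNCTION_EV = 2.3  # approximate escape energy for sodium

red_packet = packet_energy_ev(700e-9)    # ~1.8 eV: below the threshold
blue_packet = packet_energy_ev(450e-9)   # ~2.8 eV: above the threshold

red_frees_electron = red_packet > WORK_FUNCTION_EV    # False, at ANY brightness
blue_frees_electron = blue_packet > WORK_FUNCTION_EV  # True, even when dim
```

Brightness only changes how many packets arrive per second, never the energy carried by each packet - which is why intensifying the red light accomplishes nothing.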

The Wave-Particle Dilemma

Reconciling the wave-nature and particle-nature of light presented a major dilemma. The continuous wave fluctuation description perfectly explained light diffraction and the interference fringes produced from the double-slit experiment, but provided no way to explain the "all-or-nothing" quantum behavior evidenced in the photoelectric effect. The photon description of light as discrete "particles" of specific energy, that must be absorbed "whole", or not at all, explained the photoelectric effect, but could not explain light diffraction and interference fringes.

In certain experiments (single-slit diffraction, double-slit interference) light behaves as a wave, a "disturbance" propagating continuously through a medium that does not "itself" travel anywhere. In another experiment (the photoelectric effect) light presents itself as if it were a true and indivisible particle, actually travelling and affecting a point-wise location.

It should be noted, however, that this "particle" quality is limited - and hardly corresponds to our macroscopic sense of "particles" like bullets or similar "material" objects. Spray a million bullets at a concrete wall possessing a narrow slit, and the bullets that pass through the slit do not diffract and spread themselves into a broad fuzzy band of impact points. With two slits, they do not "interfere" and provide impact points approximating any fringe pattern.

[particle-intersect]

When it comes to "self-intersection", the difference between "classical" material particles, and the sense of photons as quantized particles is also apparent (see figure labeled "Particle Intersection"). Bullets and similar material objects are not expected to pass through one another or remain unaffected by such encounters - they will collide, be diverted, and/or shatter into various debris. In contrast, beams of light do not interact - they will exhibit wave interference in the medium during their coincidence, but continue on completely unaffected once beyond the area of coincidence. (Recall the cable example at the top of the page.)

As we shall see, while light is "propagating", and not interacting with "matter" (being absorbed, or being emitted) its description as "quanta", or discrete particles, is just a convenience and no such behavior is manifest. In fact, it is impossible to "observe light", either as wave or particle, as it is "en-route" between its involvement at points of emission or absorption.


Sharpening the Dilemma - The Birth of Quantum Mechanics

Depending upon how one wants to interpret the predictions (and results) of quantum mechanics, we might have to decide that the future can affect the past, or that a thing can be in two places at once, or even that space and time themselves do not fundamentally exist. At least at the microscopic "quantum" level of interactions, nature seems to employ such feats of magic. And yet, as events "bubble up" to our scale of experience, we do not observe these apparent violations of causality or of locality, and cannot take advantage of them to do "seemingly impossible" things (although, we can do some very strange things.)

Below, we will illustrate these issues with some very bizarre experiments. The experiments themselves are rather straightforward, and not difficult to understand - but attempting to reconcile them together may prove a challenge.

However, to begin this journey, we should lay some groundwork, and start where quantum mechanics itself became necessary.

The Impossible Atom

In the years from about 1850 to 1900, thanks to the insights and mathematics of people like Ampere, Coulomb, Gauss, and especially Maxwell, the equations of the theory of electrodynamics were well understood. These equations described the relationships between electric charge, electric and magnetic force, the behaviors of charged particles and electrical current under motion through the electromagnetic field, and much more. Armed with these equations, engineers could "design" all manner of circuits and devices involving coils and magnets, "predict" the behavior that should result from their being built, and then find that, indeed, the devices performed just as the equations indicated they would. The theory of electrodynamics and electromagnetism, embodied in Maxwell's equations, was (and still is) an incredible success story in our understanding of physical phenomena.

For atomic physicists, however, this "success" was both a boon, and a bane.

According to the prevailing model of "matter", electrons that were not "free" were "bound" to their atom in the manner of little planets orbiting a little sun. This model of the atom already had "problems". It was observed that an atom could be "excited" by the addition of energy, but when the atom relaxed, it would only return that energy at very specific frequencies (specific colors of light), and nothing in between. To follow the "planetary" analogy, it would be as if you tried to push Venus faster in its orbit, but it would not respond until you added enough energy that its orbit "suddenly" matched that of Earth or Mars. Such discontinuous behavior was inexplicable.

The success of Maxwell's equations was the final nail-in-the-coffin for the planetary model of the atom. These equations dictated that any electrically charged object (e.g., an electron) that undergoes acceleration (change of velocity in any direction, such as occurs when orbiting or oscillating) MUST radiate electromagnetic waves - and (thus) lose energy in the process. According to Maxwell's equations, the electrons "orbiting" in their atoms would immediately radiate away their energy and spiral down into the nucleus (there, they would be captured by the proton, cancel the electric charge and convert the proton to a neutron - all the matter in the universe would be reduced to a gaseous cloud of neutrons). In short, such atoms were impossible.

Fundamental Uncertainty

Contemporaneously, Werner Heisenberg, under the tutelage of the Danish physicist Niels Bohr, elucidated the principle that nature would not allow certain "pairs" of qualities, such as the position of an object and its momentum, to be simultaneously resolved with arbitrary accuracy - beyond a certain point, forcing one measurement to higher resolution meant giving up resolution of the other. Such measurement pairs were called "conjugate variables". The disparity was incredibly tiny compared to human scale, but for subatomic events this limit loomed large, and the inability to resolve such pairs, together, to arbitrary accuracy rendered our picture of subatomic events unavoidably "fuzzy".

For those interested, two of the equivalent formulations for Heisenberg's Uncertainty Principle are Δx · Δp ≥ h/(4Pi) (position and momentum), and ΔE · Δt ≥ h/(4Pi) (energy and time), where h is Planck's constant.

The mathematical structure developed to handle this new "kink" in our knowledge of the world is called "Quantum Mechanics", and expresses event specifics in terms of probabilities, rather than certainties. At its core lie Heisenberg's "Uncertainty Principle" and Erwin Schroedinger's "Wave Equation". These equations serve to represent particles in terms of what are called "probability density waves". Under this formulation, one could no longer hold that, say, an electron in motion was at a particular place at a particular time. Instead, the electron was "more likely to be near this location, and less likely to be further away".

The mathematical representation of a particle contained components that would correspond to the degree of certainty obtainable in conjugate measurements. To understand this, refer to the figure below, wherein a particle is represented by an oscillating waveform bounded within an enclosing envelope. This representation contains two components of interest - the wavelength of the inner waveform, and the length and shape of the envelope.

One of the formulations of the uncertainty principle is that the product of our uncertainty in the momentum of a particle (momentum = mass*velocity, where velocity includes both speed AND direction), and our uncertainty in the position of the particle, must always exceed h/(4Pi). Knowing more of one means knowing less of the other. Let the wavelength of the inner waveform represent the particle's momentum, and let the overall shape of the envelope represent the particle's likely position. From Figure "Quantum Wave Representation", we can get a good sense of the momentum (can measure the inner wavelength reasonably well) and we have a pretty good idea of where the particle was most-likely located, and less-likely to be located.

[Quantum Wave Representation]

Suppose we wanted to be more certain of the object's location. We could slow the particle, "narrowing" the position envelope, but we have now made it more difficult to ascertain its momentum - there is too little evidence of the inner waveform to accurately assess its wavelength (see Quantum Wave b). We could measure the momentum with high accuracy by increasing the particle's velocity (see Quantum Wave c), but now the position envelope has lengthened and we become less certain of the particle's position.
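To get a feel for the scale of this trade-off, here is a minimal sketch that computes the best position resolution the principle permits for a given momentum uncertainty (the example momentum value is chosen by me purely for illustration):

```python
import math

H = 6.62607015e-34  # Planck's constant, in joule-seconds

def min_position_uncertainty(delta_p):
    """Smallest position uncertainty (meters) permitted for a given
    momentum uncertainty (kg*m/s), from dx * dp >= h/(4*Pi)."""
    return H / (4 * math.pi * delta_p)

# Pin an electron's momentum down to within 1e-24 kg*m/s, and the
# principle forbids locating it more precisely than roughly 5e-11 m -
# about the size of an atom.
dx = min_position_uncertainty(1e-24)
```

Halving one uncertainty doubles the floor on the other, which is exactly the "see-saw" behavior the envelope figures depict.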

Replacing the earlier conception of particles having specific locations, energies, momenta, with the QM representation in terms of probability density waves and wave-functions turned out to be wildly successful in explaining physical phenomena at atomic scales. First and foremost, it solved the "impossible atom" problem. Rather than an electron being a "moving particle" within an atom, its representation was more that of a "standing wave" or resonance, able to maintain a specific energy state.

Electrons that are "free", not bound to any atom, are treated as "point-charges" in the EM-field, whose properties are described by the appropriate wave-equations. That is to say, they are like "dimples" in the field that have no particular diameter, but can exert force on other positively or negatively charged particles with a strength that depends on distance - as distance drops toward 0, the repulsion between two electrons rises toward infinity - but the electrons themselves have no "size" or surface. However, when bound to an atom, an electron no longer "behaves" like a point charge. Instead, it acts a bit more like a localized EM-resonance, called an electron "shell" or "orbital". If an atom has multiple electrons, no two of them can be in precisely the same resonance state - each must act like a different "harmonic" of the fundamental rest-state shell, and the energy difference between these shells is a fixed characteristic of each type of atom (due to the unique nucleus that defines the type of element involved.)
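These fixed shell-to-shell energy differences are what give each element its characteristic spectral lines. As a sketch (using the textbook hydrogen-atom energy formula, which this article has not derived - it is quoted here as an assumption):

```python
def hydrogen_level_ev(n):
    """Energy of the n-th hydrogen shell, in electron-volts
    (the textbook -13.6 eV / n^2 result)."""
    return -13.6 / (n * n)

def transition_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted when the electron drops from
    shell n_hi to shell n_lo, via lambda = hc/E (hc ~= 1240 eV*nm)."""
    delta_e = hydrogen_level_ev(n_hi) - hydrogen_level_ev(n_lo)
    return 1240.0 / delta_e

# The 3 -> 2 drop yields the familiar red hydrogen line near 656 nm.
wavelength = transition_wavelength_nm(3, 2)
```

Because the shells are discrete "harmonics", only these specific wavelengths appear - a fingerprint unique to each element.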

Physics meets Philosophy

Despite the early successes of quantum mechanics (QM), many prominent scientists expressed grave concerns. To accept QM as "the answer" in describing what could be known in detail about our universe would call into question many of the "unspoken assumptions" about reality, for which science was designed to provide answers. Two of the most significant of these were:

The term "Local Realism", often referred to simply as "locality", implies that how a "local" physical event proceeds, such as how two marbles that collide will be deflected, should depend only upon the local measurements (their masses, speeds, angles of incidence, etc) and should NOT depend upon any contemporaneous event occurring arbitrarily far away. Nothing that happens at location X should have an effect upon location Y faster than light can travel between X and Y. In essence, "distance is real" and must be overcome for a physical action at one location to affect another. There can be no instantaneous "spooky action at a distance", as Einstein complained regarding QM.

One should note that, as "speed" is always defined in terms of change in position per change in time, asserting position-locality also implies asserting time-locality. Distant "moments in time" should not also be "close" in a spooky way.

The concept of Counterfactual Definiteness (CFD) is a bit harder to describe. It embodies aspects of causality, determinism and "definiteness". Without "causality", it is hard to maintain that you have any "science", or theory-of-the-world at all. When an "event" occurs, however "event" is defined, it is assumed that the event was "caused" - the immediately preceding state of things is what made the event occur. Although scientists might argue about which theory best provided the "rules" by which the universe operated, it was rather universally assumed that if those rules were somehow "known", and the local specifics preceding a given event were also "known" with complete accuracy, then the event would "have to happen" - it would be determined, and (at least potentially) predictable.

This view implied a strictly deterministic universe, sometimes called the "Billiard Ball" universe (or "Fatalistic" universe, since the fate of everything has already been determined by past events). The idea being, if you could somehow know all the rules by which the universe operated, and also know (for some moment in the past) the entire state of everything exactly (the precise location, mass, momentum, etc, of every particle everywhere), then (in principle) you could know the entire future. Not that any reasonable scientist believed you could "know" all these details - nor conduct the enormous calculation that would be implied - but that is beside the point. The "universe itself" knows its own rules and its own state, and hence proceeds forward without a choice in the matter.

Science (up to this point) had marched comfortably and consistently in concert with strict determinism. Exchanges of momentum and energy, trajectories, mass, gravity, electromagnetism, etc, dictated events precisely. Even the distortions of space and time given by Einstein's special and general relativity in measurements taken from differing inertial frames were formulaically predictable distortions, yielding precise predictions for measurements.

Prior to quantum mechanics, Einstein's special and general relativity had already demolished the long-cherished notion of simultaneity - that two things could be said to have happened "at the same time". To get a sense of this, imagine someone places a rock on an anvil, and then strikes it with a hammer, causing the rock to split into two pieces that go flying off in opposite directions, and "simultaneously" shatter two opposing windows. Call these windows W1 and W2. Someone flying by at an appreciable fraction of the speed of light snaps a picture, and in that picture, window W1 is already shattering from a rock-impact, while the other rock has not yet reached window W2. Someone traveling by in the opposite direction also snaps a picture, which reveals window W2 shattering while the other rock has not yet reached window W1. So, ..., did W1 and W2 shatter "at the same time", or W1 before W2, or W2 before W1? Who is "right"? According to relativity, there is no such thing as "right" in this situation - each observer experiences the world from their particular frame of reference, and need only agree when they are in the same frame of reference. There is no "preferred" or "true" frame of reference - hence no universal simultaneity.

Importantly, note that the shattering of W1 is not purported to "cause" the shattering of W2, or vice versa. Even the peculiarities of relativity do NOT allow ANY observer to see either window shatter prior to seeing the rock split by the hammer. No effect can be observed to precede its own cause, or else causality loses its meaning altogether.

Loss of Causal Determinism?

Likewise, QM does not allow such "strong violation" of causality to be observed. It does, however, imply a weaker form of violation - events that occur in the absence of "cause". For instance, take a sample of a radioactive isotope which decays to a stable isotope with a half-life of 1 hour. After 1 hour, half of the atoms will have converted, and after another hour, half of those that remain will be converted, etc. Indeed, given a single radioactive atom, its likelihood of avoiding decay in the first n hours of observation is given by (1/2)^n. According to the determinist-reductionist viewpoint, there must be something about that atom that, if only known, would allow us to know when it will decay - the decay must be "caused" by something. But the mathematics of QM precludes there being such cause.
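The "cause-free" character of such decay is exactly what statisticians call memorylessness: each atom, each hour, effectively flips a coin, with no internal clock or hidden trigger. A simulation sketch (atom count and per-hour odds chosen for illustration):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def surviving_fraction(n_atoms, hours, p_decay_per_hour=0.5):
    """Simulate memoryless decay: each hour, every surviving atom
    decays with the same fixed probability, regardless of its past.
    Returns the number of atoms still undecayed."""
    alive = n_atoms
    for _ in range(hours):
        alive = sum(1 for _ in range(alive)
                    if random.random() > p_decay_per_hour)
    return alive

# With 100,000 atoms and a 1-hour half-life, roughly (1/2)**3 = 12.5%
# should remain after 3 hours.
survivors = surviving_fraction(100_000, 3)
```

No per-atom bookkeeping of "age" is needed to reproduce the (1/2)^n law - which is precisely the point: the statistics require no hidden cause.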

Einstein strenuously objected to such fundamental non-determinism, famously declaring "God does not play dice with the universe!" (a declaration he repeated often enough that at one point Niels Bohr replied to Einstein with "Stop telling God what to do!"). It should be noted that Einstein (along with the other "semi-classicist" scientists) did not object to the notion that the uncertainty principle placed fundamental limits upon what could be determined in measurements (loss of "predictive causality"), but rather to the stronger notion that the universe did not "resolve itself" so completely as to be a deterministic machine. It was one thing to admit, say, that the exact position and also momentum of an electron could not be together resolved, or that the exact moment of decay of an atom could not be predicted - but surely the electron must still "be exactly someplace, with exactly some momentum" at each point in time, and surely something "causes" an atom to decay precisely when it does.

The stronger "Copenhagen Interpretation" of QM would not allow such notions any ground upon which to stand. This led the semi-classicist scientific community to declare that QM was an "incomplete" description of physics, and engendered a variety of conjectured "Hidden Variables" theories. These theories postulated the existence of physical qualities, reflected by variables that might remain unknown (and perhaps, unknowable), but if known would in principle return physics to a sound footing consistent with deterministic causality and definiteness.

However, each time a "Hidden Variables" (HV) theory was proposed, its structure would lead to predictions that contradicted those given by QM. Sometimes, experiments could be devised to determine which theory gave the correct prediction, and invariably it was the QM prediction that was confirmed. This did not rule out, however, that some future HV theory might be devised that would be successful - and in certain cases, the kinds of experiments that would need to be performed in order to determine which theory was "correct" could not yet be conducted - they required more sensitive instruments than were available.

The tension between the semi-classicist HV theories and the strong Copenhagen interpretation of quantum mechanics came to a head when in 1964 John Bell proved an inequality now known as "Bell's Theorem". Bell was able to prove that for ANY HV theory designed to return both strict determinacy and local realism to physics, one could design an experiment for which the correlation between certain remote events could not surpass 50% using that HV theory, yet would be predicted to surpass 50% according to quantum mechanics. Hence, for ANY such HV theory to be "correct", QM would not only have to be an incomplete theory, it would have to be an incorrect theory. There was no way for the predictions of QM to be both correct, AND part of a larger, more deterministic HV theory.
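The "50%" figure above is a simplification; one standard concrete form of Bell's result is the CHSH inequality, in which any local hidden-variables theory bounds a particular combination of correlations by 2, while QM predicts 2·sqrt(2) for suitably chosen measurement angles. A sketch using the textbook quantum correlation for a spin-singlet pair (quoted here as an assumption, not derived in this article; the angles are the conventional optimal choices):

```python
import math

def E(a, b):
    """Textbook QM correlation for spin-singlet measurements
    taken at detector angles a and b (radians)."""
    return -math.cos(a - b)

# CHSH combination of four correlations; any local hidden-variables
# theory must keep |S| <= 2.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# QM predicts |S| = 2*sqrt(2) ~= 2.83, exceeding the classical bound.
```

No assignment of pre-existing "hidden" answers to the four measurement settings can reproduce a value beyond 2 - which is why experiments measuring S can decide between the two camps.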

In the decades that followed, instruments sensitive enough to conduct the experiments that would serve to decide the validity of quantum mechanics became available.


Experiments Demonstrating Quantum Weirdness

Implications of the mathematics of quantum theory suggested all manner of particular and specific results should be observed in various experiments (some that were highly counter-intuitive). Many of the predictions of QM were so outrageous that prominent scientists declared QM to be flawed, and they posited specific "gedanken experiments" (thought experiments) in order to show how irrational the QM results would have to be. The most famous of these was called the "EPR Paradox", after its formulators Einstein, Podolsky and Rosen. There can be many variants of the EPR experiment that demonstrate the same quantum weirdness, some involving photon polarizations, some involving electron "spin" states, and more. What they have in common involves what is called quantum entanglement and superposition of states.

For the moment, it will suffice to say that "superposition" refers to the concept that when a particle can (classically) be in one of several states, but has not interacted with anything (such as a measuring device) that might ascertain its "actual" state, it must be treated as if it exists in a superposition of the various possible states. The term "quantum entanglement" refers to the situation wherein two particles that have "quantum-interacted", especially particle-pairs born of a single quantum event, must maintain a specific relationship between their states (quantum numbers) that (apparently) exhibits a non-local correlation.

To begin, I reiterate the earlier findings we have made regarding the wave-particle duality of light, with a small but significant twist. I then present a simplified version of the "Delayed Choice" experiment as depicted in the magazine "Scientific American" some years back.

Stage 1. Revisiting the Wave-Particle Duality

Recall from figures 9a, 9b, and 9c above that we demonstrated the wave-nature of light. Light could be treated as a continuous wave-oscillation propagating outward across the electromagnetic field. Figure 9b showed that if light encountered a narrow slit, it would diffract in all directions as if that slit were a new source of waves. Figure 9c showed that if a pair of narrow slits were used, these two "new sources" of circular wavefronts would interfere, just as sound waves from a pair of tuning forks would interfere, producing regions of reinforcement and of cancellation that are evidenced as the light exposes a photographic film beyond the slits.

Contrastingly, we showed (via the photoelectric effect) that when light interacts with "matter", it seems to be delivered only in specific and individual packets (photons) whose energy depends upon the frequency, or "color" of the light.

In the decades since 1960, our mastery of microelectronics and "solid-state" physics has led to the ability to craft light sources so controlled as to reliably emit only one photon at a time, like clockwork. For our purposes, we will consider a light source that reliably emits only one photon per second. Likewise, we generally replace the photographic film with arrays of microscopic photon-detectors, such as one finds in today's digital cameras. These are also sensitive to such a degree that a single photon can be reliably detected. (I will sometimes continue to refer to the array of photon-detectors as "the film".)

Wave-Diffraction, in "Slow Motion"

What happens if we re-employ our "shadowbox" (figures 9a, 9b, 9c), but now with our single-photon-per-second emitter? The results are depicted in the figure below. As each photon is emitted, it can only interact with the "film" at exactly one point - no single photon can "spread itself" (as a wave would) when passing through a narrow slit. However, as more and more such "single point interactions" are recorded, an amazing result takes shape!

[qm-1-md]

[qm-2-md]

[qm-3-md]

While each individual photon seems to land "almost at random" upon the film, their overall distribution conforms to wave diffraction (in the case of the single slit), and, most amazingly, displays interference fringes in the case of a double slit!

How is it possible that both slits are having an effect, if only a single photon at a time is traversing the apparatus? It seems clear that we must consider the photon "indivisible" only in the sense of being an indivisible "energy delivery" event at the detection screen, and not a "particle" in the classical sense of having a "body" with a specific location or trajectory while "in transit". While a photon is in transit, between its "emission event" from the light source, and the "absorption event" at the detection screen, the photon behaves precisely as a wave propagating in the EM field, possessing no particular direction or location.
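This "random single points that collectively build fringes" behavior is easy to mimic numerically: sample each detection position from a fringe-shaped probability density. The density below is a toy stand-in of my own (a Gaussian envelope times cos-squared fringes), not derived from any real slit geometry:

```python
import math
import random

random.seed(1)  # repeatable run

def fringe_intensity(x):
    """Toy double-slit probability density (peak value 1): cos^2
    interference fringes inside a Gaussian diffraction envelope."""
    envelope = math.exp(-x * x)
    return envelope * math.cos(math.pi * x) ** 2

def detect_photon():
    """Rejection-sample ONE detection position from the pattern -
    each photon lands at a single point, yet the pattern as a whole
    governs where that point is likely to be."""
    while True:
        x = random.uniform(-3.0, 3.0)
        if random.random() < fringe_intensity(x):
            return x

hits = [detect_photon() for _ in range(5000)]

# Bright fringe near x = 0 fills in heavily; the dark fringe near
# x = 0.5 (where cos^2 vanishes) stays almost empty.
bright = sum(1 for x in hits if -0.05 < x < 0.05)
dark = sum(1 for x in hits if 0.45 < x < 0.55)
```

Each simulated "photon" is a single dot, just as in the figures above, yet a histogram of the dots reproduces the fringes - the pattern lives in the probabilities, not in any one detection.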

This weird resolution of the wave-particle duality deserves a second examination, and the following 7 figures serve to provide a step-by-step approach to this resolution.

[dce-1]

Figure DCE-1

The figure above (labeled DCE-1) depicts how a semi-mirror can serve to "split" incident light into two diverging paths. In contrast to pure transparency, which allows all light to pass and strike the rightmost screen, and to pure reflection, which allows no light to pass but redirects all of the light onto the leftmost screen, a semi-mirror can be constructed that will allow half of the intensity to pass through, and the other half to be reflected.

[dce-2]

Figure DCE-2

The next figure (DCE-2) demonstrates how such a semi-mirror, in conjunction with additional ordinary mirrors, can be used to construct a controlled "beam-splitter". In this case, we see that the two resulting beams are sent in parallel down a pair of tubes (wave-guides), each of which terminates at a narrow, vertical slit. Waves emanating from each slit form concentric circular wavefronts, which intersect with one another to produce the "interference fringes" on the recording screen, as explained earlier both in terms of sound and of light for a given frequency.

[dce-3]

Figure DCE-3

The figure DCE-3 merely serves to demonstrate that the two diverging paths for the light may become separated by arbitrary distance, half taking a far "northern" path, and half a far "southern" path, but upon being redirected by mirrors back to the pair of open slits, the wave interference is still evident.

[dce-4]

Figure DCE-4

Critically, figure DCE-4 demonstrates that if either the northern or southern path is blocked, even if one alternates randomly back and forth regarding which path is blocked, the light deposited upon the screen will no longer have interference fringes. Instead, a single, diffuse band of light is deposited. This should come as no surprise - for wave interference to occur, waves from two separate source locations must intersect in a common space at the same time, not at separate times.

But what happens if we employ a light source that can reliably emit only ONE photon of light energy per second? For a given frequency (color) of light, this "fixed energy per photon" has been demonstrated, and explained by Einstein's photoelectric experiment above. By crafting a small laser with materials that demand a specific frequency of light be emitted (due to the lowest energy state-change of the constituent atoms), yet powering it so weakly that only a single photon's energy per second is supplied, such "controlled photon" emitters can be constructed. For the remaining 3 figures, we employ exactly such a light source.

[dce-5]

Figure DCE-5

Figure DCE-5 appears to confirm the "indivisible nature" of a photon, as a "particle of light". Here, we replace the "narrow slits" at the end of the apparatus with a pair of sensitive photon detectors. As long as only one photon per second enters the beam splitter, it appears that only one of the two photon detectors may register that photon, never both. There may be no way to predict, at each second, which photon detector is triggered, but only one will detect a photon in any given second. Does this really mean that light is fundamentally a "particle", like an electron or proton, with a fixed location and trajectory at each moment of time? Upon passing through the splitter-mirror, is the photon really taking "only the northern path", or "only the southern path" through the device? Such a description may suffice to explain the behavior we see in this figure, but it is actually a stronger statement than is really necessary or warranted. All that this figure actually demonstrates is that, when a single photon's-worth of energy must be "deposited somewhere", the universe ensures that this happens in one place only, not in two separate places. To understand why we "split hairs" to make such a narrow distinction, consider the next figure.
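The statistics of figure DCE-5 are simple to state in code: each photon produces exactly one click, at an unpredictable detector, with (for a 50/50 splitter) equal odds. A sketch of that bookkeeping, with ordinary pseudo-randomness standing in for quantum indeterminacy (an illustration of the statistics only, not a model of the physics):

```python
import random

random.seed(7)  # repeatable run

def run_photon():
    """One photon enters the splitter: exactly ONE detector fires,
    never both - the 'indivisible energy delivery' rule - and which
    one fires is unpredictable, with 50/50 odds."""
    return "north" if random.random() < 0.5 else "south"

counts = {"north": 0, "south": 0}
for _ in range(10_000):
    counts[run_photon()] += 1

# Every trial yields exactly one click; over many photons each
# detector fires about half the time.
```

Note what this sketch does NOT decide: whether the photon "travelled" one path or both. It only enforces that the energy is deposited in one place - which, as the text argues, is the weaker and more defensible claim.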

[dce-6]

Figure DCE-6

In figure DCE-6, each individual photon of light enters the beam-splitter apparatus, and when it emerges, it affects a single point on the "photographic film" (or in today's modern electronic camera screens, affects only one of the cells of the multi-megapixel charge-coupled device array), recording the position wherein the photon was detected. At first, these positions will appear to be scattered randomly across the screen - but as more and more locations are recorded, it becomes clear that the positions are not entirely random, but conform to a distribution that increasingly "fills in" to form the interference fringes we saw earlier. The way this phenomenon is interpreted is that the universe, not being forced to "decide" that the photon has taken any particular path, allows the photon to exist in a "superposition of states" (like Schroedinger's cat, being both alive and dead in the sealed box until the box is opened). Thus, the photon effectively takes BOTH paths, and the photon's "wave nature" is able to self-interfere as it emerges from the two parallel slits. This recreates the varying "intensity" of light we have come to call the "interference fringes", explained now in terms of affecting the probability that the photon be detected at varying locations.

[dce-7]

Figure DCE-7

In the final figure of this discussion, figure DCE-7 (the actual Delayed-Choice-Experiment), we modify the apparatus by placing the photon detectors at the midpoints of the wave-guides, the northern and southern extremes of the two possible paths. However, each detector is placed behind an "electronically-adjustable" mirror (another modern invention whose transparency or reflectivity is instantly altered as charge is applied to its liquid crystal internals.) When these mirrors become "solid", they do not allow either of the photon detectors to register the photon. In this case, the photon is able to maintain a "superposition" of states, emerge from BOTH tubes, engage in wave interference, and contribute a point to the fringe-distribution, as depicted in the upper half of the screen. Most interestingly (thanks to advances in high speed digital electronics) we can make the northern and southern paths sufficiently long that after the photon has passed the initial beam-splitter mirror, we still have time to decide whether we want the electronic mirrors to be reflective or transparent. This study was conducted because some felt that the apparatus itself, being in one state or another, might somehow "signal" the photon as it emerges from the lightsource, as to whether it should behave as a wave and travel both paths, or as a particle and travel only one path.

When the choice to set the electronic mirrors was so delayed, but the mirrors were made transparent before being reached by the photon, (once again) only ONE detector or the other, not both, would ever detect the photon, and in such cases, no photon emerged from the two slits. This served to resolve the EPR "paradox" in favor of QM. One could imagine the apparatus stretched such that the northern and southern ends are light-years apart. As a photon approached those extremes, its "wave intensity" would of necessity be traveling in BOTH tubes, because if the decision was suddenly made to have the mirrors be solid, the wave at both extremes would need to be reflected back to the dual slits, and contribute to the wave interference fringes. Yet, if instead the "last moment" decision was to have the mirrors be transparent, and one of the two photon detectors "registers" that photon due to that very wave intensity, how can it be that at the very same moment, the opposing detector light-years away that was also about to receive the same wave intensity, is somehow "instantly precluded" from allowing a photon detection to manifest?

Just prior to reaching the two remote detectors, both paths are carrying an equi-potent "photon probability wave". Yet the very moment one side detects the photon, the opposing detector (arbitrarily far away) is immediately precluded from detecting the photon. How? It is "as if" some signal or information travelled between them "instantaneously" to ensure the universe remained consistent - only one photon of energy, only one detection allowed.

This weird example of "non-locality" or "seemingly instant communication", however, cannot really be USED to communicate instantaneously, or to cause any "spooky action at a distance", as Einstein had feared. Why? Sitting at one of those detectors, you have no control over whether the FAR detector gets a 1 or 0, because you cannot control whether YOUR detector gets a 1 or 0. The universe, discovering that one location detects the photon, merely (yet, instantly) ensures that the other location does not.

There is yet one more weirdness. Certain electronic devices have been built that can "quantum-clone" a passing photon, essentially allowing the "detection" of a cloned photon, while also allowing the original photon to continue on "uninterrupted". However, when such devices are used to "detect" the path a photon is taking (or, to force the universe to make such a decision), the photon that continues on its way is now "known" to be in one tube or the other, and not in both. The result is that, upon emerging from the end of the apparatus, the photon contributes a point to the "one fuzzy band" distribution, depicted at the bottom half of the screen.

In essence, QM demands that the "probability density wave", as given by the Schroedinger wave equation, does not merely describe the varying likelihood of where a particle "might already be" as it is in transit, but rather is every bit "the particle", and gives the potentiality that the particle might "become manifest" should it happen to interact with something (be absorbed, detected, etc) at any given location.

... to be continued ...

