Let me show you something I find completely magical. Behind this piece of glass, there appears to be a three-dimensional scene. You can move around your head and you see the light play off of the objects in different ways. For example, that glass Klein bottle in the front warps what's behind it. But it's an illusion. In reality, all that's there is an empty table and a diverging laser beam shining on the glass. Inside that glass is a single piece of film that's been exposed in a very special way that records the entire three-dimensional scene. It's hard
to capture in a video how surreal this feels in person because despite there being nothing behind that glass, every visual cue available is screaming to your brain that something really is there. One of my favorite versions of this is a recording taken of a microscope. Again, it gives this illusion of a three-dimensional object stored on a two-dimensional piece of film. It's a little tricky to get this right, but if you put your eye at just the right point in space, you can look down the barrel of the microscope and see what it's imaging. In this
case, a little microchip. I want you to take a moment to reflect on just how incredible this is simply from the standpoint of how much information needs to be stored on that film. An ordinary photograph records a scene from just a single viewing angle, but right here we have available to us an entire continuum of differing perspectives. In the first example, this includes how the light from the pi creature refracts through the glass in different ways from different angles, or how sometimes you get this little glint of light off the disco ball. Every optical detail
from all these viewing angles is stored on that two-dimensional piece of film. What you're looking at is a hologram. When most people hear the word hologram, what pops into their mind is something like what R2D2 projects in Star Wars, but real-world holograms are a special kind of recording taken on film that gives the illusion of looking through a window into a recorded scene. The principle behind this was discovered by Dennis Gabor in 1947 while he was working on methods for electron microscopy, and as the story goes, the fundamental insight came to him while he was
watching a tennis match. It wasn't until decades later, with the invention of lasers, that holography actually became practical, and in 1971 Gabor won the Nobel Prize for his discovery. The type that I'm showing you here, where the scene is visible only when you illuminate it with a laser from behind, is the simplest version. It's what's called a transmission hologram. In this video, you and I will roll up our sleeves and dig into the details to understand how this works. A more advanced variant is what's called a white light reflection hologram, which as the name suggests
can be illuminated using ordinary light that reflects off of it. This includes that microscope that I was showing you earlier, which is part of a collection from the Exploratorium in San Francisco, which they very generously let us record. First I want to show you the process for how you actually record a hologram, and then you and I will cover some of the fundamental principles of optics that explain one very simple and very specific hologram, but in such a way that we can get a deep and visceral understanding for it. Then after that I'll share a
slightly more abstract but much more powerful framing that explains how it works so well in general. Optics can be tricky and sometimes magical, so my goal at each of these steps is for this to really feel like something that you could have rediscovered for yourself. When my collaborator Paul and I asked around for help to record our own custom hologram, multiple people pointed us to these two very skilled holographers, Craig Newswanger and Sally Weber, who generously showed us the ropes when it comes to the delicate orchestration of lasers involved in doing this. The whole setup
risks feeling like a set of arbitrary steps, but I think maybe the best way to motivate it is to contrast what we're doing with ordinary photography. In a photograph, a given point is only influenced by a very narrow region of the scene that you're recording, something like a pixel. The simplest kind of camera would be a pinhole camera, where you only let light pass through a tiny little hole that exposes your film meaning each part of the film can only see one narrow little region of the scene through that hole. The point is that you're
limiting things to a single viewing angle to influence the film. All of the other information from all other possible viewing angles is lost. The key thought to have in your mind when it comes to a hologram is that the goal is to recreate the entire light field around a scene. What I mean by that is when you set up some scene and there's some light shining on it, there's this incredibly complicated undulating electromagnetic field that surrounds everything, and the specifics of that field are dependent on every optical property of the objects in your scene and
the lights illuminating it. What you see when you're an observer depends on where your eye is in that field. In our test scene, for example, from some angles you see the pi creature's eye refracted through the glass of the Klein bottle, from others you see a glint of light off that disco ball, but in principle everything you can see from various different viewing angles is entirely a function of whatever this undulating light field is that surrounds the scene. So if you could come up with a procedure that recreates the full state of that field, even
when the objects are no longer there, it would give the complete illusion that we're aiming for. Contrast this with the pinhole camera, where a ton of information is lost about that light field based on filtering through the hole. After all, that hole is there to limit the exposure to just one viewing angle. So the first thing you would do if you wanted to record the whole field is to get rid of that pinhole altogether, or the lens that simulates it with most cameras. On its own, exposing the film would now create a nonsensically blurred mess,
but the key to making it sensible is to account for another piece of information about light that normal photography loses, phase. Think about a beam of light shining on a single point of film, say one that has a pure frequency that you could think of as an oscillating sine wave. The height of this wave is called its amplitude, and how far you are along in the cycle is called the phase. Only one of those determines how much the film gets exposed, the amplitude of the wave. The exposure takes some time, and it's influenced by the
average intensity of this wave over that time. Actually, more specifically, the exposure is proportional to the square of the amplitude of this wave. That's going to be important for us much later. If you were to shift this wave back by half of its wavelength, it makes no difference to the exposure on the film, in a sense that point of the film completely forgets everything about the phase of the light that exposed it. You could almost reinvent holography for yourself if you ask, how might it be possible to record the phase of the light, not just
the amplitude? Can you come up with a procedure where one beam of light would expose the film differently than another beam that is in all respects identical, except for the fact that its phase is shifted back, say by half of a cycle? If you give yourself a moment, and if you're rather clever, you might think of something like this. Shine on a second beam of light that has exactly the same frequency. We're going to call this the reference wave. When the first beam is in sync with that reference wave, the two will constructively interfere, producing
a wave that has twice the amplitude, so the film gets exposed a lot at that point. But if that first beam was shifted back half of a cycle, the two would now cancel out with each other, interfering destructively, so the film would be exposed much less at that point. It's not a perfect recording of the phase, but in this case the exposure is highly sensitive to phase shifts in a way that's not at all true for normal photography.
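To make that concrete, here is a tiny numerical sketch of the idea. It is not part of the actual setup, and every value in it is chosen purely for illustration: a single beam exposes the film the same amount no matter its phase, but once you add a reference beam of the same frequency, the exposure swings between roughly double and roughly zero depending on that phase.

```python
import numpy as np

# A minimal sketch: model the light at one point of the film as a sinusoid, and
# treat "exposure" as the time-average of the squared wave (proportional to the
# amplitude squared).  All numbers here are illustrative.
t = np.linspace(0, 200 * np.pi, 100001)     # many full cycles of a unit-frequency wave

def exposure(wave):
    return np.mean(wave**2)

reference = np.cos(t)                       # the reference beam, same frequency
for phase in [0.0, np.pi]:                  # an object beam, then the same beam shifted half a cycle
    beam = np.cos(t + phase)
    print(exposure(beam),                   # ~0.5 either way: the film forgets the phase
          exposure(beam + reference))       # ~2.0 when in phase, ~0.0 when out of phase
```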
Admittedly, it might not yet be clear why this has anything to do with storing a three-dimensional scene on a two-dimensional film, but at least in principle, if you know that the goal is to recreate the full state of a light field around your scene, perhaps you could believe that having a record of the phase variations in that light field along the plane of the film might somehow be relevant. This insight of interfering with a reference beam only works if all the light has the same frequency. So looking at your setup, you cannot illuminate the scene with ordinary white light. What you have to do is use a laser. A clever way to do
this is to pass that laser through a beam splitter, where half of it gets spread out, bounces off the scene, and hits the film. We'll call that the object wave. And then the other half also gets spread out, but it doesn't interact with anything before hitting the film. This will act as the reference wave. Those two waves interfere at the plane of the film in a way that depends heavily on the phase of that object wave. The full setup that Craig and Sally made for us looks only a little bit more complicated than this. One
little nuance is that beam splitters change the polarization of light, so you have to do something to counteract that. The object beam follows this trajectory here, which lets our scene get right up next to the film and get illuminated from the front. And then on the left we have the reference beam. But at a high level, the essence of any holography setup is to have these two different sources of light, both of the same single frequency, interfering with each other at the plane of the film. This means that the exposure pattern on the film is
based on the extremely subtle ways that these two waves interfere with each other. The exposure pattern looks absolutely nothing like the original objects. If you zoomed in with a microscope, you would see a complete mess, some rapidly oscillating fringes between points of high and low exposure. From a practical standpoint, you should note how this also means the exposure pattern is incredibly sensitive to even the tiniest movement of the objects in the scene during the recording. If their position shifts by an amount comparable to the wavelength of the light, a few hundred nanometers, it can completely
change that pattern. So when Craig and Sally were showing us the ropes, one thing that really surprised me was how part of the process involved all of us sitting in this meditative stillness throughout the exposure, which took a couple of minutes. We even had to sit still during the minutes leading up to it, reducing as much motion as possible in the air of the room. Now at this point, even if you believe that this resulting interference pattern on the film somehow records information about the light and its phase and all of that, it's not at
all obvious how you could use it to recreate the original field, making that illusion of the scene behind the glass visible from many angles. But here, my friend, is the magic. If you now remove all the objects from the scene and you block that object beam, so the only thing shining on this now exposed film is the reference beam, then what it produces beyond the glass includes a complete recreation of that object wave, a recreation of light that would be there if the scene were still there and the object beam were still shining. This is
the surprise of holography, the mystery that you and I are going to dig into for the remainder of the video. Why is it that shining the reference wave through this film, which was exposed using the combined object and reference waves, gives you such a bizarrely perfect recreation of the object? How does this delicate and apparently nonsensical interference pattern on the film somehow record an entire three-dimensional scene? To drive home how magical this feels, we cut out a very small circle from the film that we recorded. In an ordinary photograph, cutting out a small piece obviously
cuts away the vast majority of the scene, but for a hologram, holding up that same small little circle of film to the reference beam, as you shift your viewing position looking through that circle, you can see essentially every part of the scene recorded, from that pi creature to the disco ball and the various shapes behind it. You just can't see all of them at once. Continuing with the goal of rediscovery, universal problem solving tip number one is to begin analyzing any hard problem by taking the simplest version of that problem. So to puzzle over how
holography works, a natural place you might start is to ask, what happens if you record a hologram of the simplest object you could think of, maybe a little point floating in three-dimensional space? We'll think of the light that reflects as a wave propagating radially away from that point. The basic outline for what follows is that we'll figure out what the exposure pattern on the film is in this very simple case, then deduce why shining the reference through that known exposure pattern creates the illusion of a single point behind the glass, even when it's not there,
and then we'll generalize to more complicated objects. I should maybe take a moment to be clear on how I'm representing light waves throughout this video. Light is a wave in the electromagnetic field, where for example the electric half of that is an association between each point in space and a little oscillating vector pointing in three dimensions. Where that field is strong in one direction, I'll color a point blue, and when it's strong in the opposite direction, I'll color it red, and the dark bands in between represent where it's zero. In principle, the field exists everywhere
in three-dimensional space, but typically it's a lot easier to think about if I only color the points along a two-dimensional slice of it. Even more narrowly, it's often helpful to think of a little sine wave over this when we want to think of what the wave-like variations are just along a single one-dimensional line through space. And for the detail oriented among you, throughout this video we'll be assuming that the light is linearly polarized, meaning it just oscillates in one direction, which it typically would be if we're recording a hologram. Now, when we take a hologram
of this single point, the radial wave coming from that point is our object wave, and for the reference wave, again in the spirit of simplicity, I want you to think of it as coming from very very far away, enough so that we can model it as a plane wave coming in perpendicular to the film. In other words, what I mean by that is that all of the wave crests from that reference beam are parallel to the film. You should know that in practice for real holography, you want this reference beam to come in at an
angle, later on you're going to understand why, but for right now, keeping the analysis simple, making that beam perpendicular gives us a friendly situation to study. I can simulate for you what the combination of both of these waves looks like, and what the resulting exposure pattern on the film looks like, but it is helpful to think through for yourself. For example, if the light off of that object hitting the nearest point of the film happens to be in phase with the reference beam, it means that center point would be strongly exposed. In that case, for
points farther away from the center, because the distance to the object gets longer, the phase of the object wave along this strip falls in and out of phase with the reference wave, meaning that the exposure pattern oscillates between points of high and low exposure. For example, at a certain point on that film, the distance to the object will be exactly half of a wavelength longer than the distance between the object and the closest point on the film, at that center. So if the object and reference beams had been in phase in the middle, it would
mean that they're exactly out of phase here, so the combined wave has a low amplitude and you get this dark spot at that point. The same would be true for everything along this ring around the center, of all points the same distance to the object. In general, the exposure pattern here looks like a bunch of concentric rings. If you enjoy exercises, you might like taking a moment to try writing an explicit formula for the radii of all of those rings, as a function of the distance between the object point and the film, as well as
the wavelength of the light. At a more qualitative level though, just note that the distance between these fringes gets smaller as you go farther away from the middle. This pattern is important enough in optics that it has its own special name: it's called a Fresnel zone plate. The wavelength of visible light is really small, and that means that the fringes of exposed points are spaced very close together. Farther away from the middle point, they approach the wavelength of the light itself.
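If you want to check your answer to that exercise, here is one way it can go, written as a short sketch with illustrative numbers rather than the actual recording geometry. Using the same premise as above, with the object point a distance z behind the film and the reference in phase with the object wave at the center, the n-th bright ring sits where the path to the object is exactly n wavelengths longer than z.

```python
import numpy as np

# Bright rings satisfy sqrt(z**2 + r**2) - z = n * lam, which rearranges to
#     r_n = sqrt(2 * n * lam * z + (n * lam)**2)
# The values of lam and z below are illustrative, not the actual setup.
lam = 633e-9                    # wavelength of a typical red laser, in meters
z = 0.10                        # object point 10 cm behind the film

n = np.arange(1, 8)
r = np.sqrt(2 * n * lam * z + (n * lam)**2)
print(r * 1e3)                  # ring radii in millimeters, growing roughly like sqrt(n)
print(np.diff(r) * 1e3)         # spacing between rings shrinks as you move outward
```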
In the simulation, notice how if I bring the object point closer, the inner rings get smaller, and if I pull that object point farther away, the inner rings get bigger. So in a sense, this pattern records the three-dimensional coordinates of our point. The center of the rings gives you the xy coordinates, and the ring spacing uniquely determines the z coordinate. So you go ahead and you record this simplest of all holograms, and you end up with film that has this peculiar zone plate exposure pattern. But how does reconstruction work in this case? What happens when you remove the object and the object beam, and the only thing shining on the
exposed film is the reference wave? Well, if you look at just a small region of the film, the zone plate pattern looks like a bunch of parallel stripes, and this is related to a very well-studied phenomenon in optics known as a diffraction grating. And while I could just plop down something known as the diffraction equation right now to continue with the explanation, it is a real joy to see how it arises from first principles. So continuing that theme of letting this feel like something that you might have rediscovered, let's take a brief sidestep into a
mini-lesson so that I don't have to rob you of that joy. Here's the basic puzzle for this mini-lesson. You've got a light wave shining onto a wall, and imagine this wall is solid and opaque, except for having a bunch of very thin evenly spaced slits that the light can pass through. The basic question is, what does the light wave look like on the other side of this wall? And in particular, how does it depend on the distance between those slits? Now, a full answer is extremely complicated, so much so that it's surprising that we'll be
able to say anything useful at all. And before I spoil the general shape of the answer with the simulation here, let me be clear about how we're going to think about it. For each individual one of those slits, we'll imagine that it's thin enough that you can model it as a single point emanating light, matching whatever wavelength and frequency the incoming light has. So if there was some wall or film behind this single slit, you wouldn't really see anything all that interesting. It would be brightest in the middle, and then taper off very gently to
the side, depending on how the amplitude of that wave diminishes over distance. In particular, nothing here really gives you a hint about the phase of the light, so the wave nature remains largely hidden. Very famously, this changes when you open up a second slit. In fact, when Thomas Young did this for monochromatic light in 1801, it was one of the earliest confirmations that light really is a wave. For you and me, thinking about two slits is a nice warm-up before we get to more. As an example, think about the point at the center of that
wall on the opposite side of the slits. In that case, the waves coming from each one would be in phase with each other, so they interfere constructively, and that's why you get a bright spot. If you move over a little bit to the side, where the distance to one slit is exactly half a wavelength longer than the distance to the other, they would add destructively, and that's why you get a dark spot. And in general, you get these oscillations between bright and dark as you scan left to right. The key point here, which you have
to imagine was a bit of a surprise in 1801, is that the brightness on the wall is not just a sum of what it would be for each individual slit. The key, when you have light just of one frequency like this, is to understand where the waves are in and out of phase with each other. Also, I want you to notice how the pattern changes if we change the wavelength of the light. A shorter wavelength would give more rapid oscillations between bright and dark spots. There's a really nice exhibit at the Exploratorium in San Francisco
showing this double slit interference. You shine a thin laser beam through two even thinner slits, and on a wall very far down, what you see is an array of light and dark spots, just like what we saw in the simulation. That's just two slits, but remember that our key question is what happens when you have a whole bunch of slits, say spaced a distance d away from each other. I have to say this was incredibly fun to simulate, and you can probably guess how the simulation here is working. At every point in space, you consider
the n different light waves coming from each one of those slits, you add up the values of those waves, which might add or subtract depending on their phase, and then you color that point depending on the result. Blue for positive values, red for negative values.
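For anyone who wants to tinker with this themselves, here is a rough sketch of that kind of simulation. It is not the code behind the animation, and the wavelength, slit spacing, and sampling grid are all just illustrative choices.

```python
import numpy as np

# Treat each slit as a point source, add up the waves from all of them at every
# sample point on the far side of the wall, and look at the result's sign and size.
lam = 500e-9                                   # wavelength (illustrative)
d = 2e-6                                       # spacing between slits (illustrative)
n_slits = 20
slit_x = (np.arange(n_slits) - (n_slits - 1) / 2) * d   # slit positions along the wall

x = np.linspace(-2e-3, 2e-3, 400)              # sample a region beyond the wall
z = np.linspace(1e-5, 5e-3, 400)
X, Z = np.meshgrid(x, z)

k = 2 * np.pi / lam
field = np.zeros_like(X)
for sx in slit_x:
    r = np.sqrt((X - sx)**2 + Z**2)            # distance from this slit to each point
    field += np.cos(k * r) / np.sqrt(r)        # add this slit's wave

# Coloring field > 0 blue and field < 0 red gives the kind of picture described
# above, and the off-center beams sit at the angle predicted by the d*sin(theta)
# analysis that follows:
print(np.degrees(np.arcsin(lam / d)))          # about 14.5 degrees for these numbers
```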
In the immediate vicinity of the slits, this is a complete chaotic mess, but the surprise is that when you zoom out, order emerges. You have one beam of light shining down the middle, but you also get these other beams shining off on either side, and one of those beams specifically is going to be the key to explaining our first hologram. In particular, I want you to understand how you'd compute the angle between that other beam and the center. How would you analyze a point which is along a line some angle theta away from that perpendicular? Well, the key question is what are the distances from that point to all of the individual slits where the light is coming from? If this point is far enough away, then when you zoom in close to those slits, all of those lines look like they're parallel to each other. What you
want to know is whether a given one of these lines is longer than another one of the lines, and more specifically, how much longer it would be. I think one of the nicest ways to think about this is to zoom out again so that we can see the point we're analyzing, and imagine pivoting one of those lines around that point so you're looking at everywhere else that's the same distance away. If you were zoomed in next to the slits, that pivoting motion looks basically like you're translating in a perpendicular direction. So what that means is
if you drop a little perpendicular line from one of the slits to the line adjacent to it, you can conclude that the distance between that first slit and the point we're analyzing is exactly the same as this section of the adjacent line, meaning that the difference between those two distances is visible as this little snippet right here. So the key question, if you want to understand whether those two waves are in or out of phase with each other, is how big is that snippet? What's the size of that difference? Well if you go in and
you draw the appropriate little right triangle, the hypotenuse of that triangle is the spacing between the slits, which I'm going to call d, so this key difference we care about looks like one of the legs of that triangle, d times the sine of a certain little angle that I'm going to call theta. It's not too hard to convince yourself that that angle theta is actually the same as the angle between all these lines and the vertical, so the difference between all of these lengths, and this is really the key lesson of diffraction gratings, looks like
d times the sine of that angle. In particular, if it was the case that this key expression, d times the sine of theta, happened to be exactly the wavelength of the light, commonly denoted lambda, all of the beams emanating from these different slits are going to be in phase with each other, so they'll constructively interfere at the point we're analyzing. This is why, even in the immediate vicinity, you have this chaotic mess that's very hard to describe, zoomed out a sufficient distance away, you have a distinct beam along a certain direction, and moreover, you now
know how to compute the angle of that direction. It's the angle such that d, the spacing between your slits, times the sine of that angle equals the wavelength of the light. That will be a key equation for us, so do remember it, but this graphic also helps give a nice qualitative intuition for how changing the spacing influences the angle. Imagine locking the length of this critical leg of the triangle so that it always equals the wavelength. Then if you want to narrow the spacing of the grating, making d smaller, the only way to
keep that leg locked is to make the angle theta bigger. On the other hand, if you had a wider spacing and you increased that value d, the only way to keep that leg locked is to decrease the value of theta. You can also say a little bit more here too. For large enough values of d, it's possible for d times the sine of theta to not just be a single wavelength, but to equal two whole wavelengths, so you would still get constructive interference but at a bigger angle, and the same goes for any whole number
multiple of the wavelength. And again, this is something that you can see in practice. Here we're shining a green laser through a diffraction grating that has 500 lines per millimeter. You can not only see how it splits up into these distinct beams, but you could actually do the math to figure out the angle between those beams. The centermost one is commonly called the zeroth order beam, the ones immediately next to it are called the first order beams, and if they exist, the other ones are called second order, third order, and so on. Another fun thing
to notice is how this equation depends on the wavelength of the light. So if you shine white light through a diffraction grating like this, you get a separation into distinct wavelengths, distinct colors like a rainbow. If you've ever noticed how the reflections off of a DVD or a CD produce this rainbow pattern, it's essentially the same effect going on there. The angle of reflection depends on the spacing between the ridges and the wavelength of the light.
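As a quick sanity check on those claims, here are the angles you would predict for a grating like the one in that demo. The 500 lines per millimeter figure comes from the demo above; the 532 nanometer value is an assumed typical green-laser wavelength, and the red and blue values are just illustrative, to show why longer wavelengths bend through larger angles and white light fans out into a rainbow.

```python
import numpy as np

# Solve d * sin(theta) = m * lam for theta, for a few orders m and wavelengths.
d = 1e-3 / 500                                     # 500 lines per mm -> 2 micrometer spacing
for lam_nm in [450, 532, 650]:                     # blue, green, red (all assumed values)
    for m in [1, 2, 3]:                            # first, second, third order
        s = m * lam_nm * 1e-9 / d
        if s <= 1:                                 # if s > 1, that order simply doesn't exist
            print(lam_nm, "nm, order", m, "->", round(np.degrees(np.arcsin(s)), 1), "degrees")
```

For the green beam this gives roughly 15 degrees for the first order and 32 degrees for the second, and the red angles come out larger than the blue ones at every order.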
So that's the packaged lesson about diffraction gratings, but now let's zoom out and remember why we were doing this. If you record a hologram of a simple point in space, the exposure pattern on the film is this Fresnel zone plate, concentric rings that get spaced closer together the farther you go away. In fact, you can be a little bit more quantitative and write down the exact spacing between those fringes. Let's say you draw a line from a point on this film down to the object, or maybe I should say down to where the object was during recording. Consider the angle between this line and the one perpendicular to the film, which I'm going to call
theta prime to distinguish it from the diffraction angles we were just looking at. I won't narrate through all the details here, but you can work out this fringe spacing exactly. You start with the premise that the distances from adjacent fringes to the object should differ by one whole wavelength. I know some of you are curious and want to pause and ponder, so I'll leave up the details, but the upshot is that it all boils down to a delightfully clean and convenient fact. The spacing between the fringes for our Fresnel zone plate, which I'll label as
D, multiplied by the sine of this angle, theta prime, equals the wavelength of the light.
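For those who paused to ponder, here is one way the left-out details can go, offered as a sketch rather than the exact on-screen derivation. Put the object point a distance z behind the film, and consider a fringe a distance r from the center of the rings:

```latex
% Distance from a point on the film, a radius r from the center, to the object:
%   l(r) = sqrt(r^2 + z^2)
% Adjacent bright fringes at r and r + D differ in path length by one wavelength,
% and when D is much smaller than r:
\lambda \;=\; \ell(r + D) - \ell(r) \;\approx\; \frac{d\ell}{dr}\,D
         \;=\; \frac{r}{\sqrt{r^2 + z^2}}\,D \;=\; D\sin\theta'
```

The last step uses the fact that r divided by the square root of r squared plus z squared is exactly the sine of the angle theta prime between this line and the perpendicular to the film, which is what gives the clean fact just stated.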
This might give you a little déjà vu, and if it strikes you as uncannily similar to the diffraction equation, you are 100% right and it's the key to how our first hologram works. When you just shine the reference wave through this pattern, near a given point on the film, it acts like a diffraction grating, splitting the wave into distinct beams, and the angle of those first order beams satisfies the diffraction equation, D times sine of theta equals lambda. Therefore, the angle of one of these first order beams exactly matches the angle of the line connecting this point of the film to where the object was during recording, even though the object isn't there anymore. So, critically, think about why this would imply there's the illusion of a point behind the screen. For an observer on the other side of that film, as the reference wave is shining onto it, every point of that film is emitting multiple different beams, but one of those beams at each point matches what a beam from that object dot would look like if
that object dot were still there. So the observer always sees this bright spot on the glass at a point where a line from their eye to the past object position intersects the glass. As they move their head around, and their eye moves in 3D space, that apparent bright spot on the glass moves with them in just such a way that it gives the illusion of a floating point of light behind that glass. The way things are set up right now, the other beams would add some distraction for that observer. The zeroth order beam, for example,
at all of these points is essentially a rescaled version of what the reference wave would look like if the film weren't there. To the observer, this looks like a bright glow coming from everywhere behind the film. This would be pretty obnoxious, but it can be solved by shining the reference beam in at an angle. You might be curious about the other first order beam, and it's actually very interesting. If you draw a line along that other first order beam from every point at the film, all of those lines converge at a single point on the
opposite side of the film. This is our first peek at a funny little artifact of holography known as the conjugate image. Much earlier I said that the way you want to think about a hologram is that you're trying to record and reconstruct the state of the light field around a scene. The truth is that a hologram doesn't exactly do this. What it actually recreates can be thought of as a sum of three different light waves. One of them is a scaled copy of the reference wave, which for our simple point example corresponds to the zeroth
order beam. Another corresponds to a copy of the wave emanating from the object. This is exactly the thing we want to reconstruct, and in our simple example it corresponds to one of these first order beams, but there's also a third wave in there. In this simple example of a hologram of a single point, that third component looks like the reference beam being concentrated into a single reflection of that point through the film. More generally though, this third component looks like the reference beam being refocused onto a reflected version of your full object on the
opposite side of the film. In the hologram that we recorded with Craig and Sally, you can actually see this by putting a piece of paper behind the glass, where at the appropriate distance away, you see the light focused onto the shape of, for example, the pi creature, which was part of the scene we recorded. This is known as the conjugate image. In the earliest days of holography, Dennis Gabor had a lot of trouble seeing the holograms that he wanted to, because this conjugate image kept getting in the way, muddying up the waters. However, again this
is something where if you use a reference beam that shines on the film at an angle, you can get a clean separation of the reconstructed object wave from this conjugate wave artifact. The astute among you might ask about the higher order beams from a diffraction grating. After all, we saw how the equation doesn't just imply three beams for large enough values of d, you can have many more angles that satisfy the diffraction equation. This is something I actually didn't know until learning about holography, but if instead of a binary grating, where light either completely passes
through or gets completely blocked, you instead have a material that only partially allows light through with an opacity that follows a sine wave pattern, then you actually don't get higher order beams. The diffraction pattern only includes zeroth and first order beams. The most elegant way to show why, in my opinion, comes from the more formal explanation of holograms that I want to show you in a few minutes, but for right now just know that you're safe to only think about three beams emanating from each point of the film. So, stepping back, you, the problem solver
puzzling over holography, have at least this one example to hold onto. A hologram of a single point gives a zone plate, and the diffraction effect of shining the reference through the zone plate includes, among other things, a recreation of the wave from that point. A natural place your mind might go from here is to wonder about two such points in space. Right here is a simulation for what the resulting exposure pattern would look like. It's based on adding the two radial waves from those points together with a reference wave, and you can clearly tell that
it's related to the zone plate associated with each one, effectively encoding two distinct 3D positions, but it is more complicated than simply adding the two individual patterns, in the same way that the double slit interference is more complicated than simply adding the brightness from each individual slit. The result depends on how the waves and their phases interfere. Even still, you can see how this might lead you to think about building up a scene as a combination of multiple points, and hypothesizing that the resulting exposure pattern can be thought of as some kind of combination of
the zone plates for each one. The simulation I'm showing you has only 30 points, and already the result effectively looks like random noise. But nevertheless, I think you would agree that a reasonable hypothesis would be that the diffraction effect of shining a reference beam through this pattern might be the sum of what you would get from the zone plates associated with each individual point, and you know that looking through from the other side of this film, it would give the illusion of those 30 points being there. If that were true, you could think of a
more complicated object or more complicated scene as a continuous cloud of many little points, and the resulting pattern would not only encode the 3D position and orientation of that object, but it would also enable the simple reconstruction that we're looking for. As stated so far, this is all sketchy and non-rigorous, but it does actually lead you to some very real intuition about how holography works in practice. Think back to the very start, when I asked you to appreciate how much information is being stored on this film, in that it can recreate a scene from a
wide continuum of viewing angles. Part of the catch is that for this process to work, the film has to have extremely high resolution. For comparison, standard Polaroid Instant Film resolves only around 10 lines per millimeter. Microfilm will get you somewhere a little over 100 lines per millimeter, but the film that we used to record this hologram can resolve many thousands of lines per millimeter. And now you know why this is necessary. The spacing between fringes in a zone plate becomes very very narrow, and faithfully recording those variations is necessary to reconstruct the object beam at
that point. Without the outer fringes, the effect for a viewer would be that they simply don't see the object if they're viewing from a sharp angle. This is not a complete explanation without justifying how exactly the exposure pattern that you get from multiple points would have a diffraction effect that equals the sum of what you would get for the individual points. Even then, there's a deeper issue. Building up a scene from individual points would not do true justice to the power of holography. The reason we included that glass Klein bottle in our example is to
emphasize this power. That glass does not act as a collection of points, merely reflecting light. It affects the surrounding optics of the scene, like how the light from the objects behind gets bent and distorted as it passes through. The hologram perfectly recreates this effect in a way that could not be explained by adding up individual points. Anyone who has experience with ray tracing will appreciate the difference here. Also, the sharp-eyed viewers among you might have noticed a couple gaps in the explanation even for a single point. For example, I mentioned shining in the reference beam
at an angle, but that would also change the exposure pattern, so it's not entirely clear that the logic still works. Sometimes in math, a complex problem can be solved incrementally by building up simple cases, but other times the simple cases act more like training exercises, and the best way to tackle a general situation is to set up a new framework for thinking about it, such that the general solution simply falls right out. In the case of studying holography, there is an entirely different and more powerful way to explain holograms that subsumes all of these issues.
I'll walk through it at the very end here as a kind of technical appendix to the whole video, but I think it would be nice to wrap up the main discussion by stepping back and discussing holograms more broadly. Everything we just described covers a transmission hologram, the simplest variety, and these have been possible ever since the late 60s. More advanced techniques exist where the film can reproduce the scene not using a laser, but with ordinary white light reflecting off of it. And it goes beyond that. There are techniques where you can make changes in the
scene over time visible as you view the hologram from different angles, or where you can take a 3D computer graphics model and produce a hologram of it. You and I have covered the bicycle of this subject, so to speak, but people went on to the cars and planes thereafter. In our example, we were modeling the film as a two-dimensional surface with no thickness, and that's perfectly okay for transmission holograms, but it's worth noting that that changes for white light reflection holograms. Variations in the exposure through the thickness of that film become relevant for filtering out the appropriate wavelengths during reconstruction. The fact that
Dennis Gabor won a Nobel Prize should give you some hint that the significance here goes beyond its use as an art form. That clip that I showed earlier where we delicately touch the object with a feather is displaying the original scene, superimposed with its own hologram that we had just freshly recorded. The dark stripes that you see on the pi creature arise from the extremely tiny shifts in position and how that affects the interference of the light. And you can actually use that to measure the size of those shifts. This gives a very small glimpse
into the relevance that holograms have to interferometry, which is the technique of using wave interference to measure extremely tiny distances. More broadly, being able to record and reproduce an entire wavefront is just a useful tool to have in your belt for a variety of experiments. For any of you who want a deeper look at the history and development of holography and physics, I highly recommend reading Gabor's Nobel Prize lecture. Throughout the lesson, I've repeatedly referenced the goal of making holograms feel like something you could have invented yourself. But there's an irony here, because in his
own reflections, Gabor describes this not as a deliberate discovery. He called it, quote, an exercise in serendipity, the art of looking for something and finding something else. But luck favors a prepared mind, and the more that you engage with imagined rediscovery of the inventions that you find around you, the better prepared you'll be for your own moment of similar serendipity. And finally, for the stalwart viewers hungry for the final details, here's the more formal description of why holography works. For the small price of introducing complex numbers into the discussion, the result kind of just pops out from a few lines of
algebra. So think about recording a hologram where you have both a reference wave and an object wave shining on the film. We're going to write down the sum of those two waves as R plus O. The exposure pattern depends on the amplitude of this combined wave at each point on the film. And like I said earlier, more precisely, it's proportional to the square of that amplitude. Each of these symbols, say the object wave O, is meant to represent a function that takes in a point in space, as well as a time, and returns some real
number value describing the strength of the field at that point. It's an incredibly complicated function, but at a fixed point in space, this output value simply oscillates up and down over time, which we've been visualizing throughout as oscillations between blue and red. Very often, the math that you do with waves has a habit of becoming a lot more elegant when you treat an oscillating value like this, not merely as a real number, but as the real component of a rotating complex number. This might feel very strange if it's something you haven't seen before, but one way to motivate
why you might do this is that a complex number very elegantly encodes both the phase and the amplitude of the wave at a snapshot in space and time. The wave amplitude is visible as the size of this value, its distance from the origin, and the phase is visible as the angle this number makes off the horizontal. The physical significance when you model things like this is just that the real component of that rotating complex value tells you the strength of the wave at that point. But you might think of adding on an imaginary component like
this as sort of extra bookkeeping that lets you more readily keep track of the phase and amplitude of the wave at that point. For example, the amplitude of that combined wave, the thing we care about to describe the exposure, can now be written as the magnitude of this sum, R plus O, where now we're thinking of that as a sum between two complex values at every point on the plane of the film. So for example, at places where those two values align, that corresponds to constructive interference between the waves, and the film is going to
be more opaque. Where they are out of phase, you get a really small result, and that corresponds to the film being more transparent. Now think about reconstruction when you just shine the reference wave through this exposure pattern. Let's write the wave immediately on the other side of the film as R times 1 minus the opacity at that point on the film. So for example, where the film is very transparent, R passes through unchanged, but where it's more opaque, R is going to get scaled down. We can go ahead and substitute in the expression for opacity
that we have, and then distribute a little, and what I want to do is focus on analyzing this component right here, which is going to be part of the wave on this other side of the film. When you expand this expression and interpret its terms, the truth of holography stares right back at you. To do so, what you need to know about complex numbers is one definition and one fact. The definition is that the complex conjugate of a number, which you typically denote with a star, is its reflection over the horizontal axis. And the fact
that you need is that when you multiply a number by its own complex conjugate, the result is the square of the magnitude of that number. So in our expression, that exposure term could also be written as R plus O multiplied by its own complex conjugate, R-star plus O-star. The reason to do this is now you have something you can expand algebraically, although admittedly when you do this it might first look like a big pile of symbols that is very far removed from physical intuition about reconstructing a three-dimensional scene. But give it just a moment here.
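Before the terms get organized in the narration, here is the expansion written out explicitly, as a sketch where the relevant piece of the wave past the film is taken to be proportional to R times the exposure, that is, R times the magnitude of R plus O squared:

```latex
R\,\lvert R + O\rvert^{2}
  = R\,(R + O)(R^{*} + O^{*})
  = \underbrace{\bigl(\lvert R\rvert^{2} + \lvert O\rvert^{2}\bigr)\,R}_{\text{a scaled reference wave}}
  \;+\; \underbrace{\lvert R\rvert^{2}\,O}_{\text{a scaled object wave}}
  \;+\; \underbrace{R^{2}\,O^{*}}_{\text{the conjugate term}}
```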
When you organize the terms the right way, the answer sort of pops out. Whenever you see a value multiplied by its own complex conjugate, just think of it as some real number, some scaling factor. For example, one part in the final expression looks like some certain real number all times R, the reference wave. And what this corresponds to is a scaled copy of that reference wave here on the other side of the film. In our simple point example, that would correspond to the zeroth order beam from the diffraction grating off the zone plate. The key
term is this part right here. It looks like some real number, some scaling factor, multiplied by the object wave itself. So here you are on the other side of the film during reconstruction, and the expression for the state of the light field includes this scaled copy of the object wave. It doesn't matter how complicated O is; the scene that it comes from could have whatever complicated optics that you want. But here on the other side of the film there will always be this scaled copy of O. In our simple point example, this corresponds to one
of those first order beams. And finally, you have this funny little term at the end that involves the complex conjugate of the object wave. This corresponds, as you can probably guess, to the conjugate image, and maybe now the name makes a little bit more sense. This is a difficult term to make sense of, and I'm not going to say too much more about it, other than to show that for the hologram we made, if you flip the film around, you can see this conjugate image as a very bizarre, warped, and reflected version of the original
scene. So that is it. A little piece of complex algebra shows why you have a copy of the object wave on the other side of the film. And importantly, there were no assumptions about how simple that object wave is. Also, it makes no assumptions about how simple the reference wave is. You just need to be able to reproduce the same wave that you used during recording, or at least something close to it. Even though this is more powerful, on its own I would find this explanation very unsatisfying. The breakdown of a single point with the
zone plate pattern and diffraction gratings, for all of its shortcomings, gives a lot more intuition in my opinion. For instance, the algebra tells you nothing about how the film resolution could matter, but the abstract path and all the power that it brings does offer some reassurance that the shortcomings in that first explanation are all surmountable. In his Nobel Prize lecture, Gabor wrote, "In holography, nature is on the inventor's side." You do actually need to say a little bit more to finish the abstract derivation. Strictly speaking, we've only shown that you have a copy of this
object wave on the plane immediately after the film, but what you want to show is that it exists in all of 3D space, beyond that film. Rigorously justifying this would go beyond the scope of an already long lesson, but it's not too hard a fact to believe. What we're basically saying is that if you have some function that you know satisfies whatever equations light must obey, namely the object wave O, and you have another function describing a steady state wave where you know that O shows up as a component along a 2D boundary like this,
then in the free space beyond that boundary, the wave must also include as a component O in its entirety. This actually has a nice connection to the key mystery from the very beginning of the lesson of how a three-dimensional scene could possibly be stored on 2D film. Part of what's required for this to be possible is that light obeys laws regular enough that the state of a light wave in 3D space is sufficiently constrained by its value on a 2D boundary.
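And if you would like to see that whole argument run numerically, here is a minimal sketch for the single-point example from earlier. The film is idealized as a one-dimensional strip whose relevant transmission term is proportional to the exposure, and every length and wavelength below is an illustrative assumption rather than the actual recording setup.

```python
import numpy as np

# Record: a point object behind the film plus a head-on plane reference wave.
lam = 500e-9                                   # wavelength (illustrative)
k = 2 * np.pi / lam                            # wavenumber
z_obj = 0.05                                   # object point 5 cm behind the film

x = np.linspace(-0.005, 0.005, 4001)           # a 1 cm strip of film
r = np.sqrt(x**2 + z_obj**2)                   # distance from each film point to the object

O = 0.2 * np.exp(1j * k * r) * (z_obj / r)     # object wave on the film (complex, weaker than R)
R = np.ones_like(x, dtype=complex)             # plane reference wave, hitting the film head-on

exposure = np.abs(R + O)**2                    # the zone-plate pattern the film records

# Reconstruct: shine only R through the film.  The exposure-proportional part of
# what comes out splits into exactly the three terms from the algebra above.
transmitted = R * exposure
expected = (np.abs(R)**2 + np.abs(O)**2) * R + np.abs(R)**2 * O + R**2 * np.conj(O)
print(np.allclose(transmitted, expected))      # True: a scaled copy of O is in there
```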