Today I'd like to tell you about a piece of math known as holomorphic dynamics. This is the field which studies things like the Mandelbrot set, and in fact one of my main goals today is to show you how this iconic shape, the poster child of math, pops up in a more general way than the initial definition might suggest. Now this field is also intimately tied to what we talked about in the last video, with Newton's fractal, and another goal of ours towards the end of this video will be to help tie up some of the loose ends that we had there.
So first of all, this word holomorphic might seem a little weird. It refers to functions that have complex number inputs and complex number outputs, and which you can also take a derivative of. Basically, what it means to have a derivative in this context is that when you zoom in on how the function behaves near a given point, its effect on that point and its neighbors looks roughly like scaling and rotating, like multiplying by some complex constant.
We'll talk more about that in just a bit, but for now know that it includes most of the ordinary functions you could write down, things like polynomials, exponentials, trig functions, all of that. The relevant dynamics in the title here comes from asking what happens when you repeatedly apply one of these functions over and over, in the sense of evaluating it on some input, then evaluating the same function on whatever you just got out, and then doing that again, and again and again and again. Sometimes the pattern of points emerging from this gets trapped in a cycle, other times the sequence will just approach some kind of limiting point.
Or maybe the sequence gets bigger and bigger and it flies off to infinity, which mathematicians also kind of think of as approaching a limit point, just the point at infinity. And other times still they have no pattern at all, and they behave chaotically. What's surprising is that for all sorts of functions that you might write down, when you try to do something to visualize when these different possible behaviors arise, it often results in some insanely intricate fractal pattern.
Those of you who watched the last video have already seen one neat example of this. There's this algorithm called Newton's method, which finds the roots of some polynomial p, and the way it works is to repeatedly iterate the expression x minus p of x divided by p prime of x, p prime being the derivative. When your initial seed value is in the loose vicinity of a root of that polynomial, a value where p of x equals zero, this procedure produces a sequence of values that really quickly converges to that root.
This is what makes it a useful algorithm in practice. But then we tried doing this in the complex plane, looking at many possible seed values and asking which root in the complex plane each one of these seed values might end up on. Then we associated a color with each of the roots, and colored each pixel of the plane based on which root a seed value starting at that pixel would ultimately land on.
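If it helps to see that procedure spelled out, here's a rough sketch in code, using p of z equals z cubed minus one as the example polynomial. The grid bounds, resolution, and iteration count are all just illustrative choices of mine, not anything canonical.

```python
import numpy as np

# Newton's method fractal sketch for p(z) = z^3 - 1.
# The three roots of z^3 - 1 are the cube roots of unity.
roots = np.array([1, np.exp(2j * np.pi / 3), np.exp(4j * np.pi / 3)])

def newton_step(z):
    p, dp = z**3 - 1, 3 * z**2       # p(z) and its derivative p'(z)
    return z - p / dp

# Each pixel of the plane is a seed value z0.
xs = np.linspace(-2, 2, 400)
ys = np.linspace(-2, 2, 400)
z = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
for _ in range(40):                   # iterate the map on every pixel at once
    z = newton_step(z)

# Color label for each pixel: the index of the root it ended up nearest.
# (Seeds that ever land exactly where p'(z) = 0 produce NaNs; this sketch ignores them.)
labels = np.argmin(np.abs(z[..., np.newaxis] - roots), axis=-1)
```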
The results we got were some of these insanely intricate pictures, with these rough fractal boundaries between the colors. Now in this example, if you look at the function that we're actually iterating, say for some specific choice of a polynomial, like z cubed minus one, you can rewrite the whole expression to look like one polynomial divided by another. Mathematicians call these kinds of functions rational functions.
And if you forget the fact that this arose from Newton's method, you could reasonably ask what happens when you iterate any other rational function. And in fact, this is exactly what the mathematicians Pierre Fatou and Gaston Julia did in the years immediately following World War I. And they built up a surprisingly rich theory of what happens when you iterate these rational functions, which is particularly impressive given that they had no computers to visualize any of this the way you and I can.
Remember those two names, they'll come up a bit later. By far the most popularized example of a rational function that you might study like this, and the fractals that can ensue, is one of the simplest functions, z squared plus c, where c is some constant. I'm going to guess that this is at least somewhat familiar to many of you, but it certainly doesn't hurt to quickly summarize the story here, since it can help set the stage for what comes later.
For this game, we're going to think of c as a value that can be changed. It'll be visible as this movable yellow dot. For the actual iterative process, we will always start with an initial value of z equals zero.
So after iterating this function once, doing z squared plus c, you get c. If you iterate a second time, plugging in that value to the function, you get c squared plus c. And as I change around the value c here, you can kind of see how the second value moves in lockstep.
Then we can plug in that second value to get z3, and that third value to get z4, and continue on like this, visualizing our chain of values. So if I keep doing this, computing the first many values, then for some choices of c, this process remains bounded. You can still see it all on the screen.
And other times, it looks like it blows up, and you can actually show that if it gets as big as 2, it'll blow up to infinity. If you color the points of the plane where it stays bounded black, and you assign some other gradient of colors to the divergent values based on how quickly the process rushes off to infinity, you get one of the most iconic images in all of math, the Mandelbrot set. Now this interactive dots and stick visualization of the trajectory, by the way, is heavily inspired by Ben Sparks' illustration and the Numberphile video he did about the Mandelbrot set, which is great, you should watch it.
I honestly thought it was just too fun not to re-implement here. I would also highly recommend the interactive article on acko.net about all of this stuff, for any of you who haven't had the pleasure of reading that yet.
What's nice about the Ben Sparks illustration is how it illuminates what each different part of the Mandelbrot set actually represents. This largest cardioid section includes the values of c so that the process eventually converges to some limit. The big circle on the left represents the values where the process gets trapped in a cycle between two values.
And then the top and bottom circles show values where the process gets trapped in a cycle of three values, and so on like this. Each one of these little islands kind of has its own meaning. Also notice there's an important difference between how this Mandelbrot set and the Newton fractals we were looking at before are each constructed, beyond just a different underlying function.
For the Mandelbrot set we have a consistent seed value, z equals zero, but the thing we're tweaking is the parameter c, changing the function itself. So what you're looking at is what we might call a parameter space. But with Newton's fractal, we have a single unchanging function, but what we associate with each pixel is a different seed value for the process.
Of course, we could play the same game with the map z squared plus c. We could fix c at some constant, and then let the pixels represent the different possible initial values, z naught. So whereas each pixel of the Mandelbrot set corresponds to a unique function, the images on the right each just correspond to a single function.
As we change the parameter c, it changes the entire image on the right. And again, just to be clear, the rule being applied is that we color pixels black if the process remains bounded, and then apply some kind of gradient to the ones that diverge away to infinity based on how quickly they diverge to infinity. In principle, and it's kind of mind-warping to think about, there is some four-dimensional space of all combinations of c and z naught, and what we're doing here is kind of looking through individual two-dimensional slices of that unimaginable pattern.
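Since the two kinds of picture share the exact same iteration rule, it might help to see them as one small function used two ways. This is only a sketch; the 100-step cap and the particular value of c are arbitrary choices of mine.

```python
import numpy as np

# Shared coloring rule for z^2 + c: return -1 if the process stays
# bounded (a black pixel), otherwise the step at which |z| passed 2
# (which feeds the divergence-speed gradient).
def escape_time(z, c, max_steps=100):
    for n in range(max_steps):
        if abs(z) > 2:               # beyond radius 2, divergence is guaranteed
            return n
        z = z * z + c
    return -1

# Mandelbrot mode: the pixel is the parameter c, the seed is always 0.
mandelbrot = [[escape_time(0, complex(x, y))
               for x in np.linspace(-2.5, 1.0, 70)]
              for y in np.linspace(-1.2, 1.2, 24)]

# Julia mode: c is held fixed, and the pixel is the seed z0.
c = -0.8 + 0.156j                    # an arbitrary example constant
julia = [[escape_time(complex(x, y), c)
          for x in np.linspace(-1.8, 1.8, 70)]
         for y in np.linspace(-1.2, 1.2, 24)]
```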
You'll often hear or read the images on the right referred to as Julia sets or Julia fractals, and when I first learned about all this stuff, I'll admit I was left with the misconception that this is what the term Julia set refers to, specifically the z squared plus c case, and moreover that it refers to the black region on the inside. However, the term Julia set has a much more general definition, and it refers just to the boundaries of these regions, not the interior. To set the stage for a more specific definition, and to also make some headway towards the first goal that I mentioned at the start, it's worth stepping back and really just picturing yourself as a mathematician right now, discovering all of this.
What would you actually do to construct a theory around this? It's one thing to look at some pretty pictures, but what sorts of questions would you ask if you actually want to understand it all? In general, if you want to understand something complicated, a good place to start is to ask if there are any parts of the system that have some simple behavior, preferably the simplest possible behavior.
In our example, that might mean asking when does the process just stay fixed in place, meaning f of z is equal to z. That's a pretty boring set of dynamics, I think you'd agree. We call a value with this property a fixed point of the function.
In the case of the functions arising from Newton's method, by design they have a fixed point at the roots of the relevant polynomial. You can verify for yourself, if p of z is equal to zero, then the entire expression is simply equal to z. That's what it means to be a fixed point.
If you're into exercises, you may enjoy pausing for a moment and computing the fixed points of this Mandelbrot set function, z squared plus c. More generally, any rational function will always have fixed points, since asking when this expression equals z can always be rearranged as finding the roots of some polynomial expression. From the fundamental theorem of algebra, this must have solutions, typically as many solutions as the highest degree in this expression. Incidentally, this means you could also find those fixed points using Newton's method, but maybe that's a little too meta for us right now.
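In case you want to check your work on that first exercise, here's how the rearrangement plays out for z squared plus c; it's just the quadratic formula in the end:

```latex
\[
z^2 + c = z
\;\Longleftrightarrow\;
z^2 - z + c = 0
\;\Longleftrightarrow\;
z = \frac{1 \pm \sqrt{1 - 4c}}{2},
\]
% two fixed points, matching the degree of the polynomial.
```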
Now, just asking about fixed points is maybe easy, but a key idea for understanding the full dynamics, and hence the diagrams we're looking at, is to understand stability. We say that a fixed point is attracting if nearby points tend to get drawn in towards it, and repelling if they're pushed away.
And this is something that you can actually compute explicitly using the derivative of the function. Symbolically, when you take derivatives of complex functions, it looks exactly the same as it would for real functions, so something like z squared has a derivative of 2 times z. But geometrically, there's a really lovely way to interpret what this means.
For example, at the input 1, the derivative of this particular function evaluates to be 2, and what that's telling us is that if you look at a very small neighborhood around that input, and you follow what happens to all the points in that little neighborhood as you apply the function, in this case z squared, then it looks just like you're multiplying by 2. This is what a derivative of 2 means. To take another example, let's look at the input i.
We know that this function moves that input to the value negative 1, that's i squared. But the added information that its derivative at this value is 2 times i gives us the added picture that when you zoom in around that point, and you look at the action of the function on this tiny neighborhood, it looks like multiplication by 2i, which in this case is saying it looks like a 90 degree rotation combined with an expansion by a factor of 2. For the purposes of analyzing stability, the only thing we care about here is the growing and shrinking factor.
The rotational part doesn't matter. So if you compute the derivative of a function at its fixed point, and the absolute value of this result is less than 1, it tells you that the fixed point is attracting, that nearby points tend to come in towards it. If that derivative has an absolute value bigger than 1, it tells you the fixed point is repelling, it pushes away its neighbors.
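Here's a quick numerical sanity check of both of those claims, using f of z equals z squared; everything in it follows directly from what we just said.

```python
# f(z) = z^2 and its derivative f'(z) = 2z.
f = lambda z: z * z
df = lambda z: 2 * z

# Local behavior near z0 = i: a tiny offset eps should get mapped to
# roughly f'(i) * eps = 2i * eps, i.e. rotated 90 degrees and doubled.
z0, eps = 1j, 1e-6 + 2e-6j
print((f(z0 + eps) - f(z0)) / eps)   # ~ 2j

# Stability of the two fixed points of z^2, namely z = 0 and z = 1:
for z_fix in [0, 1]:
    kind = "attracting" if abs(df(z_fix)) < 1 else "repelling"
    print(z_fix, kind)               # 0 is attracting, 1 is repelling
```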
For example, if you work out the derivative of our Newton's map expression, and you simplify a couple things a little bit, here's what you would get out. So if z is a fixed point, which in this context means that it's one of the roots of the polynomial p, this derivative is not only smaller than 1, it's equal to 0. These are sometimes called super-attracting fixed points, since it means that a neighborhood around these points doesn't merely shrink, it shrinks a lot.
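For reference, here's that on-screen simplification spelled out; it's just the quotient rule:

```latex
\[
N(z) = z - \frac{p(z)}{p'(z)}
\quad\Longrightarrow\quad
N'(z) = 1 - \frac{p'(z)^2 - p(z)\,p''(z)}{p'(z)^2}
      = \frac{p(z)\,p''(z)}{p'(z)^2},
\]
% which equals 0 at every root of p, since p(z) = 0 there.
```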
And again, this is kind of by design, since the intent of Newton's method is to produce iterations that fall towards a root as quickly as they can. Pulling up our z squared plus c example, if you did the first exercise to find its fixed points, the next step would be to ask, when is at least one of those fixed points attracting? For what values of c is this going to be true?
And then, if that's not enough of a challenge, try using the result that you find to show that this condition corresponds to the main cardioid shape of the Mandelbrot set. This is something you can compute explicitly, it's pretty cool. A natural next step would be to ask about cycles, and this is where things really start to get interesting.
If f of z is not z but some other value, and then that value comes back to z, it means that you've fallen into a two cycle. You could explicitly find these kinds of two cycles by evaluating f of f of z and then setting it equal to z. For example, with the z squared plus c map, f of f of z expands out to look like this.
A little messy, but you know, it's not too terrible. The main thing to highlight is that it boils down to solving some degree four equation. You should note though that the fixed points will also be solutions to this equation, so technically the two cycles are the solutions to this minus the solutions to the original fixed point equation.
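To spell out that on-screen algebra, here's the expansion, along with the factorization that separates the fixed points from the genuine two-cycles:

```latex
\[
f(f(z)) = (z^2 + c)^2 + c = z^4 + 2cz^2 + c^2 + c,
\]
\[
f(f(z)) = z
\;\Longleftrightarrow\;
\underbrace{(z^2 - z + c)}_{\text{fixed points}}\,
\underbrace{(z^2 + z + c + 1)}_{\text{true two-cycles}} = 0.
\]
```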
And likewise you can use the same idea to look for n-cycles by composing f with itself n different times. The explicit expressions that you would get quickly become insanely messy, but it's still illuminating to ask how many cycles you would expect based on this hypothetical process. If we stick with our simple z squared plus c example, as you compose it with itself, you get a polynomial with degree four, and then one with degree eight, and then degree sixteen, and so on, exponentially growing the degree of the polynomial.
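In symbols, the count works like this:

```latex
% For f(z) = z^2 + c, each composition squares the degree:
\[
\deg\big(f^{\circ n}\big) = 2^n,
\]
% so the points of period n are among the roots of the degree-2^n polynomial
\[
f^{\circ n}(z) - z = 0.
\]
```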
So in principle, if I asked you how many cycles are there with a period of one million, you can know that it's equivalent to solving some just absolutely insane polynomial expression with a degree of two to the one million. So again, fundamental theorem of algebra, you would expect to find something on the order of two to the one million points in the complex plane which cycle in exactly this way. And more generally, for any rational map you'll always be able to find values whose behavior falls into a cycle with period n.
It ultimately boils down to solving some probably insane polynomial expression. And just like with this example, the number of such periodic points will grow exponentially with n. I didn't really talk about this in the last video about Newton's fractal, but it's sort of strange to think that there are infinitely many points that fall into some kind of cycle even for a process like this.
In almost all cases though, these points are somewhere on the boundary between those colored regions and they don't really come up in practice because the probability of landing on one of them is zero. What matters for actually falling into one of these is if one of the cycles is attracting in the sense that a neighborhood of points around a value from that cycle would tend to get pulled in towards that cycle. A highly relevant question for someone interested in numerical methods is whether or not this Newton's map process ever has an attracting cycle, because if there is it means there's a non-zero chance that your initial guess gets trapped in that cycle and it never finds a root.
The answer here is actually yes. More explicitly, if you try to find the roots of z cubed minus 2z plus 2 and you're using Newton's method, watch what happens to a cluster that starts around the value zero and sort of bounces back and forth. And well okay, in this case the cluster we started with was a little bit too big so some of the outer points get sprayed away, but here's what it looks like if we start with a smaller cluster.
Notice how all of the points genuinely do shrink in towards the cycle between zero and one. It's not likely that you hit this with a random seed, but it definitely is possible. The exercise that you could do to verify that a cycle like this is attracting, by the way, would be to compute the derivative of f of f of z, and you check that at the input zero this derivative has a magnitude less than one.
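That exercise is easy to check numerically, by the way. Here's a sketch, using the derivative formula from earlier plus the chain rule:

```python
# Newton's map for p(z) = z^3 - 2z + 2, and its derivative.
p   = lambda z: z**3 - 2*z + 2
dp  = lambda z: 3*z**2 - 2
d2p = lambda z: 6*z
N   = lambda z: z - p(z) / dp(z)           # the Newton's map
dN  = lambda z: p(z) * d2p(z) / dp(z)**2   # N'(z) = p p'' / (p')^2

print(N(0), N(1))        # 1.0 and 0.0: the cycle 0 -> 1 -> 0
# (N∘N)'(0) = N'(N(0)) * N'(0), by the chain rule:
print(dN(1) * dN(0))     # 0.0, magnitude less than 1, so the cycle attracts
```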
The thing that blew my mind a little is what happens when you try to visualize which cubic polynomials have attracting cycles at all. Hopefully, if Newton's method is going to be at all decent at finding roots, those attracting cycles should be rare. First of all, to better visualize the one example we're looking at, we could draw the same fractal that we had before, coloring each point based on which root the seed value starting at that point will tend to, but this time with an added condition: if the seed value never gets close enough to any root at all, we color the pixel black.
Notice if I tweak the roots, meaning that we're trying out different cubic polynomials, it's actually really hard to find any place to put them so that we see any black pixels at all. I can find this one little sweet spot here, but it's definitely rare. Now what I want is some kind of way to visualize every possible cubic polynomial at once with a single image in a way that shows which ones have attracting cycles.
Luckily, it turns out that there is a really simple way to test whether or not one of these polynomials has an attracting cycle. All you have to do is look at the seed value which sits at the average of the three roots, this center of mass here. It turns out, and this is not at all obvious, that if there's an attracting cycle, you can guarantee that this seed value will fall into it.
In other words, if there are any black points, this will be one of them. If you want to know where this magical fact comes from, it stems from a theorem of our good friend Fatou. He showed that if one of these rational maps has an attracting cycle, you can look at the values where the derivative of your iterated function equals zero, and at least one of those values has to fall into the cycle.
That might seem like a little bit of a weird fact, but the loose intuition is that if a cycle is going to be attracting, at least one of its values should have a very small derivative, that's where the shrinking will come from. And this in turn means that that value in the cycle sits near some point where the derivative is not merely small but equal to zero, and that point ends up being close enough to get sucked into the cycle. This fact also justifies why, with the Mandelbrot set, using only the one seed value z equals zero is still enough to get us a very full and interesting picture; zero is exactly where the derivative of z squared plus c vanishes.
If there's a stable cycle to be found, that one seed value is definitely going to find it. I feel like maybe I'm assigning a little too much homework and exercises today, but if you're into that, yet another pleasing one would be to look back at derivative expression that we found with our function that arises from Newton's method, and use this wonderful theorem of Vateau's to show our magical fact about cubic polynomials, that it suffices to just check this midpoint over the roots. Honestly though, all of those are details that you don't really have to worry about.
The upshot is that we can perform a test for whether or not one of these polynomials has an attracting cycle by looking at just a single point, not all of them. And because of this, we can actually generate a really cool diagram. The way this will work is to fix two roots in place, let's say putting them at z equals negative one and z equals positive one, and then we'll move around that third root, which I'll call lambda.
Remember, the key feature that we're looking for is when the point at the center of mass is black. So what I'll do is draw a second diagram on the right, where each pixel corresponds to one possible choice of lambda. What we're going to do is color that pixel based on the color of this midpoint of the three roots.
If this feels a little bit confusing, that's totally okay. There are kind of a lot of layers at play here. Just remember, each pixel on the right corresponds to a unique polynomial, as determined by this parameter lambda.
In fact, you might call this a parameter space. Sound familiar? Points in this parameter space are colored black if, and only if, the Newton's method process for the corresponding polynomial produces an attracting cycle.
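For what it's worth, here's how the diagram on the right could be generated, reusing the newton_traps sketch from a moment ago. The window and the deliberately low resolution are my own choices, and this brute-force loop is slow.

```python
import numpy as np

# Each pixel is one value of lambda, the third root; the other two
# roots are pinned at -1 and +1. True entries are the black pixels.
xs = np.linspace(-2, 2, 200)
ys = np.linspace(-2, 2, 200)
trapped = np.zeros((len(ys), len(xs)), dtype=bool)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        trapped[i, j] = newton_traps([-1.0, 1.0, complex(x, y)], steps=200)
```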
Again, don't worry if that takes a little moment to digest. Now, at first glance, it might not look like there are any black points at all on this diagram. And this is good news.
It means that in most cases Newton's method will not get sucked into cycles like this. But, and I think I've previewed this enough that you know exactly where this is going, if we zoom in we can find a black region, and that black region looks exactly like a Mandelbrot set. Yet again, asking a question where we tweak a parameter for one of these functions yields this iconic cardioid and bubbles shape.
The upshot is that this shape is not as specific to the z squared plus c example as you might think. It seems to relate to something more general and universal about parameter spaces with processes like this. Still, one pressing question is why we get fractals at all.
In the last video, I talked about how the diagrams for Newton's method have this very peculiar property, where if you draw a small circle centered on the boundary of a colored region, that circle must actually include all available colors from the picture. And this is true more generally for any rational map. If you were to assign colors to regions based on which limiting behavior points fall into, like which limit point or which limit cycle, or whether it tends to infinity, then tiny circles that you draw either contain points with just one of those limiting behaviors, or they contain points with all of them.
It's never anything in between. So in the case where there are at least three colors, this property implies that our boundary could never be smooth, since along a smooth segment, you could draw a small enough circle that touches just two colors, not all of them. And empirically, this is what we see.
No matter how far you zoom in, these boundaries are always rough. And furthermore, you might notice that as we zoom in, you can always see all available colors within the frame. This doesn't explain rough boundaries in the context where there's only two limiting behaviors, but still, it's a loose end that I left in that video worth tying up, and it's a nice excuse to bring in two important bits of terminology, Julia sets and Fatou sets.
If a point eventually falls into some stable, predictable pattern, we say that it's part of the Fatou set of our iterated function. And for all the maps that we've seen, this includes almost everything. The Julia set is everything else, which in the pictures we've seen would be the rough boundaries between the colored regions.
It's what happens as you transition from one stable attractor to another. For example, the Julia set will include all of the repelling cycles and the repelling fixed points. A typical point from the Julia set, though, will not be part of a cycle.
It'll bounce around forever with no clear pattern. Now, if you look at a point in the Fatou set, and you draw a small enough disc around it, as you follow the process, that small disc will eventually shrink as you fall into whatever the relevant stable behavior is. Unless you're going to infinity, but you could kind of think of that as the disc shrinking around infinity, but maybe that just confuses matters.
By contrast, if you draw a small disc around a point on the Julia set, it tends to expand over time as the points from within that circle go off and kind of do their own things. In other words, points of the Julia set tend to behave chaotically. Their nearby neighbors, even very nearby, will eventually fall into qualitatively different behaviors.
But it's not merely that this disc expands. A pretty surprising result, key to the multicolor property mentioned before, is that if you let this process play out, that little disc eventually expands so much that it hits every single point on the complex plane, with at most two exceptions. This is known as the stuff-goes-everywhere principle of Julia sets.
Okay, it's not actually called that. In the source I was reading from, it's mentioned as a corollary to something known as Montel's theorem. But it should be called that.
In some sense, what this is telling us is that the points of the Julia set are not merely chaotic, they're kind of as chaotic as they possibly can be. Here, let me show you a little simulation using Newton's map, with a cluster of a few thousand points, all starting within a tiny distance, one one-millionth, of a point on the Julia set. Of course, the stuff-goes-everywhere principle is about the uncountably infinitely many points that would lie within that distance, saying that they eventually expand out to hit everything on the plane, except possibly two points.
But this little cluster should still give the general idea. A small, finite sample from that tiny disk gets sprayed all over the place in seemingly all directions. What this means for our purposes is that if there's some attractive behavior of our map, something like an attracting fixed point or an attracting cycle, you can be guaranteed that the values from that tiny disk around the point on the Julia set, no matter how tiny it was, will eventually fall into that attracting behavior.
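If you want to play with that experiment yourself, here's a rough sketch for the Newton's map of z cubed minus one. Seeding the cluster at z equals zero is a choice of mine; it works because zero maps straight onto the repelling fixed point at infinity, which puts it on the Julia set. The cluster size and step count are also arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
z_julia = 0.0 + 0.0j      # a point on the Julia set of this Newton's map
# A few thousand points, all within one one-millionth of z_julia:
cluster = z_julia + 1e-6 * (rng.standard_normal(3000)
                            + 1j * rng.standard_normal(3000))

for step in range(30):
    # One step of Newton's map for p(z) = z^3 - 1.
    cluster = cluster - (cluster**3 - 1) / (3 * cluster**2)
    # Watch the cloud's radius explode almost immediately.
    print(step, np.abs(cluster - cluster.mean()).max())
```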
If we have a case with three or more attracting behaviors, this gives us some explanation for why the Julia set is not smooth, why it has to be complicated. Even still, this might not be entirely satisfying because it kicks the can one more step down the road, raising the question of why this stuff-goes-everywhere principle is true in the first place. Like I mentioned, it comes from something called Montel's theorem, and I'm choosing not to go into the details there, because honestly, it's a lot to cover.
The proof I could find ends up leaning on something known as the J function, which is a whole intricate story in its own right. I will of course leave links and resources in the description for any of you who are hungry to learn more. And if you know of a simpler way to see why this principle is true, I'm definitely all ears.
I should also say as a brief side note that even though the pictures we've seen so far have a Julia set with an area of zero, being essentially the boundary between these regions, there are examples where the Julia set is the entire plane. Everything behaves chaotically, which is kind of wild. The main takeaway for this particular section is the link between the chaos and the fractal.
At first it seems like these are merely analogous to each other, you know, Newton's method turns out to be a kind of messy process for some seed values, and this messiness is visible one way by following the trajectory of a particular point, and another way by the complexity of our diagrams, but those feel like qualitatively different kinds of messiness. Maybe it makes for a nice metaphor, but nothing more. However, what's neat here is that when you quantify just how chaotic some of the points are, well, that quantification leads us to an actual explanation for the rough fractal shape via this boundary property.
Quite often you see chaos and fractals sort of married together in math, and to me at least it's satisfying whenever that marriage comes with a logical link to it, rather than as two phenomena that just happen to coincide.