- This is a video about one of the most important, yet least understood concepts in all of physics. It governs everything from molecular collisions to humongous storms. From the beginning of the universe through its entire evolution, to its inevitable end.
It may, in fact, determine the direction of time and even be the reason that life exists. To see the confusion around this topic, you need to ask only one simple question. What does the Earth get from the sun?
- What does the earth get from sun? - Well, it's light rays? - What do we get from the sun?
- Heat. - Warmth. - Warmth, light.
- Vitamin D, we get vitamin D from- - We do get vitamin D from the ultraviolet rays. - Well, a lot of energy. - What does the earth get from this, energy?
- Yeah, energy. - Energy. - Nailed it.
Every day, the earth gets a certain amount of energy from the sun. And then how much energy does the earth radiate back into space relative to that amount that it gets from the sun? - Probably not as much, I, you know, I don't believe it's just radiating right back.
- I'd say less. - Less. - Less.
- I say less. - I guess about 70%? - It is a fraction.
- I'd say 20%. - Because...
- Because we use some of it. - We use some of the energy.
- Mm-hmm. - We consume a lot, right? - But the thing about energy is it never really goes away.
You can't really use it up. - It would have to break even, wouldn't it? Same amount, yeah.
- You know, cause and effect. It'd be equal in some ways, right? - For most of the earth's history, it should be exactly the same amount of energy in from the sun as earth radiates into space.
- Wow. - Because if we didn't do that, then the earth would get a lot hotter, that'd be a problem. - That'd be a big problem.
- So, if that is the case...
- Yeah. - Then what are we really getting from the sun? - That's a good question.
- Hmm. - It gives us a nice tan. - It gives us a nice tan, I love it.
We're getting something special from the sun. - I don't know, what do we get without the energy? - But nobody talks about it.
To answer that, we have to go back to a discovery made two centuries ago. In the winter of 1813, France was being invaded by the armies of Austria, Prussia, and Russia. The son of one of Napoleon's generals was Sadi Carnot, a 17-year-old student.
On December 29th, he writes a letter to Napoleon requesting to join the fight. Napoleon, preoccupied in battle, never replies, but Carnot gets his wish a few months later when Paris is attacked.
The students defend a chateau just east of the city, but they're no match for the advancing armies, and Paris falls after only a day of fighting. Forced to retreat, Carnot is devastated. Seven years later, he goes to visit his father, who has fled to Prussia after Napoleon's downfall.
His father was not only a general, but also a physicist. He wrote an essay on how energy is most efficiently transferred in mechanical systems. When his son comes to visit, they talk at length about the big breakthrough of the time, steam engines.
Steam engines were already being used to power ships, mine ore, and excavate ports. And it was clear that the future industrial and military might of nations depended on having the best steam engines. But French designs were falling behind those of other countries like Britain.
So, Sadi Carnot took it upon himself to figure out why. At the time, even the best steam engines only converted around 3% of thermal energy into useful mechanical work. If he could improve on that, he could give France a huge advantage and restore its place in the world.
So he spends the next three years studying heat engines, and one of his key insights involves how an ideal heat engine would work, one with no friction and no losses to the environment. It looks something like this. Take two really big metal bars, one hot and one cold.
The engine consists of a chamber filled with air, where heat can only flow in or out through the bottom. Inside the chamber is a piston, which is connected to a flywheel. The air starts at a temperature just below that of the hot bar.
So first, the hot bar is brought into contact with the chamber. The air inside expands with heat flowing into it to maintain its temperature. This pushes the piston up, turning the flywheel.
Next, the hot bar is removed, but the air in the chamber continues to expand, except now, without heat entering, the temperature decreases, in the ideal case until it reaches the temperature of the cold bar. Then the cold bar is brought into contact with the chamber and the flywheel pushes the piston down.
And as the air is compressed, heat is transferred into the cold bar. The cold bar is removed. The flywheel compresses the gas further increasing its temperature until it is just below that of the hot bar.
Then the hot bar is connected again and the cycle repeats. Through this process, heat from the hot bar is converted into the energy of the flywheel. And what's interesting to note about Carnot's ideal engine is that it is completely reversible.
If you ran the engine in reverse, first the air expands lowering the temperature, then the chamber is brought into contact with the cold bar, the air expands more, drawing in heat from the cold bar. Next, the air is compressed, increasing its temperature. The chamber is placed on top of the hot bar and the energy of the flywheel is used to return the heat back into the hot bar.
However many cycles were run in the forward direction, you could run the same number in reverse, and at the end, everything would return to its original state with no additional input of energy required. So by running an ideal engine, nothing really changes. You can always undo what you did.
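To make the four strokes concrete, here is a minimal numerical sketch of one cycle for an ideal gas; the choice of a monatomic gas and the particular temperatures and volumes are illustrative assumptions, not values from the video.

```python
# A minimal Carnot-cycle sketch for 1 mole of a monatomic ideal gas (gamma = 5/3).
# The temperatures and volumes below are illustrative choices, not from the video.
import math

R = 8.314                      # gas constant, J/(mol*K)
gamma = 5.0 / 3.0              # monatomic ideal gas
T_hot, T_cold = 500.0, 300.0   # temperatures of the hot and cold bars, in kelvin
V1, V2 = 1.0, 2.0              # volume before/after the isothermal expansion on the hot bar

# The two strokes with no heat flow (adiabatic) obey T * V^(gamma - 1) = constant,
# which fixes the remaining two volumes.
V3 = V2 * (T_hot / T_cold) ** (1 / (gamma - 1))   # after the expansion that cools the gas
V4 = V1 * (T_hot / T_cold) ** (1 / (gamma - 1))   # before the compression that reheats it

Q_hot = R * T_hot * math.log(V2 / V1)     # heat drawn from the hot bar (isothermal expansion)
Q_cold = R * T_cold * math.log(V3 / V4)   # heat dumped into the cold bar (isothermal compression)
W_net = Q_hot - Q_cold                    # net work stored in the flywheel each cycle

print(W_net / Q_hot)        # efficiency of this cycle
print(1 - T_cold / T_hot)   # Carnot's formula gives the same number (0.4 here)
```

Running the cycle backwards just flips the sign of every heat and work term, which is the reversibility described above.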
So what is the efficiency of this engine? Since it's fully reversible, you might expect the efficiency to be 100%, but that is not the case. Each cycle, the energy of the flywheel increases by the amount of heat flowing into the chamber from the hot bar, minus the heat flowing out of the chamber at the cold bar.
So to calculate the efficiency, we divide this energy by the heat input from the hot bar. Now the heat in on the hot side is equal to the work done by the gas on the piston, and this will always be greater than the work done by the piston on the gas on the cold side, which equals the heat out. And this is because on the hot side, the hot gas exerts a greater pressure on the piston than that same gas when cold.
To increase the efficiency of the engine, you could increase the temperature of the hot side, or decrease the temperature of the cold side, or both. Lord Kelvin learns of Carnot's ideal heat engine and realizes it could form the basis for an absolute temperature scale. Imagine that the gas is allowed to expand an extreme amount, so much that it cools to the point where all the gas particles effectively stop moving.
Then they would exert no pressure on the piston, and it would take no work to compress it on the cold side, so no heat would be lost. This is the idea of absolute zero, and it would make for a 100% efficient engine. Using this absolute temperature scale, the Kelvin scale, we can replace the amount of heat in and out with the temperature of the hot and cold side respectively, because they are directly proportional.
So we can express efficiency like this, which we can rewrite like this. What we have learned is that the efficiency of an ideal heat engine doesn't depend on the materials or the design of the engine, but fundamentally on the temperatures of the hot and cold sides. To reach 100% efficiency, you'd need infinite temperature on the hot side or absolute zero on the cold side, both of which are impossible in practice.
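For reference, the expressions referred to above are the standard Carnot formulas, with $Q_h$ and $Q_c$ the heat in and out, and $T_h$ and $T_c$ the absolute temperatures of the hot and cold sides:

$$\eta = \frac{W}{Q_h} = \frac{Q_h - Q_c}{Q_h} = 1 - \frac{Q_c}{Q_h} = 1 - \frac{T_c}{T_h}$$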
So even with no friction or losses to the environment, it's impossible to make a heat engine 100% efficient. And that's because to return the piston to its original position, you need to dump heat into the cold bar. So not all the energy stays in the flywheel.
Now, in Carnot's time, high pressure steam engines could only reach temperatures up to 160 degrees Celsius. So their theoretical maximum efficiency was 32%, but their real efficiency was more like 3%. That's because real engines experience friction, dissipate heat to the environment, and they don't transfer heat at constant temperatures.
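As a rough check of that 32% figure, assuming a cold side near room temperature, about 20 degrees Celsius, which is an assumption not stated in the video:

```python
# Carnot limit for a steam engine running at 160 °C, assuming a ~20 °C (293 K) cold side.
T_hot = 160 + 273    # hot side in kelvin
T_cold = 293         # assumed ambient temperature in kelvin
print(1 - T_cold / T_hot)   # ≈ 0.32, i.e. about 32% maximum possible efficiency
```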
So for just as much heat going in, less energy ends up in the flywheel. The rest is spread out over the walls of the cylinder, the axle of the flywheel, and is radiated out into the environment. When energy spreads out like this, it is impossible to get it back.
So this process is irreversible. The total amount of energy didn't change, but it became less usable. Energy is most usable when it is concentrated and less usable when it's spread out.
Decades later, German physicist Rudolf Clausius studies Carnot's engine, and he comes up with a way to measure how spread out the energy is. He calls this quantity entropy. When all the energy is concentrated in the hot bar, that is low entropy, but as the energy spreads to the surroundings, the walls of the chamber, and the axle, entropy increases.
This means the same amount of energy is present, but in this more dispersed form, it is less available to do work. In 1865, Clausius summarizes the first two laws of thermodynamics like this. First, the energy of the universe is constant.
And second, the entropy of the universe tends to a maximum. In other words, energy spreads out over time. The second law is core to so many phenomena in the world.
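In symbols, Clausius's definition of entropy and his two statements take the standard textbook form (not a direct quote from the video):

$$dS = \frac{\delta Q_{\text{rev}}}{T}, \qquad E_{\text{universe}} = \text{constant}, \qquad \Delta S_{\text{universe}} \geq 0$$

where $\delta Q_{\text{rev}}$ is heat absorbed reversibly at absolute temperature $T$.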
It's why hot things cool down and cool things heat up, why gas expands to fill a container, why you can't have a perpetual motion machine, because the amount of usable energy in a closed system is always decreasing. The most common way to describe entropy is as disorder, which makes sense because it is associated with things becoming more mixed, random, and less ordered. But I think the best way to think about entropy is as the tendency of energy to spread out.
So why does energy spread out over time? I mean, most of the laws of physics work exactly the same way forwards or backwards in time. So how does this clear time dependence arise?
Well, let's consider two small metal bars, one hot and one cold. For this simple model, we'll consider only eight atoms per bar. Each atom vibrates according to the number of energy packets it has.
The more packets, the more it vibrates. So let's start with seven packets of energy in the left bar and three in the right. The number of energy packets in each bar is what we'll call a state.
First, let's consider just the left bar. It has seven energy packets, which are free to move around the lattice. This happens nonstop.
The energy packets hop randomly from atom to atom giving different configurations of energy, but the total energy stays the same the whole time. Now, let's bring the cold bar back in with only three packets and touch them together. The energy packets can now hop around between both bars creating different configurations.
Each unique configuration is equally likely. So what happens if we take a snapshot at one instant in time and see where all the energy packets are? So stop, look at this.
Now there are nine energy packets in the left bar, and only one in the right bar. So heat has flowed from cold to hot. Shouldn't that be impossible because it decreases entropy?
Well, this is where Ludwig Boltzmann made an important insight. Heat flowing from cold to hot is not impossible, it's just improbable. There are 91,520 configurations with nine energy packets in the left bar, but 627,264 with five energy packets in each bar.
That is, the energy is more than six times as likely to be evenly spread between the bars. But if you add up all the possibilities, you find there's still a 10.5% chance that the left bar ends up with more energy packets than it started with.
So, why don't we observe this happening around us? Well, watch what happens as we increase the number of atoms to 80 per bar and the energy packets to 100, with 70 in the left bar and 30 in the right. There is now only a 0.05% chance that the left solid ends up hotter than it started. And this trend continues as we keep scaling up the system. In everyday solids, there are around 100 trillion trillion atoms and even more energy packets.
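The counting described above follows the standard Einstein-solid multiplicity formula: q energy packets spread over N atoms can be arranged in C(q + N − 1, q) ways. Here is a short sketch of that calculation; the function names are just for illustration.

```python
# Counting configurations of energy packets in two touching bars, assuming the standard
# Einstein-solid multiplicity: q packets over N atoms can be arranged in C(q + N - 1, q) ways.
from math import comb

def multiplicity(n_atoms, q_packets):
    return comb(q_packets + n_atoms - 1, q_packets)

def prob_left_gains(n_atoms_per_bar, q_total, q_left_start):
    """Chance that the left bar ends up with MORE packets than it started with,
    assuming every configuration of the combined system is equally likely."""
    total = multiplicity(2 * n_atoms_per_bar, q_total)
    favourable = sum(multiplicity(n_atoms_per_bar, q) * multiplicity(n_atoms_per_bar, q_total - q)
                     for q in range(q_left_start + 1, q_total + 1))
    return favourable / total

# Small system from the video: 8 atoms per bar, 7 + 3 = 10 packets.
print(multiplicity(8, 9) * multiplicity(8, 1))   # 91,520 ways to have 9 packets on the left
print(multiplicity(8, 5) * multiplicity(8, 5))   # 627,264 ways to split them 5 and 5
print(prob_left_gains(8, 10, 7))                 # ≈ 0.105, the 10.5% chance quoted above

# Larger system: 80 atoms per bar, 70 + 30 = 100 packets.
print(prob_left_gains(80, 100, 70))              # a small fraction of a percent (the ~0.05% above)
```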
So heat flowing from cold to hot is just so unlikely that it never happens. Think of it like this Rubik's cube. Right now, it is completely solved, but I'm gonna close my eyes and make some turns at random.
If I keep doing this, it will get further and further from being solved. But how can I be confident that I'm really messing this cube up? Well, because there's only one way for it to be solved, a few ways for it to be almost solved, and quintillions of ways for it to be almost entirely random.
Without thought and effort, every turn moves the Rubik's cube from a highly unlikely state, that of it being solved, to a more likely state, a total mess. So if the natural tendency of energy is to spread out and for things to get messier, then how is it possible to have something like air conditioning, where the cold interior of a house gets cooler and the hot exterior gets hotter? Energy is going from cold to hot, decreasing the entropy of the house.
Well, this decrease in entropy is only possible by increasing the entropy a greater amount somewhere else. In this case, at a power plant, the concentrated chemical energy in coal is being released, heating up the power plant and its environment, spreading to the turbine and the electric generators, heating the wires all the way to the house, and producing waste heat in the fans and compressor. Whatever decrease in entropy is achieved at the house is more than paid for by the increase in entropy required to make that happen.
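As a small illustration of that bookkeeping, here is a sketch with made-up but plausible numbers: even a perfect air conditioner needs outside work, and once you include the power plant that supplies that work, the total entropy goes up.

```python
# Entropy bookkeeping for an air conditioner; the temperatures and heat values are
# illustrative assumptions, not figures from the video.
Q_removed = 1000.0                    # joules of heat pumped out of the house
T_inside, T_outside = 295.0, 305.0    # about 22 °C indoors and 32 °C outdoors, in kelvin

# Even an ideal (Carnot) air conditioner needs work to push heat from cold to hot.
COP_ideal = T_inside / (T_outside - T_inside)   # best possible coefficient of performance
W = Q_removed / COP_ideal                       # electrical work required

dS_house = -Q_removed / T_inside                # entropy removed from the house (negative)
dS_outside = (Q_removed + W) / T_outside        # entropy dumped into the hot outdoors

print(dS_house + dS_outside)   # ≈ 0 for the ideal machine; any real unit gives a net increase
# Generating W at a power plant that is only ~35% efficient also dumps roughly another
# 2 * W of waste heat into the environment, so the total entropy rises further still.
```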
But if total entropy is constantly increasing and anything we do only accelerates that increase, then how is there any structure left on earth? How are there hot parts separate from cold parts? How does life exist?
Well, if the earth were a closed system, the energy would spread out completely, meaning all life would cease, everything would decay and mix, and eventually reach the same temperature. But luckily, earth is not a closed system, because we have the sun. What the sun really gives us is a steady stream of low entropy, that is, concentrated, bundled-up energy.
The energy that we get from the sun is more useful than the energy we give back. It's more compact, it's more clumped together. Plants capture this energy and use it to grow and create sugars.
Then animals eat plants and use that energy to maintain their bodies and move around. Bigger animals get their energy by eating smaller animals and so on. And each step of the way, the energy becomes more spread out.
- Okay, interesting. - Yeah. - Oh wow, I did not know that.
- There you go. Ultimately, all the energy that reaches earth from the sun is converted into thermal energy, and then it's radiated back into space. But in fact, it's the same amount.
I know this is a- - You do know this is...
- I'm a PhD physicist. - Oh, okay, but anyway, so...
- I trust you. The increase in entropy can be seen in the relative number of photons arriving at and leaving the earth.
For each photon received from the sun, 20 photons are emitted. And everything that happens on earth, plants growing, trees falling, herds stampeding, hurricanes and tornadoes, people eating, sleeping, and breathing, all of it happens in the process of converting fewer, higher-energy photons into 20 times as many lower-energy photons. Without a source of concentrated energy and a way to discard the spread-out energy, life on earth would not be possible.
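That factor of roughly 20 can be estimated from temperatures alone: the energy arriving equals the energy leaving, and a blackbody photon's typical energy is proportional to the temperature of whatever emitted it. Using the standard values of about 5800 K for the sun's surface and about 290 K for the earth (values not stated in the video):

$$\frac{N_{\text{out}}}{N_{\text{in}}} \approx \frac{\langle E_{\text{sun photon}} \rangle}{\langle E_{\text{earth photon}} \rangle} \approx \frac{T_{\text{sun}}}{T_{\text{earth}}} \approx \frac{5800\ \mathrm{K}}{290\ \mathrm{K}} \approx 20$$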
It has even been suggested that life itself may be a consequence of the second law of thermodynamics. If the universe tends toward maximum entropy, then life offers a way to accelerate that natural tendency, because life is spectacularly good at converting low entropy into high entropy. For example, the surface layer of seawater produces between 30 and 680% more entropy when cyanobacteria and other organic matter are present than when they're not.
Jeremy England takes this one step further. He's proposed that if there is a constant stream of clumped up energy, this could favor structures that dissipate that energy. And over time, this results in better and better energy dissipators, eventually resulting in life.
Or in his own words, "You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant." So life on earth survives on the low entropy from the sun, but then where did the sun get its low entropy? The answer is the universe.
If we know that the total entropy of the universe is increasing with time, then it was lower entropy yesterday and even lower entropy the day before that, and so on, all the way back to the Big Bang. So right after the Big Bang, that is when the entropy was lowest. This is known as the past hypothesis.
It doesn't explain why the entropy was low, just that it must have been that way for the universe to unfold as it has. But the early universe was hot, dense, and almost completely uniform. I mean, everything was mixed and the temperature was basically the same everywhere, varying by at most 0.001%. So how is this low entropy? Well, the thing we've left out is gravity.
Gravity tends to clump matter together. So taking gravity into account, having matter all spread out like this, would be an extremely unlikely state, and that is why it's low entropy. Over time, as the universe expanded and cooled, matter started to clump together in more dense regions.
And in doing so, enormous amounts of potential energy were turned into kinetic energy. And this energy could also be used like how water flowing downhill can power a turbine. But as bits of matter started hitting each other, some of their kinetic energy was converted into heat.
So the amount of useful energy decreased, thereby increasing entropy. Over time, the useful energy was used up.
In doing so, stars, planets, galaxies, and life were formed, increasing entropy all along. The universe started with around 10 to the 88 Boltzmann constants worth of entropy. Nowadays, all the stars in the observable universe have about 9.5 times 10 to the 80. The interstellar and intergalactic medium combined have almost 10 times more, but that is still only a fraction of the early universe's total. A lot more is contained in neutrinos and in photons of the cosmic microwave background.
In 1972, Jacob Bekenstein proposed another source of entropy, black holes. He suggested that the entropy of a black hole should be proportional to its surface area. So as a black hole grows, its entropy increases.
Famous physicists thought the idea was nonsense, and for good reason. According to classical thermodynamics, if black holes have entropy, then they should also have a temperature. But if they have a temperature, they should emit radiation and not be black after all.
The person who set out to prove Bekenstein wrong was Stephen Hawking. But to his surprise, his results showed that black holes do emit radiation, now known as Hawking radiation, and they do have a temperature. The black hole at the center of the Milky Way has a temperature of about a hundred trillionth of a Kelvin, emitting radiation that is far too weak to detect.
So still pretty black. But Hawking confirmed that black holes have entropy and Bekenstein was right. Hawking was able to refine Bekenstein's proposal and determine just how much entropy they have.
The supermassive black hole at the center of the Milky Way has about 10 to the 91 Boltzmann constants of entropy. That is 1,000 times as much as the early observable universe, and 10 times more than all the other particles combined. And that is just one black hole.
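A rough order-of-magnitude check of those figures, assuming a mass of about four million suns for the Milky Way's central black hole (a commonly quoted value, not given in the video):

```python
# Hawking temperature and Bekenstein-Hawking entropy for the Milky Way's central black hole,
# assuming a mass of ~4 million solar masses (an assumption; the video does not give the mass).
import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23   # SI units
M = 4.0e6 * 1.989e30                                          # black hole mass in kg

T_hawking = hbar * c**3 / (8 * math.pi * G * M * k_B)   # Hawking temperature in kelvin
S_over_kB = 4 * math.pi * G * M**2 / (hbar * c)         # entropy in units of k_B

print(f"{T_hawking:.1e} K")     # ~1e-14 K, "about a hundred trillionth of a kelvin"
print(f"{S_over_kB:.1e} k_B")   # ~1e90, within an order of magnitude of the ~10^91 quoted above
```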
All black holes together account for 3 times 10 to the 104 Boltzmann constants worth of entropy. So almost all the entropy of the universe is tied up in black holes. That means the early universe only had about 0.000000000000003% of the entropy it has now. So the entropy was low, and everything that happens in the universe, from planetary systems forming, galaxies merging, asteroids crashing, and stars dying, to life itself flourishing, all of that can happen because the entropy of the universe was low and has been increasing, and it all happens in only one direction. We never see an asteroid uncrash or a planetary system unmix into the cloud of dust and gas that made it up.
There is a clear difference between going to the past and the future, and that difference comes from entropy. The fact that we are going from unlikely to more likely states is why there is an arrow of time. This is expected to continue until eventually, the energy gets spread out so completely that nothing interesting will ever happen again.
This is the heat death of the universe. In the distant future, more than 10 to the 100 years from now, after the last black hole has evaporated, the universe will be in its most probable state. Now, even on large scales, you would not be able to tell the difference between time moving forwards or backwards, and the arrow of time itself would disappear.
So it sounds like entropy is this awful thing that leads us inevitably towards the dullest outcome imaginable. But just because maximum entropy has low complexity does not mean that low entropy has maximum complexity. It's actually more like this tea and milk.
I mean, holding it like this is not very interesting. But as I pour the milk in, the two start to mix and these beautiful patterns emerge. They arise in an instant and before you know it, they're gone back to being featureless.
Both low and high entropy are low in complexity. It's in the middle where complex structures appear and thrive. And since that's where we find ourselves, let's make use of the low entropy we've got while we can.
With the right tools, we can understand just about anything, from a cup of tea cooling down to the evolution of the entire universe. And if you're looking for a free and easy way to add powerful tools to your arsenal, then you should check out this video sponsor, brilliant.org.
With Brilliant, you can master key concepts in everything from math and data science to programming and physics. All you need to do is set your goal, and Brilliant will design the perfect learning path for you, equipping you with all the tools you need to reach it. Want to learn how to think like a programmer?
Then Brilliant's latest course, "Thinking in Code" is a fast and easy way to get there. Using an intuitive drag and drop editor, it teaches you what you really need to know, including essential concepts like nesting and conditionals. You can start by jumping right in to program a robot and then learn how to apply your new tools to your everyday life, like automating reminders on your phone or building a bot that filters your matches on a dating app.
What I love about Brilliant is that they connect what you learn to real-world examples. And because each lesson is hands-on, you'll build real intuition, so you can put what you've learned to good use. To try everything Brilliant has to offer free for a full 30 days, visit brilliant.org/veritasium. I will put that link down in the description. And through that link, the first 200 of you to sign up will get 20% off Brilliant's annual premium subscription.
So I wanna thank Brilliant for sponsoring this video, and I wanna thank you for watching.