What's the best way to arrange oranges to fill a box as compactly as possible? Placing them carefully in ordered rows? Or tossing them in at random?
In our three-dimensional world, the familiar grocery store stack turns out to be the optimal solution. But what about in higher dimensions? Dimension a hundred, a thousand, a jillion.
So as the dimension grows and grows, what happens then? Very mysterious, what should be the correct thing to do there? This is known as the sphere packing problem.
It's like the stupidest question you can think of, right? So you take a box, I give you a bunch of balls, and you say, okay, how many balls can you put in this box? How many of them can we fit in without actually letting them overlap or distorting them?
An international group of young mathematicians has made a major advance on the sphere packing problem with a new proof, one that works in all dimensions. A sphere in two dimensions is a circle, defined by X and Y coordinates.
The densest packing is exactly what you'd imagine if you placed coins on a flat surface. You have some sort of honeycomb-type structure. You have one penny and then six around it.
That's not super hard to prove. In the late 16th century, the English explorer Sir Walter Raleigh asked the mathematician Thomas Harriot to calculate the optimal way to stack cannonballs on the deck of a ship. Harriot later shared the problem with the astronomer Johannes Kepler.
In 1611, Kepler hypothesized that the optimal three-dimensional packing would be a pyramid shape: any sphere in the interior would touch 12 others, and the spheres would occupy about 74.05% of the available space.
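Both densities mentioned here come from standard closed-form expressions: the honeycomb packing of circles fills π/(2√3) of the plane, and Kepler's pyramid (face-centered cubic) packing fills π/(3√2) of space. A quick check, just as an illustration:

```python
import math

# Density of the hexagonal (honeycomb) circle packing in 2D: pi / (2 * sqrt(3))
hex_density = math.pi / (2 * math.sqrt(3))

# Density of the pyramid (face-centered cubic) sphere packing in 3D: pi / (3 * sqrt(2))
fcc_density = math.pi / (3 * math.sqrt(2))

print(f"2D honeycomb packing: {hex_density:.4%}")   # ~90.69%
print(f"3D pyramid packing:   {fcc_density:.4%}")   # ~74.05%
```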
This became known as the Kepler Conjecture. The question became, how do we understand why this is really optimal? Can we prove it?
It was actually a really long-standing open question in math to prove that that's the best way to do things. In 3D, it took about 400 years. Sphere packings can serve as a sort of basic model of the structure of matter.
Three-dimensional spheres can represent atoms. An ordered packing might model a crystal or other solid, whereas disordered arrangements can represent liquids or gases. It's sort of a way of probing the phases of matter as well, in physics.
To extend the problem from 3D spheres into higher dimensions, more coordinates can be added. Each just defines some point inside of a box. The sphere packing problem in higher dimensions has a range of applications, including in error-correcting codes used in computer science.
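One way to get a feel for why high dimensions are so mysterious: the volume of a unit ball, given by the standard formula π^(n/2) / Γ(n/2 + 1), shrinks toward zero as the dimension grows, so spheres occupy a vanishing share of any surrounding box. A small sketch (the dimensions printed are arbitrary choices):

```python
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the unit n-ball: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# The volume peaks around dimension 5, then collapses toward zero.
for n in (2, 3, 10, 100):
    print(f"dimension {n:3d}: volume {unit_ball_volume(n):.3e}")
```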
It turns out to be deeply connected with topics in information theory. In 2016, the Ukrainian mathematician Maryna Viazovska solved the sphere packing problem in dimensions 8 and 24, proving that in these dimensions certain lattices (E8 and the Leech lattice) are optimal.
She later won a Fields Medal for her work. But beyond these dimensions, it remained an open question. A big question underlying this is structure versus randomness.
Should the best sphere packings be crystalline and ordered? Or should they be completely random and unstructured? When Julian Sahasrabudhe assembled his working group in person for the first time, solving the sphere packing problem wasn't on their agenda.
We actually had a slightly different problem in mind when we got together in Cambridge. We failed to solve that problem, but in such a way that we realized our failure could inform progress on this other problem. To arrive at their sphere packing proof, the group first turned the problem into one about a graph, which consists of points and lines, known as vertices and edges. We totally forgot about geometry, because geometry's confusing, and just worked directly with this graph. Their goal was to construct an efficient, high-dimensional sphere packing from a collection of randomly placed points representing the centers of spheres.
I need to be able to put a sphere around one center and a sphere around the other center, and the spheres can't overlap. If two spheres do intersect, then an edge is drawn connecting the two points. What we're very interested in is: what is the largest subset of a graph that contains no edges?
This subset of points without edges is called an independent set. You construct independent sets just by building them up bit by bit, not trying to find the whole complicated object all at once. You can form an independent set by selecting points in the original graph using a random process called the Rödl nibble.
You just place the balls one at a time until there is no more space to place any balls. Keep applying this process until you've built up an independent set that's large enough to create a good sphere packing. Our packing is about as random a packing as you can get.
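A drastically simplified stand-in for this idea (not the actual Rödl nibble, which is far more delicate) is a random greedy process: drop candidate centers at random and keep each one only if its sphere would not overlap a sphere already kept. The box size, radius, and number of attempts below are arbitrary illustrative choices:

```python
import random

def random_greedy_packing(dim=3, radius=0.5, box=10.0, attempts=2000, seed=0):
    """Keep each random candidate center only if its sphere overlaps no kept sphere.

    Two spheres of equal radius overlap when their centers are closer than
    2 * radius. This loosely mimics building an independent set in the
    overlap graph one vertex at a time.
    """
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        candidate = [rng.uniform(0, box) for _ in range(dim)]
        # Accept only if the squared distance to every kept center is
        # at least (2 * radius)^2, i.e. the new sphere fits.
        if all(sum((a - b) ** 2 for a, b in zip(candidate, c)) >= (2 * radius) ** 2
               for c in centers):
            centers.append(candidate)
    return centers

packing = random_greedy_packing()
print(f"placed {len(packing)} non-overlapping spheres")
```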
The resulting proof broke a 75-year-old record: it gave a denser way to pack spheres in all higher dimensions. But I think it'd be very interesting if random-like sphere packings are the best, period, and this is really what the big question is about.
Imagine a pool of prime numbers. Or a large messy collection of numbers chosen at random. Is it inevitable that patterns will appear within these sets?
True randomness is impossible. No matter how random your set looks, it'll always contain this kind of structure. There's always a little pattern inside any large unstructured object.
Identifying just how large sets of numbers can get before patterns must emerge is a central problem in combinatorics. Combinatorics often is seen as a study of counting. I think more broadly it has to do with the study of patterns.
Patterns of equally spaced numbers are called arithmetic progressions. So a three-term arithmetic progression is like 3, 7, 11. A four-term arithmetic progression is like 3, 7, 11, 15, and so on.
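Concretely, a list of numbers forms an arithmetic progression exactly when all consecutive differences are equal. A minimal checker (the function name is my own):

```python
def is_arithmetic_progression(terms):
    """True if all consecutive differences are equal (length >= 2)."""
    if len(terms) < 2:
        return False
    step = terms[1] - terms[0]
    return all(b - a == step for a, b in zip(terms, terms[1:]))

print(is_arithmetic_progression([3, 7, 11]))      # True: common difference 4
print(is_arithmetic_progression([3, 7, 11, 15]))  # True
print(is_arithmetic_progression([3, 7, 12]))      # False
```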
In February, three grad students made the first breakthrough in decades on one of the biggest unsolved problems involving arithmetic progressions. Generally as a graduate student, you don't try to tackle the cutting-edge problems in the field right away. In 1936, the mathematicians Paul Erdős and Pál Turán explored infinite sets that contain some non-zero fraction of the whole numbers.
For example, the set of even numbers contains 50% of all whole numbers. Erdős and Turán asserted that even if the fraction is very small, as long as it's not zero, the set is sure to contain arbitrarily long arithmetic progressions. In 1975, the mathematician Endre Szemerédi proved the conjecture for sets containing infinitely many numbers.
But what happens with finite sets of numbers? Say you start with some pool of numbers like the numbers one through 20. Now you build a set using some fraction of those numbers, which is called the density of the set.
The question becomes: what's the highest density, or fraction of numbers, you can add to your set before you accidentally include an arithmetic progression? You can start to ask questions about, okay, at what densities do we still have these patterns? What is the best avoiding set?
I want to take as many elements as possible without creating, let's say, a five-term arithmetic progression. How dense a set can you take? With a pool of 20 numbers, the answer is 16, or 80%, for a five-term progression.
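One way to make the 80% figure concrete: delete the multiples of 5 from the numbers 1 through 20. The remaining 16 numbers contain no five-term progression, and putting any deleted number back creates one. (This exhibits a 16-element example consistent with the stated answer; it doesn't by itself prove that 17 elements are impossible.)

```python
def has_k_term_ap(nums, k):
    """Check whether the collection contains a k-term arithmetic progression."""
    s = set(nums)
    top = max(s)
    for a in s:
        # Common differences d with a + (k-1)*d still inside the pool.
        for d in range(1, (top - a) // (k - 1) + 1):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

# 1..20 with the multiples of 5 removed: 16 numbers, no 5-term progression.
avoider = [n for n in range(1, 21) if n % 5 != 0]
print(len(avoider), has_k_term_ap(avoider, 5))   # 16 False

# Re-adding any multiple of 5 completes a run of 5 consecutive numbers.
for m in (5, 10, 15, 20):
    print(m, has_k_term_ap(avoider + [m], 5))    # True each time
```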
Szemerédi proved that as your starting pool grows, the fraction of numbers you can take before arithmetic progressions appear approaches zero. Mathematicians want to know how quickly does this happen? What is the bound for the highest density set that avoids progressions of a given size?
There was an interest for a long time in making the bounds for Szemerédi's theorem better and better, and for very short progressions, progressions of length three, there's been a lot of work. Last year, two computer scientists nearly solved this question for three-term progressions like 4, 8, 12.
But for longer progressions, even progressions of length four, the subject was much trickier. Mehtaab Sawhney and Ashwin Sah first met in 2017 as undergrads at MIT.
Since then, the pair have become frequent collaborators. We sorta met through this shared interest in combinatorics. We wrote something like 50 papers in graduate school together.
We're academic siblings. Meanwhile, James Leng, then a grad student under Terence Tao at UCLA, was working on techniques related to Szemerédi's problem. Sah and Sawhney found out about his work.
I had this new technique. James mentioned he had some really exciting ideas in progress, and made it seem like it might be possible to prove something like this. We sort of realized that the technique was even more powerful than I had originally thought.
Over the course of a few months of long-distance collaboration, the three young mathematicians figured out how to get a better upper bound on the size of sets with no five-term progressions. They then extended their work to progressions of any length. There was a general strategy, but whether any particular bit would work was very unclear.
It was kind of dicey, and eventually it fell into place, and then we were happy. The techniques they developed for the paper have already found applications in other areas of math. There were many more creative ideas in this paper.
It really was a very impressive achievement, especially for three graduate students. The Langlands Program is a vast endeavor to develop what's been called a "Grand Unified Theory" of mathematics. The way mathematical objects combine and lead to these conjectures, there's something extremely appealing about the Langlands Program, especially geometric Langlands.
The geometric Langlands conjecture is a key component of this sweeping paradigm. And now, after three decades of work, Dennis Gaitsgory, along with his former student Sam Raskin and seven others, has produced a monumental 800-page proof. What is now known as the Langlands Program began in 1967, when the mathematician Robert Langlands wrote a letter to the French number theorist André Weil describing a plan to connect far-reaching branches of math.
It's a lot of parts of number theory, parts of physics, and it has kind of different compartments and different corners that operate in parallel. The Langlands Program takes its inspiration from another part of mathematics, Fourier theory, which splits complex signals into simpler components. The Fourier transform is one of these basic building blocks with which we simplify much of math.
We tried to think of everything in terms of these basic patterns. In 1822, the mathematician Joseph Fourier showed that any wave can be broken down into an infinite sum of sine waves using a technique now called the Fourier Transform. The Fourier Transform is like a recipe generator.
You input a complicated wave and you get back its ingredients: the amplitude and frequency of each component sine wave. Fourier theory is an essential part of modern technology. Its applications range from JPEG compression and image recognition to quantum physics and MRIs.
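As a toy version of the recipe-generator idea, here is a direct discrete Fourier transform in pure Python (the two-sine test signal and sample count are my own choices) that recovers the amplitude of each ingredient sine wave:

```python
import cmath
import math

N = 100  # number of samples over one period

# Test signal: 3*sin(2*pi*5t) + 1.5*sin(2*pi*12t), sampled at N points.
signal = [3 * math.sin(2 * math.pi * 5 * n / N)
          + 1.5 * math.sin(2 * math.pi * 12 * n / N)
          for n in range(N)]

def amplitude_at(freq):
    """Amplitude of the sine component at integer frequency `freq`.

    For a real signal, the amplitude is 2*|X_k|/N, where X_k is the
    k-th coefficient of the discrete Fourier transform.
    """
    x_k = sum(signal[n] * cmath.exp(-2j * math.pi * freq * n / N)
              for n in range(N))
    return 2 * abs(x_k) / N

print(round(amplitude_at(5), 3))   # ingredient at frequency 5:  ~3.0
print(round(amplitude_at(12), 3))  # ingredient at frequency 12: ~1.5
print(round(amplitude_at(7), 3))   # nothing at frequency 7:     ~0.0
```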
It has also opened a revolutionary new framework in pure mathematics. Our experience with Fourier theory guides many of the ways that we think about the Langlands Program, the way the subject develops, and the sort of phenomena that we're seeing. Fourier theory has two components: basic building blocks and labels.
Imagine a child's toy castle. This castle can be disassembled into individual building blocks and these components can then be sorted by color into bins, and then labeled. Similarly, the Fourier transform disassembles a complex wave into individual sine waves.
These are like the building blocks, and each sine wave can be labeled with its frequency. To open up new connections between distant mathematical worlds, Langlands researchers look for analogies to Fourier theory in other contexts. Are there other basic building blocks that can fit into the bins, and if so, what are the labels?
The geometric Langlands program is one way to answer these questions. The fundamental building blocks akin to sine waves are called eigensheaves. Eigensheaves are subsets of complex abstractions of functions called sheaves.
So named because mathematicians visualize them like sheaves of wheat growing on top of other mathematical objects. The labels on the bins are something called representations of the fundamental group, descriptions of the loops that can be drawn on spheres, donuts, and other shapes. For Gaitsgory, the epic pursuit began in 1994 when he first heard about the geometric Langlands program as a graduate student.
I guess I understood about 15% if that, but I was kind of completely awestruck. After two decades of work on the problem, Gaitsgory finally began to see a way forward towards a proof. Until that moment, it was in some sense, like walking in the dark in the woods.
From that moment on, I saw the framework. Gaitsgory drew what he called the fundamental diagram, an outline of the solution. But his diagram was missing one piece.
He needed to know that every single eigensheaf is contained within a special composite sheaf known as the Poincaré sheaf. Raskin also became hooked on the problem. Sam became a grad student exactly a year after this revelation, in 2006.
After finishing his PhD, Raskin continued to study the Poincaré sheaf. The Poincaré sheaf is like white light. Just as white light contains every color, mathematicians expected the Poincaré sheaf to contain every eigensheaf.
Even though it's hard to write down an individual eigensheaf, it turns out that you can write down very directly what happens when you amalgamate all of them. If Raskin could prove that the composite Poincaré sheaf contained all eigensheaves, then he could use this as a tool to access the individual eigensheaves, like a prism splitting white light into a rainbow. In 2022, Raskin and his graduate student finally found this proof, completing Gaitsgory's fundamental diagram.
They cracked this mystery, after which the basic shape of the solution became clearer. Over the course of the next two years, Gaitsgory and Raskin led a team that wrote five papers proving the geometric Langlands conjecture. They've built a whole world.
It took a long time to figure out what's the right edifice to build and what materials are available and how to stack them up. Of course, mathematics is infinite. Once you solve these, some new paradigms appear.