Can We Scale Error Correction in Quantum Computing? - Quantum Paper Review
3.21k views · 3,819 words
Qiskit
On this episode of Crosstalk, Abby sits down with IBM error correction scientists Ted Yoder and Sergey Bravyi...
Video Transcript:
The main goal of quantum error correction is to create protected logical qubits better than the physical qubits that they're made out of. So is that why, on the cover of the Nature paper when this was published, you have those sort of donut-shaped things? The code must have a high error threshold, low overhead, sufficiently small size, maybe a few hundred physical qubits. You know, these LDPC codes can actually have logical error rates that are comparable with the surface code while using an order of magnitude fewer physical qubits. [Music]

Hello, my name is Abby Mitchell, and this is our Quantum Paper Review series. Today we're going to be discussing a paper called "High-threshold and low-overhead fault-tolerant quantum memory." This paper was published in Nature in March 2024 and was actually chosen to be on the cover of Nature, which is pretty cool. Today I'm going to be speaking to two authors of that paper, IBM Fellow Sergey Bravyi and IBM senior research scientist Ted Yoder, to understand the methodology and the implications of this work. We're going to be sitting down and asking some questions, but before we do that, let's first understand a bit more about the paper and what it's talking about, starting with explaining the title.

So let's take a look. The title is "High-threshold and low-overhead fault-tolerant quantum memory." This can be a bit confusing, so we're going to break it down piece by piece, starting with quantum memory. In classical computing, the term memory refers to how a computer stores information in binary states, zeros and ones. Quantum memory is talking about the same thing, but for information stored as quantum states. Specifically, the title mentions fault-tolerant quantum memory. In quantum computing it is a real challenge to keep qubits stable enough to hold their quantum states; often the qubits decohere, and we accumulate more and more errors as the computation goes on. To achieve fault tolerance means that the error rate of your computation is, if not nothing, then so negligible that it doesn't affect the final result. One popular area of research that focuses on this goal is the development of quantum error correcting codes. In this paper the authors developed a new set of error correcting codes, part of a family of codes known as low-density parity check codes, or LDPC codes for short.

Okay, let's take a look at another section of the title: high threshold and low overhead. In this context, high threshold is referring to the fact that these LDPC codes developed by the team offer a relatively high threshold of about 0.7%. Put very simply, the higher the error threshold, the more errors on the physical device the codes can correct for. So an error threshold of 0.7% means that if your physical qubit errors are less than 0.7%, which is a feasible error rate for today's devices, then these error correcting codes could be effective.
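To make the threshold idea concrete, here is a minimal sketch (illustrative only, not part of the paper's analysis) that compares a device's physical error rate against the approximate thresholds quoted in this video: roughly 0.7% for the new codes and about 1% for the surface code. Error correction only starts to pay off once the physical error rate sits below the code's threshold.

```python
# Illustrative only: thresholds are the approximate figures quoted in this video.
THRESHOLDS = {"bivariate bicycle (BB) code": 0.007, "surface code": 0.01}

def below_threshold(physical_error_rate: float) -> dict:
    """Which codes can start suppressing errors at this physical error rate?"""
    return {name: physical_error_rate < p_th for name, p_th in THRESHOLDS.items()}

print(below_threshold(0.005))  # 0.5%: below both thresholds
print(below_threshold(0.009))  # 0.9%: above the ~0.7% BB threshold, still below the surface code's ~1%
```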
For comparison, one of the most successful and famous codes to date, called the surface code, has an error threshold of 1%. Lastly, to understand what we're talking about with the low-overhead part of the title, we need to understand encoding efficiency, or encoding overhead. The basic principle of quantum error correction is that you can use a large number of imperfect qubits to correct the errors on a smaller subset of qubits. So if you have a quantum experiment that requires, let's say hypothetically, 20 perfect qubits, or in other words 20 logical qubits, then you could use perhaps 100 imperfect qubits plus some quantum error correcting code to get accurate results for those 20 logical qubits. Encoding overhead refers to how many physical qubits you need per logical qubit in order for your error correcting code to work. In this paper the researchers were able to come up with error correcting codes with very low overhead: in one calculation they were able to encode 12 logical qubits using 288 physical qubits, whereas the surface code would have required about 3,000 physical qubits for the same performance.

So, in short, what the researchers did in this paper is develop a set of LDPC quantum error correcting codes that work at a similar error threshold to the current best performing surface codes, but with a much lower overhead in terms of the number of physical qubits required. Okay, now that we understand a bit more about the paper itself, let's dive into the interviews.

Hi Ted, welcome to the studio. Thank you so much for being here. Before we dive in, would you mind just introducing yourself?

Right, so I've been a research scientist at IBM for about six years, and my focus right now is on quantum error correction.

Hi Abby, and thanks a lot for hosting this interview. So I'm Sergey Bravyi, and I'm a researcher in the IBM Quantum group. My interests include quantum computing theory, for example error correction, algorithms, and complexity theory.

Great, let's dive right in then. Before we get into some of the more complex details of the paper, do you think you could just explain for our viewers what error correction is, and how it differs from error suppression and mitigation?

Yes, so let's begin with classical error correction; it's a fairly simple concept. Let's say that you compose an email and you made a typo in some sentence. The person who reads the email would be able to spot the typo, and this is some sort of error correction. Human language has a lot of redundancy, and each letter is correlated with adjacent letters in a certain way, so if there is a typo in some letter you can easily spot it, because it violates these correlations. Quantum error correction works in a similar way: we introduce redundancy in the way that we encode quantum states, and this redundancy leads to certain correlations between qubits that we can measure. We call these correlations parity checks, and if some error occurs on some qubit, it violates the parity checks and we can detect it. So if we put together sufficiently many parity checks, we get enough information to identify the error and correct it.
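The classical analogy can be made concrete with the three-bit repetition code, the simplest parity-check code. The sketch below illustrates the classical idea only (it is not one of the codes from the paper, and it ignores phase errors): two parity checks compare neighboring bits, and the pattern of violated checks, the syndrome, points to the flipped bit.

```python
# Three-bit repetition code: store one logical bit as three copies.
# Two parity checks compare neighboring bits; a violated check signals an error.

def parity_checks(bits):
    """Syndrome: does bit 0 agree with bit 1, and bit 1 with bit 2?"""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Which single-bit flip produces which syndrome (a tiny lookup-table "decoder").
SYNDROME_TO_FLIP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

codeword = [1, 1, 1]          # logical "1", stored redundantly
received = [1, 0, 1]          # a bit flip hit position 1
syndrome = parity_checks(received)
flip = SYNDROME_TO_FLIP[syndrome]
if flip is not None:
    received[flip] ^= 1       # apply the correction
assert received == codeword
print(syndrome, "-> corrected to", received)
```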
The main goal of quantum error correction is to create protected logical qubits that are much better than the physical qubits that they're made out of. You actually make a big entangled state out of these physical qubits, and within that are some logical qubits. The logical qubits are not localized on just a handful of physical qubits; they're actually spread out over this entangled state. One thing this means is that if you have a few physical qubits with errors, you can still recover the logical information. In practice, the way that's actually done is that you measure certain properties of the state, parity checks or stabilizers, and we feed that information into a classical algorithm called a decoder. The decoder will tell us what errors it thinks occurred; it gives a good guess.

In the quantum case we have two types of errors. We have bit flip errors, which are similar to classical typos, but we also have phase flip errors that destroy quantum superpositions. These errors map quantum states to classical states, which we don't like, so we have to be able to correct two types of errors at the same time.

Do the error correcting codes that you've used in this paper account for both of those types of errors?

Yes, we can correct both bit flip and phase flip errors with these codes.

From what I've been reading in your paper, and in general error correction resources that I've been looking into, I see that there's this type of error correcting code called surface codes, which are pretty popular in the field at the moment, but the codes that you folks created for this paper are called LDPC codes. Can you explain to me what the difference is between these two approaches?

Yes, so actually the surface code is one particular example of LDPC codes.

And the specific codes you created are a different type of LDPC code?

Yes, this is another example of LDPC codes, but it's more efficient, because we can pack more logical qubits into the same number of physical qubits. LDPC stands for low-density parity check code, and a parity check tells us whether the number of errors on a certain subset of qubits is even or odd.

Okay, so you know that there's either going to be 10 errors or 11 errors.

Yes, exactly. So this is only partial information about the error, but if we have many parity checks, we can combine this information, and that can be enough to correct the error.

All I actually need to do to specify a code is tell you what parity checks to measure, and it would be nice if the parity checks I told you to measure were not too difficult to measure.

Can you give me an example of what exactly?

So the parity check should not act on too many of the physical qubits at once; you shouldn't have to talk to too many at once to measure this parity check. And also, the physical qubits that you have shouldn't be involved in too many of the checks. If you have those two properties, then you actually have an LDPC code; that's what low-density parity check means. It turns out that the surface code actually is a low-density parity check code, because every qubit in the code is involved in four parity checks, and every parity check involves four or fewer qubits. Crucially, the surface code only encodes one logical qubit: no matter how big you make the code, you only get one logical qubit. This fundamentally differs from general LDPC codes; as you make a general LDPC code bigger, you get more logical qubits inside.
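The "low density" property can be read straight off a code's parity-check matrix: each row is one check (its weight is how many qubits the check touches) and each column is one qubit (its weight is how many checks the qubit participates in). The sketch below uses an arbitrary small binary matrix for illustration, not a parity-check matrix from the paper.

```python
import numpy as np

def check_density(H: np.ndarray):
    """Row weights = qubits touched per check; column weights = checks per qubit."""
    qubits_per_check = H.sum(axis=1)   # weight of each row
    checks_per_qubit = H.sum(axis=0)   # weight of each column
    return qubits_per_check.max(), checks_per_qubit.max()

# A small, arbitrary example: 3 checks acting on 6 qubits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 1]])

max_check_weight, max_qubit_degree = check_density(H)
print(max_check_weight, max_qubit_degree)
# "Low density" means both numbers stay small even as the code grows;
# for the surface code, as mentioned above, both are at most four.
```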
Okay, so in order for these error correcting codes to work, and forgive me, I'm not really an error correction expert, it requires some fundamental changes to the hardware? It's not just a way of coding your program; you actually have to make changes to the hardware to be able to do this correction?

Yes, so first of all we have to introduce the redundant encoding that I mentioned. Say, if you want to encode only 10 logical qubits, we might need, say, 100 physical qubits to have enough redundancy.

And error correcting codes like this, they're not something that we've really implemented on real devices yet? At the moment it's more sort of in the theoretical space, is that correct?

We've implemented small error correcting codes, codes that can detect or correct one single-qubit error. We even have some experiments where we achieve beyond break-even, so beyond the physical qubit error rates with our logical qubits.

What does that mean exactly?

The logical error rate depends on the physical error rate of your qubits. If you have really bad qubits, then it could be that the logical error rate is actually worse than your physical qubits. If you have really good physical qubits, then the quantum error correcting code will correct the remaining errors that exist, and so your logical qubits will be better than your physical qubits. So there's some crossover point where the logical qubits start outperforming the physical qubits.

Right, okay, and that is a break-even point, or sometimes called a threshold. So can you tell me a bit more about the specifics of the LDPC codes that you folks invented for this paper?

So each quantum code has several parameters. For example, one important parameter is the code distance. It measures the minimum number of single-qubit errors that can destroy the encoded state. For example, if we say that a code has distance five, it means that you can detect fewer than five errors, no matter where these errors happen.

And what was the code distance for these codes you developed?

So we give several examples of codes, and one example is a distance-12 code which encodes 12 logical qubits into 144 physical qubits.

And that is significantly better than surface codes?

Yes, it's roughly 10 times more efficient than the surface code, if you look at how many logical qubits you can encode.

Amazing. And you called them bivariate bicycle codes, or BB codes for short?

Yes. This term is not super important; I guess we need some coding theory background to make sense of it.
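For readers who do want that coding-theory detail, the bivariate bicycle construction can be sketched in a few lines. This is a sketch based on the construction reported in the paper, assuming the 144-qubit example uses l = 12, m = 6 with A = x^3 + y + y^2 and B = y^3 + x + x^2, where x and y are commuting cyclic shifts and the check matrices are H_X = [A | B] and H_Z = [B^T | A^T]; it is not the authors' own code, and it only verifies the qubit counts, not the distance.

```python
import numpy as np

def cyclic_shift(n: int) -> np.ndarray:
    """n x n cyclic permutation matrix."""
    return np.roll(np.eye(n, dtype=int), 1, axis=1)

def gf2_rank(mat: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    m = mat.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivots = np.nonzero(m[rank:, col])[0]
        if len(pivots) == 0:
            continue
        pivot = rank + pivots[0]
        m[[rank, pivot]] = m[[pivot, rank]]      # move the pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] = (m[r] + m[rank]) % 2      # clear the rest of the column
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

# Assumed parameters for the 144-qubit BB code, as reported in the paper.
l, m = 12, 6
x = np.kron(cyclic_shift(l), np.eye(m, dtype=int))
y = np.kron(np.eye(l, dtype=int), cyclic_shift(m))
mp = np.linalg.matrix_power
A = (mp(x, 3) + y + mp(y, 2)) % 2                # A = x^3 + y + y^2
B = (mp(y, 3) + x + mp(x, 2)) % 2                # B = y^3 + x + x^2

HX = np.hstack([A, B])                           # X-type parity checks
HZ = np.hstack([B.T, A.T])                       # Z-type parity checks
assert not ((HX @ HZ.T) % 2).any()               # CSS condition: X and Z checks commute

n = 2 * l * m
k = n - gf2_rank(HX) - gf2_rank(HZ)              # number of logical qubits
print(f"[[{n}, {k}]] code")                      # expect [[144, 12]], matching the paper
```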
You talk about the error threshold for the codes that you created, and that threshold is 0.7%. Why is that a significant number?

So we usually describe random errors by the error rate. Say, if the error rate is 1%, it means that there is a 1% chance of having an error on each particular qubit. Such random errors are closer to what we have in the lab, in the actual quantum device, as opposed to worst-case errors. And the error threshold tells you the maximum error rate that the code can tolerate.

I see. So for error correcting codes, if you have a higher threshold, does that essentially mean the noisier your computer can be while you can still use those codes effectively on it?

Yes, exactly. So we would like to have codes with a very high error threshold. For example, the surface code has about a 1% error threshold, and the BB codes from our paper have roughly 0.7%.

Okay, so that's pretty good then, if it's that close, because the surface code is the most popular one at the moment, right? So 0.7% is quite close to the threshold that the surface code has.

Yes, that's correct.

Okay, let's take a look at figure 3. Can you tell me a bit about the performance of these BB codes that is demonstrated in this diagram?

Right, so here we're showing the results of simulating the bivariate bicycle codes under a certain noise model and with a certain decoder. In this noise model, every physical component in the circuit fails with an error rate p, and that's plotted on the x-axis. On the y-axis we're showing the logical error rate of the protected logical qubits in the code. One thing that's really important to notice, and is an indication that error correction is working, is that a small change in the physical error rate can lead to a very large change in the logical error rate. So if you have an order of magnitude lower physical error rate, you actually get several orders of magnitude lower logical error rate.

And what is the difference between part A and part B?

In part A we're just showing a variety of bivariate bicycle codes. In part B we're showing one bivariate bicycle code, this 144-qubit code, compared with a variety of surface codes with different amounts of error correction potential. What we point out here is that the LDPC code, the 144-qubit code, that's this black line, is right where it should be compared with these surface codes. It has an error correcting performance, or what we call a code distance, of 12, and that's lying right between the distance-11 and distance-13 surface codes, while using an order of magnitude fewer physical qubits.

So essentially what this is showing is that the BB codes that you developed are within the range of how the surface codes perform, but crucially you're only using, what is it, 144 qubits, whereas all these others are using thousands and thousands of qubits.

Exactly, yeah.
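A common rule of thumb (a standard heuristic for codes operating below threshold, not the paper's fitted model) captures why a small improvement in the physical error rate buys so much: the logical error rate falls roughly like p_L ≈ A * (p / p_th)^((d + 1) / 2), so for a distance-12 code a tenfold drop in p suppresses p_L by several orders of magnitude. A quick sketch with assumed constants:

```python
# Rule-of-thumb scaling of the logical error rate below threshold (illustrative, not the paper's fit):
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# The prefactor A and the threshold p_th below are assumptions chosen for illustration.

def logical_error_rate(p, p_th=0.007, d=12, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (3e-3, 1e-3, 1e-4):
    print(f"physical p = {p:.0e}  ->  logical p_L ~ {logical_error_rate(p):.1e}")
# Dropping p by 10x (1e-3 -> 1e-4) lowers p_L by about 10**6.5, i.e. several orders
# of magnitude, which is the steep slope visible in Figure 3.
```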
So, in red and green are parity check qubits, which we use to actually measure these parity checks. They must communicate with the data qubits and learn the values of these operators we want to measure.

And this diagram, I mean, it's in 2D here, but it's actually sort of a 3D shape, like it kind of wraps around, is that correct?

That's right. So these connections, you can't lay them out on just the 2D sheet of paper here; it's actually on the surface of a torus. So if you go off the top edge of this drawing, then you actually come back on the bottom edge, and there are some connections that do that. And likewise with the left and right: if you go off the left edge, you come back on the right.

So is that why, on the cover of the Nature paper when this was published, you kind of have those sort of donut-shaped things? That's sort of representing this?

That's right, this is a natural way to visualize the codes. The other thing that I want to point out here is that while a lot of these connections are just between nearest-neighbor qubits in this square lattice, this grid, there are also these longer-range connections, and there are two of those longer-range connections per qubit. And this is really the source of the efficient overhead of these codes as compared with something like the surface code: the fact that we have these longer-range connections.

When we first started working on this project, we had kind of a wish list of error correction properties we would like to have in a quantum code. For example, the code must have a high error threshold; low overhead, meaning that we can encode many logical qubits; sufficiently small size, maybe a few hundred physical qubits; they must be able to do logical gates on encoded qubits; and there are many other properties, like qubit connectivity should not be too high, and so on. And we discovered that these BB codes have check marks for almost all items in our wish list, which is really great, and we are working on checking off the remaining items.

Great, so there's going to be further work that you're doing then, to push these BB codes even further?

Yes, we keep working on these codes, and I'm personally very interested in improving decoding algorithms. A decoding algorithm looks at the parity checks that we measured and tells you a prediction of what error has occurred on our physical qubits, so that we can correct it.

And how important is that to the overall process of developing error correcting codes?

Yes, so the performance of a quantum code, like the error threshold, depends on how good your decoding algorithm is.

I see, so it's crucial that you optimize your decoding algorithm as far as possible. Interesting. So then if you could theoretically improve the efficiency of your decoding algorithm, that could help you push that error threshold, you know, above 0.