Hi everybody, I'm Kevin Sun, a software developer at IBM, and today I'll be talking about mapping problems to qubits. In the context of the Qiskit patterns workflow there are four steps: first we map the problem to quantum circuits and operators, next we optimize the circuits for the target hardware, then we execute the circuits on the hardware, and finally we post-process the results. In this lecture I will be talking solely about the first step, the mapping step, and I will cover two applications: combinatorial optimization and quantum chemistry.
OK, so let's talk about combinatorial optimization. In combinatorial optimization the goal is to find an input that maximizes or minimizes the value of a cost function. Here I've written the cost function as C and the input as x, and I want to find an input that maximizes the value of the function. Some examples: one is the knapsack problem, from computer science. You have a knapsack that can fit a limited set of items; the input x is the selection of items to include in the knapsack, the items have different values, and the cost function is the total value of the items you put in the knapsack. A somewhat more practical-sounding example is the vehicle routing problem. Here you have a fleet of vehicles and you want to choose routes for them in a way that minimizes the total distance: x is the set of routes for the vehicles, and the cost function C(x) is the total distance traveled by all the vehicles, so this one is actually a minimization problem. Many combinatorial optimization problems are too difficult to solve exactly in practice, so often we settle for approximate solutions. The quality of an approximate solution x is quantified by the approximation ratio, which is the cost function value of the solution divided by the optimal value, C(x)/OPT, where OPT is defined as the maximum possible value of the cost function. If the problem is too difficult to solve exactly, we won't actually know the true value of OPT, but sometimes we can get a lower bound on it.
In this lecture I'll work through a specific problem that we will map to qubits: the max cut problem. Max cut is a graph problem. We're given a graph, and the goal is to partition its vertices into two sets such that the number of edges between the two sets is maximized. On the slide I have a picture of a graph, and I've also depicted a partition of the vertices; the color of each vertex indicates which set of the partition it belongs to. Given this partition, some of the edges are cut, which is depicted by the color of the edges (the purple edges are cut) and also by the black dotted line separating the two sets. The decision version of the max cut problem is a canonical example of an NP-complete problem, a class of problems that is very difficult, so in general we don't expect to be able to solve it exactly in practice. Now I'm going to give some definitions that will help us obtain a computer representation of this problem. First, a partition of the vertices into two sets is called a cut, and the size of a cut is the number of edges between the two sets of the partition. In this example the size of the cut is five, because five of the edges are cut. So I can now say that the goal of the problem is to find a cut with the largest possible size.
Getting closer to the computer representation, a cut can be represented as a bit string, a sequence of zeros and ones: there is one bit for each vertex, and the value of the bit indicates which set the vertex belongs to, zero for one set and one for the other. The goal is then to find a bit string that maximizes the cost function. Here I've written the cost function in a form that takes the bit string as input: it's a sum over the edges, and each edge contributes a term that equals one if that edge is cut by the partition, meaning the bits of its two endpoints differ. So given a bit string representing a cut, this expression gives the size of the cut, and I can state the goal of the max cut problem as: find a bit string that maximizes this function.
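As a quick illustration, here is a minimal Python sketch of this cost function; the edge list below is a hypothetical toy graph, not necessarily the one on the slide.

```python
# Evaluate the max cut cost function C(x) for a bit string x.
# Each edge contributes 1 when its two endpoints land in different sets.
def cut_size(edges, bits):
    return sum(1 for (i, j) in edges if bits[i] != bits[j])

# A small example graph (hypothetical, for illustration only).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
print(cut_size(edges, [0, 1, 1, 0, 0]))  # size of the cut defined by this partition
```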
Now we're going to do a little trick: a change of variables. The bits, which I've written as b, take the values zero and one; we change variables so that they become +1 and −1 instead. Using these new variables I can express the cost function using arithmetic operations: each edge contributes a term like (1 − z_i z_j)/2, where the z variables are multiplied together. Since they are ±1, if they differ from each other the product is −1 and the term evaluates to one, which is what we wanted, and if they are the same the term evaluates to zero, which is also what we wanted. I'll just mention briefly that this is an example of what's known as the Fourier expansion of a Boolean function, where you express a Boolean function as a polynomial in the bits. So, having changed variables, I can write my cost function in this ±1 form.
The final trick is to promote these ±1 variables to quantum variables. The way it works is that I replace each ±1 variable, which I notated with z, by the Pauli Z operator on the corresponding qubit, which is similar to a ±1 variable because it's a diagonal matrix with +1 and −1 on the diagonal. If I do that in this expression, then because Pauli Z is a matrix, the whole thing becomes an operator: a big matrix, a quantum operator. Here I'm abusing notation a bit by writing C for both the objective function and the quantum operator, but it's not that egregious an abuse, because if I take this quantum operator and act it on a computational basis state, all that happens is that the basis state gets multiplied by a factor equal to the cost function value. So this quantum operator really does represent the cost function, and because of this I can also write the expectation value of the operator in a computational basis state, which is just the value of the cost function on that bit string. This operator is called the max cut Hamiltonian. In the context of quantum mechanics, a Hamiltonian is just a linear operator that is Hermitian; in terms of matrices, it's a matrix whose conjugate transpose is equal to itself. So I now have the max cut Hamiltonian, a quantum representation of our cost function. To get a bit more concrete about what it looks like: it's a 2^n × 2^n diagonal matrix whose diagonal entries are the values of the cost function. Here I've written out what the matrix looks like, and I've reverted to zero and one for the bits to match the quantum computing convention. I can now restate the goal of the max cut problem as: find a computational basis vector with the highest eigenvalue of this max cut Hamiltonian.
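As a hedged sketch of this mapping in code, here is how one might build the max cut Hamiltonian as a Pauli operator with Qiskit's quantum_info module (assuming a recent Qiskit version; the edge list is the same toy graph as before).

```python
# C = |E|/2 * I  -  (1/2) * sum over edges (i, j) of Z_i Z_j
from qiskit.quantum_info import SparsePauliOp

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

zz_part = SparsePauliOp.from_sparse_list(
    [("ZZ", [i, j], -0.5) for (i, j) in edges], num_qubits=n
)
identity_part = SparsePauliOp("I" * n, coeffs=[len(edges) / 2])
hamiltonian = (identity_part + zz_part).simplify()
print(hamiltonian)
# Its diagonal entries are the cut sizes, so the largest eigenvalue is the max cut value.
```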
The max cut problem originally arose in the context of classical algorithms, and the classical strategy for solving it is to sample a bit string from a probability distribution generated by an efficient classical algorithm. The best known algorithm here is the Goemans-Williamson algorithm, which achieves an approximation ratio of about 0.878. The quantum strategy is to prepare a quantum state on a quantum computer using an efficient quantum algorithm and then measure that state in the computational basis; this gives you a bit string, which is your solution to the problem. Because quantum computing is a generalization of classical computing, the quantum strategy can only be stronger than the classical strategy — it's at least as strong, that is. And there's a crucial conjecture underlying the field of quantum computing, which we can call the quantum advantage conjecture: the quantum strategy can sample from more, and better, probability distributions than the classical strategy, which is to say that the quantum strategy can actually give you better solutions than the classical strategy, or find solutions faster.
Now I'll briefly show an example of how you would map the problem onto circuits — so far I've shown how to map it onto an operator. A common algorithm for addressing the max cut problem is the QAOA, which stands for the quantum approximate optimization algorithm. Here you have the max cut Hamiltonian, but you also have another Hamiltonian, sometimes called a driver Hamiltonian, which is a sum of single-qubit X operators, and you have an initial state, which here is the uniform superposition over all computational basis states. Then you have the QAOA ansatz, a quantum circuit — or rather a state — constructed by alternating applications of time evolution under the max cut Hamiltonian and under the driver Hamiltonian. The evolution times are free parameters; in practice you would optimize these parameters in order to maximize the cost function. The QAOA actually has an interesting theoretical motivation behind it, which I won't get into here, but it's a common example of a circuit you would use to address the max cut problem.
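As a hedged sketch, Qiskit's circuit library provides a QAOA ansatz that can be built directly from the max cut Hamiltonian constructed in the earlier sketch (assuming a recent Qiskit; by default it uses the sum-of-X mixer and the uniform-superposition initial state described above).

```python
# Build the QAOA ansatz for the max cut Hamiltonian from the previous sketch.
from qiskit.circuit.library import QAOAAnsatz

ansatz = QAOAAnsatz(cost_operator=hamiltonian, reps=2)  # 2 alternating layers
print(ansatz.num_parameters)  # 2 * reps free evolution-time parameters to optimize
```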
In the next lecture, Daniel Egger will dive into more detail about optimization, so look out for that. OK, that's all I'll say about combinatorial optimization; the rest of this lecture is about quantum chemistry and how we map quantum chemistry problems onto quantum computers. A basic question in quantum chemistry is: given a configuration of atoms, what are its properties? Here I'm depicting a very simple example, a water molecule, H2O. You have two hydrogen atoms bonded to an oxygen atom; in a water molecule the bond between the hydrogen and the oxygen has a certain length, and the angle between the hydrogens has a certain value, and we can measure these quantities in a laboratory. What's also cool is that we can input the configuration of the atoms into a computer, and because we know the laws of quantum mechanics, we can actually compute the bond length and the angle on a computer. Water is a fairly small molecule, so we can handle it on classical computers. However, the best known classical algorithms for accurately calculating the properties of molecules scale exponentially with the size of the molecule (or some other measure of the size of the problem), which means that if we want accurate solutions on classical computers we can only handle very small systems; once the system reaches a certain size, our algorithms can no longer compute these properties. That's why predicting the properties of molecules and materials is one of the most anticipated applications of quantum computers.
In the setup of chemistry we have a number of nuclei and electrons arranged in some configuration, and we can define variables representing the distances between the various particles, the nuclei and electrons. The state of the system is represented by a wave function, which is a function of the coordinates of the particles, and the energy of the system is represented by a Hamiltonian. Here I've written the molecular Hamiltonian: it's an operator that maps functions to functions, and it's a sum of terms — some terms represent kinetic energy, which is a derivative operator, and other terms represent the interactions between the various particles. The molecular Hamiltonian I showed is a pretty complicated expression, so in chemistry we often make what's called the Born-Oppenheimer approximation to simplify the problem and make it more amenable to calculation. The idea of the Born-Oppenheimer approximation is that the nuclei are much more massive, and therefore move much more slowly, than the electrons, so we just treat them as fixed points. If we do this, then the kinetic energy of the nuclei is zero and the nucleus-nucleus interaction is just a constant, so we can drop those terms from the Hamiltonian. The result is what we call the electronic Hamiltonian, which contains only the terms involving the electrons: the kinetic energy of the electrons, the electron-nucleus interaction, and the electron-electron interaction. The R here reminds us that this Hamiltonian is parameterized by the positions of the nuclei.
Often in chemistry the goal is to solve the Schrödinger equation for this Hamiltonian, which means finding its eigenvalues and eigenvectors, and often we're interested in the low-lying eigenvalues and eigenvectors. So we now have the electronic Hamiltonian, an operator acting on a space of functions; it involves derivatives, so it's a continuous object. To represent the Hamiltonian on a computer we need to discretize it somehow, and there are different ways to do that. In this lecture we'll focus on one of them, called second quantization. Here I've written the form of the Hamiltonian in second quantization: the h_pq and the h_pqrs are just numbers, while the a and a† (a-dagger) are quantum operators — you can think of them as big matrices. So in this discretization we've gone from a continuous object involving derivatives to a big matrix. Now I need to tell you about these a and a† operators.
The idea of second quantization is that the electrons are described by a system of fermionic modes — fermionic because we're dealing with fermions, the category of particles that electrons belong to. A system of n fermionic modes, or orbitals (the two terms are interchangeable), is described using a set of n fermionic annihilation operators, the a_p, and these operators satisfy the fermionic anticommutation relations: a_p a_q + a_q a_p = 0 and a_p a_q† + a_q† a_p = δ_pq · I. The adjoint of an annihilation operator, a_p†, is a creation operator. Let's look at these relations and try to understand them. The first says that a_p a_q + a_q a_p = 0; because the two terms add to zero, one is the negative of the other, and the two terms look the same except that p and q are swapped. So it's saying that if I have a product of two annihilation operators and I change their order, I pick up a minus sign — in other words, a_p and a_q anticommute. The second relation tells us the relationship between an annihilation operator and a creation operator: if they act on different modes, the Kronecker delta is zero, which means they still anticommute; but on the same mode, when p and q are equal, the expression adds up to the identity operator.
From these fermionic anticommutation relations — we just have a set of operators satisfying these equations — it turns out you can tell a lot about the structure of the vector space. The operators a_p† a_p commute with one another and have eigenvalues zero and one; they are called the occupation number operators because they tell you the occupation of a mode. Remember, the way we're thinking about this is that each fermionic mode can be occupied or unoccupied by an electron, and the occupation number operators tell you the occupation of the mode. There is a normalized vector called the vacuum state, which is a simultaneous zero-eigenvector of all the occupation number operators — the state with no electrons in it. If I have a state that is a zero-eigenvector of an occupation number operator and I act on it with the creation operator a_p†, the result is a one-eigenvector of that occupation number operator; this is why we say that a_p† creates a fermion in mode p. Similarly, if I have a one-eigenvector of the occupation number operator, the annihilation operator a_p turns it into a zero-eigenvector, which is why we say that a_p destroys a fermion in mode p. We also note that the creation and annihilation operators square to zero, which is one manifestation of the Pauli exclusion principle: if I try to create an electron in the same mode twice, I can't — the result is just zero. Finally, we can construct an orthonormal basis of 2^n vectors labeled by bit strings z of length n, where the value of each bit tells you whether or not to apply the corresponding creation operator to the vacuum state. So we start from the vacuum state and apply some combination of creation operators to it, creating electrons in various modes. These basis vectors are called Slater determinants — the terminology might sound a bit obscure, but later we will see where it comes from.
We can also work out how an annihilation operator acts on the basis vectors we just constructed. If I take the annihilation operator for mode p and act it on a basis vector represented by a bit string: if the corresponding bit is zero, meaning that mode is not occupied, then applying the annihilation operator gives zero; if the mode is occupied, meaning there's a one there, then applying the annihilation operator flips that one to a zero, but we also pick up a coefficient — a phase factor of ±1 that depends on the parity of the preceding bits. So here I'm destroying an electron in mode p, but I pick up a phase that depends not only on that mode but also on the other modes. This behavior differs from qubits: in qubit land, if I apply an operator to a subset of qubits, that operator is tensored with the identity on the rest of the qubits, which means the state of the rest of the qubits does not influence the action of the operator. It's different here: I'm annihilating only in mode p, but I pick up a phase factor that depends on the occupations of the modes preceding p. The consequence is that the notion of locality differs between qubits and fermionic modes, and this is going to be an important consideration when we map fermions onto qubits.
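Here is a minimal Python sketch of the rule just described (the exact ordering convention — whether the phase counts the bits before or after mode p — depends on how the modes are ordered; this sketch counts the bits before p).

```python
# Apply the annihilation operator a_p to an occupation-number basis state |z>,
# including the (-1)^(parity of preceding occupied modes) phase factor.
def apply_annihilation(p, bits):
    """Return (coefficient, new_bits), or (0, None) if mode p is empty."""
    if bits[p] == 0:
        return 0, None                      # a_p kills the state: mode p is unoccupied
    phase = (-1) ** sum(bits[:p])           # parity of the occupations before mode p
    new_bits = list(bits)
    new_bits[p] = 0                         # destroy the fermion in mode p
    return phase, new_bits

print(apply_annihilation(2, [1, 1, 1, 0]))  # -> (1, [1, 1, 0, 0]): two preceding 1s, even parity
```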
What we have so far is that a system of fermionic modes is represented using fermionic creation and annihilation operators. On the other hand, when we deal with qubits, we usually represent our operators using tensor products of the Pauli operators I, X, Y, and Z. This raises a question: how can we represent the fermionic operators using Pauli operators? Based on the action of the annihilation operator from before, we can make a first attempt, because there is an operator sometimes called the qubit destruction operator that we can try mapping the annihilation operator onto. This operator takes the one state to the zero state and annihilates the zero state — if I apply it to |0⟩ I just get zero. Here I've written the 2×2 matrix that this operator represents, and I can also write it in terms of the Pauli X and Y matrices as (X + iY)/2.
If I act this operator on a bit string, then when the p-th bit is zero it will indeed give zero, and when the bit is one it will indeed flip it to zero — however, the phase factor is missing. To fix this, we add what's often called a Z string, a string of Pauli Z operators: in addition to the destruction operator on qubit p, we also have Z operators acting on the qubits that precede p. Here I've expanded out the expression: I have my destruction operator on qubit p, on the higher qubits it is just the identity operator, and on the lower qubits we have Z's on all of them, and this string of Z's computes exactly the phase we wanted. If you define your operators like this, you have a set of operators expressed in terms of the Pauli matrices that do satisfy the fermionic anticommutation relations, so these operators are a valid representation of your system of fermions. This representation is called the Jordan-Wigner transformation.
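As a quick hedged check of this claim, here is a pure-NumPy sketch that constructs the Jordan-Wigner operators for a few modes and verifies the fermionic anticommutation relations numerically (the helper names and the qubit ordering are my own choices for illustration).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1, -1])
lower = np.array([[0, 1], [0, 0]])          # (X + iY)/2, takes |1> to |0>

def jw_annihilation(p, n):
    """Jordan-Wigner a_p on n qubits: Z on qubits 0..p-1, the destruction operator on qubit p."""
    factors = [Z] * p + [lower] + [I2] * (n - p - 1)
    return reduce(np.kron, factors)

n = 3
a = [jw_annihilation(p, n) for p in range(n)]
for p in range(n):
    for q in range(n):
        anti = a[p] @ a[q] + a[q] @ a[p]                       # {a_p, a_q} = 0
        mixed = a[p] @ a[q].conj().T + a[q].conj().T @ a[p]    # {a_p, a_q^dag} = delta_pq * I
        assert np.allclose(anti, 0)
        assert np.allclose(mixed, np.eye(2**n) if p == q else 0)
print("fermionic anticommutation relations hold")
```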
Something important to notice about the Jordan-Wigner transformation: as I said before, operators that are local in terms of fermions are not local in terms of qubits. This operator is local in terms of the fermions — it simply destroys a fermion in mode p — but the corresponding qubit operator acts not only on qubit p but also on qubits 1 through p − 1 with Pauli Z's. We can think of this as introducing an overhead in the simulation: I have fermions, I want to simulate them with qubits, and there's an overhead resulting from the introduction of these Z strings. The overhead is O(n), linear in the number of modes, or equivalently the number of qubits. If I take the annihilation operator acting on the last mode, there is a Z string on all the other qubits, so that operator acts on all the qubits at the same time. However, the Jordan-Wigner transformation is just one of many possible fermion-to-qubit mappings; it's one of the most popular because it's very simple.
There are others, and it turns out that using something called the Bravyi-Kitaev transformation, the overhead can be reduced to O(log n), which is pretty cool. I'm not going to describe it here, but if you're interested you can read about the Bravyi-Kitaev transformation to see how that works. Now, just to give a few examples of how common operators map under the Jordan-Wigner transformation. The occupation number operator maps to essentially a single-qubit Z operator: it's local, acting only on qubit p, and up to a shift and a scale it's just a Z operator. The tunneling interaction, which I've written here, acts on two modes and represents electrons hopping between them — creating in one mode and destroying in the other, plus the complex conjugate so that the whole thing is a Hermitian operator. This gets mapped onto an XX + YY interaction in qubit land, but with a Z string running between qubits p and q: it acts on qubit p, qubit q, and also the qubits in between. However, from this expression we can see that if the modes are adjacent — if q = p + 1 — then the qubit operator is actually local: there's no Z string anymore, just XX + YY on adjacent qubits, a two-qubit operator. So the upshot is that, at least for the Jordan-Wigner transformation, when the operators involve adjacent modes the Z strings cancel out, and that's something very useful to take advantage of.
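Continuing the NumPy sketch from the Jordan-Wigner check above (same hypothetical helper function), here is a quick verification that the hopping term between adjacent modes maps to the local (X X + Y Y)/2 operator with no Z string left over.

```python
# Hopping between adjacent modes p and p+1 maps to (X_p X_{p+1} + Y_p Y_{p+1}) / 2.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

p, n = 0, 3
a_p, a_q = jw_annihilation(p, n), jw_annihilation(p + 1, n)
hopping = a_p.conj().T @ a_q + a_q.conj().T @ a_p            # a_p^dag a_{p+1} + h.c.

xx = reduce(np.kron, [X, X, I2])
yy = reduce(np.kron, [Y, Y, I2])
assert np.allclose(hopping, (xx + yy) / 2)
```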
OK, so now I'll give an example of mapping a fermionic circuit to a qubit circuit, and the example I'll take is the orbital rotation. The orbital rotation is a fundamental operation in fermionic simulations. Here the orbital rotation is represented by this calligraphic 𝒰, and the way it acts on an annihilation operator is to map it onto a linear combination of annihilation operators, which I'll write as b_p: it maps a_p to b_p, and the coefficients of the linear combination form a unitary matrix. Notice that the u in normal font is an N×N matrix — a small matrix — while the calligraphic 𝒰 is 2^N × 2^N, a big matrix; so we have a small matrix that describes a big matrix. It turns out that the b_p operators also satisfy the fermionic anticommutation relations — I've written these relations here in terms of the b_p — and because of that, the b_p are also fermionic annihilation operators: they destroy fermions in a different set of modes, related to the original modes by a basis change. So really, all the orbital rotation does is change basis from one set of modes to another. A qubit analogy would be changing from the Z basis to the X basis.
If I wanted to change from the single-qubit Z basis to the X basis on all the qubits, that would just be a single layer of basis-change gates; with the orbital rotation it's going to be a bit more complicated than that. As an example of an application of the orbital rotation, here is the expression for the molecular Hamiltonian again. You can think of the Hamiltonian as being composed of two parts: this part is often called the one-body part, and this one is the two-body part. The one-body part is a quadratic Hamiltonian, meaning its terms contain products of at most two fermionic creation and annihilation operators, and a useful fact is that any quadratic Hamiltonian can always be rewritten in diagonal form: here it's rewritten as a linear combination of occupation number operators on a different set of modes — the original modes are the a's, the new modes are the b's. In this new basis of b modes the Hamiltonian is just a linear combination of occupation number operators, which is diagonal and can therefore be simulated easily. So the conclusion is that just by doing an orbital rotation, we can make the one-body part of the Hamiltonian diagonal and simulate it easily. I'll also mention quickly that the two-body part can likewise be simulated with orbital rotations, using what's called the double-factorized, also known as low-rank, representation of the two-body part.
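Here is a minimal NumPy sketch of the one-body claim: the matrix of coefficients h_pq is Hermitian, so a single eigendecomposition yields both the orbital rotation and the diagonal (occupation-number) form; the h below is just a random Hermitian placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = (h + h.conj().T) / 2                      # make it Hermitian, like a real h_pq

lambdas, u = np.linalg.eigh(h)                # h = u @ diag(lambdas) @ u^dagger
# u defines the orbital rotation; lambdas are the coefficients of the
# occupation-number operators in the rotated basis.
assert np.allclose(u @ np.diag(lambdas) @ u.conj().T, h)
```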
So how do we implement an orbital rotation on a quantum computer? An orbital rotation is represented by an N×N unitary matrix u, and the goal is to apply the big 2^N × 2^N unitary that it represents on a quantum computer. There's a useful fact we'll use: the map from little u to big 𝒰 satisfies what's known as a group homomorphism property, which just means that if I multiply small matrices together, the corresponding big matrices also get multiplied. We'll take advantage of this by taking our unitary matrix u and decomposing it — similar to how you can do a QR decomposition of a matrix — into a sequence of operations: D is a diagonal matrix and the G's are Givens rotations, which I'll explain in a moment. This decomposition of little u yields a corresponding decomposition of big 𝒰 in terms of basic gates.
Now, a Givens rotation is a matrix that acts as a rotation in a two-dimensional subspace. Here the Givens rotation is parameterized by p and q, the two modes it rotates together, and by angles θ and φ, where φ is needed in case we have to introduce complex phases into the rotation. This Givens rotation represents a certain operation on the Hilbert space of quantum states, and the way that operation acts on the fermionic modes p and q is as a rotation: it changes the annihilation operators for modes p and q into linear combinations of each other, which can be written in this simpler matrix form, where the annihilation operators are simply getting mapped onto a rotation matrix applied to those operators. It turns out that this operation — rotating two modes — can be achieved by the unitary I've written here: the diagonal terms are just complex phases, and the more interesting part of the operation is the rotation acting on the two modes, which is what effects the Givens rotation. So the strategy for implementing an orbital rotation is: decompose u into a diagonal matrix and a sequence of Givens rotations, apply the sequence of Givens rotation gates defined by those rotations, and implement the diagonal factor using phase gates.
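Here's a minimal NumPy sketch of the elementary step of this decomposition: a Givens rotation in the (p, q) plane with its angle chosen to zero out one entry (real case only, for simplicity; the complex case uses the phase angle φ mentioned above). In the full decomposition, a sequence of such rotations, together with a final diagonal of phases, reduces u to the identity, and reversing the sequence gives the gate decomposition.

```python
import numpy as np

def givens(n, p, q, theta):
    """N x N identity with a rotation by theta embedded in the (p, q) plane."""
    g = np.eye(n)
    g[p, p] = g[q, q] = np.cos(theta)
    g[p, q] = np.sin(theta)
    g[q, p] = -np.sin(theta)
    return g

v = np.array([3.0, 0.0, 4.0])
theta = np.arctan2(v[2], v[0])     # angle chosen so the rotation zeroes entry q = 2
g = givens(3, 0, 2, theta)
print(np.round(g @ v, 10))         # -> approximately [5, 0, 0]
```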
What do I mean when I say these are phase gates? As we saw before, the occupation number operator on mode p maps, up to a shift and a scale, onto a Pauli Z operator, so the corresponding gate is just a single-qubit Z rotation on the qubits — a very easy operation to implement. Now, the Givens rotation gate acts on two modes, and in general there would be a difficulty: even though it acts on two fermionic modes, we saw before that in terms of qubits it might not be local, because it can introduce a Z string between p and q. However, if p and q are adjacent, this is a local operator, and it is in fact just a two-qubit gate. So it would be nice if we could perform the Givens rotation decomposition in a way where the Givens rotations act on adjacent indices, because that would give us a sequence of two-qubit gates which, furthermore, act only on adjacent qubits. It turns out this is possible, and here I'm showing a picture of what the orbital rotation actually looks like when it's compiled down to qubits. First of all, you'll notice there are two circuits that are independent of each other; that has to do with the spin variable, which allows you to parallelize the orbital rotation across the different spins — I'll explain spin in a bit, so let's ignore that for now. If we look at just one of these circuits, we see XX + YY gates, which represent the Givens rotations, and as you can see they act on adjacent qubits. Another nice property is that they're packed very densely: we're fitting as many gates as possible into as few layers as we can. So this is a visual representation of the Givens rotation decomposition of the unitary matrix, and the phase gates here implement the diagonal factor.
OK, so now that I've talked about mapping fermions onto qubits in terms of operators, and given an example of a circuit, I'm going to conclude by briefly explaining how the discretization works. As I said before, we started with the electronic Hamiltonian, a continuous object involving derivatives, and we obtained the second-quantized Hamiltonian, which is written in terms of matrices that can be represented on a computer. The idea of the discretization is that we choose a finite set of functions to represent the space of functions, and we project the Hamiltonian onto the space spanned by those functions. We start by choosing a finite set of what we call spatial orbitals to represent a function of a single electron's position — the spatial orbitals are the functions shown here — and what this is showing is that a function of a spatial coordinate r is represented as a linear combination of these spatial orbitals. In general, an arbitrary function cannot be represented exactly by a finite linear combination like this; that's something we have to accept, because on a computer we can only represent a finite number of things. The more spatial orbitals we include in the expansion, the more accurate this representation can be.
Describing an electron requires not only describing its position: there's another property of electrons called spin, which I mentioned before. Spin is described by two orthonormal functions, α and β, representing spin up and spin down, and we use these spin functions together with the spatial functions to construct spin orbitals. A spin orbital describes both position and spin, and it is the product of a spatial function with either the α or the β spin function; the result is that the coordinates of an electron include both position and spin. Once we have these basis functions, the spin orbitals, how can we build a many-electron wave function from them? A fundamental postulate of quantum mechanics regarding fermions states that a many-electron wave function must be antisymmetric with respect to exchanging the coordinates of any two electrons. The wave function is a function of the electronic coordinates, and what the postulate says is that if I exchange two coordinates — taking x_i and x_j and swapping them to get x_j and x_i — the wave function picks up a minus sign.
So suppose I have my spin orbitals, which are functions for a single electron, and I want to construct a two-electron wave function. One thing I can try is to just multiply the spin orbitals, so that my function of two electrons is a product of spin orbitals. The issue is that this is not antisymmetric. However, we can fix it by forming the linear combination shown here: with this expression, if I swap x_1 and x_2, I end up with the negative of the expression, so it does satisfy the antisymmetry property. Now that we have a way to build an antisymmetric wave function from our spin orbitals, at least for two electrons, we can try to generalize. We can take this expression and write it as the determinant of a 2×2 matrix, and this allows us to generalize: for n electrons, we form the determinant of an n×n matrix, which also satisfies the antisymmetry property. This gives us a way to build an antisymmetric wave function from spin orbitals, and this wave function is called a Slater determinant.
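Here is a small NumPy sketch of the two-electron case: the wave function built as a determinant flips sign when the two coordinates are swapped (the two orbitals below are arbitrary placeholder functions of a one-dimensional coordinate, purely for illustration).

```python
import numpy as np

chi_a = lambda x: np.exp(-x**2)               # placeholder spin orbitals
chi_b = lambda x: x * np.exp(-x**2 / 2)

def slater_2e(x1, x2):
    """Determinant of the 2x2 matrix of orbitals evaluated at the two coordinates."""
    m = np.array([[chi_a(x1), chi_b(x1)],
                  [chi_a(x2), chi_b(x2)]])
    return np.linalg.det(m) / np.sqrt(2)

x1, x2 = 0.3, -1.1
print(np.isclose(slater_2e(x1, x2), -slater_2e(x2, x1)))   # True: swapping flips the sign
```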
In the expression for the determinant, we only need to specify which spin orbitals appear in order to reconstruct it, so we can use a more compact notation for the Slater determinant where we just list the orbitals that appear — from that information alone we can form the determinant. In words, we would describe this state by saying that in this Slater determinant the orbitals χ_a, χ_b, up to χ_c are occupied. The order of the orbitals actually matters, because of the antisymmetry principle: if I swap two of the orbitals, I pick up a minus sign. Earlier in this lecture I also referred to some other states as Slater determinants — the bit string states we constructed by applying creation operators to the vacuum state. Those are indeed the same states, and the connection is given by this expression: if I start with the vacuum state and apply a creation operator to it, I get a Slater determinant — here I'm creating an electron in mode p, and the result is the Slater determinant with mode p occupied. To give an example with eight orbitals: if orbitals 1, 2, and 4 are occupied in the Slater determinant, that is the same state as taking the vacuum state and applying creation operators 1, 2, and 4; and in terms of bit strings, the bit string has length eight (the total number of orbitals), bits 1, 2, and 4 are set to one, and the rest are zero. So this is the connection between Slater determinants, creation and annihilation operators, and bit strings.
OK, so how do we actually get these spin orbitals in practice? One common method, which is fundamental to quantum chemistry in practice, is the Hartree-Fock method. The Hartree-Fock method can be seen as a way of approximating the ground state of the electronic Hamiltonian by a single Slater determinant. Again, the goal is to compute the lowest eigenvector, and in general the lowest eigenvector of the Hamiltonian is not a Slater determinant, but we take the best approximation that is a Slater determinant — in this expression, Ψ denotes a Slater determinant — and to find the best Slater determinant we optimize over the choice of spin orbitals. To actually implement the Hartree-Fock method on a classical computer, we choose a finite set of spatial basis functions and use them to represent the spatial part of the spin orbitals. The expression here shows that, again, a function is represented as a linear combination of the spatial basis functions. The energy we obtain as a result depends on the choice of basis functions; the basis functions form what we call the basis set, these functions are often constructed from Gaussians, and the more functions we include in the basis set, the more accurate the result will be. Once we have our spin orbitals, we can finally say how to get the discretized Hamiltonian: we started with the original Hamiltonian written in terms of derivatives, and to get its matrix form, it turns out the coefficients are given by integrals of the spin orbitals. That's all I'll say about this; if you want to learn more about the details, you can consult a textbook on quantum chemistry.
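As a hedged sketch of the full chemistry mapping pipeline described in this lecture, here is how it might look in Qiskit Nature with the PySCF driver. This assumes qiskit-nature (roughly version 0.7 or later) and pyscf are installed; class names and module paths have changed between versions, so treat it as illustrative rather than exact. The water geometry is a standard approximate one, chosen just for the example.

```python
from qiskit_nature.second_q.drivers import PySCFDriver
from qiskit_nature.second_q.mappers import JordanWignerMapper

# Hartree-Fock on a water molecule in the STO-3G basis set (geometry in angstroms).
driver = PySCFDriver(
    atom="O 0.0 0.0 0.0; H 0.757 0.586 0.0; H -0.757 0.586 0.0",
    basis="sto3g",
)
problem = driver.run()

# Second-quantized electronic Hamiltonian in terms of creation/annihilation operators.
fermionic_op = problem.hamiltonian.second_q_op()

# Map the fermionic operators to qubit (Pauli) operators via Jordan-Wigner.
qubit_op = JordanWignerMapper().map(fermionic_op)
print(qubit_op.num_qubits)
```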
OK, so to review what we covered in this talk. We talked about problem mapping, first for combinatorial optimization. The steps of the mapping there are: first, write the objective function in terms of binary ±1 variables; then convert the objective function to a diagonal Hamiltonian by substituting the binary variables with Pauli Z operators; and then design a quantum circuit and measure its output state in the computational basis to obtain a solution to the problem. For quantum chemistry, the steps are: first, discretize the electronic Hamiltonian using second quantization — this involves obtaining a set of spin orbitals, for example using the Hartree-Fock method, and then writing down the Hamiltonian using creation and annihilation operators; then map the fermionic creation and annihilation operators to qubit operators, for example using the Jordan-Wigner transformation; and when constructing your circuits, it's often useful to take advantage of the cancellation of Z strings to make your circuits and gates more easily implementable on the quantum computer. And that's it for this lecture — thanks for tuning in.