Hi, I'm Chris Wood. I'm a senior research scientist at IBM Quantum, and today I'm going to be talking about the Qiskit Runtime Primitives version 2 that were recently released and how you can use them to run quantum computations on IBM's devices. The Qiskit Runtime is essentially a containerized cloud execution environment for performing high-performance quantum computations. Its main feature is the ability to run quantum programs in an environment where the classical computer is physically closer to the quantum computer, and the interface for doing this is the runtime primitives, which I'm going to discuss. The primitives are a foundational building block for quantum programs, designed around optimizing the execution of quantum workloads. They provide options to customize the iteration and execution of programs to maximize the quality you can get from running on IBM Quantum processors.

There are two primitives that we use when running quantum computations, the sampler primitive and the estimator primitive, and I'm going to cover both in this lecture. The sampler primitive is a low-level execution primitive where we care about extracting the measurement outcomes of quantum circuits, and it returns single-shot measurement outcomes, so the circuits that are run on the sampler should include measurements to return these outcomes. The estimator primitive, on the other hand, is a higher-level abstraction for running quantum circuits where we care about the expectation values of operators. Since it returns expectation values, the circuit does not need to contain measurements, because the estimator adds the measurements needed to compute them. With the release of Qiskit 1.0 we introduced a new version of the runtime primitives, version 2, and those are what I'll be discussing in this lecture.

The main change in the version 2 primitives over version 1 was to allow more efficient specification and execution of parametric programs. In particular, the implementation of these primitives for our hardware in Qiskit IBM Runtime was introduced in version 0.21, and it is designed to allow built-in support for the error suppression and error mitigation techniques used for utility-scale workloads. I'll start with the sampler primitive. The Sampler V2 primitive, as I already mentioned, is intended for low-level workflows: it is designed for the execution of circuits and returning the measurement outcomes from those circuits.
The API for the new sampler primitive is shown here. It has a run method, and any implementation of a sampler must implement a run method like this. It accepts quantum payloads in a form known as a primitive unified bloc, or pub, and optionally you can also specify the number of shots that you would like to run each of these pubs for. The input can be one or more pubs to run, and they will all be run together in a single job. The shots argument is optional, and if you don't specify it, the shots will be chosen by the sampler primitive itself.

Primitive unified blocs were newly introduced with version 2, and they are designed to handle parametric quantum circuits. A primitive unified bloc is essentially a program structure for a sampler. It can be thought of as a tuple of three objects: a quantum circuit, which specifies the circuit we want to execute and contains measurements; an optional set of parameter values so that, if the circuit is parameterized, it will be evaluated for each different set of input parameters; and an optional shots value, which allows different shot counts for different circuits if you're running multiple pubs. The restrictions on these arguments in the sampler are that the circuit has to be what's called an ISA quantum circuit, and it needs to contain at least one classical register and measure instructions on that register, so it has some output to return. The parameter values you specify in a pub are essentially an array of float values to assign to all the parameters in the circuit, and in general this can be a tensor-shaped array if we want to run many different sets of parameters on our circuit for a given workload. The optional shots argument allows us to specify, within the pub itself, the number of samples we want to draw from the sampler, which is essentially the number of times we execute each set of parameters on the given circuit.

Now, when you're running with these, there are various ways you can represent a pub, and we call these pub-like inputs, because some of these values are optional and you don't always have to include them. I'll show the various ways you can specify a pub for running on the new sampler primitive. The first is the full tuple described above, where you pass in a parametric quantum circuit, the parameter values, and the shots you would like to run with. You don't have to include shots in this specification, and if you don't, they can either be provided in the run method or chosen by the primitive itself; in that case a valid sampler pub is just a parametric circuit and the parameter values. We also want to allow executing non-parametric circuits through this interface, and in this case a valid pub is just a non-parametric circuit: since it's non-parametric there are no parameter values to run it with, and again we can either specify the shots or leave them up to the sampler. This last case of a single non-parametric circuit is similar to the previous version 1 sampler, or to just using the original backend.run.
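To make these forms concrete, here is a minimal sketch of pub-like inputs using the local StatevectorSampler reference implementation from Qiskit, so it runs without hardware; the circuits and parameter values are invented for illustration, and on real hardware the circuits would first need to be transpiled to ISA form as described later.

```python
import numpy as np
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import StatevectorSampler  # local reference V2 sampler, for illustration

theta = Parameter("theta")
param_circ = QuantumCircuit(2)
param_circ.ry(theta, 0)
param_circ.cx(0, 1)
param_circ.measure_all()

static_circ = QuantumCircuit(2)
static_circ.h(0)
static_circ.cx(0, 1)
static_circ.measure_all()

values = np.linspace(0, np.pi, 10).reshape(10, 1)  # 10 sets of the single parameter

pubs = [
    (param_circ, values, 1024),   # full tuple: circuit, parameter values, shots
    (param_circ, values),         # shots left to run() or the sampler default
    (static_circ, None, 2048),    # non-parametric circuit with explicit shots
    static_circ,                  # bare circuit, similar to the old V1 sampler / backend.run
]

job = StatevectorSampler().run(pubs)
print(job.result())
```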
Now I'll go through a quick example of what this might look like when using the sampler. For this example we'll consider a simple two-qubit Bell state, where we prepare an entangled state on two qubits and perform a measurement. The first thing I do is load my IBM Runtime service, loading my account with a saved API token, and select the backend I'd like to run on; for this example I'm running on IBM Auckland. The next thing I do is use Qiskit to create a quantum circuit, in this case a two-qubit circuit with a Hadamard gate and a controlled-NOT gate, and then measure both qubits. Because samplers require ISA circuits, which I'll cover a bit more in a few slides, I have to transpile this circuit to my intended backend to make it an ISA circuit, and then I can construct a simple pub for this non-parametric case. To run this on the sampler, I import the sampler from Qiskit IBM Runtime, initialize it for my chosen backend, and then run this pub; in this case I run it for 10 shots, using the sampler run method to specify the shots, and from the job I get back I can extract the measurement result and look at the outcomes.
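A sketch of that whole workflow might look like the following; it assumes a saved IBM Quantum account and uses the ibm_auckland backend name from this example, so substitute a backend you have access to.

```python
from qiskit import QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

# Load the runtime service from a saved account and pick a backend
service = QiskitRuntimeService()
backend = service.backend("ibm_auckland")  # assumed backend name from the example

# Two-qubit Bell circuit with measurements
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Transpile to an ISA circuit for the chosen backend
pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_circuit = pm.run(qc)

# Run a single non-parametric pub for 10 shots
sampler = Sampler(backend)
job = sampler.run([isa_circuit], shots=10)
result = job.result()
print(result[0].data.meas.get_counts())
```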
A job run on a Sampler V2 will contain a PrimitiveResult that has an ordered list of the pub results for each of the input pubs that we ran. If you run a single pub, this list will be a length-1 list containing a single pub result; if we ran 10 pubs in the run call, it would be a length-10 list with 10 pub results. A pub result is a container class that holds all the data from the measurement outcomes as well as any optional metadata that may have been added by the primitive, and these can be accessed through two attributes: the data attribute, which returns the data structure containing the measurement results, and the metadata attribute, which returns a dictionary of any optional metadata added by the primitive.

The data container is a special class called a DataBin that was introduced in Qiskit 1.0. The DataBin stores the measurement outcomes from the sampler: if we run a circuit with one or more classical registers, the DataBin will contain fields with the same names as the classical registers, and their values will be the measurement outcomes for those registers. When working with a DataBin you can access these either as attributes, using the register name, or as a mapping, using the register name as a key. Each of these measurement outcomes is itself a special data structure called a BitArray.

Now I'll go over bit arrays and how you can work with them. When you access the measurement data in one of these pub results, you get a BitArray for each individual classical register in your circuit, and the BitArray stores the single-shot measurement outcomes for the number of shots we ran for that circuit. That might look like this: I get the measurement data and extract the bits; in this case it was a single classical register named meas, and it has a variety of attributes I can use to access information about the measurement outcomes. You can think of a BitArray as an ndarray-like object, like a NumPy array, with the following attributes. It has a shape, given by the shape of the input pub's parameter values: if we ran 10 sets of parameters it would have shape 10, while a non-parametric pub like this one has a trivial shape. It records the number of bits of the register we measured into, in this case a two-bit register, and it records the number of shots for the execution we ran, in this case 10 shots. There is also an array that stores the measurement outcomes of all the bits. To get a little into the details, the internal structure of this array, for efficiency, stores the measurement outcomes as a packed array of bytes, or 8-bit unsigned integers. So for my example of a Bell state, where the two measurement outcomes in the ideal case are both 00 or 11, these are stored in the byte array as the values 0 and 3. If you're used to working with the previous counts format of backends, there are helper methods on the BitArray class to convert to those string formats: the get_bitstrings method can be called to convert these bytes into bit strings for each of the shots, and there is also a get_counts method to return the outcomes in a counts dictionary format, as shown.
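Continuing the Bell example, pulling the data out of the pub result might look roughly like this; the register name meas comes from using measure_all, and the values in the comments are only illustrative.

```python
# Inspect the first (and only) pub result from the Bell-state job
pub_result = result[0]

bits = pub_result.data.meas          # BitArray for the classical register named "meas"
print(pub_result.metadata)           # any optional metadata added by the primitive

print(bits.shape)                    # pub shape (trivial here, since the circuit had no parameters)
print(bits.num_bits)                 # 2 bits in the register
print(bits.num_shots)                # 10 shots
print(bits.array)                    # packed uint8 array of raw outcomes (0b00 -> 0, 0b11 -> 3)

print(bits.get_bitstrings())         # per-shot bit strings, e.g. ["00", "11", ...]
print(bits.get_counts())             # counts dictionary, e.g. {"00": 6, "11": 4}
```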
As previously mentioned, when you run on a sampler the circuits have to be what's known as ISA circuits, so now I'm going to go over what an ISA circuit actually is. An ISA circuit is a quantum circuit object in Qiskit that satisfies the following properties: it has the same number of qubits as the device it's going to be run on; it only contains gate instructions that are supported by that device; and, along with this, it satisfies the connectivity of the device, so if a device supports controlled-NOT gates, the circuit can only contain controlled-NOT gates on pairs of qubits that are actually coupled on that device. If you're used to working with abstract quantum circuits, converting an abstract or logical quantum circuit to an ISA circuit in Qiskit is done via transpilation of the circuit to the intended target or backend. For example, if I took the two-qubit Bell state shown here, with two qubits and two classical bits, and transpiled it to the IBM Auckland backend I used for my example, which is a 27-qubit backend, the output might look something like this: the circuit has been transpiled to the native basis gates for this device, which are the square-root-X gate, the RZ rotation, and the CNOT gate, and it has been expanded with ancilla qubits to span all 27 qubits of the device.
Now, as I mentioned, the new V2 primitives are heavily optimized to work with parametric workloads, so I'll quickly recap what a parametric circuit is. A parametric circuit, whether an ISA circuit or an abstract logical circuit, is a quantum circuit that contains unbound parameters that we may wish to evaluate for a variety of different values. To take our previous example and make it parameterized, I could control the initial single-qubit rotation via a parameter, which would also control the amount of entanglement that is generated: I've modified it in this example to use an RY rotation by an angle theta rather than a Hadamard gate.

When running a parametric circuit, the shape of the parameter values controls the shape of the input pub and also the shape of the pub results. Essentially, if you think of the parameter values as an array or tensor, the shape of this array controls the shape of both the pub and its results. If I had a non-parametric circuit, this would be equivalent to a trivial shape, which is specified as the empty tuple, the same as in NumPy. If I ran an ISA circuit with k parameters and I wanted to run n different sets of these k parameters, the input array I pass in my pub would have shape (n, k), but the shape of the pub, and the shape of the pub result, is actually going to be one-dimensional, with shape (n,). You can always think of a parametric pub as being converted into a list of non-parametric pubs by binding all the parameter values, and this is how things were run in the previous version of the primitives.
To go back to our previous example and show how this is run on a primitive using a parametric pub, I'll take my two-qubit circuit with a theta angle specifying the degree of rotation used to generate entanglement, and I can generate the set of parameter values I want to evaluate for theta. Using NumPy, I make a linearly spaced set of 20 parameter values from 0 to pi. Again, I transpile my circuit to make an ISA circuit for the backend I'm intending to run on, and then I can produce my pub, which now contains the circuit and the array of parameter values. This pub will have shape 20, because there are 20 different values of the single parameter I wish to run. Running this for 1,000 shots and getting the results, I get back a bit array for my measurement register that now has shape 20 and 1,000 shots: for each of the 20 different theta values, I get back a thousand shots of measurement outcomes. Using that, I can plot the results versus the parameter theta. Using Matplotlib here, I take my bit array and compute the probability of the state being 00, and the probability of the state being 11, by counting the values of the bit array that are equal to 0 or 3 and normalizing by the number of shots. Plotting that, I see something like this: starting from theta equal to zero my state is entirely 00; as theta approaches pi over 2 I reach a state of maximal entanglement, where the state is 50% 00 and 50% 11; and at an angle of theta equal to pi the state is almost entirely 11.
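Putting the parametric sweep together, a sketch of the full example might look like this, again reusing the backend object from before; the reshape to (20, 1) gives 20 bindings of the single parameter, so the pub has shape (20,).

```python
import numpy as np
import matplotlib.pyplot as plt
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import SamplerV2 as Sampler

theta = Parameter("theta")
qc = QuantumCircuit(2)
qc.ry(theta, 0)          # rotation angle controls the degree of entanglement
qc.cx(0, 1)
qc.measure_all()

params = np.linspace(0, np.pi, 20).reshape(20, 1)    # 20 sets of the single parameter

pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_qc = pm.run(qc)

job = Sampler(backend).run([(isa_qc, params)], shots=1000)
bits = job.result()[0].data.meas                     # BitArray with shape (20,) and 1000 shots

outcomes = bits.array[..., 0]                        # packed bytes, shape (20, 1000)
p00 = (outcomes == 0).sum(axis=-1) / bits.num_shots  # probability of "00" for each theta
p11 = (outcomes == 3).sum(axis=-1) / bits.num_shots  # probability of "11" for each theta

plt.plot(params.ravel(), p00, label="P(00)")
plt.plot(params.ravel(), p11, label="P(11)")
plt.xlabel("theta")
plt.legend()
plt.show()
```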
As I mentioned, there are various ways to specify shots when running on a sampler, so I'm going to go over them now. You can specify shots either in the pub itself, in the run call, or leave them out entirely and leave it up to the sampler, and depending on how you do that, there is a heuristic for how the shot values are resolved at execution time. If shots is specified in the pub, this takes precedence and is used. For example, if I take two pubs here, both with the same circuit, one that I want to run for 100 shots and one for 200 shots, and I call the sampler with 1,000 shots, you can see in the outcome that one pub result has only 100 shots and the other has 200 shots; essentially the shots argument of the run call has been ignored, because both pubs specified the shots themselves. In the second case, the shots argument of the run call is used for any of the input pubs that don't specify shots. To modify the example, I remove the shot value from the first pub, so it no longer specifies the shots it wants, while the second one still specifies 200 shots; running now, I can see that the first pub was run with 1,000 shots while the second one was run with 200 shots. The final case is when shots isn't specified in the run call and isn't specified in the pub; then the sampler has to have a default value for the number of shots, and it's up to the sampler to decide what this will be, depending on its implementation. To show this, I take the second example again, now removing the shots keyword argument from the run call, and I can see that the first pub, which didn't specify shots, was run for 4,096 samples while the second one was run for 200. This was run on the IBM Runtime sampler, and that's because the sampler in the IBM Runtime chooses a default value of 4,096 shots.
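A sketch of these three cases, reusing the sampler and ISA Bell circuit from earlier, might look like this; the commented shot counts are what the resolution heuristic described above would produce.

```python
# Two pubs over the same ISA Bell circuit, illustrating how shot values are resolved
pub_100 = (isa_circuit, None, 100)    # shots fixed in the pub
pub_200 = (isa_circuit, None, 200)    # shots fixed in the pub

# Pub shots take precedence: the shots=1000 in run() is ignored for both pubs
job = sampler.run([pub_100, pub_200], shots=1000)
print([r.data.meas.num_shots for r in job.result()])   # [100, 200]

# A pub without shots falls back to the run() value; here the first pub gets 1000 shots
job = sampler.run([isa_circuit, pub_200], shots=1000)
print([r.data.meas.num_shots for r in job.result()])   # [1000, 200]

# With no shots anywhere, the sampler's default is used (4096 for the IBM Runtime sampler)
job = sampler.run([isa_circuit, pub_200])
print([r.data.meas.num_shots for r in job.result()])   # [4096, 200]
```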
OK, now that we've given a quick overview of the new V2 sampler interface and its inputs and outputs, I'll move on to the estimator. Just to recap, the estimator primitive is a higher-level primitive than the sampler, intended for computing expectation values on the final state of a circuit. The estimator API looks similar to the sampler API in that it accepts a list of pubs as input; however, these are estimator pubs, not sampler pubs, and instead of shots, the run argument for controlling the execution is a precision argument. So, like the sampler, it accepts one or more pubs as its input program, and the optional precision argument expresses the desired precision to which we want to evaluate the expectation values.

The primitive unified blocs, or pubs, for the estimator are a little different from the sampler pubs, because now we need to give a specification of the observables whose expectation values we want to compute. In this case the pub is a tuple with four elements: the first one, again, is an ISA circuit, like with the sampler; the second is now a list of observables we wish to evaluate for the pub; and then we have the optional parameter values and the optional precision. As with the sampler, because some of these values are optional, there is a variety of different pub-like inputs that are valid to give to the estimator. The first, where everything is specified, is a parametric ISA circuit, an array of observables to evaluate (which must be ISA observables for that circuit), an array of parameter values, and a target precision. I could drop the precision and pass a tuple of three elements: just the circuit, the observables, and the parameter values. I can also run non-parametric circuits, where I don't need to specify parameter values but still need to specify the non-parametric circuit, the observables, and the precision, or just the non-parametric circuit and the observables, leaving the precision at its default value.
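To illustrate these estimator pub forms, here is a minimal sketch using the local StatevectorEstimator reference implementation so it runs without hardware; on the IBM primitives the circuit and observables would additionally need to be in ISA form.

```python
import numpy as np
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import StatevectorEstimator  # local reference V2 estimator, for illustration
from qiskit.quantum_info import SparsePauliOp

theta = Parameter("theta")
circ = QuantumCircuit(2)
circ.ry(theta, 0)
circ.cx(0, 1)                      # note: no measurements in estimator circuits

obs = SparsePauliOp("ZZ")
values = np.linspace(0, np.pi, 5).reshape(5, 1)
bound = circ.assign_parameters([0.5])              # a non-parametric version of the circuit

pubs = [
    (circ, obs, values, 0.01),     # circuit, observables, parameter values, target precision
    (circ, obs, values),           # precision resolved by run() or the estimator default
    (bound, obs, None, 0.01),      # non-parametric circuit with explicit precision
    (bound, obs),                  # non-parametric circuit, default precision
]

result = StatevectorEstimator().run(pubs, precision=0.02).result()
print([pub_res.data.evs for pub_res in result])
```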
Now, when working with the estimator, we need to talk about how we represent the observables we wish to measure on circuits. First, to go over what an observable is: it is a Hermitian operator that is represented in Qiskit as a linear combination of Pauli operators. A single Pauli operator is an observable, and you can represent it either as a string containing I, X, Y, or Z terms to represent an n-qubit Pauli, or using the built-in operator objects in the Qiskit quantum_info module, such as the Pauli object. For representing an array of observables we can use a list of Pauli operators, either by passing in a Python list of Paulis or by using one of the quantum_info objects such as a PauliList, which represents a list of Paulis. A more complicated observable, such as a Hamiltonian or another Hermitian operator, can be represented using the SparsePauliOp, which allows you to represent an operator as a linear combination of Paulis with coefficients; for it to be a Hermitian operator, the coefficients must be real.

To give an example of this, I'm going to go back to my Bell circuit example and create an ISA circuit for the estimator. Note that here, unlike with the sampler, I remove the measurement from the circuit, because the measurement is going to be done by the estimator itself. So I create a Bell circuit without measurement, just with a Hadamard gate and a controlled-NOT gate, and then I transpile it for my backend. Now, if I wish to evaluate a ZZ observable, I can create it with a SparsePauliOp, and to convert it into an ISA observable for my backend, which is a 27-qubit observable, I can use the apply_layout method of the SparsePauliOp to map it to the same qubits that the transpiler mapped the circuit to, using the layout of the transpiled circuit.
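A sketch of that estimator workflow on hardware, reusing the backend object from the earlier examples, might look like this.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import EstimatorV2 as Estimator

# Bell circuit without measurements: the estimator adds the measurements it needs
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_bell = pm.run(bell)

# Map the 2-qubit ZZ observable onto the physical qubits chosen by the transpiler
obs = SparsePauliOp("ZZ")
isa_obs = obs.apply_layout(isa_bell.layout)

estimator = Estimator(backend)
result = estimator.run([(isa_bell, isa_obs)]).result()
print(result[0].data.evs, result[0].data.stds)   # expectation value and standard error
```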
The result when running on an estimator will also be a PrimitiveResult containing a list of pub results, just like the sampler. The difference now is that the data bin returned by an estimator doesn't contain measurement outcomes from classical registers; instead it contains two fields that return the expectation values: an evs field, which returns the mean estimated expectation values for all the requested circuits and parameter values, and a stds field, which returns the standard error of the mean of the expectation value estimates. For my example, if I take an estimator on my IBM Auckland backend, generate a pub with the ISA circuit and ISA observable, and run it, then, because this was a non-parametric circuit with a single observable, I get back a single float value for my expectation value and standard error, as shown here.

Like the sampler, the estimator pub and its results both have a shape; however, this doesn't just depend on the parameter values as it did in the sampler case. For an estimator, the shape depends on both the parameter values and the observables. First I'll go over trivial-shape pubs. As with the sampler, these have a shape given by the empty tuple, and in these cases the returned expectation value will be a float. Examples that have this trivial shape are a non-parametric pub evaluated with a single observable, so no parameter values, but you can also have a parametric pub with a single set of parameter values and a single observable, which will also return a single float.
In the general case, where both the observables and the parameter values can be tensors, or N-dimensional arrays, the shape will depend on what's called the broadcasted shape of these two tensors. Broadcasting is a concept from NumPy for combining two N-dimensional array objects into a larger array, where the dimensions of the final array depend on the dimensions of the two components being broadcast, and the dimensions of these arrays have to satisfy certain rules to be compatible for broadcasting. The two arrays do not need to have the same number of dimensions, but their dimensions do have to satisfy certain properties. If they don't have the same number of dimensions, the resulting broadcasted array will have the same number of dimensions as the larger of the input arrays, and each of its dimensions is the largest size of the corresponding dimension of the two arrays being broadcast; any missing dimensions are essentially padded and assumed to have size one. This padding starts with the rightmost dimension and works its way left, so if I had an array with three dimensions and an array with two, the array with two dimensions would have a one added on the left to make it a three-dimensional array. Once the two arrays have been padded to the same number of dimensions, they are compatible for broadcasting if, for each axis, either the two sizes are equal or one of the arrays has size one on that axis. This is much easier to show with an example. If I take two arrays, say a matrix or 2D array a with shape (5, 4) and a 1D or vector array b with shape (1,), the result would be a 2D matrix with shape (5, 4), because the array b would be padded with an extra dimension of one on the left, and then, because both of its dimensions are one, the shape comes from a. To take a bigger example, if I had a 3D array a with shape (15, 3, 5) and b with shape (15, 1, 5), again the result would have shape (15, 3, 5).
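Since the estimator uses the standard NumPy rules, you can check the broadcasted shapes directly with NumPy, as in this short sketch.

```python
import numpy as np

a = np.zeros((5, 4))
b = np.zeros((1,))
print(np.broadcast_shapes(a.shape, b.shape))   # (5, 4): b is padded on the left and stretched

a = np.zeros((15, 3, 5))
b = np.zeros((15, 1, 5))
print(np.broadcast_shapes(a.shape, b.shape))   # (15, 3, 5): the size-1 axis of b is stretched

# An outer-product style broadcast, like the estimator examples that follow
params_shape = (1, 6)       # 6 parameter value sets with a padded leading axis
obs_shape = (4, 1)          # 4 observables with a padded trailing axis
print(np.broadcast_shapes(params_shape, obs_shape))   # (4, 6)
```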
Now, to look at this with pictures, think about broadcasting the parameter value sets against the observables array, and what the resulting shape of the expectation values returned by the estimator would be. The trivial case I mentioned is just one set of parameter values, so this has the shape of the empty tuple, and a single observable in the observables array, which is also an empty tuple; that gives me a single float, a trivial-shape expectation value estimate. If I instead consider broadcasting against a single list of observables, I could again have either an unparameterized circuit or a parametric circuit with a single set of parameter values, and now a list of observables with shape 5; I would get back a 1D array of shape 5 of final expectation values, where the circuit is evaluated with that one set of parameters to estimate the expectation value of each of the five observables. Flipping this around, I could have five different sets of parameter values, each of which I want to run with a single observable, and that would be a shape-5 result where I estimate the same observable for each of the different sets of parameter values.

Moving on to more advanced cases, I can use this broadcasting to compute both inner and outer products of observables and parameters. For example, if both my parameter value sets and my observables were lists of shape 5, the resulting expectation values would also have shape 5, where the first set of parameters is evaluated with the first observable, the second set of parameters with the second observable, and so on. By padding dimensions I can also do an outer product: if I add a dimension of one to the start of the parameter value sets and a dimension of one to the end of the observables, as shown here, I can produce a matrix of outputs, in this case six different parameter sets and four different observables, where I evaluate each of the six parameter value sets with each of the four observables to return a 4-by-6 array of expectation values. And of course this generalizes to higher dimensions using the NumPy broadcasting rules, as shown here.
This is maybe easier to see with an example, so I'll go back to the Bell example and consider the parametric Bell case, where my parameter theta controls the degree of entanglement, transpiled to an ISA circuit. Again I'll use a set of parameters with 20 different values of theta from 0 to pi, but for each of these values I'll now evaluate three different observables: XX on both qubits, YY on both qubits, and ZZ on both qubits. To do this, because I have a shape-20 set of parameter values, I'm going to reshape my ISA observables into shape (3, 1), so that the broadcasted shape is a (3, 20) array. When I run this, I get back my shaped pub results, and for each of the observables I requested I now get all the different parameter-valued expectation values and can plot them as shown here, seeing XX, YY, and ZZ for each parameter value.
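A sketch of this three-observable sweep, reusing the backend object from before, might look like the following; the nested list gives the observables shape (3, 1), so broadcasting against the shape-(20,) parameter sets yields expectation values of shape (3, 20).

```python
import numpy as np
import matplotlib.pyplot as plt
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import EstimatorV2 as Estimator

theta = Parameter("theta")
qc = QuantumCircuit(2)
qc.ry(theta, 0)
qc.cx(0, 1)                         # no measurements: the estimator handles those

pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_qc = pm.run(qc)

params = np.linspace(0, np.pi, 20).reshape(20, 1)                                # shape (20,)
labels = ["XX", "YY", "ZZ"]
observables = [[SparsePauliOp(l).apply_layout(isa_qc.layout)] for l in labels]   # shape (3, 1)

# Broadcasting (3, 1) observables against (20,) parameter sets gives evs of shape (3, 20)
result = Estimator(backend).run([(isa_qc, observables, params)]).result()
evs = result[0].data.evs

for label, values in zip(labels, evs):
    plt.plot(params.ravel(), values, label=label)
plt.xlabel("theta")
plt.legend()
plt.show()
```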
In these examples I used what I called an ISA observable when running with the estimator, so now I'll take a moment to give a few more details on what it means for an observable to be an ISA observable. Recall that a Hermitian observable for the estimator can be thought of as an operator that is a linear combination of Paulis with real coefficients. For this operator to be an ISA observable, each Pauli has to be defined on the same number of qubits as the ISA circuit: if it is a 27-qubit ISA circuit, each Pauli has to be defined on 27 qubits. So if you're used to working with abstract circuits and Paulis defined on them, such as a two-qubit Pauli, then to map them to ISA observables you need to apply the layout of the transpiled circuit to the observables, so that they are mapped to the same qubits of the transpiled circuit that your original abstract circuit was. To make this easier, several of the operator classes in the Qiskit quantum_info module include an apply_layout method that can do this for you, including the SparsePauliOp and the Pauli operator. I already used this in the previous examples, but to go over it again, let's consider a simple case of a one-qubit abstract circuit which I transpile to a 27-qubit backend, so the transpiled circuit might look like this. In this case you can see that the transpiler mapped the qubit of my abstract circuit to the fourth qubit of the physical backend's qubits. If I then make a one-qubit abstract observable given by the linear combination X + Y + Z, I can apply my layout to this observable to get an ISA observable, and you can see that it has been expanded to a 27-qubit SparsePauliOp with identities on all the other qubits except the fourth qubit, where my qubit was mapped to.
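A small sketch of that one-qubit example, reusing the backend object from before, might look like this; which physical qubit the observable lands on depends on the layout the transpiler actually chooses.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

# One-qubit abstract circuit transpiled to the full device
qc = QuantumCircuit(1)
qc.h(0)
pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_qc = pm.run(qc)

# Abstract one-qubit observable X + Y + Z
obs = SparsePauliOp(["X", "Y", "Z"], coeffs=[1, 1, 1])

# Expand to the device width and move it to the physical qubit chosen by the transpiler
isa_obs = obs.apply_layout(isa_qc.layout)
print(isa_obs.num_qubits)   # matches the backend size, e.g. 27
print(isa_obs)              # identities everywhere except the mapped qubit
```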
Now, when working with estimator inputs in pubs, you can work with raw NumPy arrays and lists of observables as I've described in my examples, but there are also some helper container classes included in Qiskit that you can use to make this a little easier. Two of these container classes are the ObservablesArray and the BindingsArray. The ObservablesArray is a NumPy-array-like structure that can represent an N-dimensional tensor array of SparsePauliOp observables, and the BindingsArray is a NumPy-array-like object for representing an N-dimensionally shaped array of sets of parameter values. Because these are NumPy-like objects, they have convenience methods for things like reshaping. I'll just make the disclaimer that these classes are currently experimental, so their API is not guaranteed to be stable in future versions of Qiskit. To give an example of what this looks like, I import the ObservablesArray class from the primitives module, and I can create an ObservablesArray that I want to be a shape (3, 1) list of Paulis: I initialize it with a list of Paulis and use its reshape method to reshape it to (3, 1). It also has an apply_layout method, so I can apply the layout to all the Paulis in this array with one function call, using the circuit layout as shown here. Similarly, I can import the BindingsArray to work with parameters. One thing to keep in mind when working with the BindingsArray is that you need to know the names of the parameters, because it is a dictionary-like object that maps the names of parameters to arrays of values.
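A sketch of these helper containers might look like the following; since the classes are experimental, the import path and method names here reflect the Qiskit 1.0 containers module and may change, and the isa_qc circuit (with its parameter named theta) is reused from the earlier parametric estimator example.

```python
import numpy as np
from qiskit.primitives.containers import BindingsArray, ObservablesArray
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import EstimatorV2 as Estimator

# A (3, 1)-shaped observables array, with the layout applied to every entry in one call
obs_array = ObservablesArray([SparsePauliOp("XX"), SparsePauliOp("YY"), SparsePauliOp("ZZ")])
obs_array = obs_array.reshape((3, 1))
isa_obs_array = obs_array.apply_layout(isa_qc.layout)
print(isa_obs_array.shape)          # (3, 1)

# A bindings array mapping the parameter name to an array of values
bindings = BindingsArray({"theta": np.linspace(0, np.pi, 20)})
print(bindings.shape)               # (20,)

result = Estimator(backend).run([(isa_qc, isa_obs_array, bindings)]).result()
print(result[0].data.evs.shape)     # (3, 20)
```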
Next, I'll talk a little about how observables are measured by the estimator. When you go to evaluate an estimator pub, possibly with a whole shape of different observables you wish to measure, the IBM Runtime primitives apply a grouping strategy to collect the different observables that can be measured together, to minimize the number of actual physical circuits with measurements that need to be executed. This is done by collecting all the Pauli terms that need to be measured across all the operators and grouping the Paulis that can be computed from marginals of a single measurement. The way grouping is done in the runtime is equivalent to the PauliList quantum_info class's group_commuting method with qubit_wise=True, and this essentially collects Paulis on different tensor products of qubits that can be measured as a single tensor product. To take an example, say I want to measure nine different single-qubit Paulis, X, Y, and Z on each of three different qubits, as in the list shown here. If I group these, they will be collected into three different sets of Pauli lists that can be measured together. The way the grouping worked here is that it collected Z on qubit 0, X on qubit 1, and X on qubit 2, which can all be measured at the same time, and similarly for the other groups it collected different combinations of the single-qubit Paulis into groups that can be measured simultaneously.

Another thing you can take advantage of with the estimator, when computing a Hamiltonian or a more complicated observable represented by a linear sum of Paulis, is that you can also include the individual Pauli terms to compute the component expectation values when running your pub, and, because of the grouping, this doesn't actually cost you anything extra in terms of quantum computation. What this could look like: if I take an example of a Hamiltonian on three qubits, where I want to measure, say, the average of the single-qubit Z terms on each of the qubits, represented here as a SparsePauliOp, I could construct my observables array to include the Hamiltonian I want to measure as well as each of the individual Paulis, which I can obtain from the SparsePauliOp via its paulis attribute. This will be an observables array with four observables in it, given by the sum of the individual Paulis as well as each of the individual Paulis, and when this is run on the estimator it can still all be measured with a single measurement circuit.
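Here is a small sketch of both ideas, using the local StatevectorEstimator so it runs without hardware; the circuit and Hamiltonian are invented for illustration, and on the IBM Runtime estimator the same qubit-wise grouping happens on the server side.

```python
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorEstimator
from qiskit.quantum_info import PauliList, SparsePauliOp

# Qubit-wise commuting grouping, equivalent to what the runtime estimator does internally
paulis = PauliList(["IIZ", "IIX", "IIY", "IZI", "IXI", "IYI", "ZII", "XII", "YII"])
for group in paulis.group_commuting(qubit_wise=True):
    print(group)    # three groups, each measurable from a single measurement setting

# A 3-qubit Hamiltonian plus its individual Pauli terms as extra observables
ham = SparsePauliOp(["IIZ", "IZI", "ZII"], coeffs=[1 / 3, 1 / 3, 1 / 3])
observables = [ham, *ham.paulis]     # 4 observables that still share one measurement basis

circ = QuantumCircuit(3)
circ.h(0)
circ.cx(0, 1)
circ.cx(1, 2)

result = StatevectorEstimator().run([(circ, observables)]).result()
print(result[0].data.evs)            # [<H>, <Z0>, <Z1>, <Z2>]
```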
Now that I've given an overview of the new API of the version 2 primitives, I'm going to talk about some specific details of IBM's implementation of these primitives in Qiskit IBM Runtime. The API I've described above allows people to create portable programs that could run on any implementation of the V2 estimator and sampler primitives. When you're running on the IBM primitives, there is some additional functionality that can be enabled and configured by the user through options, and in particular these can be used to enable error suppression and error mitigation techniques. Both the sampler and the estimator in the IBM Runtime support a variety of options, and I'm going to mention just a few of them here. In the case of the V2 sampler, there's an option to control the default number of shots when running sampler pubs; other options that might be of interest are the options to enable and configure dynamical decoupling, if you want to apply dynamical decoupling error suppression when running your circuits, and the sub-options to control twirling, which allow you to enable and configure automatic Pauli twirling when running your workloads. The estimator has a default precision option that can be used to set the default precision when running pubs; it also has a default shots option, which can be used as a proxy for setting the default precision if you're more comfortable working with shots. There's also a resilience level option that enables preset defaults for a variety of options related to error suppression and mitigation, allowing you to activate certain settings with a single option, which I'll describe more later, and the estimator supports both dynamical decoupling and twirling, like the sampler. Related to the resilience level, there are also resilience sub-options that allow you to fine-tune or tweak the different kinds of error suppression and mitigation through their own options.
Now I'll take a little time to talk about the twirling options, which were added to the V2 sampler and V2 estimator. Without going into too much detail on the theory of Pauli twirling, you can think of it as a method of engineering the noise in a quantum device. When you perform gates in a quantum circuit they may have noise on them, and a layer of gates might have some complicated noise process that includes incoherent errors, coherent errors, and other sorts of errors. Applying Pauli twirling is a method of mapping this to what's called a Pauli error channel, which is an incoherent mixture of Pauli errors. The way this works in the primitives is that it requires some transpilation of the circuits you wish to run, to identify the layers of two-qubit gates that will be twirled, or the measurements that can be twirled; it then reparameterizes your circuit to add parameters that can be used to inject Paulis into the circuit to implement the twirling; and it then randomly samples Paulis, in a special way, to insert into the circuit so that they don't modify the output of the circuit but do change the form of the noise, which, when you average over many different samples of these Paulis, acts to implement the Pauli twirling and average the noise channels to a Pauli channel. When you do this with an estimator, it returns the expectation values averaged over these different samples, and in the case of the sampler it returns the single-shot measurement outcomes concatenated across the different samples, so the shape of the outputs you get, whether expectation values or bit arrays, doesn't change whether you enable or disable twirling.

Twirling can be controlled by a number of options in the estimator and the sampler, and I'll go over a few of them now. First, there's an option for whether to apply Pauli twirling at all: by setting enable_gates to True you activate Pauli twirling of the two-qubit gates in your circuit. There's also a separate enable_measure option, and if you enable it, Pauli twirling will be done on all the measurements in your circuit that can be Pauli twirled, meaning measurements that are not used in any control flow or conditional operations in your circuit. There are also options to control the way sampling is done: by default, Pauli twirling will choose some number of randomizations and divide the shots you wish to run amongst them, but you can customize this by explicitly setting an option to control the number of randomizations when twirling, and another to control the number of shots per randomization. Finally, there's a strategy option which allows you to specify how qubits are twirled in the gate layers when the twirling transpilation identifies the twirled two-qubit gates.

As for what this might look like, there are two ways you can specify options when working with the primitives: once you've initialized a sampler, for example, you can set options via attributes, as shown here, or you can set the options when you initialize the sampler itself by specifying them as a dictionary and passing it in at initialization, as shown here. As I previously mentioned, if you do not set the number of randomizations yourself, default values will be chosen for you when you enable twirling, and by default the number of randomizations chosen will be 32, so if you run some number of shots on your sampler they'll be divided amongst 32 different randomizations. If you do set these values yourself, then the product of the number of randomizations and the shots per randomization has to be greater than or equal to the number of shots you request for all the pubs run in that run call of the sampler.
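A sketch of both ways of setting the twirling options, reusing the backend object from earlier, might look like this; the option names follow the qiskit-ibm-runtime options model described in this lecture.

```python
from qiskit_ibm_runtime import SamplerV2 as Sampler

# Set twirling options via attributes after construction
sampler = Sampler(backend)
sampler.options.twirling.enable_gates = True          # twirl two-qubit gate layers
sampler.options.twirling.enable_measure = True        # twirl twirlable measurements
sampler.options.twirling.num_randomizations = 32
sampler.options.twirling.shots_per_randomization = 100
sampler.options.twirling.strategy = "active-accum"    # "active", "active-accum", "active-circuit", or "all"

# Equivalently, pass the options as a dictionary at initialization
sampler = Sampler(
    backend,
    options={
        "twirling": {
            "enable_gates": True,
            "enable_measure": True,
            "num_randomizations": 32,
            "shots_per_randomization": 100,
            "strategy": "active-accum",
        }
    },
)
```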
The twirling strategy option is something new, so I'll go over the values it supports. When you have a circuit that you wish to twirl, there is a variety of ways you could do this, and that is what the strategy specifies. There are four different strategies supported by the sampler and estimator in their current form. The first is the "active" strategy: in this case, when identifying layers of two-qubit gates to be twirled, only the qubits that are active in the gates of each layer will be twirled. If I take an example like the one shown here, where I have three different layers each with one CNOT gate, only the two qubits in each of those layers will be twirled: in the first layer that's qubits 0 and 1, in the second layer it's qubits 1 and 2, and in the third layer it's qubits 2 and 3. The next value you can specify for the strategy is "active-accum", and in this case it will twirl the accumulated gate qubits up to and including the current layer. Going back to the example, in the first layer only qubits 0 and 1 have been used in the circuit up to that point, so they will be twirled; but when I get to the second layer it will twirl qubits 0, 1, and 2, because qubit 0 had already been used in the circuit and qubits 1 and 2 are used in this layer; and similarly for the third layer, qubits 0, 1, 2, and 3 will be twirled. The next value is "active-circuit", and in this case each twirled layer will twirl the union of all qubits that are used in the entire circuit. In this example qubits 0, 1, 2, and 3 are used in the circuit, so in all of my twirled layers qubits 0, 1, 2, and 3 will be twirled. The final value you can specify is "all", and in this case all qubits in the circuit will be twirled in all layers; when you're running an ISA circuit this includes all qubits on the device. In this example it was a six-qubit circuit, so in all of my layers qubits 0 through 5 are twirled.
Now I'll talk a little bit about the resilience options supported by the V2 estimator in Qiskit IBM Runtime. The IBM Runtime estimator supports a couple of different built-in error mitigation methods that you can enable either via resilience levels or individually via resilience options. Working with the individual options to turn individual error mitigation methods on and off, you can enable measurement mitigation via the measure mitigation option, and this will enable mitigation of the final measurements used to compute expectation values in your circuit. For mitigating errors in the gates of your circuit, there are two mitigation methods you can enable: the first is zero-noise extrapolation, which can be enabled using the ZNE mitigation option, and the second is probabilistic error cancellation, which can be enabled using the PEC mitigation option. Note that these two options can't be enabled at the same time: you can have ZNE or PEC activated, but not both. Measurement mitigation can be enabled on its own, or together with either ZNE or PEC. I won't go into the details of these error mitigation methods, as they'll be covered in later lectures and tutorials of this summer school.

If you enable one of these error mitigation methods, there are additional sub-options you can explore to further control how the mitigation is done. There is a variety of options for configuring ZNE mitigation, including the type of extrapolation method; for PEC you can control the number of samples and the scaling. Measurement mitigation requires learning a noise model of the measurements, and there are options you can specify to control the way that learning is done; similarly, PEC mitigation requires noise model learning, and there are options to control exactly how that is done.

While you can use these options individually, if you just want to activate mitigation methods without worrying about the fine details, there are built-in resilience levels that can be used to enable a variety of these options automatically using preset defaults. There are three different resilience levels supported by the estimator: resilience level 0, which has no mitigation or error suppression applied; resilience level 1, which enables measurement mitigation and measurement twirling but doesn't apply any gate mitigation; and level 2, which has measurement mitigation and twirling applied as well as gate twirling and ZNE mitigation. When you enable these, you can also fine-tune them with options after you've initialized the estimator: for example, if I wanted to initialize my estimator at level 2 to activate ZNE, I could then go in and control the ZNE extrapolator I wish to use, or turn dynamical decoupling on, with the options.
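A sketch of configuring resilience on the estimator, reusing the backend object from earlier, might look like this.

```python
from qiskit_ibm_runtime import EstimatorV2 as Estimator

estimator = Estimator(backend)

# Preset level: 0 = no mitigation, 1 = measurement mitigation + twirling,
# 2 = level 1 plus gate twirling and ZNE
estimator.options.resilience_level = 2

# Fine-tune the preset afterwards via the individual options
estimator.options.resilience.zne.extrapolator = "exponential"
estimator.options.dynamical_decoupling.enable = True

# Or toggle individual mitigation methods directly (ZNE and PEC are mutually exclusive)
estimator.options.resilience.measure_mitigation = True
estimator.options.resilience.zne_mitigation = True
```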
This has been a very high-level overview of the new primitives in the Qiskit Runtime, and for additional resources there is a variety of documentation and tutorials you can follow up with on the IBM website. Thank you.