
Running a single task: Amplitude Rabi

This guide shows you, step by step, how to perform an Amplitude Rabi task in QruiseOS. It covers:

  1. Setting up the QPU session and backend
  2. Performing the measurement
  3. Analysing the data
  4. Committing the results to the knowledge base
  5. Measurements vs tasks

While this guide covers the Amplitude Rabi experiment specifically, the same methods can be applied to any experiment from our Experiment catalogue. If you want to run one of your own measurements, take a look at our Integrating a new measurement tutorial.

1. Setting up the QPU session and backend

We first need to connect to the knowledge base by creating a session. This holds your access credentials and the knowledge base address so that you can access the configuration of your QPU. We then use that session to initialise a measurement backend so that you can connect to your QPU.

from kb_backend_adapter import create_session
from qpu_backend_adapter import create_measurement_backend

# create session
session = create_session()

# create backend
measurement_backend = create_measurement_backend(session=session, qpu="myqpu")
Locked to commit: a7hf6io7e92k6wbs65obmnp9wrddtqm on branch main

2. Performing the measurement

Because we're using one of Qruise's pre-defined measurements, setting it up simply involves specifying the experiment type ("AmplitudeRabi"), supplying the QPU backend and target qubit, and then instantiating AmplitudeRabiMeasurement() with those parameters. "AmplitudeRabi" acts as a tag so you can easily find all data associated with that experiment in the future, while the AmplitudeRabiMeasurement() class contains all the instructions for performing the experiment, including the sequence of gates and pulses and any required parameters.

import measurements

# choose qubit to measure
qubit_name = "Q1"

# define experiment type, backend, and qubit
meas_kwarg = dict(
    experiment_type="AmplitudeRabi",
    measurement_backend=measurement_backend,
    qubit=session.Qubit[qubit_name],
)

# instantiate the measurement with desired settings
measurement = measurements.AmplitudeRabiMeasurement(**meas_kwarg)

We can then define the sweep range of the drive amplitude and perform the measurement by calling measurement.measure().

from qruise.experiment.utils.sweep_utils import SweepRange

# define drive amplitude sweep range
max_drive_amplitude = 0.8
sweep_range = SweepRange(0, max_drive_amplitude, max_drive_amplitude / 30)

# perform measurement
data = measurement.measure(sweep_range)

data is an xarray.Dataset holding the raw measurement data (the \(I\) and \(Q\) values) for each amplitude defined by sweep_range.

data

data xarray
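
If you are unsure which coordinates and variables the Dataset contains, standard xarray attributes will list them. The snippet below is a minimal sketch using only plain xarray API; the names in the comments are taken from the plot call that follows.

# list coordinates and data variables of the returned Dataset (plain xarray API)
print(list(data.coords))      # includes the swept drive amplitude, "x180_amplitude"
print(list(data.data_vars))   # includes the raw "I" and "Q" quadratures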

The use of xarray makes it easy to plot any coordinates and variables with hvplot.

import hvplot.pandas

data.to_dataframe().hvplot.scatter("x180_amplitude", "I")

amplitude Rabi plot

We can see that the plot looks as we'd expect for an Amplitude Rabi measurement; however, to extract the \(\pi\)-pulse amplitude, we need to analyse the data.

3. Analysing the data

QruiseOS includes a wide range of ready-made analysis functions that can easily be applied to your measurement data to extract useful quantities such as qubit frequency, decay timescales, and pulse amplitudes.

For the Amplitude Rabi experiment, we can use QruiseOS's AmplitudeRabiAnalysis class to extract the \(\pi\)-pulse amplitude. As inputs, we only need to give the data and a title. We can then print a summary of the analysis and plot the results.

from qruise.experiment.experiments.amplitude_rabi import AmplitudeRabiAnalysis

# perform analysis, print summary, and show plot
analysis = AmplitudeRabiAnalysis(data, title="Q1 Amplitude Rabi")
analysis.print_summary()
analysis.show_plot()
x180_amplitude is 2.13846e-01 V with std of 6.713e-04 V

amplitude Rabi analysis

The success attribute is typically used to decide if the output of the analysis should be committed to the knowledge base and serve as the latest value for some qubit characterisation – in this case, the \(\pi\)-pulse amplitude.

analysis.success
True

Great, you can see that our analysis was successful! We can now commit the results to the knowledge base.

4. Committing the results to the knowledge base

Committing your analysis results to the knowledge base ensures your calibration records stay current and that future experiments automatically use the latest values.

To properly define the knowledge base commit, you need to first retrieve the object whose value you want to update. In the code snippet of schema.py below, you can see that the Qubit class includes an x180 attribute, which is a GateCharacterization. Within the GateCharacterization class, there's an amplitude attribute. It's the amplitude of the x180 that we'll update with our commit.

# ...
class GateCharacterization(DocumentTemplate):
    _subdocument = ()
    amplitude: Optional[Quantity]
    duration: Optional[Quantity]
    drag: Optional[Quantity]
    freq: Optional[Quantity]
    drag_optimal_control: Optional[Quantity]
# ...
class Qubit(Qubit):
    # ...
    x180: Optional["GateCharacterization"]
    x90: Optional["GateCharacterization"]
    x180_ef: Optional["GateCharacterization"]
    # ...
# ...

To retrieve the x180 object, we need to select the qubit in the session and then access its x180 attribute.

qubit = session.Qubit[qubit_name]
x180 = qubit.x180

Before we commit the results to the knowledge base, we need to prepare the context using the ExperimentContext class. This allows us to save all the relevant information, so that when you look back at this specific experiment, you know exactly when and how it was executed.

from rich import print
from qruise.automation.features.flow import ExperimentContext

# print x180 amplitude
print(type(x180), "\n", x180.amplitude)

# prepare context for experiment
# includes session, experiment type, target qubit, local variables, and analysis results
context = ExperimentContext(
    session=session,
    type="AmplitudeRabi",
    targets=[qubit],
    context=locals(),
    analysis=analysis,
)
GateCharacterization
 {"@type": "Quantity", "step": 0.01, "event": {"@id": "AmplitudeRabi/ed2ac5c9-5428-4fc9-b6d2-30b602b2dcad_Q1",
"@type": "@id"}, "range": {"@type": "Range", "max_val": 0.8, "min_val": 0.05}, "value": 0.21384627866541256,
"unit": "V", "std": 0.0006713481234471507}

Now we can commit the results to the database. We already checked that our analysis was successful, but we can automate the process so that our results are only committed if our analysis worked.

# create experiment record
experiment = context.experiment

# if analysis was successful, update x180 in knowledge base
if analysis.success:
    x180.amplitude.value = analysis.value       # update x180 value
    x180.amplitude.event = experiment           # link to experiment record (context)
    x180.amplitude.std = analysis.std           # record the analysis uncertainty (std)
    context.commit()                            # commit all
Experiment ID not provided, using ID: sjhtgz9s7tmwetad6hexq15c
Saving documents
Done!

Our results have now been saved to the knowledge base with the unique ID AmplitudeRabi/sjhtgz9s7tmwetad6hexq15c.
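
If you want to double-check the stored value, you can re-read it from the session. This sketch reuses only the session calls shown earlier in this guide; whether the in-memory session reflects the commit immediately may depend on your knowledge-base setup.

# re-read the x180 amplitude via the session (sketch; refresh behaviour
# after a commit may depend on your knowledge-base configuration)
updated_x180 = session.Qubit[qubit_name].x180
print(updated_x180.amplitude.value, "+/-", updated_x180.amplitude.std)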

5. Measurements vs tasks

The only interaction with the QPU happens in the line data = measurement.measure(sweep_range). Every other step – configuring the measurement, analysing the results, and updating the knowledge base – is nonetheless essential to the workflow. We therefore consider this notebook a task rather than a mere measurement, since it bundles all of these additional elements together.
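
To make the distinction concrete, the whole task could be wrapped in a single reusable function. The sketch below reuses only the objects and calls introduced above; it illustrates the pattern and is not part of the QruiseOS API (the function name and signature are ours).

def amplitude_rabi_task(session, measurement_backend, qubit_name, sweep_range):
    # hypothetical wrapper: bundles the measurement, analysis, and
    # conditional commit shown in this guide into one callable task
    qubit = session.Qubit[qubit_name]
    measurement = measurements.AmplitudeRabiMeasurement(
        experiment_type="AmplitudeRabi",
        measurement_backend=measurement_backend,
        qubit=qubit,
    )
    data = measurement.measure(sweep_range)
    analysis = AmplitudeRabiAnalysis(data, title=f"{qubit_name} Amplitude Rabi")
    context = ExperimentContext(
        session=session,
        type="AmplitudeRabi",
        targets=[qubit],
        context=locals(),
        analysis=analysis,
    )
    # only commit if the fit succeeded, as in step 4
    if analysis.success:
        qubit.x180.amplitude.value = analysis.value
        qubit.x180.amplitude.event = context.experiment
        qubit.x180.amplitude.std = analysis.std
        context.commit()
    return data, analysis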