Adding a new task to a workflow
The purpose of this tutorial is to show how to add a new Amplitude Rabi experiment task to an existing workflow.
This tutorial follows directly on from Running a pre-defined workflow, so if you haven't already, we'd recommend going through that first.
This guide covers the same content as the 03_adding_new_task_to_workflow example in JupyterLab. If you have access, it might be a better idea to follow the example there, as this will give you hands-on experience navigating the different files required to add a new task.
A new task can be added to the workflow in three simple steps.
1. Add the new task to the directory
The first step is simply to add the new task notebook to your directory.
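To make the end state concrete, the listing below is a rough sketch of what the workflow directory might look like once the notebook is in place. The file names shown here (in particular amplitude_rabi.ipynb) are illustrative placeholders rather than the actual names used in the example.

03_adding_new_task_to_workflow/
├── qruise-flow.yaml
├── schema.py
├── ...                      # existing experiment notebooks
└── amplitude_rabi.ipynb     # new Amplitude Rabi task notebook (hypothetical name)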
2. Add the task to qruise-flow.yaml
As we discussed in the previous guide, qruise-flow.yaml defines the tasks executed in the workflow.
If we simply want Amplitude Rabi to run after the other experiments, we just need to add it at the end of the experiment tasks in qruise-flow.yaml, as shown in the code snippet below.
name: onboarding-03-adding-new-task-to-workflow
qubits: [Q1]
couplings: []
stages:
  init:
    - name: schema
  experiments:
    qubit:
      - name: resonator-spectroscopy
      - name: filter-spectroscopy
      - name: pulsed-spectroscopy
      - name: amplitude-rabi
    coupling: []
If we didn't want automatic sequencing, we would need to declare specific dependencies in qruise-flow.yaml. You can learn more about this in the Running workflows with dependencies user guide.
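To give a flavour of what that might look like, here is a purely illustrative sketch: the depends_on key and its placement are assumptions made for this example, and the real syntax is described in the Running workflows with dependencies user guide.

experiments:
  qubit:
    # ...
    - name: amplitude-rabi
      # Hypothetical key: see the Running workflows with dependencies guide
      # for the actual way to declare task dependencies.
      depends_on: [pulsed-spectroscopy]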
For now, we'll stick to the automatic sequencing.
3. Update schema.py
One role of schema.py is to keep track of the results of all the experiments in a structured way. When you add a new task, you must ensure the schema can accept its data.
We can do this by importing the AmplitudeRabi class and then registering it in the schema by defining an empty subclass alongside the experiments already contained in the workflow, as shown in the code snippet below. This tells the schema that the workflow includes an AmplitudeRabi experiment, so QruiseOS will recognise, validate, and store its output (here, x180_amplitude) when the workflow runs.
# ...
from qruise.experiment.schema.experiments import (
    FilterSpectroscopy,
    PulsedSpectroscopy,
    ResonatorSpectroscopy,
    AmplitudeRabi,
)

# ...


class ResonatorSpectroscopy(ResonatorSpectroscopy):
    pass


class FilterSpectroscopy(FilterSpectroscopy):
    pass


class PulsedSpectroscopy(PulsedSpectroscopy):
    pass


class AmplitudeRabi(AmplitudeRabi):
    pass
You can then run qruise flow run in the terminal as usual.
You can check on the dashboard that (in contrast to onboarding-02-running-a-workflow from the previous tutorial) onboarding-03-adding-new-task-to-workflow includes the AmplitudeRabi experiment.
You can also run qruise kb log in the command line to view the database log and confirm that the AmplitudeRabi experiment was executed correctly.
In the log you can see that, as we'd expect, Amplitude Rabi was the last task executed on the QPU. Great!