Running a workflow
In QruiseOS, tasks are defined in Jupyter notebooks. This lets users start measuring immediately with our pre-defined experiments, and the ability to run notebooks cell by cell is also valuable when integrating a new experiment. However, as the number of tasks and qubits grows, running each notebook manually becomes increasingly impractical. Workflows automate this process by executing many tasks across multiple qubits in a predefined sequence.
Tip
This guide gives a general overview of workflows. For specific examples of how to execute workflows, see our user guides, e.g. Running a pre-defined workflow.
Workflow files explained
First, let's walk through the typical files you need to run a workflow.
Task notebooks
Task notebooks contain all the instructions needed to execute and analyse an experiment (or simulation) and to store the data in the knowledge base. They are self-contained and, aside from any dependencies between measurements, can be run in any order you choose.
Task notebooks are named according to `<ordinal-number>-<task name>.ipynb`. The task name helps identify the task in the workflow and should match the name used in `qruise-flow.yaml`. The ordinal number does not affect the execution order of tasks, but it can be useful for keeping files organised in the intended sequence. The ordinal numbers can also be used to implement custom experiment selection; for example, if you only want to run certain tasks in a workflow.
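To illustrate one way custom selection could work, the sketch below filters task notebooks by their ordinal prefix. This helper and the directory layout are assumptions for illustration, not part of QruiseOS:

```python
from pathlib import Path


def select_tasks(notebook_dir: str, ordinals: set[int]) -> list[Path]:
    """Return task notebooks whose ordinal prefix is in `ordinals`,
    sorted by that prefix. Assumes the naming convention
    <ordinal-number>-<task name>.ipynb described above."""
    selected = []
    for nb in Path(notebook_dir).glob("*.ipynb"):
        prefix = nb.name.split("-", 1)[0]
        if prefix.isdigit() and int(prefix) in ordinals:
            selected.append(nb)
    return sorted(selected, key=lambda nb: int(nb.name.split("-", 1)[0]))
```

A script like this could then hand the selected notebooks to whatever mechanism you use to execute them.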
The `qruise-flow.yaml` file
The workflow is defined in `qruise-flow.yaml`. It specifies which tasks to run on which qubits, defines task dependencies, and allows further workflow customisation.
Let's walk through an example file, shown below.
```yaml
name: my-first-workflow
qubits: [Q1, Q3]
couplings: [[Q1, Q3]]
batch_groups:
  - name: all_qubits
    qubits:
      - [Q1, Q3]
stages:
  init: []
experiments:
  qubit:
    - name: ramsey
    - name: time-t1
      dependencies:
        - ramsey
    - name: readout-discriminator-train
      dependencies:
        - time-t1
  coupling:
    - name: correlated-readout-error
      dependencies:
        - ramsey
```
- `name: my-first-workflow`: the name of your workflow.
- `qubits: [Q1, Q3]`: the qubits involved in the workflow; here we have two qubits.
- `couplings: [[Q1, Q3]]`: the coupled qubit pairs on which to run two-qubit experiments. Here, `Q1` and `Q3` are coupled.
- `batch_groups`: specifies qubits that can be measured concurrently, here `Q1` and `Q3`. You can learn more about batch groups here.
- `experiments`: all the experiment tasks that make up the workflow, including single-qubit tasks (`qubit`) and two-qubit tasks (`coupling`).
- `dependencies`: for each task, the list of other tasks that must be completed before it can start. You can see that `time-t1` has the dependency `ramsey`, which means `time-t1` cannot be executed until `ramsey` is complete. Learn more about dependencies in QruiseOS here.
If no dependencies are defined, the tasks are executed in the order they appear in `qruise-flow.yaml`.
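The ordering implied by these dependencies can be sketched with Python's standard-library `graphlib`; this is an illustration of the constraint, not the actual QruiseOS scheduler:

```python
from graphlib import TopologicalSorter

# Dependencies from the example qruise-flow.yaml above:
# each task maps to the tasks that must complete before it.
deps = {
    "ramsey": [],
    "time-t1": ["ramsey"],
    "readout-discriminator-train": ["time-t1"],
    "correlated-readout-error": ["ramsey"],
}

# static_order() yields the tasks so that every dependency
# appears before the task that requires it.
order = list(TopologicalSorter(deps).static_order())
```

Any valid execution order must place `ramsey` before the three tasks that depend on it, directly or indirectly.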
The `schema.py` file
The schema defines the structure for storing and organising data in QruiseOS. It describes the key components of your device, such as qubits and resonators, how they’re connected, and how experimental results (such as coherence times or fitted parameters) should be saved.
By using a schema, QruiseOS knows how to handle, link, and validate different types of information – from device layout to calibration results – making it easier to manage and reuse data across workflows.
Tip
The QPU on which the workflow is run is defined in the `.QKB.json` file. Details of this file are explained in our CLI docs.
`utils.py`
`utils.py` contains helper functions that make it easier to work with the knowledge base, including connecting, starting a session, and creating a measurement backend. This means you don't need to repeat the same setup code at the start of every task notebook.
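To illustrate the pattern only (the function names and return values below are assumptions, not the real QruiseOS API), such a setup helper might look like:

```python
# Hypothetical sketch of the utils.py pattern. The names
# connect_knowledge_base / start_session and the placeholder return
# values are illustrative assumptions, not the actual QruiseOS API.


def connect_knowledge_base(qkb_path: str) -> dict:
    # In QruiseOS this would open the knowledge base configured in
    # .QKB.json; here it just returns a placeholder handle.
    return {"qkb": qkb_path}


def start_session(qkb_path: str = ".QKB.json"):
    """Bundle the setup repeated at the top of every task notebook:
    connect to the knowledge base and create a measurement backend."""
    session = connect_knowledge_base(qkb_path)
    backend = {"session": session}  # placeholder for a measurement backend
    return session, backend
```

A task notebook then begins with a single call instead of several lines of boilerplate.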
`bootstrap.yaml`
`bootstrap.yaml` is only needed when running the very first workflow on a new QPU. It contains initial device parameters, such as qubit frequencies, gate times, and coupling information, which are used to populate the knowledge base.
All subsequent workflows retrieve this information directly from the knowledge base, which is continuously updated with new device data as more workflows are run.
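A bootstrap file might look something like the fragment below. The key names and values here are illustrative assumptions; the actual structure is defined by your QruiseOS installation:

```yaml
# Hypothetical bootstrap.yaml fragment (key names are illustrative only)
qubits:
  Q1:
    frequency: 4.81e9   # Hz
    gate_time: 32e-9    # s
  Q3:
    frequency: 5.02e9
    gate_time: 32e-9
couplings:
  - [Q1, Q3]
```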
Launching a workflow
To launch a workflow from your Jupyter instance, run the following command in the terminal:
The result of this workflow build and execution is shown in the screenshot below. The workflow is given an ID – here, `astute-turaco` – and the tasks are executed in the prescribed order.
An alternative, more visual way to view the workflow is via the dashboard.
Tip
Workflows can be scheduled using our CLI tool. For example, you can run a nightly qubit characterisation workflow so that your team is always up and running with the latest and most accurate data.