Question-2: How can we achieve fully automatic calibration and operation of multi-qubit circuits?

Tutorials

Leonardo Di Carlo (QuTech, Delft – the Netherlands)
An introduction to quantum computing with superconducting circuits and the main challenges to their scalability

Tuesday 2nd March 2021, 12:00-13:30 GMT

The purpose of this tutorial is twofold. First, we provide an introduction to quantum computing using monolithic superconducting circuits in a circuit quantum electrodynamics hardware architecture. Second, we discuss the main challenges to their scalability. The primary goal is to interest the machine-learning community in speeding up device characterization and the (re)calibration of quantum operations for NISQ applications and quantum error correction.

Timo Kötzing (Hasso Plattner Institute, Potsdam – Germany)
Automatic Parameter Tuning via Heuristic Search
Tuesday 2nd March 2021, 14:00-15:30 GMT

The performance of modern algorithms in computer science typically depends on a number of parameters that govern the algorithm's behavior. Setting these parameters just right is a complex task, typically dependent on the exact area of application, i.e. on the data given to the algorithm. Traditionally, algorithm designers experiment with these parameters by hand, using their detailed knowledge of the inner workings of the algorithm to find good settings. Increasingly, however, this process can be automated by parameter-tuning algorithms that explore the space of available parameter settings, evaluating possible choices along the way. One way to explore is heuristic search: iteratively generating new candidate settings similar to previously promising ones.

Calibrating multi-qubit circuits has the same general structure as tuning the parameters of algorithms: there is a set of allowed parameter settings, and each setting has an associated quality (which might suffer from noise). To find an element of good quality in this search space, the same principles (the exploration-exploitation trade-off, search adaptation, surrogate models, hardware in the loop, …) are applicable. I want to present some of these ideas to you and discuss their possibilities and limitations.
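The heuristic-search idea from the tutorial can be illustrated with a minimal sketch. The quality landscape below is a made-up stand-in for a real benchmark, and the function name `heuristic_search` is our own; the sketch simply mutates the best-known setting and keeps improvements, as described above:

```python
import random

def heuristic_search(evaluate, initial, step=0.5, iterations=200, seed=0):
    """(1+1)-style heuristic search: repeatedly perturb the best-known
    parameter setting and keep the perturbation if it evaluates at least
    as well. `evaluate` may be noisy; here it is deterministic for clarity."""
    rng = random.Random(seed)
    best = list(initial)
    best_quality = evaluate(best)
    for _ in range(iterations):
        # Generate a candidate similar to the current promising setting.
        candidate = [x + rng.gauss(0.0, step) for x in best]
        quality = evaluate(candidate)
        if quality >= best_quality:  # keep improvements (and ties)
            best, best_quality = candidate, quality
    return best, best_quality

# Toy "algorithm performance" landscape with its peak at (1, -2).
target = [1.0, -2.0]
quality = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

params, q = heuristic_search(quality, initial=[0.0, 0.0])
```

In a real tuning setting, `evaluate` would wrap an actual run of the algorithm (or, in the calibration analogy, a measurement on hardware), which is where surrogate models and noise handling become important.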

Invited Speakers

Justyna P. Zwolak (NIST, Gaithersburg – USA)
Ray-based framework for tuning quantum dot devices: Two dots and beyond

Tuesday 2nd March 2021, 17:45-18:30 GMT

Arrays of quantum dots (QDs) are one of many candidate systems for realizing qubits, the fundamental building blocks of quantum computers, and provide a platform for quantum computing. However, the current practice of manually tuning QDs is a relatively time-consuming procedure, inherently impractical for scaling up and for other applications. Recently, we proposed an auto-tuning paradigm that combines a machine learning (ML) algorithm with optimization routines to assist experimental efforts in tuning semiconductor QD devices. Our approach provides a paradigm for fully automated experimental initialization through a closed-loop system that does not rely on human intuition and experience.

To address the issue of tuning arrays in higher dimensions, we expand upon our prior work and propose a novel approach in which we “fingerprint” the state space instead of working with full-sized 2D scans of the gate-voltage space. Using 1D traces (“rays”) measured (“shone”) in multiple directions, we train an ML algorithm to recognize the relative position of the features characterizing each state (i.e., to “fingerprint” it) in order to differentiate between various state configurations. I will report the performance of ray-based learning when used off-line on experimental scans of a double-dot device and compare it with our existing CNN-based approach. I will also discuss how it extends to higher-dimensional systems. Using rays not only allows us to automate the recognition of states but also significantly reduces (e.g., by 60% for the two-dot case) the number of measured points required for tuning.
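The measurement-saving aspect of the ray scheme can be sketched as follows. The `extract_rays` helper and the toy scan are hypothetical stand-ins for the experimental procedure: six 10-point rays (60 samples) fingerprint a point of a 21×21 map that would otherwise require 441 pixels:

```python
import math

def extract_rays(scan, center, n_rays=6, n_points=10):
    """Sample 1D traces ("rays") from a 2D scan in several directions
    around a center point; the collected traces form the point's
    "fingerprint", using far fewer pixels than the full 2D image."""
    rows, cols = len(scan), len(scan[0])
    fingerprint = []
    for k in range(n_rays):
        angle = 2 * math.pi * k / n_rays
        ray = []
        for step in range(n_points):
            r = int(round(center[0] + step * math.sin(angle)))
            c = int(round(center[1] + step * math.cos(angle)))
            # Stop sampling at the edge of the measured region.
            if not (0 <= r < rows and 0 <= c < cols):
                break
            ray.append(scan[r][c])
        fingerprint.append(ray)
    return fingerprint

# Toy "stability diagram": intensity falls off with distance from (10, 10).
scan = [[100 - abs(r - 10) - abs(c - 10) for c in range(21)] for r in range(21)]
rays = extract_rays(scan, center=(10, 10))
```

In the actual scheme the rays are measured directly on the device rather than cut from an existing 2D scan, and a classifier is trained on the resulting fingerprints.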

Hendrik Bluhm (RWTH Aachen University – Germany)
Machine learning for automating quantum dot qubit operations

Thursday 4th March 2021, 12:00-12:45 GMT

Like other quantum computing platforms, quantum dot based spin qubits need carefully calibrated gate and readout operations. Moreover, capturing single electrons with controlled tunnel coupling requires the adjustment of gate voltages, which can be a time-consuming task for humans and must be automated to achieve scalability. I will discuss machine learning and related approaches to address several of these needs.

The starting point of device optimization is to form quantum dots filled with a known number of electrons. This is typically achieved by measuring charge diagrams, in which changes of the dot occupancy in response to two gate voltages lead to steps in the response of a charge sensor. We show that convolutional neural networks can be used for the classification of such images. However, they tend to be rather sensitive to changes in the data characteristics, which requires retraining.

For the iterative adjustment of tunnel couplings, we analyze measurements with conventional fits. Based on these, we demonstrate an algorithm that learns the effect of gate-voltage changes via Bayesian updates of a gradient matrix and can adjust several tunnel couplings simultaneously in a few steps. A similar approach may be suitable for tuning gate operations, for which we have used finite-difference gradient estimation to tune up to 50 parameters from measurements. For the self-consistent extraction of systematic errors in a gate set as input for such optimizations, we propose a method dubbed gate set calibration. Similar to gate set tomography, sequences of gates are measured, but the focus on systematic errors rather than full process characterization substantially reduces the complexity. Single-spin readout requires the detection of stochastically timed tunnel events. We find that a neural network trained on simulated readout signals performs comparably to an optimal Bayesian analysis on this task, but is more tolerant to changes of the system parameters. This indicates that the trained neural network possesses a certain system-characterization capability.

Eliška Greplová (Delft University of Technology, the Netherlands)
Automated control and characterization of quantum devices

Thursday 4th March 2021, 15:00-15:45 GMT

Gate-defined quantum dots constitute a promising scalable platform for quantum computation and simulation. One contemporary challenge is the variation of device properties arising from the fabrication process. It is therefore critical to develop methods that can bring these devices into a desired regime autonomously. Secondly, in order to fully characterize the devices, it is crucial to match the experimental data with an underlying physical model. In this talk I will show examples of both aspects. I will present a machine-learning-driven algorithm for the autonomous tuning of double quantum dots into specific charge states and its direct experimental applications. As an example of reliable device characterisation, I will discuss how to adapt optimisation methods to find models that, based on the experimental data, best capture the physics of the device. We expect that a combination of automated tuning algorithms and physical-model-based device characterisation tools will form a scalable path towards fully autonomous control of future quantum devices.

Contributed Talks

Shai Machnes (Saarland University and Forschungszentrum Jülich – Germany)
Machine-learning tools for rapid control, calibration and characterization of QPUs and other quantum devices
Tuesday 2nd March 2021, 18:30-18:55 GMT
We present a novel procedure of Combined Calibration and Characterization (C3) of QPUs: An iterative combination of learning open-loop pulse controls, closed-loop model-free RL tune-up, and iterative system model fitting and refinement, based on a highly detailed TensorFlow physics-level simulator. The result is a high-fidelity model, replete with sensitivity analysis, commensurate high-fidelity robust gates and a detailed error budget. All available as open-source at q-optimize.org. Automated experiment design, co-design, and adversarial system characterization will follow.
Tools: reinforcement learning, experiment design, generative-adversarial learning

Muhammad Usman (University of Melbourne – Australia)
Characterization of atomic qubits in silicon by machine learning
Tuesday 2nd March 2021, 18:55-19:20 GMT
Atomic qubits in silicon are attractive candidates for the implementation of quantum computer architectures, given the nexus with nanoelectronics engineering and their long coherence times. The characterisation and control of atomic qubits by direct measurements is a highly challenging task. This work reports a first application of a machine learning tool to achieve scalable, autonomous, fast and reliable characterisation of donor spin qubits, which is expected to play a crucial role in the scale-up process.
Tools: Convolutional Neural Networks

Giovanni Oakes (University of Cambridge – UK)
Automatic virtual voltage extraction of a 2xN array of quantum dots with machine learning
Thursday 4th March 2021, 12:45-13:10 GMT
Spin qubits in quantum dots are a promising platform for fault-tolerant quantum computing. However, due to the presence of cross-coupling capacitances, heuristic tuning becomes impractical as the number of quantum dots increases. We develop a theoretical framework to extract the virtual voltages of a 2×N array of quantum dots, based on the gradients of different charge transitions that can be measured in multiple two-dimensional stability diagrams.
Tools: Neural Networks

Dominic T Lennon (University of Oxford – UK)
Machine learning enables completely automatic tuning of a quantum device faster than human experts
Thursday 4th March 2021, 13:10-13:35 GMT
Bringing a spin qubit into operation requires a large parameter space to be explored. This process is intractable for humans as the complexity of circuits grows. We developed a machine learning algorithm that navigates the entire parameter space. We demonstrate fully automated tuning of a double quantum dot device in under 70 minutes, faster than human experts. We provide a quantitative measure of device variability. This is a key demonstration of quantum device tuning.
Tools: Gaussian processes, Thompson sampling, Monte Carlo, Bayesian inference, Point set registration, and Statistics
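Among the listed tools, Thompson sampling is easy to sketch in isolation. The example below is a simplified stand-in: it uses Bernoulli bandits with Beta posteriors over a few discrete candidate settings, whereas the actual work combines Thompson sampling with Gaussian-process models over a continuous parameter space; the function name and toy success rates are our own:

```python
import random

def thompson_tune(success_prob, rounds=1000, seed=1):
    """Thompson sampling over a discrete set of candidate settings:
    draw a success rate from each arm's Beta posterior, measure the
    arm with the highest draw, and update that arm's posterior."""
    rng = random.Random(seed)
    n = len(success_prob)
    alpha, beta = [1] * n, [1] * n          # uniform Beta(1, 1) priors
    for _ in range(rounds):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        arm = samples.index(max(samples))    # explore/exploit in one draw
        if rng.random() < success_prob[arm]: # noisy measurement outcome
            alpha[arm] += 1
        else:
            beta[arm] += 1
    pulls = [a + b - 2 for a, b in zip(alpha, beta)]
    return pulls.index(max(pulls))           # most-measured setting

# Toy example: three candidate operating regimes with hidden success rates.
best = thompson_tune([0.1, 0.3, 0.9])
```

The single posterior draw per arm is what balances exploration against exploitation: uncertain arms occasionally produce high samples and get measured, while clearly good arms dominate over time.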

Ilan Mitnikov (Quantum Machines – Israel)
Booting a quantum computer: A QUA-based graph framework for automatic qubit calibration
Thursday 4th March 2021, 15:45-16:10 GMT
We have developed a framework that allows arranging and executing quantum and classical experimental steps as a directed acyclic graph (DAG) with control flow. The framework is built on top of Python and QUA, a pulse-level cross-quantum-platform programming language. A robust calibration strategy is developed using a set of expected-behavior models and calibration heuristics provided by the user. Bootstrapping a system from its initial (e.g. post-cool-down) state to a fully usable state, with minimal user input and intervention, is then possible. We showcase these abilities with a reference implementation of automated calibration targeting a system of superconducting qudits.
Tools: Multi-class classification and neural networks are used for multiplexed readout and simultaneous multi-state discrimination. Both gradient-based and local derivative-free optimization methods are used for pulse shaping and calibration, as well as for implementing variational hybrid quantum algorithms within the same framework.
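The DAG-of-calibrations idea can be sketched with Python's standard-library `graphlib`, independent of the QUA-based framework itself. The step names and stub functions below are hypothetical placeholders for real calibration routines:

```python
from graphlib import TopologicalSorter

# Hypothetical calibration steps; each returns True on success.
def check_resonator():    return True
def calibrate_pi_pulse(): return True
def calibrate_readout():  return True
def benchmark_gate():     return True

# Dependency graph: each node maps to the set of nodes it depends on.
graph = {
    "pi_pulse":  {"resonator"},
    "readout":   {"resonator"},
    "benchmark": {"pi_pulse", "readout"},
}
steps = {
    "resonator": check_resonator,
    "pi_pulse":  calibrate_pi_pulse,
    "readout":   calibrate_readout,
    "benchmark": benchmark_gate,
}

# Run every step in an order that respects the dependencies.
order = list(TopologicalSorter(graph).static_order())
results = {name: steps[name]() for name in order}
```

A full framework would add the control flow the abstract mentions, e.g. re-running upstream nodes when a downstream calibration fails, which the plain topological order here does not capture.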

Valeria Cimini (Universita’ degli Studi Roma Tre – Italy)
Classification of multimode states through artificial neural networks
Thursday 4th March 2021, 16:10-16:35 GMT
The presence of non-Gaussian features in large multipartite networks is of key relevance to achieving quantum advantage. Their identification can be framed as a classification problem, avoiding a complete tomographic reconstruction, which is particularly time-consuming for states in high-dimensional Hilbert spaces. Here we report a novel approach, extremely fast and robust, that uses neural networks to identify the negativity of the Wigner function of optical multimode states.
Tools: feed-forward artificial neural network