In a paper published in the journal Intelligent Computing, researchers demonstrated methods for efficiently encoding image data into quantum states using variational, genetic, and matrix product state algorithms. The resulting circuits were roughly two orders of magnitude shallower than standard implementations, cutting gate count substantially while maintaining classification accuracy. Quantum machine learning (QML) models trained on the approximately encoded data also showed increased robustness to adversarial perturbations.
Related Work
The impressive capabilities of transformer-based models have spurred widespread interest in artificial intelligence (AI) and machine learning (ML), extending beyond academic contexts into many practical domains. Concurrently, the advent of quantum computers has driven significant exploration of QML, which promises speedups and potentially greater robustness than classical methods.
However, challenges such as the computationally intensive preparation of quantum states from classical data remain unsolved. Addressing this is crucial: inefficient state preparation can negate any potential quantum advantage, especially given current hardware limitations.
Efficient State Preparation
Matrix product states (MPS) offer a way to represent quantum systems efficiently, particularly those with limited bipartite entanglement. By decomposing states in MPS form using sequential k-local operators, where k depends on the bond dimension, researchers managed to disentangle qubits efficiently.
This process involves eigen-decomposing reduced density matrices and applying the resulting unitary operators. Because these operators grow exponentially with system size, a heuristic approach was adopted, applying operations iteratively across qubits until a satisfactory fidelity was reached. The method also allowed subsystems to be processed in parallel, significantly reducing circuit complexity.
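To illustrate the core idea behind MPS encoding (this is not the paper's exact routine), a state vector with limited entanglement can be decomposed into a chain of small tensors by sweeping singular value decompositions across the qubits and truncating each cut to a fixed bond dimension. The function names below are illustrative:

```python
import numpy as np

def state_to_mps(psi, n_qubits, max_bond=2):
    """Decompose a state vector into MPS tensors via sequential SVDs,
    truncating each bond to max_bond (illustrative sketch)."""
    tensors = []
    rest = psi.reshape(1, -1)
    for _ in range(n_qubits - 1):
        bond = rest.shape[0]
        rest = rest.reshape(bond * 2, -1)          # split off one qubit
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(max_bond, len(s))               # truncate the bond
        tensors.append(u[:, :keep].reshape(bond, 2, keep))
        rest = np.diag(s[:keep]) @ vh[:keep]
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS tensors back into a full state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)
```

For states with small bipartite entanglement the truncation loses little fidelity, which is what makes the corresponding preparation circuits shallow.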
The genetic algorithm for state preparation (GASP) provides an alternative approach, mimicking natural selection to optimize quantum circuits. The algorithm begins by creating a population of circuits representing the desired state, assessing their fitness based on fidelity, and evolving them using genetic operators like crossover and mutation.
GASP iteratively improves the circuits by maintaining diversity and employing sequential optimization, reducing the number of required controlled-NOT (CNOT) gates and total gate count. This approach showed promise in evolving circuits efficiently, even for complex states, while minimizing noise—a crucial factor for current quantum hardware.
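A minimal sketch of such a genetic loop, assuming a fixed two-qubit ansatz (one Ry rotation per qubit followed by a CNOT) and a Bell-state target. The selection, crossover, and mutation steps mirror the description above, while all names and hyperparameters here are illustrative, not GASP's actual settings:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def circuit_state(genome):
    """Apply Ry(genome[0]) x Ry(genome[1]) then CNOT to |00>."""
    psi = np.zeros(4); psi[0] = 1.0
    return CNOT @ (np.kron(ry(genome[0]), ry(genome[1])) @ psi)

def fidelity(genome, target):
    return abs(np.vdot(target, circuit_state(genome))) ** 2

def evolve(target, pop_size=40, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0, 2 * np.pi, (pop_size, 2))
    for _ in range(gens):
        fit = np.array([fidelity(g, target) for g in pop])
        elite = pop[np.argsort(fit)[::-1][:pop_size // 2]]   # selection
        pairs = rng.integers(0, len(elite), (pop_size - len(elite), 2))
        children = np.where(rng.random((len(pairs), 2)) < 0.5,
                            elite[pairs[:, 0]], elite[pairs[:, 1]])  # crossover
        children += rng.normal(0, 0.1, children.shape)               # mutation
        pop = np.vstack([elite, children])
    fit = np.array([fidelity(g, target) for g in pop])
    return pop[np.argmax(fit)], fit.max()
```

Because the elite individuals survive each generation unchanged, the best fidelity never decreases; in GASP proper the genome also encodes gate choices, not just angles.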
A third method used variational circuits optimized to approximate the target quantum state. These circuits consist of layers of arbitrary qubit rotations followed by CNOT gates between adjacent qubits. Starting with a single layer, the circuit parameters were refined using the Adam optimizer, with more layers added as needed until the desired fidelity was reached. Despite its simplicity, the variational approach matched the performance of the other methods, often requiring only a few layers to achieve adequate fidelity.
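A compact sketch of this layered scheme on two qubits, using finite-difference gradients with a hand-rolled Adam update. The paper's exact ansatz, optimizer settings, and fidelity thresholds may differ; everything below is an assumption for illustration:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def ansatz_state(params):
    """Layers of per-qubit Ry rotations, each followed by a CNOT."""
    psi = np.zeros(4); psi[0] = 1.0
    for t0, t1 in params:
        psi = CNOT @ (np.kron(ry(t0), ry(t1)) @ psi)
    return psi

def infidelity(params, target):
    return 1.0 - abs(np.vdot(target, ansatz_state(params))) ** 2

def train(target, tol=1e-2, max_layers=4, steps=400, lr=0.1):
    """Grow the circuit one layer at a time until infidelity < tol."""
    rng = np.random.default_rng(1)
    for layers in range(1, max_layers + 1):
        p = rng.uniform(0, 2 * np.pi, (layers, 2))
        m, v = np.zeros_like(p), np.zeros_like(p)
        for t in range(1, steps + 1):
            base, eps, g = infidelity(p, target), 1e-6, np.zeros_like(p)
            for i in np.ndindex(*p.shape):       # finite-difference gradient
                q = p.copy(); q[i] += eps
                g[i] = (infidelity(q, target) - base) / eps
            m = 0.9 * m + 0.1 * g                # Adam first moment
            v = 0.999 * v + 0.001 * g ** 2       # Adam second moment
            p -= lr * (m / (1 - 0.9 ** t)) / (np.sqrt(v / (1 - 0.999 ** t)) + 1e-8)
        if infidelity(p, target) < tol:
            break
    return p, layers
```

For a Bell-state target a single layer suffices, since Ry(pi/2) on the first qubit followed by the CNOT produces it exactly.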
These techniques—MPS encoding, genetic algorithms, and variational circuits—offer a unique strategy for efficient quantum state preparation. By minimizing circuit depth and gate count, they aim to make QML models more viable on current quantum devices, addressing critical challenges in data encoding and enabling more practical quantum applications.
Quantum Models Enhance Robustness
This study revealed that QML models could preserve classification accuracy and boost adversarial robustness while significantly reducing encoding circuit complexities through approximate data encoding. Traditional high-fidelity methods, such as the Shende–Bullock–Markov (SBM) algorithm, demand complex circuits impractical for current quantum hardware.
However, using MPS, GASP, and variational circuits, the researchers produced encoding circuits with two orders of magnitude fewer entangling gates, maintaining performance even when the encoded states had fidelities as low as 60%. This reduction in circuit depth proved advantageous, enabling strong classification accuracy on real hardware such as IBM's Algiers device.
Adversarial robustness emerged as a promising benefit of these approximate models. The study found that noise introduced by approximate state preparation made QML models more resilient to adversarial attacks than exact methods. This is because errors in state preparation diminish the effectiveness of adversarial perturbations, echoing classical machine learning strategies in which injected noise helps counter such attacks. Models using approximate methods were tested on adversarially manipulated Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets, showing slight improvements in robustness without sacrificing classification performance. The results emphasize the potential of QML models to withstand adversarial threats, making them suitable for applications in security-sensitive domains.
The QML models employed a three-step structure: data encoding, a trainable variational circuit, and measurement of observables. The research explored the effectiveness of adversarial attacks using projected gradient descent (PGD) to maximize model misclassification.
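For intuition, PGD can be sketched against any differentiable classifier; the paper attacks QML models, but the ascend-and-project loop is the same idea. The linear softmax model, names, and hyperparameters below are all illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_x(W, x, y):
    """Gradient of the cross-entropy loss with respect to the input x,
    for a linear model with logits z = W x."""
    p = softmax(W @ x)
    p[y] -= 1.0            # dL/dz = softmax(z) - onehot(y)
    return W.T @ p         # chain rule through z = W x

def pgd_attack(W, x, y, eps=0.1, alpha=0.02, steps=20):
    """Projected gradient descent: repeatedly step in the direction that
    increases the loss, then project back into an L-infinity ball."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad_x(W, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)          # gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep valid pixel range
    return x_adv
```

The projection step is what bounds the perturbation, so the adversarial image stays visually close to the original while maximizing misclassification.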
Despite the vulnerability of classical convolutional neural networks to adversarial perturbations, the QML models demonstrated resilience by learning different features, an advantage transferable across networks. Thus, QML offers a promising direction for developing secure, efficient quantum algorithms, leveraging approximate state preparation to achieve practical robustness and fidelity in real-world applications.
Conclusion
To sum up, the study simplified the expensive process of arbitrary state preparation in QML by using low-fidelity approximations and allowing models to learn to handle noise. This approach reduced the gate count for 9-qubit states by two orders of magnitude while preserving classification accuracy.
As a result, it significantly decreased hardware resource requirements for fault-tolerant QML using surface code error correction. The potential of algorithms designed for low-fidelity approximations with shallower circuits was also highlighted.
Journal Reference
West, M. T., et al. (2024). Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning. Intelligent Computing, 3. DOI: 10.34133/icomputing.0100, https://spj.science.org/doi/10.34133/icomputing.0100