IIIT Hyderabad Publications 


Quantum Algorithms for Regularized Least Squares

Author: Aditya Morolia (20171177)
Date: 2023-06-07
Report no: IIIT/TH/2023/51
Advisors: Shantanav Chakraborty, Indranil Chakrabarty

Abstract

Linear regression is a widely used technique for fitting linear models and finds widespread applications across different areas of the natural sciences, as well as fields such as machine learning and statistics. In most real-world scenarios, however, linear regression problems are often ill-posed, or the underlying model suffers from overfitting, leading to erroneous or trivial solutions. This is often dealt with by adding extra constraints, a technique known as regularization.

For instance, suppose that we are given N data points {(a_i, b_i)}_{i=1}^N, where a_i ∈ R^d and b_i ∈ R for all i. The assumption is that there is a vector x such that b_i = x^T a_i + e_i, where e_i is a random variable (noise) with mean 0. Let A be the data matrix of dimension N × d whose i-th row is the vector a_i, and let b ∈ R^N with b = (b_1, …, b_N)^T. Adding an ℓ2-regularization term, the objective is to obtain the x that minimizes

    ∥Ax − b∥_2^2 + λ∥Lx∥_2^2,    (0.0.1)

where L is an appropriately chosen penalty matrix (or regularization matrix) of dimension N × d, and λ > 0 is the regularization parameter, an appropriately chosen constant. This technique is known in the literature as general ℓ2-regularization. It generalizes the well-known ridge regression, which corresponds to the case where L is the identity matrix. The closed-form solution of the general ℓ2-regularized ordinary least squares problem is given by

    x = (A^T A + λ L^T L)^{-1} A^T b.    (0.0.2)

A straightforward observation is that even when A^T A is singular, regularization ensures that the condition number (the ratio of the maximum and minimum singular values) of the resulting matrix is finite, and therefore A^T A + λ L^T L is invertible.
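As a sanity check, the closed-form solution (0.0.2) can be verified numerically on a toy problem. The following is a minimal pure-Python sketch for the ridge case L = I; the data points, λ, and dimensions below are made up purely for illustration:

```python
# Toy check of the closed-form general ℓ2-regularized least squares
# solution x = (A^T A + λ L^T L)^{-1} A^T b, with L = I (ridge).
# All numbers here are illustrative, not from the thesis.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    # Naive matrix product of nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # N = 3 data points, d = 2
b = [[1.0], [2.0], [3.0]]                   # targets
lam = 0.1                                   # regularization parameter λ
L = [[1.0, 0.0], [0.0, 1.0]]                # ridge: L = identity

At = transpose(A)
M = matmul(At, A)                           # A^T A
LtL = matmul(transpose(L), L)               # L^T L
for i in range(2):
    for j in range(2):
        M[i][j] += lam * LtL[i][j]          # A^T A + λ L^T L
x = matmul(inv2(M), matmul(At, b))          # closed-form solution
print(x)  # regularized solution, slightly shrunk toward 0 vs. ordinary LS
```

With λ = 0 the same data give the exact fit x = (1, 1)^T; the small λ shrinks both coordinates slightly, which is the expected effect of the penalty term.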
Another observation is that when the condition number of A is arbitrarily large (i.e., the minimum singular value of A is arbitrarily small), the condition number of the matrix to be inverted in Equation 0.0.2 can be tamed significantly. Since the complexity of most quantum algorithms for the quantum versions of these problems depends on the condition number, this reduces the complexity significantly. These models can be generalized to situations where the data points are weighted or correlated, which captures more realistic scenarios.

Quantum linear systems solvers are possibly among the most studied quantum algorithms. In this work we develop the first quantum algorithms for ordinary, weighted, and generalized least squares with general ℓ2-regularization. We assume access to (approximate) block-encodings of the relevant matrices and develop robust versions of quantum singular value transformations to implement linear transformations of these matrices. Our goal is to output a quantum state that is δ-close to |x⟩, the state encoding the solution to the underlying regularized least squares problem. Owing to the generality of the block-encoding framework, our algorithms are applicable to a variety of input models.

In order to obtain our results, we work with approximate block-encodings of matrices and apply robust quantum singular value transformations (QSVT) to them. For most quantum linear algebra applications using QSVT, access to a perfect block-encoding is assumed, and the robustness of such algorithms is left unaddressed. We develop a space-efficient variable-time amplitude amplification (VTAA) procedure, which we then use in a variable-time matrix inversion algorithm. A crucial requirement of the variable-time matrix inversion algorithm is that the inversion procedure be applied only to the portion of the input state spanned by singular values larger than a certain threshold.
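The taming of the condition number can be seen directly in the ridge case L = I: the eigenvalues of A^T A + λI are σ_i² + λ, so κ drops from σ_max²/σ_min² to (σ_max² + λ)/(σ_min² + λ). A tiny numerical illustration (the squared singular values and λ below are made-up toy values):

```python
# How ℓ2 regularization tames the condition number (ridge case L = I):
# eigenvalues of A^T A + λI are σ_i² + λ, so the condition number of
# the matrix to be inverted shrinks dramatically when σ_min is tiny.
# The numbers below are toy values for illustration.

sigmas_sq = [1.0, 1e-6]   # squared singular values of a hypothetical A
lam = 0.01                # regularization parameter λ

kappa_plain = max(sigmas_sq) / min(sigmas_sq)
kappa_reg = (max(sigmas_sq) + lam) / (min(sigmas_sq) + lam)
print(kappa_plain)  # ~1e6: A^T A alone is nearly singular
print(kappa_reg)    # ~101: well conditioned after adding λI
```

Since quantum linear systems algorithms scale with the condition number of the matrix being inverted, the drop from roughly 10⁶ to roughly 10² in this toy example is exactly the kind of saving the abstract refers to.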
In order to achieve this, prior results have made use of Gapped Phase Estimation (GPE), a simple variant of the standard phase estimation procedure. However, GPE requires extra registers that store the estimates of the phases, which are never used in the variable-time algorithm. In this work, instead of GPE, we make use of a robust version of quantum eigenvalue discrimination (QEVD) using QSVT, which decides whether an eigenvalue of a matrix is above or below a certain threshold without storing the eigenvalue estimates. Given the block-encoding of a matrix A with condition number κ, this procedure for implementing A⁺ reduces the number of additional qubits required by our variable-time algorithm by a factor of O(log²(κ/δ)) as compared to prior results, where δ is the desired accuracy. We show that the precision required in the block-encoding of A in order to implement A⁺ is determined in turn by the precision of the QEVD procedure. Consequently, this improves both variable-time amplitude amplification and the quantum linear systems algorithm: we achieve optimal complexity (linear dependence on the condition number and polylogarithmic dependence on the inverse accuracy) using far fewer ancilla qubits. We then use this to develop our algorithms for ordinary, weighted, and generalized least squares with general ℓ2-regularization.

The primitives we have developed, and our approach to designing the algorithms, might be of independent interest to the community, especially in the domains of quantum machine learning (QML) and quantum algorithms for the natural sciences, where such quantum linear algebra subroutines (QLAS) are frequently used.

Full thesis: pdf

Centre for Security, Theory and Algorithms
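The role QEVD plays, deciding per singular value whether it lies above or below a threshold so that only the well-conditioned part of the state is inverted, has a simple classical analogue: a thresholded pseudo-inverse. A sketch on a diagonal matrix, where the idea is transparent (the singular values and threshold below are illustrative, not from the thesis):

```python
# Classical analogue of the variable-time inversion requirement:
# invert only singular values above a threshold and project out the
# rest -- the above/below decision is what QEVD provides coherently,
# without storing eigenvalue estimates. Diagonal A keeps the
# arithmetic transparent; the values below are illustrative.

def thresholded_pinv_diag(sigmas, threshold):
    # Entrywise pseudo-inverse of a diagonal matrix diag(sigmas):
    # 1/σ when σ is above the threshold, 0 otherwise.
    return [1.0 / s if s >= threshold else 0.0 for s in sigmas]

sigmas = [2.0, 0.5, 1e-8]                   # last one is ill-conditioned
print(thresholded_pinv_diag(sigmas, 1e-3))  # → [0.5, 2.0, 0.0]
```

Inverting the 10⁻⁸ singular value would blow the effective condition number (and hence the runtime) up by orders of magnitude; thresholding it away is what keeps the complexity linear in κ.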

Copyright © 2009 IIIT Hyderabad. All Rights Reserved.