The coined discrete-time quantum walk is a model of quantum computation and a discrete-time dynamical system that has been extensively studied by the community. It is the quantum analog of the classical random walk. One of its most popular applications is searching for an element in an unstructured database. It works similarly to the well-known Grover algorithm and achieves similar results. There are, however, some behaviors that differentiate this quantum-walk-based search from a Grover search. Those differences encourage its use for structured searching in a graph. In this talk, I will first introduce quantum walks and quantum-walk-based search. Then, I will show how this can be extended to structured searching and distributed computing.
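For reference (not part of the abstract), here is a minimal numpy sketch of a coined discrete-time quantum walk on a cycle; the cycle size, Hadamard coin, and number of steps are illustrative choices.

```python
import numpy as np

N, steps = 16, 10                            # cycle size and number of steps (illustrative)

# Hadamard coin acting on the 2-dimensional coin register
coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Basis ordering: index = 2*vertex + coin_state
S = np.zeros((2 * N, 2 * N))                 # conditional shift operator
for v in range(N):
    S[2 * ((v - 1) % N), 2 * v] = 1          # coin |0>: step left
    S[2 * ((v + 1) % N) + 1, 2 * v + 1] = 1  # coin |1>: step right

U = S @ np.kron(np.eye(N), coin)             # one walk step: coin toss, then shift

psi = np.zeros(2 * N, dtype=complex)
psi[0] = 1.0                                 # walker at vertex 0, coin |0>
for _ in range(steps):
    psi = U @ psi

position_probs = (np.abs(psi) ** 2).reshape(N, 2).sum(axis=1)
print(np.round(position_probs, 3))           # asymmetric spread, unlike a classical random walk
```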
This presentation explores the implications of the intersection of quantum computing and cryptography, concentrating on security-proof aspects. Specifically, we examine situations in which a classical security proof may not be valid in the presence of a quantum attacker. As examples, we discuss security proofs in the quantum random oracle model and in the superposition-access model. In addition, we explore how classical security definitions can be adapted to a post-quantum world.
In this talk, I will present the research activities undertaken by our group on the electronic structure of molecular systems. In recent years, our efforts have spanned diverse and complementary facets: the development, implementation, and application of quantum chemistry methodologies. These approaches have been instrumental in dissecting an extensive array of molecular systems, probing their inherent properties, and unraveling their responses to stimuli such as electromagnetic radiation, external magnetic fields, and mechanical perturbations. The talk will provide a comprehensive overview of our progress in several key areas, including the development of quantum chemistry methodologies tailored for both ground and excited states, the introduction of efficient quantum algorithms for quantum chemistry, the characterization of strongly correlated molecules, the study of intricate molecular photophysical processes, and the computational study of magnetic properties of high-spin molecules.
In recent years, quantum technologies have experienced rapid growth and maturation. As quantum devices become capable of specific tasks, ensuring their proper functioning is crucial, necessitating reliable certification techniques. Certification has become one of the most important topics in the field, as it addresses concerns related to noise and decoherence and ensures that devices behave according to their blueprints. In this talk I will discuss possible answers to the question: how can certification methods that rely on the robustness of quantum correlations be applied to quantum computing platforms? Self-testing, the most important primitive for device-independent certification, is constructed within the framework of the Bell scenario, which involves two or more spatially separated parties. While this setup is advantageous for demonstrating foundational proofs of quantumness, its application to computing platforms poses challenges, since such platforms are inherently integrated devices and therefore incompatible with Bell-type scenarios. I will describe two approaches to dealing with this problem. In the first, I give some answers stemming from the use of quantum homomorphic encryption to bypass the locality requirement. In the second, I describe a plethora of self-testing results that can be proven when some amount of communication is allowed among the parties.
Towards “Good-Enough” Quantum Low-Density Parity-Check Codes. The accumulation of errors in quantum computers obstructs the execution of powerful algorithms. Hence, current quantum computers require Quantum Error Correction (QEC) in order to be functional and reliable even in the presence of such errors, a regime known as Fault-Tolerant Quantum Computation (FTQC). Conventional strategies for protecting against and correcting qubit errors, such as surface codes, suffer from large overheads in the number of physical qubits used to encode logical information. In other words, practical implementation of quantum error-correcting codes requires a better ratio between the number of logical and physical qubits, i.e., a high encoding rate. Quantum Tanner codes have been shown to be optimal in the asymptotic limit thanks to their rich structure, providing a potential solution to this conundrum. However, “good-enough” finite explicit constructions have not been found so far. In this work, we reformulate Tanner codes as unconventional lattice gauge theories describing spin many-body systems. Preliminary results indicate the potential for an unusually robust relative of topologically ordered matter as a key element behind the outstanding capabilities of this family of codes.
The k-distinctness problem is a major problem in quantum computing. The best centralized (i.e., non-distributed) quantum algorithms for k-distinctness are based on quantum walk search. In this work, we investigate the complexity of this problem in the distributed setting. By slightly modifying the algorithm of Le Gall and Magniez (PODC 2018) for network diameter computation in the quantum CONGEST model, we derive a distributed quantum algorithm that efficiently solves a natural distributed version of k-distinctness without using quantum walk search. By studying the complexity of a more general problem (k-subset-finding), we obtain a lower bound suggesting that quantum walk search might be a useful technique for solving certain difficult instances of this problem. Finally, we study what happens in the CONGEST-CLIQUE model and give a framework to transform a quantum parallel query algorithm into a distributed one in the CONGEST-CLIQUE model. If time allows, we will briefly discuss the multiplication of rectangular matrices in the CONGEST-CLIQUE model.
Quantum error correction (QEC) and quantum error mitigation (QEM) are the two most popular techniques to deal with errors occurring during a quantum computation. QEC comes with a polynomial asymptotic overhead in terms of qubit and gate count, but the overhead is massive in practice. For example, the factorization of RSA-2048 may require surface codes using 1,457 physical qubits per logical qubit. QEM is already used in today’s machines because it consumes only a few or no extra qubits. However, it is not a viable solution for large quantum circuits because the sampling cost, i.e., the number of circuit executions required, generally grows exponentially with the size of the circuit. In this work, we propose a scheme that fills the gap between the QEC and QEM regimes. This Clifford noise reduction (CliNR) scheme reduces the logical error rate of Clifford circuits in practically relevant regimes, at the price of comparatively small overheads. It can be made universal together with physical or logical single-qubit rotations. Based on joint work with Edwin Tham: https://arxiv.org/abs/2407.06583
For quantum error-correcting codes to be realizable, it is important that the qubits subject to the code constraints exhibit some form of limited connectivity. The works of Bravyi & Terhal (BT) and Bravyi, Poulin & Terhal (BPT) established that geometric locality constrains code properties; for instance, [[n,k,d]] quantum codes defined by local checks on the D-dimensional lattice must obey kd^{2/(D-1)} ≤ O(n). Baspin and Krishna studied the more general question of how the connectivity graph associated with a quantum code constrains the code parameters. These trade-offs apply to a richer class of codes than the BPT and BT bounds, which only capture geometrically local codes. We extend and improve this work, establishing a tighter dimension-distance trade-off as a function of the size of separators in the connectivity graph. We also obtain a distance bound that covers all stabilizer codes with a particular separation profile, rather than only LDPC codes. This talk is based on the following papers: 1. https://arxiv.org/abs/2106.00765 2. https://arxiv.org/abs/2109.10982 3. https://arxiv.org/abs/2307.03283
In this talk I will present the main result of the paper “Optimizing Strongly Interacting Fermionic Hamiltonians” by Matthew B. Hastings and Ryan O’Donnell, arXiv:2110.10701, STOC 2022, whose abstract is: “The fundamental problem in much of physics and quantum chemistry is to optimize a low-degree polynomial in certain anticommuting variables. Being a quantum mechanical problem, in many cases we do not know an efficient classical witness to the optimum, or even to an approximation of the optimum. One prominent exception is when the optimum is described by a so-called “Gaussian state”, also called a free fermion state. In this work we are interested in the complexity of this optimization problem when no good Gaussian state exists. Our primary testbed is the Sachdev–Ye–Kitaev (SYK) model of random degree-q polynomials, a model of great current interest in condensed matter physics and string theory, and one which has remarkable properties from a computational complexity standpoint. Among other results, we give an efficient classical certification algorithm for upper-bounding the largest eigenvalue in the q = 4 SYK model, and an efficient quantum certification algorithm for lower-bounding this largest eigenvalue; both algorithms achieve constant-factor approximations with high probability.”
Quantum Chemistry and Physics have been identified as key applications for quantum computers, and quantum algorithms have been designed to solve the Schrödinger equation using the wavefunction formalism. In this context, we have proposed a VQE-type algorithm specifically for the Hubbard model, particularly in the strongly interacting limit. However, the wavefunction formalism is still limited to small systems, as their size is constrained by the number of available qubits. Computations on larger systems primarily rely on mean-field-type approaches such as density functional theory, for which no quantum advantage has been envisioned so far. In this seminar, we will also challenge this assumption by proposing a counter-intuitive mapping from the non-interacting to an auxiliary interacting Hamiltonian that may provide the desired advantage.
In this talk we will discuss to what extent entanglement is a “feelable” (or efficiently observable) quantity of quantum systems. Inspired by recent work of Gheorghiu and Hoban, we define a new notion that we call “pseudoentanglement”: ensembles of efficiently constructible quantum states that hide their entanglement entropy. We show that such states exist in the strongest form possible while simultaneously being pseudorandom states. Consequently, we prove that there is no efficient algorithm for measuring the entanglement of an unknown quantum state, under standard cryptographic assumptions. We will talk about applications of this construction to diverse areas such as property testing and holography. We will then discuss recent constructions of so-called “public-key” pseudoentanglement: pseudoentangled quantum states that remain secure even if the adversary is given access to a quantum circuit that prepares them. As a corollary, we prove the existence of local Hamiltonians whose ground states are pseudoentangled.
Several well-known quantum algorithms provide up to, or exactly, quadratic speed-ups compared to their classical counterparts. Are these merely variants of Grover’s algorithm? Why do we hit this exact speed bump? In this survey talk we examine a range of algorithms, from quantum random walks to quantum versions of Monte Carlo methods, and explore these questions.
We propose a novel variational ansatz for the ground-state preparation of the ℤ₂ lattice gauge theory (LGT) in quantum simulators. It combines dissipative and unitary operations in a completely deterministic scheme with a circuit depth that does not scale with the size of the considered lattice. We find that, with very few variational parameters, the ansatz can achieve >99% precision in energy in both the confined and deconfined phases of the ℤ₂ LGT. We benchmark our proposal against the unitary Hamiltonian variational ansatz and find a clear advantage of our scheme, especially when focusing on the nature of the confinement-deconfinement transition of the ℤ₂ LGT. After performing a finite-size scaling analysis, we show that our dissipative variational ansatz can predict critical exponents with reasonable accuracy even for reduced qubit numbers and circuit depths. Furthermore, we investigate the performance of this variational eigensolver subject to circuit-level noise, determining variational error thresholds p_ℓ below which (p < p_ℓ) it would be beneficial to increase the number of layers ℓ ↦ ℓ′ > ℓ. In light of these quantities and of typical gate errors p in current quantum processors, we provide a detailed assessment of the prospects of our scheme to explore the ℤ₂ LGT on near-term devices. arXiv:2308.03618
This talk presents a new way to reduce the number of qubits needed to run Shor’s algorithm to factor RSA integers. To do so, we integrate a theoretical idea of May and Schlieper into an already existing optimisation due to Ekerå-Håstad. This integration is made possible by several arithmetic tricks. To learn more about it, come to the talk ;) Based on https://eprint.iacr.org/2024/222
Based on https://arxiv.org/abs/2311.13040.
The fundamental problem in coding theory is to determine the maximum size of an error-correcting code of given distance and block length. Here we approach this question by providing a semidefinite programming hierarchy to determine the existence of quantum codes with given parameters. The hierarchy is complete in the sense that every set of parameters for which no code exists is detected at some level of the hierarchy.
Scientific half-day of the Combalgo department, followed by a buffet in the atrium.
09h00-09h10 Cyril Gavoille (head of the department): Combalgo in 3 words
09h10-09h55 Clara Marcille (Graphs and Optimization team): From antimagic to equitable labellings
09h55-10h40 Felix Huber (Quantum team): Quantum problems on graphs
10h40-11h00 Coffee, tea and cake break
11h00-11h45 Sébastien Bouchard (Distributed Algorithms team): Distributed Systems of Mobile Agents and Verification
11h45-12h30 Sébastien Labbé (Combinatorics and Applications team): Aperiodic tilings and experiments in the SageMath environment
12h30-14h00 Buffet in the atrium
We provide a systematic method for nonlinear entanglement detection based on trace polynomial inequalities. In particular, this allows us to employ multipartite witnesses for the detection of bipartite states, and vice versa. We identify pairs of entangled states and witnesses for which linear detection fails but nonlinear detection succeeds. With the trace polynomial formulation, a great variety of witnesses arises from immanant inequalities, which can be implemented in the laboratory through the randomized measurements toolbox.
In this talk, I will introduce quantum isomorphisms through the isomorphism game. I will then define group-invariant quantum Latin squares and their connection to quantum isomorphisms. We will see some representation theory and finally, I will introduce a method of constructing Cayley graphs that are quantum isomorphic.
At first sight, dissipative processes are the source of quantum decoherence and limit the timescale over which a given system can be controlled. On the other hand, from a control point of view, the availability of dissipative processes also opens new avenues, in particular regarding the stabilization of quantum systems. This strategy, known as quantum reservoir engineering, can be traced back to the seminal work of Alfred Kastler on optical pumping. In this talk, we present how to exploit reservoir engineering to stabilize and control Gottesman-Kitaev-Preskill (GKP) qubits, a bosonic encoding exploiting exotic states of light or matter to reduce the hardware cost of quantum error correction. We propose a novel approach relying on nonlinear modular interactions with a dissipative auxiliary system to autonomously stabilize the GKP code. This approach robustly suppresses local noise processes; unlike previous proposals, it also suppresses the propagation of noise from the auxiliary system used for dissipation engineering itself. In a state-of-the-art experimental setup based on superconducting circuits, we estimate that the encoded qubit lifetime could extend several orders of magnitude beyond break-even.
Finding a good approximation of the top eigenvector of a given d × d matrix A is a basic and important computational problem, with many applications. We give two different quantum algorithms that, given query access to the entries of A and assuming a constant eigenvalue gap, output a classical description of a good approximation of the top eigenvector: one algorithm with time complexity d^{1.5+o(1)} and one with time complexity \tilde{O}(d^{1.75}) that has a slightly better dependence on the precision of the approximation. Both provide a polynomial speed-up over the best-possible classical algorithm, which needs Ω(d^2) queries to entries of A (and hence Ω(d^2) time). We extend this to a quantum algorithm that outputs a classical description of the subspace spanned by the top-q eigenvectors in time qd^{1.5+o(1)}. We also prove a nearly-optimal lower bound of \tilde{Ω}(d^{1.5}) on the quantum query complexity of approximating the top eigenvector. Our quantum algorithms run a version of the classical power method that is robust to certain benign kinds of errors, where we implement each matrix-vector multiplication with small and well-behaved error on a quantum computer, in different ways for the two algorithms.
Our first algorithm uses block-encoding techniques to compute the matrix-vector product as a quantum state, from which we obtain a classical description via a new time-efficient unbiased pure-state tomography algorithm that has essentially optimal sample complexity O(d log(d)/ε^2) and comes with improved statistical properties compared to earlier pure-state tomography algorithms. Our second algorithm estimates the matrix-vector product one entry at a time, using a new “Gaussian phase estimation” procedure. We also develop a time-efficient process-tomography algorithm for reflections around bounded-rank subspaces, providing the basis for our top-eigensubspace estimation application. This is joint work with Ronald de Wolf and András Gilyén.
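To make the underlying classical routine concrete, here is a small numpy sketch of the power method in which each matrix-vector product is perturbed by a small error, standing in for the approximate products that quantum subroutines would provide; the test matrix, noise level, and iteration count are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, iters, noise = 200, 60, 1e-3            # illustrative dimension, iteration count, per-product error

# Symmetric test matrix with a constant eigenvalue gap (top eigenvalue 1, the rest below 0.5)
eigvals = np.concatenate(([1.0], 0.5 * rng.random(d - 1)))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(eigvals) @ Q.T

v = rng.standard_normal(d)
v /= np.linalg.norm(v)
for _ in range(iters):
    w = A @ v + noise * rng.standard_normal(d)   # matrix-vector product with a small benign error
    v = w / np.linalg.norm(w)

print("overlap with true top eigenvector:", abs(v @ Q[:, 0]))   # close to 1 despite the noise
```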
We study the notion of k-stabilizer universal quantum state, that is, an n-qubit quantum state such that it is possible to induce any stabilizer state on any k qubits by using only local operations and classical communication. These states generalize the notion of k-pairable states introduced by Bravyi et al., and can be studied from a combinatorial perspective using graph states and k-vertex-minor universal graphs. First, we demonstrate the existence of k-stabilizer universal graph states that are optimal in size, with n = Θ(k^2) qubits. We also provide parameters for which a random graph state on Θ(k^2) qubits is k-stabilizer universal with high probability. Our second contribution consists of two explicit constructions of k-stabilizer universal graph states on n = O(k^4) qubits. Both rely upon the incidence graph of the projective plane over a finite field 𝔽_q. This provides a major improvement over the previously known explicit construction of k-pairable graph states with n = O(2^{3k}), bringing forth a new and potentially powerful family of multipartite quantum resources. Based on https://arxiv.org/abs/2402.06260.
In this talk, I will give an overview of the algorithmic aspects of quantum topology, a field of mathematics studying the topology of low-dimensional objects (knots, 3-manifolds) with algebraic tools originally designed for quantum physics. On the algorithmic side, early results show that the computation of so-called quantum topological invariants reveals rich dichotomy properties in parameterized complexity, randomized approximation, and quantum computing. I will present some results in the field and highlight how the diversity of tools offered by low-dimensional topology has promising applications in standard quantum computing.
We will watch and comment on the video https://www.youtube.com/watch?v=MnxQ3oqTWgQ by Ryan O’Donnell.
A new information-theoretic condition is presented for reconstructing a discrete random variable X based on the knowledge of a set of discrete functions of X. The reconstruction condition is derived from Shannon’s 1953 lattice theory, with the two entropic metrics of Shannon and Rajski. Because this theoretical material is relatively little known and appears quite dispersed across different references, we first provide a synthetic description (with complete proofs) of its concepts, such as total, common and complementary informations. Definitions and properties of the two entropic metrics are also fully detailed and shown to be compatible with the lattice structure. A new geometric interpretation of this lattice structure is then investigated, leading to a necessary (and sometimes sufficient) condition for reconstructing the discrete random variable X given a set {X1,…,Xn} of elements in the lattice generated by X. Finally, this condition is illustrated in five specific examples of perfect reconstruction problems: reconstruction of a symmetric random variable from the knowledge of its sign and absolute value, reconstruction of a word from a set of linear combinations, reconstruction of an integer from its prime signature (fundamental theorem of arithmetic) and from its remainders modulo a set of coprime integers (Chinese remainder theorem), and reconstruction of the sorting permutation of a list from a minimal set of pairwise comparisons.
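To illustrate one of the perfect reconstruction examples above (the Chinese remainder theorem case), here is a minimal Python sketch that recovers an integer from its remainders modulo pairwise coprime moduli; the moduli and the integer are arbitrary choices for illustration.

```python
from math import prod

def crt_reconstruct(remainders, moduli):
    """Recover x mod prod(moduli) from the residues x mod m_i, for pairwise coprime m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse (Python >= 3.8)
        x %= M
    return x

moduli = [3, 5, 7, 11]                 # pairwise coprime moduli (illustrative)
secret = 823                           # any integer below prod(moduli) = 1155
remainders = [secret % m for m in moduli]
assert crt_reconstruct(remainders, moduli) == secret
```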
9h00 - Yassine Hamoudi (LaBRI, CNRS Bordeaux): The NISQ Complexity of Collision Finding
10h00 - Maël Luce (IRIF, Univ. Paris-Cité): Distributed Grover Algorithm
11h00 - Simon Apers (IRIF, CNRS Paris): Quantum speedups for linear programming via interior point methods
9h00 - Félix Huber (LaBRI, Univ. Bordeaux): Semidefinite programming bounds for quantum codes
10h00 - Wouter Rozendaal (IMB, Univ. Bordeaux): A Renormalisation Decoder for Kitaev’s Toric Code
11h00 - Jens Siewert (UPV/EHU, Leioa, Spain): Correlation constraints and the Bloch geometry of two-party systems
14h00 - Simon Martiel (IBM Quantum, Paris): Shallower CNOT circuits on realistic quantum hardware
15h00 - Arthur Braida (ATOS & Univ. Orléans): Tight Lieb-Robinson bound for approximation ratio in Quantum Annealing
16h00 - Ion Nechita (CNRS Toulouse): Monogamy of highly symmetric states
14h00 - Yixin Shen (INRIA Rennes): Finding many Collisions via Reusable Quantum Walks
15h00 - Stéphane Dartois (CEA LIST, Univ. Paris-Saclay): Geometric multipartite entanglement and injective norm of uniform quantum states
16h00 - Leonardo Novo (INL Braga, Portugal): Quantum search and optimization with continuous-time quantum walks
The Maximum Mean Discrepancy (MMD) measures the distance between probability distributions. It is computed using kernel mean embeddings, which represent the distributions in higher-dimensional spaces equipped with an inner product, and then compute the distance in that space [1]. The swap test [2] is a technique for estimating inner products with a quantum computer. By defining the Quantum Mean Embedding (QME), the swap test makes the computational complexity of estimating the MMD independent of the number of samples [3], in contrast to the classical case where O(N²) computations are required for N samples. If the QME can be prepared with complexity linear in the number of samples, a complexity gain becomes possible [3]. I will present an implementation of the Gaussian kernel in terms of the QME, which could enable such a gain. References: [1] K. Muandet, K. Fukumizu, B. Sriperumbudur, B. Schölkopf, Kernel Mean Embedding of Distributions: A Review and Beyond (2017) [2] H. Buhrman, R. Cleve, J. Watrous, and R. de Wolf, Phys. Rev. Lett. 87, 167902 (2001) [3] J. M. Kübler, K. Muandet, B. Schölkopf, Phys. Rev. Research 1, 033159 (2019)
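For reference, here is a minimal numpy sketch of the classical MMD estimate with a Gaussian kernel, whose cost is quadratic in the number of samples; the distributions, bandwidth, and sample sizes are illustrative, and the quantum mean embedding itself is not simulated here.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two batches of 1-D samples."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))

def mmd_squared(x, y, sigma=1.0):
    """Biased estimate of MMD^2; costs O(N^2) kernel evaluations."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)       # samples from N(0, 1)
y = rng.normal(0.5, 1.0, size=500)       # samples from N(0.5, 1)
print(mmd_squared(x, y))                 # noticeably larger than for two samples of the same law
```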
I will explain how quantum algorithms can compute the Nash equilibrium of zero-sum games more efficiently than classical algorithms. In particular, I will describe a quantum state preparation technique for sampling from any discrete probability distribution. The talk will highlight some common features and limitations in current quantum algorithms for solving optimization problems with rigorous guarantees.
In San Sebastián, the education department of the Basque Country government has acquired a 127-qubit quantum computer from IBM. This second talk will present some of the scientific activities at the university in San Sebastián related to this investment.
A scanning tunnelling microscope (STM) can drive the spin evolution of a single atom, molecule or nanostructure on a solid surface [1]. A tunnelling electron current is localized on a single atom, while the bias is modulated at microwave frequencies. The measured current shows excitations attributed to electron spin resonance (ESR). We have developed software to simulate virtually any instance of ESR-STM under realistic conditions of temperature, external fields, and conductance [2-5]. Here, we suggest the realization of the usual quantum circuit, made up of one Hadamard and one CNOT gate, that converts a product state into a Bell state. The quantum circuit is simulated through the interaction of two Ti atoms on an MgO substrate driven by a scanning tunnelling microscope. The first step of the sequence (a Hadamard gate) acts on the first spin. The evolution onto the final state demonstrates the realization of the Hadamard gate, plus current-induced decoherence if a longer evolution time is used. To characterize our gate, we evaluate its fidelity with respect to the desired Bell state. This allows us to discuss the performance of the simulated gate using tunnelling currents and solid-state-hosted spins. References: [1] K. Yang et al., Science 366, 509 (2019) [2] J. Reina-Gálvez et al., Phys. Rev. B 100, 035411 (2019) [3] J. Reina-Gálvez, Lorente, Delgado, Arrachea, Phys. Rev. B 104, 245415 (2021) [4] J. Reina-Gálvez, Wolf, Lorente, Phys. Rev. B 107, 235404 (2023) [5] https://github.com/qphensurf
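For reference, the target circuit mentioned above (a Hadamard on the first spin followed by a CNOT) takes |00⟩ to a Bell state; below is a minimal numpy check of that ideal, noiseless circuit, independent of the ESR-STM simulation itself.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                     # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.array([1, 0, 0, 0], dtype=complex)       # product state |00>
bell = CNOT @ np.kron(H, I) @ psi0
print(np.round(bell, 3))                           # (|00> + |11>)/sqrt(2)

# Fidelity with the target Bell state, as used to characterize the simulated gate
target = np.array([1, 0, 0, 1]) / np.sqrt(2)
print("fidelity:", abs(target @ bell) ** 2)        # 1.0 for the ideal circuit
```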
This algorithm is an improvement of Shor’s algorithm for factorization. Although the number of qubits is the same, O(n), the number of gates is reduced from O(n^2) to O(n^(3/2)).
Recently, Apers and Piddock [TQC ‘23] strengthened the natural connection between quantum walks and electrical networks by considering Kirchhoff’s Law and Ohm’s Law. In this talk, I will discuss the multidimensional electrical network by defining Kirchhoff’s Alternative Law and Ohm’s Alternative Law based on the novel multidimensional quantum walk framework by Jeffery and Zur [STOC ‘23]. This multidimensional electrical network allows one to sample from the electrical flow obtained via a multidimensional quantum walk algorithm and achieve exponential quantum-classical separations for certain graph problems. In analogy to the connection between the (edge-vertex) incidence matrix of a graph and Kirchhoff’s Law and Ohm’s Law in an electrical network, we also rebuild the connection between the alternative incidence matrix and Kirchhoff’s Alternative Law and Ohm’s Alternative Law.
Reading report on arXiv:2308.07915 by Bravyi, Cross, Gambetta, Maslov, Rall and Yoder.
In this talk, we use the presentation of the Hybrid Quantum Initiative project to give an overview of research in quantum computing: building quantum computers, error correction, optimization, simulation, cryptanalysis, etc.
Start-of-year session and reading report on “Discrete Bulk Reconstruction” by Aaronson and Pollack.