Jeroen Zuiddam - Discreteness of asymptotic tensor ranks - IPAM at UCLA
Presenter
February 7, 2024
Event: Tensor Networks Workshop
Abstract
Recorded 07 February 2024. Jeroen Zuiddam of the University of Amsterdam presents "Discreteness of asymptotic tensor ranks" at IPAM's Tensor Networks Workshop.
Abstract: Tensor parameters that are amortized or regularized over large tensor powers, often called "asymptotic" tensor parameters, play a central role in several areas including algebraic complexity theory (constructing fast matrix multiplication algorithms), quantum information (entanglement cost and distillable entanglement), and additive combinatorics (bounds on cap sets, sunflower-free sets, etc.). Examples are the asymptotic tensor rank, asymptotic slice rank and asymptotic subrank. Recent works (Costa-Dalai, Blatter-Draisma-Rupniewski, Christandl-Gesmundo-Zuiddam) have investigated notions of discreteness (no accumulation points) or "gaps" in the values of such tensor parameters.
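For context, a brief sketch of the standard definitions that the abstract takes for granted: the subrank Q(T) of a tensor T is the largest r such that the r-by-r-by-r diagonal tensor can be obtained from T by applying linear maps to the three tensor factors, and the asymptotic subrank is its regularization over tensor powers,

\[ \widetilde{Q}(T) \;=\; \lim_{n \to \infty} Q\!\left(T^{\otimes n}\right)^{1/n}, \]

with the asymptotic rank and asymptotic slice rank defined analogously from the tensor rank and the slice rank.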
We prove a general discreteness theorem for asymptotic tensor parameters of order-three tensors and use this to prove that (1) over any finite set of coefficients (in any field), the asymptotic subrank and the asymptotic slice rank have no accumulation points, and (2) over the complex numbers, the asymptotic slice rank has no accumulation points.
Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized. The first lower bound says that the asymptotic subrank of any concise three-tensor is at least the cube-root of the smallest dimension. The second lower bound says that any three-tensor that is "narrow enough" (has one dimension much smaller than the other two) has maximal asymptotic subrank.
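In symbols (a hedged paraphrase of the two bounds as stated above; the paper's precise hypotheses may differ): if T is a concise tensor in F^{n_1} ⊗ F^{n_2} ⊗ F^{n_3} with n_1 ≤ n_2 ≤ n_3, then

\[ \widetilde{Q}(T) \;\ge\; n_1^{1/3}, \]

and when n_1 is sufficiently small compared to n_2 and n_3 (the "narrow enough" regime), the asymptotic subrank is maximal,

\[ \widetilde{Q}(T) \;=\; n_1. \]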
Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions. We prove that for any concise tensor the product of any two such maximum ranks must be large, and as a consequence there are always two distinct directions with large max-rank.
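In notation (again a hedged paraphrase, leaving the exact quantitative bound to the paper): for each direction a ∈ {1, 2, 3}, consider the matrix subspace spanned by the slices of T in that direction and its maximum rank,

\[ r_a(T) \;=\; \max\{\operatorname{rank}(M) : M \in \operatorname{span}(\text{slices of } T \text{ in direction } a)\}. \]

The claim is that for concise T the product r_a(T)\, r_b(T) is large in terms of the dimensions for every pair a ≠ b, so at least two of the three max-ranks must themselves be large.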
This is joint work with Jop Briët, Matthias Christandl, Itai Leigh, and Amir Shpilka.
https://arxiv.org/abs/2306.01718
Learn more online at: https://www.ipam.ucla.edu/programs/workshops/tensor-networks/