Algorithmic Thresholds for the Ising Perceptron
Presenter
May 22, 2024
Abstract
The Ising perceptron is a toy model of a single-layer neural network, which can be viewed as a random constraint satisfaction problem with dense connectivity. The problem asks for a length-N vector with +1 or -1 entries such that its product with a random Gaussian M by N matrix is entrywise lower bounded. We work in the regime where both M and N grow large with a fixed ratio M/N = alpha. In this talk, we present recent findings on the algorithmic aspects of the Ising perceptron. Inspired by the rich literature on discrepancy minimization, we propose an efficient algorithm that improves on previous algorithms, succeeding at larger values of alpha. We also show that for sufficiently large alpha, the Ising perceptron exhibits the multi-Overlap Gap Property (m-OGP), which matches our algorithmic threshold up to poly-logarithmic factors. This is based on joint work with Tselil Schramm and Kangjie Zhou.
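As an illustration, the search problem described above can be sketched on a toy instance. This is a minimal brute-force demo, not the talk's algorithm; the 1/sqrt(N) normalization and the margin value kappa = 0 are assumed conventions, and the small N, M are chosen only so exhaustive search is feasible (the talk's regime has M, N large with M/N = alpha fixed).

```python
import itertools
import math
import numpy as np

# Toy Ising perceptron instance (assumed conventions: normalize by sqrt(N),
# margin kappa = 0; sizes kept tiny so brute force over 2^N vectors works).
rng = np.random.default_rng(0)
N, M = 10, 5              # alpha = M/N = 0.5
kappa = 0.0
G = rng.standard_normal((M, N))  # random Gaussian M x N matrix

# Count sign vectors x in {-1, +1}^N with (G @ x) / sqrt(N) >= kappa entrywise.
solutions = 0
for signs in itertools.product((-1.0, 1.0), repeat=N):
    x = np.array(signs)
    if np.all(G @ x / math.sqrt(N) >= kappa):
        solutions += 1

print("satisfying sign vectors:", solutions, "out of", 2 ** N)
```

The algorithmic question in the talk is whether a solution can be found in polynomial time, and for how large an alpha, rather than by the exponential-time enumeration used here.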