However, in many cases $A$ and $B$ satisfy sparsity conditions, and one can hope for significant gains over the standard $O(n \log n)$-time FFT algorithm. The ideal goal is an $O(k \log k)$-time algorithm, where $k$ is the number of non-zero entries in the output, i.e., the size of the support of $A \star B$. This problem is referred to as \emph{sparse} nonnegative convolution; it has received considerable attention in the literature, and the fastest algorithms to date run in time $O(k \log^2 n)$.
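To make the problem statement concrete, here is a minimal Python sketch (not from the paper) that states sparse nonnegative convolution as code. The brute-force reference below runs in time proportional to the product of the input support sizes, which can be as large as $\Theta(k^2)$; the algorithms discussed here aim for $O(k \log k)$.

```python
# Illustrative only: the sparse nonnegative convolution problem, stated as code.
# A and B are sparse nonnegative vectors given as {index: value} dictionaries;
# the goal is to compute the sparse output C with C[i+j] += A[i] * B[j]
# in time near-linear in k = |support(C)|, rather than in the length n.

def sparse_convolution_naive(A: dict[int, float], B: dict[int, float]) -> dict[int, float]:
    """Brute-force reference; O(|A| * |B|) time, up to Theta(k^2)."""
    C: dict[int, float] = {}
    for i, a in A.items():
        for j, b in B.items():
            C[i + j] = C.get(i + j, 0.0) + a * b
    return C

# Example: input supports of size 3 and 2 yield an output support of size k <= 6.
A = {0: 1.0, 5: 2.0, 100: 1.0}
B = {0: 3.0, 7: 1.0}
print(sparse_convolution_naive(A, B))
# {0: 3.0, 7: 1.0, 5: 6.0, 12: 2.0, 100: 3.0, 107: 1.0}
```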
The main result of this paper is the first $O(k \log k)$-time algorithm for sparse nonnegative convolution. Our algorithm is randomized and assumes that the length $n$ and the largest entry of $A$ and $B$ are subexponential in $k$. Surprisingly, we can phrase our algorithm as a reduction from the sparse case to the dense case of nonnegative convolution, showing that, under some mild assumptions, sparse nonnegative convolution is equivalent to dense nonnegative convolution for constant-error randomized algorithms. Specifically, if $D(n)$ is the time to convolve two nonnegative length-$n$ vectors with success probability $2/3$, and $S(k)$ is the time to convolve two nonnegative vectors whose output has $k$ nonzero entries (again with success probability $2/3$), then $S(k) = O( D(k) + k (\log \log k)^2)$.
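The following hedged sketch illustrates the standard hashing idea underlying sparse-to-dense reductions of this kind; it is \emph{not} the paper's actual reduction. The key fact it demonstrates: if one folds the indices of $A$ and $B$ modulo a prime $p = \Theta(k)$ and convolves the two length-$p$ vectors densely (cyclically), the result equals the true sparse output folded modulo $p$, and nonnegativity guarantees that colliding terms add up rather than cancel. Recovering the actual indices and values from several such folds is where the real algorithmic work, and the paper's contribution, lies.

```python
# A hedged sketch (not the paper's reduction): fold indices modulo a prime p,
# then call a dense convolution oracle on the folded length-p vectors.

def fold(A: dict[int, float], p: int) -> list[float]:
    """Hash a sparse vector down to length p via i -> i mod p."""
    v = [0.0] * p
    for i, a in A.items():
        v[i % p] += a
    return v

def cyclic_convolve(u: list[float], v: list[float]) -> list[float]:
    """Dense cyclic convolution; in a real reduction this role is played
    by the D(k)-time dense convolution oracle (e.g., FFT-based)."""
    p = len(u)
    w = [0.0] * p
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            w[(i + j) % p] += a * b
    return w

A = {0: 1.0, 5: 2.0, 100: 1.0}
B = {0: 3.0, 7: 1.0}
p = 17  # in the reduction, a random prime of size Theta(k)
folded_output = cyclic_convolve(fold(A, p), fold(B, p))
# folded_output[r] = sum of (A * B)[s] over all s with s mod p == r;
# nonnegativity ensures no cancellation between colliding terms.
print(folded_output[5])  # 7.0: the true entries at 5 and 107 collide mod 17
```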
Our approach combines a variety of new techniques with older machinery from linear sketching and structured linear algebra, as well as new insights on linear hashing, arguably the most classical hash function.
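For concreteness, here is a minimal sketch of the classical linear hashing family mentioned above, in its textbook Carter--Wegman form $h(x) = ((ax + b) \bmod p) \bmod m$ with $p$ prime and $a, b$ random; the exact variant analyzed in the paper may differ. Collisions under this family are highly structured, which is what makes fine-grained insights about it useful when hashing the indices of sparse vectors.

```python
# The classical "linear hashing" family -- a minimal sketch, not necessarily
# the exact variant analyzed in the paper.

import random

def make_linear_hash(p: int, m: int):
    """Return a random h: {0,...,p-1} -> {0,...,m-1} of the form
    h(x) = ((a*x + b) mod p) mod m, with p prime and a, b random."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_linear_hash(p=(1 << 31) - 1, m=1024)  # 2^31 - 1 is a Mersenne prime
print(h(42), h(43))  # nearby keys usually land in distinct buckets
```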