Geometric data structures have been extensively studied in the regime where the dimension is much smaller than the number of input points. But in many machine-learning scenarios, the dimension can be so much larger than the number of points that the data structure cannot even afford to read and store all coordinates of the input and query points.
Inspired by these scenarios and by related studies in feature selection and explainable clustering, we initiate the study of geometric data structures in this ultra-high dimensional regime. Our focus is the {\em approximate nearest neighbor} problem. In this problem, we are given a set of $n$ points $C\subseteq \mathbb{R}^d$ and must produce a {\em small} data structure that can {\em quickly} answer the following query: given $q\in \mathbb{R}^d$, return a point $c\in C$ that is approximately nearest to $q$, where distance is measured in the $\ell_1$, $\ell_2$, or another norm. Many groundbreaking $(1+\epsilon)$-approximation algorithms have recently been discovered for $\ell_1$- and $\ell_2$-norm distances in the regime where $d\ll n$.
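For concreteness, the following Python sketch (an illustrative baseline only, not one of our data structures) shows the trivial exact solution: it stores all $nd$ coordinates and scans every coordinate of every point at query time, i.e.\ $\Theta(nd)$ space and $\Theta(nd)$ query time, precisely the costs that the ultra-high dimensional regime cannot afford.
\begin{verbatim}
# Illustrative brute-force baseline (not this paper's data structure):
# stores all n*d coordinates and reads every coordinate of the query,
# i.e. Theta(nd) space and Theta(nd) time per query.
import numpy as np

def build(C):
    return np.asarray(C, dtype=float)        # preprocessing = storing C

def query(store, q):
    dists = np.linalg.norm(store - np.asarray(q, dtype=float), axis=1)
    return int(np.argmin(dists))             # index of the exact l2-nearest point

C = [[0, 0, 0, 0], [1, 1, 1, 1], [5, 5, 5, 5]]   # n = 3 points, d = 4
print(query(build(C), [1, 1, 0, 1]))             # -> 1
\end{verbatim}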
The main question in this paper is: {\em Is there a data structure with sublinear ($o(nd)$) space and sublinear ($o(d)$) query time when $d\gg n$?}
We answer this question affirmatively, presenting $(1+\epsilon)$-approximation data structures with the following guarantees.
\begin{itemize}
\item For $\ell_1$- and $\ell_2$-norm distances: $\tilde O(n \log(d)/\mathrm{poly}(\epsilon))$ space and $\tilde O(n/\mathrm{poly}(\epsilon))$ query time.
\item For $\ell_p$-norm distances: $\tilde O(n^2 \log(d) (\log\log (n)/\epsilon)^p)$ space and $\tilde O\left(n(\log\log (n)/\epsilon)^p\right)$ query time.
\end{itemize}
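To build intuition for why $o(d)$ query time is possible at all, the sketch below (a toy heuristic, not the construction behind the bounds above) estimates squared $\ell_2$ distances from a uniformly sampled set of $k\ll d$ coordinates. The estimator is unbiased, but unlike our data structures it offers no worst-case $(1+\epsilon)$ guarantee, since a few heavy coordinates can be missed entirely.
\begin{verbatim}
# Toy heuristic only (NOT the construction achieving the bounds above):
# uniformly sample k of the d coordinates and rescale. The estimate
# (d/k) * sum_{i in S} (c_i - q_i)^2 is unbiased for ||c - q||_2^2,
# but it carries no worst-case (1+eps) guarantee.
import numpy as np

def approx_nearest(C, q, k, seed=0):
    C = np.asarray(C, dtype=float)
    q = np.asarray(q, dtype=float)
    n, d = C.shape
    S = np.random.default_rng(seed).choice(d, size=k, replace=False)
    est = (d / k) * np.sum((C[:, S] - q[S]) ** 2, axis=1)
    return int(np.argmin(est))               # only k coordinates of q were read

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 10_000))             # n = 3 points, d = 10,000
q = C[2] + 0.01 * rng.normal(size=10_000)    # query close to C[2]
print(approx_nearest(C, q, k=200))           # very likely -> 2
\end{verbatim}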