5

Geometry of High Dimensions

From Euclid to Latent Spaces

300 BCE → 2024 · The Shape of Meaning

We live in three dimensions, yet AI thinks in thousands. The geometry that Euclid codified for flat planes extends into spaces with 768, 4096, even 12,288 dimensions — and in those vast spaces, meaning itself has shape. The journey from ancient geometry to modern latent spaces reveals that mathematics can describe worlds far beyond what our eyes can see.

The Timeline

Origin 300 BCE

Euclid of Alexandria

Euclid’s five postulates built all of geometry from the ground up. For 2,000 years, geometry meant Euclidean geometry. The fifth, the parallel postulate (in its equivalent Playfair form: through a point not on a line, exactly one parallel exists), seemed obvious, yet attempts to prove it would eventually shatter our understanding of space itself.

$$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$

Euclid’s distance formula extends directly to any number of dimensions. In 768-D space: $d = \sqrt{\sum_{i=1}^{768}(a_i - b_i)^2}$
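A quick numerical check of this claim, assuming nothing beyond NumPy: the 2-D distance formula applied verbatim to two 768-dimensional vectors (a common embedding width), computed two equivalent ways.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(768)
b = rng.standard_normal(768)

# Direct translation of d = sqrt(sum_i (a_i - b_i)^2)
d_formula = np.sqrt(np.sum((a - b) ** 2))

# The same quantity via NumPy's built-in Euclidean norm
d_norm = np.linalg.norm(a - b)

assert np.isclose(d_formula, d_norm)
```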

Breakthrough 1637

René Descartes

Descartes merged algebra and geometry by introducing coordinates. Every geometric shape became an equation; every equation became a shape. This union — analytic geometry — made it possible to do geometry with numbers, which is exactly what computers need.

$$x^2 + y^2 = r^2 \qquad\qquad y = mx + b$$

Descartes’ insight: geometry IS algebra with a different notation. This duality is why neural networks (algebra) can learn geometric structures.
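A minimal sketch of that duality, using NumPy: to find where the line $y = mx + b$ meets the circle $x^2 + y^2 = r^2$, substitute the line into the circle and solve the resulting quadratic $(1 + m^2)x^2 + 2mbx + (b^2 - r^2) = 0$. Pure algebra answers a geometric question.

```python
import numpy as np

r, m, b = 2.0, 1.0, 1.0

# Quadratic in x obtained by substituting y = m*x + b into x^2 + y^2 = r^2
roots = np.roots([1 + m**2, 2 * m * b, b**2 - r**2])

for x in roots:
    y = m * x + b
    # Each algebraic solution lies on both the line and the circle
    assert np.isclose(x**2 + y**2, r**2)
```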

Discovery 1830s

János Bolyai, Nikolai Lobachevsky, Carl Friedrich Gauss

What if Euclid’s parallel postulate is wrong? Lobachevsky and Bolyai independently discovered a consistent geometry in which infinitely many parallels pass through a point (hyperbolic); Riemann later showed that a geometry with no parallels at all (spherical/elliptic) is equally consistent. Gauss had privately reached similar conclusions years earlier but never published. This shattered the belief that Euclidean geometry was the only geometry, opening the door to curved spaces and, eventually, to the geometry of neural network loss landscapes.

Triangle angle sums: Euclidean, exactly $180°$; hyperbolic, less than $180°$; spherical, more than $180°$.


Hyperbolic geometry is now used in AI: Poincaré embeddings represent hierarchical data (like taxonomies) more efficiently than Euclidean space.
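A hedged sketch of the distance function behind Poincaré embeddings, on the Poincaré disk: $d(u,v) = \operatorname{arccosh}\!\left(1 + \frac{2\|u-v\|^2}{(1-\|u\|^2)(1-\|v\|^2)}\right)$. The key property for hierarchies is that distances stretch near the disk’s boundary, giving tree-like data exponentially more room.

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit disk."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u**2)) * (1 - np.sum(v**2))
    return np.arccosh(1 + num / den)

# Two pairs with the SAME Euclidean separation (0.1)...
near_origin = poincare_distance([0.0, 0.0], [0.10, 0.0])
near_boundary = poincare_distance([0.85, 0.0], [0.95, 0.0])

# ...but hyperbolic distance is far larger near the edge of the disk
assert near_boundary > near_origin
```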

Breakthrough 1854

Bernhard Riemann

In his legendary 1854 lecture, Riemann generalized geometry to spaces of any dimension and any curvature. A manifold is a space that locally looks flat (Euclidean) but globally can be curved. Riemann’s work led to Einstein’s general relativity — and today, the manifold hypothesis is central to understanding how AI organizes knowledge.

$$ds^2 = \sum_{i,j} g_{ij}\,dx^i\,dx^j$$
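A small check of Riemann’s line element, assuming only NumPy: in polar coordinates the flat plane has metric $g = \mathrm{diag}(1, r^2)$, so $ds^2 = dr^2 + r^2\,d\theta^2$. For a tiny step, the length computed from the metric tensor should agree with the ordinary Euclidean distance measured in Cartesian coordinates.

```python
import numpy as np

r, theta = 2.0, 0.7
dr, dtheta = 1e-6, 1e-6

# Length of the step according to the metric tensor g = diag(1, r^2)
g = np.diag([1.0, r**2])
step = np.array([dr, dtheta])
ds_metric = np.sqrt(step @ g @ step)

# The same step measured directly in Cartesian coordinates
def to_cartesian(r, t):
    return np.array([r * np.cos(t), r * np.sin(t)])

ds_cartesian = np.linalg.norm(
    to_cartesian(r + dr, theta + dtheta) - to_cartesian(r, theta)
)

# Agreement up to second-order terms in the step size
assert np.isclose(ds_metric, ds_cartesian, rtol=1e-4)
```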

Riemann was 27 when he revolutionized geometry. Einstein used Riemannian geometry six decades later for general relativity. AI uses it 170 years later for understanding data.

Unsolved 1957

Richard Bellman

Bellman coined the term “curse of dimensionality” for the way space becomes overwhelmingly empty as dimensions increase. In 100 dimensions, more than 99.99% of a hypercube’s volume lies in a thin shell, within 5% of the side length of the surface. Points that seem nearby in low dimensions are astronomically far apart in high dimensions. Data becomes sparse, distances between points become nearly indistinguishable, and traditional algorithms fail catastrophically.

$$V_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)} \to 0 \text{ as } d \to \infty \qquad \text{(volume of the unit } d\text{-ball)}$$
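Both effects are easy to verify numerically with nothing but the standard library: the interior of the 100-dimensional unit cube holds almost no volume, and the unit-ball volume $V_d = \pi^{d/2}/\Gamma(d/2+1)$ collapses toward zero.

```python
import math

# (1) Volume concentration in a hypercube [0, 1]^100:
d = 100
shell_thickness = 0.05                      # 5% margin on every face
interior = (1 - 2 * shell_thickness) ** d   # volume of the inner cube, 0.9^100
shell_fraction = 1 - interior
# interior is about 2.7e-5, so over 99.99% of the volume is in the shell

# (2) Volume of the unit d-ball via the Gamma function:
def unit_ball_volume(d):
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

# d = 2 recovers the familiar area pi; by d = 100 the volume is ~1e-40
for d in (2, 10, 100):
    print(d, unit_ball_volume(d))
```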

The curse of dimensionality says high-dimensional spaces are mostly empty. Yet LLMs thrive in spaces with thousands of dimensions. How? The answer is the manifold hypothesis.

AI Connection 2000s

Yann LeCun, Geoffrey Hinton, Yoshua Bengio

The manifold hypothesis proposes that real-world high-dimensional data (images, text, speech) actually lies on or near a much lower-dimensional manifold. Think of it like a crumpled sheet of paper in 3D — the paper itself is 2D. Neural networks learn to “uncrumple” this manifold, finding the true low-dimensional structure hidden in high-dimensional data.


A 512×512 RGB image lives in a space of 786,432 dimensions (512 × 512 pixels × 3 color channels). But the manifold of “natural images” is estimated to have an intrinsic dimension of only around 100. Neural networks learn this structure.

Discovery 2008–2018

Laurens van der Maaten, Leland McInnes

t-SNE (2008) and UMAP (2018) are dimensionality reduction algorithms that project high-dimensional data into 2D or 3D for visualization, approximately preserving local neighborhood structure on the manifold. They revealed that neural networks organize knowledge into clusters, hierarchies, and continuous spectra — showing that AI’s internal representations have beautiful, meaningful geometric structure.

$$q_{ij} = \frac{(1 + \|y_i - y_j\|^2)^{-1}}{\sum_{k \neq l}(1 + \|y_k - y_l\|^2)^{-1}}$$

When you visualize what GPT “sees” internally, t-SNE and UMAP reveal clusters: similar concepts group together, forming a geometric map of knowledge.
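The displayed $q_{ij}$ formula can be implemented directly: t-SNE’s low-dimensional similarities use a heavy-tailed Student-t kernel, $q_{ij} \propto (1 + \|y_i - y_j\|^2)^{-1}$, normalized over all pairs. This is a minimal sketch of that one ingredient, not a full t-SNE implementation.

```python
import numpy as np

def student_t_similarities(Y):
    """Pairwise similarities q_ij for low-dimensional points Y (n x 2)."""
    # Squared Euclidean distances between all pairs of rows
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)        # Student-t (Cauchy) kernel
    np.fill_diagonal(inv, 0.0)          # q_ii is defined to be 0
    return inv / inv.sum()              # normalize over all pairs

Y = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
Q = student_t_similarities(Y)
assert np.isclose(Q.sum(), 1.0)         # a probability distribution over pairs
assert Q[0, 1] > Q[0, 2]                # closer points get higher similarity
```

The kernel’s heavy tail is the design choice that lets dissimilar clusters spread far apart in the 2D map without distorting local neighborhoods.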

AI Connection 2020–2024

Modern LLM Research

The internal representations of LLMs form rich geometric structures. “King − Man + Woman ≈ Queen”, first observed in word2vec embeddings, is a geometric operation in embedding space. Researchers have found that concepts like truth/falsehood, past/future, and positive/negative correspond to directions in the latent space. The geometry of meaning is becoming a scientific discipline.

$$\vec{\text{king}} - \vec{\text{man}} + \vec{\text{woman}} \approx \vec{\text{queen}}$$
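A toy version of the analogy with tiny hand-made vectors (not real embeddings), whose two dimensions loosely encode royalty and gender; the vector names and values here are purely illustrative.

```python
import numpy as np

vectors = {
    "king":  np.array([0.9,  0.8]),
    "queen": np.array([0.9, -0.8]),
    "man":   np.array([0.1,  0.8]),
    "woman": np.array([0.1, -0.8]),
}

# king - man + woman: swap the gender component, keep royalty
target = vectors["king"] - vectors["man"] + vectors["woman"]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Nearest word by cosine similarity (excluding the query word, as is standard)
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(vectors[w], target))
assert best == "queen"
```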

Interpretability researchers, including teams at Anthropic, have found evidence that an LLM’s internal representations encode truthfulness approximately as a linear direction — something like a “compass for truth” in high-dimensional space.

The Thread That Connects

From Euclid’s flat plane to 12,288-dimensional latent spaces, geometry has always been about understanding shape and structure. AI has revealed that meaning itself has geometry — that words, concepts, and ideas occupy positions in vast mathematical spaces where distance equals similarity and direction equals relationship.

The Geometry Chain
$$\text{Euclid} \to \text{Riemann} \to \text{Manifolds} \to \text{Curse} \to \text{Embeddings} \to \text{Latent Spaces}$$
2,300 years of geometry, now shaping every AI representation.

Connections to Other Lectures

Logic & Computation · All Lectures · Number Theory & Encoding