This post walks through an unconventional proof that the Johnson-Lindenstrauss Lemma approximately preserves angles between vectors (more precisely, inner products). I came up with this proof for a homework problem in EECS 498, and I'm not actually sure if it's completely correct, but I think the idea is pretty cool.

Suppose we have independently chosen vectors $u, v \in \mathbb{R}^d$. We want to reduce these vectors to dimension $k \ll d$ while retaining important information. Specifically, we want to preserve the inner product between arbitrary vectors. Let's select a matrix $A \in \mathbb{R}^{k \times d}$ where each entry $A_{ij}$ is sampled independently from $\mathcal{N}(0, \frac{1}{k})$. Then, $Au$ is our vector in reduced dimension $k$. Multiplication by $A$ is a simple but surprisingly effective dimensionality reduction technique. We want to show that $|\langle Au, Av \rangle - \langle u, v \rangle| \le \epsilon$ with probability at least $1 - \delta$ (for a suitably large $k$). Note that we are just dealing with the standard inner product $\langle x, y \rangle = x^\top y$ here. In order to reach this result, we first want to show that $\mathbb{E}[\langle Au, Av \rangle] = \langle u, v \rangle$. Then, there is a simple variation of the Hoeffding bound that we can apply to bound the probability that $\langle Au, Av \rangle$ is within $\epsilon$ of $\langle u, v \rangle$. The latter step is mostly plug and chug and not really interesting, so I'm just going to show the first part and leave the second as an exercise for the reader (always wanted to say that).
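To make the setup concrete, here is a minimal NumPy sketch of the projection. The specific values of $d$ and $k$ are just illustrative choices of mine, not anything from the homework:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 1000, 200  # original and reduced dimensions (illustrative values)
u = rng.standard_normal(d)
v = rng.standard_normal(d)

# Projection matrix A: each entry sampled i.i.d. from N(0, 1/k)
A = rng.normal(loc=0.0, scale=np.sqrt(1.0 / k), size=(k, d))

# Vectors in the reduced dimension k
Au, Av = A @ u, A @ v

# The reduced inner product typically lands close to the original one
print(np.dot(Au, Av), np.dot(u, v))
```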

First, $\langle Au, Av \rangle = (Au)^\top (Av) = u^\top A^\top A\, v$. Now, I will show that

$$\mathbb{E}[A^\top A] = I.$$
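Before taking any expectations, note that the chain of equalities above is purely algebraic: for any fixed draw of $A$, $\langle Au, Av \rangle$ and $u^\top A^\top A\, v$ are the same number, so all of the randomness enters through $A^\top A$. A quick sanity check of this (with arbitrary sizes of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 4
u, v = rng.standard_normal(d), rng.standard_normal(d)
A = rng.normal(0.0, np.sqrt(1.0 / k), size=(k, d))

# <Au, Av> and u^T (A^T A) v agree exactly (up to floating point) for any fixed A
print(np.dot(A @ u, A @ v))
print(u @ (A.T @ A) @ v)
```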

Consider the diagonal elements in $A^\top A$. For any $i$, $(A^\top A)_{ii} = \sum_{\ell=1}^{k} A_{\ell i}^2$. But recall that each $A_{\ell i} \sim \mathcal{N}(0, \frac{1}{k})$, so $\mathbb{E}[A_{\ell i}^2] = \operatorname{Var}(A_{\ell i}) = \frac{1}{k}$. Then, in expectation, $(A^\top A)_{ii} = k \cdot \frac{1}{k} = 1$. Now consider any of the non-diagonal elements in $A^\top A$. For any distinct $i, j$, $(A^\top A)_{ij} = \sum_{\ell=1}^{k} A_{\ell i} A_{\ell j}$. The key thing to notice here is that every term in the sum is a product of two distinct cells of $A$. For fixed $\ell$, $\mathbb{E}[A_{\ell i} A_{\ell j}] = \mathbb{E}[A_{\ell i}]\, \mathbb{E}[A_{\ell j}]$ because each cell is sampled independently. And each entry has mean $0$, so $\mathbb{E}[A_{\ell i}]\, \mathbb{E}[A_{\ell j}] = 0$, so in expectation, each non-diagonal entry will end up being $0$. This shows $\mathbb{E}[A^\top A]$ is precisely the identity matrix, and thus

$$\mathbb{E}[\langle Au, Av \rangle] = \mathbb{E}[u^\top A^\top A\, v] = u^\top\, \mathbb{E}[A^\top A]\, v = u^\top v = \langle u, v \rangle.$$
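If you want to see this expectation empirically rather than on paper, here is a small Monte Carlo sketch (the dimensions and trial count are arbitrary choices of mine) that averages $A^\top A$ over many independent draws of $A$; the average settles toward the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, trials = 5, 50, 20000  # small d so the averaged matrix is easy to read

# Average A^T A over many independent draws of the projection matrix
acc = np.zeros((d, d))
for _ in range(trials):
    A = rng.normal(0.0, np.sqrt(1.0 / k), size=(k, d))
    acc += A.T @ A

print(np.round(acc / trials, 2))  # approximately the d x d identity matrix
```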

From there, we do some boring concentration bound stuff to complete the proof. Though this proof probably skipped over some important nuances, I thought its structure was really elegant and overall it just felt super satisfying to come up with.