Signals and Systems Part 2: Fourier Series

Lesson 1: The Inner Product



How much of my soup is carrots?

Imagine you are sitting down to eat a nice warm bowl of soup. That soup is made up of different things - let's say you're a vegetarian and your soup is made up of some broth, peas, and carrots. Maybe your soup happens to be 1/5 carrots, 1/5 peas, and 3/5 broth. We say that your soup can be "decomposed" into its "components" - namely broth, peas, and carrots. All we need to do in order to make some soup (if we ignore the whole cooking part) is just put together the ingredients in the right ratios.

Vectors and Soup

But it's not just soup that we can decompose into its parts - we can do it with mathematical objects like vectors, too. For example, let's take the vector \(\pmatrix{2 \\ 1}\):

This vector has two components - each along a different axis. To find the component along each direction, we can use the dot product. To find the component along the first axis (call it the \(x\)-axis if you like), we can just take the dot product with the vector \(\pmatrix{1 \\ 0}\). Here I'm using the notation \(\langle a, b\rangle \) to denote the dot product.

\(\langle \pmatrix{2 \\ 1}, \pmatrix{1 \\ 0} \rangle = 2*1 + 1*0 = 2\)
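As a quick sanity check, here's this computation sketched in Python with NumPy (NumPy isn't assumed anywhere in this lesson, it's just a convenient way to try it yourself):

```python
import numpy as np

# Our vector and the unit vector along the first (x) axis
v = np.array([2, 1])
x_hat = np.array([1, 0])

# The dot product picks out the component of v along x_hat
component = np.dot(v, x_hat)
print(component)  # 2
```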

Just as we can "decompose" our soup into "components", so can we with vectors, using the notion of the dot product. The dot product lets us answer the question "how much of each component is in my vector?"

Inner Product vs. Dot Product

The inner product is an extension of the idea of the dot product to things that aren't vectors (or at least, not at first glance). For regular real-valued vectors, it's exactly the same as the dot product. It tells us how much of something is in something else. For example, while this isn't, strictly speaking, mathematically correct, you could take the inner product of soup and carrots:

\( \langle soup, carrots \rangle \)

We can interpret this as asking: what fraction of my soup is carrots? If you followed the recipe above, the answer would be 1/5.

Inner Product with Functions

Just like with soup and vectors, we can use the inner product with functions, to figure out how much of one function is contained in another. This is easiest to describe if we actually do think of functions as vectors. For example, let's take the sinewave over a single period:

We could turn this into a vector by just taking the function's value at a bunch of closely spaced points. For example, sampling at evenly spaced points, our sinewave and vector would look like this:

If we wanted our vector to be an accurate representation of the function, we would want the points to be really closely spaced together:

In fact, ideally we'd want the points infinitely close together (but we'll come back to that later).
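Here's one way to carry out this sampling idea in Python with NumPy; the number of samples is just an illustrative choice:

```python
import numpy as np

# Sample one period of the sinewave at N evenly spaced points.
N = 10                      # number of samples (illustrative choice)
dx = 2 * np.pi / N          # spacing between samples
x = dx * np.arange(N)       # sample locations: 0, dx, 2*dx, ...
sine_vector = np.sin(x)     # the function, now an ordinary vector

print(sine_vector.shape)  # (10,)
```

More samples (a larger `N`) gives a vector that represents the function more faithfully.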

Now that we can turn functions into vectors, we can do fun things, like take their dot products. Let's take the dot product of our sinewave with a garden-variety square wave:

But what does taking the dot product here mean? If we make an analogy to vectors, we're asking how much of the square wave 'vector' lies along the sinewave 'vector' — in other words, how much of this sinewave is contained within the square wave. If we plot the two together, you can see visually that they are similar: there's a lot of overlap between the two, and the dot product tells us exactly how much.

Let's take the concrete case where our sample points are spaced apart by \(\pi/5\). If we take the dot product of the two vectors, we get about 6.15. But we've got a bit of a problem: if we make the points more closely spaced, the dot product actually gets bigger. For example, at a spacing of \(\pi/50\), the dot product is about 63.6. To fix this, we can define the inner product to be the dot product multiplied by the distance between samples, which we'll call \(\Delta x\). This way, as our samples get closer together, the inner product stays the same: twice as many samples means \(\Delta x\) is half as big, leaving the overall inner product unchanged.

Now what happens as we take the points to be infinitely close together, so that we get a more and more accurate representation of our functions? Our inner product between the square wave and the sine wave gets closer and closer to an integral. If we have a total of \(N\) points representing these two functions, then our inner product starts off looking like a sum:

\begin{equation*} \sum_{i=0}^{N-1} sin(\Delta x *i)*sq(\Delta x*i)\Delta x \end{equation*}

But as the points get infinitely close together, \(\Delta x\) becomes the infinitesimal \(dx\), and the sum becomes an integral:

\begin{equation*} \int_{0}^{2\pi}sin(x)*sq(x)dx \end{equation*}
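You can watch this convergence happen numerically. Here's a sketch in Python, where the square wave `sq` is defined by hand as +1 on the first half-period and -1 on the second (an assumption meant to match the figure above):

```python
import numpy as np

def sq(x):
    # A square wave: +1 on [0, pi), -1 on [pi, 2*pi) (assumed to match the figure)
    return np.where(np.mod(x, 2 * np.pi) < np.pi, 1.0, -1.0)

def inner_product(f, g, N):
    # Riemann-sum inner product over one period, using N samples:
    # the dot product of the sampled vectors, scaled by the spacing dx
    dx = 2 * np.pi / N
    x = dx * np.arange(N)
    return np.sum(f(x) * g(x)) * dx

# As N grows, the dot product alone keeps growing, but the
# dx-scaled inner product settles toward the value of the integral.
for N in (10, 100, 1000):
    print(N, inner_product(np.sin, sq, N))
```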

If you haven't seen this before, it's some really heavy and mind-bending stuff. Let's do a couple examples of the inner product to see how this all works:

What is the inner product between the sinewave and the square wave above, \(\langle sin(x), sq(x)\rangle \)?





What is the inner product between \(sin(x)\) and \(sin(2x)\), \(\langle sin(x), sin(2x)\rangle \), over the period \(0\) to \(2\pi\)?





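If you want to check your answers numerically, you can approximate both inner products with the same sampled-vector trick (a sketch; the square wave definition here is an assumption meant to match the earlier figure):

```python
import numpy as np

N = 10_000                 # many samples, so the sum is close to the integral
dx = 2 * np.pi / N
x = dx * np.arange(N)

def sq(x):
    # A square wave: +1 on [0, pi), -1 on [pi, 2*pi) (assumed to match the figure)
    return np.where(np.mod(x, 2 * np.pi) < np.pi, 1.0, -1.0)

# Riemann-sum versions of the two inner products in the exercises
ip_sin_sq = np.sum(np.sin(x) * sq(x)) * dx
ip_sin_sin2 = np.sum(np.sin(x) * np.sin(2 * x)) * dx

print(ip_sin_sq, ip_sin_sin2)
```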




If you found this content helpful, it would mean a lot to me if you would support me on Patreon. Help keep this content ad-free, get access to my Discord server, exclusive content, and receive my personal thanks for as little as $2. :)

Become a Patron!