Most recent change of FourierTransform

Edit made on June 09, 2013 by ColinWright at 13:54:55


!! Formally

The Fourier Transform (FT) of an integrable function EQN:f(x)
is defined as follows:

EQN:\mathcal{F}:f\mapsto\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty~f(x)e^{-ikx}\,\text{d}x

When defined as above, the FT is a unitary transformation. Under
certain conditions, the transform is invertible as follows:

EQN:\mathcal{F}^{-1}:\hat{f}\mapsto\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty~\hat{f}(k)e^{ikx}\,\text{d}k

The FT decomposes the transformed function into oscillatory functions
and has many applications, both in mathematics and elsewhere. For
example, the FT (like integral transforms in general) is useful in solving
differential equations: using the FT, differential equations can be
transformed into algebraic equations.

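As a concrete illustration (not from the original article): under the FT, differentiation with respect to $x$ becomes multiplication by $ik$, which is what turns a differential equation into an algebraic one. A minimal Python sketch using NumPy's FFT, the discrete analogue of the transform above:

```python
import numpy as np

# Sample sin(3x) on [0, 2*pi) and differentiate it spectrally:
# transform, multiply by ik, transform back.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)
k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers 0, 1, ..., -1
f_hat = np.fft.fft(f)
df = np.fft.ifft(1j * k * f_hat).real   # d/dx done as multiplication by ik
exact = 3 * np.cos(3 * x)               # the true derivative of sin(3x)
print(np.max(np.abs(df - exact)) < 1e-10)   # True: the two derivatives agree
```

The function, grid size, and frequency are made up for the example; the point is only that differentiating reduced to a multiplication.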
!! Informally ...

Suppose you have a vector $A$ in 2-D space, and suppose you have another
vector $B$. Suppose you want to know how much of $A$ is pointing in the
direction of $B$. When you ask that question it doesn't matter if $B$ is
long or short, so we may as well assume that $B$ has length 1. It's of
unit length. This concept of "how much of $A$ is in the direction of $B$"
is called the "dot product", and there are standard ways to compute it.
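In code this is one line (a sketch with made-up numbers; NumPy's `@` computes the dot product):

```python
import numpy as np

A = np.array([3.0, 4.0])
B = np.array([1.0, 1.0])
B_unit = B / np.linalg.norm(B)   # the length of B doesn't matter, so make it 1
print(A @ B_unit)                # how much of A points in the direction of B
```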

So now suppose you also have two vectors $u$ and $v$ of unit length, and
which are at right angles to each other. Then we can ask how much of $A$
is in the direction of $u$ and how much is in the direction of $v$, and
the answers are (unsurprisingly) the dot products, $A.u$ and $A.v.$

Interestingly, we can then write $A=(A.v)v+(A.u)u$ and we have a way of
expressing $A$ in terms of $u$ and $v.$ So given a vector we can work
out how to express it as the sum of other vectors, although it only works
if the vectors are of unit length and at right angles.
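Here's that decomposition in code (a sketch with made-up numbers; any orthonormal pair $u$, $v$ would do):

```python
import numpy as np

theta = 0.7                             # any angle: u and v stay orthonormal
u = np.array([np.cos(theta), np.sin(theta)])
v = np.array([-np.sin(theta), np.cos(theta)])
A = np.array([3.0, 4.0])                # a made-up vector to decompose
# A is exactly the sum of its projections onto u and v
A_rebuilt = (A @ u) * u + (A @ v) * v
print(np.allclose(A_rebuilt, A))        # True
```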

The Fourier Transform does the same sort of thing. It expresses a
function as the sum of other functions.

So how does it do this?

What we need is a concept of the dot product in a vector space. In
short, we need the basic properties of vectors (adding, multiplying
by a constant, distributivity of these) and a concept of angle between
two vectors as provided by the dot product and its properties.

Well, functions can be thought of as a vector space. Given two functions
$f(x)$ and $g(x)$ we can define their sum $(f+g)(x)$ as: EQN:(f+g)(x)=f(x)+g(x).
We can multiply by a constant in the obvious way. So we just need a concept
of the dot product. That we get by analogy with the dot product of two vectors.

We take the dot product of two vectors by multiplying componentwise and
adding up. So we do the same here. Given $f(x)$ and $g(x)$ we multiply
pointwise and sum, which for functions means integrating:

EQN:f\cdot{g}=\frac{1}{\pi}\int_0^{2\pi}f(x)g(x)\,\text{d}x

(The factor of $\frac{1}{\pi}$ is there so that the sine functions below
turn out to have unit length.)
Now we need to get a collection of functions (that is, vectors) that are of
unit length, and all at right angles. Here's one such collection:

* $sin(n.x)$ for $n$ in $\{1,2,3,...\}$

Working over the interval $[0,2\pi]$ these have a dot product of zero with
each other, and a dot product of 1 with themselves. Think about that, and
you'll see that if we think of these as basis vectors, they have the properties
we need.
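We can check this numerically (a sketch; the dot product here carries a factor of $\frac{1}{\pi}$ so that each sine really does have length 1 on $[0,2\pi]$):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
dx = 2 * np.pi / 100_000

def dot(f, g):
    # multiply pointwise, add up (integrate), normalise by pi
    return np.sum(f * g) * dx / np.pi

s2, s5 = np.sin(2 * x), np.sin(5 * x)
print(round(dot(s2, s2), 6))        # 1.0 -- unit length
print(round(abs(dot(s2, s5)), 6))   # 0.0 -- at right angles
```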

So given any function that can be expressed as the linear sum of those sine
functions, we can find the coefficients by taking the dot products, just
as we did with $A$ as a linear sum of $u$ and $v.$
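Here is that coefficient recovery in miniature (a sketch: the function and its coefficients are made up, and the dot product is normalised by $\frac{1}{\pi}$ so that each $sin(n.x)$ has unit length):

```python
import numpy as np

# Build f from known amounts of sin(x) and sin(3x), then recover those
# amounts purely by taking dot products with each sin(nx).
N = 100_000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = 2.0 * np.sin(x) + 0.5 * np.sin(3 * x)

def coeff(n):
    # dot product of f with the unit-length basis function sin(nx)
    return np.sum(f * np.sin(n * x)) * (2 * np.pi / N) / np.pi

print([round(coeff(n), 3) for n in (1, 2, 3)])   # close to [2.0, 0.0, 0.5]
```

The dot product against $sin(2x)$ comes out (essentially) zero, because none of that frequency was used to build $f$.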

Now, of course, things are more complex when we want functions that are more
general than just linear combinations of sine waves with integer frequencies,
but that is the underlying idea of what a Fourier Transform really is. We are
finding out how much of each frequency is needed to construct a function we're
interested in.