Signal Processing -- The Frequency Domain

Topics and Methods

Read, or have read by now, the "Informal Introduction".

A Primary Goal

  1. See how to transform a signal into a different representation (specifically here, a representation of frequencies in the signal).
  2. See why such a transform is a useful practical and abstract thing to do.

Dirac Delta Function

Sinusoids

Some math leads to: cos(ux + vy) and sin(ux + vy) are 2-D sinusoids, as in the figure. Their ridges and troughs fall along the parallel lines ux + vy = kπ for integer k, and their wavelength is 2π/√(u² + v²). So we can write a 2-D wave as e^{i(ux + vy)}.

Inner Product

Dot Product from high school vectors:

Generalize to n-vectors:

x · y = Σᵢ x(i)y(i), summing over i = 1, ..., n.

Generalize to continuous functions:

f · g = ∫ f(x) g(x) dx
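A minimal Matlab sketch of this generalization (the two functions here are arbitrary choices for illustration): the continuous inner product is just the limit of the discrete dot product.

% Approximate the inner product of two functions on [0,1]
% by a Riemann sum -- a discrete dot product.
x  = linspace(0, 1, 1000);
dx = x(2) - x(1);
f  = x.^2;
g  = exp(x);
ip = sum(f .* g) * dx    % approximates the integral of f(x)g(x) dx
% the exact integral of x^2 e^x on [0,1] is e - 2, about 0.7183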

Basis Functions

The idea: express some function of interest as a weighted sum of basis functions. This is very similar to expressing a vector's position in another coordinate system.

For a family of (something like) shifted Dirac deltas, this is "obviously" true: any function is a weighted sum of impulses, one per point.

For a family of sinusoids, it is not obviously true (unless your name is Fourier), but it is easier to formalize.

In 2-D, this implies you can build any image out of gratings(!!)
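A small sketch of the 1-D version of the claim, using partial Fourier-series sums of a square wave (the 4/π and odd-harmonic weights are the standard square-wave coefficients):

% Build a square wave from a weighted sum of sinusoids.
t  = linspace(0, 2*pi, 1000);
sq = zeros(size(t));
for k = 1:2:19                    % odd harmonics only
    sq = sq + (4/pi) * sin(k*t) / k;
end
plot(t, sq); hold on;
plot(t, sign(sin(t)), '--');      % the target square wave
hold off;

More terms give a better approximation.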

Basis Functions Cont.

Orthogonal basis functions have zero inner product with every other function in the family, but positive inner product with themselves. (The shifted Diracs and the sinusoids are both orthogonal sets.)

Lots of useful basis function families exist: Bessel functions (vibrational modes of drumheads), Legendre polynomials, Laplace (for solving differential equations, e.g. damped sinusoids), spherical harmonics (vibrational modes of spheres).

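A quick numerical check of orthogonality for the sinusoid family on [0, 2π] (harmonics 3 and 5 are arbitrary choices):

x  = linspace(0, 2*pi, 1000);
dx = x(2) - x(1);
sum(sin(3*x) .* sin(5*x)) * dx    % ~ 0: distinct members of the family
sum(sin(3*x) .* sin(3*x)) * dx    % ~ pi: positive inner product with itself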

Linear and Linear Shift-Invariant Systems: Definition

Here we mean systems of linear operators, not systems of linear equations (related, but not the same). Generally a linear system can be represented as a matrix (or some generalization involving continuous functions) operating on an input vector: y = Mx.

Let f1(t) and f2(t) be input functions, g1(t) and g2(t) their corresponding output functions, and α and β scalar weights.

Then for input function α f1(t) + β f2(t), the output is α g1(t) + β g2(t).
This is the Superposition Principle.

In a linear, shift-invariant (LSI) system, input f(t-h) produces output g(t-h).
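A minimal Matlab sketch checking both properties for a system implemented by convolution (the kernel h, the weights, and the shift are all arbitrary choices):

f1 = randn(1, 50);  f2 = randn(1, 50);
h  = [1 2 1] / 4;                 % some impulse response
a  = 3;  b = -2;
% Superposition: system(a*f1 + b*f2) equals a*system(f1) + b*system(f2)
d = conv(a*f1 + b*f2, h) - (a*conv(f1, h) + b*conv(f2, h));
max(abs(d))                       % ~ 0, up to round-off
% Shift invariance: delaying the input delays the output identically
g1 = conv([zeros(1, 5) f1], h);
g2 = [zeros(1, 5) conv(f1, h)];
max(abs(g1 - g2))                 % ~ 0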

LSI Systems as Matrix Multiplication

LSI systems are a special subclass of linear systems: they are characterized by a single (shiftable) vector or function rather than a full matrix. An example: a linear imaging system.

If x is a one-dimensional "scene" and y its image, and M models a camera,

0      1 0 0 0 0 0 0   0
0      0 1 0 0 0 0 0   0
1      0 0 1 0 0 0 0   1
2  =   0 0 0 1 0 0 0 * 2
4      0 0 0 0 1 0 0   4
0      0 0 0 0 0 1 0   0
0      0 0 0 0 0 0 1   0

Can imagine these as infinite objects. Each row is a shifted version of the Dirac delta.

An out-of-focus camera C2 might be modeled like this:

1 1 0 0 0 ...
0 1 1 0 0 ...
0 0 1 1 0 ...
0 0 0 1 1 ...
0 0 0 0 1 ...
....

A camera C3 that behaves nicely in the middle of its field of view but goes blurry around the edges could be:

0         1/2  1/4   0    0    0    0    0        0
1/4       1/4  1/2  1/4   0    0    0    0        0
1          0    0    1    0    0    0    0        1
2    =     0    0    0    1    0    0    0    *   2
4          0    0    0    0    1    0    0        4
1          0    0    0    0   1/4  1/2  1/4       0
0          0    0    0    0    0   1/4  1/2       0
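A sketch of building such a matrix directly from a kernel (here the two-pixel blur of C2) and checking that multiplying by it blurs the scene:

x = [0 0 1 2 4 0 0]';
h = [1 1];                        % C2-style two-pixel blur
n = length(x);
M = zeros(n);
for r = 1:n                       % each row is the kernel, shifted over one
    for k = 1:length(h)
        if r + k - 1 <= n
            M(r, r + k - 1) = h(k);
        end
    end
end
y = M * x                         % = [0 1 3 6 4 0 0]', a blurred scene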

Observations

Sinusoids as Eigenfunctions

A sinusoid input to ANY linear, shift-invariant system (any h(t)) yields a sinusoid output of the same frequency (wavelength), possibly shifted in phase and changed in amplitude (amplified or attenuated).

This beautiful fact seems surprising, but arises from the definition of convolution and the integral properties of e^{ikt} (sinusoids).

This is essentially what it means to be an eigenvector (or eigenfunction) of a linear operator. An eigenvector of a matrix remains in the same "direction" when multiplied by the matrix, though it may be lengthened or shortened (technically, multiplied by a scalar). A sinusoid is still a sinusoid of the same frequency when operated on by an LSI system. The possible phase shift is a consequence of the vectors actually being complex-valued.
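A numerical sketch of the eigenfunction fact (the random kernel and the choice of frequency 5 are arbitrary; circular convolution keeps the algebra exact):

N = 64;  t = 0:N-1;
f = cos(2*pi*5*t/N);              % a sinusoid at frequency 5
h = randn(1, N);                  % ANY impulse response
g = ifft(fft(f) .* fft(h));       % circular convolution of f with h
find(abs(fft(g)) > 1e-6) - 1      % only frequencies 5 and N-5 survive

The output g is still a pure sinusoid at frequency 5 (the N-5 entry is its conjugate twin), just rescaled and phase-shifted.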

Linear Systems: Examples

Our LSI systems will operate both in the time-domain (e.g. sound), and in the 2-D spatial domain (e.g. images). We draw them using f(t) as the input, h(t) as the impulse response or point spread function, and g(t) as the output. The box performs a convolution.

The derivative operator d/dt is an LSI operator.

Physical systems can often be approximated by linear systems.

Linear Systems and Reality

An ideal camera's film frame extends infinitely and exactly records all levels of input, from full dark to infinitely bright.

An ideal camera's point spread function is a Dirac Delta (impulse).

A real PSF is always more complicated!

Linear output response to a linearly increasing input is desired, but...

Convolution and Correlation

A linear system convolves its impulse response (or PSF) with the input.

g(t) = f(t) ⊗ h(t), or g(t) = f(t) * h(t).

Formal def: (f ⊗ g)(t) = ∫ f(τ) g(t - τ) dτ, with the integral running from -∞ to ∞.

Note one input is reversed in time (or space) (see camera fig above).

Correlation

Correlation is the same as convolution, except that the second input is not flipped backwards. It is often notated just the same way, with * or ⊗.

g(t) = f(t) ⊗ h(t), or g(t) = f(t) * h(t).

Formal def: (f ⊗ g)(t) = ∫ f(τ) g(τ - t) dτ, with the integral running from -∞ to ∞.

In-class Examples

The correlation of the delta function with itself, δ(t) ⊗ δ(t) -- its autocorrelation.

The autocorrelation of the delta-comb function.

The correlation of the delta function with an arbitrary function, δ(t) ⊗ h(t). (The correlation of two different functions is sometimes called their cross-correlation, when it could get mixed up with the autocorrelation in the surrounding prose.)

Note the last exercise shows that to discover the impulse response (PSF) of an unknown system, just give it an impulse: the output is the PSF! Like 'kicking the tyres', but more informative.

The correlation of [1 2 3 2 1] with [1 1 1].
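A sketch of that last one in Matlab. Since [1 1 1] is symmetric, flipping it changes nothing, so here correlation and convolution agree:

f = [1 2 3 2 1];
h = [1 1 1];
conv(f, fliplr(h))                % correlation via flipped-kernel convolution
conv(f, h)                        % = [1 3 6 7 6 3 1], the same answer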

Fourier Transform

The Fourier transform is a linear mathematical operation that takes time- or space-domain input (sound wave, voltage waveform, image,...) and outputs an equivalent (spatial) frequency-domain representation. The operation is lossless and invertible.

Essentially, it decomposes the input into a number of sinusoids of varying magnitude and phase (and in two dimensions, directions). The inverse transform reverses the process.

The formal definition is

F(ν) = ∫ f(t) e^{-2πiνt} dt,

where t is time or space and ν is frequency. The inverse is simply related (we have left out some normalization constants):

f(t) = ∫ F(ν) e^{2πiνt} dν.

The Fourier transform (FT) is an inner-product integral that answers the question: how much of the particular sine wave e^{-2πiνt} is in the input function f(t)? In mathematical terms, it projects f(t) onto the transform basis of sinusoids.

The result is generally a complex function (real and imaginary parts).
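A sketch of the inner-product view in the discrete (DFT) setting: one coefficient computed directly as an inner product with the corresponding basis sinusoid, matching what fft returns (bin 4 is an arbitrary choice):

N = 32;  t = (0:N-1)';
f = randn(N, 1);
nu = 4;                           % pick one frequency bin
basis = exp(2i*pi*nu*t/N);        % the nu-th DFT basis sinusoid
c = sum(f .* conj(basis));        % inner product of f with the basis
F = fft(f);
abs(c - F(nu+1))                  % ~ 0: the very same coefficient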

FT Properties

(Fast) Fourier Transform Issues

Power Spectrum

When analyzing time series, one common question is which frequencies dominate. The power spectrum is the squared magnitude of the FT: (F .* conj(F)). Below, the PS is displayed with its 0-frequency origin in the middle of the x-axis (at 32).

PS tells how much power the signal contains at a given frequency.

Sampling issues: the input must be bandlimited, and we must sample at at least twice the maximum signal frequency.

More Power Spectrum

60, 150, and 350 Hz sines plus 0-mean Gaussian noise.

Power Spectrum.

function thePS = PowSpec1D(X, n)
% Power spectrum of signal X, computed with an n-point FFT.
Y = fft(X, n);                   % n-point DFT (zero-pads or truncates X)
thePS = (Y .* conj(Y)) / n;      % squared magnitude at each frequency
end
... % and in the calling script, with xaxis holding the frequency labels:
plot(xaxis, fftshift(thePS));    % fftshift puts 0 frequency in the middle
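And a sketch of a calling script generating the signal described above (the 1000 Hz sampling rate is a choice, comfortably above twice the 350 Hz maximum):

Fs = 1000;  t = (0:1023)/Fs;
X = sin(2*pi*60*t) + sin(2*pi*150*t) + sin(2*pi*350*t) ...
    + randn(size(t));                 % three sines plus Gaussian noise
n = 1024;
thePS = PowSpec1D(X, n);
xaxis = (-n/2 : n/2 - 1) * (Fs/n);    % 0 Hz in the middle
plot(xaxis, fftshift(thePS));         % peaks at +/-60, +/-150, +/-350 Hz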

Power Spectrum Examples

Complex Numbers

A complex number a + bi is a 2-vector (a, b) living in the complex plane, which has a real axis and an imaginary axis.

Or, a complex number is an ordered pair (a,b) that is a mathematical object having rather funny rules of operation.

The conjugate of a complex number negates the imaginary part:
conj(a + bi) = (a - bi).
You can work out that (a + bi) · conj(a + bi) is a² + b², the squared magnitude (length) of the (a, b) vector in the complex plane.

Complex numbers are added, subtracted, multiplied, and divided by formally applying the associative, commutative, and distributive laws of algebra, together with the equation i² = -1. For example, (a + bi)(c + di) = (ac - bd) + (ad + bc)i.

Phasors

A phasor is a complex number, considered as a vector that rotates around the origin without changing its length. This rotation alters the magnitudes of its real and imaginary parts, and is said to change its phase. FT entries are complex; considered as phasors, each corresponds to a sinusoid whose frequency is given by its coordinates in the FT, whose amplitude equals its length, and whose phase is the angle it makes with the real axis.

Looking at individual phasors: there are two of them for a sine wave, 180 degrees apart -- one frequency, but two symmetrical conjugate elements in the FT. We can see the phasor rotate as the phase changes. In the image below, the sine (red, 0 phase angle) gives the purely imaginary pair (0.0000 ± 32.0000i). The shifted sine (blue) gives (29.5641 ± 12.2459i).

The Convolution Theorem

Another FT symmetry property: Convolution in the time (space) domain is dual to elementwise multiplication in the frequency domain.

Let FT denote the Fourier transform operation. Then

FT{f ⊗ g} = FT{f} · FT{g}

where · denotes point-wise multiplication.

Also vice-versa:

FT{f · g} = FT{f} ⊗ FT{g}.

And applying the inverse Fourier transform FT⁻¹ to the first equation, we get:

f ⊗ g = FT⁻¹{FT{f} · FT{g}}

As convolution and correlation are so important, the Convolution Theorem is a key idea and technique. We'll see some applications later.
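A numerical check of the theorem in Matlab (fft pads both inputs to the full output length so that linear, not circular, convolution is compared):

f = randn(1, 20);
g = randn(1, 9);
N = length(f) + length(g) - 1;
direct  = conv(f, g);
viaFreq = ifft(fft(f, N) .* fft(g, N));   % multiply in the freq. domain
max(abs(direct - viaFreq))                % ~ 0, up to round-off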

Linear Systems and the Frequency Domain

Use convolution theorem to make a frequency-domain version of the linear system block diagram:

Inputs and outputs are FT{f(t)} and FT{g(t)}; the function in the box is FT{h(t)}, and the operation of the box is elementwise multiplication (Matlab's .*).

The box is a "graphic equalizer": sound (say) comes in as a pressure wave, the sum of sinusoids of many frequencies. The equalizer amplifies or attenuates each (and changes its phase) according to H(ν), the box's function.

FT{h(t)} = H(ν) is called the Modulation Transfer Function (MTF).

So a graphic equalizer is basically just an MTF: a linear system that's easier to think about in the frequency domain than in the time domain.
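A sketch of an equalizer built directly in the frequency domain (the gains and cutoff bins are arbitrary choices; H is kept conjugate-symmetric so the output stays real):

N = 256;
f = randn(1, N);                  % the input "sound"
H = 0.1 * ones(1, N);             % attenuate everything by default...
H([1:20, N-18:N]) = 2;            % ...but amplify the low frequencies
g = real(ifft(fft(f) .* H));      % the box: Matlab's .*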

The Sampling Theorem

Basic Question: Can we exactly reproduce a continuous signal from a finite number of discrete samples?

Yes (!!), if...

The sampling rate must clearly be at least twice the highest signal frequency; this minimum rate is called the Nyquist rate. It turns out that any rate above it works (given enough signal).

If we sample at the high and low peaks of the red sine, we can recover its amplitude, phase, and frequency. But the higher-frequency blue sine goes undetected, being 0 at every sampling point.

The Sampling Theorem: Frequency Domain

Sampling at too low a rate means copies of FT{signal} overlap (causing 'aliasing'), and our strategy fails because the copies add and get confused with one another. Faster sampling separates the FT copies in frequency space, and we may imagine snipping one out and retrieving
f(t) = FT⁻¹{F(ν)}.
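A sketch of aliasing in action: a 9 Hz sine sampled at only 10 Hz (well below its 18 Hz Nyquist rate) is indistinguishable from the negative of a 1 Hz sine:

Fs = 10;  t = 0:1/Fs:2;           % sample at 10 Hz: too slow for 9 Hz
x_high = sin(2*pi*9*t);
x_low  = sin(2*pi*1*t);
max(abs(x_high + x_low))          % ~ 0: the samples coincide exactly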

Three Frequency-domain Operations

Filtering

Filtering Tutorial.

Matching

A 'chirp', and the autocorrelation of a random vector of 1's and -1's.
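A sketch of matching by correlation, with a hand-made chirp as the template (the sample rate, chirp parameters, and noise level are all arbitrary choices):

t = 0:0.001:0.5;
c = sin(2*pi*(10*t + 190*t.^2));  % chirp: 10 Hz sweeping up to 200 Hz
x = [randn(1,300), c + 0.5*randn(size(c)), randn(1,300)];
r = conv(x, fliplr(c));           % correlate the signal with the template
[~, k] = max(r);
k - length(c) + 1                 % ~ 301: where the chirp begins

The sharp correlation peak localizes the chirp even under the added noise.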

Deconvolution

Produce a good image from one made with an exotic PSF, as in some of today's cameras (see 'coded aperture' and 'computational camera'). Or fix an image made with an undesirable PSF: bad optics (e.g. Hubble), bad 'seeing conditions' (optical astronomy, atmospheric instability), motion or focus blur, ...

How? Convolution theorem:

FT{f ⊗ h} = FT{f} · FT{h}

where the LHS is the image from a camera with point-spread function h and scene data f. If h is an ideal Dirac delta, the output is the scene. With camera motion, the image of h is spread out along a 2-D path. With focus blur, the PSF becomes a disk (the sensor intersects the cone of focused rays NOT at its point).

General treatment: Assume we know or can guess h. In the equation above, divide (elementwise) by FT{h}.

FT{f ⊗ h} / FT{h} = FT{f}, so

f = FT⁻¹[ FT{f ⊗ h} / FT{h} ]

We now have FT of what we want (the input function) using things we know (PSF and degraded image). Inverse-transforming both sides recovers the original input. It almost works.

Consider the boxcar (or disk) blur function, which formalizes either (in 2-D) defocus blur or (in 1-D) straight-line motion blur. Here's FT{h}: note it crosses zero often, so it is near zero often.

Multiplying by 1/FT{h} where FT{h} is near zero amplifies those frequencies by huge amounts (or yields Inf if there is a divide-by-zero).

Noise at these amplified frequencies can overwhelm the signal. Some care (thresholding before multiplying, say) is needed.

The Gaussian is a user-friendly blur function, since its FT is also a Gaussian and hence always positive. Dividing by it amplifies high frequencies.
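A 1-D sketch of the whole pipeline, including the thresholding guard mentioned above (the boxcar length, threshold, and scene are arbitrary choices; with added noise the recovery degrades gracefully instead of exploding):

N = 256;
f = zeros(1, N);  f(100:140) = 1;     % a simple "scene"
h = zeros(1, N);  h(1:9) = 1/9;       % boxcar (motion-blur) PSF
g = real(ifft(fft(f) .* fft(h)));     % the blurred image
H = fft(h);
H(abs(H) < 1e-3) = 1;                 % guard the near-zero bins
f_hat = real(ifft(fft(g) ./ H));      % deconvolved estimate of the scene
max(abs(f - f_hat))                   % ~ 0 in this noiseless case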

Motion Blur Example

Time-Domain Signal Processing

Random Links

See the reading for more references (not many, though). There's a goldmine of tutorial material out there!

Good tutorial
Nice examples, Good tutorial
Another Source
Huge resource (aimed at CHE)
Filtering Tutorial



---

Last update: 04/22/2011: RN