Fourier Series and Transform using NSA

2016 May 3
by Daniel Lakeland

So, we've been talking about NSA and Fourier series. Some of you are probably already familiar with the whole idea of Fourier series and/or transforms; others might not be. Hopefully this post will help those who haven't ever really figured out what Fourier series or transforms are. Maybe some economists reading papers about seasonal corrections to timeseries will find it interesting. Anyway, let's get started. (The TL;DR summary is at the end.)

First off, imagine you have a periodic function f(x) with period p. An example is the Bangkok daylight hours function from the seasonal trends post. The trig functions \sin(2\pi k (x/p)) and \cos(2\pi k (x/p)) are also periodic with period p for all integers k. Furthermore, they are "orthogonal", which is to say that for any integers j and k

\int_0^p \sin(2\pi j (x/p)) \cos(2 \pi k (x/p)) dx = 0

(and the analogous integrals for two sines or two cosines with j \ne k are zero as well).

You can think of this in a couple of ways... it's (up to division by p) the "timewise covariance" of two timeseries: the average value of their product through time.

The vector space view: Fourier series for periodic functions, with a discrete set of standard frequencies

It's also possible to imagine the cos and sin functions as vectors in a very large vector space, let's say one with N a nonstandard number of dimensions. Then let s_k and c_k be two different vectors, whose i'th entries are:

 s_k[i] = \sin(2\pi k (i/N)), c_k[i] = \cos(2\pi k (i/N))

Now what's the dot product of these vectors?

\sum_{i=1}^N s_k[i] c_k[i]

This is a sum of a nonstandard number of appreciable values, so to make it have a limited value we'd better multiply it by dx = p/N. As soon as you do that, you see it's the nonstandard definition of the integral mentioned above:

\sum_{i=1}^N s_k[i] c_k[i] dx \cong \int_0^p \sin(2\pi k x/p) \cos(2\pi k x/p) dx

In other words, we can define a "dot product" on functions by using the covariance integral. Since the two functions have nonzero length, their dot product can only be zero because the angle between them is \pi/2 or 90^\circ (the dot product of two vectors a,b is |a||b|\cos(\theta), and if a,b have nonzero length then \cos(\theta)=0 is the only way to get zero). In an N dimensional vector space there are N different orthogonal directions, and the sin and cos functions fill up all those dimensions: there's a sin and a cos function for each k from 0 to N/2, and each one defines one of the directions in this vector space.
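
As a quick sanity check, here's a numerical sketch of the above (using numpy, with a large but standard N standing in for the nonstandard one; the particular N, p, and k values are just my choices for illustration):

```python
import numpy as np

N = 100_000            # a large standard N standing in for the nonstandard one
p = 1.0                # the period
dx = p / N
x = np.arange(N) * dx  # the N sample points filling up [0, p)

def s(k): return np.sin(2 * np.pi * k * x / p)  # the vector s_k
def c(k): return np.cos(2 * np.pi * k * x / p)  # the vector c_k

# dot products, scaled by dx so they approximate the covariance integrals
print(np.dot(s(3), c(3)) * dx)  # ~0: sin and cos of the same frequency
print(np.dot(s(3), s(5)) * dx)  # ~0: sines of two different frequencies
print(np.dot(s(3), s(3)) * dx)  # ~p/2: a basis vector dotted with itself
```

The first two dot products come out at roundoff-error size, while the last one comes out near p/2 = 0.5, which is the sense in which these basis vectors have "nonzero length".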

So, periodic functions are elements of a vector space, and the sin, cos functions span that space: we can create any periodic function by linear combinations of sin and cos. If you're a mathematician you now come out of the woodwork to bop me on the head about Hilbert spaces, and how we don't get pointwise convergence. Specifically, you could have just a few of those N dimensions fail to converge. Imagine the function \cos(2\pi x), except that when x = \sqrt{2}+n for any integer n the function equals 97.3. This little blip is an irritation only; for example, its contribution to any integral is always infinitesimal, since it occurs on only an infinitesimal portion of the interval, and the closest standard function (in terms of sum of squared differences) to this function is the usual \cos(2\pi x). So we identify and collect together all the nonstandard vectors with their closest standard function in the vector space, and when we do that we can span the whole space of standard periodic functions with the sin and cos functions whose frequencies f_i = i/p are integer multiples of 1 cycle per distance p.
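
To see this spanning in action, here's a sketch (a toy example of my own, nothing canonical): build the Fourier coefficients of a triangle wave by taking dot products with the basis vectors, then reconstruct it from the first twenty frequencies.

```python
import numpy as np

p, N = 1.0, 10_000
dx = p / N
x = np.arange(N) * dx

f = np.abs(x - 0.5)  # a triangle wave: periodic and continuous when tiled

# the k-th coefficients are just (scaled) dot products with the basis vectors
def coeffs(k):
    a = 2 / p * np.dot(f, np.sin(2 * np.pi * k * x / p)) * dx
    b = 2 / p * np.dot(f, np.cos(2 * np.pi * k * x / p)) * dx
    return a, b

recon = np.full(N, f.mean())  # the k = 0 term is the average value of f
for k in range(1, 21):
    a, b = coeffs(k)
    recon += a * np.sin(2 * np.pi * k * x / p) + b * np.cos(2 * np.pi * k * x / p)

print(np.max(np.abs(f - recon)))  # already small with only 20 frequencies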

Fourier Transforms: nonperiodic functions using a discrete set of infinitesimally close frequencies

Fine and dandy. How about non-periodic functions? Like our old friend the quadratic exponential radial basis function (a non-normalized normal curve)

\exp\left(-\frac{x^2}{2}\right )

Well, we can still do the whole vectorspaceification (yes, it's a real word, one I just made up) of functions, but now instead of the interval [0,p] we do it on the interval [-N,N] for N a nonstandard integer, and we space our samples a distance dx = 1/N apart. That gives us 2N/(1/N) = 2N^2 different dimensions in our vector, and since we have two separate types of basis functions (sin and cos), we need 2N^2/2 = N^2 frequencies. We define the same kind of dot product on this vector space... and we can again span it with sin and cos functions that are mutually near-orthogonal (infinitesimal covariance integral):

\sum_{i=0}^{2N^2} \sin(2\pi f_1 (-N+i dx)) \cos(2\pi f_2 (-N + i dx)) dx = \int_{-\infty}^{\infty} \sin(2\pi f_1 x) \cos(2\pi f_2 x) dx \cong 0

for any two different positive standard frequencies f_1 \ne f_2 from among the appropriate nonstandard sequence of frequencies. Which frequencies do we have? I think the usual way to think of it is f_i = -N/2 + i/N for i \in 0\ldots N^2. Intuitively, that runs from (basically) minus infinity to plus infinity in increments of 1/N.
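
For the sin-times-cos pair above the integral actually vanishes exactly by odd symmetry, so here's a numerical peek at the more interesting case of two cosines with different frequencies (numpy again, with a big-but-standard interval and my own arbitrary frequency choices standing in for the nonstandard setup):

```python
import numpy as np

L = 1_000.0                 # half-width, standing in for the nonstandard N
dx = 1e-3
x = np.arange(-L, L, dx)

c1 = np.cos(2 * np.pi * 0.30 * x)
c2 = np.cos(2 * np.pi * 0.31 * x)  # a nearby but different frequency

print(np.dot(c1, c2) * dx)  # stays O(1) no matter how big L gets...
print(np.dot(c1, c1) * dx)  # ...while the self dot product grows like L
```

The cross dot product stays bounded as the interval grows, so relative to the (unlimited) lengths of the basis vectors it's infinitesimal; that's the sense of "near-orthogonal" here.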

And we can approximate an arbitrary function g(x) by taking its dot product with each of our basis vectors and constructing the function:

 g^*(x) = \sum_{i=0}^{N^2}\left [ \left (\int g(x') \sin(2\pi f_i x')dx'\right ) \sin(2\pi f_i x) + \left (\int g(x') \cos(2\pi f_i x')dx'\right ) \cos(2\pi f_i x) \right ] df

This constructs the function as a nonstandard sum of infinitesimal contributions from each frequency; the sum is itself the definition of an integral in frequency space. Multiplying each term by df normalizes the sum properly: it has N^2 terms and df = O(1/N), so the coefficients (the \int g(x') \sin(\ldots)dx' parts) have to decline fast enough to ensure we get a near-standard function.
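
Here's a sketch of that construction for the Gaussian example (numpy, with wide-but-standard grids standing in for the nonstandard ones; the particular grid sizes are my choices, nothing canonical):

```python
import numpy as np

dx, df = 0.01, 0.005
x = np.arange(-30.0, 30.0, dx)    # a wide "space" grid standing in for [-N, N]
freqs = np.arange(-2.0, 2.0, df)  # frequency grid; the Gaussian's coefficients
                                  # have died off long before |f| = 2
g = np.exp(-x**2 / 2)

g_star = np.zeros_like(x)
for f in freqs:
    s = np.sin(2 * np.pi * f * x)
    c = np.cos(2 * np.pi * f * x)
    # the covariance dot products give the coefficients at this frequency,
    # and each frequency contributes an infinitesimal slice of size df
    g_star += ((np.dot(g, s) * dx) * s + (np.dot(g, c) * dx) * c) * df

print(np.max(np.abs(g - g_star)))  # near zero: the frequency sum rebuilds g
```

All the sin coefficients come out near zero here because the Gaussian is an even function; the cos terms carry everything.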

Remember also that we don't get precise pointwise convergence. In particular, at discontinuities we're going to get problems (the Gibbs phenomenon, for example), and there are a variety of technicalities. The point of this blog post is more or less to give you the following basic ideas:

TL;DR

  • Functions of a single variable (or even of many variables) could be considered like vectors with a nonstandard number of dimensions.
  • Linear algebra works fine in these large vector spaces, and we can compute dot products.
  • We can approximate functions by linear combinations of basis vectors (basis functions). Historically the most popular choice has been the Fourier basis: sin and cos.
  • For periodic functions, you can construct a Fourier series. The frequencies are on a discrete grid corresponding to integer multiples of a standard base frequency.
  • For non-periodic functions, you can construct a Fourier integral. The frequencies are on a discrete grid corresponding to integer multiples of an infinitesimal base frequency.
  • All of it is hand-wavy here, and I'm probably messing up one or two technicalities, but to use these ideas fruitfully you don't usually need to know them all. If you do need to look up technicalities, you can start with this intuitive view and map your ideas onto the exposition in reference books on the subject. When they say "integral" you think "nonstandard sum"... etc.
  • In particular, I hope this makes clear why we might not want to do things like use "monthly dummy variables" or "weekly dummy variables" to approximate seasonal effects. When we do that, the dot product of our timeseries with our basis function is basically just the average value during the time of interest. Since the width of our dummy function is standard, we can't capture the standard-sized changes that occur during that time interval.
  • Another way to think about the issue is that the dot product with the Fourier basis uses information about how the function changes over the whole periodic interval, whereas the dot product with the dummy "window" function only uses information local to that time window (see the sketch below).
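
To make that last point concrete, here's a toy comparison, invented purely for illustration: a smooth seasonal signal fit once with twelve monthly dummy averages and once with an intercept plus the first three Fourier harmonics of a one-year period.

```python
import numpy as np

days = np.arange(365)
t = days / 365.0
# a smooth seasonal pattern built from the first two harmonics of a year
signal = np.sin(2 * np.pi * t) + 0.4 * np.sin(4 * np.pi * t + 1.0)

# (a) monthly dummies: the least-squares fit is each month's average value
month = np.minimum((days // 30.42).astype(int), 11)
dummy_fit = np.array([signal[month == m].mean() for m in range(12)])[month]

# (b) Fourier basis: intercept plus the first three harmonics, least squares
cols = [np.ones_like(t)]
for k in (1, 2, 3):
    cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
fourier_fit = X @ beta

print(np.max(np.abs(signal - dummy_fit)))    # dummies miss within-month change
print(np.max(np.abs(signal - fourier_fit)))  # harmonics track the whole curve
```

The dummy fit is a stairstep whose treads have standard width, so it can't follow the standard-sized changes inside each month, while three harmonics recover this particular curve essentially exactly.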

