Stochastic PDEs and Chaos Expansions, Part 1

2010 January 16
by Daniel Lakeland

The research group I work with does a lot of work on numerical solutions of PDEs with uncertainty. As a result, we work a lot with fairly obscure random objects. Where the typical statistician might work with several random real variables, we work with, say, random functions of three variables. Representing these ever more esoteric random objects ultimately falls back on ordinary random variables (the usual real-valued random quantities). In the case of random functions, this happens through the concept of a Hilbert space: a random function can be represented as a random linear combination of basis functions, and the coordinates in that combination are standard real random variables.
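To make this concrete, here is a minimal sketch (my own illustration; the sine basis and Gaussian coefficients are assumptions chosen purely for demonstration) of a random function built as a random linear combination of fixed basis functions:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)

    # Fixed, deterministic basis functions: sines on [0, 1].
    N = 5
    phi = [np.sin((i + 1) * np.pi * x) for i in range(N)]

    # The "coordinates" are ordinary real-valued random variables.
    a = rng.standard_normal(N)

    # One realization of the random function u(x) = sum_i a_i phi_i(x).
    u = sum(a_i * p for a_i, p in zip(a, phi))

Each draw of the coefficients gives a different realization of the random function, so randomness over an infinite-dimensional space of functions reduces to randomness over N real numbers.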

Modern meaning of a solution of a PDE:

The standard modern method for describing a solution of a PDE works something like this: we have a differential operator (not necessarily linear). This is something like \nabla^2 u - 2(\nabla u)^2 = w, where the left hand side is a (nonlinear) differential operator acting on a function u, and the right hand side is some known "forcing function", typically representing something like temperature, pressure, or flow velocity that we have control over or assume we know (or have a stochastic model for). You can put this equation into "residual" form by subtracting w from both sides, so that a solution makes the result identically 0. We'll call that residual R(u) = \nabla^2 u - 2(\nabla u)^2 - w.
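As a concrete (made up) illustration, here is a finite-difference sketch that evaluates this residual for a trial function on a 1D grid, where \nabla^2 u is just u'' and \nabla u is u':

    import numpy as np

    x = np.linspace(0.0, 1.0, 101)
    h = x[1] - x[0]

    u = np.sin(np.pi * x)      # a trial function (not an actual solution)
    w = np.cos(np.pi * x)      # an assumed forcing function

    # Centered finite-difference approximations at interior grid points.
    upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2   # u''
    up = (u[2:] - u[:-2]) / (2.0 * h)               # u'

    # Residual R(u) = u'' - 2 (u')^2 - w; zero (to discretization error)
    # exactly when u solves the PDE.
    R = upp - 2.0 * up**2 - w[1:-1]
    print(np.max(np.abs(R)))   # how badly this trial u fails to solve the PDE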

Now, how do you find u? Well, any way of making R(u) "small" (near zero) will work as an approximation. The way we measure the size of the residual is via a norm. Basically, the idea of a norm is to transform the function so that the transformed function has only nonnegative values, and then add up all those values via an integral: the bigger the integral, the bigger the function. The standard choice is the "2" norm, in which we put u^2 inside the integral, since u^2 is never negative. For vectors in 3D space, for example, this is the Euclidean distance from 0.

The 2 norm of a function is the following: |u|_2 = \sqrt{\int u(\vec x) u(\vec x) \, d\vec x}. Sometimes this is written as \sqrt{\langle u,u \rangle}, because the symbol \langle u, v \rangle means the "inner product" of u and v, defined by the integral \langle u,v \rangle = \int u(\vec x) v(\vec x) \, d\vec x. So the 2 norm of a vector is just the square root of the inner product of the vector with itself. There are other norms as well; the 1 norm, for example, arises from |u|_1 = \int |u(\vec x)| \, d\vec x. However, of all the "p" norms, |u|_p = (\int |u(\vec x)|^p \, d\vec x)^{1/p}, only the 2 norm arises from an inner product. The inner product is a special operation that lets you measure the "angle" between different vectors, and in this setting u and v really are vectors in our Hilbert space: \langle u,v \rangle = |u| |v| \cos(\theta), where \theta is the angle between the vectors.
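A quick numerical sanity check of these definitions, using a crude Riemann-sum quadrature (my own toy example):

    import numpy as np

    x = np.linspace(0.0, 1.0, 10001)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)
    v = x * (1.0 - x)

    inner_uv = np.sum(u * v) * dx              # <u, v>
    norm2_u = np.sqrt(np.sum(u * u) * dx)      # |u|_2 = sqrt(<u, u>)
    norm2_v = np.sqrt(np.sum(v * v) * dx)
    norm1_u = np.sum(np.abs(u)) * dx           # |u|_1

    # The "angle" between two functions: <u, v> = |u| |v| cos(theta).
    cos_theta = inner_uv / (norm2_u * norm2_v)
    print(cos_theta)   # about 0.999: these two bump-shaped functions nearly "point the same way"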

Galerkin method:

Boris Galerkin (pronounced "Galyorkin") invented a method for approximating the solution of a PDE using some finite orthogonal basis set. If you have N orthogonal basis vectors \{\phi_i\} in a Hilbert space, you use the linear combination that makes the residual orthogonal to each of the basis vectors; in other words, \langle R(u), \phi_i \rangle = 0 \ \ \forall i \in \{1..N\}. For linear operators this implies a linear algebraic system for the coefficients of the linear combination of basis vectors that makes up u: writing u = \sum a_i \phi_i, looking for u means looking for the coefficients (numbers) a_i, so you can use standard matrix solvers to get them. As N increases, the norm of the residual provably goes to zero. I confess to not having seen the proof in detail, at least not recently. I think it requires a bounded linear operator and involves the Cauchy-Schwarz inequality and the triangle inequality, and I'm not sure how a nonlinear operator changes the assumptions; perhaps boundedness of the nonlinear operator is sufficient? For the moment, let's accept that it works well.
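Here is a minimal sketch of the recipe on a toy linear problem, -u'' = w on [0,1] with u = 0 at both ends (my own example; the sine basis and forcing are arbitrary choices). The Galerkin conditions \langle R(u), \phi_i \rangle = 0 turn into a matrix equation for the a_i:

    import numpy as np

    x = np.linspace(0.0, 1.0, 2001)
    dx = x[1] - x[0]

    def inner(f, g):
        # 2 inner product <f, g>, approximated by a Riemann sum.
        return np.sum(f * g) * dx

    # Orthogonal basis on [0, 1], vanishing at the endpoints.
    N = 8
    phi = [np.sin((i + 1) * np.pi * x) for i in range(N)]
    # For sines, -phi_i'' is available analytically.
    Lphi = [((i + 1) * np.pi) ** 2 * phi[i] for i in range(N)]

    w = x * (1.0 - x)   # an assumed forcing function

    # <R(u), phi_i> = 0 for all i becomes the linear system A a = b.
    A = np.array([[inner(Lphi[j], phi[i]) for j in range(N)] for i in range(N)])
    b = np.array([inner(w, phi[i]) for i in range(N)])
    a = np.linalg.solve(A, b)

    # The approximate solution and the 2 norm of its residual.
    u = sum(a_i * p for a_i, p in zip(a, phi))
    R = sum(a[i] * Lphi[i] for i in range(N)) - w
    print(np.sqrt(inner(R, R)))   # shrinks as N grows

Because this basis is orthogonal and the operator maps each sine to a multiple of itself, A comes out diagonal; for a general basis and operator it would be a full matrix, which is where the finite element machinery comes in.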

With uncertainty (randomness):

Well, here's where we get to the uncertainty. If your differential operator has some randomness involved, then there are various notions of a solution. The one we'll care about for applications is the Chaos Solution. Basically, we define a basis set and an appropriate inner product, and then say the solution we're after is the Nth-order Galerkin solution with respect to that inner product. However, with randomness involved, how do we define the inner product, and indeed the basis elements of the Hilbert space?
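To preview the standard answer, here is a minimal sketch (under the common assumption that the randomness enters through a standard normal variable \xi; the example quantity Y and the truncation level are my own choices). The inner product in the random dimension is an expectation, \langle f, g \rangle = E[f(\xi) g(\xi)], and the basis elements are the probabilists' Hermite polynomials He_n, which are orthogonal under it:

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    # Gauss-Hermite quadrature nodes/weights for the weight exp(-x^2/2);
    # dividing by sqrt(2*pi) makes the weights a discrete standard normal.
    nodes, weights = hermegauss(40)
    weights = weights / sqrt(2.0 * pi)

    def inner(fvals, gvals):
        # <f, g> = E[f(xi) g(xi)] for xi ~ N(0, 1).
        return np.sum(weights * fvals * gvals)

    # Project the random quantity Y = exp(xi) onto He_0, ..., He_4.
    Y = np.exp(nodes)
    coeffs = []
    for n in range(5):
        c = np.zeros(n + 1)
        c[n] = 1.0
        He_n = hermeval(nodes, c)
        # a_n = <Y, He_n> / <He_n, He_n>, and <He_n, He_n> = n!.
        coeffs.append(inner(Y, He_n) / factorial(n))

    print(coeffs)   # analytically a_n = exp(1/2)/n!: about 1.65, 1.65, 0.82, 0.27, ...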

Whenever I see this explained, it's hopelessly intertwined with two things that make it more confusing. The first is ideas from the finite element method, including the matrix algebra and computational issues involved in solving the system of equations; the second is orthogonality and efficiency in the choice of basis functions for representing the solution. Both of these are important in practice, but they are totally irrelevant to understanding the core theory. So I'm going to stick to the slightly more abstract ideas here, and hopefully that will at least help me understand what is going on. Also, by getting at the essentials of the problem, we can see where current methods are making choices based on efficiency, and where we might make different choices.

(continued in part 2... upcoming).
