Radial Basis Functions, PDE, discontinuities, and all that jazz

2010 January 20
by Daniel Lakeland

It’s easy to get wrapped up in either end of applied mathematics. Those who work on the theory may never actually calculate anything, and those who work on the practice may calculate things that are intended to be approximations of something that doesn’t exist as a proper mathematical object. It’s always good to couple calculation and theory, and it’s especially important to remember that if your PDE has no solution, that may mostly be an indicator that your model of the physical reality is bad.

So it’s an interesting exercise to sit down and try to calculate a solution to a corner case, so to speak: something where we think the solution should exist, but just barely. Such problems also make good test problems for numerical methods, because if the solution exists but has very bad properties, the numerical scheme may not behave well.

The following code, in Maxima, calculates the solution to a heat conduction problem on a 2D plate. The plate is the union of the rectangle [-1,0]x[-1,1] and a semicircle of radius 1 centered at (0,0) (so ignore the corners of the graph). Along the semicircular boundary we set the temperature to 1, and along the rectangular boundary we set it to 0.
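To make the geometry concrete, here is the boundary condition restated as a small Python function (my own paraphrase of the bound_vals function in the Maxima code below); the interesting feature is the jump at (0,1) and (0,-1), where the arc meets the rectangle.

```python
import math

def boundary_temperature(x, y, tol=1e-4):
    """Boundary condition from the post: temperature 1 on the semicircular
    arc (x^2 + y^2 = 1 with x > 0), 0 on the rectangular part of the
    boundary. tol mirrors the 1/10000 tolerance in the Maxima bound_vals."""
    if abs(x**2 + y**2 - 1.0) <= tol and x > 0:
        return 1.0
    return 0.0

print(boundary_temperature(math.cos(0.3), math.sin(0.3)))  # on the arc: 1.0
print(boundary_temperature(-1.0, 0.5))   # left edge of the rectangle: 0.0
print(boundary_temperature(0.0, 1.0))    # the discontinuity corner: 0.0
```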

I chose some centers by eye for the radial basis functions, and then collocated the residual at those centers. In other words, I plug the trial RBF expansion into the differential equation to get an expression for the residual error, and then force that residual to equal 0 at each of those points by choosing the coefficients in the RBF expansion. If you run this in a recent version of Maxima, you will get a very nice 3D graphical display of the approximate function.
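The mechanics of RBF collocation are easier to see in one dimension. The following sketch (Python with NumPy rather than Maxima, applied to a toy problem of my own choosing: u'' = 0 on [0,1] with u(0) = 0, u(1) = 1, whose exact solution is u(x) = x) uses the same multiquadric basis and the same recipe: boundary rows enforce the boundary values, interior rows force the PDE residual to zero, and solving the resulting square linear system gives the coefficients.

```python
import numpy as np

def mq(x, c, b):
    """Multiquadric basis sqrt(((x - c)/b)^2 + 1), same form as the
    Maxima rbf() below."""
    return np.sqrt(((x - c) / b) ** 2 + 1.0)

def mq_dxx(x, c, b):
    """Second derivative of the multiquadric: 1 / (b^2 * phi^3)."""
    return 1.0 / (b ** 2 * mq(x, c, b) ** 3)

# toy problem: u'' = 0 on [0,1], u(0) = 0, u(1) = 1; exact solution u(x) = x
centers = np.linspace(0.0, 1.0, 9)   # collocation centers (arbitrary choice)
b = 0.5                              # shape parameter (arbitrary choice)

interior = centers[1:-1]
# one row per collocation condition, one column per coefficient a_j
A_pde = mq_dxx(interior[:, None], centers[None, :], b)    # u''(x_i) = 0
A_bc = mq(np.array([[0.0], [1.0]]), centers[None, :], b)  # u(0)=0, u(1)=1
A = np.vstack([A_pde, A_bc])
rhs = np.concatenate([np.zeros(len(interior)), [0.0, 1.0]])

coeffs = np.linalg.solve(A, rhs)     # force residual = 0 at the centers

xs = np.linspace(0.0, 1.0, 11)
u = mq(xs[:, None], centers[None, :], b) @ coeffs
print(np.max(np.abs(u - xs)))        # error against the exact solution x
```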

The question is: even though the residual does not converge near the discontinuity in the boundary condition, it seems, based on numerical experiment, that the temperature function does converge. If you find that unintuitive, you are not the only one. This is basically what mathematicians spent the first half of the 20th century figuring out. Hilbert spaces, Sobolev spaces, and the Gelfand triple are the standard machinery for defining the mathematical convergence of these things. It is interesting to calculate the gradient or the Laplacian residual of our approximation and see how badly behaved it is near the discontinuity, even though the solution itself seems relatively well behaved.

I believe an easy way to explain this is to think in terms of mollifiers. A mollified boundary condition produces a perfectly well-behaved solution. As the degree of mollification decreases, this solution should converge to something with sharp, step-function-like behavior at the boundary. On the other hand, the derivative along the boundary converges to something like the derivative of the step function, which is a generalized function called the Dirac delta. So even though the solution seems to converge fine in an ordinary function space, the derivatives that define the PDE do not converge in some regions. This is what is meant by generalized solutions of PDEs, and there is a proof that the extension via mollification is equivalent to the extension via inner products in Hilbert space. The original paper was by Kurt Otto Friedrichs.
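That mollifier picture is easy to check numerically. In the Python sketch below, tanh is my stand-in for a mollified step (any smooth ramp would do): as the smoothing scale eps shrinks, the ramp converges pointwise to the step away from the jump, while the peak of its derivative grows like 1/eps yet its total mass stays at 1, which is exactly the Dirac delta behavior described above.

```python
import numpy as np

def mollified_step(x, eps):
    """A smoothed step 0.5*(1 + tanh(x/eps)); eps is the mollification
    scale (tanh is an arbitrary choice of smooth ramp)."""
    return 0.5 * (1.0 + np.tanh(x / eps))

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
for eps in (0.1, 0.01, 0.001):
    f = mollified_step(x, eps)
    df = np.gradient(f, dx)
    mass = df.sum() * dx   # total mass of the derivative
    # away from the jump f settles to 0 and 1; the derivative's peak
    # blows up like 1/eps, but its mass stays ~1 (delta-like behavior)
    print(eps, f[5000], f[15000], np.max(df), mass)
```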


rbf(x,y,xi,yi,b) := sqrt(((x-xi)/b)^2 + ((y-yi)/b)^2+1);

/* Solve the laplace equation on a slightly irregular region given by
a square from x in [-1,0] and y in [-1,1], union with a semicircle
with center 0,0 and radius 1

The value of the function on the semicircular boundary is 1, and is 0
on the square boundary, so there is a discontinuity at (0,1) and (0,-1)

The solution method is by radial basis functions. The centers are
placed along the boundary, with denser centers along the discontinuity
and less density in the center of the domain. */


pde_resid(u,x,y) := 'diff(u,x,2) + 'diff(u,y,2);
bound_vals(x,y) := if(abs(x^2+y^2 - 1) <= 1/10000) then
(if(x > 0) then 1 else 0)
else 0;

boundpoints: append(makelist([-1,i/3,.2],i,-2,2),
 /* top and bottom edges of the rectangle */
 makelist([-i/4,1,.2],i,0,4), makelist([-i/4,-1,.2],i,0,4),
 /* points along the semicircular arc */
 makelist([float(cos(%pi*i/12)),float(sin(%pi*i/12)),.2],i,-5,5),
 /* some additional points near the transition at (0,1) and (0,-1) */
 [[.1,sqrt(0.99),.1],[.1,-sqrt(0.99),.1],[-.1,1,.1],[-.1,-1,.1]]);

intpoints: append(
 /* grid of points in rect region in staggered hexagonal grid*/
 apply('append,makelist(makelist([.9*i/4 + 1/8*mod(j,2),.9*sin(%pi/2*j/3),.2+.2*(1-abs(j)/3)],i,-4,-1),j,-3,3)),
 /* additional points near the transition*/
 [[-.05,.9,.1],[-.05,-.9,.1],[.1,.85,.1],[.1,-.85,.1]],
 /* points near the corners */
 [[-.9,.9,.2],[-.9,-.9,.2]],
 /* points in the center*/
 [[1/4,-1/6,.5],[1/4,1/6,.5], [0,0,.5], [0,-1/4,.5],[0,1/4,.5]]);  

thepoints: append(boundpoints,intpoints);

/* plot the collocation centers: red on the boundary, blue in the interior */
load(draw)$
draw2d(point_type=filled_circle, point_size=1,
 color='red, points(makelist([p[1],p[2]],p,boundpoints)),
 color='blue, points(makelist([p[1],p[2]],p,intpoints)));

/* define the radial basis function expansion*/

rbffnc: sum(a[i] * rbf('x, 'y, thepoints[i][1], thepoints[i][2],thepoints[i][3]),i,1,length(thepoints));

/* here's one where the parameter in the basis functions is a little larger, this should converge better
as long as we're not too close to numerical precision limits*/

rbffnc110: sum(a[i] * rbf('x, 'y, thepoints[i][1], thepoints[i][2],thepoints[i][3]*1.1),i,1,length(thepoints));

/*rbffnc:rbffnc110;  to check if the convergence is good enough, increase the flatness parameter by 10% by
uncommenting this*/

boundeqns: makelist(subst([x=boundpoints[i][1],y=boundpoints[i][2]],rbffnc)
 = bound_vals(boundpoints[i][1],boundpoints[i][2]),i,1,length(boundpoints))$


/* the PDE residual: plug the RBF expansion into the Laplacian */
resid: ev(pde_resid(rbffnc,'x,'y),nouns)$

inteqns: makelist(subst([x=intpoints[i][1],y=intpoints[i][2]],resid)=0,
 i,1,length(intpoints))$
inteqns: ev(float(inteqns),numer)$





/* assemble and solve the linear system for the RBF coefficients */
alleqns: append(boundeqns,inteqns)$
soln: linsolve(alleqns, makelist(a[i],i,1,length(thepoints)))$
usoln: subst(soln,rbffnc)$

/* 3D display of the approximate temperature function */
plot3d(usoln,[x,-1,1],[y,-1,1],[grid,50,50])$

/* also draw the residual, which is expected to blow up badly around
   the boundary discontinuities at (0,1) and (0,-1) */
plot3d(ev(pde_resid(usoln,'x,'y),nouns),[x,-1,1],[y,-1,1],[grid,50,50])$