On the lack of Lebesgue Measure on countably infinite dimensional spaces, and Nonstandard Analysis

2017 September 18
by Daniel Lakeland

Consider the interval $$[0,1]$$: it has length 1. The generalized notion of length on the reals is Lebesgue measure; whenever a set is something simple like a closed interval, so that it has an obvious length, its Lebesgue measure is equal to that length.

Now consider the 2D plane. The square $$[0,1] \times [0,1]$$ consists of all the points $$(x,y)$$ where $$x$$ is in $$[0,1]$$ and so is $$y$$. What is its area? It's 1. This continues to work for integer dimensions 3, 4, and so on: what's the volume of the hypercube $$[0,1]^N$$ for $$N$$ some large integer like 3105? Again, it's $$1^{3105} = 1$$.

But now let's see what happens when we consider intervals of the form $$[0,0.5]$$: the length is 0.5, and for high dimension $$N$$ the hyper-volume of the hypercube $$[0,0.5]^N$$ is $$0.5^N$$, which goes to zero as $$N$$ gets big. Similarly, for intervals $$[0,2]$$ the hyper-volume is $$2^N$$, which goes to infinity as $$N$$ gets big.
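This scaling is easy to check numerically; here's a minimal Python sketch (the particular side lengths and dimensions are just illustrative):

```python
# Hyper-volume of the hypercube [0, s]^N is s**N.
# For s < 1 it collapses toward 0, for s = 1 it stays at 1,
# and for s > 1 it blows up as the dimension N grows.
for s in (0.5, 1.0, 2.0):
    for N in (1, 10, 100, 1000):
        print(f"side {s}: dim {N:>4} -> volume {s**N:.4g}")
```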

Intuitively, this is why we don't have (standard) Lebesgue measure on an infinite dimensional space. An infinitesimal interval $$dx$$ is small, but when you calculate $$dx^N$$ for $$N$$ nonstandard, the hyper-volume is REALLY small. Similarly, for sides even slightly larger than 1, the hyper-volume is unlimited.

On the other hand, consider the hypercube $$[0,1.1]^N$$ for $$N$$ a nonstandard integer. Sure, the hyper-volume $$1.1^N$$ is nonstandard, but it's a perfectly fine nonstandard number. If this is an intermediate step in a series of calculations that eventually leads you to prove some property, nothing keeps you from carrying it out. For example, suppose you want to show that one set is much smaller than another, and the ratio of their sizes is $$r = 1.1^N/1.2^N$$ for $$N$$ nonstandard. This ratio is clearly infinitesimal: $$1.1/1.2 \approx 0.916667$$ is a fraction less than 1, and it's raised to a nonstandard power.

But if you have some other infinitesimal ratio and want to discern how big the two are relative to each other, for example how big $$0.995^{N-K}$$ is relative to $$(1.1/1.2)^N$$, you can do so easily and algebraically: $$0.995^{N-K}/(1.1/1.2)^N \approx 1.0855^{N-K} \times (1.1/1.2)^{-K}$$.
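The same comparison can be made numerically by working with logarithms, which avoids overflow and underflow for huge exponents. A sketch (the function names are mine; the factored form is the same algebra as above):

```python
import math

# log of the ratio 0.995**(N-K) / (1.1/1.2)**N, computed two ways:
# directly, and via the factored form 1.0855**(N-K) * (1.1/1.2)**(-K).
def log_ratio(N, K):
    return (N - K) * math.log(0.995) - N * math.log(1.1 / 1.2)

def log_ratio_factored(N, K):
    return (N - K) * math.log(0.995 * 1.2 / 1.1) - K * math.log(1.1 / 1.2)

N, K = 1331512, 89331  # a finite standard case
print(log_ratio(N, K), log_ratio_factored(N, K))
```

For these $$N, K$$ the log comes out large and positive: $$0.995^{N-K}$$ is enormously larger than $$(1.1/1.2)^N$$, even though both numbers are themselves astronomically small.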

When $$N$$ and $$K$$ are nonstandard, you rapidly get either an unlimited or an infinitesimal result. But if you prove that this is true for all $$N,K$$, and then later need to consider a finite standard case, say $$N=1331512$$ and $$K=89331$$, you have the formula available to you, and you get a perfectly fine standard value.

This is useful if you're doing something like evaluating a function of space at a set of points, when you don't know ahead of time exactly how many points. For example, each point might be the location of a competing insect, and you're working out a PDE to approximate how the insect populations change in time. The insects come at discrete locations, but how many and which locations are not known ahead of time. You can develop a continuous model, in which you have a smooth function of space, and then you've got an "infinite dimensional" model; but the truth is your infinite dimensional model is just a device for calculations approximating a finite but "large N" number of points. It's not helpful to say "there is no Lebesgue measure on infinite dimensional space," because the property you actually care about is "there is Lebesgue measure on the space of finite dimension $$N$$ for every integer $$N$$." In your model you would only ever care about, say, $$N$$ = a few million to a billion. So developing a nonstandard expression makes more sense to the modeler, even though it makes no sense to the pure mathematician trained in classical analysis.


One Response
  1. Daniel Lakeland
    September 18, 2017

    Another situation of interest: consider the "path integral" formulation of Quantum Mechanics. A photon is emitted by an atom somewhere over on the left side of the picture… two small slits in a photon-absorbing screen are in the middle of the picture, and a photon detector is over on the far right…. We're interested in the rate at which photons arrive at the detector, as a function of the detector's position along the detector screen, relative to the total rate of photons arriving anywhere along the screen. This classic experiment gives "interference patterns" at the detector.

    Well, Feynman formulates this as: "along every possible path from the emitter to the detector, calculate a complex number amplitude," then "add up the amplitudes for all the paths," and then "take the squared magnitude of the result."

    The problem for a classically trained mathematician is that "every possible path from the emitter to the detector" is too big a class of things. It's not clear that "the set of all functions that take on the values $$f(0)=(0,0)$$ and $$f(t) = (1,d)$$, where $$d$$ is the y location of the detector and 1 is the x location of the detector" is a well defined set, or, even if it is, that it is physically relevant. There are, for example, functions representing a photon traveling at a billion times the speed of light out to Jupiter and then swinging around and coming back in time for tea at the detector…

    In the early days of doing these calculations, people simply used a grid of points: photons traveled from the origin to the first set of grid points, then from wherever they landed on that grid to a point on the next set of grid points, and so forth. At no time could the photon fly off to Jupiter between infinitesimal timepoints.

    The important point is that the technique gave PHYSICALLY CORRECT frequencies of observations. So to the extent that "along every possible path from the emitter to the detector" is not a mathematically well posed idea, it's probably because the correct posing of the problem is actually something more like "along each of a nonstandard number of paths constructed in such and such a way." Thinking in terms of nonstandard constructions is usually a much better way to develop a scientific model. If there is no mechanism whereby we could construct a path involving some strange Cantor set, then that Cantor set, no matter how mathematically pure, is not relevant to the modeling problem.
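    The flavor of that grid-based calculation can be sketched in a few lines of Python: sum complex amplitudes $$e^{2\pi i L/\lambda}$$ over a finite family of paths through the two slits, and look at the squared magnitude. All of the geometry and the wavelength here are made-up illustration, not a real physical setup:

```python
import cmath
import math

# Discretized path-sum sketch for the two-slit setup: one straight
# segment from the source to each slit, one segment from the slit to
# the detector. The amplitude of a path of length L is exp(2*pi*i*L/lam).
lam = 0.05                           # wavelength (arbitrary units)
source = (0.0, 0.0)
slits = [(0.5, 0.1), (0.5, -0.1)]    # (x, y) positions of the two slits

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def intensity(d):
    """Relative arrival rate at detector position (1.0, d)."""
    detector = (1.0, d)
    amp = sum(cmath.exp(2j * math.pi * (dist(source, s) + dist(s, detector)) / lam)
              for s in slits)
    return abs(amp) ** 2

for i in range(-5, 6):
    d = i / 10
    print(f"detector y = {d:+.1f}: intensity = {intensity(d):.3f}")
```

    Sweeping the detector position $$d$$ shows the interference pattern: the two path lengths agree at $$d=0$$, so the amplitudes add constructively there, and they drift in and out of phase as $$d$$ moves along the screen.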
