Recommended Reading: IST

2016 April 30
by Daniel Lakeland

There's a fabulous, inexpensive Dover text by Alain Robert that gives a very accessible introduction to nonstandard analysis (NSA) through Internal Set Theory (IST). I wish there were an e-book version, but alas there isn't.

There is, however, an e-book version of another useful, accessible book on nonstandard analysis (by Henle and Kleinberg) that comes at the subject from a more "traditional" viewpoint (based on Abraham Robinson's original construction). It's so cheap you have to buy it right now. It spends more time on applications than on the constructions required to define infinitesimals etc.

For someone interested in applied mathematics, the IST version of NSA is extremely accessible, but the techniques are similar in both Robinson's version and IST once you get past the Robinsonian foundations (which require heavy-hitting mathematical logic).

In any case, if you build mathematical models using calculus, or you do statistics, you should know something about NSA because it "algebraifies" analysis. One advantage to that is it brings in lots of possibilities for Computer Algebra. Another advantage to NSA is that it has hugely enriched function spaces. Things that "ought" to be functions but in standard mathematics aren't, like delta functions, are perfectly reasonable nonstandard objects.
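For example (a standard NSA illustration, not from either book): pick a positive infinitesimal \epsilon and define \delta_\epsilon(x) = e^{-x^2/\epsilon^2} / (\epsilon \sqrt{\pi}). This is a perfectly ordinary (internal) function, yet for any standard continuous bounded f we have st(\int \delta_\epsilon(x) f(x) dx) = f(0), which is exactly the sifting property that defines the delta "function".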

In many areas of "standard" mathematics, we have a sequence of "finite" objects which miraculously transforms into a totally different TYPE of object in the limit. In NSA that isn't the case.
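The Riemann integral is the classic example: in standard analysis it is the limit of a sequence of finite sums, and the limit is a brand-new kind of object. In NSA it is just the standard part of a single hyperfinite sum, \int_0^1 f(x) dx = st(\sum_{i=0}^{N-1} f(i/N) \cdot (1/N)) for standard continuous f, with N an infinite hyperinteger, and internally that is still a finite sum of the same type.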

7 Responses
  1. ojm
    May 3, 2016

    I find nonstandard analysis instinctively unappealing but I'm not sure why. Having said that, some of the stuff I've seen (very briefly) on synthetic differential geometry is not bad, so maybe I could be converted.

    Have you always been a fan? How do you feel about standard epsilon-delta stuff? I had a mature (returning after many years) student come to me the other day after a lecture on PDEs in which I gave a few different ways of deriving equations.

    They said that limits had always been difficult for them and were part of the reason they originally dropped out. I found that the 'epsilon-delta game' was ultimately the explanation that worked best for them. On the other hand, I'm not sure they were ever really comfortable with 'infinitely small' and the like.

    There seems to be (very anecdotally) a difference between those who like a 'process'-based description (epsilon-delta) and those who like an 'object'-based description (infinitesimals)?

    Also reminds me of the frequentist (eg 'putting bounds on a stochastic process') vs Bayesian (eg 'manipulation of probability distribution objects') division somewhat.

    • Daniel Lakeland
      May 3, 2016

      I think it all has to do with how you eat your corn:

      http://bentilly.blogspot.com/2010/08/analysis-vs-algebra-predicts-eating.html

      I eat corn sometimes in rows across, and sometimes in random chunks... which means I'm a stochastic mix of algebraist and analyst. I think that maps to nonstandard analysis pretty well.

      Seriously, though: the better you are at computer programming, the more advantage you can get from manipulating symbolic things, which is more or less algebra. If you've done a lot of computing, I think you're probably more likely to be interested in doing analysis via algebra.

      Also, epsilon-delta is a good explanation for a mathematician interested in connections between mathematical objects, but it's a terrible description if you want to preserve the mapping between objects in the world and mathematical objects. There's a good reason Newton and Leibniz used infinitesimal reasoning: they were working with physical ideas.

      I'm not that interested in proving abstract mathematical facts about mathematical objects, though I was good enough at it as an undergrad (math major). I do like building descriptions of physical processes, and to me the infinitesimal picture preserves that mapping better. Also, powers of infinitesimal numbers give a good description of what asymptotic analysis is all about in applied mathematics...
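      A toy example of what I mean (my own, not from any text): to solve x^2 - \epsilon x - 1 = 0 for small \epsilon, posit x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + ..., collect powers of \epsilon, and you get x = 1 + \epsilon/2 + \epsilon^2/8 + O(\epsilon^4). With \epsilon an actual infinitesimal, each successive correction is infinitesimal relative to the one before it.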

      I guess it's just preferences really.

      • ojm
        May 3, 2016

        Fair enough.

        RE programming, I think this is another one with a division. I only really appreciated the beauty of analysis after studying and using numerical methods. Numerical methods and functional analysis are (to me) where analysis and algebra really start to come together. Perhaps NSA helps one get there earlier.

      • ojm
        May 4, 2016

        Having another look, it seems like if I were to use infinitesimals then the version I prefer (eg leading to synthetic differential geometry) corresponds to smooth infinitesimal analysis.

        Apparently it has some significant differences from non-standard analysis, eg no transfer principle. Interesting.

        https://en.m.wikipedia.org/wiki/Smooth_infinitesimal_analysis

  2. Daniel Lakeland
    May 4, 2016

    Smooth infinitesimal analysis, where an infinitesimal \epsilon \ne 0 has the property \epsilon^2 = 0, is very different indeed. It might have appealing properties; I don't know much about it. But it doesn't have one of the things I like about NSA, which is the "hierarchy" of asymptotic numbers. In the hyperreals you can take an infinitesimal to any power, and each higher power is infinitesimal relative to the lower power. You can do "multiscale" modeling in that system, which corresponds well to modeling things at atomic scale, at colloidal scale, at sand-grain scale, at the scale of a sand deposit, at the scale of the Grand Canyon, at the scale of the crust thickness, at the scale of the entire Earth, etc.
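    Concretely: if \epsilon \ne 0 is infinitesimal then \epsilon^{n+1}/\epsilon^n = \epsilon is itself infinitesimal for every n, so 1, \epsilon, \epsilon^2, \epsilon^3, ... form an unbounded hierarchy of genuinely separated scales. With nilsquare infinitesimals, everything past first order is literally zero.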

    • ojm
      May 4, 2016

      Fair enough. BTW you can go beyond 'nilsquare' infinitesimals and more generally consider 'nilpotent' infinitesimals, which amount to choosing some power at which to cut things off, so presumably you could get higher-order or even multiple-scale asymptotic expansions this way too.
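      A minimal sketch of that cutoff arithmetic in code (my own illustration, not from any particular library): represent a + b*e + c*e^2 + ... as a coefficient list and drop every power of e at the cutoff or above.

      ```python
      # Nilpotent-infinitesimal ("truncated power series") arithmetic:
      # keep coefficients of e^0 .. e^(ORDER-1) and treat e^ORDER as 0.
      ORDER = 3

      class Nil:
          def __init__(self, coeffs):
              c = list(coeffs)[:ORDER]
              self.c = c + [0.0] * (ORDER - len(c))

          def __add__(self, other):
              return Nil([a + b for a, b in zip(self.c, other.c)])

          def __mul__(self, other):
              out = [0.0] * ORDER
              for i, a in enumerate(self.c):
                  for j, b in enumerate(other.c):
                      if i + j < ORDER:   # e**(i+j) vanishes past the cutoff
                          out[i + j] += a * b
              return Nil(out)

          def __repr__(self):
              return " + ".join(f"{a}*e^{i}" for i, a in enumerate(self.c))

      x = Nil([2.0, 1.0])   # x = 2 + e
      print(x * x * x)      # (2 + e)^3 = 8 + 12e + 6e^2; the e^3 term is cut off
      ```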

  3. Daniel Lakeland
    May 4, 2016

    Also I agree with you about the importance of numerical analysis and functional analysis in understanding the connections between algebra and analysis.

    The basic idea that a PDE describes the trajectory of a vector through a vector space is extremely powerful: an overarching idea that is very useful for organizing your thoughts about mathematical models. When you actually go to implement it using, say, spectral collocation methods, it really falls into place.

    I really think everyone in Engineering and Applied Math should look at John Boyd's book on spectral methods, which happens to be downloadable in PDF form:

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.3791&rep=rep1&type=pdf

    It really hammers home the idea of representing a function as a linear combination of other functions. Very relevant to my recent post on Fourier methods. The NSA approach of breaking up the domain of the function into a hyperfine grid and then calling the value of the function at the nth grid point the coordinate of the function in the nth dimension is a very literal interpretation of that idea.
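    As a minimal sketch of the "function as a vector of coordinates" idea (my own toy example in NumPy, not code from Boyd's book): sample a periodic function on a grid, treat the samples as the function's coordinates, and differentiate it by rescaling those coordinates in the Fourier basis.

    ```python
    import numpy as np

    # A periodic function represented as a vector: each grid value is one
    # "coordinate" of the function in an N-dimensional space.
    N = 256
    x = 2 * np.pi * np.arange(N) / N       # grid on [0, 2*pi)
    f = np.exp(np.sin(x))                  # the function, now a vector in R^N

    # Spectral differentiation: change to the Fourier basis, multiply the
    # k-th coefficient by i*k, and change back.
    k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    df = np.fft.ifft(1j * k * np.fft.fft(f)).real

    # Compare with the exact derivative, cos(x) * exp(sin(x)).
    print(np.max(np.abs(df - np.cos(x) * f)))   # ~1e-12: spectral accuracy
    ```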

    Expositions aimed at mathematicians often elide, or fail to really appreciate, the value of the connection between the application and the mathematical theory. So much mathematics has been invented by applied people and then formalized later in a way that's devoid of connection to the application. I'm looking at you, "distribution" theory 🙂
