Truthfulness, nondeterminism, epistemic and aleatoric randomness and Bayesian models (part 1)

2013 April 15
by Daniel Lakeland

Two interesting posts from opposite directions led me to write about the structure of Bayesian models of mechanistic processes, and the nature of uncertainty. The first was Joseph's blog post about the "truthfulness" and "information" criteria in Bayesian models, and the second was Mayo's post about statistical ontology. I think it's important to understand what it means to be a good Bayesian model, how that connects with frequentist concepts of statistics, what it means to be a random variable, and how some random variables are different from others.

To illustrate the issue, I'm going to posit a very simple physical experiment that anyone could actually do, and then talk about an analysis of this repeated experiment using a mechanistic Bayesian model, and the role that randomness plays in the model, and what it means for this to be a good model.

The experiment goes as follows: get a pad of note paper, a 2 meter stick, a stopwatch, and a partner. Tear off a sheet of paper and crumple it into a ball. Generate a random number between 1 and 2 using R. Hold the ball of paper with its bottom at that height on the 2 meter stick. Count 1, 2, 3 and let go of the paper; the partner starts the stopwatch on 3 and stops it when the paper hits the ground. Record the height and the time. Repeat the experiment, say, 20 times. Weigh the 20 papers together on a sensitive scale and record the total weight. Measure the temperature and the barometric pressure in the room if you like, or get the pressure from a weather site online and measure just the temperature, or whatever you can accomplish (a room thermostat may be available for the temperature measurement).
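For the height-generation step, a few lines of R will do. This is just a convenience sketch: the seed, the rounding, and the data-frame layout are my own incidental choices.

```r
## Pre-generate the 20 random drop heights and a blank data sheet.
set.seed(42)                            # any seed; just for reproducibility
heights <- runif(20, min = 1, max = 2)  # drop heights in meters
trials <- data.frame(trial  = 1:20,
                     h0     = round(heights, 3),  # height set on the stick
                     t_fall = NA)                 # fill in from the stopwatch
print(trials)
```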

Now we'd like to build a model of what happened that takes into account what we know: First off, Newton's laws including a decent model for drag:

m a = -m g + c\!\left(\frac{\rho v r}{\mu}\right) \frac{\rho\, A(r)\, v^2}{2}

and the initial conditions v(0) = 0 and h(0) = h_0.

We can't integrate this directly via a nonlinear numerical solver (like, say, Runge-Kutta-Fehlberg) because we don't know g exactly for our current location, the drag coefficient function c, the actual mass of any one piece of paper, the appropriate dimensions A, r of the crumpled ball, or the density \rho and viscosity \mu of air. However, some people have provided us with scientific background information. We can look up a table of information about air, and from this, the ideal gas law, and a range of reasonable estimates for the water content, we can build a model for the density of air. A reasonable model would be to put in our best-guess values for water content and then allow a normal distribution around the resulting density with some variance. This is a Bayesian probability distribution, since it expresses uncertainty about a fixed value. If we're indoors and do the experiments in close succession, it's reasonable to expect that the density doesn't change between experiments.
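As a concrete sketch of that density model: the physical constants below are standard, but the 1% prior standard deviation is just an illustrative choice standing in for the humidity uncertainty.

```r
## Best-guess air density from the ideal gas law (dry air), plus a
## normal prior expressing our uncertainty about the fixed true value.
air_density <- function(P = 101325, T_C = 20) {
  M <- 0.0289644                 # molar mass of dry air, kg/mol
  R <- 8.31446                   # universal gas constant, J/(mol K)
  P * M / (R * (T_C + 273.15))   # density in kg/m^3
}
rho_hat <- air_density(P = 101325, T_C = 22)   # about 1.2 kg/m^3

## Prior for the (fixed but unknown) density during the session:
## rho ~ N(rho_hat, (0.01 * rho_hat)^2)
rho_draws <- rnorm(1000, mean = rho_hat, sd = 0.01 * rho_hat)
```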

Similarly, we can get the viscosity of air using the calculator that Wikipedia suggests, and since we don't know the exact water content we may need to include some unknown correction; again, this is a Bayesian model for a fixed but unknown value.
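A calculator of that sort is presumably based on something like Sutherland's law for dry air; here is a minimal sketch with the standard constants for air (the humidity correction discussed above is deliberately omitted and left as an unknown).

```r
## Dynamic viscosity of dry air via Sutherland's law.
air_viscosity <- function(T_C) {
  T_K <- T_C + 273.15
  mu0 <- 1.716e-5   # reference viscosity at T0, Pa s
  T0  <- 273.15     # reference temperature, K
  S   <- 110.4      # Sutherland constant for air, K
  mu0 * (T_K / T0)^1.5 * (T0 + S) / (T_K + S)
}
air_viscosity(22)   # about 1.82e-5 Pa s
```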

Most people will not even consider the variability in g with location, unless they've taken courses in geophysics or geodesy, but it's not entirely trivial. Where I am near Los Angeles, Wikipedia's table says it's about 9.796 m/s^2, but values worldwide vary from about 9.76 to 9.83, so it would be reasonable to give a prior distribution for g of N(9.796, 0.02^2) if you're in Los Angeles. It might be reasonable to expand the variance to, say, 0.04^2 if you didn't know much about your location, altitude, or other factors.

Finally, we have c, the drag coefficient function, and A, r, which describe the shape and size of the ball of paper. Both are unknown, and both actually do vary from experiment to experiment; in this sense, they have frequentist-type frequency distributions. In fact, a crumpled ball of paper does not have a single A, r value at all. We might like to define r and c together: let c' be the drag coefficient function for an idealized sphere of the same weight as the piece of paper, and then let r be the value for which the ideal sphere's predicted travel time equals the actual travel time of the real paper, given A = \pi r^2. Note that this definition of r will depend slightly on the height from which the paper is dropped, an effect we will include in "model error". Furthermore, the function c' for an idealized sphere is itself not perfectly well constrained by data, and the ball of paper is not a sphere. These are additional modeling errors. Perhaps we would like to add a perturbation function so that c_i = c' + dc_i, expressing dc as a series with random coefficients, and perhaps we would like to express our knowledge that in this range of values the experiment should be pretty well approximated by a sphere by placing prior constraints on the coefficients of dc.
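To make that decomposition concrete, here is one way it might look. The sphere baseline uses the Clift-Gauvin correlation for smooth spheres; the perturbation basis (powers of log Re) and the prior scale are illustrative assumptions, not anything dictated by the argument above.

```r
## Baseline drag coefficient c' for a smooth sphere (Clift-Gauvin).
c_sphere <- function(Re) {
  24 / Re * (1 + 0.15 * Re^0.687) + 0.42 / (1 + 42500 * Re^(-1.16))
}

## Perturbed drag for paper ball i: c_i = c' + dc_i, with dc_i a
## low-order series in log(Re) with random coefficients.
c_paper <- function(Re, dc_coef) {
  dc <- sum(dc_coef * log(Re)^(seq_along(dc_coef) - 1))
  c_sphere(Re) + dc
}

## "Roughly a sphere" prior: tight, zero-centered coefficients, e.g.
## dc_coef[k] ~ N(0, 0.1^2)
```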

Finally, there is the measurement process. Aligning the ball with the correct height, starting the stopwatch, and stopping the stopwatch all have fluctuations associated with them, and these are real "frequency"-type fluctuations. It's not that the error is the same every time but unknown; it's both unknown and different every time. We would like to create a statistical model of these measurements, and a typical one is to treat the measured value as normally distributed around the true value, with mean 0 error and some variance; if we are Bayesian we may put hyperparameters on the variance. Then we have Bayesian probability distributions over the parameters of a frequency distribution.
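Written out, a minimal version of that measurement model for the fall times might be

t^{obs}_i = t_i + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2), \qquad \sigma \sim \mathrm{HalfCauchy}(0, 0.1),

where t_i is the true fall time in experiment i; the half-Cauchy hyperprior on \sigma is just one common choice, not something fixed by the discussion above, and an analogous model applies to the measured heights.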

Ideally, from a Bayesian perspective, I would now like to build a computer model of this experiment which will give me an approximate likelihood function, namely p(D | g, c, h_0, r, \rho, \mu, m, ...), where D is the vector of measured initial heights and fall times, h_0 is the vector of actual heights, and ... indicates that we may have other parameters, such as the hyperparameters of the measurement error model. To do this we will take the parameter values as given, integrate the equations of motion through time until the paper is predicted to hit the ground, obtain a predicted fall time, and then use the measurement error model to compute the probability of the discrepancy. Then we can use MCMC to do Bayesian inference on the values of g, r, \rho, \mu, etc.
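As a rough sketch of what that simulation-based likelihood might look like in code, using deSolve (the successor to the odesolve package mentioned in the comments below): a constant drag coefficient cd stands in for the full c function, and the parameter names and the 5-second integration cap are illustrative assumptions.

```r
library(deSolve)

## Simulated fall time for one drop: integrate the equation of motion
## until the height crosses zero (lsodar stops at the root).
fall_time <- function(h0, g, cd, rho, r, m) {
  A <- pi * r^2
  deriv <- function(t, y, parms) {
    # y[1] = height, y[2] = velocity (negative while falling),
    # so the drag term is positive and opposes the fall
    list(c(y[2], -g + cd * rho * A * y[2]^2 / (2 * m)))
  }
  hit_ground <- function(t, y, parms) y[1]
  out <- lsodar(y = c(h = h0, v = 0), times = seq(0, 5, by = 0.001),
                func = deriv, parms = NULL, rootfunc = hit_ground)
  tail(out[, "time"], 1)   # time at which the root stopped the solver
}

## Approximate log-likelihood of the observed fall times, using the
## normal measurement error model with standard deviation sigma.
log_lik <- function(t_obs, h0, g, cd, rho, r, m, sigma) {
  t_pred <- vapply(h0, fall_time, numeric(1),
                   g = g, cd = cd, rho = rho, r = r, m = m)
  sum(dnorm(t_obs, mean = t_pred, sd = sigma, log = TRUE))
}
```

From here, mcmc::metrop could drive a random-walk Metropolis sampler with log_lik plus the log-prior densities as its target function.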

This blog post is long enough now that I will come back to the topic soon and see if we can make progress on understanding the various roles that randomness plays, and how to deal with likelihoods induced by numerical simulations.

(NOTE: having performed this experiment a few times, it seems that the fall time will be around half a second in most cases. It is therefore useful to have a stopwatch that reads to the nearest 0.01 seconds, not because your reaction time is that good, but because at that resolution round-off error becomes negligible compared to the more continuous reaction-time component. Apparently Stopwatch by Chronus is an Android stopwatch that promises 1/1000 second accuracy.)

(NOTE2: it seems prudent, for identifiability reasons, to drop each balled-up paper from, say, 3 different heights; this gives us information about the drag function, since the shape and size are constant for a given piece of paper.)

3 Responses
  1. April 15, 2013

    Frequentists often do this kind of modeling. They just call it sensitivity analysis.

    The usual procedure is to rail against the prior N(9.796, 0.0004) for something like g. Then, after spending a great deal of time claiming it's all subjective metaphysical nonsense, they will want to carry out a sensitivity analysis of the model's predictions using different values of g. After doing their physics homework they will decide to use values for g in the range 9.757-9.835 and then rejoice in the superior objectivity of their approach, without noticing that they get the same answers as the Bayesian.

    They can run into trouble, though: since they want to view every probability as a frequency, they will want to interpret every variance as an actual physical variation. Sometimes this is fine, but sometimes it will give them endless difficulties, because it isn't true in general.

  2. Daniel Lakeland
    April 16, 2013

    I hope in the follow-up to talk about the role that probability plays in various aspects of the model. For example, the distribution over g is a pure Bayesian uncertainty, since during the experiments g will be a definite constant (to at least 4 significant figures).

    On the other hand, the c function should really be different for each experiment, or maybe we should fix the c function and let r_{\rm eff} be a time-varying effective r value. In any case, there is something that actually is different in each experiment due to the shape and orientation of each paper ball.

    I have a friend who will hopefully help me run this experiment Wednesday, so I will actually provide a bunch of data to analyze. It's even justifiable, since my research involves a more complicated situation with an ODE for which I need to do Bayesian inference, and I can't do it in JAGS or Stan because I need to solve the ODE at each step. However, I think I can accomplish this using the "mcmc" package in R together with the odesolve package.

    So, this is half a way to talk about the philosophical implications of a specific example model, and half a way to figure out how to practically build such a model on a computer.

Trackbacks and Pingbacks

  1. Dropping the Ball: paper experiments in Dynamics and Inference (part 2) | Models Of Reality
