On the other hand, the c function should really be different for each experiment, or maybe we should fix the c function and let there be a time-varying effective r value. In any case, there is something that actually is different in each experiment due to the shape and orientation of each paper ball.

I have a friend who will hopefully help me run this experiment Wednesday, so I will actually have a bunch of data to analyze. It's even justifiable, since in my research I have a more complicated situation with an ODE describing something for which I need to do Bayesian inference, and I can't do it in JAGS or Stan because I need to solve the ODE at each step. However, I think I can accomplish this using the "mcmc" package in R together with the odesolve package.
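The basic structure, solve the ODE inside every posterior evaluation and feed that to a Metropolis sampler, is easy to sketch. Here it is in Python with scipy rather than R's mcmc + odesolve, and everything here (the quadratic-drag ODE, the fake data, the noise level, the flat prior on c) is my own illustrative assumption, not the actual experiment:

```python
# Hypothetical sketch: Metropolis sampling where each log-posterior
# evaluation requires an ODE solve. Stand-in for R's metrop() + odesolve.
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(0)

def fall(t_eval, c, g=9.8):
    # Toy falling-ball model with quadratic drag: x' = v, v' = g - c*v^2.
    def rhs(y, t):
        x, v = y
        return [v, g - c * v**2]
    t = np.concatenate(([0.0], t_eval))      # integrate from rest at t = 0
    sol = odeint(rhs, [0.0, 0.0], t)
    return sol[1:, 0]                        # distances at the observed times

# Fake data: distances at ten times, generated with c = 1.1 (assumed).
t_obs = np.linspace(0.1, 1.0, 10)
y_obs = fall(t_obs, 1.1) + rng.normal(0, 0.01, t_obs.size)

def log_post(c):
    # Flat prior on c > 0, Gaussian measurement error (sigma = 0.01 m).
    if c <= 0:
        return -np.inf
    resid = y_obs - fall(t_obs, c)           # ODE solve on every call
    return -0.5 * np.sum(resid**2) / 0.01**2

# Plain random-walk Metropolis over the drag coefficient c.
c, lp = 1.0, log_post(1.0)
samples = []
for _ in range(2000):
    prop = c + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    samples.append(c)

print(np.mean(samples[500:]))                # posterior mean, after burn-in
```

The only thing that changes relative to an ordinary MCMC setup is that the likelihood is expensive, since every proposal triggers a numerical ODE solve, which is exactly why canned samplers like JAGS won't do it.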

So, this is half an excuse to talk about the philosophical implications of a specific example model, and half a way to figure out how to practically build such a model on a computer.

The usual procedure is to rail against a prior like N(9.769, .0004) for something like g. Then, after spending a great deal of time claiming it's all subjective metaphysical nonsense, they will want to carry out a sensitivity analysis of the model's predictions using different values of g. After doing their physics homework they will decide to use values of g in the range 9.757-9.835, and then rejoice in the pure objective superiority of their approach without noticing that they get the same answers as the Bayesian.

They can run into trouble, though: since they want to view every probability as a frequency, they will want to interpret every variance as an actual physical variation. Sometimes this is fine, but sometimes it will cause endless difficulties, because it isn't true in general.
