Professor Shalizi at CMU has an interesting article on how Bayesian Inference applied to the Jaynesian view of statistical mechanics leads to a backward arrow of time (decreasing entropy as time increases).

The essence of the argument, as far as I can see, is this: if we use Bayesian uncertainty to describe thermodynamic entropy, then each time we make a measurement we reduce our uncertainty, or at best keep it constant. This implies decreasing entropy as time goes along. Since the thermodynamic entropy of a system tends to increase on average, he believes there is a conflict in the assumptions, which should be resolved by throwing away the idea that Shannon entropy can be identified with thermodynamic entropy.
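The conditioning step at the heart of the argument can be illustrated with a toy calculation: for any joint distribution over a "microstate" and a "measurement outcome", the expected Shannon entropy after conditioning on the outcome, H(X|Y), never exceeds the prior entropy H(X). A minimal sketch (the joint distribution here is random and purely illustrative, not a physical model):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)

# Toy joint distribution over (microstate x, measurement outcome y).
joint = rng.random((8, 3))
joint /= joint.sum()

p_x = joint.sum(axis=1)   # prior over microstates
p_y = joint.sum(axis=0)   # distribution of measurement outcomes

prior_H = shannon_entropy(p_x)

# Expected entropy after conditioning on the outcome y:
# sum_y p(y) * H(x | y) = H(X|Y), which never exceeds H(X).
posterior_H = sum(
    p_y[j] * shannon_entropy(joint[:, j] / p_y[j]) for j in range(3)
)

print(prior_H, posterior_H)
```

On average, then, an idealized (noiseless, non-disturbing) measurement can only shrink the Bayesian uncertainty, which is exactly the tension Shalizi points at.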

I find this argument interesting, but I’m not sure he has given a full accounting of the issue. In particular, in the steps between equations 2 and 3 he assumes that we have measured macrostate m1 and that we should condition our new microstate distribution on that measurement. However, as any experimentalist would happily tell you, measurements are never perfectly accurate. I believe this is an issue with Jaynes’s theory as well: when Jaynes says that we should choose the maximum entropy distribution consistent with the macrostate, I am not aware of any way in which he takes into account our uncertainty in the macrostate itself. If, for example, we know the volume of an ideal gas exactly but the pressure is measured only to one part in $$10^6$$ (a very precise measurement), the implied uncertainty about the microstate due to that pressure uncertainty is huge. Since N = PV/kT and k is very small, a tiny relative error in P implies an enormous absolute error in N, which enters into the entropy via a factorial!
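To make the pressure example concrete, here is a back-of-the-envelope sketch (the numbers are illustrative, roughly one mole of gas near room conditions): a relative pressure error of one part in a million translates into an absolute uncertainty of hundreds of quadrillions of molecules, and since the entropy involves log N!, Stirling's approximation (log N! ≈ N log N − N) turns that into an entropy uncertainty of order dN · log N in units of k:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
P = 1.0e5          # pressure, Pa (illustrative)
V = 0.0248         # volume, m^3 (roughly one mole at ~300 K)
T = 300.0          # temperature, K

N = P * V / (k * T)    # number of molecules, ~6e23

# A relative pressure error of one part in a million...
rel_err = 1e-6
dN = N * rel_err       # ...is an absolute error of ~6e17 molecules.

# The entropy contains log(N!) ~ N log N - N (Stirling), whose
# derivative with respect to N is log N, so the entropy shifts by about
dS_over_k = dN * math.log(N)

print(N, dN, dS_over_k)
```

Even this "very precise" measurement leaves an entropy uncertainty of order 10^19 in units of k, which is the sense in which the implied microstate uncertainty is huge.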

Also, any measurement process itself requires an interaction with the system, which inevitably leads to increasing uncertainty. If we account for these factors, can we save ourselves from the argument of Shalizi? I don’t know, but I’d love to find out.

3 Responses
1. July 16, 2010

Dear Daniel,

There is a quite different viewpoint concerning entropy and thermodynamics as a whole.

I hope you’d be interested in this: http://arxiv.org/ftp/arxiv/papers/1007/1007.1773.pdf

Respectfully yours,

Evgeni B Starikov

• July 16, 2010

Evgeni,
I am very interested in this topic in general, so I will read your paper with interest. Thank you.

2. November 8, 2011

You can take the measurement uncertainty into consideration explicitly. For example, if you measure the energy E with some error “delta E”, then you can maximize the entropy subject to constraints on both E and its variance (i.e. constraints on the first and second moments of the energy).

This is mathematically complicated, however, so it typically isn’t done in Statistical Mechanics. Moreover, it isn’t usually needed there either. The reason is that although the implied uncertainty in the microstate is phenomenally large, the error this induces in the macroscopic variables of interest is usually negligible.
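As a sketch of what that two-moment construction looks like (the energy grid and multiplier values below are illustrative, not fitted to any real measurement): maximizing Shannon entropy subject to constraints on the first and second moments of E gives a distribution of the exponential-family form p(E) ∝ exp(−λ₁E − λ₂E²), with λ₂ = 0 recovering the ordinary canonical (Boltzmann) form:

```python
import numpy as np

# Toy discretized energy levels (illustrative, not a real spectrum).
E = np.linspace(0.0, 10.0, 200)

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability levels."""
    q = p[p > 0]
    return -np.sum(q * np.log(q))

def maxent_two_moments(lam1, lam2):
    """Maximum-entropy distribution under first- and second-moment
    constraints: p(E) ∝ exp(-lam1*E - lam2*E^2).  In practice the
    Lagrange multipliers lam1, lam2 would be tuned until the
    distribution reproduces the measured <E> and Var(E)."""
    w = np.exp(-lam1 * E - lam2 * E**2)
    return w / w.sum()

# lam2 = 0 is the ordinary canonical form; a nonzero lam2 encodes the
# extra constraint coming from the variance of the measured energy.
p_canonical = maxent_two_moments(0.5, 0.0)
p_constrained = maxent_two_moments(0.5, 0.02)

for name, p in [("canonical", p_canonical), ("two-moment", p_constrained)]:
    mean = np.sum(p * E)
    var = np.sum(p * E**2) - mean**2
    print(f"{name}: <E> = {mean:.3f}, Var(E) = {var:.3f}, "
          f"H = {entropy(p):.3f} nats")
```

Adding the second-moment constraint concentrates the distribution further and lowers its entropy relative to the unconstrained canonical form, which is the extra bookkeeping the comment describes.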