There was a bunch of discussion over at Andrew Gelman's blog about "bet-proof" interpretations of confidence intervals. The relevant paper is here.

The basic principle of bet-proofness was essentially that if a sample of data X comes from a RNG with known distribution $D(\Theta)$ that has some parameter $\Theta$, then even if you know $\Theta$ exactly, so long as you don't know what the X values will be, you can't make money betting on whether the constructed CI will contain $\Theta$ (the paper writes this in terms of $f(\Theta)$, but the principle is the same since f is a deterministic function).
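This pre-data claim is easy to check by simulation. The sketch below assumes a concrete setup not specified in the post: X is a sample of n draws from Normal($\Theta$, 1), the interval is the standard 95% z-interval for the mean, and the bettor is offered fair odds on a 95% claim (win 1 unit when the CI covers, lose 19 when it misses). The parameter values are made up for illustration.

```python
import random
import statistics

# Hypothetical setup: the bettor knows theta but has not yet seen X.
random.seed(1)
theta, n, trials = 2.0, 10, 200_000
half = 1.96 / n ** 0.5  # half-width of the 95% z-interval (sigma = 1)

profit = 0.0
for _ in range(trials):
    m = statistics.mean(random.gauss(theta, 1.0) for _ in range(n))
    covered = (m - half) <= theta <= (m + half)
    # Fair odds on a 95% claim: +1 if the CI covers, -19 if it misses.
    profit += 1.0 if covered else -19.0

print(profit / trials)  # close to 0: knowing theta alone gives no edge
```

Averaged over the randomness in X, the expected profit is 0.95(1) + 0.05(-19) = 0, which is exactly the bet-proofness of the *procedure*.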

The part that confused me was that this was then taken to be a property of the individual realized interval: "because an interval came from a bet-proof procedure, it is a bet-proof realized interval," in essence. But this defines a new term, "bet-proof realized interval," which is meaningless when it comes to actual betting. The definition of "bet-proof procedure" explicitly averages over the possible outcomes of the data collection procedure $X$, but after you've collected $X$ and told everyone what it is, anyone who knows $\Theta$ and knows $X$ can calculate exactly whether the confidence interval does or does not contain $\Theta$, and so they win every bet they make.

So "bet-proof realized confidence interval" is really just a technical term meaning "a realized confidence interval that came from a bet-proof procedure"; it has no content for predicting bets about that realized interval. The Bayesian with perfect knowledge of $\Theta$, of $X$, and of the confidence interval construction procedure wins every bet! There's nothing uncertain about these bets.
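To make the post-data point concrete, here is a sketch under the same made-up setup as before (Normal($\Theta$, 1) data, 95% z-interval). The "bet" is deliberately tautological, because that is the point: once $\Theta$ and the realized $X$ are both known, coverage is a computable fact, not a random event.

```python
import random
import statistics

random.seed(2)
theta, n, trials = 2.0, 10, 1000
half = 1.96 / n ** 0.5  # half-width of the 95% z-interval (sigma = 1)

wins = 0
for _ in range(trials):
    x = [random.gauss(theta, 1.0) for _ in range(n)]
    m = statistics.mean(x)
    covered = (m - half) <= theta <= (m + half)
    # Seeing both theta and the realized X, the bettor just computes
    # coverage directly and bets on the side that is already certain.
    bet_on_coverage = covered
    wins += (bet_on_coverage == covered)

print(wins)  # equals trials: every single bet is won
```

No odds, fair or otherwise, can protect the other side of this bet; the averaging that made the procedure bet-proof has already been resolved by observing $X$.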
