It is perfectly possible to accept Cox's theorem as stated by Van Horn, accept the guidelines given by Feynman and consistently use non-Bayesian statistical methods.

That is to say, the issues addressed by Cox/Van Horn appear to relate to statistical inference, but in fact address narrower questions of probability theory and propositional logic that should be acceptable to anyone using probability calculations.

On the other hand, the issues addressed by Feynman are sufficiently broad to accommodate a wide range of positions on statistical/scientific inference methodology.

So, while interesting, none of these are arguments *for Bayesian statistics or Bayesian epistemology etc as opposed to a number of alternatives*.

Now, do I have a better, fully-worked alternative? Nope! But I'll let you know if I ever get one 🙂

I think if you can put forward a specific logical argument for a particular position on how it all works, I could better understand where you're coming from. So far I think you agree with me that Cox's theorem provides a logical basis for updating degrees of plausibility over factual statements. Then you seem to say that, outside of factual statements, there are additional questions that can't be answered, and I agreed: specifically, I put forward Feynman's framework of "guess", "compute", "check" and agreed that both "guess" and "check" have little to do with Bayesian probability theory (except that if you accept Bayesian theory, you should compute probability consequences in a certain way). You seem to have other issues of interest, but you haven't really said what those questions are, or how you think they should be answered. Or, if you have, I at least haven't understood the specifics.

So, by this definition, the p value "verifies" whether a sequence of numbers is a random sequence from a given distribution or not.

See post here: http://models.street-artists.org/2013/03/13/per-martin-lof-and-a-definition-of-random-sequences/ which has a link to the paper.

By this definition, the p value MUST be useful for "verifying" that a sequence is from a given frequency distribution. However, by this definition there is absolutely NO reason to think that it verifies that a plausibility distribution is "adequate". For "adequate" you need some kind of notion of "goodness" and the p value is only a notion of "goodness" when you're searching for a model of frequencies.
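To make this use of a p value concrete, here is a minimal sketch, with an entirely made-up sequence: an exact two-sided p value for whether a binary sequence is consistent with a Bernoulli(0.5) frequency distribution, computed from the binomial null distribution of the head count. The sequence and the choice of test statistic are my own illustrative assumptions, not anything from the discussion above.

```python
# Hypothetical illustration: a p value used to "verify" that a binary
# sequence is consistent with a Bernoulli(0.5) frequency distribution.
# The sequence below is made up for the example.
from math import comb

def binomial_p_value(seq, p=0.5):
    """Exact two-sided p value for the number of heads in `seq`
    under an i.i.d. Bernoulli(p) null hypothesis."""
    n = len(seq)
    k = sum(seq)
    # Probability of each possible head count under the null.
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Two-sided: total probability of all outcomes no more likely
    # than the one actually observed.
    return sum(q for q in pmf if q <= pmf[k])

seq = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(binomial_p_value(seq))  # large p: no evidence against Bernoulli(0.5)
```

A large p value here means the head count is unremarkable under the assumed frequency distribution; a tiny one would count against it. Note this is exactly the frequency-model "goodness" use of a p value described above, nothing more.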

As Jaynes says "It is therefore highly illogical to speak of 'verifying' (3.8 [the Bernoulli urn equation]) by performing experiments with the urn; that would be like trying to verify a boy's love for his dog by performing experiments on the dog."

So, when you're doing inference on a frequency distribution, you can then figure out the adequacy of your posterior distribution by comparing the frequency distributions you find to the data they supposedly generate, using p values. But when you're not doing inference on a frequency distribution... it just makes no sense to use p values. It's a category error, like checking whether the temperature of a frying pan is at least 3 meters.
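A hedged sketch of the kind of adequacy check described here, with made-up data: infer a Bernoulli rate (Beta posterior), simulate replicated sequences from the posterior, and compare them to the observed data via a Monte Carlo posterior predictive p value for a chosen test statistic (number of runs). The data, prior, and statistic are all illustrative assumptions.

```python
# Hypothetical sketch: after inferring a frequency distribution (here a
# Bernoulli rate with a Beta posterior), compare data the fitted model
# generates to the observed data using a p value.
import random

random.seed(0)

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1]

def runs(seq):
    """Test statistic: number of maximal runs of equal values."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

def posterior_predictive_p(data, draws=5000):
    """Monte Carlo posterior predictive p value, Beta(1, 1) prior."""
    n, k = len(data), sum(data)
    t_obs = runs(data)
    extreme = 0
    for _ in range(draws):
        theta = random.betavariate(1 + k, 1 + n - k)  # posterior draw
        rep = [1 if random.random() < theta else 0 for _ in range(n)]
        extreme += runs(rep) <= t_obs  # one-sided: too few runs?
    return extreme / draws

print(posterior_predictive_p(data))
```

The point of the sketch is only that the p value here compares *frequencies* (simulated versus observed); when no frequency distribution is being inferred, there is nothing analogous for it to compare.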

But again, Cox's theorem doesn't say that: Fisherian p-values refer to the adequacy of a statistical model, which you've said isn't covered by Cox, and Cox's theorem does relate to frequencies (of propositions), e.g.

"In fact, Cox pointed this out in his 1961 book The Algebra of Probable Inference, quoting Boole in Footnote 5, p. 101. In this passage, Boole not only makes the connection between the frequentist and logical interpretations of probability, he suggests that it is necessaryâ€”which is the point of Coxâ€™s Theorem."

(from the meaningness post).

Statisticians of all stripes accept (or should accept) probability applied to simple propositions. The question is what can and can't be represented by simple propositions.

For example, a 'state of information' is not a collection of propositions, as discussed. It does, however, allow you to assign probabilities to propositions, i.e. it is a probability model.

One could use a Fisherian p-value to make a statement about the probability model (state of information) itself, rather than about propositions within the model. In fact, I think this is the most sensible use of p-value-style reasoning (whether formal or informal).

To stick with the cat-skinning metaphor, it's like saying "there are no other ways of skinning a cat that consistently produce a complete skin, with no punctures, that can be effectively tanned, other than to use a skinning knife," and you're saying "gee, but this guy over here effectively removes the skin from cats in thin strips."

If you want something else, then Cox's theorem isn't that helpful, but if you want plausibility, Cox's theorem tells you to stop fiddling around with NHST, p values, and frequencies, because frequencies and plausibilities are not the same thing, even though they confusingly obey the same kind of algebra.

Anyway, I suppose we aren't going to agree anytime soon.
