Feasibility in the real world is of course very dependent on the problem. To me, knowing how to analyze the stopping rule in a fully Bayesian context, thanks to your clarifications, at least gives me the tool I need to argue whether or not the stopping has meaning.

There are enough cases I am faced with where the stopping rule was pretty nebulous, so simply being able to tell people "hey, here are the things I need to know about the experimental design to determine whether we need to think about the stopping" is really helpful.

Even if we conclude "looks like we should just ignore the stopping rule" every single time, at least we've reached that conclusion from some logically consistent Bayesian principle.

And if you know quite a lot, but not everything, about my experimental design, you can use the fact that there was a stopping event to infer a bit more: you can reverse engineer my model to some extent and use that to improve your own model, so you get a different (maybe better) posterior. I don't really think that's feasible in the real world, but I guess it's not logically impossible.

Thanks, you've pointed out that we are ambiguously talking about different situations.

Let's reiterate your scenarios, and add another one which is directly related to my number 2:

your scenarios:

(0) you don't know if N was fixed or there was a different stopping rule of any kind

(1) you know that N was fixed (N = 2)

(2) you know that there was a deterministic stopping rule different from N fixed, but not the exact details

(3) you know that the stopping rule was to stop after two heads

From what I have understood, your likelihood will be L(theta)=theta^2 in scenarios (1) and (3).
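As a numerical sanity check of that claim (my own illustration, not from the thread), here's a small grid computation: with a flat Beta(1,1) prior on theta, the likelihood theta^2 from two observed heads gives the same posterior whether N was fixed at 2 (scenario 1) or the rule was "stop after two heads" (scenario 3), because the stopping rule only contributes a constant factor to the likelihood, which cancels on normalization.

```python
# Grid over theta in (0, 1); flat Beta(1, 1) prior is an assumption for the demo.
thetas = [i / 1000.0 for i in range(1, 1000)]
prior = [1.0 for _ in thetas]

def normalize(ws):
    z = sum(ws)
    return [w / z for w in ws]

# Scenario (1): N fixed at 2, data = two heads -> L(theta) = theta^2.
lik_fixed_n = [t ** 2 for t in thetas]

# Scenario (3): stop after two heads -> L(theta) = c * theta^2, c constant.
c = 0.37  # arbitrary positive constant; it cancels on normalization
lik_stop_rule = [c * t ** 2 for t in thetas]

post_1 = normalize([p * l for p, l in zip(prior, lik_fixed_n)])
post_3 = normalize([p * l for p, l in zip(prior, lik_stop_rule)])

# Both match the exact Beta(3, 1) posterior, whose mean is 3/4.
mean_1 = sum(t * p for t, p in zip(thetas, post_1))
```

The two normalized posteriors are identical, which is the likelihood principle at work: only the shape of L(theta) matters, not the design that produced it.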

In scenario 0 I have my simple posterior from above using just my prior on the mean and the measurement error size.

In scenario 1 the fact of stopping tells me nothing and I have the same posterior as 0.

In scenario 2 I assumed I have some partial information about the stopping rule, for example that it relates somehow to instrument precision/measurement error information that I think you have. A new parameter is born, describing what I know about what you know about the measurement error, and the fact of stopping informs me about what you know about the measurement error. That also informs my posterior for the length; it would probably make it more concentrated, since you are assumed to know something real about the instrument error.

In scenario 3, to map it to my measurement error scenario, the rule was, say, to stop after two heads of a separate coin flip... this doesn't help me infer anything, except what the coin flips were, a fact I don't care about, though I do in fact learn about them if I care to.

In scenario (4) you tell me that you stopped after plugging the data into your Bayesian model, and you give me the explicit prior and likelihood that you used. Now, rather than inferring partial information about what you know about the measurement error, I have been handed complete information about what you know about the measurement error. So my posterior distribution is an even more concentrated version of (2), since I use your very good information about the measurement error exactly. In fact, unless I have some reason to add something else to my model (like, for example, that I think there's a slight consistent bias to your measurements or whatever), my posterior becomes your posterior.

If I thought that in scenario 4 I should throw out the email you sent me with the exact measurement error prior and go back to scenario (2), that really would be something weird. However, I don't think that. In scenario 4 I use your prior, because you're the one who knows the instrument.

Does that help us?

If you don't have any information about a stopping rule, you get some posterior for theta.

If you have complete information about the stopping rule, you get the same posterior.

However, if you have partial information about the stopping rule you might be able to extract additional information from the data and get a more precise posterior distribution for theta.

Don't you find that slightly disturbing? If you know precisely the stopping rule, should you ignore some of that knowledge to improve your inference about theta?

When there's something else going on (like for example, the Bayesian doesn't know enough about the rule to predict with certainty) then you'd get intermediate probabilities for stopping and a continuous likelihood.

Case 2 is where you get to infer something, because you have partial background information that can be brought to bear once you know a stopping rule was triggered.

In probability as logic, this case 2 is the equivalent of what "random" means to a frequentist. The partial information allows you to invent a parameter which is then probabilistically related to the fact of stopping. It has the same "random and related to the parameters of the model" character, but the meaning of random is different in the Bayesian interpretation: random means you can't be sure stopping would happen without being told that it did happen.

Let's stay with the other cases for a second then. If I understand correctly, you always get the same likelihood for theta. You said that your model in (3) and (1) is the same as in (0). Do you mean that you get the same posterior distribution for theta in all these cases?

I think you could make it more like what you expect to see in a prior*likelihood situation by

yourMeasurementSigma ~ MyPriorOverYourMeasurementSigma();

sqrt(N) ~ gamma(10.0,10.0/(yourMeasurementSigma/yourPosteriorSigma));

or something like that, where sqrt(N) is on the left-hand side of a sampling statement because it's data. This is obviously an approximation, because N is discrete.
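To make that approximate sampling statement concrete, here's a hypothetical grid-based sketch of the update (the prior on yourMeasurementSigma, the value 0.02, and the grids are all my own stand-ins): observing that you stopped at N = 2 is treated as a draw of sqrt(N) from the gamma distribution above, which updates my beliefs about yourMeasurementSigma.

```python
import math

def gamma_pdf(x, shape, rate):
    # density of a Gamma(shape, rate) distribution at x > 0
    return rate ** shape * x ** (shape - 1) * math.exp(-rate * x) / math.gamma(shape)

your_posterior_sigma = 0.02   # assumed: the posterior precision you'd tolerate
N = 2                         # observed: you stopped after two measurements

# Hypothetical grid prior over yourMeasurementSigma: exponential with mean 0.5.
sigmas = [0.001 * i for i in range(1, 2001)]
prior = [2.0 * math.exp(-2.0 * s) for s in sigmas]

# Likelihood of the stopping data: sqrt(N) ~ gamma(10, 10 / (sigma / yps)),
# i.e. a distribution whose mean is sigma / your_posterior_sigma.
lik = [gamma_pdf(math.sqrt(N), 10.0, 10.0 / (s / your_posterior_sigma))
       for s in sigmas]

post = [p * l for p, l in zip(prior, lik)]
z = sum(post)
post = [p / z for p in post]

# The stopping event concentrates my beliefs about the instrument error
# near sigma ~= your_posterior_sigma * sqrt(N) ~= 0.028.
post_mean = sum(s * p for s, p in zip(sigmas, post))
```

The point of the sketch is just that the bare fact "you stopped at N = 2" carries information about the instrument error once it's routed through this model.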

Again, just checking in here quickly to give hints between other Sunday duties.

I know you're collecting data on the length of some object, and I get from you the data set {1.04, 0.97, STOP After 2}

I also know that you have a lot of experience with this measurement apparatus, and that you're motivated to measure this object with reasonably tight precision. (some way to partially interpret the stopping rule)

Also, I have reason to believe that the size of this object is O(1), so that in the absence of any data I could put, say, a gamma(2,2) prior on the length.

Now, if I don't use the information about the stopping rule, I get the inference

mu ~ gamma(2,2);

sigma ~ exponential(1.0/0.5); // I guess the measurement instrument isn't too inaccurate

Data ~ normal(mu,sigma);
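That model can be evaluated numerically; here's a rough grid-approximation sketch (the grids and ranges are my own choices) that marginalizes sigma out and gives the posterior for mu from the two measurements alone, ignoring the stopping rule.

```python
import math

data = [1.04, 0.97]   # the two measurements

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mus = [0.005 * i for i in range(1, 601)]      # grid for the length mu, 0.005..3.0
sigmas = [0.005 * i for i in range(1, 401)]   # grid for the error sigma, 0.005..2.0

post_mu = []
for mu in mus:
    prior_mu = 4.0 * mu * math.exp(-2.0 * mu)          # gamma(2, 2) density
    total = 0.0
    for s in sigmas:
        p = prior_mu * 2.0 * math.exp(-2.0 * s)        # times exponential prior on sigma
        for d in data:
            p *= normal_pdf(d, mu, s)
        total += p                                     # marginalize sigma out
    post_mu.append(total)

z = sum(post_mu)
post_mu = [p / z for p in post_mu]
mean_mu = sum(m * p for m, p in zip(mus, post_mu))
mode_mu = mus[post_mu.index(max(post_mu))]
```

With only two data points and a weak prior on sigma, the posterior for mu peaks near the sample mean but has fairly heavy tails, since sigma is so poorly determined.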

If I do use the fact that I know you have knowledge of the measurement instrument, then together with the data N, I get the inference:

mu ~ gamma(2,2);

yourPosteriorSigma ~ exponential(1.0/0.02); // information about how much error I know you're willing to tolerate

yourMeasurementSigma ~ gamma(10.0, 10.0/(yourPosteriorSigma*sqrt(N))); // inference about what you must think the measurement error in the machine is

Data ~ normal(mu,yourMeasurementSigma);
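Putting that together, here's a grid sketch of how the stopping information tightens the inference (a deliberate simplification of the model above: I fix yourPosteriorSigma at its prior mean of 0.02 instead of integrating over it, and the grids are my own choices). The gamma prior on yourMeasurementSigma, informed by N = 2, concentrates sigma near 0.028, which in turn concentrates the posterior for mu.

```python
import math

data = [1.04, 0.97]
N = 2
yps = 0.02     # simplification: yourPosteriorSigma fixed at its prior mean

def gamma_pdf(x, shape, rate):
    return rate ** shape * x ** (shape - 1) * math.exp(-rate * x) / math.gamma(shape)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mus = [0.002 * i for i in range(1, 1001)]     # grid for mu, 0.002..2.0
sigmas = [0.001 * i for i in range(1, 201)]   # grid for sigma, 0.001..0.2

sigma_rate = 10.0 / (yps * math.sqrt(N))      # rate of the stopping-informed prior

post_mu = []
for mu in mus:
    prior_mu = 4.0 * mu * math.exp(-2.0 * mu)            # gamma(2, 2) prior on mu
    total = 0.0
    for s in sigmas:
        p = prior_mu * gamma_pdf(s, 10.0, sigma_rate)    # informed prior on sigma
        for d in data:
            p *= normal_pdf(d, mu, s)
        total += p                                       # marginalize sigma out
    post_mu.append(total)

z = sum(post_mu)
post_mu = [p / z for p in post_mu]
mean_mu = sum(m * p for m, p in zip(mus, post_mu))
var_mu = sum((m - mean_mu) ** 2 * p for m, p in zip(mus, post_mu))
sd_mu = math.sqrt(var_mu)   # much tighter than without the stopping information
```

Compared with the version that ignores the stopping rule, the posterior standard deviation for mu shrinks substantially, which is exactly the "partial information about the stopping rule sharpens the inference" effect being discussed.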