Suppose you're trying to do ABC (approximate Bayesian computation) on the laser example.

You draw from the priors for true_len and wavelength.

You simulate a latent value from normal(true_len, sigma_given*true_len).

You reject unless observed - round(latent*4/wavelength)/4 equals 0.

That's obviously terribly inefficient.
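The loop above can be sketched as follows; the uniform priors, the observed value, and the noise scale are made-up stand-ins purely for illustration:

```python
import random

def abc_rejection(observed, sigma_given, n_accept=5):
    """Rejection-ABC for the laser example, with hypothetical uniform priors."""
    accepted = []
    while len(accepted) < n_accept:
        # draw parameters from (hypothetical) priors
        true_len = random.uniform(9.9, 10.1)
        wavelength = random.uniform(0.5, 0.7)
        # simulate a latent, non-rounded measurement
        latent = random.gauss(true_len, sigma_given * true_len)
        # project onto the grid of quarter-wavelength rounded possibilities
        simulated = round(latent * 4 / wavelength) / 4
        # keep the parameters only on an exact match -- hence the inefficiency
        if simulated == observed:
            accepted.append((true_len, wavelength))
    return accepted
```

Requiring an exact match is what makes this so wasteful: nearly every simulated dataset misses the observed grid point and gets thrown away.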

But it is putting a Kronecker-delta distribution on a function of the observed values and the parameters (such as true_len and wavelength), projected onto the grid of rounded possibilities.

> (meas+roundoff)*wavelen/trueval ~ normal(1,0);

In both cases you're calculating a function of data and parameters and putting a distribution on it. Usually the ABC distribution is just uniform on an interval, but it's still a distribution.

> Specifically, once parameter values from the true_len and wavelength priors have been generated, one can back out the RMS error in distance units and sample the latent non-rounded data; in your Stan model this is the real.length.nm*(1 + rt(10, df=dof.t)*tscale) piece, although for ABC I guess one would use the normal instead of the t. With samples of wavelengths from their priors, one can then round the latent data appropriately; in your Stan model, this is the round(simulated_latent_observations*4/real.wls)/4 piece. Nowhere in this process are rounding errors sampled directly from a prior distribution.
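The quoted generative recipe can be sketched like this; the priors, the noise scale, and the function name are hypothetical stand-ins, and (as noted in the quote) a normal replaces the t for ABC:

```python
import random

def simulate_rounded_data(tscale=0.001, n_obs=5):
    # hypothetical priors standing in for the real ones
    true_len = random.uniform(990.0, 1010.0)
    wavelength = random.uniform(0.6, 0.7)
    data = []
    for _ in range(n_obs):
        # latent, non-rounded observation; a normal stands in for the t
        latent = true_len * (1 + random.gauss(0, 1) * tscale)
        # round onto the quarter-wavelength grid
        data.append(round(latent * 4 / wavelength) / 4)
    return true_len, wavelength, data
```

Note that the rounding falls out of the simulation deterministically; no rounding error is ever drawn from its own prior.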

And now that I've sorted that out, the declarative model statements of the form

(l2[i]-roundl2[i])*wl2/true_len ~ normal(1,sd_scaled);

become clear to me; I confess they were rather opaque until I could see how the generative process goes.

> So, you'd generate from the true_len prior, generate from the rounding-error priors, generate from the wavelength priors, etc., and then calculate the thing on the left-hand side and accept it with probability proportional to the right-hand side of the various statements

> (l1[i]-roundl1[i])*wl1/true_len ~ normal(1,sd_scaled);

Well yes, exactly. What is the implied prior on true_len? As you say, the canonical ABC algorithm is: generate parameters from the prior, simulate data, retain corresponding parameters if the simulated data is approximately equal to the observed data (for some operationalization of "approximately equal"). If I wanted to implement that algorithm using your non-generative specification, how would I go about it? I can generate deviates from the priors on the wavelengths easily enough; if I had an explicit prior on true_len I could carry out the data simulations required for ABC in almost exactly the same way that you generate observed data given the hidden true parameters (the "almost" here referring to the t vs. normal issue). But what exactly am I supposed to do with your non-generative component to get myself to simulated data?
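The accept-with-probability step described above can be sketched as follows; the statistic follows the declarative statement, while the normalization (scaling the kernel so a statistic exactly at the mean is always accepted) is my own assumption:

```python
import math
import random

def normal_pdf(x, mu, sd):
    # density of normal(mu, sd) at x
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def soft_accept(l1, roundl1, wl1, true_len, sd_scaled):
    # the function of data and parameters from the declarative statement
    stat = (l1 - roundl1) * wl1 / true_len
    # accept with probability proportional to the normal(1, sd_scaled) density,
    # scaled so that a statistic exactly at the mean is accepted with probability 1
    p = normal_pdf(stat, 1.0, sd_scaled) / normal_pdf(1.0, 1.0, sd_scaled)
    return random.random() < p
```

This replaces the Kronecker-delta kernel of plain rejection ABC with a smooth one, but it still leaves open the question of where the true_len draws come from in the first place.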
