In a more general context, it seems like it wouldn't be too hard to back out enough block-conditional posteriors from the generative model in closed (or closed-enough) form that I could use block differential evolution MCMC to get a sampler with the right stationary distribution. (Efficiency would be a concern, naturally, but when is it not?)
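For concreteness, here is a minimal sketch of plain differential evolution MCMC in Python (ter Braak-style: each chain proposes a jump along the difference of two other chains, then Metropolis accept/reject). This is the basic un-blocked version targeting a standard normal, not the block-conditional scheme described above; all the tuning numbers are conventional defaults, not anything from this discussion.

```python
import numpy as np

def de_mcmc(log_post, n_chains=10, n_iter=2000, dim=1, seed=0):
    """Minimal differential-evolution MCMC: each chain proposes a move
    along the difference of two other randomly chosen chains, scaled by
    the standard 2.38/sqrt(2d) factor, plus a small jitter."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(n_chains, dim))      # initial population
    logp = np.array([log_post(x) for x in pop])
    gamma = 2.38 / np.sqrt(2 * dim)             # conventional DE-MC scale
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            j, k = rng.choice([c for c in range(n_chains) if c != i],
                              size=2, replace=False)
            prop = (pop[i] + gamma * (pop[j] - pop[k])
                    + rng.normal(scale=1e-4, size=dim))  # jitter breaks ties
            lp = log_post(prop)
            if np.log(rng.uniform()) < lp - logp[i]:     # Metropolis step
                pop[i], logp[i] = prop, lp
        samples.append(pop.copy())
    return np.concatenate(samples[n_iter // 2:])         # drop burn-in

# Toy target: standard normal, so the draws should have mean ~0, sd ~1.
draws = de_mcmc(lambda x: -0.5 * float(x @ x))
print(draws.mean(), draws.std())
```

The appeal for the kind of model below is that DE-MC only needs log-density evaluations, never gradients, so the `round()` steps that break HMC are harmless here.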

y_act[2:1000] - y_act[1:999] ~ uniform(-0.005, 0.005)

You can express this as an iteration: the distribution of y_i is then windowed within the reachable range and depends on y_{i-1}... but again, that's a huge stretch and an obfuscation of what's going on compared to the declarative statement above.
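The iterative reading can be simulated directly: pick an arbitrary starting value, then add an independent uniform(-0.005, 0.005) increment at each step. A sketch (the starting value 0.0 and the RNG seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Iterative reading of the difference prior: each of the 999 successive
# differences y_act[i] - y_act[i-1] is uniform(-0.005, 0.005).
increments = rng.uniform(-0.005, 0.005, size=999)
y_act = np.concatenate([[0.0], np.cumsum(increments)])  # y_act[0] arbitrary

diffs = np.diff(y_act)
print(len(y_act), diffs.min(), diffs.max())
```

The simulation makes the point in the text concrete: procedurally this is a random walk conditioned step by step, while declaratively it is one vectorized statement about the differences.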

The big issue is the final step.

y_meas ~ ??????

There are two issues. The first is that the "round" function is non-differentiable, so it mucks with Stan's Hamiltonian Monte Carlo pretty heavily. The second is that it's only the difference between that big generative thingy and y_meas which is gamma distributed. There's no way in Stan to say:

y_meas = some_complicated_function of parameters.

you can only say y_meas ~ some_distribution_function

What is the distribution you should put on y_meas given that

y_meas - (...) ~ gamma(foo,bar);

which is the declarative statement I'd use in _my_ model.

I guess the point of all of this is that forcing yourself into a mold where you only ever put either the name of a prior or the name of a data variable on the left-hand side of a ~ statement, so that you've expressed your model in a directly generative way (i.e. where the data is the output of a special RNG described on the right of the ~), is stifling.

ad16_noise ~ gamma(3, 1/0.003);

cable_noise ~ normal(0, cable_noise_rms);

cable_noise_rms ~ unif(0,0.01);

conn_imp ~ unif(0,10);

// and so on until...

y_meas = round(((round(1024*A*f_pre(y_true) + epsilon_rf) + da_bias_noise) * 10
         * ((rperlen*Len + conn_imp) / (rperlen*Len + inpt_imped + conn_imp))
         + cable_noise) / 10 * 2^16) / 2^16 + ad16_noise
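As a forward simulation this chain is trivial to write down; it's only as a Stan sampling statement that it becomes awkward. A Python sketch of the generative direction, where every numeric value and the identity `f_pre` are invented stand-ins (the original post defines none of them), and only the gamma/uniform/normal priors quoted above are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in constants; none of these numbers come from the original post.
A, Len = 0.5, 10.0
rperlen, inpt_imped = 0.1, 50.0
f_pre = lambda y: y                      # placeholder transfer function

# Draws from the priors quoted above (numpy's gamma takes shape, *scale*).
ad16_noise = rng.gamma(3, 0.003)
cable_noise_rms = rng.uniform(0, 0.01)
cable_noise = rng.normal(0, cable_noise_rms)
conn_imp = rng.uniform(0, 10)
epsilon_rf = rng.normal(0, 0.5)          # stand-in RF dither
da_bias_noise = rng.normal(0, 0.5)       # stand-in DAC bias noise

def measure(y_true):
    """Forward simulation of the quoted measurement chain: coarse
    quantization, impedance divider, 16-bit re-quantization, ADC noise."""
    v = round(1024 * A * f_pre(y_true) + epsilon_rf) + da_bias_noise
    v = v * 10 * (rperlen * Len + conn_imp) / (rperlen * Len
                                               + inpt_imped + conn_imp)
    v = (v + cable_noise) / 10
    return round(v * 2**16) / 2**16 + ad16_noise

print(measure(0.3))
```

Running the chain forward is one line per physical step; the whole complaint above is that inverting it into a `y_meas ~ ...` statement has no natural expression.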