# On the application of Distributions... an example

So, recently I was both asking questions on the stan-users mailing list and trying to be helpful with others who were asking questions too. A topic came up where no one was answering the question, and I thought I had a reasonable approach; it was about rounding in data reporting.

Now, my approach, which Bob Carpenter made more computationally efficient, involved creating parameters to represent the roundoff errors and putting uniform interval priors on them (the maximum entropy prior for a finite interval). Running the model then recovers both the parameters of the underlying normal and the individual roundoff errors. The logical way to code this turned out to be to place a distribution over a function of data and parameters:

```stan
data - rounding_errors ~ normal(mu, sigma);
```
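A minimal sketch of what such a model might look like as a full Stan program, assuming the data are rounded to the nearest integer so each roundoff error lies in [-0.5, 0.5] (variable names here are illustrative, not from the original thread):

```stan
data {
  int<lower=1> N;
  vector[N] y;  // rounded observations
}
parameters {
  real mu;
  real<lower=0> sigma;
  // declaring bounds gives the implicit uniform prior on each error
  vector<lower=-0.5, upper=0.5>[N] err;
}
model {
  // a distribution placed on a function of data and parameters
  y - err ~ normal(mu, sigma);
}
```

Sampling from this model yields posteriors for mu and sigma as well as for each individual roundoff error.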

This generated a lot of resistance on vague principles. Bob Carpenter later proved that the model was identical to the recommended model (marginally for mu and sigma, at least), which involved writing out a likelihood using the normal CDF.
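For comparison, the recommended version marginalizes the roundoff error out analytically: the likelihood of each rounded value is the normal probability mass over its rounding interval. A sketch, again assuming rounding to the nearest integer:

```stan
data {
  int<lower=1> N;
  vector[N] y;  // rounded observations
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  for (n in 1:N) {
    // log of P(y[n] | mu, sigma) = Phi(y[n] + 0.5) - Phi(y[n] - 0.5)
    target += log_diff_exp(normal_lcdf(y[n] + 0.5 | mu, sigma),
                           normal_lcdf(y[n] - 0.5 | mu, sigma));
  }
}
```

This version has no per-observation parameters, which is why it can be more efficient, but it no longer produces posteriors for the individual roundoff errors.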

The strong resistance to this approach made me wonder, am I doing something that has pitfalls, or have I just drunk different Kool-Aid than the "generative" modeling crew?

So, I'm going to put up an example model where this kind of thing makes very good sense to me, and "generative" models seem irritatingly obfuscated... and see what the results are.