# The probability density vs a sequence of random numbers

How can we map back and forth between (nonstandard) density functions and random number sequences?

Let $x_1, x_2, \dots, x_L$ be a finite sequence of numbers of length $L$, with $L$ a nonstandard integer. Let $G$ be a grid of values on the real line with spacing $1/N$ spanning $[-N, N]$, with $N$ a nonstandard integer, so that there are $2N^2 + 1$ grid point values.

Let $L$ be nonstandard and vastly larger than the number of grid points ($L$ is ULTRA ULTRA ULTRA BIG), so that the ratio of $L$ to the number of grid points, roughly the number of values per grid point, is itself a nonstandard number.

Consider a "randomly chosen" permutation of the integers from $1$ to $L$, where we intuitively understand somehow how this is done (i.e. we pick one of the $L!$ permutations intuitively). Call this sequence $c$ (for choice). Then imagine choosing at random from the sequence $x$ by taking the indices $i = 1, 2, 3, \dots$ in order, calculating $c_i$, and choosing the corresponding $x_{c_i}$. Assert without proof that $x_{c_1}, x_{c_2}, \dots$ is an IID random number generator for some distribution when $c$ is chosen "well enough". What distribution?

For each value $x_{c_i}$, determine which grid cell it falls in, and increment a counter for that grid cell, until we get to the end of the sequence. Call this counter $n_j$ for the $j$th grid point $x_j$. Construct the nonstandard function $f(x_j) = n_j N / L$; this is the nonstandard probability density associated with $x$. When the standardization of this function exists, use it. Otherwise, perhaps you have something like a delta function, where all the values are infinitesimally close to a single value, or some such thing. Those are useful too, so we don't necessarily require a standard density. The fact that we include nonstandard densities means we are able to actually represent arbitrary probability measures, where the measure of an open set $A$ is $\mu(A) = \operatorname{st}\left(\sum_{x_j \in A} f(x_j) \frac{1}{N}\right)$.
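To make the forward direction concrete, here's a minimal finite sketch: ordinary integers stand in for the nonstandard grid resolution and sequence length (called `N` and `L` below, mirroring the construction), and the grid is truncated to $[-5, 5]$ to keep the arrays small. The counter per grid cell, scaled by the cell width, recovers an approximate density.

```python
import numpy as np

# Finite stand-ins for the nonstandard quantities in the construction:
# grid spacing is 1/N, and L is much larger than the number of grid cells.
N = 100
L = 1_000_000

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=L)   # any long sequence will do

# Count how many values land in each grid cell of width 1/N.
edges = np.arange(-5, 5 + 1/N, 1/N)
n_j, _ = np.histogram(x, bins=edges)

# Density analogue of f(x_j) = n_j * N / L: counts as a fraction of L,
# divided by the cell width 1/N.
f = n_j * N / L

# The measure of an interval is the sum of f * (1/N) over its cells, which
# here just recovers the fraction of samples inside; over the whole grid it
# should be close to 1, since nearly all samples lie in [-5, 5].
mass = f.sum() / N
print(round(mass, 3))
```

The names `N`, `L`, `n_j`, and `f` are just this sketch's finite analogues of the nonstandard objects above, not anything from a library.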

To reverse this process, start with the function $f$ and multiply it by $L/N$; then for each grid point $x_j$ append a string of $f(x_j) L / N$ values equal to $x_j$ to our sequence. Once we have the full sequence we can "sample from it" by choosing a permutation of $1, \dots, L$.
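The reverse direction can be sketched the same way: given a density on a grid, put roughly $f(x_j) L / N$ copies of each grid value into one long sequence (rounding to whole counts, since we are working with finite stand-ins), then "sample" by running through a random permutation of the indices. All names here are this sketch's own.

```python
import numpy as np

N = 100
L = 100_000

# A density on a grid of spacing 1/N: the standard normal, evaluated at
# the left edge of each cell.
grid = np.arange(-5, 5, 1/N)
f = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

# Number of copies of each grid value: f(x_j) * L / N, rounded.
counts = np.rint(f * L / N).astype(int)
seq = np.repeat(grid, counts)      # comes out already sorted (canonical)

# "Sample from it" by choosing a permutation of the indices.
rng = np.random.default_rng(1)
samples = seq[rng.permutation(len(seq))]

print(len(seq), round(samples.mean(), 2))
```

Note that because the grid is ascending, `seq` is automatically in the sorted, canonical order discussed next; the permutation is what turns it back into something that looks like an RNG stream.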

Now, let's canonicalize our sequences: namely, if we have a sequence we can sort it in ascending order. Now for every function $f$ we can find a canonical sequence of length $L$.
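The point of canonicalization is that two RNG streams which are permutations of one another carry the same density, and sorting maps both to the same representative. A tiny sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
seq = rng.normal(size=1000)        # some multiset of values

a = seq[rng.permutation(1000)]     # one "RNG sequence" for this multiset
b = seq[rng.permutation(1000)]     # a different one

assert not np.array_equal(a, b)                 # different as sequences
assert np.array_equal(np.sort(a), np.sort(b))   # same canonical sequence
```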

How many different RNG sequences of length $L$ are there for a given density function? There is one for each permutation, or $L!$ of them. How big is that? Remember Stirling's approximation: $L! \approx \sqrt{2\pi L}\,(L/e)^L$, which is already quite accurate at $L \approx 10$. So there are about $\sqrt{2\pi L}\,(L/e)^L$ different RNGs corresponding to the same distribution. We could perhaps require only permutations that leave no element in place, called the derangements; there are about $L!/e$ of those, so that'd put us at $\sqrt{2\pi L}\,(L/e)^L / e$.
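Both counts are easy to check numerically at small sizes: Stirling's approximation to $L!$, and the derangement count, which is the nearest integer to $L!/e$.

```python
import math

def stirling(L):
    """Stirling's approximation to L!."""
    return math.sqrt(2 * math.pi * L) * (L / math.e) ** L

for L in (5, 10, 20):
    exact = math.factorial(L)
    print(L, exact, f"{stirling(L) / exact:.4f}")   # ratio approaches 1

# Derangements via the recurrence D(n) = (n - 1) * (D(n-1) + D(n-2)),
# starting from D(0) = 1, D(1) = 0.
D = [1, 0]
for n in range(2, 11):
    D.append((n - 1) * (D[-1] + D[-2]))

print(D[10], round(math.factorial(10) / math.e))   # these agree: 1334961
```

Even at $L = 10$ the Stirling ratio is already about 0.99, and the derangement count $D(10) = 1334961$ matches $\operatorname{round}(10!/e)$ exactly.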

Are all of these really "random number generators" that will satisfy Per Martin-Löf? Probably not. But a big fraction of them will be. Imagine that an oracle tells us Martin-Löf's "most powerful test" for a binary random number generator. Take the subset of the permutations $c$ which cause the sequence $x_{c_i}$ to pass that test at a fixed significance level, where $x$ is the sequence in which the first half of the entries are 0 and the second half are 1. Constrain our choice of $c$ to be one of these permutations; it's still going to be a big big number of sequences (asserted without proof).

So, it's always possible to take a probability measure (a nonstandard density) and convert it into many different sequences of IID samples from that probability measure (a random number generator), and to go back and forth between the two representations.

Mathematically, this means in essence that "you can sample from any probability distribution". But this doesn't mean that placing a probability distribution over real-world data implies that you think the data *is* a sample from that distribution. As a Bayesian, you are free to reject that. The probability involved is *in your head* or at least *assumed for the purpose of calculation* (i.e. in the head of the Jaynesian Robot, or of MC Stan). As Jaynes says: "It is therefore highly illogical to speak of 'verifying' (3.8 [the Bernoulli urn equation]) by performing experiments with the urn; that would be like trying to verify a boy's love for his dog by performing experiments on the dog." - E.T. Jaynes, *Probability Theory*

Hi Daniel, I have a question very tangentially related to this post so I thought here's as good a place as any to post it. I'm essentially trying to understand how generation of random numbers from a given distribution interacts with machine arithmetic. My thoughts on it all are a little half-baked so I'll try not to say anything too silly, but will probably fail 🙂

Consider X ~ U(0,1), i.e. a real-valued RV uniformly distributed between 0 and 1. Reasonably obviously X is irrational almost surely. But if I ask for X* <- runif(1) in R, I'll get a machine number, i.e. one from a particular subset of the rationals. At the moment I think of this as representing the interval of real numbers for which this is the closest machine number: my 'actual' X is somewhere in that set (which has positive measure), and my (relative) error in using X* instead of X is both small and can be handled using the part of the theory of numerical analysis which is designed for exactly this. So what's the problem, you ask? Well, first of all, is this how other people think of it, and have they written about it?
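The intervals Kit describes can be made concrete with `math.ulp`: the gap between a machine number and its neighbour is (roughly) the width of the set of reals that round to it, and it shrinks as the number gets smaller. The widths are also asymmetric at powers of two. A quick illustration in Python rather than R:

```python
import math

# ulp(x) is the value of x's least significant bit, i.e. the gap to the
# next representable double; it shrinks with the magnitude of x, so the
# intervals of "reals represented by x" are not all the same width.
for x in (1.0, 0.5, 0.1, 1e-300):
    print(x, math.ulp(x))

# Asymmetry at a power of two: the spacing just below 1.0 is half the
# spacing at 1.0, so the interval of reals rounding to 1.0 is lopsided.
below = math.ulp(math.nextafter(1.0, 0.0))
above = math.ulp(1.0)
print(above / below)   # 2.0
```

So the intuition that 1.0 represents a lopsided interval like (1 - eps/2, 1 + eps] is essentially right, with the exact widths given by the ulp on each side.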

Secondly, pseudorandom number generators I'm familiar with produce integers, for simplicity say in the range 0 to 2^n - 1, uniformly (with the uniformity being one of the things we test). So if we then want a double-precision float between 0 and 1, we need to make sure that the bits we put in do represent a number in this range. Moreover, if my intuition about intervals above is correct, we no longer want to select uniformly, as smaller numbers will also have smaller intervals around them, e.g. 1 should represent the interval (1-eps/2,1) (I think...), whereas say, a denormalised number will represent a much narrower interval. So we have to 'invert' the widths of the intervals or something... And now, thinking about it, I'm wondering if the 'redundancy' of numbers that can be represented at lower precision helps cancel this out... or do we need to do another modification to ensure we're putting in the maximum precision version of that number... wow I'm really confused, sorry!
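One standard way this gets resolved in practice is to sidestep the uneven grid entirely: take exactly 53 random bits (the width of a double's significand) and divide by 2^53. Every output is then a multiple of 2^-53 in [0, 1), each hit with equal probability, so the varying interval widths of the full set of machine numbers never come into play. CPython's `random.random()` is built this way internally (53 bits from two 32-bit draws, divided by 2^53). A sketch:

```python
import random

def u01(randbits):
    """randbits(k) -> k uniform random bits as an int (the raw PRNG output).

    Returns a double uniform on the 2^53 evenly spaced values in [0, 1).
    """
    return randbits(53) * 2.0 ** -53

rng = random.Random(0)
xs = [u01(rng.getrandbits) for _ in range(100_000)]
print(min(xs) >= 0.0 and max(xs) < 1.0)   # True
```

The price is that outputs below 2^-53 other than 0 can never occur, so the extra precision available near zero (including denormals) goes unused; schemes that do use it have to weight the intervals as Kit suggests, but the simple 53-bit method is what most libraries actually ship.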

I actually don't really need an answer necessarily, I just can't even find any references that discuss something like this. Google has absolutely failed me :). I certainly haven't found it in literature on PRNGs, or machine arithmetic.

Even more half-baked-ly, given that we are working in finite precision, all our distributions are discrete, so, do we actually need/can we actually use measure theory? Could there be a useful theory of probability only on finite subsets of R that look like sets of machine numbers and specifically intended to give us computational tools? Or is it actually a really good idea to still think of continuous objects and deal with them like we do in numerical analysis?

Alright, brain dump complete; if you have any thoughts or references I'd be very appreciative!

Hi Kit, thanks for asking this interesting question. Let me think on it and I will post something in the next few days, possibly as a new blog post on this topic.

Thanks Daniel, looking forward to it.

I've found some more of your previous posts on NSA like this one, which are really helpful, though I'm still interested in the 'interval inversion' thing I mentioned above. I'm finding some stuff on constructive NSA which might be relevant too. Ahh, so much to learn...