# More on nonstandard random number generators

From information theory, suppose that there exists a bit stream of nonstandard length $N$ such that any sample of consecutive bits forming a substring of nonstandard length $n$ has Kolmogorov complexity at least $n - c$ bits in some simple restricted LISP-like language, where $c/n \approx 0$. From the theory, this is near-maximal complexity, since every finite string has a complexity no more than some constant larger than its length, and our strings are finite but nonstandard.
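Spelled out (in my notation, which the argument above doesn't fix: $K_L(s)$ for the complexity of a string $s$ in the fixed language $L$, and $B_i B_{i+1} \cdots B_{i+n-1}$ a substring of the stream), the assumption sits just below the generic upper bound:

$$n - c \;\le\; K_L(B_i B_{i+1} \cdots B_{i+n-1}) \;\le\; n + c', \qquad \frac{c}{n} \approx 0,$$

where $c'$ is the standard overhead of a program that simply prints its literal argument.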

Intuitively, we have a bit sequence which is very very long, and when we take very long samples of long bit strings, we can't compress those samples.

Such a bit stream is non-computable, and nonstandard. But does it exist? I don't know. By transfer, it would have to exist if for all standard values of $n$ there is a sequence whose prefix bitstring of length $n$ has complexity at least $n - c$ for some constant $c$. So suppose it does exist. We can make a random number generator from this sequence easily, in a manner similar to what is done in a computer.

The function $r(i)$ takes $n$ sequential bits $b_i, b_{i+1}, \ldots, b_{i+n-1}$ starting at location $i$ and forms the number

$$r(i) = \frac{1}{2^n}\sum_{j=0}^{n-1} b_{i+j}\,2^{\,n-1-j};$$

in other words, interpret the bits as an integer and divide by $2^n$. This number will be a nonstandard rational number whose binary expansion has $n$ bits (a nonstandard number of bits). The standard part of this number will in general be irrational. The values of $r$ all lie on a lattice whose lattice spacing is $2^{-n}$. The function $r$ is clearly a nonstandard function, but we can make it output standard values by taking the standard part of its output; then $\mathrm{st}(r(i))$ will be a function taking values in the standard interval $[0,1]$.
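A finite, standard-scale analogue of $r$ is easy to sketch; the Python below is purely illustrative (the genuine object has nonstandard $n$, so no computable function realizes it):

```python
def r(bits, i, n):
    """Interpret the n bits starting at location i as an integer
    and divide by 2**n, giving a rational on the lattice with
    spacing 2**-n inside [0, 1)."""
    chunk = bits[i:i + n]
    assert len(chunk) == n, "not enough bits left in the stream"
    value = int("".join(str(b) for b in chunk), 2)
    return value / 2 ** n

# e.g. the four bits 1,0,1,1 encode the integer 11, so r gives 11/16
```

Every output sits exactly on the lattice of spacing $2^{-n}$; only at nonstandard $n$ does taking standard parts produce generically irrational reals.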

Now, we'd like to show that the nonstandard function $r$ has the property of being "near-uniformly-distributed" on all micro-intervals of sufficiently long but infinitesimal length. So we need a reasonable definition of "near-uniformly-distributed". Suppose that we take a nonstandard sample of $K$ successive outputs of $r$ and we count the relative frequency $f$ with which the output lands in an infinitesimal interval $[a, a+dx)$. We require that this frequency be $f = dx\,(1+\epsilon)$ with $\epsilon \approx 0$. In other words, on any interval which includes a standard number of infinitesimal lattice points, we need to have near uniformity for large enough sample size $K$.
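Concretely, the counting in this definition looks like the following sketch (my own helper, which takes the $K$ successive outputs at offsets $i = 0, \ldots, K-1$; the stride between successive outputs is an assumption on my part):

```python
def empirical_frequency(bits, n, K, a, dx):
    """Relative frequency with which the outputs r(0), ..., r(K-1),
    each built from n consecutive bits, land in [a, a + dx)."""
    assert len(bits) >= K + n - 1, "bit stream too short"
    hits = 0
    for i in range(K):
        x = int("".join(str(b) for b in bits[i:i + n]), 2) / 2 ** n
        if a <= x < a + dx:
            hits += 1
    return hits / K
```

Near-uniformity asks that this return $dx\,(1+\epsilon)$ with $\epsilon \approx 0$ once $K$ is nonstandard; for a badly non-uniform bit source it fails, as the degenerate streams below show.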

Suppose that $\epsilon$ is positive and appreciable for some micro-interval $[a, a+dx)$. Then the output of the bit sequence produces numbers in that interval significantly more often than in other intervals of the same length. A compressor program for the original bitstring would be able to take advantage of this fact. In fact, we could output the bitstring in chunks of length $n$, followed by a 0 if we are going to output another chunk, or a 1 followed by the address within the micro-interval of the lattice point we're going to output. These addresses need to be of length $\log_2(dx\,2^n) = n + \log_2(dx)$ bits, and since $\log_2(dx)$ is negative and large in magnitude, we save a lot of bits by knowing the micro-region we're going to output.
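A sketch of the bit accounting for that scheme (my own illustration, not taken from the argument above; I put the flag bit first in both branches so the stream stays decodable, which changes nothing in the count):

```python
from math import ceil, log2

def encoded_length(chunks, n, lo, hi):
    """Bits used to transmit each n-bit chunk value v (an integer):
    if v / 2**n falls in the micro-interval [lo, hi), send a 1 flag
    plus an offset of ceil(log2((hi - lo) * 2**n)) bits; otherwise
    send a 0 flag plus the n raw bits."""
    addr = ceil(log2((hi - lo) * 2 ** n))  # roughly n + log2(dx) bits
    total = 0
    for v in chunks:
        if lo <= v / 2 ** n < hi:
            total += 1 + addr
        else:
            total += 1 + n
    return total
```

With $n = 16$ and a micro-interval of width $2^{-8}$, the address costs only 8 bits, so every chunk that lands inside the interval saves 8 of the 17 bits it would otherwise take.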

For chunks of length $n$ we could re-encode a stream of $K$ chunks as approximately $K(1+n) - fK\log_2(1/dx)$ bits using this method, where $f$ is the relative frequency of hits to the micro-interval. As a fraction of the original $Kn$ bits that's roughly $f\log_2(1/dx)/n$. But $f \approx dx(1+\epsilon)$ is infinitesimal, and so we haven't saved much unless $f/dx$ is nonstandard. This shows that if we have a point-mass component to the distribution then we can compress the original sequence, contradicting the near-maximal complexity we assumed. So we've shown that $\mathrm{st}(r)$ has a standard density on $[0,1]$.
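The arithmetic behind that estimate, spelled out with $n$ the chunk length, $K$ the number of chunks, $dx$ the micro-interval width, and $f$ the fraction of chunks landing in it (each such chunk costs $1 + n + \log_2 dx$ bits instead of $1 + n$):

$$\text{encoded bits} \approx K(1+n) + fK\log_2(dx), \qquad \frac{\text{bits saved}}{Kn} \approx \frac{f\,\log_2(1/dx)}{n},$$

up to the one-flag-per-chunk overhead, a fraction $1/n \approx 0$. With $f \approx dx$ the saved fraction is of order $dx\log_2(1/dx)$, which is infinitesimal whenever $dx$ is; an appreciable saving needs $f/dx$ nonstandard, i.e. a point-mass-like concentration.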

It'd be nice to take it further and show that this standard density is very close to uniform; perhaps I can do that, or perhaps not. I'll make an effort next time.