On IPv6 only networks and FireTV Stick

2017 October 17
by Daniel Lakeland

The FireTV stick from Amazon (I have, I think, version 2) is a useful bit of kit. However, while it can use IPv6, as of today (FireOS 5.2.6.0, October 2017) it seems it must also have an IPv4 connection in order to be happy.

However, even if you give it dual-stack, it can still flake out: it looks like it's connected for a few seconds, and then it shows "home is unavailable" and asks you to check your network settings.

If you go to settings -> about -> network you will see it has an IPv4 address and everything looks fine! So what's up? Look closer and you'll see that the DNS address it shows is IPv6.

The key is that IPv6 can advertise an IPv6 DNS server, and if you aren't also advertising an IPv4 DNS server via DHCP, you'll be up a creek.

Using dnsmasq on my network, what fixed this for me was:

dhcp-option=option:dns-server,10.x.x.x

where 10.x.x.x is the address of my server/router running dnsmasq.

If this is there, even if you have an additional:

dhcp-option=option6:dns-server,......

where ...... is the IPv6 address of your dnsmasq server, the FireTV will seem to be happy. This seems to be because it overrides its DNS settings if it hears from dnsmasq who the DNS servers are, and if it ends up with no IPv4 DNS server at all, it just refuses to be happy about its network connection...
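
Putting it together, a minimal sketch of the relevant dnsmasq.conf lines (the 10.0.0.1 and fd00::1 addresses below are hypothetical placeholders; substitute the actual IPv4 and IPv6 addresses of your dnsmasq host):

# advertise an IPv4 DNS server over DHCPv4 -- this is the part the FireTV needs
dhcp-option=option:dns-server,10.0.0.1
# optionally also advertise an IPv6 DNS server over DHCPv6
dhcp-option=option6:dns-server,[fd00::1]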

Hope that helps someone.

 

Followup on Implicit Function Theorem / Likelihoods

2017 September 25
by Daniel Lakeland

I think it's important to understand conceptually what is going on in these cases where we have an implicit relationship that data and parameters are supposed to follow, and to know when it is that we need to do some kind of Jacobian corrections.

A Jacobian correction is required when you have a GIVEN probability distribution on space A and you have a transformation from A to B, call it B=F(A) and you want to express a probability distribution on the B space which is *equivalent in every way* to the GIVEN distribution on space A. The distribution on B is called the *push forward* distribution on B. The mnemonic here is that if you have a small neighborhood in A and you "push it forward" through the F function into the B space, it produces a small neighborhood in B, and if you want this to be equivalent in every way, then the measure of the neighborhood on A is going to be forced to be equal to the measure of the pushed-forward neighborhood in B.

GIVEN: A ~ DistA(values)

GIVEN: B = F(A)

DERIVE: B ~ DistB(values)

This process requires using DistA, the inverse transform Finv(B) and a Jacobian correction.

Compare this to:

UNKNOWN: A ~ UnknownDistro(Avalues)

GIVEN: B = F(A)

GIVEN: B ~ GivenDistB(Bvalues)

Here we don't know what measure we have on the A space, but we know (by a modeling assumption) what measure we have on the B space. If F is an invertible function, then this situation is entirely symmetric with the one above; it's just that *the space in which the distributional information is given* is different.

Now, let's add some semantics to all of this.

In the above problem let A be a space in which your data measurements live. Let DistA(values) then be p(A | values), a likelihood factor in your model. Here, you know what the distribution is on your data. So, just use it. But if you insist on transforming your data to some other space, like, say, taking the log of your data, then in order to leave your model unchanged by the fact that you insist on taking the log, you will have to find a DistB which is the push-forward measure of DistA through the transformation B=log(A).
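
Here's a minimal numerical sketch of that push-forward computation, with an assumed DistA of Exponential(1) and B = log(A) (the distribution, the neighborhood, and the names here are my illustrative choices, not anything prescribed by the argument):

import numpy as np
from scipy.integrate import quad

def dist_A(a):                 # GIVEN density on the data space A: Exponential(1)
    return np.exp(-a)

def Finv(b):                   # inverse of the transform B = F(A) = log(A)
    return np.exp(b)

def dist_B(b):                 # DERIVED push-forward density on B
    # p_B(b) = p_A(Finv(b)) * |dFinv/db| -- the Jacobian correction
    return dist_A(Finv(b)) * np.abs(np.exp(b))

# sanity check: the measure of a small neighborhood is preserved
rng = np.random.default_rng(0)
a_samples = rng.exponential(1.0, size=200_000)
b_samples = np.log(a_samples)
lo, hi = -0.5, -0.4            # a small neighborhood in B space
empirical = np.mean((b_samples > lo) & (b_samples < hi))
analytic, _ = quad(dist_B, lo, hi)
print(empirical, analytic)     # these should agree closely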

Now, suppose you don't know what likelihood to give for your data values, but you know that if you calculate some complicated function B = F(A) you would be willing to model the results, in the B space, as having a distribution p(B|Parameters) = DistB(Parameters).

Now, if you want to know what measure this implies in the data space, you will have to do the whole change of variables rigamarole with Jacobians. The important thing to understand is *what is given vs what is derived*.

Now, let's imagine a situation where you have a non-separable relationship between various data and parameters, which is constant plus error, a typical situation where the implicit function theorem applies. Here x,y are data and a,b,c are parameters in your model, and we'll assume F is a "nice" function of the kind you're likely to write down as part of a modeling exercise, not something really weird that is nowhere differentiable in any of its inputs or the like. Our model says that there is a relationship between x,y,a,b,c which is a constant plus noise. This relationship will be written:

F(x,y,a,b,c) = 0 + \epsilon

And let's say \epsilon \sim De(C) has GIVEN distribution De(C) where C are some constants (easiest case).

Now suppose that a,b,c have given values, and x,y are measured. Then the quantity on the left of this equation is a number, F(x,y,a,b,c)=3.310 for example. And so 3.310 = \epsilon is data, derived data to be sure, but data nonetheless; for a given a,b,c and measured x,y there is no uncertainty left, it's just a number. By MODELING ASSUMPTION, the probability that this \epsilon would be calculated to be within d\epsilon of 3.310, if the true values of a,b,c were the ones given by the sampler, is De(3.310|C)d\epsilon where De is a given function.

And so the distribution De(C) is of the form p(\epsilon | a,b,c); it is a *given* likelihood in "epsilon space". Note that x,y are needed to get \epsilon, but they are known data values; throughout the sampling process they stay constant. So this is really a function L(a,b,c) where a,b,c are the only things that change while you're sampling. Given the data x,y, the post-data distribution on a,b,c is

L(a,b,c) prior(a,b,c)/Z da db dc

Where Z is a normalization factor Z = \int L(a,b,c) prior(a,b,c)da db dc
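
As a concrete sketch, here is what this "likelihood in epsilon space" looks like computationally, with a made-up F, made-up data, De taken to be Normal(0,1), and independent normal priors (all of these specific choices are illustrative assumptions, not part of the argument above):

import numpy as np
from scipy.stats import norm

x = np.array([1.2, 0.7, 2.3])           # measured data (made up)
y = np.array([0.9, 1.1, 1.8])

def F(x, y, a, b, c):                   # the implicit relationship; any "nice" F would do
    return y - a * np.exp(-b * x) - c

def log_L(a, b, c):
    # once x, y, a, b, c are fixed, eps is just a number; the GIVEN distribution De scores it
    eps = F(x, y, a, b, c)
    return np.sum(norm.logpdf(eps, 0.0, 1.0))

def log_prior(a, b, c):
    return norm.logpdf(a, 0, 5) + norm.logpdf(b, 0, 5) + norm.logpdf(c, 0, 5)

def log_post_unnorm(a, b, c):           # log of L(a,b,c) prior(a,b,c), up to the constant Z
    return log_L(a, b, c) + log_prior(a, b, c)

print(log_post_unnorm(1.0, 0.5, 0.2))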

Now, if you have this given likelihood in epsilon space, and you want to see what the equivalent likelihood is over say y space where we think of y as data we'd like to predict, and x as covariates, and a,b,c as parameter values:

p(y | a,b,c) dy = p(\epsilon(x,y) | a,b,c) \left|\frac{d\epsilon(y)}{dy}\right| dy

Under the assumption that F is sufficiently well behaved that the implicit function theorem gives us a unique differentiable transform from y to epsilon for given x,a,b,c. And d\epsilon(y)/dy is the "Jacobian Correction". Now divide both sides by dy and we have our answer for the density of y (I'm using nonstandard analysis, dy is an infinitesimal number).

The point is, the likelihood is strictly implied to be the push-forward measure of the GIVEN distribution over \epsilon. But the truth is, we don't know the transformation y = f(x,a,b,c,\epsilon) or its inverse. The typical way we'd do predictions would be to set \epsilon_n to be a parameter with the epsilon distribution, sample, and then take the \epsilon_n values and use an iterative numerical solver to get y values. And so now we have a computational criterion for deciding whether F is sufficiently nice: it produces a unique answer (you might be able to extend this to a countable number of possible alternative answers) under iterative numerical solution for y from a given x,a,b,c,\epsilon_n.
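
Here is a small sketch of that prediction recipe (draw \epsilon_n, then solve F = \epsilon_n for y with an iterative solver), reusing the same made-up F as in the sketch above; the parameter values, covariate, and bracketing interval are arbitrary illustrative assumptions:

import numpy as np
from scipy.optimize import brentq

def F(x, y, a, b, c):
    return y - a * np.exp(-b * x) - c

a, b, c = 1.0, 0.5, 0.2                 # pretend these came from the sampler
x_new = 1.5                             # covariate we want a prediction for
rng = np.random.default_rng(1)

def predict_y(x, a, b, c):
    eps = rng.normal(0.0, 1.0)          # epsilon_n drawn from its distribution De
    # root-find the y with F(x, y, a, b, c) = eps; for a "nice" F the implicit
    # function theorem says a (locally) unique solution exists
    return brentq(lambda y: F(x, y, a, b, c) - eps, -50.0, 50.0)

y_pred = np.array([predict_y(x_new, a, b, c) for _ in range(1000)])
print(y_pred.mean(), y_pred.std())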

 

The Implicit Function Theorem and Likelihood Functions

2017 September 22
by Daniel Lakeland

In Bayesian statistical modeling we often use the symbol ~ which denotes a kind of "statistically equal to". Consider the following:

y = ax+b+\epsilon

If \epsilon = 0 then this is an equation of a line, whereas if we say \epsilon \sim N(0,1) for example then this denotes a line with errors that have a certain range of credible sizes and are centered around 0. Well this statement about the distribution of \epsilon doesn't alter the algebraic properties of the symbolic expression y=ax+b+\epsilon and so that equality still respects all the usual algebraic rules.

y-ax-b = \epsilon

Is true, and so y-ax-b \sim N(0,1) is true by substitution of \epsilon.

In general you might have a fairly complicated relationship, something like

F(x,y,a,b) = \epsilon

With F a nonlinear non-separable relationship between the quantities, for example

y^2 -\frac{x}{y}\mathrm{atan}(ay)+\frac{b}{a} = \epsilon

Or something equally nasty from the perspective of trying to, say, "solve for y". We can suppose that y is our data, x a covariate, and a,b parameters. What do we make of this relationship in a Bayesian model and how do we use it?

In Stan, if you create a transformed parameter F = y^2-x/y*atan(a*y)+b/a and then say

F ~ normal(0,1)

You will get a message about how the left-hand side of this sampling statement contains a transform of a parameter, and if it's nonlinear you need to include the Jacobian of the transformation in the target. This warning message is designed to alert you to something you need to do when you re-parameterize. But a re-parameterization is a purely formal transformation: it doesn't alter the meaning of your model, it alters the way in which the model is expressed. For example if you have y = ax+b and you change this to y/a = x + b/a and then rename y/a = y' and b/a = b' and say y' = x + b', this is a formal transformation that doesn't alter the meaning of the equation (provided a is not 0). On the other hand, if you change to y = ax^2 + b then you're changing your model.

The statement F ~ normal(0,1) above is not a formal transformation, it is in fact a statement about your data, a kind of likelihood, it's just an implicit statement about your data.

Although we can't necessarily solve our equation for y symbolically, there is a theorem called the implicit function theorem which enables us to say that as long as our relationship F(x,y,a,b) is sufficiently well behaved, then in some region around any given point (x',y',a',b') satisfying the relationship, there exists a function y = f(x,a,b) even if we don't know how to express it. For example, when the distribution for a is well separated from 0, we won't be dividing by a=0 and so our expression F is well behaved. And so, our statement

F ~ normal(0,1) is really a statement about

y-f(x,a,b) = \epsilon

And could be re-expressed as

y \sim N(f(x,a,b),1)

Which for y a data value is obviously the usual kind of likelihood expression. The problem is, although this f function exists, that doesn't mean we know what it is. We do, however, know the relationship F(x,y,a,b) and so why not do

F(x,y,a,b) \sim N(0,1)

Which has exactly the same meaning.
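
To make the implicit function theorem claim concrete, here is a tiny numerical sketch using the nasty example relationship above: we can't solve for y symbolically, but for given x,a,b a root finder recovers y = f(x,a,b). The particular values and the bracketing interval are arbitrary illustrative choices:

import numpy as np
from scipy.optimize import brentq

def F(x, y, a, b):
    return y**2 - (x / y) * np.arctan(a * y) + b / a

def f(x, a, b):
    # the implicit function y = f(x, a, b): the positive y solving F(x, y, a, b) = 0
    return brentq(lambda y: F(x, y, a, b), 1e-3, 10.0)

x, a, b = 2.0, 1.0, 0.1
y_star = f(x, a, b)
print(y_star, F(x, y_star, a, b))       # F evaluates to ~0 at the recovered y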

Note that we have decided, to the best of our knowledge, to model \epsilon \sim N(0,1). This is a modeling choice, and it is subject to questions regarding whether it expresses valid facts about the world more than it is subject to questions about mathematical correctness. This fact isn't derived mathematically from anything; it's assumed, and so it should be questioned primarily on modeling grounds more than anything else. There can be mathematical facts that are relevant I suppose, but the main question is "is this a good model" not "did you derive the N(0,1) correctly" since it isn't derived.

All of this is another way to think about what I called "declarative models" a while back when I first started thinking about this topic.

 

On the lack of Lebesgue Measure on countably infinite dimensional spaces, and Nonstandard Analysis

2017 September 18
by Daniel Lakeland

Consider the interval [0,1]; it has length 1. The generalized notion of length in the reals is Lebesgue measure: whenever you have something like a closed interval, so that there's a trivial notion of length for the set, the Lebesgue measure is the same as the length.

Now consider the 2D plane: the square [0,1] \times [0,1] consists of all the points (x,y) where x is in [0,1] and so is y. What is the area? It's 1. This continues to work for integer dimensions 3, 4, etc. What's the volume of the hypercube [0,1]^N for N some large integer like 3105? Again, it's 1^{3105} = 1.

But now let's see what happens when we consider intervals of the form [0,0.5]: the length is 0.5, and for high dimensions N the hyper-volume of the hyper-cube is 0.5^N, which goes to zero as N gets big. Similarly for intervals [0,2] the hyper-volume is 2^N, which goes to infinity as N gets big.

Intuitively this is why we don't have (standard) Lebesgue measure on the infinite dimensional space. An infinitesimal interval dx is small, but when you calculate dx^N for N nonstandard, the hyper-volume is REALLY small. Similarly for intervals of slightly larger than side 1, the hyper-volume is infinite.

On the other hand, consider the hyper-cube [0,1.1]^N for N a nonstandard integer. Sure, the hyper-volume 1.1^N is nonstandard. But it's a perfectly fine nonstandard number. If this calculation is an intermediate calculation in a series of calculations that eventually leads you to prove some property, there is nothing that keeps you from carrying it out. For example, suppose you want to show that one set is much smaller than another, and the ratio of sizes is r = 1.1^N/1.2^N for N nonstandard. This ratio is clearly infinitesimal, as 1.1/1.2 \approx 0.916667 is a fraction less than 1 and it's raised to a nonstandard power.

But if you have some other infinitesimal ratio and you want to discern how big the two are relative to each other, for example how big is 0.995^{N-K} relative to (1.1/1.2)^N, you can do so easily and algebraically: 0.995^{N-K}/(1.1/1.2)^N = (0.995 \times 1.2/1.1)^{N-K} \times (1.1/1.2)^{-K} \approx 1.0855^{N-K} \times (1.1/1.2)^{-K}.

When N and K are nonstandard, you rapidly get either an unlimited or an infinitesimal result. But if you prove that this is true for all N,K and then need to later consider the finite standard case say N=1331512 and K=89331 then you have the formula available to you, and you can get a perfectly fine standard value. This is useful if you're doing something like considering a function of space evaluated at a set of points and you don't know ahead of time exactly how many points. For example each point might be the location of competing insects, and you're working out a PDE to approximate how these insect populations change in time. The insects come at discrete locations, but the particulars of how many and which locations are not known ahead of time. You can develop a continuous model, in which you have a smooth function of space, and then you've got an "infinite dimensional" model, but the truth is your infinite dimensional model is just a device for calculations approximating a finite but "large N" number of points. It's not helpful to say that "there is no Lebesgue measure on infinite dimensional space" because the property "there is Lebesgue measure on space of finite dimensions N for all integer N" is the property you care about. In your model you would only actually ever care about say N = a few million to billion. So developing a nonstandard expression makes more sense to the modeler, even though it makes no sense to the pure mathematician trained in classical analysis.

 

On finite vs countable additivity and Cox/Jaynes/DeFinetti etc

2017 September 12
by Daniel Lakeland

There is some concern in the back of people's minds about Cox/Jaynes Bayes and things like continuous parameter spaces, or infinite dimensional models and so forth. It comes down to questions about Finite Additivity. About the only good thing for me that came out of Dan Simpson posting on Andrew's blog about the "Gay Face" neural net baloney that came out this week is that he indirectly pointed me to a guest post on Xian's blog from a few years back. (To be clear, it's the gay face "research" baloney, not Dan's post, that I object to.)

So, here I want to try to organize some thoughts I have on finite additivity type stuff. I've mentioned in the past why I don't think this is actually an issue for science.

As regular blog readers know, I'm a fan of the IST form of Nonstandard Analysis. I think it connects models to formalism in a way that is transparent, and in a way that "typical" measure theory type math doesn't.

So, let's try to define some probability foundations in a Nonstandard Analysis setting (warning, I will probably do a poor job of this compared to professionals but if I screw it up I am pretty confident it will be both detectable and fixable by a more serious formalist)

First, let's work with a simple continuous valued random variable X.

Define a grid of points x_i = -N + i\,dx where dx = 1/N^2, i \in \{0,1,\ldots,2N^3\}, and N is a nonstandard integer (so the grid spans [-N, N] with infinitesimal spacing).

Now define a function p(x_i) which is non-negative and \sum_i p(x_i) dx = 1

Suppose the standardization p^*(x) exists as a standard function; then p^*(x) is a standard probability density function. The proof is simple: it's non-negative, it's integrable, and its integral is 1. Every standard integrable function that is non-negative and has integral 1 is a probability density, by definition of a probability density.

Now, suppose instead that p(x) is a nonstandard function, that is, it takes on nonstandard values in such a way that p(x) dx is appreciable for some x values, but suppose this only occurs for values infinitesimally close to some standard values x_j (ie. the "delta functions" are at standard locations). Then there is no standardization of this function. However, we can define a probability measure on X such that if s is a standard set of points in X

\mu(s) = \mathrm{st}(\sum_{i : x_i \in s} p(x_i)dx)

It's trivial to show that this \mu is a countably additive probability measure. First off, we're adding up non-negative values, and the sum of all the nonstandard values equals 1, so the outcome is always in [0,1]. If s1 and s2 are disjoint standard sets then \mu(s1) + \mu(s2) = \mu(s1 \cup s2), by the property of the sum operation that defines \mu in terms of p(x_i). This is true for all unions of K disjoint standard sets for K any standard integer. This is true because the K standard sets are disjoint, and therefore they partition the sum; we don't double-count any of the nonstandard grid points, for example. By Transfer this is true for all K, including nonstandard K. To see that this implies countable additivity we use proof by induction. In standard mathematics, we have a sequence of subsets \{s_i\} whose full union is the full set S; our equality is true for every K prefix of the sequence of subsets by our partitions-the-nonstandard-sum argument, and hence by induction is true for the whole sequence of subsets.

So, finite additivity may be problematic as a foundation for Bayesian statistics on continuous parameter spaces, but nonstandard-finite additivity is sufficient to give measure theory, and to boot, in the nonstandard case, every standard measure has a nonstandard density.

Now, suppose we're dealing with infinite dimensional spaces, like the sample paths of gaussian processes. There may not be any standard Lebesgue measure on infinite dimensions, but there is a perfectly fine Lebesgue measure on every finite dimensional space of standard positive integer dimension D, and therefore there is a Lebesgue measure on every nonstandard positive integer dimension D as well, by Transfer. Let (x,f(x)) be the graph of a sample path on a nonstandard grid of nonstandard dimension D in x and in each f. Let f have a nonstandard density defined by the nonstandardly-discretized multivariate normal distribution. Then the standardization of some realization f(x) is a gaussian process sample path (because every finite standard set of x values has f(x) values with a multivariate gaussian distribution, which is the definition of a gaussian process).

The problem with standard measure theory is that when you build your formalism on taking limits, things change from one type of thing to another under certain limits. For example, the normal(0,s) density for s an infinitesimal is a perfectly fine nonstandard density function, but in standard mathematics it is a "delta function measure" which is to say a measure over sets, not a function f(x).

 

Statics methods for MC ensembles

2017 September 5
by Daniel Lakeland

Recently I've been working on problems for which Stan struggles. The typical situation is that I have a large model with many parameters, usually most of the parameters represent some measurement for an individual item. For example a Public Use Microdata Area in the Census, or the behavior of some individual who has been surveyed, or the expression level of some gene, or what have you. And then there are a small number of parameters that describe the group-level properties of these individual measurements taken as an ensemble (a hierarchical model).

The symptoms of this struggling are that Stan adapts its step size to something very small, uses long trajectories, maxes out its treedepth, takes many iterations to get into the typical set, and then generally takes a long time to sample, and probably has divergences anyway. Often I kill the sampler before it even really adapts because I see the stepsize has crashed down to 1e-7 or the like and each iteration is taking tens of minutes to an hour.

Now, when we have better diagnostics for divergences, perhaps this kind of problem will be remedied somewhat. But in the meantime, solving dynamics problems is known to be hard, typically a lot harder than solving statics problems. For example, if you want to put electrons on the surface of a sphere in a configuration that minimizes total potential energy, you can do so a lot more easily computationally than you can solve the dynamics of those electrons if you give each one of them a velocity constrained to the sphere... Solving the dynamics problem requires you to take small timesteps and move all the electrons along complicated curves. Solving the statics problem just requires you to simultaneously perturb all the positions in an energy-minimizing way.

So, one thought I had was to solve an optimization problem on ensembles.

Here's one way to think about it: suppose p(x) is a probability density on a high dimensional space x. We want to create a sample of say N points in the X space, let N = 100 or so for example.

One thing we know is that if \{x_i\} is any set of points in the X space, then for K a nonstandard number of steps, \{x_i'\} = \mathrm{RWMH}(x_i,K) is an ensemble in equilibrium with this p(x) distribution, where RWMH(x_i,K) means K random walk Metropolis-Hastings updates from each starting point x_i.

Furthermore, once in equilibrium, it stays in equilibrium even for K=1. Of course for K=1 it is possible that each point is rejected in the MCMC update step, and so the ensemble is also unchanged. But as both the size of the ensemble and K increase, this non-movement becomes dramatically improbable. In particular, we can choose something like an isotropic gaussian kernel with a standard deviation small enough that we get at least something like a 10% chance of acceptance at each point, and with 100 points, the probability that the ensemble is entirely unmoved after 1 update per point is at most 0.9^100 ≈ 2.7e-5. Of course, the distance moved may not be very large, but we will have some movement of at least some of the points. If we make K = 10 or so, we'll get some movement of most of the points.
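
Here is a minimal sketch of such an ensemble of RWMH updates in Python (my own toy implementation, not the author's Julia code; the target density, ensemble size, and step size are illustrative choices):

import numpy as np

def rwmh(x, log_p, K, step, rng):
    # K random-walk Metropolis-Hastings updates of a single point x
    lp = log_p(x)
    for _ in range(K):
        prop = x + step * rng.standard_normal(x.shape)   # isotropic gaussian proposal
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:         # MH accept/reject
            x, lp = prop, lp_prop
    return x

def rwmh_ensemble(xs, log_p, K, step, rng):
    # apply RWMH(x_i, K) independently to every member of the ensemble {x_i}
    return np.array([rwmh(x, log_p, K, step, rng) for x in xs])

# example: a 100-point ensemble targeting a standard normal in 10 dimensions
log_p = lambda x: -0.5 * np.sum(x**2)
rng = np.random.default_rng(0)
ensemble = rng.standard_normal((100, 10))
ensemble = rwmh_ensemble(ensemble, log_p, K=10, step=0.3, rng=rng)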

However, we want to guarantee that we do get large movement relative to our usually terrible starting point. One way to get movement is to just continue moving each particle along a line:

For each particle:

x(t) = x(0) + t (RWMH(x(0),K) - x(0))

When mapped across the ensemble, this can be thought of as transport of the ensemble in the direction of an acceptable random perturbation (acceptable because the RWMH accepted that update).

How far should we allow this transport to go? Obviously if t goes to infinity, some of the particles go to infinite distances and since our probability density is normalizable this means some of the particles go well outside the high probability region.

One particular expectation that is of interest is the ensemble entropy:

\frac{1}{N} \sum_{i=1}^N -\log(p(x_i))

This has the advantage that the function f(x) = -\log(p(x)) has domain equal to the domain of the density, and is therefore sensitive to all the regions of space. Furthermore, because p(x) goes to zero outside some ball, the function -\log(p(x)) eventually becomes large and positive, and so as t increases the entropy of the ensemble eventually increases.

The result is that if the ensemble is out of equilibrium, then initially, as a function of increasing t, the ensemble entropy should first decrease (because the RWMH accepts higher probability transitions and many of the transitions will be towards higher probability), then after a while it should become somewhat constant, and then as t increases further it has to increase again. The region of space near the entropy minimum is what we want. Yet, we don't want to just send all the points to the mode... that would be the lowest entropy ensemble possible.

This whole concept is related to concentration of measure. The entropy of an ensemble of 100 randomly chosen iid points is a function of 100 random variables and is therefore more or less constant under different realizations. The region of constant entropy is called the typical set and contains essentially 100% of the probability mass.

Now, suppose that \{x_i\} starts in equilibrium. Then, as t increases, ensemble entropy may decrease or increase initially, but the change will not be dramatic, and as t increases through time eventually entropy increases again. Since p(x) is typically smooth, the ensemble entropy as a function of t is also smooth. If it comes to a minimum somewhere, then in the vicinity of that minimum it acts like a parabola. How far should we go? Suppose that the ensemble entropy has a central limit type theorem such that when the ensemble size is large enough the ensemble entropy is distributed according to Normal(e,s(N)) for e the actual entropy of the distribution, and s some decreasing function of ensemble size N. This manifests as the trajectory having a parabolic minimum entropy. If we sample t with weights according to \exp(-e(x(t))) where e is the ensemble entropy then we should come into equilibrium (intuitively, I have no idea if this works in reality). Note that initially we are going to always go forward in time, but at equilibrium we need to project t both forward and backward.
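
Below is a rough sketch of the proposed move (the post itself flags this as speculative, and so is this code): transport the ensemble along the accepted RWMH directions, evaluate the ensemble entropy along the line, and sample t with weight exp(-e(x(t))). It reuses rwmh_ensemble, ensemble, log_p and rng from the sketch above; the grid of t values is an arbitrary choice, and for simplicity it only projects forward in t:

def ensemble_entropy(xs, log_p):
    # Monte Carlo estimate of the ensemble entropy, (1/N) sum_i -log p(x_i)
    return np.mean([-log_p(x) for x in xs])

def transport_move(xs, log_p, K, step, rng):
    # direction of an acceptable random perturbation for each ensemble member
    direction = rwmh_ensemble(xs, log_p, K, step, rng) - xs
    # ensemble entropy along the line x(t) = x(0) + t * direction
    t_grid = np.linspace(0.0, 5.0, 51)
    e = np.array([ensemble_entropy(xs + t * direction, log_p) for t in t_grid])
    w = np.exp(-(e - e.min()))              # weights exp(-e(x(t))), shifted for numerical stability
    t = rng.choice(t_grid, p=w / w.sum())   # sample t according to those weights
    return xs + t * direction

ensemble = transport_move(ensemble, log_p, K=10, step=0.3, rng=rng)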

Some questions: why doesn't this collapse to finding the mode?

The reason is that at equilibrium each individual random walk metropolis step goes either up or down in the distribution equally, so some points are heading into the tail, and others into the core. If we get close to the mode, the RWMH has more ways to go away from the mode and so we will get directions that mostly point away from it. If we get too far from the mode, RWMH will tend to ratchet upwards and give us a direction back into the typical set.

Is this any better than random walk?

I don't know. Intuitively, we're using a random walk to get a random perturbation direction that at equilibrium leaves the ensemble in equilibrium. So in the direction of this perturbation, we're moving along the typical set manifold. We do so in a way that more or less equally weights all the points we visited which were in the typical set (i.e. if the entropy is constant along our curve, then we go to any of those points with equal probability). It seems likely that this devolves to a random walk when the typical set is highly curved, and works more like HMC when the typical set is elongated. Now, Stan detects when there is a lot of curvature and drops the step size, but it then keeps the same step size for all the trajectories, so the computation required is based on the worst case. Perhaps this other technique for following the typical set as an ensemble is more robust, allowing a larger step size when it's warranted? Also, it doesn't require calculating gradients. In some sense, the RWMH step finds us a direction which is pointed along the typical set, and so has zero ensemble entropy gradient on average.

The nice thing is, I have been working in Julia recently, so I think I have most of the requirements in place to test this on some example problems. I'll report back.

 

On Morality of Real World Decisions, and Frequentist Principles

2017 June 27
by Daniel Lakeland

If you want to make decisions, such as choosing a particular point estimate of a parameter, or deciding whether to give a drug, or whatever, and you want your decision making rule to have the *Frequency related* (Frequentist) property that under repeated application it will on average have small "badness" (or larger "goodness"), then you should look for your procedure within the class of procedures mathematically proven to have unbeatable frequency-of-badness properties. This class is the class of Bayesian decision rules (see Wald's complete class theorem and the Wiki). The boundary of the class is the Bayesian decision rules with flat priors, but we know more: we know that the frequency properties of Rule A will be better than those of Rule B whenever a region around the real parameter value is higher in the prior probability distribution under A than under B. Then we are giving more weight to the actual correct value, and so our decision is based more on what will actually turn out to happen.

Unfortunately we don't know the correct value exactly, but in the case where we use flat priors, they are equivalent to a prior that places essentially 100% probability on the value being infinitely large.

Now, if you, as a person who cares about the frequency properties of procedures, agree that your parameter *is* a real number, and you have any idea at all what the magnitude of that number is, say its logarithm rounded to the nearest integer is N, then by choosing a proper prior such as normal(0,10^(N+100)) you will have lower Frequency risk of making "bad" decisions than if you used a flat prior. Of course, you can do better: normal(0,10^(N+1)) will do better...

Now, decision making and morality are inevitably intertwined. Consider the existence of the "trolley problem" in moral philosophy: it's all about making choices, each of which has bad consequences, but we have to make a choice, including the choice of "do nothing", which also has bad consequences. On the other hand, if you have no choice, there is no morality associated with your action. Getting hit by a drunk driver who crashes through the wall of your bedroom while you're sleeping is not a moral failing on your part, for example.

But, if you have a choice of how to make real, important, *CLINICAL* decisions about people's lives and health, and societal health through engineering civil structures and the like, and you care about the frequency with which you do things that have bad consequences that you can't foresee exactly, and you *don't* make your decision by choosing a method that is better than your likelihood + flat prior + point estimate based on Mean Squared Error, because you refuse to use a prior on some kind of principle, or you refuse to consider real world consequences other than mean squared error on some kind of principle, then in my opinion your principle is immoral, in the same way that prescribing a toxic drug on the principle that "I get a cut of the proceeds" is immoral.

If you make the decision because you don't know any better... then you're like the guy in the bed who gets hit by the car. But if you write books on statistics from a Frequentist perspective, and you fail to teach the Complete Class result, and you fail to emphasize the fact that you have a choice in what measure you will use in deciding on your decisions (such as the choice between Mean Squared Error in your estimate of the parameter value vs Quality Adjusted Life Years Saved of your clinical decision), then I think you're doing evil work in the same way that a person who teaches a Civil Engineering design rule that has been proven to be wrong and to risk people's lives is doing evil work.

So, I do get a little worked up over this issue. Remember I have background in Civil Engineering and I work with my wife who is a research biologist at a medical school. None of this is abstract to me, it's all real world "you shouldn't do that because it's wrong/bad for society/evil/it hurts people more often"

To back up a bit though: I don't think it's evil to care about the frequency properties of your procedures. I think it's evil to *fail to care* about the real world consequences of your decisions.

From the perspective of making decisions about things, such as point estimates of parameters, or clinical decisions about treatments, being a Frequentist (meaning, trying to reduce the average badness of outcomes under repeated trials) actually *entails* doing Bayesian Decision Theory. The Frequentist principle "try to reduce the average severity of bad outcomes" implies "Do Bayesian Decision Theory".

 

On the perversity of "likelihoodism" or Bayesian with Flat Priors

2017 June 26
by Daniel Lakeland

"Classical" statistics is a mishmash of various heuristic ideas. Frequentism is specifically about the Frequency with which repeated application of procedures would cause something to happen if the distributional assumptions used were really true. Some Frequentist procedures are not based on Likelihoods, and some are. When they are, they are based on Likelihoods but not priors.

We can reinterpret an interval constructed from a Likelihood with no prior as a Bayesian calculation with a nonstandard prior uniform(-N,N) for N a nonstandard integer (in IST nonstandard analysis, for example).

Now, let's examine the prior. What is the prior probability that the parameter will be limited? Let {[a,b]} be any finite set of intervals with standard endpoints. The prior density for uniform(-N,N) is 1/(2N), so the prior probability for any interval in the set is (b-a)/(2N), which is infinitesimal because N is nonstandard. This is true for all finite sets of standard intervals, so by the Idealization principle, it's true for all standard intervals.

So, classical interval inference using Likelihoods alone is equivalent to Bayesian intervals where the prior essentially dogmatically asserts that the parameter is either minus infinity or infinity. Only by renormalization via division by an infinitesimal after seeing the data do we get standard posterior probabilities.

P(Param | Data, Model) = P(Data | Param, Model) P(Param | Model) / P(Data|Model)

Since P(Param | Model) puts infinitesimal probability on limited parameters, and the parameter *is* limited, then P(Data | Model) is infinitesimal as well. In other words, Likelihood only inference asserts that it's virtually impossible to get any data that isn't infinite.

 

On models of the stopping process, informativeness and uninformativeness

2017 June 24
by Daniel Lakeland

I had a great conversation with Carlos Ungil over at Andrew Gelman's blog where we delved deep into some confusions about stopping rules and their relevance to Bayesian Inference. So here I'm going to try to lay out what it is I discovered through that conversation, and I'm going to do it in the context of Cox/Jaynes probability with explicit background knowledge, translating from what I consider the much more nebulous terms like "deterministic" or "random".

Standard Textbook Stopping Problem:

First off, let's talk about the "standard textbook" type of stopping rule: "Flip a Bernoulli coin with a constant p until you see 3 heads in a row and then stop." Suppose you get 8 observations HTTHTHHH. Now, the rule is completely known to you, and so you can determine immediately by looking at the data whether a person following the rule will stop or not. If someone hands you this data and says "given what you know about the rule, will they stop?" you will say P( STOP_HERE | Data, Rule) = 1. *For you* the rule is deterministic thanks to the textbook description. Under this scenario, knowing that they stopped at N=8 does not add anything that you didn't already know. Therefore it can't add any information to the inference. Basically

P(Parameters | Data, Rule, STOP) = P(Parameters | Data,Rule)
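
A tiny sketch of why nothing is added in the textbook case: the stopping rule is a deterministic function of the observed data, so P(STOP_HERE | Data, Rule) is either 0 or 1 and can be computed directly from the data (the function name and the encoding of flips as a string are just illustrative):

def stops_here(flips, run_length=3):
    # True iff "stop at the first run of run_length heads" says to stop exactly here:
    # no earlier run of heads occurred...
    for i in range(run_length, len(flips)):
        if all(f == "H" for f in flips[i - run_length:i]):
            return False
    # ...and the final run_length flips are all heads
    return len(flips) >= run_length and all(f == "H" for f in flips[-run_length:])

print(stops_here("HTTHTHHH"))   # True: stopping is already implied by the data alone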

Standard Real World Stopping problem:

In the real world, the reason why people stop is rarely so clearly known to you. The experimenter tells you "Our collaborators ran a few trials and read the protocols from another lab, and tried this thing out, and it seemed to work, and so we tried it in our lab, and after collecting 8 samples we saw that the results were consistent with what we'd been told to expect, and so we stopped."

Or a survey sample of a population comes with a dataset of answers from 120 people, and a description of the protocol: "we ran a small preliminary study to test out our questions, and found in the preliminary study that the phrasing of the questions didn't seem to matter, and so we used the observed variability in this previous study to determine that a sample of 120 people should give a cost effective dataset for answering the questions we asked, we then sampled a fixed 120 people" but... because the preliminary study was based on a mixture of different questions and so forth... it's not included in the data set. Here, the information gleaned in the earlier trial study is expressed only through the choice of 120 people. The "fixed deterministic" rule "sample 120 people" is informative for a bigger model in which you include the earlier steps.

Or a slightly more sophisticated version: "we sampled until our Bayesian model for the probability of parameter q being in the range 0-0.1 was less than 0.01, ie. p(q < 0.1) < 0.01" To the people running the study, a Bayesian posterior is a deterministic function of the data and the model. Everyone who has the same model always calculates the same posterior from the given data. But note, *you* don't know what their Bayesian model was, either priors or likelihood.

Deterministic vs Random stopping rule vs Cox/Jaynes restatement

In the literature, a stopping rule is called "informative" if it is "a random stopping rule that is probabilistically dependent on a parameter of interest". I personally think this is a terrible definition, because "random" is usually a meaningless word. It gave me a lot of trouble in my conversation with Carlos, because when I think random I pretty much exclusively use that in the context of something generated with a random number generator... but that's not what is meant here. So let's rephrase it in Cox/Jaynes probability terminology.

A stopping rule is informative to you if given your background knowledge, and the data up to the stopping point, you can not determine with certainty that the experiment would stop, and there is some parameter in your model which would cause you to assign different probabilities to stopping at this point for different values of that parameter.

Under this scenario, the *fact of stopping* is itself data (since it can't be inferred with 100% accuracy just from the other data).

Now in particular, notice that the informativeness of a stopping rule depends on the background knowledge. This should be no surprise. To a person who doesn't know the seed of a random number generator, runif(1) returns a "random" number, whereas to a person who knows the seed, the entire future stream of calls to runif is known with 100% certainty. It's best not to use the term random, and to replace it with "uncertain", or better yet "uncertain given the knowledge we have".

What can you usually infer from the stopping rule?

If you're in a situation where the stopping rule is uncertain to you, then if you believe it is helpful for your modeling purposes, you can add into your model of the data a model for the stopping rule (or a model for the choice of the "fixed" sample size). This is particularly of interest in real world rules along the lines of "my biologist friends told me what I should be getting from this experiment so I tried about 8 experiments and they kind of seemed to be consistent with what I was told, so I stopped". The actual rule for whether the experimenter would stop is very nebulous, but the fact of stopping might tell you something you could use to describe distributions over relevant real-world parameters. For example, suppose there's an issue with the way a drug is delivered that can cause toxicity if you do it "wrong". The fact that the biologist stopped after 8 experiments suggests that they believe p(DetectToxicity | DoingItWrong) is near 1, so that if you haven't seen it in 8 tries then you are virtually certain you are doing it "right".

So, eliciting information about the stopping rule is very useful because it can show you that there are potentially parameters you need to include in your model for which the fact of stopping informs those parameters, and particularly, parameters *that describe the nebulous uncertain rule*.

In the example above about sampling until a Bayesian posterior distribution excluded the range 0-0.1 with 99% probability, if someone tells you exactly the description of the Bayesian model, then if you plug in the data, you will immediately know whether the rule said stop or continue. But, if you know what the likelihood function was, but not the prior, then you could potentially infer something about the prior that was used from the fact that the sampling stopped at this point. If you think that prior was based on real background information, this lets you infer some of that real background info.

Summary:

To a Cox/Jaynes Bayesian there is no such thing as "random", only "uncertain given the background information". A stopping rule can teach you something about your parameters precisely when your model doesn't predict the fact of stopping with probability = 1 *and* your model has a parameter which affects the probability of stopping conditional on the data; in other words, when p(STOP_HERE | Data_so_Far, Background, Params) is a non-constant function of Params.

Then, given the data, the fact that the experiment stopped is additional information about some of the Params

Stopping rules are not irrelevant to Bayesian models, but they are only relevant in those circumstances. If you feel that the stopping rule is vague, or the reasons for the choice of the sample size seem based on some information that you're not privy to, then you might want to invent some parameters that might help you explain the connection between the fact of stopping and the things you want to know, such as "hidden" prior probabilities in use by the experimenter that inform their stopping.

 

Once more about Stan and Jacobian warnings

2017 May 30
by Daniel Lakeland

In Stan, if you have an expression that looks like

foo ~ distrib(a,b,c);

Where foo is either a function, or a transformed parameter variable, and distrib is some distribution, Stan will complain about you needing to add a log-Jacobian to the target.

DIAGNOSTIC(S) FROM PARSER:
Warning (non-fatal):
Left-hand side of sampling statement (~) may contain a non-linear transform of a parameter or local variable.
If so, you need to call target += with the log absolute determinant of the Jacobian of the transform.
Left-hand-side of sampling statement:

Now, if foo is directly a transform of a single parameter, then in fact Stan is correct. But if foo is the result of applying a function to several parameters together, collapsing an N dimensional space down to 1, then there does not exist a Jacobian determinant,

for example:

exp(q/r) ~ distrib(a,b,c)

where q and r are both parameters. There is one function, but two parameters, q,r. So the Jacobian matrix is 1x2 and has no determinant. How can you understand what this means?

Consider the prior on q,r; perhaps you've said
q ~ normal(qguess, qerr);
r ~ normal(rguess, rerr);

and you thought to yourself "Ok there's the prior for q and r", but then you wrote down:

exp(q/r) ~ distrib(a,b,c)

and thought to yourself... what? Something like "there's the prior for exp(q/r)", which is kind of true. But exp(q/r) is not a parameter, it's a deterministic function of the two parameters q,r. On the other hand, all these factors get multiplied together, so the actual prior for q,r is

normal_pdf(q,qguess,qerr) * normal_pdf(r,rguess,rerr) * distrib_pdf(exp(q/r),a,b,c)/Z

that is, "exp(q/r) ~ distrib(a,b,c)" declares a weighting function on values of q,r which further downweights those regions of q,r space that cause exp(q/r) to fall outside the high density region of distrib(a,b,c) (and you need the Z to re-normalize after the reweighting, but note that in MCMC you typically don't need the normalization constant so you're ok there).

This "declarative" technique produces an intentional "tilt" on the prior for q,r that helps you create a more complex and nuanced prior. In particular, when there are various ways to get your function to be near the high probability region for distrib, you'll wind up with a non-independent prior over the parameters q,r you might have a ridge along some curve, or several modes or whatever. This is good when you really do have information that q,r should lead your function to be in the high probability region of distrib.

To generate from this model, you'd need to first generate according to the simple prior, and then reject on the basis of the weighting function, so for example

q = normal_rng(qguess, qerr);    # draw q from its simple prior
r = normal_rng(rguess, rerr);    # draw r from its simple prior
f = exp(q / r);                  # the derived quantity the extra factor acts on
w = distrib_pdf(f, a, b, c) / distrib_pdf_max_pdf_value;   # weight in [0,1]
p = runif(0, 1);
if (p < w) accept else reject    # keep the draw (q,r) with probability w