Inference on a ratio from two separate experiments (a case study in Stan awesomeness)

2017 April 9
by Daniel Lakeland

Andrew Gelman's blog has a post about a case where you do two experiments: in one you measure X,Y with measurement error and want to infer the true ratio of the underlying parameters Xbar/Ybar, and in a second experiment you do the same for Pbar/Qbar, where both experiments measure the same underlying ratio. Here is the R code to do the complete experiment:

Note that we want p(R | Xtrue,Ytrue,Ptrue,Qtrue) \propto p(R|Xtrue,Ytrue) p(R|Ptrue,Qtrue), and each of these factors is a delta function (that is, if you tell me Xtrue,Ytrue I know R exactly)... so we have to approximate the delta function with a sufficiently tightly peaked normal distribution. Basically we tell Stan to stay in the region where the two experiments agree to within epsilon = 0.001.

## inference on a ratio via two separate experiments

library(rstan);

dataset <- data.frame(X=rnorm(10,1,.1),Y=rnorm(10,2,.3),P=rnorm(10,30,6),Q=rnorm(10,60,4));

## now the true value of R = 1/2 = 30/60 but we have two experiments
## where we measure X,Y or we measure P,Q with measurement error that
## is of known size (we know the .1 and .3 and 6 and 4 used as
## standard deviations, assume we have a well calibrated instrument).

stancode = "data { vector[10] X; vector[10] Y; vector[10] P; vector[10] Q; }

parameters{
real<lower=0> Xtrue;
real<lower=0> Ytrue;
real<lower=0> Ptrue;
real<lower=0> Qtrue;
real<lower=0> Rat;
}

model{
// we assume on physical principles that these are positive and have an upper bound order of magnitude estimate
Xtrue ~ uniform(0,1000);
Ytrue ~ uniform(0,1000);
Ptrue ~ uniform(0,10000);
Qtrue ~ uniform(0,10000);

X ~ normal(Xtrue,.1);
Y ~ normal(Ytrue,.3);
P ~ normal(Ptrue,6);
Q ~ normal(Qtrue,4);

// constrain the ratios to agree to within a small epsilon
Rat ~ normal(Xtrue/Ytrue,.001);
Rat ~ normal(Ptrue/Qtrue,.001);

}
"

samps <- stan(model_code=stancode,data=dataset);
samps

When I run it, I get the following output:

Inference for Stan model: 5100ead2741217b889326da6ff0f0419.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

       mean se_mean   sd   2.5%   25%   50%   75% 97.5% n_eff Rhat
Xtrue  1.02    0.00 0.03   0.97  1.01  1.02  1.04  1.08  2105    1
Ytrue  2.06    0.00 0.08   1.91  2.00  2.06  2.11  2.22  1568    1
Ptrue 30.24    0.04 1.31  27.71 29.34 30.23 31.15 32.87  1410    1
Qtrue 60.64    0.03 1.22  58.20 59.85 60.66 61.43 63.06  2172    1
Rat    0.50    0.00 0.02   0.46  0.48  0.50  0.51  0.54   927    1
lp__  -6.28    0.04 1.59 -10.39 -7.05 -5.95 -5.11 -4.21  1588    1

Samples were drawn using NUTS(diag_e) at Sun Apr  9 19:37:53 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Note that with 10 observations in each experiment, the posterior mean of Rat is 0.50 with a Monte Carlo standard error that rounds to 0.00, and almost all the samples of Rat fall between about 0.46 and 0.54.
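If you want more than the printed summary, you can work with the posterior draws of Rat directly; a minimal sketch using the fit above:

## pull out the posterior draws of the ratio and summarize them
Rat_draws <- rstan::extract(samps, pars = "Rat")$Rat
quantile(Rat_draws, probs = c(0.025, 0.5, 0.975))   ## median and central 95% interval
hist(Rat_draws, breaks = 50, main = "Posterior of Rat")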

What to do with Census outliers?

2017 April 9
by Daniel Lakeland

Consider the following records from the Census ACS for Pasadena CA:

      YEAR NP GRNTP MRGP  HINCP
11570 2013  2  1030   NA  53000
11835 2013  3  1210   NA  29000
12945 2013  2    NA 2200 200200
16433 2014  2    NA 3100 181000
18641 2015  2  2080   NA 128500
20854 2015  6   260   NA  57000

Apparently there is a family of 6 living in Pasadena, making $57,000/yr (HINCP) and paying $260 a month in rent (GRNTP).

Now, if you told me $2600 I wouldn't blink an eye, and as you can see that would be in line with the $2080 for a family of 2, or $3100 in mortgage for a family of 2. But $260 could only occur if you were renting from your uncle or you are acting as an apartment complex manager and part of your compensation is reduced rent, or some other similar situation.

So, given that some of these records go through optical scanners and could come out with a dropped decimal place or the like, and given that some people are in situations like the apartment complex manager with hidden income in the form of subsidized rent... how should one think about handling this kind of record when the goal is to estimate a function like "minimum market rent for a family of N people in each region"?

One idea would be to simply trim the data: ignore anything that is, say, less than 20% of the observed mean or more than 5x the observed statewide mean, which would catch most dropped or added decimal places (those cause a factor of 10 error). Another idea would be a model in which each record has some probability of being an outlier generated by a separate process, so we learn which records to ignore by virtue of their high posterior probability of being "outliers"; but this requires carrying an extra parameter for each observation. A third possibility is to marginalize out the per-record outlier parameter and learn only the frequency of outliers, treating the observed data as coming from a mixture model whose weight equals that marginalized frequency (a sketch of this third option is below).
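For concreteness, here is a minimal sketch of what that marginalized mixture might look like in Stan. The variable names, the log scale, and the priors are hypothetical illustration, not the model I'm actually running:

data {
  int<lower=1> N;
  vector[N] logrent;              // log of observed gross rent for one group of records
}
parameters {
  real mu;                        // center of the "market" process on the log scale
  real<lower=0> sigma;            // spread of market rents
  real<lower=0, upper=1> theta;   // marginal frequency of outlier records
}
model {
  mu ~ normal(7, 1);              // exp(7) is roughly $1100/month, a weak centering
  sigma ~ normal(0, 1);           // half-normal because of the lower bound
  theta ~ beta(1, 9);             // outliers expected to be rare
  for (n in 1:N)
    target += log_mix(theta,
                      normal_lpdf(logrent[n] | mu, 3),       // broad "outlier" component
                      normal_lpdf(logrent[n] | mu, sigma));  // "market" component
}

Each record contributes a weighted sum of the two component densities, so no per-record indicator is carried; the only extra parameter is theta.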

I'm sure there are other ways, but one issue is this: I already have a lot of parameters due to the large number of public use microdata areas I'm doing inference for, and runs are slow, so it'd be good to avoid complex models just for computational reasons.

Any thoughts?

 

PUMAs - Public Use Microdata Areas of the Census

2017 April 8
by Daniel Lakeland

The Census ACS and similar microdata datasets use PUMAs, which are regions that each contain about 100k people. In 2010 the PUMAs were revised to be simpler and completely contained within a state. But I gotta say, the PUMA is the wrong way to do things.

For the ACS and similar microdata, each household record should have a Latitude and Longitude associated with it. These values should equal the actual Lat/Long of the household plus a random, uniformly distributed perturbation. The record should give the size of this perturbation in each dimension, and the size should be determined by the local population density such that the patch contains approximately 100k people. The uniform distribution should be used because it's bounded, so you can be sure you're never more than a certain distance away from the true location.
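As a toy sketch of the fuzzing step (my own illustration, not anything the Census actually does; wlat and wlon are hypothetical half-widths chosen per record so the surrounding patch holds roughly 100k people):

## add a bounded uniform perturbation to each household location and publish
## the perturbation half-widths along with the record
fuzz_location <- function(lat, lon, wlat, wlon) {
  data.frame(lat  = lat + runif(length(lat), -wlat, wlat),
             lon  = lon + runif(length(lon), -wlon, wlon),
             wlat = wlat,
             wlon = wlon)
}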

The record still has the State, so if you go across a state boundary you'd have some issues, which I'm sure the Census is capable of working out. But other than that, it makes way more sense to fuzz out the location with a continuous perturbation than to take on the complexity of redrawing boundary lines every few years, completely re-doing your identification scheme, and screwing up any chance you had of working with spatial time-series models...

Just sayin.

 

A Bayesian Understanding of the "Garden of Forking Paths"

2017 April 7
by Daniel Lakeland

Andrew Gelman has discussed and written on his concept of "The Garden Of Forking Paths" in NHST analysis of scientific data.

"Laplace" whose insights I respect a lot has ridiculed the idea, and when put into the terms he uses, I agree with him. However, I don't think that Gelman's point is quite the same as the one Laplace ridicules. So thinking about it, here's how I'd like to proceed to an understanding.

For simplicity we'll analyze the situation in which a researcher collects data D and then does a test T to determine whether the two subsets A^+(D) and A^-(D) differ in some way that is detectable by the test through a sample statistic S.

First off, consider the various options available to the researcher:

T_i, i \in [1..I]

and

S_j, j \in [1..J]

and

A_k, k \in [1..K]

That is, we can choose which test to use, which statistic to test, and how to subset and exclude certain portions of the data to form the partition (the function A partitions and excludes the data, so that there are two groups).

Now, what is the Bayesian probability that p < 0.05, given our knowledge N? (I use N because I've already used K.) The joint probability factors as

P(p < 0.05 | D,N,i,j,k) P(D,i,j,k | N)

Suppose in the first case that N contains the information "i,j,k were preregistered choices and D was collected after i,j,k were specified and is independent of the i,j,k". Then P(i,j,k|N) = 1, and P(p < 0.05 | N) is determined entirely by our knowledge in N of the appropriateness of the test and the p values that it outputs.

P(p < 0.05 | D,N) P(D|N)

So, we're still left with all the problems of the use of p values, but we're at least not left with the problems described below.

In the case that N contains the information "I,J,K are all large integers and were chosen after seeing D, and the researcher is motivated to get p < 0.05 and probably at least looked at the data, produced some informal graphs, and discussed which analysis to do with colleagues", we're left with the assumption that i,j,k were chosen from among those analyses which seemed, via informal data "peeking", likely to give p < 0.05. So the Bayesian is left with:

P(p < 0.05 | i,j,k,D,N) P(i,j,k|D,N) P(D|N)

Now, due to our pre-analysis choice peeking, we can safely assume

P(p < 0.05 | i,j,k,D,N) \sim 1

Sure, it might not be exactly 1, but it's much, much bigger than 0.05: maybe 0.5 or 0.77 or 0.93. And this holds FOR ALL i,j,k that would actually be chosen.

P(i,j,k|D,N,\{i,j,k\}\in G) = O(1)

where G is the reachable subset of the I \times J \times K space called "the garden of forking paths": the subset out of which any typical researcher would find themselves choosing i,j,k, and which leads to analyses where P(p < 0.05 | i,j,k,D,N) \sim 1.
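As a toy illustration of that claim (my own sketch, not anything from Gelman's post): with pure-noise data and K candidate ways to partition it, a researcher who peeks and reports the most promising-looking split gets p < 0.05 far more often than 5% of the time.

## simulate one researcher: n observations of pure noise, K candidate partitions A_k,
## and the analysis actually reported is the one that looked best after peeking
gofp_once <- function(n = 40, K = 20) {
  y <- rnorm(n)                         ## outcome with no real effect
  covars <- matrix(runif(n * K), n, K)  ## K arbitrary ways to split the sample
  ps <- sapply(1:K, function(k) {
    grp <- covars[, k] > 0.5            ## candidate partition A_k
    t.test(y[grp], y[!grp])$p.value
  })
  min(ps)                               ## the chosen path's p value
}
set.seed(1)
mean(replicate(2000, gofp_once()) < 0.05)  ## well above 0.05 (about 1 - 0.95^20 if the tests were independent)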

So, how much information does p < 0.05 give the Bayesian about the process of interest? In the preregistered case, it at least tells you something like "it is unlikely that a random number generator of the type specified in the null hypothesis test would have generated the data" (not that we usually care, but this could be relevant some of the time).

In the GOFP case, it tells us "these researchers know how to pick analyses that will get them into the GOFP subset so they can get their desired p < 0.05 even without first doing the explicit calculations of p values."

So, using this formalism, we arrive at the idea that it's not so much that GOFP invalidates the p value, it's that it alters the evidentiary value of the p value to a Bayesian.

 

Once again on Using the Data Twice

2017 April 6
by Daniel Lakeland

So there's an argument in the blogtwitterosphere about updating your bayesian model with the data twice.

On the one hand, we have EJ Wagenmakers saying that it is not OK, and on the other hand we have "Laplace" saying updating with the data twice is totally fine.

Now, Laplace's math is absolutely correct. But, it's also subtle, because it's purely symbolic. When we build a model, we need to write down specific mathematical expressions for p(Foo | Bar) for all the foos and bars.

Let's see Laplace's equation in detail:

P(A|B,B) = P(B,B|A) P(A) / P(B,B) = P(B|B,A) P(B|A) P(A) / (P(B|B) P(B))

Now, P(B|B,A) = 1 because given B, B has to be true, and the same goes for P(B|B); when you plug those in, you get P(B|A)P(A)/P(B) = P(A|B) = P(A|B,B).

BUT: when you build a Bayesian mathematical model, you CAN make mistakes, just like when you program a numerical integration routine, you can make mistakes. Suppose instead of P(A|B,B) we calculate P(A|B1,B2) where B2 is a deterministic copy of the data in B1.

Now, if we *remember this fact* correctly, we'll get P(B2|B1) = 1 and P(B2,B1) = P(B1), and we'll get the results above. But if we forget this fact and pretend that B2 is new, independent data, we will get the same results as if we had collected 2x as much data as we really did collect and treated it all as separate information. The mistake is as simple as writing something like

for (i in 1:(2*N)) {
  data[i] ~ normal(foo, bar);   // wrong: the duplicated copy is treated as new information
}

instead of

for (i in 1:N) {
  data[i] ~ normal(foo, bar);   // right: each observation contributes exactly once
}

The second one is correct: the second copy of the data adds no information to the posterior, because the probability of each data value past the Nth is 1 given that we already know data values 1..N.
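To see the practical effect of that bug, here's a quick back-of-the-envelope illustration (my own, assuming a flat prior on the mean and a known standard deviation of 1): the 2*N loop reports a posterior standard deviation that is too small by a factor of sqrt(2).

## with a flat prior and known sd = 1, the posterior sd of the mean is 1/sqrt(n)
n <- 20
c(correct = 1 / sqrt(n),       ## what the N-point loop implies
  buggy   = 1 / sqrt(2 * n))   ## what the duplicated 2*N loop implies: overconfident by sqrt(2)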

It's *this mistake* which is a bug, is common, and leads to statements along the lines of "only use the data once". The statement "only use the data once" is like the statement "don't use global variables to pass arguments to functions": it's useful advice that reduces the chance of committing an error. It's not mathematical truth.

 

Suicide Rates in the US

2017 April 4
by Daniel Lakeland

I got this data from CDC Wonder, and I let them do the population rate calculation, so I don't know if they did it right, but let's assume they did. Here are suicide rates for Males and Females ages 15 to 65 across the whole US, by race.

Notice that White rates are higher than Black or Asian rates, and that they've been trending upwards steadily since 2000.

 

A new term

2017 March 27
by Daniel Lakeland

Wansinking (verb, gerund): To do research about an essentially unimportant topic in a sloppy and unprincipled manner, possibly even inventing data, while attracting enormous amounts of credulous popular press coverage and corporate sponsorship for years and years, and dodging criticism by acting or being essentially clueless. cf. Brian Wansink.

 

Call for Book Recommendations on Game Theory

2017 March 9
by Daniel Lakeland

I'd like a book on game theory that is readable and interesting, has some examples that are somewhat real-world, but isn't afraid to use math (i.e. not something written for people who read Malcolm Gladwell, etc.).

I dislike Definition, Theorem, Proof math books. I mean, not entirely: I like that stuff fine when I'm reading about abstract math, but I generally find it tedious when the topic should have applied content. What interests me is how the formalism maps to the real world, not excruciating details of what the content of the formalism is.

I loved this book on set theory, and Barenblatt's book Scaling, and Practical Applied Mathematics, and I'd like something at that level if possible. Also helps if it's less than $50 or so.

Hoping someone has something to recommend.

Of particular interest: issues in mechanism design, dynamic / repeated games, games with no stable equilibrium, rent-seeking, games where greedy algorithms fail, etc. I've been thinking a lot about economics problems, and I'd like to get familiar enough with the basic stuff to be able to talk about why certain policies put together result in lousy outcomes without appearing foolish for missing some very basic known results etc.

Also, any thoughts specifically on Steven Tadelis' book?

Bet-Proofness as a property of Confidence Interval Construction Procedures (Not realized intervals)

2017 March 8
by Daniel Lakeland

There was a bunch of discussion over at Andrew Gelman's blog about "bet proof" interpretations of confidence intervals. The relevant paper is here.

Below is what I originally wrote, in grey. Nope. I was misreading. The actual definition of bet-proofness is also weird. Here it is in essence:

A confidence interval construction procedure is bet proof if for every betting scheme b(x) there exists some value of the parameter \Theta such that this scheme will lose in the long run.

This is kind of the converse of what I was thinking before. But here's my example of why that makes no sense (quoted here from what I wrote on Andrew's blog after I finally figured out what was going on).

Now, with the above reworded Definition 1, I can see how the game is about revealing X and not revealing Theta. But, I don’t see how it is interesting. Let me give you an example:

Our bet is to sample a random sample of every adult who lives within 2 blocks of me. We will measure their heights X, then you’ll construct a confidence interval CI(X), and I will bet whether after we measure all the rest of them, the population mean Theta will be in the CI.

Now, being a good Bayesian, I use Wald's theorem and realize that any strategy to decide what my bet winnings will be that doesn't use a prior and a loss function will be dominated by one that does…. So I'll go out to Wikipedia and google up info about the population of the US and their heights, and I'll construct a real world prior and then I'll place my bet.

Now bet proofness says that because it’s the case that if the actual height of people in a 2 block radius is 1 Million Feet (ie. THERE EXISTS A THETA), my prior will bias my bets in the wrong direction and I will not be able to make money…. that this CI is all good, it’s bet proof.

And how is that relevant to any bet we will actually place?

The basic principle of bet-proofness was essentially that if a sample of data X comes from an RNG with known distribution D(\Theta) that has some parameter \Theta, then even if you know \Theta exactly, so long as you don't know what the X values will be, you can't make money betting on whether the constructed CI will contain \Theta (the paper writes this in terms of f(\Theta), but the principle is the same since f is a deterministic function).

The part that confused me was that this was then taken to be a property of the individual realized interval: "because an interval came from a bet-proof procedure, it is a bet-proof realized interval", in essence. But this defines a new term, "bet-proof realized interval", which is meaningless when it comes to actual betting. The definition of "bet-proof procedure" explicitly averages over the possible outcomes of the data collection procedure X; but after you've collected X and told everyone what it is, someone who knows \Theta and X can calculate exactly whether the confidence interval does or does not contain \Theta, and so they win every bet they make.

So "bet-proof realized confidence interval" is really just a technical term meaning "a realized confidence interval that came from a bet proof procedure" however it doesn't have any content for prediction of bets about that realized interval. The Bayesian with perfect knowledge of \Theta and X and the confidence construction procedure wins every bet! (there's nothing uncertain about these bets).

 

"Person" as a dimension?

2017 March 6
by Daniel Lakeland

Consider things like GDP / capita and how to use them in constructing dimensionless ratios. Now, a person is not an infinitely divisible thing; it's pretty much meaningless to talk about 2.8553 people. Just like molecules, "people" is a count, a dimensionless integer. But then think about the "mole" in SI units. It has all the qualities that normally make up an arbitrary dimensional unit: you can sub-divide it essentially continuously (at least to 23 decimal places). And that's the key thing about dimensions. The symmetry property of dimensional units is essentially this: if you define a unit of measure of something, say the Foo, and someone else defines the Bar, and 1 Bar is equal to x Foos, then you can turn all your equations in Bar units into equations in Foo units by multiplying by x Foos/Bar wherever you have something measured in Bars.

This is closely related to the renormalization group. However, it breaks down when the thing you are measuring is not infinitely divisible (or approximately so).

So, in the asymptotic regime where you are describing large aggregates of people (say countries or states or very large corporations) you can calculate statistics in terms of "per capita" and treat capita as if it were an infinitely divisible thing, and therefore as if it represents a "dimension". However, in the asymptotic regime where you are discussing a small number of people, you will always use an integer (1, 2, 3, 4 people, etc.), and so the count should be treated as simply a dimensionless number that does not enter into the dimensional calculation.

So, when I calculated Total Wages Paid / Total Hours Worked / Market Cap Of Stocks * 1 hr, the dimensions of this are Dollars/Hours / Dollars * Hours = 1 = dimensionless, and this is implicitly because we're discussing what fraction of the total market a single person could buy if they received an average amount of money for their hour of work.
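Spelled out, the unit cancellation in that ratio is just:

\frac{\text{Dollars}/\text{Hour}}{\text{Dollars}} \times \text{Hour} = \frac{\text{Dollars}}{\text{Hour}} \cdot \frac{1}{\text{Dollars}} \cdot \text{Hour} = 1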

On the other hand, when thinking about the aggregate when I calculated

(GDP/capita * Fraction Of GDP To wages ) / (GDP/Capita * Market Cap as Fraction of GDP * 1 Yr)

We are now explicitly comparing averages across a large pool of people, and we might be interested in how "capita" changed in time in both cases. "Capita" changes in a near-continuous manner: although it's an integer, it's an integer like 325,000,000 with 9 significant figures, so we can ignore the incremental person's discrete effect.

Whether to treat Capita or People as a dimension is more or less down to a choice of how you are thinking about the problem. When the number of people involved is large and explicitly considered in a ratio, such as in the calculation of GDP/Capita then treating "Capita" as a dimension that needs to cancel makes sense. When the number of people is small and a specific integer: such as "for a family of 3 the cost to purchase food for a month is X Dollars/Month" it would make sense to treat the number of people as a dimensionless count and not calculate something like X/3 Dollars/Month/Person, so if you want to ask what's the relative cost of feeding 3 people per month vs watering a lawn at cost Y dollars/month you can say X/Y is dimensionless ignoring the fact that a family of 3 people is involved.

The most useful case for treating people as a dimension is when there is a natural linear relationship between the number of people and the thing of interest. For example, for adults, feeding 3 of them requires about 3 times the mass of food as feeding one. But because of economies of scale in cooking and shopping, there is no real reason to think it requires 3 times the dollar cost (for home-cooked meals). So Dollars/Person is not a meaningful statistic in the asymptotic small-person-count regime. On the other hand, for a whole Battalion of troops, it would be. This is directly related to the "unit conversion symmetry" property I described in the first paragraph. If you define a Battalion of troops as 7000 and a different country defines it as 5300, then you can convert the other country's equations to your units by multiplying their Battalion numbers by 5300 and dividing by 7000, and the assumed linearity of the relationships in the asymptotic limit makes these discrete counts behave approximately like arbitrary, linearly rescalable dimensional units.