On Morality of Real World Decisions, and Frequentist Principles

2017 June 27
by Daniel Lakeland

If you want to make decisions, such as choosing a particular point estimate of a parameter, or deciding whether to give a drug, or whatever, and you want your decision-making rule to have the *Frequency related* (Frequentist) property that under repeated application it will on average have small "badness" (or large "goodness"), then you should look for your procedure within the class of procedures mathematically proven to have unbeatable frequency-of-badness properties. This class is the Bayesian decision rules (see Wald's Complete Class Theorem and the Wikipedia article on admissible decision rules). The boundary of the class is the Bayesian decision rules with flat priors, but we know more: the frequency properties of Rule A will be better than those of Rule B whenever a region around the real parameter value has higher prior probability under A than under B. Then we are giving more weight to the actual correct value, and so our decision is based more on what will actually turn out to happen.

Unfortunately we don't know the correct value exactly, but in the case where we use flat priors, we are effectively using a prior that places essentially 100% of its probability on the value being infinitely large (the post below on "likelihoodism" makes this precise).

Now, if you, as a person who cares about the frequency properties of procedures, agree that your parameter *is* a real number, and you have any idea at all what the magnitude of that number is, say its base-10 logarithm rounded to the nearest integer is N, then by choosing a proper prior normal(0, 10^(N+100)) you will have lower Frequency risk of making "bad" decisions than if you used a flat prior. Of course, you can do better: normal(0, 10^(N+1)) will do better...
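
Here's a minimal sketch of that claim in R, under assumptions I'm adding purely for illustration: a single observation y ~ normal(theta, 1), squared-error loss, and the posterior mean as the decision rule. Under a flat prior the posterior mean is y itself, with frequentist risk exactly 1 at every theta; under a proper normal(0, tau^2) prior the posterior mean shrinks, and its risk is below 1 whenever tau is large compared to theta:

risk_shrink <- function(theta, tau) {
  w <- tau^2 / (tau^2 + 1)       # posterior-mean shrinkage weight
  w^2 * 1 + (1 - w)^2 * theta^2  # risk = variance + squared bias
}

theta <- 1                             # true value with magnitude ~ 10^N, N = 0
risk_shrink(theta, tau = 10^(0 + 1))   # ~0.9804, beats the flat prior's risk of 1
risk_shrink(theta, tau = 10^(0 + 3))   # ~0.999998, absurdly wide but still (barely) below 1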

Now, decision making and morality are inevitably intertwined. Consider the existence of the "trolley problem" in moral philosophy: it's all about making choices, each of which has bad consequences, but we have to make a choice, including the choice to "do nothing", which also has bad consequences. On the other hand, if you have no choice, there is no morality associated with your action. Getting hit by a drunk driver who crashes through the wall of your bedroom while you're sleeping is not a moral failing on your part, for example.

But suppose you have a choice of how to make real, important, *CLINICAL* decisions about people's lives and health, or about societal health through engineering civil structures and the like, and you care about the frequency with which you do things that have bad consequences you can't foresee exactly. If you *don't* make your decision by choosing a method that is better than likelihood + flat prior + point estimate based on Mean Squared Error, because you refuse to use a prior on some kind of principle, or you refuse to consider real-world consequences other than mean squared error on some kind of principle, then in my opinion your principle is immoral, in the same way that prescribing a toxic drug on the principle that "I get a cut of the proceeds" is immoral.

If you make the decision because you don't know any better... then you're like the guy in the bed who gets hit by the car. But if you write books on statistics from a Frequentist perspective, and you fail to teach the Complete Class result, and you fail to emphasize the fact that you have a choice of what measure you will use in evaluating your decisions (such as the choice between Mean Squared Error in your estimate of the parameter value vs Quality Adjusted Life Years Saved by your clinical decision), then I think you're doing evil work, in the same way that a person who teaches a Civil Engineering design rule that has been proven wrong, and that risks people's lives, is doing evil work.

So, I do get a little worked up over this issue. Remember, I have a background in Civil Engineering, and I work with my wife, who is a research biologist at a medical school. None of this is abstract to me; it's all real world "you shouldn't do that because it's wrong/bad for society/evil/it hurts people more often".

To back up a bit though: I don't think it's evil to care about the frequency properties of your procedures. I think it's evil to *fail to care* about the real world consequences of your decisions.

From the perspective of making decisions about things, such as point estimates of parameters, or clinical decisions about treatments, being a Frequentist (meaning, trying to reduce the average badness of outcomes under repeated trials) actually *entails* doing Bayesian Decision Theory. The Frequentist principle "try to reduce the average severity of bad outcomes" implies "Do Bayesian Decision Theory".

 

On the perversity of "likelihoodism" or Bayesian with Flat Priors

2017 June 26
by Daniel Lakeland

"Classical" statistics is a mishmash of various heuristic ideas. Frequentism is specifically about the Frequency with which repeated application of procedures would cause something to happen if the distributional assumptions used were really true. Some Frequentist procedures are not based on Likelihoods, and some are. When they are, they are based on Likelihoods but not priors.

We can reinterpret an interval constructed from a Likelihood with no prior as a Bayesian calculation with the nonstandard prior uniform(-N, N) for N a nonstandard integer (in IST nonstandard analysis, for example).

Now, let's examine this prior. What is the prior probability that the parameter is limited (i.e. bounded by some standard number)? Let {[a,b]} be any finite set of intervals with standard endpoints. The prior density for uniform(-N, N) is 1/2N, so the prior probability of any interval in the set is (b-a)/2N, which is infinitesimal because N is nonstandard. This is true for all finite sets of standard intervals, so by the Idealization principle, it's true for all standard intervals.

So classical interval inference using Likelihoods alone is equivalent to Bayesian intervals where the prior essentially dogmatically asserts that the parameter is either minus infinity or plus infinity. Only by renormalization, via division by an infinitesimal after seeing the data, do we get standard posterior probabilities.

P(Param | Data, Model) = P(Data | Param, Model) P(Param | Model) / P(Data|Model)

Since P(Param | Model) puts infinitesimal probability on limited parameters, and the parameter *is* limited, P(Data | Model) is infinitesimal as well. In other words, Likelihood-only inference asserts that it's virtually impossible to get any data that isn't infinite.
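
To spell out the one missing step (my own sketch, using the uniform(-N,N) prior from above and assuming the likelihood is appreciable only for limited values of \theta):

 P(\mathrm{Data}|\mathrm{Model}) = \int_{-N}^{N} P(\mathrm{Data}|\theta,\mathrm{Model}) \frac{1}{2N} \, d\theta = \frac{C}{2N}

where C = \int P(\mathrm{Data}|\theta,\mathrm{Model}) \, d\theta is an appreciable standard number, so the whole expression is a standard number divided by 2N, which is infinitesimal.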

 

On models of the stopping process, informativeness and uninformativeness

2017 June 24
by Daniel Lakeland

I had a great conversation with Carlos Ungil over at Andrew Gelman's blog where we delved deep into some confusions about stopping rules and their relevance to Bayesian Inference. So here I'm going to try to lay out what it is I discovered through that conversation, and I'm going to do it in the context of Cox/Jaynes probability with explicit background knowledge, translating from what I consider the much more nebulous terms like "deterministic" or "random".

Standard Textbook Stopping Problem:

First off, let's talk about the "standard textbook" type of stopping rule: "Flip a Bernoulli coin with a constant p until you see 3 heads in a row and then stop." Suppose you get the 8 observations HTTHTHHH. Now, the rule is completely known to you, and so you can determine immediately, by looking at the data, whether a person following the rule will stop or not. If someone hands you this data and asks "given what you know about the rule, will they stop?" you will say P(STOP_HERE | Data, Rule) = 1. *For you* the rule is deterministic, thanks to the textbook description. Under this scenario, knowing that they stopped at N=8 does not add anything that you didn't already know. Therefore it can't add any information to the inference. Basically:

P(Parameters | Data, Rule, STOP) = P(Parameters | Data,Rule)
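
As a quick illustration (a sketch I'm adding, not part of the original argument), the textbook rule is literally a deterministic function of the data, something you could write down in a few lines of R:

## returns TRUE exactly when the rule "stop after 3 heads in a row" fires
stops_here <- function(flips) {            # flips: character vector of "H"/"T"
  n <- length(flips)
  n >= 3 && all(flips[(n - 2):n] == "H")
}
stops_here(c("H", "T", "T", "H", "T", "H", "H", "H"))  # TRUE: P(STOP_HERE | Data, Rule) = 1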

Standard Real World Stopping Problem:

In the real world, the reason why people stop is rarely so clearly known to you. The experimenter tells you "Our collaborators ran a few trials and read the protocols from another lab, and tried this thing out, and it seemed to work, and so we tried it in our lab, and after collecting 8 samples we saw that the results were consistent with what we'd been told to expect, and so we stopped."

Or a survey sample of a population comes with a dataset of answers from 120 people, and a description of the protocol: "we ran a small preliminary study to test out our questions, and found in the preliminary study that the phrasing of the questions didn't seem to matter, and so we used the observed variability in this previous study to determine that a sample of 120 people should give a cost-effective dataset for answering the questions we asked; we then sampled a fixed 120 people". But... because the preliminary study was based on a mixture of different questions and so forth... it's not included in the data set. Here, the information gleaned in the earlier trial study is expressed only through the choice of 120 people. The "fixed deterministic" rule "sample 120 people" is informative in a bigger model in which you include the earlier steps.

Or a slightly more sophisticated version: "we sampled until our Bayesian model gave probability less than 0.01 to parameter q being in the range 0-0.1, i.e. p(q < 0.1) < 0.01". To the people running the study, a Bayesian posterior is a deterministic function of the data and the model: everyone who has the same model always calculates the same posterior from the given data. But note, *you* don't know what their Bayesian model was, either priors or likelihood.

Deterministic vs Random stopping rule vs Cox/Jaynes restatement

In the literature, a stopping rule is called "informative" if it is "a random stopping rule that is probabilistically dependent on a parameter of interest". I personally think this is a terrible definition, because "random" is usually a meaningless word. It gave me a lot of trouble in my conversation with Carlos, because when I think "random" I pretty much exclusively use that word in the context of something generated with a random number generator... but that's not what is meant here. So let's rephrase it in Cox/Jaynes probability terminology.

A stopping rule is informative to you if, given your background knowledge and the data up to the stopping point, you cannot determine with certainty that the experiment would stop, and there is some parameter in your model which would cause you to assign different probabilities to stopping at this point for different values of that parameter.

Under this scenario, the *fact of stopping* is itself data (since it can't be inferred with 100% accuracy just from the other data).

Now in particular, notice that the informativeness of a stopping rule depends on the background knowledge. This should be no surprise. To a person who doesn't know the seed of a random number generator, runif(1) returns a "random" number, whereas to a person who knows the seed, the entire future stream of calls to runif is known with 100% certainty. It's best not to use the term "random", and to replace it with "uncertain", or better yet "uncertain given the knowledge we have".
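
A two-line R demonstration of the point:

runif(1)       # "random" to anyone who doesn't know the generator's state
set.seed(123)
runif(1)       # 0.2875775 every time, for anyone who knows the seed is 123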

What can you usually infer from the stopping rule?

If you're in a situation where the stopping rule is uncertain to you, then if you believe it is helpful for your modeling purposes, you can add into your model of the data a model for the stopping rule (or a model for the choice of the "fixed" sample size). This is particularly of interest in real world rules along the lines of "my biologist friends told me what I should be getting from this experiment so I tried about 8 experiments and they kind of seemed to be consistent with what I was told, so I stopped". The actual rule for whether the experimenter would stop is very nebulous, but the fact of stopping might tell you something you could use to describe distributions over relevant real-world parameters. For example, suppose there's an issue with the way a drug is delivered that can cause toxicity if you do it "wrong". The fact that the biologist stopped after 8 experiments suggests that they believe p(DetectToxicity | DoingItWrong) is near 1, so that if you haven't seen it in 8 tries then you are virtually certain you are doing it "right".
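
Here's a hedged numerical sketch of that last example, with every number invented for illustration: a prior of 50% that the delivery method is "wrong", and a 50% chance per experiment of seeing toxicity if it is:

p_wrong  <- 0.5                 # hypothetical prior that the delivery is "wrong"
p_detect <- 0.5                 # hypothetical per-experiment chance of seeing toxicity, if wrong
n        <- 8                   # experiments run before the biologist stopped
p_clean_if_wrong <- (1 - p_detect)^n          # chance of 8 clean runs while doing it wrong
p_wrong_given_clean <- p_clean_if_wrong * p_wrong /
  (p_clean_if_wrong * p_wrong + 1 * (1 - p_wrong))  # clean runs assumed certain if "right"
p_wrong_given_clean             # ~0.004: the fact of stopping at 8 is strong evidence of "right"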

So, eliciting information about the stopping rule is very useful, because it can show you that there are potentially parameters you need to include in your model, parameters which the fact of stopping informs, and in particular parameters *that describe the nebulous, uncertain rule*.

In the example above, about sampling until a Bayesian posterior distribution excluded the range 0-0.1 with 99% probability: if someone tells you the exact description of the Bayesian model, then when you plug in the data you will immediately know whether the rule said stop or continue. But if you know what the likelihood function was, but not the prior, then you could potentially infer something about the prior that was used from the fact that the sampling stopped at this point. If you think that prior was based on real background information, this lets you infer some of that real background info.

Summary:

To a Cox/Jaynes Bayesian there is no such thing as "random", only "uncertain given the background information". A stopping rule can teach you something about your parameters precisely when your model doesn't predict the fact of stopping with probability = 1 *and* your model has a parameter which affects the probability of stopping conditional on the data. In other words, p(STOP_HERE | Data_so_Far, Background, Params) is a non-constant function of Params.

Then, given the data, the fact that the experiment stopped is additional information about some of the Params.

Stopping rules are not irrelevant to Bayesian models, but they are only relevant in those circumstances. If you feel that the stopping rule seems vague, or the reasons for the choice of the sample size seem based on some information that you're not privy to, then you might want to invent some parameters that help you explain the connection between the fact of stopping and the things you want to know, such as "hidden" prior probabilities in use by the experimenter that inform their stopping.

 

Once more about Stan and Jacobian warnings

2017 May 30
by Daniel Lakeland

In Stan, if you have an expression that looks like

foo ~ distrib(a,b,c);

where foo is either a function or a transformed parameter variable, and distrib is some distribution, Stan will complain about you needing to add a log-Jacobian adjustment to the target:

DIAGNOSTIC(S) FROM PARSER:
Warning (non-fatal):
Left-hand side of sampling statement (~) may contain a non-linear transform of a parameter or local variable.
If so, you need to call target += with the log absolute determinant of the Jacobian of the transform.
Left-hand-side of sampling statement:

Now, if foo is directly a transform of a single parameter, then in fact Stan is correct. But if foo is the result of applying a function to several parameters together, collapsing an N-dimensional space down to 1, then there does not exist a Jacobian determinant.

for example:

exp(q/r) ~ distrib(a,b,c)

where q and r are both parameters. There is one function but two parameters, q and r, so the Jacobian matrix is 1x2 and has no determinant. How can you understand what this means?

Consider the prior on q,r. Perhaps you've said:
q ~ normal(qguess, qerr);
r ~ normal(rguess, rerr);

and you thought to yourself "Ok there's the prior for q and r", but then you wrote down:

exp(q/r) ~ distrib(a,b,c)

and thought to yourself what? Something like "there's the prior for exp(q/r)", which is kind of true. But exp(q/r) is not a parameter, it's a deterministic function of the two parameters q,r. In effect the three statements multiply together, so the actual prior for q,r is:

normal_pdf(q,qguess,qerr) * normal_pdf(r,rguess,rerr) * distrib_pdf(exp(q/r),a,b,c)/Z

that is, "exp(q/r) ~ distrib(a,b,c)" declares a weighting function on values of q,r which further downweights those regions of q,r space that cause exp(q/r) to fall outside the high density region of distrib(a,b,c) (and you need the Z to re-normalize after the reweighting, but note that in MCMC you typically don't need the normalization constant so you're ok there).

This "declarative" technique produces an intentional "tilt" on the prior for q,r that helps you create a more complex and nuanced prior. In particular, when there are various ways to get your function to be near the high probability region for distrib, you'll wind up with a non-independent prior over the parameters q,r you might have a ridge along some curve, or several modes or whatever. This is good when you really do have information that q,r should lead your function to be in the high probability region of distrib.

To generate from this model, you'd need to first generate according to the simple prior and then reject on the basis of the weighting function. For example (the pseudocode made concrete in R; distrib_pdf and distrib_max_pdf_value are placeholders for your chosen distribution's density and its maximum value):

repeat {
  q <- rnorm(1, qguess, qerr)
  r <- rnorm(1, rguess, rerr)
  f <- exp(q / r)
  w <- distrib_pdf(f, a, b, c) / distrib_max_pdf_value  # acceptance probability in [0,1]
  if (runif(1) < w) break  # accept this (q, r) draw; otherwise loop and try again
}

The Reality of actual average taxes paid

2017 May 28
by Daniel Lakeland

According to IRS data, the following graph summarizes the tax situation in the US. It compares actual average taxes paid by AGI category (points) to a linear tax plus a $500/mo UBI paid to all adults.

This graph shows the range all the way out to $10M/yr income; here's the range where most of us actually are:

Finally, here's the number of people in these tax brackets:

You can see there are tons and tons of people down in the region near basically zero income; the brackets there are very closely spaced. Families who work and earn in the range $40k to $150k are the vast majority. Overall, there are a trivial number of people out past the ~$300,000 top bracket shown here, especially out past $1M or more. Taxes on the ultra-rich just don't matter for revenue, because there are too few of them to make a difference. (But they matter a lot for behavior, because ultra-rich people will dodge those taxes and distort the economically efficient allocation of vast amounts of business resources.)

Note that this compares a 35% flat tax + UBI to the current system. But to be deficit-growth neutral, we really only need around a 28% flat tax + UBI, and then we can use the UBI to replace many separate, inefficient government programs.
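
For concreteness, here is the schedule being compared, as a couple of lines of R (my own sketch of the arithmetic, not the original analysis code):

## net tax under a flat rate plus a $500/mo UBI for each adult
net_tax <- function(income, rate = 0.35, ubi_per_month = 500) {
  rate * income - 12 * ubi_per_month
}
incomes <- c(1e4, 4e4, 1e5, 3e5)
round(net_tax(incomes) / incomes, 2)   # effective average rates: -0.25 0.20 0.29 0.33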

 

WTF Economics: shifting curves UP/DOWN or LEFT/RIGHT?

2017 May 24
by Daniel Lakeland

Further evidence that I don't think like an Economist. In Mankiw's Macroecon undergrad textbook (and pretty much everywhere else you look on the web, etc., where they're trying to describe supply and demand curve models), the curves are drawn with quantity Q on the horizontal axis and price P on the vertical axis.

Now, in my world, if someone offers me something, such as, say, pears, at a particular price, I look at the price and decide what quantity I will buy, so Qd = D(P). And if I grow pears and have to harvest them and transport them to a market, then I look at the price they are selling for at the market and decide whether I'm going to pay the money to harvest and transport, so my market supplied quantity is Qs = S(P).

In other words, price is the input variable, and quantity is the output variable, and so I expect all the graphs to have P on the bottom horizontal X axis and Q on the left vertical Y axis.

The fact that it's intuitive to econ professors to flip these axes... is, I think, very weird. When I talk with economists I'd want to say "demand increases, so the demand curve shifts up", and they expect me to say "demand increases, so the demand curve shifts right".

It may be the convention, but you're not going to convince me that it is a good convention to go against everything that will be learned in every other math related field.

Probably too late to fix. Sigh

 

Some Notes for Phil on Modeling Housing Markets

2017 May 21
by Daniel Lakeland

Given all the stuff being discussed on Gelman's blog about the housing market, here's some ideas I had about how one might formulate a continuous version of the model (rather than say an agent based version).

First, let's talk about how many houses are being rented and how many people are renting them, as a function of price. Let's posit an important component of the model, F(t,r), the frequency distribution of houses renting for price r at time t. Now let's also posit some variables: N(t), the total number of houses; B(t,r), the average number of bedrooms per house renting for r; and C(t,r), the average number of people per bedroom in houses renting for r (C is for crowding). Then we can describe the relevant population of the area as:

 \mathrm{Pop}(t) = \int_0^\infty N(t)F(t,r)C(t,r)B(t,r) dr

Now F(t,r) can change in several ways. First of all there can be some turnover. When there's turnover of a house currently renting for rent r_1 and it starts renting for r_2 then we have:

\frac{d}{dt}(N(t) F(t,r_1)) = -\int N(t) F(t,r_1)To(t,r_1,r_2) dr_2 + \int N(t)F(t,r_2)To(t,r_2,r_1)dr_2 + \frac{dN(t,r_1)}{dt}

This is an integro-differential equation, with a kernel To(t,r_1,r_2) that describes the turnover transition of an existing apartment between different rents, and a function \frac{dN(t,r)}{dt} that describes how fast we're constructing housing and renting it out at price r, each changing in time.

Now, we'll imagine that the number of units actually on the market on any given day is negligible (we never have, say, 15 percent of all units empty, for example). So we can talk about a turnover rate v(t,r) for the volume of transactions at time t and price r. This defines a kind of spot price distribution. There's no one spot price, because there's no "universal apartment": different locations, different qualities, and different sizes produce different market rental spot prices. But if v(t,r) > 0 then at time t, some volume is available at price r.

There is a relationship between the kernel To(t,r_1,r_2) and the transaction volume v(t,r), namely that the total number of apartments transitioning from anywhere into price r, plus the new sales, is the volume:

v(t,r) = \int N(t) F(t,r^*) To(t,r^*,r)dr^* + \frac{dN(t,r)}{dt}

and also, there is a relationship between N(t) and \frac{dN(t,r)}{dt} namely:

\frac{dN(t)}{dt} = \int \frac{dN(t,r)}{dt} dr

This more or less describes the dynamics of the supply, at least if you posit specific forms for the F(0,r) function and some dynamics for the transition kernels, the building rate, and so forth.

But in order to get information about the transition kernels and the volume, etc., you need to have some information about demand, and for this you need to posit D(t,r), a demand for an apartment (or a bedroom?) at a given rent r. This describes the reservoir of people who are waiting in the wings to scoop up an apartment if one becomes available at a given price. We should probably split it into a component from inside our region and one from outside our region; the inside component will help us determine how often people are leaving their apartments.

It's clear that the demand can be a function of the character of the population; as Phil says, you can get more rich people demanding services, which then increases the demand at the lower end of r. So there is feedback between statistics of the population and the demand.

Now I think we have a structure that we can start to build on: specific functional forms could be hypothesized, and dynamics could be observed by solving the integro-differential equations, perhaps by discretizing the system and doing sums. We probably need a bit more than this; it isn't quite a complete, closed model, but it's a good start for a structure if you wanted to go after this with a continuous statistical approach.
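
To make "discretizing the system and doing sums" concrete, here's a toy sketch in R, with every functional form and number invented for illustration: rent bins, a turnover kernel matrix To, a construction rate, and forward-Euler time stepping of N(t)F(t,r):

nr <- 50
r  <- seq(500, 5000, length.out = nr); dr <- r[2] - r[1]      # rent bins
NF <- dnorm(r, 1500, 400); NF <- NF / sum(NF * dr) * 1e5      # initial stock by rent
To <- outer(r, r, function(r1, r2) dnorm(r2, 1.02 * r1, 200)) # drift toward higher rents
To <- 0.002 * To / rowSums(To * dr)   # normalize rows: ~0.2% of stock turns over per day
build <- dnorm(r, 3000, 300) * 5      # construction rate, skewed toward high rents
dt <- 1                               # one day per step
for (step in 1:365) {
  out <- NF * rowSums(To * dr)                # units leaving each rent bin
  inn <- as.vector(t(To) %*% (NF * dr))       # units arriving in each bin
  NF  <- NF + dt * (-out + inn + build)
}
## after a year, plot(r, NF) shows the rent distribution drifting upward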

 

Injecting Data into the SF Rent discussion

2017 May 17
by Daniel Lakeland

There was a massive shitstorm on Andrew Gelman's blog about SF rents, which are of course a hot-button topic. Since I have the complete ACS microdata, I figured I could inject some data into the discussion, at least. So here it is: the complete distribution of rents for occupied units, each year, in all of the SF-related Census PUMAs... in total about 8328 household observations across 2010-2015.

 


Tabular values from each year:

  YEAR   MedRent
1 2010 1.0000000
2 2011 0.9565217
3 2012 1.2173913
4 2013 1.3043478
5 2014 1.3913043
6 2015 1.3913043

Note: the above was the original table I published, but it is affected by the fact that the Census discretizes the rent. Adding a normal perturbation with sd = 0.01 to the normalized rent creates a smoothed distribution, and then you can get better estimates of the median and mean of the real distribution:

  YEAR   MedRent  MeanRent
1 2010 0.9959745 1.0020145
2 2011 0.9621274 0.9797048
3 2012 1.2243687 1.2844332
4 2013 1.3043896 1.3762188
5 2014 1.3854716 1.4446855
6 2015 1.4144859 1.5111725

So clearly the distribution has ratcheted up each year, and more at the right tail than anywhere else (the mean rose more than the median). Also, top-coding is a big problem here: lots of the rents are reported at the maximum allowable top-coded value rather than their real value, because of privacy issues.
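
For reference, the smoothing step is just a one-line jitter before summarizing; here's a sketch of the idea, with the data frame and column names being my own stand-ins:

## acs: one row per household, with YEAR and a rent normalized to the 2010 median
acs$rent_sm <- acs$rent_norm + rnorm(nrow(acs), 0, 0.01)  # undo the discretization
aggregate(rent_sm ~ YEAR, data = acs,
          FUN = function(x) c(MedRent = median(x), MeanRent = mean(x)))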

Combining information from independent experiments: further ideas

2017 April 19
by Daniel Lakeland

This is an interesting question connected to something posted at Andrew's blog. I posted one possible "solution" using Stan previously, and Corey and I did some exploration of the issues in the comments.

Here are some new thoughts I've had, but first the setup. There are N labs, each of which investigates a phenomenon in which there is a fundamental ratio R = X/Y, and X and Y are individually measured with error. Each lab publishes some posterior distribution for X and Y. In the example problem we have:

Xtrue[i] ~ normal(Xbar[i],sx[i]); // published by lab i

Ytrue[i] ~ normal(Ybar[i],sy[i]); //similar

for each lab i

Now we'd like to do inference from this on the ultimate quantity of interest, R. We assume that there is an underlying Rtrue of interest, and that in each lab, due to peculiarities of their apparatus etc., an underlying R[i] is at work, so that Xtrue[i]/Ytrue[i] = R[i] and R[i] is "close to" Rtrue.

R[i] ~ normal(Rtrue,deltaR);

with deltaR "small" and Rtrue a parameter in our model.

How can we do inference on Rtrue?

The thing that I recently came to is that in this situation we don't get the individual data points from the labs; we get as *data* only Xbar[i] and Ybar[i], the published values that describe the labs' posterior distributions over Xtrue[i] and Ytrue[i]. So we can do something like the following:

Rtrue ~ normal(OurRtrueGuess, someSpread);// our prior for Rtrue

R[i] ~ normal(Rtrue, deltaR);// our prior for R[i] given Rtrue, expresses the "closeness" of the individual experiments to the real R value

Ytrue[i] ~ normal(Ybar[i], sy[i]); // the published posterior probability distribution over Ytrue[i] from lab i

Xbar[i] ~ normal(R[i]*Ytrue[i],sx[i]); 
/* the likelihood of having Xbar be published
 as the estimate of Xtrue by lab i given 
Ytrue[i], R[i] and the lab published sx[i] inference error. 
That is: p(Xbar[i] | R[i],Ytrue[i])
*/

So now what we've got is essentially p(R) p(R[i]|R) p(Ytrue[i] | Data[i]) p(Xbar[i] | R[i], Ytrue[i], Data[i])

The narrow width of p(R[i]|R) is what describes the fact that all the labs are trying to carefully set up their apparatus to reproduce a single ratio R. The Ytrue[i] distribution is as given by the lab, and finally the likelihood over the Xbar value is potentially either the published distribution from the lab or, if we have additional information, some modified thing. When Xbar is a sufficient statistic and we're using a normal distribution, it can be as good as getting the whole dataset from the lab.

What's interesting is that, in these cases, the output of each lab's inference process becomes *observed data* that we use in a likelihood-type factor in our Bayesian model for combining the information.
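
Pulling the pieces above together into one runnable Stan program (my own consolidation, a sketch rather than the code from the earlier post; deltaR and the Rtrue prior settings are passed in as data rather than chosen here):

stancode <- "
data {
  int N;
  vector[N] Xbar; vector[N] sx;   // labs' published X estimates and errors
  vector[N] Ybar; vector[N] sy;   // labs' published Y estimates and errors
  real RtrueGuess; real RtrueSpread; real deltaR;
}
parameters {
  real Rtrue;
  vector[N] R;
  vector[N] Ytrue;
}
model {
  Rtrue ~ normal(RtrueGuess, RtrueSpread);
  R ~ normal(Rtrue, deltaR);
  Ytrue ~ normal(Ybar, sy);
  Xbar ~ normal(R .* Ytrue, sx);  // published Xbar treated as observed data
}
"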

 

Under Stan-ding the Hot Hand

2017 April 10
by Daniel Lakeland

Summary: It is hard to detect the "hot hand" in basketball with hit/miss shot data. In general in science, we often see people claiming that a thing doesn't exist because "they looked carefully and didn't find it". Well, the problem is that if your method of looking can't find the thing, then the fact that you didn't find it tells us nothing, yet this is taken as evidence that the thing doesn't exist. It's not just a triviality about basketball; it's a common problem in other important areas of science.

Imagine a player shoots baskets, and their probability of success is 0.5 plus a sinusoidal function of the shot number. We don't know the phase or period exactly; we want to look at a sequence of hits and misses and try to infer the amplitude of the variation and its frequency. In my case I'm using amplitude = 0.1, period = 40, and phase = 0.

## the hot hand, a player takes shots at a basket with a sinusoidal
## underlying prob of success, looking at N successive shots, if we
## know the period and phase perfectly can we detect the magnitude for
## different N = 20, 50, 100, 200, 500

library(rstan);
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
set.seed(1);

hitmiss = sapply(1:500,function(x){return(rbinom(1,1,prob=0.5 + 0.1 * sin(2*pi*x/40)))})

stancode =  "
functions{
real binp(real A,int i,real f,real ph) { return(0.5 + A*sin(2*pi()*(i+ph)/f));}
}
data{int hitmiss[500];}

parameters{real A20;real A50; real A100;
 real A200; real A500;
vector [5] f;
vector [5] ph;
}
model{
A20 ~ normal(0,.5);
A50 ~ normal(0,.5);
A100 ~ normal(0,.5);
A200 ~ normal(0,.5);
A500 ~ normal(0,.5);

f ~ gamma(2,2.0/50);
ph ~ normal(0,20);

for(i in 1:500){
if(i < 21){hitmiss[i] ~ binomial(1,binp(A20,i,f[1],ph[1]));
}
if(i < 51){hitmiss[i] ~ binomial(1,binp(A50,i,f[2],ph[2]));
}
if(i < 101){hitmiss[i] ~ binomial(1,binp(A100,i,f[3],ph[3]));
}
if(i < 201){hitmiss[i] ~ binomial(1,binp(A200,i,f[4],ph[4]));
}
if(i < 501){hitmiss[i] ~ binomial(1,binp(A500,i,f[5],ph[5]));
}
}
}
"

samps <- stan(model_code=stancode,data=list(hitmiss=hitmiss),
              init=list(
                  list(f=rep(50,5),A20=0.2,A50=0.2,A100=0.2,A200=0.2,A500=0.2,ph=rep(0,5)),
                  list(f=rep(50,5),A20=0.2,A50=0.2,A100=0.2,A200=0.2,A500=0.2,ph=rep(0,5)),
                  list(f=rep(50,5),A20=0.2,A50=0.2,A100=0.2,A200=0.2,A500=0.2,ph=rep(0,5)),
                  list(f=rep(50,5),A20=0.2,A50=0.2,A100=0.2,A200=0.2,A500=0.2,ph=rep(0,5))),
              iter=10000,thin=10)

samps


Running this model gives the following results:


Inference for Stan model: c8293fb08b00b7823bcc1fc716df0ef9.
4 chains, each with iter=10000; warmup=5000; thin=10; 
post-warmup draws per chain=500, total post-warmup draws=2000.

         mean se_mean    sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
A20      0.15    0.01  0.11    0.01    0.06    0.12    0.21    0.39   422 1.00
A50      0.11    0.00  0.08    0.01    0.04    0.10    0.17    0.30   410 1.00
A100     0.09    0.00  0.06    0.00    0.04    0.09    0.14    0.24   569 1.00
A200     0.09    0.00  0.06    0.00    0.04    0.09    0.13    0.20   288 1.01
A500     0.13    0.00  0.03    0.06    0.12    0.13    0.16    0.20   270 1.01
f[1]    45.95    1.09 35.87    5.22   19.06   37.12   62.08  138.20  1090 1.00
f[2]    39.30    1.46 34.07    3.89   12.61   29.49   54.20  132.16   541 1.00
f[3]    37.54    1.66 29.00    5.82   17.11   32.19   47.29  112.63   305 1.01
f[4]    38.49    1.50 27.51    8.49   19.08   35.79   45.11  108.88   336 1.01
f[5]    40.39    0.28  2.02   39.52   40.31   40.62   40.98   41.68    54 1.08
ph[1]   -0.97    0.95 18.74  -34.41  -13.40   -3.96   11.86   38.11   387 1.02
ph[2]    1.18    0.97 20.88  -40.78  -11.06    0.30   15.52   42.05   465 1.00
ph[3]   -2.06    2.08 20.75  -43.41  -14.11   -1.80   12.54   36.89    99 1.03
ph[4]    1.48    1.35 20.94  -39.44  -10.69    1.07   14.84   42.38   240 1.02
ph[5]    5.01    0.28  4.57   -2.00    2.57    4.69    7.45   12.91   267 1.00
lp__  -581.57    0.33  4.48 -591.30 -584.43 -581.32 -578.54 -573.58   185 1.02


Suppose we decide that we've "detected" the effect if we have an expected amplitude / sd(amplitude) of about 2. Then it takes upwards of 200 shots, because at 200 shots E(A)/sd(A) = .09/.06 ~ 1.5. Also note how few effective samples I get: 4 chains, 10000 iterations per chain (half warmup), thinned by a factor of 10, so I have 2000 total post-warmup samples, but typical parameters have only around a couple hundred effective samples. (The model is hard to fit because the whole thing is noisy and messy; traceplots reflect this.)

Also, this is with the benefit of a precisely repeated periodic signal and informative priors on the size, period, and phase of the effect.

If the effect varies like a smooth Gaussian process with a ~40 shot scale but no periodicity, and with no informative priors, it'd be like trying to estimate 20 or 30 Fourier coefficients simultaneously, or something like that... you'd need even more data and longer runs to overcome the noise and get a good effective sample size.

The model is ill-posed, the measurement is not very informative, and the Hot Hand is hard to detect from hit/miss data.

SO LACK OF DETECTION IS NOT EVIDENCE OF ABSENCE.