Randomized Chess

2014 September 5
by Daniel Lakeland

I've been sick a lot recently, in part thanks to having small children. In any case, one thing I've been doing is revisiting Chess. I'm honestly pretty clumsy at Chess, but it's one of those things I always felt I should probably do. When I was younger most of my friends played stronger games than I did, and it was hard to enjoy the game when I was getting beaten all the time. Now, thanks to Moore's law and clever programming, even the very very very top players are useless against a 4-core laptop computer running Stockfish.

So we can all agree now that it's no fun getting blasted out of the water every time, but also we can use computers to make things better and more interesting for humans, since that's what they're for, right?

There are lots of proposals for randomized or alternative-starting-position Chess games. For example, Chess960 (Fischer random chess) is a variant with 960 possible starting positions. The idea is to avoid making Chess a game where a big advantage comes from memorizing opening moves in some opening database. I'm more or less in favor of this for my own play. I enjoy playing Chess well enough, but I have absolutely NO interest in poring over variation after variation in a big book of opening theory. Some people do like this stuff, and they can of course continue to play regular chess.

On the other hand, for people like me, consider the following method of starting the game:

  1. Set up the board in standard starting position.
  2. Using a computer, play N random legal pairs of moves (turns), with or without captures.
  3. Using a chess program on the computer, find the "score" for this position.
  4. Accept the position if the chess program decides that the score is within \epsilon of 0.0 (where positive is good for white and negative is good for black; this is standard output from chess engines), otherwise go to step 1.
  5. Assign the color randomly to the human (or to one of the humans if you're not playing against a computer).
  6. Start the game by allowing white to move.

Note, this variation can also be used to handicap games by accepting a starting position if it is within \epsilon of some handicap value h, and then assigning the weaker player to the color that has the advantage. It's also possible to play N random moves and then let the computer play K further moves until the score evens out properly, if you can get support from the chess engine. Finally, it's also possible to search a large database of games for one in which, after N moves, the position evaluates to within \epsilon of the appropriate handicap value, rather than generating random moves.
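Here is a minimal sketch in R of the accept/reject loop above, covering both the even start and the handicap variant. The helpers standard_start_position(), random_legal_turn(), and engine_score() are hypothetical stand-ins for a chess library plus a UCI engine such as Stockfish; they are assumptions, not real functions.

generate_start <- function(N = 8, eps = 0.25, h = 0) {
  # h = 0 targets an even start; a nonzero h targets a handicapped start
  repeat {
    pos <- standard_start_position()           # hypothetical helper: standard setup
    for (i in 1:N) {
      pos <- random_legal_turn(pos)            # hypothetical helper: one random move by each side
    }
    score <- engine_score(pos)                 # hypothetical helper: engine eval, positive favors white
    if (abs(score - h) < eps) {
      return(pos)                              # accept; otherwise start over from scratch
    }
  }
}

After that, assign colors at random and let white move first, as in steps 5 and 6.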

I suspect N=6 to N=10 would be the appropriate number of moves to use.

Now, who will implement this in SCID or the like?

 

What it takes for a p-value to be meaningful.

2014 September 3
by Daniel Lakeland

Frequentist statistics often relies on p values as summaries of whether a particular dataset implies an important property about a population (often that the average is different from 0).

In a comment thread on Gelman's blog (complete with a little controversy) I discussed some of the realistic problems with that, which I'll repeat and elaborate here:

When we do some study in which we collect data d and then calculate a p value to see whether the population has some particular property, we calculate the following:

1-P(s_1(d),s_2(d),\ldots,s_n(d))

Where P is a functional form for a cumulative distribution function, and s_i are sample statistics of the data d.

A typical case might be 1-p_t(\sqrt{n(d)}\,\bar{d}/s(d), n(d)-1) where \bar d is the sample average of the data, s(d) is the sample standard deviation, n(d) is the number of data points, and p_t is the standard t distribution CDF with n-1 degrees of freedom.
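For concreteness, here is that calculation done by hand in R for a one-sided, one-sample t test on some made-up data; it agrees with what the built-in t.test() reports.

set.seed(1)
d <- rnorm(20, mean = 0.3)                     # made-up data
tstat <- mean(d) / (sd(d) / sqrt(length(d)))   # sqrt(n) * dbar / s(d)
1 - pt(tstat, df = length(d) - 1)              # the p value: 1 minus the t CDF with n-1 df
t.test(d, alternative = "greater")$p.value     # the same number from the built-in test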

The basic idea is this: you have a finite population of things, you can sample those things, and measure them to get values d.  You do that for some particular sample, and then want to know whether future samples will have similar outcomes. In order for the p value to be a meaningful way to think about those future samples you need:

  • Representativeness of the sample. If your sample covers a small range of the population's total variability, then obviously future samples will not necessarily look like your current sample.
  • Stability of the measurements in time. If the population's values are changing on the timescale between now and the next time you have a sample, then the p value is meaningless for the future sample.
  • Knowledge of a good functional form for P. When we can rely on things like central limit theorems, and certain summary statistics therefore have sampling distributions that are somewhat independent of the underlying population distribution, we will get a more robust and reliable summary from our p values. This is one reason why the t-test is so popular.
  • Belief that there is only one, or at most a small number, of possible analyses that could have been done, and that the choice of sample statistics and functional form was not influenced by information about the data. The quantity p_q=1-P_q(s_{iq}(d)) represents, in essence, a population of possible p values from analyses indexed by q. When there is a wide variety of possible values of q, the fact that one particular p value was reported with "statistical significance" only indicates to the reader that it was possible to find some q that gave the required small p_q.

The "Garden of Forking Paths" that Gelman has been discussing is really about the size of the set q independent of the number of values that the researcher actually looked at. It's also about the fact that having seen your data, it is plausibly easier to choose a given analysis which produces small p_q values even without looking at a large number of q values when there is a large plausible set of potential q.

Gelman has commented on all of these, but there's been a fair amount of hoo-ha about his "Forking Paths" argument. I think the symbolification of it here makes things a little clearer: if there are a huge number of q values which could plausibly have been accepted by the reader, and the particular q value chosen (the analysis) was not pre-registered, then there is no way to know whether p is a meaningful summary about future samples representative of the whole population of things.
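A toy simulation of that last point: the data below are pure noise, but if the analyst is free to pick among a few plausible test statistics q and report the smallest p_q, the rate of "significant" findings climbs well above the nominal 5%. The particular choices of q here are just illustrative assumptions, not anyone's actual analysis.

set.seed(2)
one_dataset <- function(n = 30) {
  d <- rnorm(n)                                   # pure noise
  g <- rep(c("a", "b"), length.out = n)           # an arbitrary grouping the analyst might try
  c(t.test(d)$p.value,                            # q = 1: one-sample t test
    wilcox.test(d)$p.value,                       # q = 2: signed-rank test
    t.test(d[g == "a"], d[g == "b"])$p.value)     # q = 3: a subgroup comparison
}
mean(replicate(2000, min(one_dataset())) < 0.05)  # noticeably larger than 0.05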

What problems are solved by a Bayesian viewpoint?

Representativeness of the sample is still important, but if we have knowledge of the data collection process, and background knowledge about the general population, we can build that knowledge into our choice of data model and prior. We can, at least partially, account for our uncertainty in representativeness.

Stability in time: A Bayesian analysis can give us reasonable estimates of model parameters for a model of the population at the given point in time, and can use probability to do this, even though there is no possibility to go back in time and make repeated measurements at the same time point. Frequentist sampling theory often confuses things by implicitly assuming time-independent values, though I should mention it is possible to explicitly include time in frequentist analyses.

Knowledge of a good functional form: Bayesian analysis does not rely on the concept of repeated sampling for its conception of a distribution. A Bayesian data distribution does not need to reproduce the actual unobserved histogram of values "out there" in the world in order to be accurate. What it does need to do is encode true facts about the world which make it sensitive to the questions of interest. See my example problem on orange juice, for instance.

Possible Alternative Analysis: In general, Bayesian analyses are rarely summarized by p values, so the idea that the p values themselves are random variables and we have a lot to choose from is less relevant. Furthermore, Bayesian analysis is always explicitly conditional on the model, and the model is generally something with some scientific content. One of the huge advantages of Bayesian models is that they leave the description of the data to the modeler in a very general way. So a Bayesian model essentially says: "if you believe my model for how data d arises, then the parameter values that are reasonable are a,b,c\ldots ". Most Frequentist results can be summarized by "if you believe the data arise by some kind of simple boring process, then you would be surprised to see my data". That's not at all the same thing!

 

Wait, you need to what??

2014 August 15
by Daniel Lakeland

According to this opinion piece in The Telegraph by a knighted peer and inventor of the bagless vacuum, the UK needs to "double the number of engineering graduates coming out of our universities each year, for the next twenty years".

Ok, let's see here. There must be at least 1 engineering school in the UK, and it must graduate at least 100 students a year, because otherwise how would it stay in business?

So let's calculate: \sum_{i=1}^{20} 100 \times 2^i. We could do all kinds of nice little integration techniques to approximate this, or we could just ask Maxima for the answer:

(%i1) sum(100*2^i,i,1,20);
(%o1) 209715000

Yep, so at a minimum the UK needs to graduate another 209 million engineers total, in a country that currently has only 63 million people and had a growth rate of 0.6% last year?

And what if we maybe assume that more than 100 people graduated from engineering school last year? Like maybe it was 1000 or even 10000? The whole thing is linear in the factor of 100, so we might need to graduate 2 billion or even 20 billion engineers over the next 20 years. Perhaps they're planning to puree them into Soylent Green to feed the masses?

Monty Python had something useful to say at this point:

Let's just hope these engineers have better numeracy than the peerage.

 

The future is now, or maybe next week or next year... what do we do about it?

2014 August 13
by Daniel Lakeland

This video was posted to my FB news feed by a friend who works in technology. It gives voice to a lot of thoughts I've been having recently on the future of the economy:

Here are some things I've been thinking about related to this:

It's easy for me to imagine a coming world, in my lifetime, in which the vast majority of the frequently repeated tasks currently performed by humans are performed by computer and robot. This includes:

  • Transporting things (by truck, train, within warehouses, picking, packing, shipping, etc)
  • Selling things (from what Amazon already does, to cars, houses, industrial supplies...)
  • Making decisions about which things are needed where, or how much of a thing to make or acquire (much of store management, middle management).
  • Cleaning things (janitors, maids, etc)
  • Diagnosing illness and treating common illness (let's say 80 to 90% of general practice medicine)
  • Manufacturing things, even or especially customized things (3D printers, general purpose CAD/CAM, robotic garment construction)
  • Basically, anything you do more than 1 day a week...

In that context, first of all, vast numbers of people will be unemployed or looking for alternative employment, and second of all, the only alternative employment will be doing things that are "one off", where the cost of teaching a machine to do it doesn't pay off. So humans will need to find a series of relatively random things to do. Things which you won't typically repeat more than a few times a month or year.

Furthermore, it now becomes impossible to rely on working at a "regular job" to provide a regular level of income to feed, clothe, house, educate, medicate, and otherwise sustain the basic needs of a human being. So, all else equal, the average wages humans will earn will go steadily down.

At the same time, cost of producing things will go down too. A decent pair of glasses might be something you can 3D print the frame of, and online order the lenses for, assembling it all yourself in less time than it takes to get a pair from a current Lens Crafters, at a price of say $3.00 instead of $50 or $250 (for designer frames), and choosing from a vast vast selection of designs. So, the bottom might drop out of the price of everyday goods. Note that you can already buy cheap eyeglasses online for around $10 if you don't care about relatively high quality lens material and coatings.

The question is, what will be the value of the ratio \bar P/ \bar S, that is, the average price (P) of a basket of important common goods that you consume in a typical year, divided by the average salary (S) in dollars per year. This is a dimensionless ratio and describes whether you are breaking even or not (>1 = losing, < 1 = winning). Both of these quantities are, in theory, going to be decreasing. But what matters is not just the new asymptotic value, 100 years from now or more, but also the dynamics of these changes during the next 10, 20, 50, or 100 years. It is entirely plausible that the bottom drops out of the market for labor quicker than it drops out of the market for goods, for example. The result is potentially worse than the Great Depression, during a period where nevertheless we have vast potential growth in real wealth through automation!

One problem area seems to be social and political technology. Conservative ideas about how the world should be, "work hard and get ahead" sorts of thoughts, could very well be highly counterproductive during these changes. The future may well be in "find stuff no-one has thought to automate yet, and do that for a little while until it isn't needed anymore", where "a little while" might be anywhere from an hour to a couple of months or a year, but probably won't be ten or twenty years.

We already see some of this in things like Etsy, where individuals produce one-off or short batches of custom goods. Not that Etsy is by itself changing the economy dramatically, but even the world of people buying used books from libraries, marking them up by a few pennies, selling them on Amazon with shipping and handling fees, and pocketing a few dollars per book is an example of humans going out and doing one-off tasks (namely combing through boxes of books for likely candidates). Even that, with its regular nature, is fairly automatable, and it only exists because we don't legally allow technology to scan and reproduce those books electronically (copyright, anyone?).

One political advance I see being needed is relatively simple and efficient redistribution of wealth. We're already redistributing a lot of wealth through tax systems, welfare systems, etc. But we could set some baseline standard and create a Universal Basic Income, or build it into our taxation system (see my previous post on optimal taxation with redistribution). The idea is to give everyone a simple cushion that makes the risky, entrepreneurial aspect of doing a wide variety of non-repeated things more workable, and to let people improve on their basic level of income through this entrepreneurial process, essentially running a vast number of small businesses utilizing the tools that a bot-based production system creates.

Like it or not, I just don't see 9-5 jobs having very much future beyond my generation, but we should probably embrace that idea and make it work for us as a society, not rebel against the inevitable. Doing so will require new social structures, new identities, new laws, and new ideas about work.

Unfortunately, it doesn't seem to work for Entsophy

2014 August 11
by Daniel Lakeland

Oh well, I got a laugh out of it at least.

 

 

Bras and Breast Cancer, and Anthropologists, and Bayes Theorem, Oh my!

2014 July 30
by Daniel Lakeland

Boobies. There I had to say it. This is a post about boobies, and math, and consulting with experts before making too many claims.

In this click-bait article, which I found somehow while searching Google News for unrelated topics, I see that some "Medical Anthropologists" are claiming that bras seem to cause breast cancer (not a new claim, their book came out in 1995, but their push against the scientific establishment has been reignited I guess). At least part of this conclusion seems to be based on the following observation from their PDF:

Dressed To Kill described our 1991-93 Bra and Breast Cancer Study, examining the bra wearing habits and attitudes of about 4,700 American women, nearly half of whom had had breast cancer. The study results showed that wearing a bra over 12 hours daily dramatically increases breast cancer incidence. Bra-free women were shown to have about the same incidence of breast cancer as men, while those who wear a bra 18-24 hours daily have over 100 times greater incidence of breast cancer than do bra-free women. This link was 3-4 times greater than that between cigarettes and lung cancer!

They further claim "bras are the leading cause of breast cancer."

That's pretty shocking data! I mean really? Now, according to http://seer.cancer.gov/statfacts/html/breast.html there are about 2 million women in the US living with breast cancer, and 12% overall will be diagnosed during their lives. There are around 150M women in the US overall. So P(BC) = 2/150 = O(0.01).

However, in their sample P(BC) = 0.5. That's about 50 times the background rate (OK, 37.5 if you do the math precisely).

Doesn't it maybe seem plausible that, in winnowing through the roughly 1% of women who are living with breast cancer and still alive, or even the 5 or 6 percent who have been diagnosed in the past and are still alive (figure that half of the women alive today who will at some point be diagnosed have already been diagnosed), maybe, just maybe, they could have introduced a bias in whether or not their sample wears bras?

So "looking for cancer patients causes us to find bra wearing women" is actually maybe the more likely story here? Perhaps "cancer patients who were non bra wearers were overwhelmingly more likely to have died from their breast cancer, and so we couldn't find any of them?" That's somehow not as reassuring to the non-bra-wearers in the audience I think.

Symbolically, writing NBra for not wearing a bra, and pretending BC and Bra are independent:

P(Alive \& BC \& NBra) = P(Alive | BC \& NBra) P(BC|NBra) P(NBra) = (1/100) P(Alive \& BC \& Bra) = (1/100) P(Alive | BC \& Bra) P(BC|Bra) P(Bra).

Cancelling the equal factors P(BC|NBra) = P(BC|Bra), we conclude

P(Alive | BC \& NBra) = (1/100) P(Alive | BC \& Bra) P(Bra)/(1-P(Bra)),

or: not wearing a bra reduces your chance of surviving by a factor of 10 or so if P(Bra) ~ 0.9. Put on those bras, ladies! The exact opposite of their conclusion!
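Plugging in the rough numbers used above (all of them illustrative assumptions, in R):

p_bc_pop <- 2 / 150                 # rough population rate of living with breast cancer
p_bc_sample <- 0.5                  # rate in their sample
p_bc_sample / p_bc_pop              # ~37.5: the sample is wildly enriched for cancer
p_bra <- 0.9                        # assumed fraction of women who wear bras
(1 / 100) * p_bra / (1 - p_bra)     # implied P(Alive | BC & NBra) / P(Alive | BC & Bra), about 0.09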

I personally suspect something else spurious in their research. But nothing in their PDF convinces me that they know what they are doing.

Note that Wikipedia has some discussion of their book.

 

Sunblock, Skin Cancer, Evidence Based Medicine, and the Surgeon General

2014 July 30
by Daniel Lakeland

A friend of mine posted a link to news articles about a recent Surgeon General warning about sunscreen:

http://www.cnn.com/2014/07/29/health/surgeon-general-skin-cancer/

Quoting from that article:

Skin cancer is on the rise, according to the American Cancer Society, with more cases diagnosed annually than breast, prostate, lung and colon cancer cases combined.

On Tuesday, the United States surgeon general issued a call to action to prevent the disease, calling it a major public health problem that requires immediate action. Nearly 5 million people are treated for skin cancer each year.

But let's dissect this a little bit. First of all, most skin cancer is unlikely to really hurt you. It's melanoma that is the real concern.

cancer.gov gives melanoma rates according to the following graph:

Melanoma diagnosis and death rates through time from seer.cancer.gov


As for melanoma itself, clearly the diagnosed cases are on the rise, but look at the deaths per year: totally flat. This is consistent with improved diagnosis procedures without any change in actual incidence in the population. Furthermore, looking deeper on the melanoma site, we see that 5-year survival rates have increased from 82% in 1975 to 93% in 2006; this is also consistent with earlier diagnosis (so that the 5-year rate measures from an earlier point relative to the initiation of the melanoma).

How about melanoma by state? Climate should be involved right? More sun exposure should mean more melanoma?

 

Melanoma rates by state from seer.cancer.gov


As you can see, states in the southwest have lower rates, and states in the northwest and northeast have higher rates. The cluster of southeastern states with high rates is interesting too.

Vitamin D is suspected to be involved in several health effects related to cancer, so overall exposure to sun may be beneficial. However, I think the data are also clear that intense exposure to sun from tanning beds, intentional tanning, and repeated sunburn is bad for your skin.

Sun exposure, like other exposures such as alcohol, may have nonlinear risk effects. At moderate exposures you are better off than at zero exposure (in Oregon rain or Maine winters) or heavy exposure (leading to repeated sunburn and high melanoma risk).

So, is the advice to slather on sunscreen good? I don't think the conclusion is so clear cut, but I don't have any in-depth study data to go on either. All I can tell you is that I'll continue to avoid getting sunburn by covering up and using zinc based sunblock when I'm outside for long periods, but I'll continue to get regular sun exposure without sunblock in mornings, evenings, and mid day during non-summer months.

Manuel Neuer vs Gonzalo Higuain (World Cup 2014 final)

2014 July 14
by Daniel Lakeland

A serious error in refereeing occurred in the 2014 World Cup final at the beginning of the second half, when German goalkeeper Neuer challenged Gonzalo Higuain in a reckless and dangerous manner, resulting in a collision that is absolutely shocking to watch. I suspect that Higuain only survived this collision because he was unaware that it would occur and did not tense up (his eyes are facing away from Neuer and on the ball the whole time). Videos abound of this event, such as this one (EDIT: takedown requests have made getting good video more difficult; much of what is available is people glorifying this event by replaying it over and over set to aggressive music). After this collision, a foul was awarded in Neuer's favor (i.e. as if committed by Higuain??) and Germany took possession of the ball.

The charitable interpretation of this is that the referee simply didn't see what happened, and therefore applied a heuristic that the goalkeeper gets the benefit of the doubt. The alternative is actual misconduct by the referee. Uninformed claptrap abounds on the internet, with people claiming that this was not a foul by Neuer against Higuain, or that it was not worthy of a red card against Neuer. Fortunately, there are rules to the game which can be looked up, such as the FIFA website's interpretation of the rules for fouls and penalties. On page 122 of this commentary, under "Serious Foul Play":

"A player is guilty of serious foul play if he uses excessive force or brutality against an opponent when challenging for the ball when it is in play.

A tackle that endangers the safety of an opponent must be sanctioned as serious foul play. Any player who lunges at an opponent in challenging for the ball from the front, from the side or from behind using one or both legs, with excessive force and endangering the safety of an opponent is guilty of serious foul play. ....

A player who is guilty of serious foul play should be sent off"

Neuer lunges at Higuain, using one leg, from the side (or behind Higuain's direction of view), using excessive force (at full tilt with knee raised) and certainly endangering the safety of Higuain. Some have said this was ruled as a foul on Higuain because he was impeding the progress of Neuer, which is nonsense. On page 118 of the above pdf:

"Impeding the progress of an opponent means moving into the path of the opponent to obstruct, block, slow down or force a change of direction by an opponent when the ball is not within playing distance of either player.

All players have a right to their position on the field of play, being in the way of an opponent is not the same as moving into the way of an opponent.

Shielding the ball is permitted. A player who places himself between an opponent and the ball for tactical reasons has not committed an offence as long as the ball is kept within playing distance and the player does not hold off the opponent with his arms or body. If the ball is within playing distance, the player may be fairly charged by an opponent."

Higuain was within playing distance of the ball at the time the foul was committed; Higuain had a right to his position; and Neuer could have avoided the collision (as opposed to Higuain impeding progress by stepping in front of Neuer to intentionally trip him, which is what the above commentary is about).

Was this reckless and endangering the safety of Higuain? Anyone who watches this collision has to see that Higuain's safety was endangered. It's actually shocking that he survives it without being paralyzed. I suspect that this is entirely due to Higuain being unaware of the brutal challenge that is coming, since his eyes are clearly on the ball, away from Neuer, the entire time. If he had tensed up when the impact occurred, his neck vertebrae would have been subjected to much higher forces (as opposed to simply bending), and that might have resulted in permanent paralysis. That isn't even to say anything about the possibility of concussion from the impact.

My prediction is that this World Cup will result in rule changes regarding the treatment of head injuries, it was constantly on the lips of commentators, fans, and others throughout the tournament.

As far as I am concerned, without their primary goalie (ie. playing 10 men and with a substitute for the goalie), Germany would have lost this match badly, and since the rules clearly show that the goalie should have left the game at 56 minutes or so, Argentina are the proper world champions.

 

Attributing deaths to Alcohol

2014 June 27
by Daniel Lakeland

This article from the CDC about alcohol and its role in deaths was picked up on the internet at various places. I took a look at the link and had some comments.

The article purports to show that alcohol played a role in about 1/10 of all deaths among "working age" people (ages 20-64). This is a pretty big percentage, but we should remember that most people don't die until after age 64. In fact, according to the CDC life tables, about 85% of people survive to 65, and 50% of people live to more than 84 years of age. So if alcohol plays a role in 10% of the 15% of deaths that occur before age 65, then those deaths amount to about 1.5% of all deaths in the US.

Since most people are living so long, the kinds of things that do kill younger people are likely to be things that are not very age specific, or which are generally quite harmful, like car accidents, and crime. It's pretty clear that excessive use of alcohol is problematic, but just because it's involved in 10% of all deaths in this population doesn't mean it's involved in 10% of all deaths in the US, most of which occur in people over 64 years of age, and are related to things like cancer, heart disease, pneumonia, etc.

How do they estimate the role that alcohol plays? They plug things into a model developed by people who study the harmful effects of alcohol. The model has an online application available. Assumptions in this model are going to play a big role in the outcomes. For example, they assume that 42% of fire injuries and 47% of homicides are "attributable to alcohol" in this population. I don't know where they get those kinds of numbers, but even if those are decent point estimates, the uncertainty in the attributable fraction would seem to be significant. Similarly suicide is somehow 23% attributable to alcohol.

This kind of result is pretty questionable. Clearly a causal attribution is being made. Implicitly this means that somehow "without alcohol" these deaths wouldn't have occurred. But suppose you have, for example, a depressed Iraq war veteran with post-traumatic stress disorder, a small child (which is a big stress), and a wife who is considering leaving him (my sister, who is a Psych NP, used to see about 10 cases a day like this at a VA hospital). Suppose that this person, who has been threatening to commit suicide for a few years, one day drinks 500 ml of 40% alcohol spirits (about 10 or 12 drinks) over a period of about 4 or 5 hours, and then shoots himself. Is this "attributable" to alcohol, or is it "attributable" to traumatic combat stress, social safety net issues, as well as perhaps genetic risk factors which affect personality and make this person both more likely to join the military, and more likely to contemplate suicide when faced with serious life problems? It's a pretty difficult and complex thing to tease out.

If you want to make "causal" claims, it has to somehow be that with alcohol the death occurs, and without alcohol the death wouldn't have occurred. Pretty clearly, when a 19 year old fraternity pledge drinks until he passes out and dies of alcohol poisoning, the cause is plainly attributable "to alcohol" (as well as to poor social norms that encourage excessive drinking in that context). But what is the "intervention" that would involve alcohol and would have prevented these events? If it's not possible to intervene to control the variable, and thereby control the outcome, then we can't really have "causality".

It's also clear that interventions don't leave "all else equal". Clearly, in the US, when we tried Prohibition, violence went way up. We couldn't just "eliminate alcohol" while keeping everything else about society the same. So in that context you might say that, thanks to the fact that we have alcohol, overall deaths are much lower than what they would be with legally enforced prohibition. No, I wouldn't say that either, because the whole concept is just stupid. Prohibition was predictably a bad idea from the start, but it was motivated by the "all else equal" fallacy.

I personally think of these CDC type reports as largely politically motivated. Some people study the harmful effects of alcohol (or guns, another political topic the CDC has published on), and by publishing this kind of "attribution" article they can justify further funding and continue their research, and they can also bring some publicity to what is undoubtedly a real problem. But typically the way in which these things are stated winds up over-generalizing and turning the politically charged issue into a ruckus.

In contrast to that approach, consider this report from a group that studies alcohol issues in Canada and has published "low risk" guidelines. The estimate is that if everyone were to adopt the "low risk" guidelines, deaths in Canada would decrease by 4600 per year. This is a concrete, causal sort of intervention and most likely achievable in large part without major changes to "all else." How big is that 4600 per year decrease? Well, this website lists about 250,000 deaths in Canada per year, so the idea is that by everyone adopting the low risk guidelines, deaths would be reduced overall by about 2%. It's interesting how my 2% calculation here is about the same order of magnitude as the 1.5% calculated back in the first paragraph or so. Implicitly then, the real issue is heavy, or binge, drinking, since that is what the "low risk" guidelines rule out.
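Just restating those two back-of-the-envelope numbers as arithmetic:

0.10 * 0.15      # CDC figures: 10% of the ~15% of US deaths occurring before 65, i.e. ~1.5% of all deaths
4600 / 250000    # Canadian estimate: ~1.8% of deaths per year, call it 2%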

The Canadian report takes the point of view that their guidelines are set so that if you max out the guidelines (for men, 15 drinks per week, no more than 3 in 24 hours most days, no more than 4 in 24 hours ever, consumed in a safe environment that rules out drunk driving, etc.), the net effect on deaths vs complete abstinence from alcohol would be zero. How can that be? Wouldn't the fewest deaths happen with zero alcohol consumption? It's been known for a long time that the relationship between alcohol consumption and mortality has a sort of U-shaped profile. Those who drink a little overall live longer than either heavy drinkers or complete abstainers (on average). Mostly the effect is related to reduced cardiovascular disease related problems, but there may be other effects.

So, to that, I say if you are not in a high risk category for alcohol dependence (such as the offspring of an alcoholic or someone who has suffered from alcohol related problems in the past) raise one or maybe two glasses of wine, beer, or well made cocktails tonight with some friends, and be thankful for a substance that is beneficial in moderation. Keep to those low risk guidelines, and don't drive anywhere until the effects are fully worn off.

Oh, and don't believe everything you read.

Himmicanes and Stanislaws

2014 June 23
by Daniel Lakeland

Stanislaw Ulam was a mathematician who more or less invented the use of Monte Carlo computations for physical problems, and he's the namesake of the Stan Bayesian modeling system.

Following up on my previous analysis of Himmicanes vs Hurricanes, I decided to go fully Bayesian on this problem (imagine the voice of Marsellus Wallace from Pulp Fiction). Hurricane damage is of course a traditional Civil Engineering interest, and if the name of the storm has a lot to do with how deadly it is, for social reasons, then this seems like information we really want to know.

The big issue in the Hurricane analysis is measurement uncertainty. For example in examining Hurricane Isaac in my previous analysis, (before I had gotten access to the PNAS paper), I decided that their estimate of 5 deaths was somehow wrong, and put the 34 deaths from Wikipedia in. Then, after reading the PNAS article I realized they were only counting US deaths, so I looked more carefully at the Wikipedia article and counted 9 deaths mentioned in news stories. So clearly even the direct deaths have errors and uncertainty, and that isn't to say anything about the indirect deaths, such as increased car accidents in regions outside the evacuation, or heart attacks or strokes caused by stress, or even potentially reductions in overall deaths due to lower traffic levels during hurricanes. It's a confusing and difficult to measure thing. And if it's hard to measure deaths, think of how hard it is to measure dollar damages. First of all, there is some kind of normalization procedure used by the original authors, second of all, a car crushed by a fallen tree is pretty clearly hurricane damage, but what about a car that gets "lightly" flooded but this causes electrical damage which leads ultimately to the car failing 6 months later and being scrapped because the repair is too costly? We should take measures such as damage and deaths as having significant errors incorporated into their measurement.

With that in mind, I decided to re-analyze the data using a Bayesian errors-in-variables model on top of my basic model of "involvement", using D/(C v^2) as a measure of the number of people potentially affected by the storm (where D is damage, C is the capitalization of the region in dollars per person, and v is the velocity of the storm winds).

To accomplish this I set up parameters that represented actual values of D, C, v for each storm, and linked them to the measured values through probability distributions; I then linked these unobserved "actual" values together through a regression line in which the masculinity and femininity of the name were allowed to play a role after 1979 (when both male and female names were in use). Before 1979 we simply treat the data as a separate era, with its own slope and intercept term for that entire era.

The model is perhaps best described by the code itself:
stanmodl <- "
data {
  int Nhur;
  real<lower=0> NDAM[Nhur];
  real alldeaths[Nhur];
  real<lower=0> Category[Nhur];
  real Year[Nhur];
  real MasFem[Nhur];
}

parameters {
  real<lower=0> Damage[Nhur];
  real actualdeathsnorm[Nhur]; /* the actual deaths, normalized by 150, unobserved */
  real<lower=0> v[Nhur];
  real<lower=0> Cappercapita[Nhur];
  real<lower=0> ndsig;
  real<lower=0> involvement[Nhur];
  real<lower=0> invsd;
  real<lower=0> slope;
  real<lower=0> intercept;
  real MFslope;
  real MFint;
  real MasFemAct[Nhur];
  real predsd;
  real Pre1979Int;
  real Pre1979Slope;
  real<lower=0> avgcappercap;
}

/* dollar amounts will be converted consistently to billions */
model {
  ndsig ~ exponential(1/.1);
  invsd ~ exponential(1/.5);
  MFslope ~ normal(0.05, 10.0);
  MFint ~ normal(0.05, 1.0);
  intercept ~ normal(0, 1);
  slope ~ normal(0, 10);
  predsd ~ gamma(1.3, 1/4.0);
  Pre1979Int ~ normal(0, 1);
  Pre1979Slope ~ normal(0, 10);
  avgcappercap ~ lognormal(log(5.0/1000.0), log(1.5));
  for (i in 1:Nhur) {
    Damage[i] ~ normal(NDAM[i]/1000.0, NDAM[i]/1000.0 * ndsig);
    Cappercapita[i] ~ normal(avgcappercap, .25*avgcappercap);
    v[i] ~ normal((25.5+8.2*Category[i])/(25.5+8.2), 0.15); /* normalized wind velocity, with error */
    MasFemAct[i] ~ normal((MasFem[i]-6.0)/5.0, .15);
    involvement[i] ~ normal(Damage[i]/(Cappercapita[i]*v[i]*v[i])/2000.0, invsd);
    actualdeathsnorm[i] ~ normal(alldeaths[i]/150, .1*alldeaths[i]/150 + 5.0/150.0);
    if (Year[i] < 1979) {
      actualdeathsnorm[i] ~ normal(intercept + Pre1979Int + (slope + Pre1979Slope) * involvement[i], predsd);
    } else {
      actualdeathsnorm[i] ~ normal(intercept + MFint*MasFemAct[i] + (slope + MasFemAct[i]*MFslope) * involvement[i], predsd);
    }
  }
}"

out <- stan(model_code=stanmodl,data=list(Nhur=NROW(Data), NDAM=Data$NDAM, alldeaths=Data$alldeaths, Category=as.numeric(Data$Category), Year = Data$Year, MasFem=Data$MasFem),chains=1,iter=800)

I run 800 samples (only 400 of which are post-warm-up) on 1 chain because the model has a lot of parameters (one for each measurement, so a couple hundred) and it takes a while to run. This run takes about 5 minutes on my desktop machine. If I were going for publication I'd probably do 3000 or more samples in two separate chains in parallel (I have two cores). But the basic message is already visible in this smaller sample set:
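For reference, here is a sketch of how a plot like the one described next can be pulled out of the fit; it assumes the out and Data objects above, and the plotting details are my reconstruction rather than the original code.

post <- rstan::extract(out)                            # posterior draws from the stanfit object
involvement_mean <- colMeans(post$involvement)         # posterior mean involvement, one value per storm
deaths_mean <- 150 * colMeans(post$actualdeathsnorm)   # undo the /150 normalization
plot(involvement_mean, deaths_mean,
     col = ifelse(Data$Year < 1979, "grey50", "steelblue"),
     xlab = "posterior mean involvement D/(C v^2)",
     ylab = "posterior mean deaths")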

Here we're plotting the posterior mean values of the "actual deaths" vs the "actual involvement" D/(C v^2). As you can see, after 1979 the posterior values line up along the same line regardless of the masculinity / femininity of the name. Before 1979, the slope of the line is larger, indicating that a higher fraction of the people involved tended to be killed. This can most likely be attributed to much poorer forecasting and communications in the 1950's and 1960's. For example, the wiki article on Hurricane Camille mentions many people refusing to evacuate, including even some jail prisoners who thought this category 5 hurricane would be no big deal.

Why do these data fall right on the line? In the model we have "wiggle room" for all of the actual measurements. This includes some room for variability in capitalization per capita (think "wealth of the area affected") as well as the actual number of deaths and the actual wind velocity at landfall. Put all together, there is enough prior uncertainty in the measurements that we can find the posterior average values of the estimates directly along the regression line that we've specified. The prior we specified for the model error (predsd) is relatively wide, but the Bayesian machinery is telling us the more important source of uncertainty is in the measurements; somehow it seems to pick a very small predictive sd, O(0.01), even under alternative priors (this is actually somewhat suspicious, I suppose, and would be worth following up if you were trying to publish this non-news). So all the estimates of the actual deaths and actual involvement are pulled towards the regression line we've specified. We've plotted, with low alpha, some grey dots that show alternative possibilities from our samples.

How about the masculinity and femininity effects?

Masculinity and femininity effects: the effect of femininity on slope and intercept.

As you can see, these values are essentially 0, though there is plenty of uncertainty left to allow for the possibility of small effects. We don't know if there is some small effect, but we sure as heck shouldn't publish a result claiming an important effect based on this information!