# Qualitative, quantitative, and baroque

There are three major levels of analysis in mathematical modeling, as the title of this post suggests. A qualitative model is something like my taxation curve, or the model of a short building as a single-degree-of-freedom oscillator. The point of a qualitative model is that it captures some important aspects of the problem; in particular, it should account for “scaling” behavior (normalizing variables by typical values, producing outputs that grow at the same order as the real outputs, and so on).
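To make the “scaling” idea concrete, here is a minimal sketch (with made-up numbers) showing how normalizing variables by their typical values collapses different cases of a simple decaying system onto one dimensionless curve:

```python
# Sketch of "scaling": two hypothetical decaying systems
# x(t) = x0 * exp(-t / tau) look different in raw units, but after
# normalizing x by x0 and t by tau they trace the same curve exp(-s).
import math

def response(t, x0, tau):
    """Exponential decay with initial value x0 and time scale tau."""
    return x0 * math.exp(-t / tau)

cases = [(2.0, 0.5), (10.0, 3.0)]   # (x0, tau) pairs, made-up values

# Sample each case at the same *dimensionless* times s = t / tau.
for s in [0.0, 0.5, 1.0, 2.0]:
    normalized = [response(s * tau, x0, tau) / x0 for x0, tau in cases]
    # Both cases give the same normalized value, exp(-s).
```

The payoff of a qualitative model is exactly this kind of collapse: once the variables are scaled properly, very different-looking problems turn out to share one underlying behavior.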

A quantitative model is a somewhat different thing. To make predictions that agree closely with reality, a model necessarily has to be “fit to data”. For dynamic models, this means the coefficients are determined by some procedure that minimizes prediction error on experimental cases, or that reproduces important features of observational data. Newton’s gravitational constant, the permittivity of free space, and the fine-structure constant are all essentially numbers determined by assuming the model is correct and finding the value that makes its predictions fit the most sensitive experiments. For a messier model, like a statistical model in the social sciences, the slopes and intercepts of linear fits, or the coefficients of splines, are examples of “fitting to data”. Only by reference to experiment or observation can a theory be considered quantitatively accurate.
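As a minimal sketch of what “fitting to data” means, here is a hypothetical linear model y = a·x + b fit by ordinary least squares (the synthetic observations and their noise are made up for illustration):

```python
# "Fitting to data": choose model coefficients that minimize prediction
# error on observations. Here the closed-form least-squares solution
# for a line y = a*x + b.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Synthetic "observations" near y = 2x + 1, with a little noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)   # recovers roughly a ≈ 2, b ≈ 1
```

Whether the fitted number is a physical constant or the slope of a regression, the logic is the same: assume the model form, then pick the coefficients that best reproduce the data.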

Finally, there are the big, mean, baroque models so in fashion now that we have big computers: molecular dynamics simulations of millions of atoms, general circulation models of the weather and their extrapolation to long-term climate change, materials damage models of airframe structures, highway network interaction models, smart grids, and linear finite element analysis of entire bridges, possibly including detail down to the individual bolt.

Big models are sexy: they cost a lot of money, bring in big grants, and make you feel like you’re really calculating something. But they ultimately have problems. Because they cost so much and require so many details, they can fool you into thinking that you’ve captured everything there is, and are therefore necessarily accurate. Unfortunately, the complexity of these models can hide sensitivities to missing data. A few bolts can cause the collapse of an entire structure; if you spend your effort capturing lots of relatively unimportant details but forget to build a good sub-model of those bolts, the fancy calculation is worthless. The time is better spent identifying the critical pieces. A simpler, qualitatively correct model that has been quantitatively fit to the critical components will always beat a super-detailed model built from many generic sub-pieces.

Current funding in the sciences seems to emphasize bigger and bigger computational models. I think we do this at our peril, because it encourages a false sense of having “solved” problems when all we’ve really done is “calculated” one particular realization of a problem.

The research group I work with does a lot of work on adding uncertainty to big computational models, because the one thing you can be sure of is that there is zero probability reality will be exactly like the particular model you calculate. Still, there is often a temptation to start with a complex model and add uncertainty to that. I prefer to start with a complex idea, make a simple model, and add uncertainty to *that*. Einstein supposedly said that a model should be as simple as possible, but no simpler. The trend these days is to maximize complexity until the biggest available computers can just barely calculate the results in a reasonable time. I hope this fad passes.
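Here is a minimal sketch of that approach: take the simple single-degree-of-freedom “short building” model from earlier, treat its stiffness as uncertain, and propagate that uncertainty by Monte Carlo sampling. The mass, stiffness, and 10% spread are made-up numbers for illustration:

```python
# Adding uncertainty to a *simple* model: propagate an uncertain
# stiffness through a single-degree-of-freedom oscillator via Monte
# Carlo sampling, and summarize the resulting frequency distribution.
import math
import random

random.seed(0)

def natural_frequency_hz(k, m):
    """Natural frequency of a SDOF oscillator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m = 1.0e5                         # mass [kg], assumed known
# Stiffness [N/m] with ~10% uncertainty (illustrative values).
k_samples = [random.gauss(4.0e7, 4.0e6) for _ in range(10_000)]
freqs = [natural_frequency_hz(k, m) for k in k_samples if k > 0]

mean_f = sum(freqs) / len(freqs)
spread = (sum((f - mean_f) ** 2 for f in freqs) / len(freqs)) ** 0.5
# mean_f sits near sqrt(k/m)/(2*pi) ≈ 3.2 Hz, with a nonzero spread
# reflecting the stiffness uncertainty.
```

The whole uncertainty analysis fits in a dozen lines because the underlying model is simple; wrapping the same Monte Carlo loop around a million-atom simulation would be prohibitively expensive.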
