Friday, August 18, 2017

Led by:  Samuel Western
Description:  Computer modeling has played a critical role in American policy since the Manhattan Project. The concept quickly leapt from military to scientific and economic projects. The Limits to Growth, published in 1972, relied on computer modeling. Yet scientists discovered a problem with Limits: the models it employed were written by humans and thus subject to subjective interpretation. Since the publication of The Limits to Growth, the world’s population has nearly doubled. We will rely on computer modeling to make sure economies of scarcity don’t become all too real.  Yet we keep hearing of snafus with industry or policy makers who either rely too heavily on models or ignore them.

1994: “I believe nicotine is not addictive.” (Tobacco executives testifying before Congress; R.J. Reynolds had used computer modeling to show additives could make nicotine even more addictive.)

2008: Arjun Murti of Goldman Sachs, with access to superior computer modeling, predicts that oil will soon hit $200 a barrel. Within eight months it drops to $39.43.

2009: “In a nutshell, theoretical models cannot explain what we observe in the geological record. There appears to be something fundamentally wrong with the way temperature and carbon are linked in climate models.” Gerald Dickens, Rice University.

2009: Basic finance, Mr. Greenspan? “The current crisis has demonstrated that neither bank regulators, nor anyone else, can consistently and accurately forecast…if the financial system as a whole will seize up.”

Is computer modeling getting more or less accurate?  Is it gaining in importance?  Will errors carry increasingly greater consequences? Or are we facing an old problem merely in need of adjustment? Moi? I’m a writer specializing in economic history. I’m particularly interested in big economic trends and behavioral economics. As scientific solutions become, by necessity, increasingly a part of our social milieu, I’m curious how ideas succeed or fail in influencing public policy.

Notes:
As the global population grows and solutions become increasingly complex, so do the computer models society uses to reduce risk.  These models have now reached levels of complexity so acute that only modern augurs – the elect – can understand them.  This exclusivity can lead to problems, including a lack of transparency.

Models are just grand projections, of a kind our cognitive (and occasionally neurotic) selves have historically been very fond of.  People have read entrails or the placement of the stars to predict their futures. Computer modeling is just an extension of that.

We are, rightly or wrongly, relying more and more on statistical computer models.  This demands a more complex level of interpretation. Models serve a multiplicity of purposes, not least of which is helping us understand what we don’t understand.
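As a toy illustration (my own sketch, not from the seminar materials): even the simplest statistical model’s long-run output is acutely sensitive to its assumed inputs, which is one reason interpretation matters so much. Here, two projections that differ by half a percentage point in assumed annual growth diverge by billions of people over fifty years. The numbers are illustrative only.

```python
# Toy sketch: long-run projections are highly sensitive to small
# changes in assumed parameters. All figures are illustrative.

def project(pop0, growth_rate, years):
    """Project a starting population forward with compound growth."""
    return pop0 * (1 + growth_rate) ** years

base = 3.8e9  # roughly the world population in 1972

low = project(base, 0.015, 50)   # assumes 1.5% annual growth
high = project(base, 0.020, 50)  # assumes 2.0% annual growth

print(f"1.5% growth, 50 years: {low / 1e9:.1f} billion")
print(f"2.0% growth, 50 years: {high / 1e9:.1f} billion")
print(f"Gap from a half-point difference: {(high - low) / 1e9:.1f} billion")
```

A half-point disagreement over a single input swamps the precision of everything else in the model: the output looks exact, but the assumptions drive it.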

We could use more transparency in models that have a systemic impact, if only to a select few, like the SEC or some sort of watchdog that can question over-reliance on a given model.  Perhaps a website that describes how various models work? Such a site would describe each model’s intellectual heritage and list its major funders. It might also place the model in its social setting.

A growing world population gives the old bromide, “there are no models of human behavior,” a little more credence.

Challenges:

  1. Cultural disposition to look for a single answer. How do we challenge this?
  2. What is the rightful place of statistical models in science?
  3. Who is interpreting the interpreters? Do we indeed need watchdogs? How do we inject more transparency into a more cryptic but important part of science? How important is it that the public understand highly specialized models?
  4. Do we need to do more basic research, both in the lab and on ground (or space or the ocean) before we start writing models?
  5. There are very few incentives for scientists to help the public understand.

Getting the public to understand (and scientists to be more humble and honest about):

  1. The difference between the explanatory and predictive purposes/functions of a model.
  2. Not all models are created equal.
  3. A lack of modesty undermines the authority of climate science.
  4. You can demonstrate global warming without the prediction models!
  5. Issues of consequence: if we fail to heed a model, what are the consequences? What are the issues of accountability?
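To make item 1 concrete, here is a toy sketch (my own, with hypothetical data): a model can “explain” its data perfectly while predicting nothing new, and a simpler structural model can explain imperfectly yet predict well.

```python
import random

random.seed(0)

# Hypothetical data from a known process: y = 2x + noise.
def sample(n):
    return [(x, 2 * x + random.gauss(0, 1.0))
            for x in (random.uniform(0, 10) for _ in range(n))]

train = sample(30)
test = sample(30)

def mse(model, data):
    """Mean squared error of a model over a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# An "explanatory" lookup model memorizes the training data: its error
# on the points it has already seen is exactly zero ...
memorized = dict(train)
mean_y = sum(y for _, y in train) / len(train)

def lookup(x):
    # ... but on unseen points it can only fall back to the average.
    return memorized.get(x, mean_y)

# A simple structural model: an ordinary least-squares line.
mean_x = sum(x for x, _ in train) / len(train)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x

def line(x):
    return slope * x + intercept

print(f"lookup model  train error: {mse(lookup, train):.2f}, test error: {mse(lookup, test):.2f}")
print(f"line model    train error: {mse(line, train):.2f}, test error: {mse(line, test):.2f}")
```

The memorizing model explains its own data perfectly and predicts poorly; the line explains imperfectly and predicts well. Explanatory fit is not, by itself, evidence of predictive skill.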

Even if your models were perfect, we have agencies that don’t know how to use the data.
