I just started a book, The Future of Everything: The Science of Prediction, by the mathematician qua polymath David Orrell. Orrell was one of the guys who helped cut the chaos effect in weather (butterfly farts, Hillary wins the election) down to almost nothing, instead attributing the large discrepancies (to say the least) in weather prediction to model error - basically the gap between the pretty math model and the real-world phenomenon.
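To make the distinction concrete, here's a toy Python sketch (mine, not Orrell's - and the logistic map stands in for a weather model only as an illustration): "reality" runs a chaotic map with one parameter, our "model" runs the same map with a slightly wrong parameter. Even starting from the exact same initial condition - so no butterfly, no chaos excuse - the forecast drifts apart, because the model itself is off.

```python
def logistic(x, r):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1 - x)

def forecast_gap(x0=0.5, r_true=3.9, r_model=3.8, steps=20):
    """Run 'reality' (r_true) and a slightly mis-specified 'model'
    (r_model) from the SAME initial condition, and track how far the
    model's forecast drifts from reality at each step."""
    truth = model = x0  # identical start: zero initial-condition error
    gaps = []
    for _ in range(steps):
        truth = logistic(truth, r_true)
        model = logistic(model, r_model)
        gaps.append(abs(truth - model))
    return gaps
```

The gap starts tiny and blows up within a handful of steps - pure model error, with nothing to pin on sensitive dependence on initial conditions.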
The book, thus far at least, has been really interesting, well-written, and erudite - though the opening hovers over some historical facts that aren't completely novel, but nonetheless. Dr. Orrell's main interests in this book revolve around prediction in wealth, weather, and health, tracing the history of prediction from around 1500 BC up through the methods developed by Newton and Kepler that remain in practice to this day (in complicated, derivative forms, I imagine). While it's not as exciting as the opening of a new Bape store (heh), a couple of chunks have popped out thus far, and I should like to quote them and offer some commentary:
“Systems where predictions are of interest - in biology, economics, or climate change - are either alive, influenced by life, or have a similar level of complexity to living beings (1). They are difficult to predict not because of simple technical reasons, which can be overcome with faster computers or better data, but because they have evolved to be that way (2). We pinpoint the causes of prediction error (3).” (italics mfjoe)
1. This is particularly interesting now, when we finally have the ability to recognize the complexity of such systems, and have a pretty big new one, the Interweb, to play with. As an aside, it is also telling of our lack of predictive ability that we can rarely predict how complex systems are going to end up being, per se, let alone how they will be modeled in toto and in situ.
2. I haven’t read far into the book. But the way I look at it is that we evolved to predict certain things very well, and most other things horribly at best. It’s not the system under the scope, then, but the one peering through it that causes the error. E.g., I still argue with (seemingly) intelligent people about the ability to beat the house at a casino. As the great thinker Nassim Nicholas Taleb pointed out, a casino is the least random, least chancy destination on the planet. Everything is completely measured and the house always wins (even if your Uncle Charlie won $10K that one time). However, the games played mimic those we evolved with, and we expect them to be path-dependent (I think I got that right). Hence: ass handed to you. You aren’t any more likely to win after losing a string of times in a casino. I am more likely to fall asleep as the hours I haven’t slept keep passing… that’s the gist.
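To put a number on that, here's a quick Python sketch (my own toy, not anything from the book): simulate a long run of independent bets with a small house edge, then compare the overall win rate with the win rate immediately after a three-loss streak. If the "due for a win" intuition were right, the second number would be higher. It isn't - the bets don't remember.

```python
import random

def streak_check(p_win=0.49, trials=200_000, streak=3, seed=42):
    """Simulate independent bets and compare the overall win rate
    with the win rate right after a losing streak of `streak` bets."""
    rng = random.Random(seed)
    results = [rng.random() < p_win for _ in range(trials)]

    wins_after, total_after = 0, 0
    for i in range(streak, trials):
        if not any(results[i - streak:i]):  # previous `streak` bets all lost
            total_after += 1
            wins_after += results[i]

    overall_rate = sum(results) / trials
    after_streak_rate = wins_after / total_after
    return overall_rate, after_streak_rate
```

Both rates come out around 0.49, streak or no streak - which is the whole point.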
3. This comes from Popper. Since we have no real ability above chance to know or predict things, why not look for systematic causes of error, so we can start turning those unknown unknowns into known unknowns (thanks, Rummy - best thing that ever came out of your mouth).
“One type of prediction relates to overall function and can be used to make general warnings. The other type involves specific forecasts about the future. Mathematical models are better at the first than they are at the second. (1)”
1. An example of the first type that I’m fairly sure is largely accurate: any given human, a biological system of great complexity, will expire, sometime (for the various reasons, consult Aubrey de Grey - also a great offerer of the second type of prediction, most of which will turn out to be very, very far from reality; one could call these Methuselah Mouse Model Errors).
(Cop it from Amazon)