## Are models "arbitrary"?


**Jan. 27th, 2008 | 06:28 pm**

**posted by:** **beachofdreams** in **exobio**

Are models arbitrary? A lot of people think they are, especially when it comes to climate science. I've sometimes heard it said that models are built to give answers, *arbitrarily pre-determined in advance*.

A *computer* model is simply a model formulated (or executed) by a computer. In theory, you could write the entire model out on paper -- although it might take you a couple of years and a few thousand stacks of paper! Why? Because, depending on the number of operations and functions involved, only a computer can run the simulation in a reasonable time frame.

This is somewhat of a simplification, because computers can also help to formulate the very simulations they are "running". If, for instance, a program is meant to infer its next steps from the results of preliminary calculations that only it can perform, then it could be said to be planning its own operations, derived logically from the previous steps. It's programmed to do this, of course, but it's never programmed with the specific steps in advance; it actually has to reason its way through the simulation, and there isn't any way for the programmer or experimenter to foresee exactly what the simulation will come up with. Computers, then, are just extensions of the model builder -- but they can think and build faster.

And it'd be nonsense to think of a computer operation (or the program that makes it perform that operation) as "arbitrary", because the computer is simply taught how to perform operations that are rooted in *firm*, non-computer-derived axioms, like the *fundamental counting principle* or some of the rules of logic (themselves based upon their own axioms, and so on). There are even well-justified axioms for programming a computer to perform operations. Computers don't perform their operations arbitrarily. Nor is programming arbitrary, any more than building an *abacus* or programming a computer to play chess is arbitrary.

So much, then, for the arbitrariness of computers themselves. But what about the models -- the simulations -- that they are running? These aren't arbitrary either. They too follow from well-established axioms. For example, in my journal title, I display one of my favourite formal logical arguments. It's a model of a denial of a classic *either/or but not both* case -- the letters code for a certain operation in an argument that says, basically, that either A or B is true, but not both. The extra '~' at the beginning models the denial of the argument. The classic fallacy of the "false dilemma" takes this form, although the fallacy lies in the supposed truth of the statement *"but not both"*. The model then allows you to infer from it. If "A" is true, then it's automatically the case that "B" isn't true. The original model wasn't built with the inference in mind -- the inference is a *product* of the model. There are actually several ways to model the same argument, some more sophisticated than others. But the point to remember is that, while there are several ways to do it, there is still a limit to how many ways it can be done, and it has to be done right, given the rules of whatever system you are using. If it had been arbitrarily done up, the letters would be scrambled, or arranged in a way that doesn't actually model any logic but instead models something else.
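As a toy illustration (my own sketch, not the exact notation from the journal title), the *either/or but not both* claim and the inference it licenses can be written out and checked mechanically:

```python
# Exclusive disjunction: (A or B) and not (A and B) -- "either/or but not both".
def xor(a, b):
    return (a or b) and not (a and b)

# The inference the model yields: whenever the XOR claim holds and A is true,
# B must be false. Check it over every possible truth assignment.
for a in (True, False):
    for b in (True, False):
        if xor(a, b) and a:
            assert not b  # never fails: the inference is a product of the model
```

The assertion can never fire, which is the point: the "answer" (if A, then not B) wasn't programmed in -- it falls out of the structure of the model.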

For another example, while it may seem like a large feat, there is a relatively simple way to predict the temperatures of planets using a basic physical model that doesn't even need to be run by computer.

4πR² · σT⁴ = πR² · (L / 4πd²) · (1 − a)

Calculating the temperature of the planet must take into account several important factors, the most important of which are encompassed by the equation above. The left side of the equation specifies how much energy the surface is radiating into space, and is calculated by multiplying the surface area of the planet (4πR²) by the amount of energy that radiates from every square metre of that area (σT⁴, in watts per square metre). This, in turn, must equal the right side of the equation, which specifies how much energy the planet is absorbing. It is calculated by taking the cross-section of the planet *exposed to the sun* (the absorbing face), expressed by the area of a circle (πR²), and multiplying it by the *brightness* of the sun's light -- its luminosity L spread over a sphere at the planet's distance d from the sun, also in watts per square metre -- expressed by L/4πd². This is then multiplied by the fraction of the light that is absorbed rather than reflected, expressed as (1 − a), where a is the reflected fraction (also known as the albedo, although there are various ways of defining this).

So, the energy emitted by a planet must balance the energy that falls onto it minus the fraction reflected -- although the amount of energy being thrown around can change, and ergo so can the temperature.

Deriving the temperature means that you simply have to re-arrange the equation so that *T* is on one side and everything else is on the other. It's a simple operation that isn't arbitrary but can seem counter-intuitive, and is certainly contingent upon certain axioms of mathematics.
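Rearranged, the balance gives T = (L·(1 − a) / (16π·σ·d²))^¼. Here's a quick sketch of that calculation in Python, using rough published values for the solar luminosity and planetary albedos (those numbers are my own assumptions, not from this post):

```python
# Rearranging 4πR²·σT⁴ = πR²·(L/4πd²)·(1−a) for T:
#   T = (L·(1−a) / (16π·σ·d²)) ** 0.25      (note R cancels out)
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m²·K⁴)
L_SUN = 3.828e26   # solar luminosity, W (rough published value)

def effective_temperature(d_metres, albedo):
    """Predicted no-greenhouse temperature at distance d with albedo a."""
    return (L_SUN * (1 - albedo) / (16 * math.pi * SIGMA * d_metres**2)) ** 0.25

# Rough orbital distances and Bond albedos (assumed values):
t_earth = effective_temperature(1.496e11, 0.31)   # roughly 254 K
t_venus = effective_temperature(1.082e11, 0.76)   # roughly 229 K
```

Note that the planet's radius R cancels entirely, which is why the prediction needs only the distance and the albedo -- and note how far the Venus prediction falls below its observed temperature of roughly 735 K, which is the discrepancy discussed below.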

And does the model match up with observations? For the most part, it does. It predicts most of the solar system's planetary temperatures quite well, although it tends to underestimate by a small amount. For Venus, it underestimates by an extremely large margin. Venus should be only slightly warmer than Earth, given its closer proximity to the sun. In fact, however, Venus is over 3 times as hot. Why is this? Did the model go wrong? Well, no -- it simply didn't take into account other factors which determine a planet's temperature, like greenhouse gases, which affect the thermal equilibrium. A more complex model would incorporate these.

Of course, models don't always model correctly, either because they are wrong or because they don't take stock of all the important factors. But does this mean that they are arbitrary? Not really. The equation here is the way it is for a very good reason -- because this is a necessary step in predicting planetary temperatures. It would only be arbitrary if the changes (or the assumptions) built into the model were themselves selected for irrelevant reasons, or for no reasons at all. Most important of all, the models are *tested* against what they are striving to represent. We know that the above physical model is insufficient as far as Venus is concerned -- it simply can't explain why Venus is so much hotter than Earth. In fact, it gets it wrong. The logical model of the *either/or but not both* is tested by comparing it to how we reason with everyday language. If what we infer from the logical model is the same as what we ordinarily infer without the model, then the model has passed the test. Things that are arbitrary don't generally agree with what they're tested against.

So, are models built with their own answers? Not at all. They are built *to give answers*, based upon *non-arbitrary* starting assumptions.
