I have been reading Judea Pearl’s influential book “Probabilistic Reasoning in Intelligent Systems”. There are many wonderful ideas here, but I think the following section is really something:

*On the surface, there is really no compelling reason that beliefs, being mental dispositions about unrepeatable and often unobservable events, should combine by the laws of proportions that govern repeatable trials such as the outcomes of gambling devices. The primary appeal of probability theory is its ability to express useful qualitative relationships among beliefs and to process these relationships in a way that yields intuitively plausible conclusions, at least in cases where intuitive judgements are compelling. … What we wish to stress here is that the fortunate match between human intuition and the laws of proportions is not a coincidence. It came about because beliefs are formed not in a vacuum but rather as a distillation of sensory experiences.*

He then argues that any calculus of beliefs that has evolved – and I’d say this holds both in humans and animals – is necessarily based on computations with probabilities.

There are a number of ways to view this statement. One is to say that we naturally, but not consciously, represent and understand events probabilistically. Moreover, we naturally compute with probabilities – computations that can be quite complex when written down formally. If we are to understand the way our brains operate, it is therefore necessary to understand these computations, how nervous tissue performs them, and when it fails to perform them “correctly.”
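As an illustration of how such a computation looks when written down formally, here is a sketch of a Bayesian update for a diagnostic test – the kind of inference people make intuitively yet often get wrong when the base rate is low. The specific numbers are illustrative assumptions, not taken from Pearl's book.

```python
# Bayes' rule for a diagnostic test: a computation we approximate
# intuitively, yet one that is easy to get wrong with low base rates.
# All parameter values below are illustrative assumptions.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A test that is 99% sensitive with a 5% false-positive rate,
# applied to a condition with a 1% base rate:
p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # about 0.167 -- far lower than intuition suggests
```

Even with an accurate test, the posterior is only about one in six, which is the sort of result that feels surprising precisely because the formal computation and our untrained intuition diverge.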

Another way to view this is that a probabilistic representation of the world around us is natural to us. We intuitively understand how to ascribe probabilities and how to interpret them. It is an ability that we can develop and formalize, but it is not foreign to us. We should therefore develop theories of the world using the language of probability theory. This last part is my interpretation, but I would tend to agree – even when we use deterministic models we know that we cannot take their predictions as completely accurate. We implicitly ascribe some uncertainty to them, even if this uncertainty is not explicitly stated.
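The point about implicitly ascribing uncertainty to deterministic predictions can be made concrete: one common move is to wrap a deterministic model's point prediction in a probability distribution, typically a Gaussian centered on the prediction. The toy model and the noise scale `sigma` below are illustrative assumptions, standing in for whatever unstated error we attach to the model.

```python
# Treating a deterministic prediction probabilistically: the point
# prediction becomes the mean of a Gaussian, and sigma encodes the
# implicit, usually unstated, uncertainty. Model and sigma are toy
# assumptions for illustration.
import math

def deterministic_model(x):
    return 2.0 * x + 1.0  # toy linear model

def likelihood(observed, x, sigma=0.5):
    """Gaussian density of an observation around the model's prediction."""
    mu = deterministic_model(x)
    return math.exp(-(observed - mu) ** 2 / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

# An observation exactly on the prediction is most plausible, but
# nearby observations retain nonzero plausibility:
print(likelihood(5.0, 2.0) > likelihood(5.4, 2.0) > 0.0)  # True
```

In this reading, the deterministic model is never really deterministic in use: the moment we compare its output to data, we are doing probability, whether or not we say so explicitly.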