In the description of the original idea of the chain of suspicion, it was stated that, if a civilization is to survive, only one kind of relationship between worlds is possible in such a universe: a civilization must destroy another as soon as it discovers it.
In our model we take this idea of the chain of suspicion and add a property to the civilizations: a civilization can be benevolent or malevolent. We also add the notion of hiding to our model, giving a civilization the possibility to avoid an encounter it believes it cannot survive.
Our aim is to find out, by means of simulated encounters, whether it is possible for benevolent but cautious civilizations to survive in the Big Universe, and to keep fulfilling their primary need: survival.
We look at this possible explanation for the Fermi paradox from an epistemic point of view. The general BDI models describe what the beliefs, goals and intentions of a civilization can be. The Kripke model is used to explain how a civilization can reason about another civilization and about that civilization's intentions.
The world model represents the actual situations any kind of civilization can encounter, focusing on the possible actions during an encounter with another civilization and the consequences these have for its primary need: survival.
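As an illustration, the survival consequences in the world model could be sketched as follows. This is a minimal sketch in Python; the action names, the `we_are_stronger` flag, and the outcome rules are our own assumptions, not the model's exact definition:

```python
from enum import Enum

class Outcome(Enum):
    SURVIVE = "survive"
    DESTROYED = "destroyed"

def encounter_outcome(action: str, we_are_stronger: bool) -> Outcome:
    """Hypothetical outcome of an encounter action for the acting civilization."""
    if action in ("annihilate", "preemptive_strike"):
        # An attack is assumed to succeed only against a less advanced civilization.
        return Outcome.SURVIVE if we_are_stronger else Outcome.DESTROYED
    # Hiding and contacting do not, by themselves, end in destruction here.
    return Outcome.SURVIVE
```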
The belief models represent the beliefs a civilization has about itself and about other civilizations in the universe. Both kinds of civilizations, malevolent and benevolent, can be in a state where they have discovered another world, or civilization, or where they have not yet discovered one.
Upon discovery of another civilization, the beliefs of malevolent and benevolent civilizations differ. Where a malevolent civilization only believes it can either successfully or unsuccessfully assimilate the other world, or hide from it, a benevolent civilization's beliefs include contacting, preemptively striking, or hiding from the other world.
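These two belief sets could be encoded as follows. This is a sketch of our own; the outcome labels are hypothetical names, not identifiers from the model:

```python
# Hypothetical encoding of the belief models: the outcomes each kind of
# civilization considers possible upon discovering another world.
BELIEFS_ON_DISCOVERY = {
    "malevolent": {"assimilate_success", "assimilate_failure", "hide"},
    "benevolent": {"contact", "preemptive_strike", "hide"},
}
```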
The goal models represent the objectives the civilizations actively pursue. Malevolent civilizations pursue the discovery and annihilation of other civilizations. Only when they foresee defeat do they resort to hiding, so as not to fail their primary need: survival. The goal of benevolent civilizations is to successfully make contact with other benevolent civilizations, and to hide from or preemptively strike another civilization if this is needed in order to survive.
The intention models represent the path of decisions a civilization will follow after it has made a certain choice. A malevolent civilization will, depending on its notion of the other civilization's technology level, either pursue victory by annihilation or hide to prevent its own annihilation. A benevolent civilization either intends to survive by preemptively striking another civilization, if it suspects that civilization to be malevolent and less technologically advanced, or goes into hiding if it thinks the other civilization is malevolent and more technologically advanced.
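The decision rules above can be sketched as a single function. This is our own rendering of the intention models under stated assumptions: the enum names are ours, and technology levels are assumed to be comparable numbers:

```python
from enum import Enum

class Attitude(Enum):
    BENEVOLENT = "benevolent"
    MALEVOLENT = "malevolent"

class Action(Enum):
    ANNIHILATE = "annihilate"
    CONTACT = "contact"
    STRIKE = "strike"   # preemptive strike
    HIDE = "hide"

def choose_action(own_attitude, believed_attitude, own_tech, other_tech):
    """Sketch of the intention models' decision rule for one civilization."""
    if own_attitude is Attitude.MALEVOLENT:
        # Malevolent: annihilate when stronger, hide when foreseeing defeat.
        return Action.ANNIHILATE if own_tech > other_tech else Action.HIDE
    if believed_attitude is Attitude.BENEVOLENT:
        # Benevolent: seek contact with a civilization believed benevolent.
        return Action.CONTACT
    # Believed malevolent: strike a less advanced one, hide from a more advanced one.
    return Action.STRIKE if own_tech > other_tech else Action.HIDE
```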
The Big Universe is full of civilizations, each with its own attitude towards other civilizations and beliefs about their attitudes. These attitudes, and the beliefs about them, are very important to the benevolent civilizations. Malevolent civilizations are, as their name suggests, not looking to make friends, and will try to annihilate another civilization upon discovery if possible. Benevolent civilizations, however, do see cooperation as an important way of making progress. Because a benevolent civilization does not know the intentions of another civilization upon discovering it, it holds beliefs about what the other civilization's intentions are, and beliefs about what the other civilization thinks its own intentions are.
The model shown to the right represents a chain of suspicion with a depth of 3. In d1 the initial belief of a civilization about another is represented as either b, for benevolent, or m, for malevolent. If the civilization thinks the other civilization is malevolent, the chain stops and the civilization goes into hiding. If it thinks the other civilization is benevolent, there are two possibilities: the other civilization thinks we are malevolent (wm) or it thinks we are benevolent (wb). If we think they think we are malevolent, we have two options: go into hiding, or strike them preemptively before they decide to strike us; the chain ends here. If we think they think we are benevolent, the chain continues. In theory this can go on forever; in our simulations we limit the depth of the chain to 7.
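A depth-limited walk along such a chain can be sketched as follows. The encoding is our own (the model itself is the diagram): at odd depths the civilization judges the other ("b"/"m"), at even depths it judges what the other thinks of it ("wb"/"wm"):

```python
def suspicion_chain(beliefs, max_depth=7):
    """Walk a chain of suspicion encoded as a sequence of nested beliefs.

    `beliefs` is a list such as ["b", "wb", "m"]. The walk stops at the
    first belief that resolves the chain, or at `max_depth` (7 in our
    simulations). Returns the resulting action as a string.
    """
    for belief in beliefs[:max_depth]:
        if belief == "m":
            # We think they are malevolent: the chain stops, we hide.
            return "hide"
        if belief == "wm":
            # We think they think we are malevolent: hide or strike first.
            return "hide_or_strike"
        # "b" or "wb": the chain of suspicion continues one level deeper.
    return "undecided"  # depth limit reached without resolution
```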