Cosmic Sociology
Andrija Curganov | Vladyslav Tomashpolskyi


The model we developed focuses on a single encounter between two civilizations. Its main purpose is to determine the most reasonable strategy for an emerging civilization. As can be seen from our results, presented on the "Conclusion" page, almost three quarters of all encounters result in one of the participating civilizations not surviving. On the other hand, due to the specifics of our model, any civilization that decides to hide will survive. Cooperation is a rather dangerous strategy that proved beneficial in only 12.18% of all encounters.

It is important to note, however, that since we cannot establish real values for some of the model's parameters, our conclusions may differ from the real world. For example, the 50% chance of discovering another civilization in close vicinity may in reality be as high as 100% or as low as 0%.

Our results make the concept of the Great Filter seem like a reasonable one. If the real numbers are close to those produced by our model, then it really is beneficial to hide, as this strategy is almost guaranteed to ensure the survival of the civilization using it.

"But in this dark forest, there's a stupid child called humanity, who has built a bonfire and is standing beside it shouting, 'Here I am! Here I am!'"

Liu Cixin, "The Dark Forest"

Extensions to the encounter-model

There are also a number of improvements that may be introduced to our model:

  • Stealth mechanics can be improved. In our current model, once a civilization goes into hiding it can no longer be uncovered: even if another civilization had already detected it, it cannot cause any harm to the one hiding. This may be interpreted as a total migration of a civilization elsewhere, or as an unwillingness to make, and evasion of, any contact with its peers. It would be more reasonable, however, to model hiding as merely making a civilization harder to locate.
  • In our model the only representation of time is a simultaneous change of states in the automatons driving the civilizations. This is a problem, as it does not provide a real-time simulation. With real time in place it would be possible to implement the chain of suspicion as it was originally described by its author, Liu Cixin, and to make preemptive strikes more time-efficient.
  • As was already mentioned above, the model is driven by a number of probabilistic parameters that define the likelihood of actions being taken. These are:
    • The probability that a malevolent civilization performs a destroy action against its adversary. Currently set to 90%.
    • The base probability that a benevolent civilization tries to make contact. Currently set to 90%; it is heavily influenced by the chain-of-suspicion mechanics.
    • The technological advantage given to a civilization making a preemptive strike against another one. Currently set to increase the technological level for the encounter by 2.
    • The penalty given to a civilization making a contact attempt. Currently set to decrease the technological level for the encounter by 2.
    • The detectability of a civilization, as mentioned above. Currently set to 50% for every civilization.
  • One of the main concerns of cosmic sociology is the availability of resources. As the second axiom states, resources in the universe are finite, while every civilization needs to grow. This means that sooner or later any benevolent civilization will reach a point where it desperately needs a new and steady supply of resources. This question may well arise during contact with another civilization.
  • The previous point leads directly to the possibility of civilizations changing their allegiances. For example, there may come a point when a hitherto benevolent civilization becomes malevolent.
  • Our model does not reuse civilizations in further encounters. It could be modified so that the prior history of a still-living civilization strongly influences its current disposition.
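To make experiments with the parameters above easier, they could be collected in a single place and exercised by a simplified encounter-resolution routine. The sketch below is only an illustration, not our actual implementation: the names (`Civilization`, `act`, `PARAMS`) are hypothetical, and the chain-of-suspicion mechanics are reduced to the base contact probability.

```python
import random
from dataclasses import dataclass

# Parameter values as listed above; all names are illustrative.
PARAMS = {
    "destroy_prob": 0.90,    # malevolent civilization destroys its adversary
    "contact_prob": 0.90,    # base probability of a benevolent contact attempt
    "strike_bonus": 2,       # tech-level bonus for a preemptive strike
    "contact_penalty": 2,    # tech-level penalty for a contact attempt
    "detect_prob": 0.50,     # chance of detecting another civilization
}

@dataclass
class Civilization:
    tech: int
    malevolent: bool
    hiding: bool = False
    alive: bool = True

def act(actor, target, rng=random):
    """One side's move in a simplified encounter."""
    if target.hiding or rng.random() >= PARAMS["detect_prob"]:
        return  # target was never found; a hiding civilization always survives
    if actor.malevolent:
        if rng.random() < PARAMS["destroy_prob"]:
            # Preemptive strike with a temporary tech-level bonus.
            if actor.tech + PARAMS["strike_bonus"] > target.tech:
                target.alive = False
    else:
        if rng.random() < PARAMS["contact_prob"]:
            # A contact attempt reveals the actor and costs it tech levels.
            if target.malevolent and target.tech > actor.tech - PARAMS["contact_penalty"]:
                actor.alive = False
```

With the parameters in one dictionary, sensitivity analysis (e.g. sweeping `detect_prob` from 0% to 100%, as discussed earlier) becomes a simple loop over `PARAMS` values.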

Further extensions

Taking into account everything discussed above, our model is a solid basis for modeling a more general cosmic society. For example, it can be embedded into an evolutionary game theory framework.

Suppose we have a galactic society consisting of a number of independent civilizations, which can be regarded as a population. In light of recent discoveries made by the Kepler space telescope, there is reason to believe that almost every star in the galaxy has a planetary system, and that many of those systems host planets located inside a Goldilocks zone. This means that a very large number of civilizations is actually possible.

The game rules are defined by our encounter model, which may or may not be expanded with the suggestions stated previously. Another suggestion that applies to this scenario, and should be considered in this model, is knowledge sharing between cooperating civilizations: information about past encounters, possible alliances, and possible technology sharing.
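One possible representation of such sharing: after a successful contact, two civilizations pool their encounter reports, record the alliance, and level technology up to the stronger side. The data layout and function below are assumptions made for illustration only; a real model might share technology only partially or with a delay.

```python
from dataclasses import dataclass, field

@dataclass
class Civilization:
    name: str
    tech: int
    known_encounters: set = field(default_factory=set)
    allies: set = field(default_factory=set)

def share_knowledge(a, b):
    """After a successful contact: pool encounter reports, form an alliance,
    and let technology sharing raise both sides to the higher level."""
    merged = a.known_encounters | b.known_encounters
    a.known_encounters = set(merged)
    b.known_encounters = set(merged)
    a.allies.add(b.name)
    b.allies.add(a.name)
    a.tech = b.tech = max(a.tech, b.tech)
```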

We suggest replication rules based on the number of successful encounters each civilization has had.

Replication need not represent a complete change of society, but small mutations may still occur. For example, an increase or decrease in technology level may represent the progress or stagnation of a society, a change in the effective depth of suspicion may mirror previous experiences, and, of course, a change of attitude is also possible.
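The replication-with-mutation idea could be sketched as follows. Everything here is an assumption layered on top of the encounter model: the fitness measure (encounters survived), the mutation rates, the trivial tech-comparison "encounter", and the helper names are all illustrative placeholders for the real mechanics.

```python
import random

def random_civ(rng):
    """A civilization as a dict of heritable traits; values are illustrative."""
    return {"tech": rng.randint(1, 10),
            "malevolent": rng.random() < 0.5,
            "survived": 0}

def mutate(civ, rng):
    """Small mutations on replication: tech drifts, attitude rarely flips."""
    child = dict(civ, survived=0)
    child["tech"] = max(1, child["tech"] + rng.choice([-1, 0, 1]))  # progress or stagnation
    if rng.random() < 0.05:                                         # rare change of attitude
        child["malevolent"] = not child["malevolent"]
    return child

def evolve(pop_size=100, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [random_civ(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # Pair civilizations at random; here the stronger side simply
        # survives the encounter (a stand-in for the full encounter model).
        rng.shuffle(pop)
        for a, b in zip(pop[::2], pop[1::2]):
            winner = a if a["tech"] >= b["tech"] else b
            winner["survived"] += 1
        # Replicate proportionally to the number of survived encounters.
        weights = [1 + c["survived"] for c in pop]
        pop = [mutate(rng.choices(pop, weights)[0], rng) for _ in range(pop_size)]
    return pop
```

Running `evolve()` and inspecting the share of malevolent civilizations over generations would be one way to test whether the dark-forest equilibrium emerges under a given parameter set.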

Relation to real world politics

Preemptive strikes, as used by benevolent civilizations in our model, can actually be found in real-world history. For example, the infamous Bush Doctrine describes a strategy of "preemptive strikes" as a defense against an immediate threat to the security of the United States. Reasoning about the possible attitude of the United States is left as an exercise to the reader.

One may argue that it is not really "benevolent" in any way to perform preemptive strikes of any kind, but that discussion is outside the scope of this project.

In this article, which describes techniques used to provide nuclear deterrence during the Cold War, it is clearly stated that human agents cannot be expected to behave rationally in all situations; i.e., humans are prone to launch a retaliatory strike even when everything else is already lost.


Source code for this project can be downloaded from GitHub.