Some years ago there was an interesting experiment on the Golden Rule in an altruistic society. It was aimed at finding out which strategy works best when dealing with other people: a selfish one or an altruistic one. Many people would guess that, as a general rule, a selfish attitude serves you best, but the experiment came to a very different conclusion. It was a computer simulation of a population of people, where everyone interacted with everyone else on the basis of a trade arrangement similar to the classic “prisoner’s dilemma”.
The prisoner’s dilemma has two suspects facing questioning in separate cells, neither knowing what the other is saying. The offer to both of them is: rat out the other guy and you go free, while the other guy goes down for life. There is a catch, though: if both suspects rat each other out, they both get long sentences (though not as long as taking the fall alone), while if they both keep quiet, they only get short ones. Obviously the best individual outcome is to rat out the other guy while he stays quiet, which is the purely selfish option. For a one-off that may work, but when you feed the results back into a community situation, things look very different.
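To make that trade-off concrete, here is a minimal sketch of the payoff structure in Python. The exact sentence lengths are my own illustrative numbers, not from the original experiment:

```python
# Payoffs as years in prison (lower is better), keyed by
# (my_move, their_move): "C" means stay quiet, "D" means rat him out.
# The specific numbers are illustrative, not from the experiment.
SENTENCE = {
    ("D", "C"): 0,   # I rat him out, he stays quiet: I walk free
    ("C", "C"): 1,   # we both keep quiet: short sentences all round
    ("D", "D"): 8,   # we rat each other out: long sentences for both
    ("C", "D"): 10,  # I keep quiet, he rats me out: I take the fall
}

def my_sentence(my_move, their_move):
    """Years I serve, given what each of us did."""
    return SENTENCE[(my_move, their_move)]
```

Whatever the other suspect does, ratting him out is individually better (0 beats 1, and 8 beats 10), yet mutual silence (1 year each) beats mutual betrayal (8 years each). That tension is the whole dilemma.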
Initial testing of the simulation found that populations of purely selfish and purely altruistic programs would swing back and forth, as each one gained dominance. At first the altruistic programs would win the advantage, helping each other to get ahead of the selfish. However, once the selfish became a minority, they had so many altruistic programs to take advantage of that they flourished until they became the majority. Then the population would swing back again, as the selfish programs destroyed each other.
They ran a competition to find out which program’s strategy could win in this simulation. Programmers from all over the world submitted a variety of complex strategies, some honest, some cheats, trying to gain the most from the simulation. The programs were allowed a memory, so they could remember who had cheated them, and this made a distinct difference to the success of cheaters, because most programs retaliated harshly against those that cheated them. Despite all the fantastic strategies on offer, the winner proved to be a very simple algorithm: a program called “tit for tat”.
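The mechanics of such a tournament are easy to sketch. The following harness is my own simplified reconstruction, not the original code: each strategy is a function given its memory of the opponent’s past moves, and pairs of programs trade repeatedly. The 3/5/1/0 values are the standard iterated-dilemma scoring, swapped in for the prison sentences above because tournaments count points gained rather than years served:

```python
def play_match(strat_a, strat_b, rounds=200):
    """Iterated prisoner's dilemma between two strategies.

    A strategy is a function that takes the opponent's past moves
    (its "memory") and returns "C" or "D". Scores are points gained
    per round (higher is better), the usual tournament convention.
    """
    POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(history_b)  # each program sees only the other's record
        move_b = strat_b(history_a)
        gain_a, gain_b = POINTS[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```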
The tit for tat program behaved exactly as you would imagine. Its opening gambit in the prisoner’s dilemma trade was honest. After that, it remembered each opponent’s last response and simply did to them whatever they had done to it. That meant that when it met another honest program, both benefited greatly, while cheats were quickly weeded out. There was a problem, however: it could get into an endless cycle of retaliation with other programs that were generally very honest but made “random” cheats, or whose opening gambits were cheats. This proved very damaging to both parties.
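In that framing, tit for tat is almost a one-liner; a sketch compatible with the hypothetical harness above:

```python
def tit_for_tat(opponent_history):
    """Open honestly, then mirror whatever the opponent did last."""
    if not opponent_history:
        return "C"  # honest opening gambit
    return opponent_history[-1]  # do unto them as they last did unto you
```

The retaliation cycle falls straight out of that last line: when two such retaliators meet and one cheats once, each keeps punishing the other’s last punishment, and they trade blows forever.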
So for the next competition the same programmers devised “tit for two tats”, which allowed the other program two chances before it retaliated. This program stormed the competition, and won even more convincingly than before. This strategy is also one that I generally apply to real life: I will usually forgive the other person at least two indiscretions before I break the Golden Rule and do unto them what they did to me. It might sound a little petty, but it works surprisingly well, with work colleagues, with students I am teaching, and especially on forums, where people can’t see your face to see it smile, and often assume you mean ill.
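Under the same assumptions, the forgiving variant only differs in when it pulls the trigger; again, just a sketch:

```python
def tit_for_two_tats(opponent_history):
    """Forgive a lone cheat; retaliate only after two defections in a row."""
    if opponent_history[-2:] == ["D", "D"]:
        return "D"
    return "C"

# Hypothetical example, using the play_match harness sketched earlier:
# two honest programs cooperate throughout, earning 3 points a round.
print(play_match(tit_for_tat, tit_for_two_tats))  # (600, 600) over 200 rounds
```

The single line of difference, waiting for two “D”s in a row, is what breaks the endless retaliation cycle: a lone random cheat is absorbed instead of echoed back.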