
Academic researchers are a funny bunch. They will sometimes try to find explanations, even if there is no way at all to check if they are actually correct. What they try to do might be called ‘trustworthy speculation’. Case in point: the question of how language came to be in humans. There are many explanations, which all vie for ‘accepted’ scientific status based on plausibility. Not on actual evidence of course, because we cannot go back in time and look at what really happened.

Actually, plausibility, not truth, is what science is about — see below — so it is not a fundamental problem that we cannot directly observe the birth of language; we can always indirectly observe. And let’s be fair: the same is true for most observations in e.g. physics today. Ever seen an electron?

(This very long article is going to take a circuitous route to the conclusion about the effects of our massive use of IT on us and what challenge lies ahead. Please bear with me and take the time.)

Anyway, one such explanation for the rise of language caught my eye years ago. It was Robin Dunbar’s explanation that language came to pass because humans needed to be able to create larger stable groups than other primates, and he suggested language evolved from grooming via gossip to full language as a result. The basic argument runs like this:

  • A relationship is transactional in nature, something like “you scratch my back, I scratch yours”;
  • Grooming is labour intensive. Given the time needed for feeding, resting, fornicating, etc., there is only enough time left to cement (indirect) relations in a group of up to about 30 individuals;
  • Language is more efficient. As ‘verbal grooming’ replaces ‘physical grooming’, larger coherent groups can be created;
  • Language requires more brain power, especially in the neocortex;
  • The need to operate in larger coherent groups drove the need for language which drove the increase of neocortical brain power;
  • ‘Verbal grooming’ allows efficient human groups of around 150 individuals (‘Dunbar’s number’), which is supported by observations that human hunter-gatherer groups tend to be limited to roughly that size. A rough sketch of this time-budget arithmetic follows below.
Image by Gill Penney https://www.flickr.com/photos/gillpenney/
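To get a feel for the arithmetic behind this argument, here is a minimal sketch in Python. All the numbers in it (the time budget, the cost per bond, the listeners per conversation) are illustrative assumptions of mine, chosen to reproduce the rough group sizes mentioned above; they are not Dunbar’s published figures, which derive from neocortex ratios rather than from a simple linear budget.

```python
# Toy time-budget model behind the 'verbal grooming' argument.
# All constants are illustrative assumptions, not Dunbar's published figures.

SOCIAL_TIME_BUDGET = 0.20  # assumed fraction of waking time available for bonding
TIME_PER_BOND = 0.0067     # assumed fraction of waking time one bond costs per day

def max_group_size(partners_per_session: int) -> int:
    """Largest group whose bonds fit in the budget, if each bonding session
    reaches `partners_per_session` partners at once (1 = physical grooming;
    ~3 = talking, since you can chat to several listeners simultaneously)."""
    affordable_bonds = (SOCIAL_TIME_BUDGET / TIME_PER_BOND) * partners_per_session
    return int(affordable_bonds) + 1  # everyone you can bond with, plus yourself

print(max_group_size(1))  # physical grooming -> about 30
print(max_group_size(3))  # verbal grooming   -> about 90
```

In this linear toy, talking to three listeners at once merely triples the feasible group size. Dunbar’s own step from roughly 30 to roughly 150 rests on more than this single ratio, but the sketch shows why a more efficient bonding channel translates directly into larger stable groups.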

There are a lot of — again plausible — criticisms of Dunbar’s argument, e.g. having to do with the fact that ‘words are (too) cheap’ (this one doesn’t stand up to scrutiny, I think — see below), or that food patterns are more predictive of brain/neocortex size. But let’s run with this ‘verbal grooming’ for now.

What is needed for efficient operation of a group? Essential in a group is trust, because without trust too much energy and time is lost in guarding against each other (internal ‘friction’) and that limits the success of the group as a unit. Even ‘you scratch my back…’ is a matter of trust. The one doing the initial scratching has to trust that the other will reciprocate.

Using language (gossip) widens the web of trust. If Sally and Ann both warn you against the unreliability of Teddy, your guard will be up, and less internal damage/friction occurs. People with bad reputations (free riders, etc.) will be known, and will thus be less damaging to the group as a unit. Reputation is everything. Honour is key. So yes, words seem cheap, but reputation (which can be created and destroyed by words) is not. It’s not about a single cheap word (so I don’t buy that criticism), it’s about a history of words from a web of speakers. The system isn’t perfect of course, and some free riders and gaslighters remain.

Image by Vegansoldier https://www.flickr.com/photos/vegansoldier/

Nonetheless, as we can see all around us, we humans have been able to create larger social ‘superstructures’ that actually work, way beyond the limit of 150 or so individuals. Countries may have millions, even hundreds of millions of individuals, and countries do work. The question becomes: How is that possible?

The reason may be because language has enabled the spread/sharing of abstract ideas (some would say ‘memes’, but these days the word is getting a different popular cultural meaning). Instead of trusting just (a web of) actual humans and a history of their words, we trust (a web of) abstract ideas. The ideas are also ‘expanded’ — e.g. the fundamental idea of personal ‘ownership’ expands in the end to the existence of companies as independent ‘actors’. Such shared ideas are the foundation of larger coherent groups, such as nations (*).

These ideas come in many forms and the ones that make groups successful are in one way or another based on the concept of trust.

For example:

In many countries, religion has played the role of foundational idea, and in many places in the world it still does. Religion provides a set of shared ideas that make believers able to trust each other more implicitly. (This is regardless of the truth or falsity of a religion; if you’re offended, just think of this as being about any religion other than your own.)

This is even sometimes openly recognised. The formerly anti-religious and still atheist Chinese Communist Party actually promotes a ‘nationalist’ Christianity for its value in sustaining the (idea of a) ‘harmonious society’. A movement like the Business Christian Fellowship in China has the following ten commandments: “No fraudulent accounting book; no tax evasion; no adultery; treat employees fairly; no destruction of the environment; not to engage in immoral business (in terms of both products and service); no violation of covenants (including oral and written covenants); honouring your father and mother; loving your spouse, children and family; and loving your community, the earth and justice.” (https://www.chinacenter.net/2016/china_currents/15-1/protestant-christianity-in-the-peoples-republic/).
Christian Protestantism is the fastest growing religion in China, and I’ve read somewhere that members often mention the benefit of being able to trust fellow Christians when doing business. The success of Western Europe from the 16th century onwards has been linked to the values (ideas) that came with Protestantism.

Add to that that psychological research has shown that people who think they are being observed behave in a more trustworthy fashion (i.e. posters on office walls with portraits of people seemingly looking at office workers substantially reduced petty office theft). God sees everything you do, and there thus is — oh irony — a strong group-level evolutionary advantage to having religion. [Evolution, by the way, is such a useful concept that it can even be applied to Enterprise Architecture.]

Another important idea has been ‘the rule of law’. This idea has taken a strong hold in Western countries. Years ago I read reports on a clear statistical relation between the availability of an independent judiciary and economic success, which supports the idea. Countries with a strong rule of law perform much better economically, because it fosters investment: those who invest trust that their property will not be taken away ‘just like that’. In the end, countries with a rule of law are more successful economically than countries with despots, warlords, and such. An example of an advantage on a really large group scale: the nation.

Another trust-building idea is the scientific method with its institutions and communities, from research institutes to scientific journals. Science is not about truth, as many think; it is about trust.

What is science about anyway?

The overall scientific method is not about (absolute) truth/falsehood at all. Science is about using observation to establish trust in statements. If such observations are very reliable we normally call them facts.

Trusted observations (‘facts’) can either corroborate a statement or disprove it. It is not possible to prove a statement by using observations. Note that not all observations are reliable. Think of mirages in the desert: that city is really not there. So you need to be careful with calling some observation a fact (i.e. a trusted observation).

So, as not everything you read or hear as a fact is automatically true, you need systems to build that trust in facts (observations on top of observations on top of …) to separate trustworthy from untrustworthy statements. The trust-building system for facts (trusted observations) about nature is called science. Science is not perfectly trustworthy. People make mistakes and sometimes people cheat, but the latter are always found out, because science must in the end base itself on repeatable observations, and other people can independently try to establish those same observations.
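This asymmetry (trusted observations can corroborate a statement or kill it, but never prove it) can be made concrete with a toy Bayesian model. The function and the numbers below are my own illustration, under the assumption that the statement, if true, strictly entails the predicted observation; nothing here comes from the article itself.

```python
# Toy Bayesian model of 'corroborate but never prove'.
# Assumption: if the statement is true, the predicted observation MUST occur
# (p_fit_if_true = 1.0); if it is false, it occurs half the time by chance.

def update_trust(prior: float, observation_fits: bool,
                 p_fit_if_true: float = 1.0, p_fit_if_false: float = 0.5) -> float:
    """Posterior probability that the statement is true after one observation."""
    if observation_fits:
        num = p_fit_if_true * prior
        den = num + p_fit_if_false * (1.0 - prior)
    else:
        num = (1.0 - p_fit_if_true) * prior
        den = num + (1.0 - p_fit_if_false) * (1.0 - prior)
    return num / den

trust = 0.5
for _ in range(10):                  # ten independent corroborations...
    trust = update_trust(trust, True)
print(round(trust, 5))               # 0.99902: high trust, but never 1.0
print(update_trust(trust, False))    # 0.0: one reliable counter-observation kills it
```

Note that the final 0.0 depends on the counter-observation itself being trusted; in practice that trust has to be built the same way, which is exactly the ‘observations on top of observations’ point above.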

Overall, science is probably the most effective independent trust-establishing system that we have. The major other trust-building systems in our societies are (independent) journalism (using journalistic standards) and an independent judiciary. These trust-establishing systems are in fact founded on a shared desire to have them, and this again depends on a (brittle) belief in them. If we stop trusting the trust-systems, we stop desiring them and they disappear.

The fact that science (journalism, the judiciary) doesn’t produce absolute truths is a well-known angle of attack to try to undermine our trust in these systems. (Observation: those who attack science often don’t question it when they see an advantage, say seeing a physician when they’re ill. Those who attack the judiciary will generally not stop suing. The attacks on our trust systems have a high level of hypocrisy/cherry-picking and may say more about the particular advantage sought by the attacker than about the quality of the trust system itself.)

What statements about the world can you trust? The weather forecast based on scientific models and the diagnoses of science-based health care greatly outperform witch doctors and shamans, so it’s better to trust the former than the latter. Science is a system for establishing trust in statements, especially predictions.

Trust in general (and not just in science) is all about predictions. If I trust you, it means I think I can somewhat predict that what you are going to do/say will not (have the intention to) hurt me. Given that trust is about prediction, and being able to ‘predict reliably’ has such a big survival (and thus reproductive) advantage, it is not that strange to surmise that much of our human behaviour (apart from the resting, feeding, fornicating we share with other complex organisms) is about trust and predictions: trust that there will be access to food/partners/etc. For us humans, that trust is built on other trust: trust in statements, in words, in ideas.

In countries with a strong rule of law combined with high equality, trust in the system of the country (the ‘institutions’ and the ‘elite’) is high. That is because of the experience of ‘good’ outcomes of the institutions. In such countries, religion has weakened. When you can trust secular ideas (and the institutions and people that represent them) — such as social justice, the law and the courts — to protect you, religion as a means to be able to trust your fellow man is less effective and it wanes. If these institutions fail to protect you, however (e.g. if social justice is axed), other sources of trust are sought.

Anyway, why all the previous paragraphs? Well, it is to illustrate that all the ‘superstructures’ we build in human societies — above the natural one for small tribal groups — are, in one way or another, about trust (and predictions) as well.

Gossip, the glue that holds the trust structure of tribes together, is also about trust, but there is a difference. All the superstructures are built on some foundational shared — often rational/cognitive and abstract — and above all learned idea: a religion, the rule of law, the scientific method, political convictions such as the importance of equal chances, or of individual freedom (which obviously also has deep evolutionary roots), or of fair outcomes, or of the sanctity of human life, or of the importance of harmony, and so forth. But abstract ideas are fleeting constructs. People can abandon ideas, and then everything that is based on that idea is destroyed. Hence: the superstructures based on ideas (such as democracy, the rule of law, European cooperation, communism, etc.) are inherently brittle. If people stop believing in an idea, the structure dies.

The ideas that are foundational for our social ‘superstructures’ (the ones above the ‘tribe’ level) are generally driven/guarded by a small minority of individuals. Serious journalists, scientists and judges all promote the idea of trust being based on ‘truth-finding’, for instance. You might say these people are ‘the elite’ that stand for the ‘ideas’. Note: ‘elite’ acquiring a negative connotation is a stunning reversal in itself. The ‘elite’ used to stand for ‘the best of us’. A linguistic reversal of the type that made “fat chance!” mean ‘slim chance’. A reversal promoted by people who are an ‘elite’ themselves (for a certain definition of ‘best’, that is). Made possible because the ‘elite’ failed to do its job: protecting the people who relied on them. You can’t get rid of ‘the elite’ in societies, but you can switch to a new one. But I digress, as usual. The elite can only drive/guard the idea if almost everybody subscribes to it, though.

Enter the information revolution

The first article in this series laid some groundwork by illustrating that logic, the method that is foundational to all that IT, is something mathematical and not something real. And that difference is important, as all that logic we employ is acting in the real world. The first article argued, for instance, that the idea of ‘non-functional requirements’ was OK from the logician’s perspective, but utter nonsense from the organisation’s perspective.

The second article was about the ‘inertia’ of large landscapes of (machine) logic: the fact that massive amounts of interdependent (machine) logic are very hard to change. This is illustrated by the fact that companies start to organise themselves around IT. Digital transformation, the digital company, Agile, DevOps, etc. are all about dealing with that inertia. If one could ‘add up’ all the ‘behaviour’ (the ‘acting’) in human societies, we seem to be at a tipping point where there is more (massive, brittle, machine) logical behaviour (by logical machines, i.e. IT) than human behaviour in our societies. ‘More’ is a dangerous word here; it is impossible to quantify, but we do see our societies restructure more and more around the behaviour of massive amounts of machine logic with which we are becoming intertwined.

But that inertia is not all. The third article argued that the combination of our behaviour and digital behaviour is becoming the new ‘extended behaviour’ of our species. IT is not going to take over the world, but human behaviour is becoming ‘extended’ by our massive use of IT, to the point that the tools are sometimes allowed to operate largely independently (but still executing some version of human intentions). Getting a speeding ticket doesn’t involve any human at all anymore, unless you count the speeding driver.

So, while that machine logical behaviour doesn’t have a ‘free’ will (the will comes from the humans who design and deploy IT, for good and evil purposes), it becomes a key factor in how our societies behave, because human behaviour has become ‘extended’ by lots of machine logic. The algorithms of all kinds of analytic systems — from those of ‘ordinary’ companies entering the fray to the algorithms of Facebook/Instagram, Twitter, or Google/YouTube — are becoming undeniably effective in manipulating our individual behaviour. You can influence elections, what people think, etc., by effectively using algorithms and lots of data. You can wage war and war can be waged at you, all through IT. Our behaviour has become ‘combined human and machine logic’ and what we are has become ‘we and our data’.

So it is not just what the algorithms do. Much of what happens is us, communicating with each other, as humans, e.g. through those platforms. This is also in part the argument that the platforms use to deny they are having an effect by themselves: they are just platforms for ‘free speech’. But some of that massive volume of fast but brittle machine logic we have added to the brittle social superstructures is having a devastating effect by short-circuiting us back to the level of gossip in a tribe, but at a massive scale. This is not what the logic does, but what all that machine logic has enabled.

Early social network: Usenet (newsgroups, e.g. alt.cows.moo.moo.moo). Image by Bill Bradford https://www.flickr.com/photos/mrbill/

Let’s go back to the tribe level of trust-through-gossip. How does this system not derail? Well, that is actually pretty simple: there can be lots of gossip, but gossip that is false gets corrected pretty quickly by direct observation, and it backfires on the originator of false gossip. (Teddy turns out to be trustworthy after all; Sally and Ann should thus be trusted less.) There is a very direct negative feedback loop. And negative feedback loops create stability. (Actually, it does derail once in a while. Think witch trials.)

But what social media does to that state of affairs is this: it removes the corrective feedback loop, because the group we’re in is not around 150 individuals; it may consist of millions of individuals. Feedback doesn’t reach us anymore and direct observation is out of the question. Language has become maximally disconnected from observation/reality. Suddenly, we find ourselves in a world of unfettered gossip where statements do not reach just a few others, but millions. We inhabit a world that directly speaks to our human nature (we naturally trust gossip/stories/etc. because it is the ‘foundational mechanism’ for our group-level success). Natural trust in gossip competes with our learned trust in ideas about, for instance, ‘truth-finding’. Guess what wins out?

If gossip-with-direct-feedback is our natural trust-building mechanism (and it is plausible that it — or something very much like it — is), we have now — thanks to the information revolution and especially the likes of Facebook, Google, etc. — entered a world where we have removed all filters and brakes and corrective feedback loops. Statements fly at light speed across the globe. Some negative feedback to correct them does exist (e.g. ‘fact checkers’, professional journalists), but it is very ineffective (because the feedback is gossip itself and not direct observation) and vulnerable to more gossip (e.g. conspiracy theories). And to make it worse, the gossip that we get is selected by massive and brittle stupid algorithms which are written to optimise for profit, not for truth or any collective value. As a result, they don’t deliver the feedback that makes the mechanism somewhat stable.
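To make the role of that corrective feedback loop concrete, here is a minimal toy simulation, entirely my own construction with made-up parameters. It only illustrates the qualitative point: an occasional ‘direct observation’ keeps a gossip-driven reputation estimate pinned near the truth, and removing it turns the estimate into a random walk that can wander anywhere.

```python
import random

random.seed(42)  # reproducible runs

def reputation_after_gossip(steps: int, observation_rate: float) -> float:
    """Toy model: a group's estimate of Teddy's reputation. Teddy is in fact
    trustworthy (truth = 1.0). Each step, noisy gossip nudges the estimate;
    with probability `observation_rate`, someone directly observes Teddy and
    pulls the estimate halfway back toward the truth (negative feedback)."""
    truth, estimate = 1.0, 1.0
    for _ in range(steps):
        estimate += random.gauss(0.0, 0.1)         # noisy, possibly false gossip
        if random.random() < observation_rate:
            estimate += 0.5 * (truth - estimate)   # corrective direct observation
    return estimate

def worst_deviation(observation_rate: float, trials: int = 200) -> float:
    """Largest deviation from the truth seen across independent runs."""
    return max(abs(reputation_after_gossip(1000, observation_rate) - 1.0)
               for _ in range(trials))

print(worst_deviation(0.3))  # tribe: corrective feedback keeps estimates near the truth
print(worst_deviation(0.0))  # no feedback: estimates random-walk arbitrarily far away
```

The exact numbers are irrelevant; what matters is that the stabilising term is the direct observation, which is precisely what disappears when the ‘group’ is millions of strangers.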

Sure, serious old-fashioned media (tabloid press excluded) also make profits, but their profits are based on hard-won reputation, just as reputation in a tribe is essential for trust. The sheer volume of (free) gossip is killing their livelihood and their reputation, though. As it is killing the reputation of government, science, the judiciary and all other ‘values-based’ brittle superstructures. In fact, the tabloid press was a precursor to what is happening in social media: what it peddles is mostly ‘gossip-style’ information, and that directly plays to key human natural tendencies.

Actually, we citizens do check the foundational ideas that underlie our ‘superstructures’ against our observations. Trust in the ideas that are at the basis of your society (e.g. fairness, freedom) dies if your outlook becomes more and more uncertain (as it has for many people in Western countries). So, we see a double whammy: on the one hand trust in the ideas that underlie Western democracies is failing because trust in general wanes if the outlook becomes more uncertain, and on the other hand we are flooded with gossip-style information without checks.

I wrote in the previous article:

IT is not so much a physical tool, it is a mental tool. And thus, we are also witnessing a change in the balance between the role of genes and the role of memes — which are much more volatile and brittle. You think the world is going crazy? You may be right, the dominant complex species in the world seems to be moving to a much more volatile behaviour pattern. The only thing we do not know is how much that volatility part is going to affect the world at large. Is the volatility just the froth on the waves that are just ripples on top of the large undersea currents? Or is it more than that?

The answer seems to be ‘more than that’.

From ‘value’ to ‘values’?

From a tribal perspective, free speech is extremely important, as it is essential to the success of the tribe. In a tribe, free speech is guaranteed by the possibility to gossip. Such speech also cannot really be stopped. Larger superstructures, like dictatorial regimes, have always had trouble stopping gossip-style free speech, as it is so fundamental to our nature. East Germany (the DDR) tried. North Korea still does (and with considerable success; the population largely subscribes to the ideas (lies) peddled by its elite). Regimes like those in Russia, China or Hungary are also relatively successful in manipulating the information their citizens get, and thus the ideas they believe, but they cannot stop gossip. Our new ‘mental-cyborg’ state does open us up to mass surveillance by government, though, so a conflict between free speech (key human nature) and surveillance seems inevitable.

A company like Facebook argues that it cannot fight lies and untruths in all that gossip that they spread because they are protecting ‘free speech’. That seems valid, but it is myopic.

In Western-style democracies, protecting the freedom of speech has been ingrained in the foundational ideas. But the freedom is never absolute; criminal speech, for example, is always excluded from the protection. The US situation is interesting: there, freedom of speech is seen as particularly important and has been stretched far, e.g. enabling the unlimited flow of money into propaganda under its doctrine. But there are limits. For instance, false statements and lies are not protected under US free speech. That is, except false statements and lies about the government. The reasoning is that the government is too powerful: if it could sue people by accusing them of lying, that power could be used to silence people even when they are not lying.

The question is whether that reasoning still holds as simply as it did before. You could say that under the onslaught of gossip-style lies, untruths and alternative facts that come with social media, governments and institutions are not only powerful but also very vulnerable. And their vulnerability becomes the weakness of the underlying ideas they stand for. Gossip-like stories such as those about the ‘straight banana police’ have seriously damaged the European idea, for instance. That doesn’t mean everything in Europe was fine, of course, but gossip played its role.

As mentioned above, Facebook especially takes a very hard-line position on free speech. They do not want any regulation that impacts ‘free speech’ (other than really extreme criminal material, like child porn) on their platforms. For one, because that would be a burden that directly impacts their bottom line, of course. So they are at least also protecting profit, not just free speech.

In other words, social media companies are protecting their value, but we will have to protect our values instead. Some of that protection of our values will have to take the form of limitations on free speech. And that will be very hard to get right, if only because of the money available for propaganda and information warfare that will be employed to stop us from doing the right thing. Unethical operators will fight good changes hard. But if we don’t act, our values will be toast. And they are already in a weakened state as it is.

What this means and what we can/must do

Given how deep this sits in our nature, we may be pessimistic about our chances of fixing the problem. But if we do want this fixed:

  • In society/politics, we need to have a discussion on ethics and limits on free speech — especially on lies and untruths — if we want to save our values. Limits on free speech are not new and they exist everywhere, but under the onslaught of all that IT-enabled gossip we need to change the rules, and lying especially must get more scrutiny. At the very least, the platforms that enable destructive gossip need to be regulated. For instance, we might criminalise the use of advertising (money) to spread lies and untruths (lies in advertising are already illegal in some countries). We could make the platforms responsible for acting on legal orders. We might stop protecting lies about certain institutions of society unless they are delivered by actual, individual, real humans. Or we might even give our democratic governments the power to fight lies (dictators can fight truth, but our democratic governments cannot fight lies — it is quite unfair), and at the same time create and fund an independent fourth ‘power’ of government whose sole purpose is to protect against misuse of that power by defending the population in court. This would expand Montesquieu’s three separated powers (legislative, executive, judiciary) with an independent fourth: the protective branch — funded as well as the other three.
  • As it now often is: real news sits behind a paywall, while fake news is free. Regulation could also put a limit on intellectual property rights, e.g. different time periods for different types of content, so that news and opinion pages of fast-moving media such as news outlets can be paywalled for — say — at most 14 days. As a society, we might want to increase tax-funded subsidies for the non-gossip press, so that real news is more readily available to all. The above-mentioned ‘protective branch’ could hand out these subsidies based on the ‘factualness’ of the outlet. It’s all very difficult, I know.
  • If you are an investor, will you invest in companies that currently make their money out of an unethical standpoint on value versus values, such as Facebook or Google?
  • If you are a company that finds ethical business important, will you even use such platforms or technologies? As the third article said: what culture does your IT promote?

Finally:

In 2009, the biologist E.O. Wilson was interviewed by Robert Krulwich from NPR. Will we solve the crises of the next hundred years? asked Krulwich. “Yes, if we are honest and smart,” said Wilson. “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” Until we understand ourselves, concluded the Pulitzer-prize winning author of On Human Nature, “until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago—Where do we come from? Who are we? Where are we going?—rationally,” we’re on very thin ground.
— Public discussion between E.O. Wilson and James Watson moderated by NPR correspondent Robert Krulwich, September 10, 2009, as reported here.

People like that quote. It appeals to our emotions. It carries the right sentiment. But it is misleading. The problem is not about our emotions, but about our intelligence. It’s not ‘medieval’ institutions that are at risk, but the best modern institutions we have on offer (science, independent judiciary, supranational cooperation, etc.). And that technology is definitely not ‘god-like’.

The essence is that we may just be not smart enough as a species. We have valueless profit-driven technology that short-circuits our gossip-sensitive and limited intelligence, thereby destroying our values-based brittle institutions.

And that means that at the end of the road, the problems stemming from the massive use of machine logic, machine logic that is an extension of us, are not technical. They are ethical. They are foundational. They need to be fixed. Will we rise to the challenge? Are we intelligent enough to do what is necessary? Maybe we’re at the Peter Principle end of our growth as a species. But we might surprise ourselves yet.

This article was initially published (and will be maintained) here. Featured Image by Dragon Photos, https://www.flickr.com/photos/10739810@N06/

(*) Harari’s Sapiens makes this point. I find his writing on history, while a bit glib here and there, insightful. But as soon as he starts to write about technology, I think his lack of deep understanding of the subject matter shows. Still, Sapiens is a worthwhile read.

PS. Our profit drive is probably linked very strongly to our innate individual behaviour. There is a reproductive advantage to having enough means of existence, after all. Our values play out in larger groups and as such have a much more indirect relation to reproduction. Thus, we might be seeing a fight between genes and memes play out, with IT having made the balance between the two much more volatile.

PPS. If you like this story, spread it around. I don’t have a marketing budget. I rely on ‘gossip’ 😉
