
We tend to talk about ‘nature’ as something not ‘human’, as in ‘natural tendencies’ versus ‘civilisation’ (education, culture, the artificial, the human). We speak about ‘saving nature’ (from us, that is). When termites build large mounds, or ants take up intensive farming of other species, we tend to see it as ‘nature’, and David Attenborough does the voice-over of the beautifully filmed BBC series. But when we build cities and establish a bio-industry, we look at it as non-natural. ‘Nature’ is a deer’s path through the woods. Nothing is less natural than a motorway with two times five traffic lanes, right?

(Bear with me, I am unable to make the stories in this mini-series very short).

The same separation is true for tool use. When other primates use little twigs to catch ants, or even stones to break into delicious foodstuffs, we call it nature. But when we employ computers to plan or execute the feeding of livestock on industrial farms, it again seems as far from ‘nature’ as can be.

We are starting to accept that our ‘unnatural’ cities are natural ecosystems for other species, though. Our cities provide evolutionary niches for other species, from bacteria and fungi to what is known as ‘urban wildlife’, but again, that is what ‘nature’ does. It still doesn’t really include us.

In short: ‘natural’ is generally defined by us as ‘not made by / done by / engineered by humans’.

That is, however, a rather strange view, in my opinion. Our city-building belongs to the behaviour of our human species as much as nest-building belongs to birds, digging holes to rabbits, and building citadels to termites. Our use of tools, including machine logic, is also not fundamentally different from a chimpanzee using a twig (though in practice very different, I admit). In short, there is another way to look at our behaviour: establishing cities and bio-industry is in fact very natural for humans. There is no real nature-culture divide, there is just behaviour, a complex mesh of innate and learned behaviour. Ours is just much more learned (trained), much less transmitted through genes, which are a rather narrow and inflexible transmission channel (and digital, hmm, what an interesting coincidence). Our innate behaviour (mostly pretty general behavioural parameters, such as a propensity for risk-taking) is transmitted through genes in a Darwinian fashion, while our learned behaviour is transmitted through ideas / memes / concepts in a Lamarckian fashion. [To be frank, the evolutionary situation is even more complex, as our behaviour is also influenced by other species, such as the bacteria that live inside us, but this story is already complicated enough.]
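To make that contrast concrete, here is a toy simulation in Python. Everything in it is invented for illustration — the population size, the fitness advantage, and the copying probability are arbitrary parameters, not empirical claims. It only shows the qualitative point: a learned (Lamarckian) channel can spread a behaviour far faster than a genetic (Darwinian) one.

```python
# Toy model: how fast does a beneficial trait spread through a population
# via genes (between generations) versus via learning (within lifetimes)?
# All parameters are invented; only the qualitative contrast matters.

POP = 1000          # population size
ROUNDS = 20         # generations (Darwinian) or interaction rounds (Lamarckian)
START = 10 / POP    # initial share of trait carriers

def darwinian(share, advantage=0.10):
    """Carriers out-reproduce non-carriers by `advantage` per generation."""
    for _ in range(ROUNDS):
        share = min(1.0, share * (1 + advantage))
    return share

def lamarckian(share, copy_prob=0.30):
    """Each round, a non-carrier who meets a carrier may copy the behaviour."""
    for _ in range(ROUNDS):
        share = share + (1 - share) * share * copy_prob
    return share

print(f"genetic spread after {ROUNDS} rounds:  {darwinian(START):.1%}")
print(f"learned spread after {ROUNDS} rounds: {lamarckian(START):.1%}")
```

Under these made-up numbers, the genetic channel reaches only a few percent of the population in twenty rounds, while the learned channel reaches most of it.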

Two examples of natural behaviour: Human Power Lines and a Termite Citadel. Image by Jeff Attaway

Of course, the fact that our behaviour is ‘natural’, as in: ‘that is what we do’, doesn’t mean it cannot have runaway consequences that in the end are bad for us; or that everything ‘natural’ is ‘good’ (most definitely not).

Anyway, this mini-series focuses on the impact/feedback that one specific class of tools — machine logic — is having on us. The previous article argued that the sheer volume of interdependent machine logic in our society has become so large that it is showing more and more ‘inertia’ (a.k.a. ‘resistance to change’), and that as a consequence we ourselves are adapting our behaviour to that inertia.

But there is more: all that logic is playing a more and more decisive role in how we act and think. Machine logic has become part of our behaviour, and now that it has grown so massive, our behaviour as a species is increasingly defined by what our machine logic does on our behalf and how it interacts.

This happens in two ways. First, we humans employ machine logic as a tool, and as such it acts as a ‘waldo’ (a ‘remote manipulator’, often an ‘amplifier’ of movements) for human intentions. Second, as we are closely wedded to our machine-logical ‘extended behaviour’, influencing our IT means influencing us. Our bodies can be touched by physical things, but IT — information technology, after all — can touch our minds. IT as a tool is mental.

For instance, there is now logic that can be used to actively manipulate people by exploiting our close relation with all that IT. In April 2018, I wrote in Something is (still) rotten in the kingdom of artificial intelligence:

But sometimes it is enough for statistical methods to have very small effects to be useful. So, say you want to influence the US elections. You do not need to convince everybody of your message (fake or not). You can maybe swing an election a very worthwhile 0.2 percent by sending negative messages about a candidate to a selected group of his or her supporters. Suppose you can target a specific 22 percent, say black people who support your opponent at 90 percent. You get 50 percent of them to see a message that puts the candidate in a racist or anti-black context. If this suppresses the turnout of 80 percent of those that saw the message by 10 percent, while it increases the turnout of 5 percent of that same group by 60 percent (as they are really angry about the unfairness of the message), then you have just created a 0.2 percent lower poll result for your opponent. A couple of such differences may win you elections. This is not far-fetched. Such weaponised information has been used in the US 2016 election and in the Brexit referendum, where very small effects like these have apparently had a big effect on the outcome. It gets even better the more ‘micro’ the targeting becomes.

That information can sway voters is a given. But, as one of the major players stated: it does not have to be true, it just has to be believed. This is supported by evidence: a recent study from Ohio State University suggests (not proves) that believing fake news may have influenced the outcome of the US 2016 presidential election by influencing defection by voters. So, the combination of statistics on large data sets and fake news is a ‘weapon’ in the age of information warfare we now find ourselves in. And we, the population, are the target.
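For concreteness, here is that back-of-the-envelope arithmetic as a small Python sketch. The parameters are the ones from the excerpt; the way they combine (treating the turnout effects as relative changes against a uniform baseline) is my own assumption, so the printed figure illustrates the order of magnitude rather than deriving the exact 0.2 percent.

```python
# The excerpt's targeting scenario, spelled out. The combination rule
# (relative turnout changes, uniform baseline turnout) is an assumption;
# the point is that plausible micro-effects compound to a swing of a few
# tenths of a percent.

targeted_share   = 0.22  # fraction of the electorate in the targeted group
opponent_support = 0.90  # their support for the opponent
reach            = 0.50  # fraction of the group that sees the message

suppressed_frac  = 0.80  # of viewers: turnout suppressed...
suppression      = 0.10  # ...by this relative amount
energised_frac   = 0.05  # of viewers: turnout increased (backlash)...
boost            = 0.60  # ...by this relative amount

viewers = targeted_share * reach
net_turnout_change = energised_frac * boost - suppressed_frac * suppression
swing = viewers * net_turnout_change * opponent_support

print(f"net swing against the opponent: {swing:+.2%}")
# about -0.5% under these assumptions: the same ballpark as the quoted 0.2%
```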

While that article was mostly about the limitations of AI/analytics (anti-hype), the technology has real power, as the excerpt illustrates. And before I am accused of doom- and fear-mongering, take a recent positive example:

Researchers at MIT used machine learning to find a potentially groundbreaking new antibiotic (Feb 20, 2020). They trained the ML system by feeding it the properties of known antibiotic substances (1700 human-created ones and 800 naturally occurring ones). Then they used that learning to search a database of 6000 known medical substances. That got them a drug that was once developed for diabetes, but for some reason never got to market. The substance — now dubbed halicin, after the AI character HAL from the movie 2001: A Space Odyssey — turned out to be very effective in a dish at killing several nasty drug-resistant bacteria. They even tested it in mice, and it did not kill the mice but did rid them of the bacteria. That is a very good find, and chances are it can be used in humans as well. And what is even better: it seems that it is hard to actually develop resistance to it. Using their algorithm on a database with 1.5 million substances, they identified another 23 candidates, 8 of which exhibited antibacterial activity, two of them very powerful. (Research not yet published.)
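To show the shape of that pipeline, here is a minimal sketch with made-up data. The MIT team used a deep neural network over molecular structures; the simple classifier and random bit-vector ‘fingerprints’ below are stand-ins, purely to illustrate the train-then-rank idea: learn from labelled molecules, score a large unlabelled library, and hand only the top candidates to the wet lab.

```python
# Sketch of 'train on known actives, then rank a screening library'.
# Data and model are placeholders, not the MIT team's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each molecule is a 128-bit structural fingerprint.
train_X = rng.integers(0, 2, size=(2500, 128))  # ~2500 known substances
train_y = rng.integers(0, 2, size=2500)         # 1 = shows antibacterial activity

library = rng.integers(0, 2, size=(6000, 128))  # unlabelled screening library

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_X, train_y)

# Rank the library by predicted probability of activity; the lab then
# tests only the top of the list, instead of all 6000 substances.
scores = model.predict_proba(library)[:, 1]
top_hits = np.argsort(scores)[::-1][:25]
print("candidates for wet-lab testing:", top_hits[:10])
```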

Machine learning helped to search extremely effectively for a needle in a haystack, but it was not able to draw the final conclusion. For that, the humans had to do their part. Together, they were rather effective.

So, humans can wield statistics very effectively as a tool in today’s machine-logic- and machine-data-infested world. Humans can also do bad things with such logic. The tool can’t help it. Analytics IT can, for instance, be about manipulating your own ‘customers’ or your own population. The algorithms of YouTube or Facebook are meant to keep you on the platform so that access to you can be sold to advertisers. Which means that the algorithm must provide something you crave; it must be successful in manipulating you.
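As an illustration of that feedback loop — and explicitly not any platform’s actual system — here is a toy recommender framed as a multi-armed bandit that maximises watch time. The category names and ‘appetite’ numbers are invented; the mechanism is the point: whatever keeps you watching gets recommended more.

```python
# Toy engagement-maximising recommender: an epsilon-greedy bandit.
# It does not know what you like; it discovers what holds you longest.
import random

random.seed(1)
CATEGORIES = ["news", "outrage", "cats", "diy"]
appetite = {"news": 2.0, "outrage": 7.0, "cats": 4.0, "diy": 1.0}  # hidden: avg minutes watched

estimate = {c: 0.0 for c in CATEGORIES}  # estimated watch time per category
shown    = {c: 0   for c in CATEGORIES}  # how often each was recommended

for _ in range(1000):
    if random.random() < 0.1:                  # explore occasionally
        choice = random.choice(CATEGORIES)
    else:                                      # otherwise exploit the best guess
        choice = max(CATEGORIES, key=lambda c: estimate[c])
    watched = max(0.0, random.gauss(appetite[choice], 1.0))  # observed minutes
    shown[choice] += 1
    estimate[choice] += (watched - estimate[choice]) / shown[choice]  # running mean

print("learned ranking:", sorted(CATEGORIES, key=lambda c: estimate[c], reverse=True))
# converges on whatever you 'crave' -- here, the 'outrage' category
```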

At war

It has been said that World War I was the chemist’s war and World War II the physicist’s war. And World War III will be the information scientist’s war (‘computer science’ is really not the correct term, by the way). Well, that war is already on, so make that: “World War III is the information scientist’s war”. Nation states are already waging war using IT. Why fight with real soldiers (costly) if you can ‘conquer’ by attacking a population’s resolve to resist you through information warfare?

And if it comes to doing direct damage, you can always employ an army of hackers. The greatest security risk organisations face these days comes from powerful criminal organisations and state actors that don’t just exploit weaknesses in software, but are able to break into fully up-to-date IT landscapes purely through the way these complex landscapes are configured. Here, our massive, interdependent, complex landscape of machine logic has become a possible target for attack by smart humans using — tadaa! — machine-logic behaviour.
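A sketch of what ‘attacked through configuration, not bugs’ can look like: model machines and the permissions between them as a directed graph and ask whether a foothold can reach a crown jewel. Real tooling (BloodHound for Active Directory, for example) does essentially this at scale; the landscape below is invented.

```python
# Attack-path finding over a (made-up) IT landscape. Every edge is a
# working-as-configured permission or trust, not a software vulnerability.
from collections import deque

grants = {  # "compromising A lets you reach B, because of <misconfiguration>"
    "phished_user":   ["workstation_17"],
    "workstation_17": ["file_server"],      # cached admin credentials
    "file_server":    ["backup_service"],   # over-broad share permissions
    "backup_service": ["domain_admin"],     # service account in admin group
    "domain_admin":   [],
}

def attack_path(start, target):
    """Breadth-first search from an initial foothold to a target asset."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in grants.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path("phished_user", "domain_admin"))
# ['phished_user', 'workstation_17', 'file_server', 'backup_service', 'domain_admin']
```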

So, our organisations are attacked, ransomware is deployed, and if we are not very resilient, we will have to pay — if we are lucky enough to have been targeted by someone who is after money, not damage. Incidentally, evolutionarily speaking, the most efficient strategy is ‘being a parasite’, so this is about as natural as it gets. And talking about parasites: looked at that way, having good security and continuity is like an organisation having a good immune system. But I digress, as usual.

In other words: the massive volume of machine logic (and data) that is an ‘extension of our behaviour’ (and of what we are) has thus become a target whose compromise is an attack on us. Our IT is what security specialists call ‘an attack vector’. For attacking us.

Sometimes there is some poetic justice, though. The attackers themselves may become a target too, through their own machine-logic. Dutch intelligence service AIVD hacked the famous Russian outfit ‘Cozy Bear’ and spied on them for years, even witnessing their attack on the US elections of 2016. Yep, the war is already on. Definitely.

Our organisations’ complex landscapes, which we in IT governance still generally (and naively) think are just about making a ‘business function’ happen, such as ‘paying a pension’ or ‘providing education’, are suddenly about a lot more. And they are about a lot more because they are an extension of ‘us’ (‘us’ being both individuals and organisations), which is well illustrated by the massive machine-logic landscape becoming both target and weapon in human and organisational interactions.

Hence: IT is us, and the idea of a separation between us and our IT is becoming hard to maintain. That IT is not something impersonal; it is the extension of a persona, be it an individual or an organisation. For companies it certainly holds: what your IT does is what you are perceived to do. The idea of a separation between your organisation and your IT is as suspect as the separation between humans and nature. How do clients interacting with your IT feel about you? If they interact with your IT, they will equate you with your IT. What culture does your IT embody?

The ‘extended human’

Where are we now in the mini-series on the relation between human behaviour and machine-logic behaviour (IT)?

  1. As described in the first article of the series, IT is machine logic and has brittle behaviour. We established that our logical landscapes are to be seen as much more than some application function supporting some business behaviour. From the organisation’s perspective, the label ‘non-functionals’ is nonsense; those properties are ‘essentials’. And architecture is probably more about those essentials than about anything else.
  2. In the second article, we established that the sheer volume of IT is starting to tip us over: from IT simply following human wishes, to the massive volume of IT actually having an inertia-like property, forcing people to adapt to it instead of the other way around. By the way, this is another nail in the coffin of classic Enterprise Architecture.

But as we can see, that inertia is not the only important aspect. The behaviour of all that IT also acts like an ‘extension’ of us humans, and it acts on us. How does Google make you (your mind, your will) stay on YouTube? It does so by manipulating your IT, the machine-logical ‘extension of you’ (in this case your ‘feed’ in your browser or in their app). Our human behaviour has become integrated with IT behaviour, making it harder and harder to talk about humans apart from IT (and thus also, e.g., about ‘business and IT’ as separate aspects).

Take a hammer. A hammer is a value-free tool. You can use it to build shelters for the homeless, or to kill people. But seeing IT as ‘just a tool’ is probably too myopic an approach to the challenges ahead. A hammer does not act. A hammer has no behaviour of its own. IT is different in that it acts (on our behalf) in ways and at a scale that no tool before it ever could. As I mentioned above, IT is not so much a physical tool as a mental one.

The amalgamation of human behaviour and massive machine-logic behaviour is thus different. IT has become so closely entwined with us that it is no longer a value-free tool we can manage as just a tool. IT is not as independent from us as that hammer or that engine. In fact, one could say that we have already become cyborgs, without any need for physical manifestations of that state.

This has consequences for how we (must) manage our IT, both individually, as organisations, and as a society.

And thus, we are also witnessing a change in the balance between the role of genes and the role of memes — which are much more volatile and brittle. You think the world is going crazy? You may be right: the dominant complex species in the world seems to be moving to a much more volatile behaviour pattern. The only thing we do not know is how much that volatility is going to affect the world at large. Is the volatility just froth on waves that are themselves mere ripples on top of the large undersea currents? Or is it more than that?

A species is mostly defined by its ecological niche, by its behaviour. That holds for us too. We are what we do, including what we do with our tools. Our behaviour combines both innate and learned elements. Culture is the ‘learned part’, but still a key part of our nature. It is what makes us so adaptable and able to create a very wide niche for ourselves. No complex species is as widespread as humans; we crop up almost everywhere (as Agent Smith would say: we are a plague).

Our culture is now becoming loaded with IT-based and IT-influenced behaviour. We are thus changing ourselves and — as we are the dominant complex species on earth — we are thus changing nature, to include an unprecedented amount of logic as well as a changed balance between innate and learned behaviour. A watershed, a revolution, indeed. With disruptive consequences for millennia of learned behaviour, as we will see in the last story in this mini-series.

Featured image from Darwin Laganzon (madartzgraphics) on Pixabay.

This article was originally published (and will be kept up to date with typo fixes etc.) here.
