
[Image: a logical operation with a question mark between the variables “a” and “b” as the operator]
AI and Causality: Unraveling the Relationship between Artificial Intelligence and Cause-Effect Dynamics

The other day I read a very interesting article in Wired about Deep Learning pioneer and AI researcher Yoshua Bengio’s efforts to “teach” causality to AI. In 2018 Bengio was one of three recipients of the A.M. Turing Award, sharing it with Geoffrey Hinton and Yann LeCun. The award is sponsored by Google and carries prize money of $1 million, thus clearly playing in the “Nobel” league.

Bengio says AI is great at discerning similarities, identifying conspicuous differences and recognizing patterns, i.e. at performing stereotypical yet complex tasks for which human brains, compared to learning machines, are significantly less well equipped. The practical benefits are well known by now: identification of potential cancer foci in tissue samples, prediction of fraudulent intent in fintech usage, identification of high-risk neighbourhoods in burglary prevention, predictive maintenance of factories, to name but a few. But, says Bengio, AI is so far unable to discern causal connections, i.e. to “understand” why something behaves in a certain way and not in another.

Bengio specialises in this field; he wants to enable machines to understand, like humans, or rather better than humans, why something is the case and something else is not. Bengio: “Deep learning [so far] is blind to cause and effect.” Unlike a human doctor, a deep learning algorithm cannot explain why, for example, a certain X-ray image suggests the presence of a certain disease. For now, therefore, deep learning may only be used very cautiously, especially in critical situations.

Bengio’s approach consists in relating statistical probabilities to cause-and-effect sequences by way of trial-and-error algorithms, with humans verifying each affirmation of causality. Machines would thus learn, in the manner of Pavlov’s dog, when to affirm and when to reject causal relations.
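As a purely illustrative sketch (in Python, with all candidate pairs and labels invented here; this is not Bengio’s actual method), such a human-in-the-loop scheme might look like this: the machine proposes candidate causal links, humans supply the verdicts, and those verdicts become training labels.

```python
# Hypothetical sketch of a human-in-the-loop causal labelling scheme.
# All candidate pairs and verdicts are invented for illustration.

# The machine proposes candidate cause-effect pairs from observed
# correlations ...
candidate_links = [
    ("rain", "wet road"),              # plausible causal direction
    ("wet road", "rain"),              # reversed direction
    ("ice cream sales", "drownings"),  # mere correlation (summer heat)
]

# ... and humans affirm or reject each one. The causal "ground truth"
# is supplied from outside, not discovered by the machine.
human_verdicts = {
    ("rain", "wet road"): True,
    ("wet road", "rain"): False,
    ("ice cream sales", "drownings"): False,
}

# The labelled pairs would then train a classifier that learns, Pavlov
# style, when to affirm and when to reject a causal claim -- by
# imitating human judgments, not by understanding them.
training_set = [(pair, human_verdicts[pair]) for pair in candidate_links]
print(training_set)
```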

We always think causally – we can’t help it

We have all experienced the satisfaction felt when we “grasped” something, i.e. when we felt able to establish a causal connection between phenomena which we were unable to establish before. We also know the difference between reasons we perceive as “good” and those we perceive as not “good”, if not bad, regardless of whether the reasons are actually manifest or simply imagined. We can’t help but think in a causal fashion. Causation is not an integral part of natural or social reality, but always something attributed to it by us humans.

Understanding and explaining: The attribution of causality is always arbitrary and man-made

The attribution of causality is thus always an interpretation, regardless of whether we argue about natural or social phenomena. Historians can fabricate very different, even diametrically opposed, accounts or extracts from the same historical data. Historical accounts may mutually exclude each other, and yet it is impossible to say that this one is right and the other wrong. It is, however, possible for humans to state that one account is good (i.e. plausibly argued), while another one is not so good or not good at all.

The same goes for science: elementary particles “are” neither discrete particles nor “flowing waves”, yet we can’t help but make “them” comprehensible to us by means of such mutually exclusive images. To make a long story short: machines cannot reason, and they cannot explain; they can only perform in exactly the way we want them to perform. They may draw connections faster than we humans can, so fast that we no longer understand how they came about. But each individual operation is just an execution of instructions laid down by humans. If machines could understand causality, they would have to be able to tell us why, at times, we prefer one explanation to another although there is no evidence for either of them.

If it doesn’t rain, the road won’t get wet.

Bengio’s efforts are doomed from the outset. And this has nothing to do with his apparently outstanding abilities, but rather with his flawed notion of causality.

“When it rains, the road gets wet” ≙ “When it doesn’t rain, the road doesn’t get wet”? No, this is a logical fallacy. Rain is not the only thing that can wet the road: it could have been the garden hose, the city cleaners or a sprinkler system that did the job. Machines can learn this by way of trial and error, or by being programmed to assert that if “a⇒b” is true, the inverse “¬a⇒¬b” does not follow, whereas the contrapositive “¬b⇒¬a” always does (see the truth-table sketch below). However, machines will never be able to grasp by themselves either the validity of such logical inferences or the validity of the corresponding empirical instances. What happens if we underlay the above logical operation with alternative empirical facts?
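A minimal truth-table check (a Python sketch, purely illustrative) makes the asymmetry explicit: “a⇒b” agrees with its contrapositive “¬b⇒¬a” in every row, but not with the inverse “¬a⇒¬b”.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: "p => q" is false only when p is true and q is false.
    return (not p) or q

# a = "it rains", b = "the road is wet"
for a, b in product([True, False], repeat=2):
    print(f"a={a!s:<5} b={b!s:<5} | "
          f"a=>b: {implies(a, b)!s:<5} "
          f"~b=>~a: {implies(not b, not a)!s:<5} "
          f"~a=>~b: {implies(not a, not b)!s:<5}")

# The output shows a=>b and ~b=>~a always match (contrapositive),
# while ~a=>~b diverges, e.g. in the row a=False, b=True:
# no rain, yet the road is wet -- the garden hose did the job.
```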

If she’s not brain dead, then her heart didn’t stop beating 10 minutes ago?

“If her heart stopped beating 10 minutes ago, she’s now brain-dead.” Can we deduce from this: “If she is not brain-dead now, then her heart did not stop beating 10 minutes ago”? Logically, that would be unquestionably permissible. Is a doctor or a robot therefore allowed to remove her organs three minutes after her heart stopped beating (because it stopped beating only 3, not 10, minutes ago), or only after 10 minutes and one millisecond? And what does “brain-dead” mean in the absence of devices able to record brain activity? The question is open to arbitrary definition or interpretation: x or y seconds without heart muscle contraction?

Now, the AI machine could be programmed in a “better” way. But the inference “if a then b” must always, without exception, also allow for the conclusion “if not b, then not a”; it never allows for the conclusion “if not a, then not b”. On the surface it might seem easy to object: “Why! Your premise ‘if her heart stopped beating 10 minutes ago’ does not allow for the conclusion ‘then she is brain-dead’ in the first place.” But who decides what is allowed and what is not? It would indeed be easy to improve the definition, i.e. to make it more akin to what we humans conceive as empirically valid.

We could say: “If her heart stopped beating at least 10 minutes ago (and did not start beating again in the meantime), then she is brain-dead.” We could also add a safety margin, say 20 minutes instead of 10. But clearly, such an improvement is arbitrary and could not be carried out by the machine itself, because the machine would have no clue why to do so in the first place. It’s a definition: if the list of symptoms or conditions is complete and fulfilled, the corresponding judgment is valid. Full stop.
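Encoded as a rule, the arbitrariness stands out. In the hypothetical sketch below (function name and threshold invented for illustration), the cut-off is a human-made definition that the machine merely applies:

```python
# A hypothetical encoding of the "improved" definition. The threshold
# is a human choice baked in from outside -- a definition, not a
# discovery; the machine applies it but cannot justify it.
BRAIN_DEATH_THRESHOLD_MINUTES = 10  # or 20, with a safety margin -- equally arbitrary

def judged_brain_dead(minutes_since_cardiac_arrest: float,
                      heart_restarted_in_between: bool) -> bool:
    """Apply the definition mechanically: the list of conditions is
    either fulfilled or it is not. Full stop."""
    return (not heart_restarted_in_between
            and minutes_since_cardiac_arrest >= BRAIN_DEATH_THRESHOLD_MINUTES)

print(judged_brain_dead(3, False))      # False -- only 3 minutes
print(judged_brain_dead(10.001, False)) # True -- the threshold is crossed, however narrowly
```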

Hobbes and Boyle: Our learnings are fleeting and accidental, our logic is not

Our image, say, of nature has changed a lot over the centuries. Our “learnings” with regard to Newtonian nature have taken a multitude of new turns since Einstein. But it did not start with Einstein: when Robert Boyle, in the mid-17th century, after the collapse of the English Revolution, experimentally “proved” the existence of the vacuum, his “proof” and “experimental method” were openly contested by his challenger Thomas Hobbes. Boyle won this battle and Hobbes was defeated, but only because Boyle enjoyed the support of allies in the Royal Society.

Those buddies loved Boyle and his novel experimental fancies and they hated and bullied Thomas Hobbes, whom they considered a club bore. Seen from a purely scientific angle, i.e. by the (slowly changing) norms and rules of “true science” of the time, Hobbes’ Aristotelian plenism, based on Aristotle’s “horror vacui”, was perfectly sound and valid. Hobbes thus made fun of Boyle’s “pseudo experiments”, and the idea that something would contain “nothing”. But that did not help him much. The Royal Society was socially and intellectually more influential, and Hobbes had but a few friends. One of them was the new post-revolution King Charles II.

But although the king liked Hobbes’ originality and sincere dogmatism, and also granted him a lavish pension, he could not place Hobbes within the Royal Society, which was the intellectual powerhouse of the Restoration era and, as its name suggests, also sponsored by the king. Hobbes was not welcome in the club and not accepted as a member; he could only fume and rage, which did not make his stance look any better. But to presume that Boyle had experimental proof whereas Hobbes did not would be the wrong conclusion. What counted as “proof” in general, and what counted as “experimental proof” in particular, was a heavily contested issue at the time. It was not decided by any “objective facts”, but by social sympathies alone.

If nothing else, this shows i) that there is no such thing as “the science” and no sensible statement along the lines of “the sciences tell us…”, and ii) that there is no hope that there will ever be an AI guiding humanity along “objectively valid” strata of cause and effect.

Some people might think this post is directed against #FridaysforFuture and kindred spirits. They would be wrong, because you don’t win an argument by being intellectually dishonest. There is an abundance of good reasons to be worried about man-made climate change; you do not need to resort to “the sciences” to support them.