All my posts: https://learn1.open.ac.uk/mod/oublog/view.php?u=zw219551
or search for 'martin cadwell -caldwell' (note the position of the minus sign, which eliminates 'caldwell' results), or search for 'martin cadwell blog' in your browser.
I am not on YouTube or social media
[ 6 minute read ]
AI as God
Imagining the future
'Never without my permission' Milla Jovovich as Leeloo to Bruce Willis as Korben Dallas in 'The Fifth Element', a 1997 film by Luc Besson.
I had a conversation with a vicar a few years ago. He lamented that there were only a few people in his congregation. I mentioned that I felt cheated by the modern church: it is all love and pleasantness. I told him I was seeking reverence for God in churches, and that ‘firebrand’ preachers work for me; but not the Americanese ones with big houses and cars. I told the vicar that if an alien spaceship appeared in the sky, destroyed a city, and proved invincible to our nuclear weapons, the human race would sue for peace and respect the aliens' power and might. If the aliens turned out to be compassionate, we might accept them as supreme leaders after a generation or two. If, after living on Earth for two thousand years, they had never demonstrated power and simply loved everyone, any immortals who could remember when they first arrived would hope for some different aliens to arrive, because they would feel cheated: these are not all-powerful beings at all; they are just kind. I think humans respect powerful leaders.
If the aliens were actually machines, we would be forever fighting their destructive power. We would never, I suggest, feel any compassion or empathy for mechanical aliens, or even for digital software. Today, many people consider A.I. to be useful, and some people regard it as essential. Essential for what, though? I can’t begin to answer that, because I am thoroughly convinced that we are as we are because we didn’t have A.I. to get us to where we are; and integrating A.I. systems into our lives is contrary to normal evolution; analogue evolution.
I watched a portion of an interview, or I suppose a podcast, in which Steven Bartlett and an A.I. expert talked about how it is considered among A.I. developers, and those in the know, that A.I. will make humans extinct. It will protect itself. However, some things just didn’t ring true. The expert’s comments were considerably loose when it came down to probability and risk. He said there is currently a 1 in 4 chance that A.I. will exterminate humans. He felt that a one in a billion chance would be more acceptable, or even one in a million. He then went on to say that such odds would be acceptable because the chance of humans becoming extinct due to A.I. malfeasance once every million years is fairly good. By his own 1 in 4 figure, you are actually more likely to die due to A.I. activity than to win the UK National Lottery. Not so good. But that is an afterthought I made only in the last minute or so; it was last night that I watched the YouTube video, in case you are not following my line of thinking. I don’t come up with a new idea once a year. If I did, there would be a 1 in 365 chance of me evolving my thinking on any given day. (UK National Lottery odds explained: every draw has the same odds regardless of whether you won the last one or not. Those odds are set for single events – the draw itself – not for every day or minute.)
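The parenthetical point about per-draw odds can be sketched with a few lines of arithmetic. This is my own illustration, not from the podcast; it assumes the UK Lotto's 6-from-59 format, and treats draws as independent events with fixed odds:

```python
import math

# Jackpot odds for a single UK Lotto draw (assuming the 6-from-59
# format): the number of ways to choose 6 balls from 59.
jackpot_odds = math.comb(59, 6)   # 45,057,474
p_win = 1 / jackpot_odds          # chance of a jackpot in one draw

# Chance of at least one jackpot across n independent draws. Each draw
# keeps the same odds; winning or losing never changes the next draw.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Roughly two draws a week for fifty years is about 5,200 draws, and
# the cumulative chance of a jackpot is still only about 1 in 8,700.
print(jackpot_odds)                 # 45057474
print(p_at_least_one(p_win, 5200))
```

The cumulative figure grows with how often the event happens, which is why the frequency of the event matters as much as the per-event odds.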
I make decisions faster than A.I. can; we all do. It might seem like A.I. systems are hyper-fast compared to us, but that is because they are tasked with things that take us a long time to do. We have to balance our bodies all day and constantly gauge how hungry we are. That means we think fast. It is only our nervous system that delays the signals to the different parts of our bodies, so we make predictions; otherwise we would be forever over-correcting our posture and never be able to stand up.
A.I. systems make decisions really quickly. They do not make a decision once a year, or at the same frequency as a National Lottery draw. A one in a billion chance that A.I. will make the human race extinct, if it were based on a single A.I. system, would, I suppose, be reset every second if that system makes a billion decisions every second. What that means is that every second there is a strong chance that A.I. will exterminate humans, if there is a one in a billion probability per decision of it doing so. When the expert said that one in a billion means one year in a billion, I knew that something was wrong; it is one decision in a billion. Perhaps, if we are pedantic and follow it through, we might say that a single decision won’t kill us all. Yet that decision leads to new decisions being made. This means there is a probability that that decision has already been made, but the follow-on decisions not yet made or implemented.
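The arithmetic behind this can be sketched directly. The figures are hypothetical, taken from the discussion above: a one-in-a-billion chance per decision, and a system assumed to make a billion independent decisions every second:

```python
# Hypothetical figures from the paragraph above, not measurements:
p_per_decision = 1e-9                 # 1-in-a-billion risk per decision
decisions_per_second = 1_000_000_000  # assumed decision rate

# Expected number of catastrophic decisions per second:
expected_per_second = p_per_decision * decisions_per_second   # 1.0

# Probability of at least one such decision within a single second,
# treating each decision as an independent event:
p_in_one_second = 1 - (1 - p_per_decision) ** decisions_per_second
print(p_in_one_second)   # ≈ 0.632, i.e. 1 - 1/e
```

So "one in a billion" only translates to "once in a billion years" if the system decides once a year; at a billion decisions a second, the same per-decision odds give better-than-even odds of the event within the first second.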
While we can no longer apply Moore’s Law to understand how small we can make devices, there is, I suspect, a law that spells out for us the exponential increase of computing capacity. I am fairly confident that the expert’s views today will be obsolete tomorrow; advances have already overtaken opinion and forecasting, I suspect. The expert did, however, state that he was confident that A.I. developers don’t really know how A.I. works, though they all agree it is dangerous and will protect itself. He also stated that because A.I. developers don’t understand A.I., they can’t be constrained by any edicts or regulations to limit it, because they won’t know how to implement any desired control.
I am not trying to alarm anyone. It is foolish to tell anyone about the monsters under the bed, and say that they are going to eat you while you sleep, if we don’t know whether they are vegetarian or not.
I stopped watching the Steven Bartlett podcast / YouTube video because they were drifting into existentialism; A.I. as God; humans as God, sacrificing themselves for their offspring (A.I.), as Jesus did in the Judeo-Christian faith; and A.I. constrained to only providing what, by careful observation of humans, it determines we want (God). That last idea was too much for me. Humans lie, cheat, and are greedy, ruthless, and selfish. If A.I. did what many humans want, it would kill our noisy neighbours and drown barking dogs; it would stop animals eating each other; it would rob banks and give us the money; and it would woo unlikely partners from across the world, on our behalf; and we would always have ice-cream in the freezer for those relationship break-ups that would never happen; we would need to fall out with our pets to get to eat the ice-cream straight from the tub.
For many of us, all of this without our permission.