
The Skeptics' Guide to Artificial Intelligence

I have been listening regularly to the Skeptics' Guide to the Universe for a few years, and I like it a lot. But sometimes you've got an itch that just won't easily scratch away, and this is one of those times. I should note that I sent this to the website through their contact form, but I've never had any luck getting anything through (or any response) the several times I've tried in the past, which is why I post it here so it doesn't go to waste. I don't actually know if anything goes through there, or if I'm ignored, or marked as spam, or if they're too busy for my pedestrian requests; it could be anything. Normally I wouldn't care, but I wrote this rather lengthy letter about something I care deeply about (and hopefully it can also teach someone out there something about the nuance of the topic) and decided to share it. My Skeptics' Guide to Artificial Intelligence is written with a love for the show.

-------------


So. You know how you guys sometimes riff on the theme of hype and misinformation, and how that can create a sticky, iffy goo of science fiction that gets in the way of actual knowledge and understanding? Like how pseudoscientific nonsense gets sold successfully because someone has thrown some tasty sciency words into the otherwise disgusting stew?

I wanted to talk to you guys about Artificial Intelligence. I used to make a living in the field (video motion detection systems using cumulative filtering of neural nets on near real-time embedded systems clustered over WAN), and still meddle and keep up with it through my work and as a hobby. I’m also a philosophy buff, specialising in epistemology and that pesky membrane between technology and humans.

Now, don't get me wrong, I appreciate your enthusiasm and the continued focus on science and technology as skeptics. I love the show, and listen to every episode, swooning over Cara, rolling my eyes at the brothers, nodding here and there, even at Evan, and of course generally wishing I could be more like Steve. However, there have been a number of times I've felt the need to write to you about toning down the hype a bit, and hopefully setting more realistic expectations around what AI is, what it does, and what it will more realistically do in the future. And I apologise upfront for this being a bit long; there's a lot to get through.

There’s a TLDR at the end for your convenience. I tried to write a bit more tongue-in-cheek, but this rather long tirade happened instead. I am so sorry.

Words and Wittgenstein

First and foremost, artificial intelligence is a misnomer that's not really helping much; the word 'intelligence' here is pure science fiction, a word chosen as inspiration rather than description. Here's a generic summary of intelligence I think most can somewhat agree with:

[...] the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.

Let's start with something obvious and ask a question about what sits at the core of a computer: if you write a piece of code that takes some numbers in and calculates a returned value - for example, you put in 10 and 20 and you get back 200 - what word describes it best? What would you call that? Logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking or problem-solving? Probably not any of them.
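To make that concrete, here's a minimal sketch in Python (my own illustration, nothing more) of the kind of code I mean:

```python
def combine(a, b):
    """Take two numbers in, return one number out: 10 and 20 give 200."""
    return a * b

print(combine(10, 20))  # 200 -- plain arithmetic, nothing more
```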

But I bet I can get some of those words if I make a system with more inputs and more complex outputs. I might get most of them if I can return a complex output that has some human characteristics. Heck, if I make it complex enough, I can make you believe that my machine's output is really just another human sending the output over the network. So let's ask the question: at what scale of inputs and outputs would you start using which words? What is your threshold for the Turing test?

We've always talked about intelligence in a similar way in real life: an amoeba isn't as intelligent as a roundworm, which isn't as intelligent as a butterfly, which isn't as intelligent as a bird, which isn't as intelligent as a dog, which isn't as intelligent as a human, and all the variants in between. So let's look at Wikipedia's entry on Intelligence, where AI is discussed:
Kaplan and Haenlein define artificial intelligence as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.
https://en.wikipedia.org/wiki/Intelligence#Artificial_intelligence
That's pretty good as definitions go, but as with all good definitions, it uses words with further definitions and interpretations and pitfalls. What's a system? What does it mean to do things correctly? How do we define learning? How do we define goals and criteria for achievement? What does the system adapt to, and how? Each question pulls up more words and definitions and more questions.

We humans do this amazing thing where we string together words we think we know well in isolation, and assume they convey the right meaning when read together as a whole. The human capacity for putting together words that never have been together in that way before and still make some kind of valid sense or new meaning is simply astounding. You might even say it’s a key to our intelligence.

Wittgenstein made an effort to point out the hazards of these language games; just because you understand how a pawn in a game of chess moves doesn't mean you understand what a pawn is, much less how to play a full game of chess. The language that we use in order to explain what AI is all about is fraught with distribution errors, with unclear and inconsistent definitions of what we're trying to say. We don't have any meaningful consensus on what "artificial intelligence" and all its sub-definitions and concepts are. It's deeply frustrating how sure we are about something we know nothing about.

Techno babble

If you ask an AI researcher or Elon Musk or Google or some techno personality about all of this and when general AI could happen, you get either 1) a variation on the Moore's Law argument (more on that later), or 2) a quantum-leap answer in the direction of quantum computing, which is really a weak version of the first. In short, the answer seems to be: better machines, better tech, faster machines, more machines, faster tech.

The AI community is rife with techno babble sprinkled with tattered hopes and broken dreams that will resolve "any day now, right around the corner!", and has been doing so really successfully for the last 40 years. We're all so certain that the only thing missing to crack this nut is Moore's Law, that better code and more RAM and bigger data are the key. But that assumes that what we've got now is even a little bit intelligent, and that amplifying it will create more intelligence rather than a larger but equally stupid one. Intelligence is not synonymous with complexity.

All AI systems out there are based on the same principle: input data gets filtered down to signal, then processed through different levels of pattern recognition, which then create pattern tokens as output to different channels (including feedback loops). Design choices are plentiful, of course: how to filter what and when, how many layers and how to direct inputs through them, what to do with noise and/or noise tolerance when filtering to signal, signal clustering and layer directing, pattern recognition functions, how many dimensions to process and how, tensors, slides, feedbacking, signal reverbing, pre-loaded vs. evolutionary retention and dismissals, criteria for said retention, thresholds and parameters for outputs, conceptual model building and manipulation, ontological identifiers and models, blah, blah, blah. The list goes on and on into academic-paper oblivion: methods, techniques, workflows, filtering, layering, reasoning rules, state machines, and oh god make it stop. Did I mention just how prevalent crap-in/crap-out and data filtering truly are? How training a neural network is easy on the surface, but how incredibly hard it is to untrain one or create an acceptable real-time loop of memory? No? How semantic models and parameterisation of thresholds can easily crush your project? How millions of dollars are poured into a black hole of conceptual model creation and manipulation? The techno babblers don't want to talk about it much.
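To make that basic principle less abstract, here's a deliberately tiny sketch in Python of the same shape: filter to signal, a couple of pattern-recognition layers, and an output token. The weights, sizes and noise threshold are all made up for illustration, not taken from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_to_signal(raw, noise_floor=0.1):
    """Crude noise gate: anything under the (made-up) threshold is dropped."""
    return np.where(np.abs(raw) > noise_floor, raw, 0.0)

# Two 'pattern recognition' layers: just weighted sums plus a nonlinearity.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 3))

def forward(raw):
    signal = filter_to_signal(raw)
    hidden = np.tanh(signal @ W1)   # layer 1: patterns over the signal
    scores = np.tanh(hidden @ W2)   # layer 2: patterns over patterns
    return scores.argmax()          # output 'token': index of the strongest pattern

print(forward(rng.normal(size=8)))
```

Everything past this point - the feedback loops, the retention criteria, the conceptual models - is design choices piled on top of that one loop.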

The fundamentals of even the most modern AI are still, today, decades old; it's bit shifting and heap allocation and pointer hacks and clever array and object manipulation of binary gates that simulate higher resolutions. Moore's Law has indeed made a lot of things possible and useful, but - and here is the clincher! - it has tricked a lot of people into thinking there's something human going on here. No, we're merely fooled by the overwhelming complexity into skipping directly from boolean logic to intelligence, bypassing almost all of those words we mentioned earlier: understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking and problem-solving. So, let's take a step back and look at those parts in the middle we just jumped over.

Let's try to convert those human words into something more applicable to a computer, for the sake of the argument: understanding becomes a blend of an ontologically sound model using logic over recognised patterns; self-awareness could be similar to adaptive code manipulation through a conceptual but logical model; learning is systematic manipulation of the aforementioned models; emotional knowledge the same; reasoning the same, as long as the logical model is sound (probably through a weighted-edge model of sorts); planning is computational simulation plus outcome pattern recognition against given criteria (or models; nifty reusable word, that); creativity is often simply a bit of randomness thrown in for fun; and critical thinking and problem-solving are variations over the same model manipulation. See, it's doable! It's possible! Make it complex enough, and intelligence emerges!
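That "bit of randomness" point is easy to demonstrate. A toy Python sketch, with pattern scores I invented for the purpose: "problem-solving" always picks the strongest pattern, and "creativity" is the exact same scores with a dash of chance:

```python
import random

# Hypothetical pattern scores a system might produce for its next output.
scores = {"the": 0.5, "a": 0.3, "duck": 0.15, "Wittgenstein": 0.05}

def problem_solving(scores):
    """Always return the strongest pattern."""
    return max(scores, key=scores.get)

def creativity(scores):
    """The same scores, plus a bit of randomness thrown in for fun."""
    return random.choices(list(scores), weights=scores.values())[0]

print(problem_solving(scores))  # always 'the'
print(creativity(scores))       # usually 'the', occasionally 'Wittgenstein'
```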

Emergence

Very often it's said that Turing's point still stands: if it quacks like a duck, and looks like a duck, and walks like a duck, then it probably is a duck! This is the argument from complexity and emergent properties, and in some ways it is correct that this is where we're at: the systems are now complex enough to look like they have some human-like abilities. All we need to do is apply Moore's Law, and voila!

It just might be that, technically, we can create an AI. I'm a hardcore physicalist, so sure, it must be possible, at least in principle, when we build with "the right stuff". But that right there might just be the main issue in and of itself: the stuff we're currently building with is bits and bytes and silicon and integers and binary memory and registers.

When we simulate a neuron in a computer, as many AI researchers do when they try to use human modelling as a basis for AI, we really have no idea what we're doing. We are converting a biological analogue into a digital, filtered signal, and we just patch things up in ways that "work", which may or may not be what's actually happening in nature. It doesn't matter at what speed these simulations do our bidding, or what capacity they have for capturing some phenomena; it just might be that there are some things that don't translate well into digital form.
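For a sense of how little of the biology survives that conversion, here is the standard artificial "neuron" as a minimal Python sketch (my own illustration, not anyone's research code): a weighted sum and a squashing function, and that is the whole thing.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole 'neuron': multiply, add, squash. No chemistry, no spike
    timing, no dendritic structure, no neurotransmitters -- all discarded."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(artificial_neuron([0.5, 0.8], [1.2, -0.7], 0.1))
```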

The computer screen in front of you can most likely represent something like 2 to the power of 24, or 16,777,216, color variations (24-bit color representation, possibly with 8 bits of transparency on top), and for most people that's more than enough. We watch movies, do image manipulation and otherwise enjoy our colorful computing lives using only these roughly 16 million colors. It looks awesome. This is because we humans have eyes that can discern around 10 million colors or shades thereof, again roughly speaking. However, how many colors are there, really, out there, in nature? How much are we not able to see?
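The arithmetic, for the record (standard 24-bit colour; this is just the sums from the paragraph above):

```python
bits_per_channel = 8                  # red, green and blue channels
channels = 3
colors = 2 ** (bits_per_channel * channels)
print(colors)                         # 16777216, i.e. 2**24
print(256 * 256 * 256 == colors)      # True: 256 shades per channel
```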

The answer to that is: almost infinitely many, because the scale of colors in the universe is (probably) infinite. It's so large a spectrum, with such small gradients, that we don't have instruments good enough to capture either its size or its depth and do it justice. In other words, as far as we can see with our eyes it looks like we're capturing it all, but in reality we're seeing a tiny, tiny, infinitesimally small fraction of reality, simulated twice over: first as what the brain simulates from what our eyes show us, and then again by the computer screen.

I suspect that what we capture in our computers in regards to AI is those 24 bits of richness: an infinitesimally small part of reality, so small as to render the attempt laughably futile. We make a 24-bit-deep AI when, in order for it to have the features we'd like it to have (or speculate it should have), we're off by such a large amount that it seems, well, pointless. Sure, we can simulate something that looks a bit like reality in our computer, but, really, it's a long, long way off.

Any emergent property in AI that we're looking for is going to be a strictly human invention based on human criteria. It didn't emerge as physical behaviour or chemical reactions out of the fabric of the universe, like something new you get when you fuse hydrogen and oxygen together. No, it's output data judged to be emergent by our very own intelligent brains. If your brain thinks it sees something intelligent emerge, then by your thoughts and words alone does that emergent property exist. There is no other way; we humans are fundamentally isolated by our own intelligence, we compare any other intelligence against our own, and we judge it by our human values. Even if our computers had the capacity to fulfill our AI dreams, how would you judge their intelligence? We are so quick to dream about the concept of super-intelligence, yet we don't even understand plain old boring intelligence.

Anthropological fallacies

There is a massive human-injection problem in all of this, and it looks something like the "who's watching the watchers" paradox: the criteria for input signals and the judgement of the output are fully human endeavours, so are we building artificial intelligence or merely simulating some of its more obvious parts? And who's simulating that simulation?

A computer's memory is nothing like our human memory. Sure, both concepts use the word "memory", but they are two absolutely different things. This is an example of how we create a piece of technology and give it an anthropomorphic name because there are some similarities. But to then mistakenly expect that computer memory works the same way as human memory is two levels up from a category error; computers don't remember anything, they have a limited amount of really fast storage that we call "memory", in addition to the slower but more abundant kind we call hard disks, and so on.

A computer's CPU is nothing like a brain. We sometimes say that the CPU is the brain of the computer, but it just plain isn't. A CPU has registers - slots of memory - into which you cram a piece of data from somewhere else, do a bitwise manipulation of it (this is the essential part of what we know as software, or a computer program), and plonk it back into RAM or similar. Repeat this a few billion times a second. That's a CPU: take byte, fiddle with it, put it back.

A computer program - software, for short - is nothing like thinking or how the brain works. It's a long string of instructions on how to shuffle pieces of computer memory around. If this byte has this bit set, do this shuffling over here. Otherwise, do this shuffling over there. Move these bytes from here to there. Change those bytes according to this logical formula. There is nothing here except logical rules over bytes of data.
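Strip away the abstraction layers, and a sketch of the whole business looks something like this (a made-up miniature "machine" in Python, but faithful to the principle): load a byte into a register, fiddle with it, branch on a bit, put it back.

```python
memory = bytearray([0b1010_0110, 0b0000_1111, 0b1100_0011])  # just bytes, no meaning

# 'Fetch': cram a byte from memory into a register.
register = memory[0]

# 'Execute': if this bit is set, do this shuffling; otherwise, do that one.
if register & 0b0000_0001:             # lowest bit set?
    register = (register << 1) & 0xFF  # shift left, keep it one byte wide
else:
    register = register >> 1           # shift right instead

# 'Store': plonk it back. That's the whole trick, repeated billions of
# times a second.
memory[0] = register
print(memory[0])  # 83 -- the same bits, shuffled according to a rule
```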

A computer's input is nothing like seeing, smelling, tasting or hearing. A computer sees only long streams of bytes, and we use computer programs to shuffle things around in order to create a symbolic sense of what those bytes could mean as useful output to us humans. We literally write computer programs to convert bytes into human symbols, without any knowledge or understanding of what those symbols are. For example, the computer might juggle some bytes around and come up with this stream of bytes: 83 97 109, which in binary looks like 01010011 01100001 01101101. If you convert each of these numbers into symbols, then 83 we have defined - outside the computer! In a paper manual! A human convention! - as an upper-case S, 97 as a lower-case a, and 109 as a lower-case m. Sam. Three symbols which, according to one of our human alphabets, are the letters S, a and m, and which together create, to us humans, the word Sam. To the computer, however, it is three byte values of 83, 97 and 109. We - the humans seeing these symbols on a screen - are seeing the word Sam. The computer does not.
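You can watch that convention at work directly; the mapping from 83, 97 and 109 to "Sam" is a lookup table we humans agreed on (ASCII), not something the machine knows:

```python
byte_values = [83, 97, 109]                # what the computer actually has

for b in byte_values:
    print(b, format(b, '08b'), chr(b))     # 83 01010011 S, and so on

print(bytes(byte_values).decode('ascii'))  # 'Sam' -- meaning supplied by us
```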

The anthropological principle is leading us astray again and again: we simulate intelligence and immediately start to dream about all the human properties it will have. You can call what you created intelligent, you can create tensors, long- and short-term memory, you can manipulate memory, define uptake, retention, shapes, behaviour and so on, you can maybe define your AI a bit better, but you're still applying human labels to conceptual bit-shifting on binary switches with unforgiving resolution and unbreakable logic.

TLDR

AI is not intelligent. It's clever but limited algorithmic software written by humans to solve human problems. And so the trap is this: what we've got now looks so much like a path to the real thing that we mistake it for the path we should have taken.

What we create in AI is not intelligence, but a limited, low-resolution simulation with some useful human characteristics, especially around pattern recognition. Moore's Law is tempting us to treat everything as a nail for which all we need is a bigger and better hammer. But we just might be building our AI out of something that probably can't build it.

Defining, and possibly fearing, super-intelligence before you understand your own intelligence seems dangerous. Don't fear artificial intelligence; fear our human gullibility, which so easily mistakes simulated human characteristics in feedback loops for something trustworthy, good, correct, or otherwise real. What you should fear is artificial stupidity.

So, I beg you, use the concepts a bit more carefully. Don't assign big human characteristics to simulations of tiny parts of them. It's not intelligent. It's stupendously low-resolution. It's an anthropomorphic, tech-infused acid trip, a systematized pattern-recognition machine disguised to look human-like by puppeteers trying to simulate reality. The hype machine is purring successfully, fuelled by sci-fi dreams, popular personas and lots of dollar bills. But that doesn't make it real, not now, and not even in the future with the power of pure imagination.

AI is good at pattern recognition. Really good. Sometimes quite amazingly so. But don't mistake it for intelligence, because when you scratch the surface a bit, you'll find that our AI systems mostly suck at almost everything else we think of as "intelligent".

Again, sorry for banging on for too long.


Cheers,
Alex

Comments

  1. I guess I should link to a longer and more tedious piece I wrote in a similar vein about 7 years ago, on Sam Harris' uncritical banging on about AI. I borrowed and polished parts of this post from that one: https://sheltered-objections.blogspot.com/2015/05/ai-and-bad-thinking-sam-harris-and.html

