
Hyper AI: Sam Harris and his techno-bros on Artificial Intelligence

TL;DR: Don't be seduced by Hollywood bullshit! Engage those critical thinking skills.

Bias and background

Recently someone asked Sam Harris, in his "Ask Me Anything #1" episode, about his latest views on Artificial Intelligence (AI for short), and they weren't all positive. You see, he was lately invited along to a conference on threats to human existence, and AI features very, very high on the list of potential human extinguishers. He names Elon Musk as his friend and inviter, and refers to "people close to the development of AI" at the conference who all agree with the following:

AIs will get smarter or more advanced than human intelligence, will be able to modify and improve their own code, and will come to some negative conclusion about us puny humans, whereupon the next logical step is, of course, "Exterminate the humans!" I might be paraphrasing.

Harris didn't list who these people "close to the development" were, but I can probably rattle off a few names that might have spoken or been present there, like the electronics maker Ray Kurzweil (and I'll point to the latest of his books on the subject, "The Singularity Is Near"), the philosopher Nick Bostrom (who lately paid a visit to my favourite philosophy podcast, The Partially Examined Life, where the episode with Bostrom has the best and possibly most fitting cartoon version of Nick!), possibly Robert Li or Bill Gates and/or a bunch of other high-profile tech company big-wigs, and maybe some smaller characters like Luke Muehlhauser (who I have a great deal of respect for). There are tons more, because this is a fun subject!

Also, for reference, have a quick look through this page on AI takeover (an AI has to take over something in order to wield power to kill us, of course), and I'll just casually note that half of that page consists of examples of these bad AIs from science fiction. Let that sink in for a few seconds.

Last, but not least, let me add that I used to develop AIs for a living, in high-security surveillance systems using video motion detection, for prisons, nuclear power plants, and the like. And even though I don't work in that field anymore, I still try to keep up to date with the goings-on in AI. I really love this stuff!

Types of AI

Ok, so to be fair, there are a number of different kinds of AI people talk about, and it's easy to get them all confused; they range from the simplest computer analytics of yesterday to the superbeings of the future. Within the AI community (there's no such thing, really, but let's make it one of all the people who have some kind of interest in all things AI) we these days talk mainly of two types of AI:

Weak AI

This is the current state of affairs, with big computers crunching data from the real world, using analytics and neural network simulations to report to and interact with us in meaningful and helpful ways. Nobody really invokes the word "intelligence" with this one; it's only there for historical reasons. Sam Harris admittedly speaks of an AI that is somewhere between this one and the next. This is the one we all use as a platform to leap into the future of possibilities from ...

Strong AI

... and this is where we usually land: the super-intelligent, possibly conscious piece of software running on complex computers, with super-human intellectual capabilities. I call this the Hollywood AI, but a lot of people call it "superintelligence" or "strong AI" or "conscious machines" or even "artificial general intelligence (AGI)" or some such. However, this one is so loosely defined as to defy intellectual discussion, as we'll soon see, and in many ways this is the one I protest the most. Shall we?

I realize that both of these types of AI use the "intelligence" moniker. For the first type, it's for historical reasons, as mentioned, since "weak AI" was the thing we used to call just AI. This is because when the terms were coined, there was little to no deep thinking or hard knowledge behind the issues and terms thrown around to discuss these things, and, I suspect, our perfectly vibrant human creative urges made us project into the future what incredible feats of software we could produce; hence they were thought of as intelligent back then.

But of course 30 years of insanely little progress in the "intelligence" department requires us to rethink our concepts a bit, and so the weak and strong monikers were invented to denote some kind of magic threshold between where we're up to and where we might be going. When "people in the know" are asked when they think this "strong AI" might happen, they project some time into the future, from "the next 5-10 years" (which is what experts have been claiming for the last 40 years, at least, but of course this time it will be true!) to roughly 50 years or so, depending on the expert in question.

They're so wrong.

Intelligence

Let's jump off the deep end; the biggest issue here is that nothing worth talking about in this "debate" is well-defined or even properly understood. Let's start at the very beginning with the following two questions:
  • What is artificial?
  • What is intelligence?
If we are to talk about AIs being either good or bad, or in some other state that can be considered a threat to human existence, we need to define what this thing is; otherwise it is pure speculation, or science fiction, and if there's anything these people really, really hate, it is to be put into the bucket of science fiction. So I'm not going to do that. Yet.

Bostrom and Kurzweil (both mentioned earlier as big "strong AI" proponents) are fond of defining themselves a bit out of it by saying the threat is called "super-human intelligence", or something to that effect. Now, if we poke around a bit, we might get something like this:
"A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind"
So it's a thing that has an intelligence surpassing ours, which raises a couple of questions:

1. What is intelligence? Well, it's neither here nor there; there are so many factors that come into it, and the definitions vary greatly across people and time, encompassing different models of values and epistemology. In short, we're not quite sure how to talk about it. Are we after the stricter definitions of universal traits, or the more folksy way of talking about the smartness of people (and things), or somewhere along the scale? Or outside it?

Consider this: think of all the people you know of that are really smart, and now list the things that make them smart. How many things are there on your list? In fact, in how many different kinds of ways do we think smartness, or intelligence, comes into play? I have a friend who's really smart with money, but he's a bit of a dick. Does this mean we need to define dickishness as part of being smart? No one would say that, for sure, even if it just might be true. But then, I have another friend, and she's mathematically smart and really clever at crossword puzzles, yet struggles with depression.

So, are we to define up all the positive traits, and try to leave out all the bad? In my examples above we could easily define dickishness and depression as negative things we don't want, but keep in mind that these are broad, overarching, stereotypical and imprecise words we use to explain something far, far more complex underneath. There is no such thing as dickishness; it's a number of factors of a person's traits and properties that together add up to the term we use. The same with depression. Some of these traits and properties might be needed for intelligence to happen, like being obsessive with details or logic. Sure, that might lead to logical positivism (yeah, not an endorsement), or maybe true mathematical insight, or, you know, OCD. I'm not going to say that it is either positive or negative, because the point is that it can be both and neither.

Now the same with the word "intelligence." See the problem?

2. What does it mean for one intelligence to have surpassed another? Intelligence is a compound of a number of traits and properties of beings, like a big city where some buildings are tall and some are small, some are this color, others have that kind of roof, this one houses a lot of people but this one houses only a few, this one has computers in it, this is a library, this, that, this, that ... hundreds of buildings of various kinds. How do you compare Mumbai, India to Rio, Brazil? How do you proclaim that one of them has surpassed the other? Sure, in people, or money, or the number of houses, or some other trivial datum, but surpassing as a whole?

If you don't think deeply about things, then sure, Mumbai has surpassed Rio, or the other way around, depending on your bias and information. Now what does that tell you? Well, it should give you a hint that "intelligence" is nothing more than a statistical average explained in the vague terms we people use on each other. And that is no platform for any serious debate about whether one loosely defined thing is better or worse than some other loosely defined thing.
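
To make the point concrete, here's a toy Python sketch. Every trait and number in it is invented by me for illustration, not real statistics about either city; the shape of the problem is what matters: comparing two compound things only gives you an answer once you've smuggled in a weighting, and the weighting is where all the bias lives.

```python
# Toy sketch with made-up numbers: two "cities" described by a handful of traits.
# There is no single, obvious way to say one has "surpassed" the other;
# the verdict flips depending on which trait (or weighting) you happen to pick.
mumbai = {"people_millions": 20.7, "libraries": 120, "tall_buildings": 200}
rio    = {"people_millions": 6.7,  "libraries": 150, "tall_buildings": 90}

for trait in mumbai:
    winner = "Mumbai" if mumbai[trait] > rio[trait] else "Rio"
    print(f"{trait}: {winner} 'surpasses' the other")

# Mumbai "wins" on people and tall buildings, Rio on libraries. "Which city is
# greater overall?" only has an answer once you impose an arbitrary weighting --
# which is exactly the problem with one intelligence "surpassing" another.
```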

Building blocks of AI, or any I

Each "intelligence" consists of a set of building blocks of various sizes and capabilities. Let's dig into some common definitions of the kind traits we think an intelligence should have, and just briefly point out some complexities and issues along the way. And keep in mind that these kind of definitions are rife with fast and loose facts and definitions (although I would point out if they are completely wrong or very different from what these "near the AI development" people would actually say), so this is just to point out some surface problems before we get deeper down.

Let's for the sake of argument use this one:
"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do"
Reason - "[...] is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and changing or justifying practices, institutions, and beliefs based on new or existing information". There's almost too much to unpack in this one alone, but with it pointing to consciousness, facts, beliefs, justifying practices and other epistemologically difficult problems, I'm not sure what we think our current technology is capable of doing with any of these. But it's pretty clear that we don't actually agree on what reason is, and how it's supposed to work. The only thing we've really got down in terms of computer systems is the logic part; I'll talk a bit more on this later when I get to mathematics and Gödel.

Plan - "[...] is one of the executive functions, it encompasses the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal." Here we get into desires, thoughts, goals and actions. The latter two I can see being approached in an AI with some degree of complexity, but the former two are seriously hard nuts to crack; they imply non-fatalistic networks of information that affect other non-fatalistic networks in order to evaluate and reshape the information, not to mention some pretty complex handling of data that deals with this thing we call "memory." I'll talk more about that later.

Abstract thinking - "[...] in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. "An abstraction" is the product of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category" So, how does an AI today deal with concepts? We mostly use symbolic logic and languages to do AI, which is fine and well, but these are representational in nature and backed by logic; they are, in other words, bound to what we already know in terms of computers and programming. They fall into mathematical pitfalls and don't offer us any solutions to the aforementioned hard problems, but most importantly they are guided and defined by the programmer. There is no intelligence here, only symbolic definitions defined elsewhere.

I could go on, but the point is that all of these things are really, really hard problems to solve. Sure, hard doesn't mean impossible, but don't let that truism hide the fact that when we, today, talk about what AI can and can't do - and hey, that's an important point, because from what it can do now we project what it might be able to do in the future - we see that we're scratching a limited amount of surface in very limited ways, and from this limited progress we extrapolate huge progress in the future.

There's also the deeper point that once you start parsing out what is actually involved in these things, the complexities stand out more, and I know for a fact that a lot of people "close to the development of AI" tend to forget this (often unknowingly), so it serves as a timely reminder.

Now, as a pointer back into that mystical and tedious land of philosophy, I want to remind us all that we have a slightly more difficult issue at hand; we're all engaging in language games. I refer you to Wittgenstein, one of my hero philosophers (yes, he's one of the few in the heavy-weight category), especially his later incarnation, where he rejected his earlier axiomatic and logical-positivist attempts at epistemology. He's got a lot to say on a number of topics, of course, but here I'd like to riff a bit on the fact that we're all speaking a variety of languages, using a variety of words that mean different things in a variety of contexts, and that there are interpretations and translations between a variety of players (where the self, the You, is playing several of those parts) in trying to figure out meaning and message, as well as the different interpretations you get from a variety of actions based on someone's conscious and subconscious workings. So, keep this in mind; I'll refer to Wittgenstein later on, but if you should read only one of the linked articles, Wittgenstein on language games is the one to go for. He was a really intelligent guy.

So, when I say "Wittgenstein was really intelligent", what is being said? Well, it depends. It's not that clear. Maybe I should write a book about it to clarify my position.

Superintelligence

Nick Bostrom has written a book like that, and his "Superintelligence" tries to a) define what it is, and b) convince us that the threat is real. I'll use this book as a shorthand way to refer to the people I listed further up. Sure, there are subtleties and variations, but for simplicity's sake, I think it rather nicely represents them all in this debate, as a lot of them also happen to endorse his book.

In order for something to be superintelligent we need to know what intelligent is. However, I think it's reasonable to argue that we don't really have that knowledge, but let's again for the sake of argument give some slack to all these definitions, and maybe talk about what this thing is supposed to be like, according to Bostrom.

Bostrom says that it is "[...] an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills"

Let's compare that to the state of the AI art. What are we currently up to? Well, a huge proportion of it is recognising things. Just, you know, look at a picture (or a sound, or a stream, like a video, which was my field of expertise for many years) and tell me what you see. This is an important part of pattern recognition, and in my view sits at the core of the human empiric experience; if you can experience things and link this stream of input to symbols for further processing (like an input into a neural network, for example), then at least you have a framework for interaction, and a seed of something that might have epistemological grounding. (A lot of this talk is about images, but it applies equally well to sound and touch.) So far, so good.

So, how is that going? Well, not all that great, even though the machines are getting better at looking at a picture and giving you some keywords for what they recognise that thing to be. In clearly outlined images that are somewhat histographically easy on your filters and thresholds (I could rant for hours about just how shady [pun for the initiated] the techniques are that we use to trick normal people into thinking our software is clever, but that's a different post waiting to be written), sure, we get ... a few words about what might be in an image. Normally this is done by running millions of images through a neural network simulation, throwing statistical analysis in there for good measure, and if you think intelligence is more or less the same as statistics, then sure, maybe we've come a long way.
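
To show just how un-magical this is, here's a minimal Python sketch. The function name, the threshold and the pixel values are all invented for illustration; no real product works exactly like this, but the shape of it is honest: "recognition" as a brightness count plus a hand-picked threshold.

```python
# Minimal sketch (invented numbers, not any real system's algorithm):
# "recognition" as nothing more than a histogram-ish count and a threshold.
def classify_scene(pixels):
    """pixels: a list of grey values 0-255 taken from some image."""
    bright_fraction = sum(1 for p in pixels if p > 128) / len(pixels)
    # A hand-picked threshold decides the "recognition". Change the lighting,
    # blur the image, or crop it oddly, and the same rule confidently mislabels it.
    return "daylight scene" if bright_fraction > 0.4 else "night scene"

print(classify_scene([200, 220, 180, 90, 240, 210]))  # "daylight scene"
print(classify_scene([10, 20, 35, 240, 15, 22]))      # "night scene"
```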

But the problem isn't even recognition in itself, but the opposite of that: not seeing anything at all. Show the computer a fuzzy image, or something out of focus, or something that's half in frame, or any of a million different "what if" contenders, and chances are things are not recognized, or, what's worse, recognized as something they are not. The same problem crops up again and again in various parts of machine recognition. Take, for example, autonomous cars, which Elon Musk (Sam's inviting friend) is getting himself heavily involved in. The problem there isn't making the car follow the rules, but knowing when to break them; if you and your car are in a situation where it makes sense to break the traffic rules in order to save lives (for example, a big boulder is heading right for you, and the only way to save your life is to swerve quickly across a double line). We humans are great at breaking rules. Computers, not at all.

I need to be a bit clearer on this point, though, namely that the rules are defined by us humans. The outcome of those rules is judged through our human values and evaluations. Make no mistake about it; when we judge what is and isn't an AI, we're judging it through our very human notions of what an AI should be like. But intelligence isn't just a human thing, and within that lurks a big problem: how can we talk about superintelligence when we don't know what that, nor normal intelligence, is?

I'll also point out here that Bostrom mentions creativity, wisdom and social skills as being at the core of this superintelligence. That not a single AI experiment ever conducted even comes close to dealing with those three core concepts is telling. Because, frankly, they are really, really hard. Being creative is hard for humans, let alone trying to figure out how a computer could do it too. And wisdom? Any person worth their epistemological salt would laugh very hard right now. (If this point needs further clarification after you've read up on knowledge and wisdom, let me know.) Finally, social skills. Apart from statistical methods of analysis and response, we have no AI that understands the social. Or, frankly, understands anything.

And that's the crux of this whole problem with AIs in general; they don't understand anything, they just analyse data and give humanly defined responses. Most people have higher expectations of an AI than simply analytics and logic, and I'd go as far as to say that we probably want a superintelligence to understand stuff. A lot of this whole discussion might be summarised in the following question: what do we really mean when we say "understand"?

Understanding

This time I want to start with a more head-on definition:
"1. a. To become aware of the nature and significance of; know or comprehend: She understands the difficulty involved. b. To become aware of the intended meaning of (a person or remark, for example): We understand what they're saying; we just disagree with it. When he began describing his eccentric theories, we could no longer understand him. c. To know and be tolerant or sympathetic toward: hoped that they would understand my complaint.
2. To know thoroughly by close contact or long experience with: That teacher understands children. I understand the basics of car repair.
3. a. To learn indirectly or infer, as from hearsay: understand his departure was unexpected. Am I to understand you are staying the night? b. To assume to be or accept as agreed: It is understood that the fee will be $50.
4. To supply or add (words or a meaning, for example) mentally: verb is understood at the end of the statement "Yes, let's.""
Let's make one thing very clear; to understand something is not the same as to remember something, not even remembering patterns associated with given symbols (which is where current AI is up to), or to analyse it or process it. No, we're talking here about comprehension, meaning, intentions, awareness, tolerance, sympathy, learning, and other mental modes. Anyone close to "the development of AI" knows darn well that we've got nothing on any of these. At the moment it's all analytics, thresholds, symbolic logic, some fuzzy logic, and random-ish notions thrown in to make the systems a little more human, but nowhere in AI research is there a hint of, say, "sympathy", or "awareness." Maybe you could scratch the surface of some of the others, but by golly, we are so far removed from the problem that extrapolating progress from it is just, well, baffling to see. If people "near the development of AI" say they have prototypes of an AI that understands anything, Nobels and other accolades would already be placed firmly in their hands. They are not.

Let's take a look at the first example given: "She understands the difficulty involved." You and I, dear reader, understand what this means in broad terms. We don't know what she understands, but we understand what it means to understand this difficulty, because we understand and have experience with past difficulties. We know what a difficulty is. We understand that the generic 'difficulty' is somewhat linked to our own understanding of 'difficulty.' We understand these basic idioms easily.

So an AI must also understand these basic idioms before we can even talk about what it means to understand, and analytics doesn't get you there. Analytics gets you from complex data, to a parsing tree, to symbolic logic, to a statistical probability of options over that data; however, that's not understanding anything, because at this point there is no data. We don't have the data, only a conceptual relationship between 'difficulty' and our experiences of such. And current AI doesn't really know what to do with no data. (An interjection here would be to make 'difficulty', as an abstract concept, operate as data, however there's a trap in just classifying everything as data, as we shall soon see.)
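
Here's a crude sketch of what "analytics" actually hands you for that sentence. The word counts are invented and the whole pipeline is deliberately simplistic, but that's the point: token frequencies in, a number per word out, and at no stage does anything resembling the concept 'difficulty' exist inside the machine.

```python
from collections import Counter

# Invented counts standing in for "how often each word appeared in a big corpus".
corpus_counts = Counter({"difficulty": 42, "understands": 17, "involved": 65, "she": 300})
total = sum(corpus_counts.values())

sentence = "She understands the difficulty involved".lower().split()
# The "analysis": look up how often each token occurred elsewhere.
scores = {word: corpus_counts.get(word, 0) / total for word in sentence}
print(scores)

# The program can rank, weight and combine these numbers forever; it never
# holds a concept of difficulty, only frequencies attached to character strings.
```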

I need to invoke Wittgenstein again here (even though other philosophers have of course made similar points) and his notion of language games; just because you understand how a pawn in a game of chess moves doesn't mean you understand what the pawn really is, let alone how to play chess. The language we use to explain what AI is all about (as we've done above) is fraught with sprawling, unclear and inconsistent definitions of what we're talking about. And so that's not the same as understanding what this thing called AI really is, or even how it's supposed to be an issue in the context of whatever our understanding of human nature is. Hmm, how can I explain this more clearly?

An AI is smart when it recognises what we expect it to recognise, or does what we expect it to do, and only then. If it does anything we don't expect or don't understand, we cannot proclaim to understand that unexpected thing. Was it clever or stupid? Well, in human terms it looked pretty stupid, but do we know for sure that a dog licking himself is a stupid thing to do? For a dog?

If a supposed AI recognises a forest of trees in a macro picture of dust particles, do we know for sure whether that is stupid or wrong, or whether it is being creative, or having an imagination? This is a major problem of evaluation and of how we can talk about something being a threat or otherwise. We judge everything through human values, and rather sterile and logical ones at that. Why is that at the core of intelligence?

The right stuff

It just might be that, technically, we can create an AI. Like Sam Harris, I'm a hardcore physicalist, so sure, it must be possible, at least when we build with "the right stuff", but that right there might just be the main issue in and of itself: the stuff we're currently using (and this is the kind of stuff that Moore's Law applies to, I might add) is bits and bytes and silicon and integers and binary memory and registers.

When we simulate a neuron in a computer, for example, as many AI researchers do when they try to use human modelling as a basis for AI, we really have no idea what we're doing! We are converting a biological analogue into a digital filtered signal, and just patching things up in ways that "work", which may or may not be what's actually happening in nature. It doesn't matter at what speed the simulation runs, or what capacity it has for capturing some phenomena; it just might be that there are some things that don't translate well into digital form.
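
For reference, this is roughly the level of "neuron" most of this work simulates. It's a sketch, not anyone's production code, and the weights and inputs below are arbitrary numbers I made up; the point is how little of the biology survives the translation.

```python
import math

# A minimal sketch of the kind of "neuron" most AI work actually simulates:
# a weighted sum squeezed through a fixed function. Weights here are arbitrary.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # logistic "firing rate"

print(artificial_neuron([0.2, 0.9, 0.4], [0.5, -1.2, 0.3], bias=0.1))

# Compare this handful of multiplications with a real neuron: continuous
# chemistry, timing, growth, thousands of synapses. The digital version is a
# filtered, discretised guess at what matters -- which is exactly the worry above.
```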

The computer screen in front of you can most likely represent something like 2^24, or 16,777,216, color variations (24-bit color representation, possibly with 8 bits of transparency), and for most people that's more than enough. We watch movies, do image manipulation and otherwise enjoy our colorful computing life using only these roughly 16 million colors. It looks awesome. This is because we humans have eyes that can discern around 10 million colors or shades thereof, again roughly speaking. However, how many colors are there, really, out there, in nature? How much are we not able to see?

The answer to that is: almost infinitely many, because the scale of colors is infinitely large, with almost infinitely fine gradations of difference in frequency. It's so large a spectrum with such small gradations that we don't have instruments to capture either its size or its depth. In other words, as far as our eyes are concerned we capture it all, but in reality we're seeing a tiny, tiny, infinitesimally small part of reality.

I suspect that what we capture in our computers in regards to AI is those 24 bits of richness; it's an infinitesimally small part of reality, so small as to render the attempt laughably futile. We make a 24-bit-deep AI, when in order for it to have the features we'd like it to have (or speculate it should have), we're off by such a large amount that it seems, well, pointless. Sure, we can re-create something that looks a bit like reality in our computer, but, really, it's a long, long way off. We're trying to build our AI with something that simply can't build it.
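
The same quantisation point as a tiny sketch (the intensity values below are just examples I picked): two physically different values land on the same 8-bit step, and the difference between them is gone for good as far as the computer is concerned.

```python
# Hedged illustration of quantisation: a continuous intensity squeezed into
# one 8-bit channel. Everything between two steps simply disappears.
def to_8bit(value):            # value in the range 0.0 - 1.0
    return round(value * 255)

a, b = 0.50100, 0.50250        # two physically different intensities...
print(to_8bit(a), to_8bit(b))  # ...both come out as 128; the difference never
                               # made it into the machine at all
```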

Now don't get me wrong; I love AI concepts, and I used to make them for a living. I follow the field, as well as developments in computer languages and hardware. I even understand a good chunk of the saviour of every AI geek out there, quantum computing. But just like the bit has a problem, so does the qubit, and mostly it's because it's still a bit. Sure, it's faster and bigger and oozing with potential, but that potential is quickly gobbled up by the state of the art of quantum computing and our digital approach to logically building our tools. And there's the odd sensationalist new discovery that's going to make it all happen, even though there have been sensations like this since the beginning of computing. I'm not holding my breath. Moore's Law is not going to get us there; faster computers give us a much faster way to do, well, really dumb things.

Some HiFi enthusiasm is spot on

Do you know why there are still HiFi enthusiasts out there who refuse to buy digital stereo receivers and amplifiers and persist in buying music on vinyl records? It's because the 16-bit (or 18 if you're fancy) resolution of most recorded music (although there's a slight move towards 24-bit resolutions these days) is missing something. It's hard to pin down, and I think even the most sophisticated enthusiast would be hard pressed to really pin it down. But there's ... something. Something is lost in the digital version. Some call it warmth, some might refer to "quality" (and again we should read a little Wittgenstein, or maybe a bit more Pirsig on this one), but it is certainly ... something, somewhere deep in the filters and the conversion from reality to the digital. Let this stand as an allegory of the problem: Bostrom hears the music, and it sounds like the real thing! It really does! So, it must be the real thing!

A similar argument is made between digital amplifiers and analogue tube amplifiers. Sure, the latter may be slightly more noisy; however, the range, and warmth, and color, and depth, and all sorts of properties we associate with the good can also be found there. The digital is lesser. Flatter. A bit restrictive. Something, can't quite pin it down, they would surmise.

Another similar argument is made between CD (16-bit) sound and better 24-bit sound. And with receivers that capture radio signals along a digital vs. analogue spectrum. Or equalizers or pre-amps or capacitor conversion rates or the range of converters on both sides of the equation, in recording, mixing, mastering, and playback.

P.S. Any odd cable with enough surface area to conduct properly will do, including $2 electrical cables. The monstrously expensive cables are one thing some HiFi enthusiasts get really, really wrong.

Sam Harris

Ok, so back in January 2015 Sam Harris wrote a piece on his blog which is like a prequel to what he said in his podcast. I want to dig out just the opening gambit of that, as it stands as an example of where all of this went wrong for him. His very first line reads:
"It seems increasingly likely that we will one day build machines that possess superhuman intelligence."
How do you know this? At present we don't know what we can build in the future. We don't know the ingredients of intelligence, let alone what it would mean to build a software program that contains it. Most of this I addressed earlier, but this initial premise is dubious at best.
"We need only continue to produce better computers"
My biggest objection here is the word "only", which makes it seem like all this wonderful newfangled AI needs is a faster computer, as opposed to new knowledge, better technology, deeper understanding and tons of research. I'll chalk this one up as the wrong word injected by Sam because it makes the sentence sound better.

He then goes on to say things in his first paragraph that I agree with, until we get to the end of it (which will be the last I'll quote from it; the rest of his post is built on this poorly reasoned quote):
There is no reason to believe that a suitably advanced digital computer couldn’t do the same.
What, again? Now, I see your sneaky "suitably" in there, from which you can shake off any criticism for all eternity, but ignoring that, I can think of plenty of reasons why a really advanced computer couldn't tackle the AI conundrum:

  • We don't know what we're making
  • We don't know how to make what we want
  • We don't know by what criteria we should judge it right
  • We don't know how computers could deal with things like values, empathy, creativity and so on
  • The hardware doesn't cut it
  • The software doesn't cut it
  • The software can't cross the analytics hump
  • It might be impossible

Just off the top of my head. So there are certainly some reasons to think otherwise.

The issue at hand here isn't that we can't imagine doing it; no, it's that we might not actually be able to do it. And no one "who knows about the development of AI" would tell you otherwise. Yes, we're making strides in analytics and neural network simulations, but these things have no bearing on what we associate with anything intelligent.

What does Harris fear? Let's grant him the "weak AI" moniker, and that such a thing actually exists, and that if it gets a little bit smarter, maybe that's something closer to what Harris is talking about. However, it's hard to pin down whether there really is a difference between Harris' AI and, say, a Bostrom superintelligence; they both talk about how it will take over important human intellectual endeavours, how it might be able to solve complex issues, and how we're creating something close to a god. And that we'd better make sure it's a good god, and not a bad one. Harris, for example, brings up the good ol' "it will try to ensure its own survival, and secure its computational resources" cliché, as if either of those two goals has to include all the scenarios from The Terminator and The Matrix wrapped up in one. (And I'll especially address the "computational resources" nonsense in the next section.) What, exactly, makes an AI choose the Hollywood solutions instead of, say, more realistic ones? Surely, it must think like a script writer.

Anthropomorphic issues and logical impossibilities

Recall from earlier that there's a big problem in evaluating an AI in non-human terms. And now, let's make it harder! To anthropomorphise something is to apply human traits to something that isn't human, often in trying to simplify or understand that thing, even when it's inappropriate. Our car is called Becky, and she needs a service. There, I did it. Now you try!

A computer's memory is nothing like our human memory. Sure, both concepts use the word "memory", but they are two absolutely different things. This is an example of how we create a piece of technology and give it an anthropomorphic name because there is some similarity. But to then mistakenly expect that computer memory works the same way as human memory is two levels up from a category error; computers don't remember anything, they have a limited amount of really fast storage that we call "memory" (RAM, technically) in addition to the normal, slower but more abundant kind we call hard disks, etc.

A computer's CPU is nothing like a brain. We sometimes say that the CPU is the brain of the computer, but it isn't. It just plain isn't. A CPU has registers - slots of memory - into which you cram a piece of memory from somewhere else (usually from RAM; see above), do a bit-wise manipulation of that piece of memory (this is the essential part of what we know as software, or a computer program), and plonk it back into RAM or similar. Repeat this a few million times a second. That's a CPU; take bytes, fiddle with them, put them back.

A computer program - software, for short - is nothing like thinking or how the brain works. It's a long string of instructions on how to shuffle pieces of computer memory around. If this byte has this bit set, do this shuffling over here. Otherwise, do this shuffling over there. Move these bytes from here to there. Change those bytes according to this logical formula. There is nothing here except logical rules over bytes of data.
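
If you've never seen that spelled out, here's a toy sketch. This is not a real instruction set, and the "memory" and "program" are invented for illustration; it's just the shape of the thing: a program is nothing but rules for moving and fiddling with bytes.

```python
# A toy sketch of what a program "is" at this level: rules for shuffling bytes.
memory = [83, 97, 109, 0, 0, 0]           # six bytes of pretend "RAM"

program = [
    ("copy", 0, 3),                        # copy the byte in cell 0 into cell 3
    ("add",  1, 4),                        # cell 4 = cell 4 + cell 1
    ("copy", 2, 5),
]

for op, src, dst in program:
    if op == "copy":
        memory[dst] = memory[src]
    elif op == "add":
        memory[dst] = (memory[dst] + memory[src]) % 256   # bytes wrap at 256

print(memory)   # [83, 97, 109, 83, 97, 109] -- bytes moved about, nothing more
```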

A computer's input is nothing like seeing, smelling, tasting or hearing. A computer sees only long streams of bytes, and we use computer programs to try to shuffle things around in order to create a symbolic sense of what those bytes could mean as useful output to us humans. We literally write computer programs to convert bytes into human symbols, without any knowledge or understanding of what those symbols are. For example, the computer might juggle some bytes around and come up with this stream of bytes: 83 97 109, which in binary looks like 01010011 01100001 01101101. If you convert each of these numbers into symbols, then 83 we have defined - outside the computer! In a paper manual! A human convention! - as an upper-case S, 97 as a lower-case a, and 109 as a lower-case m. Sam. Three symbols which, according to one of our human alphabets, are the letters S, a and m, and which together create, to us humans, the word Sam. To the computer, however, it is three byte values of 83, 97 and 109. We - the humans seeing these symbols on a screen - are seeing the word Sam. The computer sees nothing at all.
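
The same example in runnable form, in case seeing it helps: the machine holds three numbers, and the letters exist only in the convention (ASCII, in this case) that we apply on the way out to the screen.

```python
# The machine holds 83, 97, 109; S, a, m exist only in our convention.
byte_values = [83, 97, 109]

print([format(b, "08b") for b in byte_values])  # ['01010011', '01100001', '01101101']
print("".join(chr(b) for b in byte_values))     # 'Sam' -- but only to us;
                                                # the machine just moved three numbers
```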

Now, a combination of clever software and a CPU might do some pretty cool stuff, and indeed we do, but don't confuse any of these with any human equivalents, or even think that the computer sees in symbols like we do. It just doesn't.

This also points to the problem of the constant translation that happens between human and machine all the time, which we are completely oblivious to; the computer just manipulates numbers, and only we see the symbols! Get it? We - the humans, as we read it - see the symbols. The computer does not. We make computer programs that shift bytes into symbols. The computer doesn't. We create input that means something to us. The computer sees just numbers to crunch. We see the meaningful output, while the computer sees only numbers.

And it gets worse! Not only does the computer see only numbers, it doesn't even friggin' see them! Remember I showed you the binary version of Sam? 01010011 01100001 01101101. These, in a computer CPU, aren't even ones and zeros; the symbols of one and zero are human constructs. Inside, the computer deals with the state of signal and non-signal, the on and off of electrical switches. The CPU reacts to the on and off states of bits (the individual components of a byte, the atom of the computing world) in a prescribed manner, and the computer programmer tells it exactly how to react to any given pattern that comes along.

And so the question becomes: what can you express using this incredibly constrained set of building blocks? Well, Gödel's incompleteness theorem is here to rain on our AI parade and piss on the logical positivists I mentioned earlier;

Any logical system powerful enough to express truth-statements can't be both complete and consistent. (My own paraphrase.) The most human example I can come up with is the statement "This statement is false." Chew on that one for a second. Not only does it hold great philosophical value (lots of deep and wonderful questions can be found there; seek them out you should!), but it also points to something an AI must be able to do easily: lie, or at least not frown when seeing one. We humans have no problem with the statement as such; we don't explode at the sight of it. Computers, however, deal with this differently. Each logical statement must be broken into computational statements, and when statements like the above come along, you get something akin to "divide by zero", or, I give up! Does not compute! Because, frankly, for a computer, it must compute, or it can't do anything with it.
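
A toy rendering of that "does not compute" point, and nothing more than an illustration (no real reasoner is built like this): a program that evaluates the liar sentence by its own rule never halts, and Python eventually gives up in much the same spirit as a divide-by-zero.

```python
# "This statement is false": true exactly when it is not true.
# Evaluated literally, the rule chases its own tail until Python gives up.
def liar_is_true():
    return not liar_is_true()

try:
    liar_is_true()
except RecursionError:
    print("does not compute")   # the human reader, meanwhile, shrugs and moves on
```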

And so, after all that, do we still think this is the stuff intelligence will be made from?

Me, the villain, and the Hollywood bullshit AI

There's a big problem in thinking that a superintelligence created in a supercomputer environment or on specialised computers somehow has control over all aspects of computers everywhere, by virtue of sharing the common word "computer". How often do we see in movies that a superintelligence created in the lab all of a sudden roams the internet, controls the lights in a building, and makes any electronic equipment do its bidding? Now, we geeky programmers and system administrators laugh out loud at such scenes, as we know that it's hard to get many of these things to work at all at the best of times, let alone to be controlled through a network.

Remember that the superintelligence is bound to the environment in which it can be, well, superintelligent. This means that it cannot all of a sudden travel the internet, run on other machines, and control things all over the shop. All things might be connected, but you are still bound by the constraints of those connections. A superintelligence that needs special computers to run can't transfer itself to any other computer and simply run there. No, it runs where it is superintelligent, and has to use the internet in just the same crude, shitty way the rest of us do. Just because it is a superintelligence doesn't mean it can download that porn flick any faster. It needs to hack into other computers and create trojans or sleepers there, and then direct the traffic back to itself, and this is very much how we already do things. Sure, it might be able to create a network of drones quicker and possibly smarter, but those drones aren't superintelligent. We all have to work with the tools we're given. And the superintelligence is bound to its environment and tools and constraints.

As stupid as some people might be with their computer security, it doesn't follow that a superintelligence can gain access to, for example, my home router, which I can access over the internet. We can pretend I have a secret lair in my basement, and at just the right moment, as the protagonist wants to exit my secret lair with my collection of chocolate eggs, it doesn't matter how friggin' super-intelligent this superintelligence is, or how fast it can do things: it still has to brute-force guess my password, with an increasing wait before it is allowed the next try. Maybe it knows of some hack or vulnerability in that exact router's firmware, but given that I take security seriously and run firmware I've modified myself - like any good villain should! - I doubt this very much. Especially within the time it takes our protagonist to leave with her booty before the sleeping gas kicks in.
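
Some back-of-envelope arithmetic on why the villain sleeps soundly; all the numbers here are invented, but the shape is right: if the router doubles the wait after every failed guess (capped at, say, a day), the clock sets the pace, not the attacker's IQ.

```python
# Invented rate-limiting policy: wait doubles after each failed attempt, capped at a day.
wait, total_seconds = 1.0, 0.0
for attempt in range(1, 41):           # a mere 40 guesses
    total_seconds += wait
    wait = min(wait * 2, 86_400)       # the router enforces the delay, not the guesser

print(f"{total_seconds / 86_400:.0f} days for 40 attempts")
# About 25 days for just 40 guesses -- against a password space of billions of candidates.
```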

Hollywood sucks at depicting reality. Let's not forget that, people. Really, truly, absolutely friggin' sucks at it. Why are we letting our reason be so influenced by this shit?

There's one more thing I need to dig into before I wrap up this way too long screed. Every problem with the concept of AI that we have dreamed up is anthropomorphic in nature; we expect something intelligent to be like the intelligence of us humans. We tend to think that if we create something, it needs to be judged in human terms. If we create an AI that's a bit of a dick, well, it might be a dick to us humans, but it may not be a dick to its own kind, or maybe not a dick to certain kinds of humans, or maybe a dick to dogs but not cats, or ... ? The options are endless. So why are we choosing only the "dick to humans" scenario as the big risk? Why are we so dead sure that that one is the one likely to happen?

The fear-mongers want us to believe that we humans can pose a threat to a superintelligence's existence, and hence it will want to rid the world of us. Or, as Sam Harris says, we might not even be a threat, but maybe it determines that "what we humans want" is something that we, ultimately, don't want. Like, that all we want is pleasure, and hence the machine decides to put us all in vats and feed dopamine directly to our brains, making us happy and feeling pleasure - but is that really a life you'd like?

I don't know of any straw-man argument made of more hay. Why are we jumping to the worst possible scenario instead of anything pleasant? Do we have actual reasons for doing so, apart from it making a more thrilling Hollywood script? And I mean this absolutely seriously; is there any reason to go with one horrible scenario over any non-horrible one?

Secondly, if we truly create something superintelligent, why are we still assuming that we treat it, and it treats us, as something utterly stupid? Why are we assuming that something superintelligent would ever even think bad thoughts? Or think badly of us? Or, if we humans are a threat to it and might shut it down, why wouldn't it just go, ok? Why are we assuming that it will fight for survival at all? Why are we assuming that it wants more resources, that it wants to expand? Why are we assuming it will fight us, for whatever reason?

Because we're anthropomorphizing the shit out of it. Nuff said. Now stop it.

Finally

TL;DR: Don't be seduced by the Hollywood nonsense!

Bloody hell, I didn't expect this screed to become this long, and if you've read all the way here, I do apologize. I've seen so many Bostrom-inspired AI doomsday bullshit articles of late that I felt this post was necessary, for some balance from someone who, well, knows a few bits or bytes about the actual technology and issues at hand, including some philosophical insights that are perhaps more important than people realise. No, I'm not an expert in the field who travels the world in a shiny dress speaking at all the TED derivatives there are. If nothing else, I hope to stop a few.

I realize that each of these sections could be much, much longer and use far more technical jargon. I'm sure I could impress you with the names of special neural network simulations and rattle off histographical methods that would lure a lot of people into thinking my software is so smart. Darn it, I've done it before in real life: solving really hard and complex problems in pattern recognition and decision making with cumulative histograms, clever dynamic threshold and filter techniques, and throwing filtered data at neural network simulations (Bayesian trees and cluster analysis, if I remember correctly). Sure, it looks impressive, and it can fool people into thinking this is really intelligent software solving problems in intelligent ways.

It's not. It's friggin' stupid stuff that we mistake for intelligent, because it was put together by humans who can instruct the computer to show symbolic results we humans can relate to, so that it looks intelligent. It's not AI. It's neither weak nor strong AI. It's just analytics.

It's only a model.

Also, I wish smart people like Sam Harris would at least try a little harder to see the technical issues with the whole strong AI concept, and if not the technical, then at least some of the philosophical ones. And I mean that in the nicest possible way:

I want people to engage more in the AI debate, but by virtue of some deeper thinking, not influenced by these technobabble dudes like Bostrom and their shiny future projections that are so close to science fiction that I, well, think they are pure science fiction.

Comments

  1. AI is already here, and has already enslaved us. We humans refer to this enslavement as a "tech boom."

  2. I enjoyed this, and it helped clarify my own thoughts. (As a philosopher, I'd have to do a lot of training just to reach atomweight.)

    I think the audio analogy goes a little off the rails. Analog may sound "better," or "warmer" to some ears than digital (though I think much of what we hear as "digital coldness" is often an aesthetic choice, not a limitation of the technology), but it's no closer to what actually happened in the studio than digital. They're both inaccurate; just inaccurate in different ways—and accuracy isn't really the primary goal of music recording anyway. Pleasurable experience is, and to that end, much signal processing is brought to bear in both domains--some aesthetic, some unavoidable, some a symbiosis of the two. In addition to that, microphones don't hear the same way ears do, so what goes into the recorder isn't what a human being standing next to the microphones would have heard.

    Wait, does that make it a worse analogy or a better one? Hmmm...

    Anyway—thanks, and I've shared this post.

    1. Thanks for the positive feedback. And yeah, maybe the analogue part needs a refinement of sorts. I'm basically talking more about the capacity of an analogue signal vs. a digital one. An analogue signal from the distant past has more information in it than any digitally filtered signal does; we have decided upon a digital resolution that is good enough for our purposes, but keep forgetting just how much information is lost in the A/D conversion. So, I'm not really saying that whatever happened in the sound studio is captured better with analogue equipment, only that the range of whatever is captured is amazingly larger, and that that range, with all the noisy imperfections therein, is often more important than we tend to think when we judge the music by its digital clarity vs. warmth, breadth, richness and so on. But maybe this one is too specialised to make much of a point, so maybe I'll just take it out, as the digital screens and color really tell the same story. :) Cheers!

  3. I tried to submit my primordial thoughts to a Google AI forum but got short shrift: an emotive and rather personal response. Apparently, to question the great and powerful Nick Bostrom, I am not worthy. I am interested in writing a blog article on the same subject: the concern about the threat of AI. I don't know that it's bullshit, but it inspired in me strong scepticism, both about whether it's possible and, if so, whether it's negative. I will try to keep it short.

    Cultural quality of being concerned with apocalypse eg Christian Revelations. Why we are concerned with dangerous AI.

    Techno-hype. A necessity of keeping the public attention perhaps for funding and investment etc.

    What is intelligence? As you say, it's not well defined, though there are specialised activities that are attributed with intelligence. IQ tests? They seem to have been gradually brought into question as a valid measure. Is intelligence that category of behaviour demonstrated by one human being that is perceived as intelligent by another? How can you reverse engineer something if, first, you don't have a definition of it and, second, you don't know how it works?

    Collective intelligence. Also, we attribute intelligence to the individual, but to what extent is our individual intelligence a product of collective intelligence, e.g. a scientist's knowledge is mostly created by other scientists?

    Have any AI projects demonstrated human intelligence at all? Or are they all automations of complex niche tasks (this is not a disparagement; all these tasks look technically very complex)?

    Threat from AI? Is there actually a threat from automation? E.g. automated weapons (the Israeli Iron Dome?). They are not intelligent, but they do automate the task of identifying and shooting at targets.

    Theoretical impossibility of strong AI on current hardware. Have you read Roger Penrose's The Emperor's New Mind?

    Extra terrestrial superintelligence? Speculation squared. If a civilisation existed say 1M years ago, where is the SI?

    Argument that human intelligence is a product of an arrangement of matter and so by understanding the properties of matter and its arrangements it can be simulated? A person does go from a single cell to an intelligent adult. This argument makes a lot of assumptions but is quite compelling.

    Knowledge and intelligence. Can intelligence be demonstrated without knowledge? It seems to me they are interdependent.

    AI in fiction. Like aliens in Star Trek, AI can represent aspects of human culture rather than being actual predictions of strong AI.

    There seem to be so many books on the topic. If strong AI is not likely then what's going on? What's the agenda?


