Thursday, May 21, 2015

AI and bad thinking: Sam Harris (and others) on Artificial Intelligence

TL;DR: Don't be seduced by the Hollywood bullshit! Think a little deeper!

Bias and background

So, I mostly like Sam Harris. Sure, we disagree on a few things (like gun ownership, violence management, some politics, and his last interaction with Noam Chomsky was just terrible), but mostly I agree with him, and I consider him a philosophical middle-weight (this ranks well above most people who call themselves philosophers, btw; the heavy-weight category holds only a handful) and a public debater on par with my hero Christopher Hitchens, albeit with a very different style. I would recommend people listen to him more carefully.

However, in his latest podcast episode, called "Ask Me Anything #1", someone asked him about his latest views on Artificial Intelligence (AI for short), and they weren't all positive. You see, he was lately invited along to a conference on the matter of threats to human existence, and AI features very, very high on the list of potential human extinguishers. He lists Elon Musk as his friend and inviter, and refers to "people close to the development of AI" at the conference who all agree with the following;

AIs will get smarter or more advanced than human intelligence, will be able to modify and improve their own code, and will come to some negative conclusion about us puny humans, whereupon the next logical step is, of course, "Exterminate the humans!" I might be paraphrasing.

Harris didn't list who these people "close to the development" were, but I can probably rattle off a few names that might have spoken or been present there, like the inventor Ray Kurzweil (and I'll point to the latest of his books on the subject, "The Singularity Is Near"), the philosopher Nick Bostrom (who lately paid a visit to my favourite philosophy podcast, The Partially Examined Life, where the episode with Bostrom has the best and possibly most fitting cartoon version of Nick!), possibly Robert Li or Bill Gates and/or a bunch of other high-profile tech company big-wigs, and maybe some smaller characters like Luke Muehlhauser (for whom I have a great deal of respect). There are tons more, because this is a fun subject!

Also, for reference, have a quick look through this page on AI takeover (AI has to take over something in order to wield power to kill us, of course), and I'll just casually note that half of that page consists of examples of these bad AIs from science fiction. Let that sink in for a few seconds.

Last, but not least, let me add that I used to develop AIs for a living, building high-security surveillance systems using video motion detection for prisons, nuclear power plants, and the like. And even though I don't work in that field any more, I still try to keep myself up to date with the goings-on in AI. I really love this stuff!

Types of AI

Ok, so to be fair, there are a number of different kinds of AI people talk about, and it's easy to get them all confused; they range from the simplest computer analytics of yesterday to the superbeings of the future. Within the AI community (there's no such thing, really, but let's make it one of all the people who have some kind of interest in all things AI) we these days talk mainly of two types of AI;

Weak AI

This is the current state of affairs, with big computers crunching data from the real world, using analytics and neural network simulations to report to and interact with us in meaningful and helpful ways. Nobody really invokes the word "intelligence" with this one; it's only there for historical reasons. Sam Harris admittedly speaks of an AI that is somewhere between this one and the next. This is the one we all use as a platform to leap into the future of possibilities from ...

Strong AI

... and this is where we usually land; the super-intelligent, possibly conscious piece of software running on complex computers, with super-human intellectual capabilities. I call this the Hollywood AI, but a lot of people call it "superintelligence" or "strong AI" or "conscious machines" or even "artificial general intelligence (AGI)" or somesuch. However, this one is so loosely defined as to defy intellectual discussion, as we'll soon see, and in many ways this is the one I protest the most. Shall we?

I realize that both of these types of AI use the "intelligence" moniker. For the first type, it's for historical reasons, as mentioned, since "weak AI" was the thing we used to call just AI. This is because when the terms were coined, there was little to no deep thinking or hard knowledge behind the issues and terms thrown around to discuss these things, and, I suspect, our perfectly vibrant human creative urges made us project into the future what incredible feats of software we could produce; hence they were thought to be intelligent back then.

But of course 30 years of insanely little progress in the "intelligence" department requires us to rethink our concepts a bit, and so the weak and strong monikers were invented to denote some kind of magic threshold between where we're up to and where we might be going. When "people in the know" are asked when they think this "strong AI" might happen, they project some time into the future, from "the next 5-10 years" (which is what experts have been claiming for the last 40 years, at least, but of course this time it will be true!) to roughly 50 years or so, depending on the expert in question.

They're so wrong.


Let's jump off the deep end; the biggest issue here is that nothing worth talking about in this "debate" is well-defined or even properly understood. Let's start at the very beginning with the following two questions;
  • What is artificial?
  • What is intelligence?
If we are to talk about AIs being either good or bad, or in some other state that can be considered a threat to human existence, we need to define what this thing is; otherwise it is pure speculation, or science fiction, and if there's anything these people really, really hate, it's to be put into the bucket of science fiction. So I'm not going to do that. Yet.

Bostrom and Kurzweil (both mentioned earlier as big "strong AI" proponents) are fond of defining themselves a bit out of it by saying the threat is called "super-human intelligence", or something to that effect. Now, if we poke around a bit, we might get something like this;
"A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind"
So it's a thing with an intelligence that has surpassed ours, which raises a couple of questions;

1. What is intelligence? Well, it's neither here nor there; there are so many factors that come into it, and the definitions vary greatly across people and time, encompassing different models of values and epistemology. In short, we're not quite sure how to talk about it. Is it the stricter definitions of universal traits we're after, or the more folksy way of talking about the smartness of people (and things), or somewhere along the scale? Or outside it?

Consider this; think of all the people you know of who are really smart, and now list the things that make them smart. How many things are there on your list? In fact, in how many different kinds of ways do we think smartness, or intelligence, comes into play? I have a friend who's really smart with money, but he's a bit of a dick. Does this mean we need to define dickishness as part of being smart? No one would say that, for sure, even if it just might be true. But then, I have another friend, and she's mathematically smart and really clever at crossword puzzles, yet struggles with depression.

So, are we to define all the positive traits, and try to leave out all the bad? In my examples above we could easily define dickishness and depression as negative things we don't want, but keep in mind that these are broad, overarching, stereotypical and imprecise words we use to explain something far, far more complex underneath. There is no such thing as dickishness; it's a number of factors of a person's traits and properties that together add up to the term we use. The same with depression. Some of these traits and properties might be needed for intelligence to happen, like being obsessive with details or logic. Sure, that might lead to logical positivism (yeah, not an endorsement), or maybe true mathematical insight, or, you know, OCD. I'm not going to say that it is either positive or negative, because the point is that it can be both and neither.

Now the same with the word "intelligence." See the problem?

2. What does it mean that an intelligence has surpassed another? Intelligence is a compound of a number of traits and properties of beings, like a big city where some buildings are tall and some are small, some are this color, others have that kind of roof, this one houses a lot of people, but this one houses but a few, this one has computers in it, this is a library, this, that, this, that ... hundreds of buildings of various kinds. How do you compare Mumbai, India to Rio, Brazil? How do you proclaim that one of them has surpassed the other? Sure, in people, or money, or the number of houses, or some other trivial datum, but surpassed as a whole?

If you don't think deeply about things, then sure, Mumbai has surpassed Rio, or the other way around, depending on your bias and information. Now what does that tell you? Well, it should give you a hint that "intelligence" is nothing more than a statistical average explained in vague terms we people use on each other. And that is no platform for any serious debate about whether one loosely defined thing is better or worse than some other loosely defined thing.

Building blocks of AI, or any I

Each "intelligence" consists of a set of building blocks of various sizes and capabilities. Let's dig into some common definitions of the kind traits we think an intelligence should have, and just briefly point out some complexities and issues along the way. And keep in mind that these kind of definitions are rife with fast and loose facts and definitions (although I would point out if they are completely wrong or very different from what these "near the AI development" people would actually say), so this is just to point out some surface problems before we get deeper down.

Let's for the sake of argument use this one;
"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do"
Reason - "[...] is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and changing or justifying practices, institutions, and beliefs based on new or existing information". There's almost too much to unpack in this one alone, but with it pointing to consciousness, facts, beliefs, justified practices and other epistemologically difficult problems, well, I'm not sure what we think our current technology is capable of doing with any of these. But it's pretty clear that we don't actually agree on what reason is, or how it's supposed to work. The only thing we've really got down in terms of computer systems is the logic part; I'll talk a bit more about this later on when I get to mathematics and Gödel.

Plan - "[...] is one of the executive functions; it encompasses the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal." Here we get into desires, thoughts, goals and actions. The latter two I can see being approached in an AI with some degree of complexity, but the former two are seriously hard nuts to crack; they imply non-fatalistic networks of information that affect other non-fatalistic networks in order to evaluate and reshape the information, not to mention some pretty complex handling of data that deals with this thing we call "memory." I'll talk more about that later.

Abstract thinking - "[...] in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. "An abstraction" is the product of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category" So, how does an AI today deal with concepts? We mostly use symbolic logic and languages to do AI, which is fine and well, but these are representational in nature and backed by logic; they are, in other words, bound to what we already know in terms of computers and programming. They fall into mathematical pitfalls and don't offer us any solutions to the before-mentioned hard problems, but most importantly they are guided and defined by the programmer. There is no intelligence here, only symbolic definitions defined elsewhere.
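To make that last point concrete, here's a minimal sketch of what symbolic "reasoning" amounts to. Every fact, rule and symbol below is invented for illustration, not taken from any real system; the thing to notice is that the program derives nothing the programmer didn't already put there.

```python
# A minimal sketch of symbolic "reasoning": every concept and rule below
# was typed in by a programmer. "can_fly" is just a string; the system has
# no notion of birds, flight, or anything else.

# Hand-written facts and rules (hypothetical, for illustration only).
facts = {("bird", "tweety"), ("bird", "polly")}
rules = [(("bird", "?x"), ("can_fly", "?x"))]  # if ?x is a bird, ?x can fly

def infer(facts, rules):
    """Apply each rule to each matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _), (new_pred, _) in rules:
            for f_pred, f_arg in list(derived):
                if f_pred == pred and (new_pred, f_arg) not in derived:
                    derived.add((new_pred, f_arg))
                    changed = True
    return derived

print(sorted(infer(facts, rules)))
# [('bird', 'polly'), ('bird', 'tweety'), ('can_fly', 'polly'), ('can_fly', 'tweety')]
```

All the "intelligence" on display lives in the rule the programmer wrote, defined elsewhere, exactly as the paragraph above says.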

I could go on, but the point is that all of these things are really, really hard problems to solve. Sure, hard doesn't mean impossible, but don't let that truism hide the fact that when we, today, talk about what AI can and can't do - and hey, that's an important point, because from what it can do now we project into the future what it might be able to do - we see that we're scratching a limited amount of surface in very limited ways, and from this limited progress we extrapolate huge progress in the future.

There's also the deeper point that once you start parsing out what is actually involved in these things, the complexities stand out more, and I know for a fact that a lot of people "close to the development of AI" tend to forget this (often unknowingly), so it serves as a timely reminder.

Now, as a pointer back into that mystical and tedious land of philosophy, I want to remind us all that we have a slightly more difficult issue at hand; we're all engaging in language games. I refer you to Wittgenstein, one of my hero philosophers (yes, he's one of the few in the heavy-weight category), especially his later incarnation, where he rejected his earlier axiomatic and logical-positivist attempts at epistemology. He's got a lot to say on a number of topics, of course, but here I'd like to riff a bit on the fact that we're all speaking a variety of languages, using a variety of words that mean different things in a variety of contexts, and that there are interpretations and translations between a variety of players (where the self, the You, is playing several of those parts) in trying to figure out meaning and message, as well as the different interpretations you get from a variety of actions based on someone's conscious and subconscious workings. So, keep this in mind; I'll refer to Wittgenstein later on, but if you should read only one of the linked articles, Wittgenstein on language games is the one to go for. He was a really intelligent guy.

So, when I say "Wittgenstein was really intelligent", what is being said? Well, it depends. It's not that clear. Maybe I should write a book about it to clarify my position.


Nick Bostrom has written a book like that, and his "Superintelligence" tries to a) define what it is, and b) convince us that the threat is real. I'll use this book as a shorthand way to refer to the people I listed further up. Sure, there are subtleties and variations, but for simplicity's sake, I think it rather nicely represents them all in this debate, as a lot of them also happen to endorse his book.

In order for something to be superintelligent we need to know what intelligent is. However, I think it's reasonable to argue that we don't really have that knowledge, but let's again for the sake of argument give some slack to all these definitions, and maybe talk about what this thing is supposed to be like, according to Bostrom.

Bostrom says that it is "[...] an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills"

Let's compare that to the state of the AI art. What are we currently up to? Well, a huge proportion of it is recognising things. Just, you know, look at a picture (or a sound, or a stream, like a video, which was my field of expertise for many years) and tell me what you see. This is an important part of pattern recognition, and in my view sits at the core of the human empiric experience; if you can experience things and link this stream of input to symbols for further processing (as input into a neural network, for example), then at least you have a framework for interaction, and a seed of something that might have epistemological grounding. (A lot of this talk is about images, but it applies equally well to sound and touch.) So far, so good.

So, how is that going? Well, not all that great, even though the machines are getting better at looking at a picture and giving you some keywords for what they recognise that thing to be. In clearly outlined images that are somewhat histographically easy on your filters and thresholds (I could rant for hours about just how shady [pun for the initiated] our techniques for tricking normal people into thinking our software is clever really are, but that's a different post waiting to be written), sure, we get ... a few words of what might be in an image. Normally this is done by running millions of images through a neural network simulation, throwing statistical analysis in there for good measure, and if you think intelligence is more or less the same as statistics, then sure, maybe we've come a long way.

But the problem isn't even recognition in itself, but the opposite of that: not seeing anything at all. Show the computer a fuzzy image, or something out of focus, or something that's half in frame, or any of a million different "what if" contenders, and chances are things are not recognized, or, what's worse, recognized as something they are not. The same problem crops up again and again in various parts of machine recognition. Take, for example, autonomous cars, which Elon Musk (Sam's inviting friend) is getting himself heavily involved in. The problem there isn't making the car follow the rules, but knowing when to break them: if you and your car are in a situation where it makes sense to break the traffic rules in order to save lives (for example, a big boulder is heading right for you, and the only way to save your life is to swerve quickly across a double line). We humans are great at breaking rules. Computers, not at all.
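Here's a toy sketch of that failure mode, with invented labels and scores; a "recognizer" is just a score crossing a cut-off, and a blurry input quietly yields nothing, or, worse, the wrong thing.

```python
# Toy "recognizer": a label is emitted when its feature score crosses a
# threshold. All scores and labels are invented for illustration.
def recognize(scores, threshold=0.8):
    hits = {label: s for label, s in scores.items() if s >= threshold}
    return max(hits, key=hits.get) if hits else None

sharp_image = {"cat": 0.93, "dog": 0.41}   # clean, well-lit picture
blurry_image = {"cat": 0.52, "dog": 0.55}  # out of focus, half in frame

print(recognize(sharp_image))                  # 'cat' -- looks clever
print(recognize(blurry_image))                 # None  -- sees nothing at all
print(recognize(blurry_image, threshold=0.5))  # 'dog' -- worse: confidently wrong
```

Lower the threshold to stop missing things and you start hallucinating them instead; that trade-off never goes away, it just gets buried deeper in the pipeline.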

I need to be a bit clearer on this point, though, namely that the rules are defined by us humans. The outcome of those rules is judged through our human values and evaluations. Make no mistake about it; when we judge what is and isn't an AI, we're judging it through our very human notions of what an AI should be like. But intelligence isn't just a human thing, and within that lurks a big problem: how can we talk about superintelligence when we don't know what either it or normal intelligence is?

I'll also point out at this point that Bostrom mentions creativity, wisdom and social skills as being at the core of this superintelligence. The fact that not a single AI experiment ever conducted even comes close to dealing with those three core concepts is telling. Because, frankly, they are really, really hard. Being creative is hard for humans, much less figuring out how a computer can do it too. And wisdom? Any person worth their epistemological salt would laugh very hard right now. (If this point needs further clarification after you've read up on knowledge and wisdom, let me know.) Finally, social skills. Apart from statistical methods of analysis and response, we have no AI that understands the social. Or, frankly, understands anything.

And that's the crux of this whole problem with AIs in general; they don't understand anything, they just analyse data and give humanly defined responses. Most people have higher expectations of an AI than simply analytics and logic, and I'd go as far as to say that we probably want a superintelligence to understand stuff. A lot of this whole discussion might be summarised in the following question: what do we really mean when we say "understand"?


This time I want to start with a more head-on definition;
"1. a. To become aware of the nature and significance of; know or comprehend: She understands the difficulty involved. b. To become aware of the intended meaning of (a person or remark, for example): We understand what they're saying; we just disagreewith it. When he began describing his eccentric theories, we could no longer understand him. c. To know and be tolerant or sympathetic toward: hoped that they would understand my complaint.
2. To know thoroughly by close contact or long experience with: That teacher understands children. I understand the basics of car repair.3. a. To learn indirectly or infer, as from hearsay: understand his departure was unexpected. Am I to understand you are staying the night b. To assume to be or accept as agreed: It is understood that the fee will be $50.4. To supply or add (words or a meaning, for example) mentally: verb is understood at the end of the statement "Yes, let's.""
Let's make one thing very clear; to understand something is not the same as to remember something, not even remembering patterns associated with given symbols (which is where current AI is up to), nor to analyse it or process it. No, we're talking here about comprehension, meaning, intentions, awareness, tolerance, sympathy, learning, and other mental modes. Anyone close to "the development of AI" knows darn well that we've got nothing on any of these. At the moment it's all analytics, thresholds, symbolic logic, some fuzzy logic, and random-ish notions thrown in to make the systems a little more human, but nowhere in AI research is there a hint of, say, "sympathy" or "awareness." Maybe you could scratch the surface of some of the others, but by golly, we are so far removed from the problem that extrapolating progress from it is just, well, baffling to see. If the people "near the development of AI" had prototypes of an AI that understood anything, Nobels and other accolades would already be placed firmly in their hands. They are not.

Let's take a look at the first example given; "She understands the difficulty involved." You and I, dear reader, understand what this means in broad terms. We don't know what she understands, but we understand what it means to understand this difficulty, because we understand and have experience with past difficulties. We know what a difficulty is. We understand that the generic 'difficulty' is somewhat linked to our own understanding of 'difficulty.' We understand these basic idioms easily.

So an AI must also understand these basic idioms before we can even talk about what it means to understand, but analytics don't get you there. Analytics get you from complex data, to a parsing tree, to symbolic logic, to a statistical probability over options in that data; however, that's not understanding anything at this point, because there is no data. We don't have the data, only a conceptual relationship between 'difficulty' and our experiences of such. But current AI doesn't really know what to do with no data. (An interjection here is to make 'difficulty' as an abstract concept operate as data; however, there's a trap in just classifying everything as data, as we shall soon see.)
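As a hypothetical sketch of that pipeline (corpus and numbers invented for illustration): the statistics are real enough, but 'difficulty' is just a string in a table, with no experience behind it.

```python
# Sketch of the analytics pipeline: data -> tokens -> statistics.
# The corpus is invented; the point is what's *missing* from the result.
corpus = [
    "she overcame the difficulty",
    "the difficulty was considerable",
    "he understood the difficulty involved",
]

counts = {}
for sentence in corpus:
    for token in sentence.split():
        counts[token] = counts.get(token, 0) + 1

total = sum(counts.values())
print(counts["difficulty"] / total)  # a probability for the *string* 'difficulty'
# That frequency is all the system "knows": a number attached to a token,
# with no link to any lived experience of difficulties. The concept itself
# never enters the data.
```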

I need to invoke Wittgenstein again here (even though other philosophers have of course made similar points) and his notion of language games; just because you understand how a pawn in a game of chess moves doesn't mean you understand what the pawn really is, much less how to play chess. The language we use to explain what AI is all about (as we've done above) is fraught with errors from sprawling, unclear and inconsistent definitions of what we're talking about. And so that's not the same as understanding what this thing called AI really is, or even how it's supposed to be an issue in the context of whatever our understanding of human nature is. Hmm, how can I explain this more clearly?

An AI is smart when it recognises what we expect it to recognise, or does what we expect it to do, and only then. If it does anything we don't expect or don't understand, we cannot proclaim to understand that unexpected thing. Was that clever or stupid? Well, in human terms it looked pretty stupid, but do we know for sure that a dog licking himself is a stupid thing to do? For a dog?

If a supposed AI recognises a forest of trees in a macro picture of dust particles, do we know for sure whether that is stupid or wrong, or it being creative, or having an imagination? This is a major problem of evaluation and how we can talk about something being a threat or otherwise. We judge everything through human values, and rather sterile and logical ones as such. Why is that at the core of intelligence?

The right stuff

It just might be that, technically, we can create an AI. Like Sam Harris, I'm a hardcore physicalist, so sure, it must be possible, at least when we build with "the right stuff", but that right there might just be the main issue in and of itself: we are building with the stuff we're currently using (the kind of stuff that Moore's Law applies to, I might add), such as bits and bytes and silicon and integers and binary memory and registers.

When we simulate a neuron in a computer, for example, as many AI researchers do when they try to use human modelling as a basis for AI, we really have no idea what we're doing! We are converting a biological analogue into a digitally filtered signal, and just patching things up in ways that "work", which may or may not be what's actually happening in nature. It doesn't matter at what speed the simulation does our bidding, or what capacity it has for capturing some phenomena; it just might be that there are some things that don't translate well into digital form.
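For reference, here is roughly what "simulating a neuron" amounts to in code, assuming the textbook weighted-sum-and-squash model (the weights are arbitrary illustration values):

```python
import math

# The textbook artificial "neuron": a weighted sum pushed through a
# squashing function. Weights and inputs are arbitrary illustration values.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid squashing

print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
# One smooth number out. Everything a real neuron does -- spike timing,
# neurotransmitter chemistry, growth, its coupling to a living body -- has
# been filtered away by this digital approximation before we even begin.
```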

The computer screen in front of you most likely can represent something like 2^24, or 16,777,216, color variations (24-bit color representation, possibly with 8 bits of transparency depth), and for most people that's more than enough. We watch movies, do image manipulation and otherwise enjoy our colorful computing life using only these roughly 16 million colors. It looks awesome. This is because we humans have eyes that can discern around 10 million colors or shades thereof, again roughly speaking. However, how many colors are there, really, out there, in nature? How much are we not able to see?

The answer to that is: almost infinitely many, because the scale of colors is infinitely large, with almost infinitely finely graded differences in frequency. It's so large a spectrum, with such small gradients, that we don't have instruments to capture either its size or its depth. In other words, as far as our eyes are concerned we capture it all, but in reality we're seeing a tiny, tiny, infinitesimally small part of reality.

I suspect that what we capture in our computers in regards to AI is those 24 bits of richness; it's an infinitesimally small part of reality, so small as to render the attempt laughably futile. We make a 24-bit-deep AI when, in order for it to have the features we'd like it to have (or speculate it should have), we're off by such a large amount that it seems, well, pointless. Sure, we can re-create something that looks a bit like reality in our computer, but, really, it's a long, long way off. We're trying to build our AI with something that simply can't build it.
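For the record, the arithmetic behind the 24-bit figure above:

```python
# 24-bit color: 8 bits each for red, green, and blue.
print(2 ** 24)                        # 16777216 representable colors
print(2 ** 8, "levels per channel")   # 256 levels each of red, green, blue
```

Sixteen-odd million discrete points standing in for a continuous spectrum; enough to fool the eye, nowhere near enough to be the spectrum.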

Now don't get me wrong; I love AI concepts, and I used to make them for a living. I follow the field, as well as developments in computer languages and hardware. I even understand a good chunk of the saviour of any AI geek out there, quantum computing. But just like the bit has a problem, so does the qubit, mostly because it's still a bit. Sure, it's faster and bigger and oozing with potential, but that potential is quickly gobbled up by the state of the art of quantum computing and our digital approach to logically building our tools. And there's the odd sensationalist new discovery that's going to make it all happen, even though there have been sensations like this since the beginning of computing. I'm not holding my breath. Moore's Law is not going to get us there; faster computers give us a much faster way to do, well, really dumb things.

Some HiFi enthusiasm is spot on

Do you know why there are still some HiFi enthusiasts out there who refuse to buy digital stereo receivers and amplifiers and persist in buying music on vinyl records? That's because the 16-bit (or 18 if you're fancy) resolution of most recorded music (although there's a slight move towards 24-bit resolutions these days) is missing something. It's hard to pin down, and I think even the most sophisticated enthusiast would be hard pressed to really pin it down. But there's ... something. Something is lost in the digital version. Some call it warmth, some might be referring to "quality" (and again we should read a little Wittgenstein, or maybe a bit more Pirsig on this one), but it is certainly ... something, somewhere deep in the filters and conversion from reality to the digital. Let this stand as an allegory of the problem: Bostrom hears the music, and it sounds like the real thing! It really does! So, it must be the real thing!

A similar argument is made about digital amplifiers versus analogue tube amplifiers. Sure, the latter may be slightly noisier, but the range, and warmth, and color, and depth, and all sorts of properties we associate with the good can also be found there. The digital is lesser. Flatter. A bit restrictive. Something, can't quite pin it down, they would surmise.

Another similar argument is made about CD (16-bit) sound versus better 24-bit sound. And about receivers which capture radio signals along a digital vs. analogue spectrum. Or equalizers, or pre-amps, or capacitor conversion rates, or the range of converters on both sides of the equation: in recording, mixing, mastering, and playback.
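To show where the "something" leaks out, here's a sketch of what 16-bit quantization does to a smooth signal: every sample gets snapped to the nearest of 65,536 steps, and whatever fell between the steps is simply gone.

```python
import math

# Quantize a smooth (conceptually analogue) signal to 16-bit integers,
# convert it back, and measure what was thrown away.
BITS = 16
LEVELS = 2 ** (BITS - 1)   # signed 16-bit range: -32768 .. 32767

# A 440 Hz wave sampled at CD rate (44.1 kHz), 100 samples' worth...
signal = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(100)]
# ...snapped to the nearest 16-bit step and restored.
quantized = [round(s * (LEVELS - 1)) for s in signal]
restored = [q / (LEVELS - 1) for q in quantized]

max_error = max(abs(s - r) for s, r in zip(signal, restored))
print(f"worst-case rounding loss per sample: {max_error:.2e}")
# Tiny per sample, but real and irreversible: the space between the steps
# never makes it into the digital copy.
```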

P.S. Any odd cable with enough surface area to conduct properly will do, including $2 electrical cables. The monstrously expensive cables are one thing some HiFi enthusiasts get really, really wrong.

Sam Harris

Ok, so back in January 2015 Sam Harris wrote a piece on his blog which is like a prequel to what he said in his podcast. I want to dig out just the opening gambit of that, as it stands as an example of where all of this went wrong for him. His very first line reads
"It seems increasingly likely that we will one day build machines that possess superhuman intelligence."
How do you know this? At present we don't know what we can build in the future. We don't know the ingredients of intelligence, much less what it would mean to build a software program that contains them. Most of this I addressed earlier, but this initial premise is dubious at best.
"We need only continue to produce better computers"
My biggest objection here is the word "only", which makes it seem like all this wonderful newfangled AI needs is a faster computer, as opposed to new knowledge, better technology, deeper understanding and tons of research. I'll chalk this one up as the wrong word, injected by Sam because it makes the sentence sound better.

He then goes on to say stuff in his first paragraph that I agree with, until we get to the end of it (which will be the last I'll quote from it; the rest of his post is built on this poorly reasoned quote);
There is no reason to believe that a suitably advanced digital computer couldn’t do the same.
What, again? Now, I see your sneaky "suitable" word in there, with which you can shake off any criticism for all eternity, but ignoring that, I can think of plenty of reasons why a really advanced computer can't tackle the AI conundrum;

  • We don't know what we're making
  • We don't know how to make what we want
  • We don't know by what criteria we should judge it right
  • We don't know how computers could deal with things like values, empathy, creativity and so on
  • The hardware doesn't cut it
  • The software doesn't cut it
  • The software can't cross the analytics hump
  • It might be impossible

Just off the top of my head. So there's certainly some reasons to think otherwise.

The issue at hand here isn't that we can't imagine doing it; no, it's that we might not actually be able to do it. And no one "who knows about the development of AI" would tell you otherwise. Yes, we're making strides in analytics and neural network simulations, but these things have no bearing on what we associate with anything intelligent.

What does Harris fear? Let's grant him the "weak AI" moniker, and that that thing actually exists; if it gets a little bit smarter, maybe that's something closer to what Harris is talking about? However, it's hard to pin down whether there really is a difference between Harris' AI and, say, a Bostrom superintelligence AI; they both talk about how it will take over important human intellectual endeavours, how it might be able to solve complex issues, and how we're creating something close to a god. And how we'd better make sure it's a good god, and not a bad one. Harris, for example, brings up the good ol' "it will try to ensure its own survival, and secure its computational resources" cliché, as if either of those two goals has to include all the scenarios from The Terminator and The Matrix wrapped up in one. (And I'll especially address the "computational resources" nonsense in the next section.) What, exactly, makes an AI choose the Hollywood solutions instead of, say, more realistic ones? Surely, it must think like a script writer.

Anthropomorphic issues and logical impossibilities

Recall from earlier that there's a big problem in evaluating an AI in non-human terms. And now, let's make it harder! To anthropomorphise something is to apply human traits to something that isn't human, often in trying to simplify or understand that thing, even when it's inappropriate. Our car is called Becky, and she needs a service. There, I did it. Now you try!

A computer's memory is nothing like our human memory. Sure, both concepts use the word "memory", but they are two absolutely different things. This is an example of how we create a piece of technology and give it an anthropomorphic name because there is some similarity. But to then expect that computer memory works the same way as human memory is two levels up from a category error; computers don't remember anything, they have a limited amount of really fast storage that we call "memory" (RAM, technically), in addition to the normal, slower but more abundant kind we call harddisks, etc.

A computer's CPU is nothing like a brain. We sometimes say that the CPU is the brain of the computer, but it isn't. It just plain isn't. A CPU has registers - slots of memory - into which you cram a piece of memory from somewhere else (usually from RAM; see above), do a bit-wise manipulation of that piece of memory (this is the essential part of what we know as software, or a computer program), and plonk it back into RAM or similar. Repeat this a few million times a second. That's a CPU: take bytes, fiddle with them, put them back.

A computer program - software, for short - is nothing like thinking or how the brain works. It's a long string of instructions on how to shuffle pieces of computer memory around. If this byte has this bit set, do this shuffling over here. Otherwise, do this shuffling over there. Move these bytes from here to there. Change those bytes according to this logical formula. There is nothing here except logical rules over bytes of data.
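Here's a toy model of that whole arrangement, invented purely for illustration: "RAM" is a list of numbers, the "program" is a list of shuffle-and-fiddle instructions, and the "CPU" just executes them one after another.

```python
# Toy CPU: fetch a byte, fiddle with it, put it back. That's all there is.
ram = [83, 97, 109, 0]                      # our "memory"
program = [
    ("load", 0),    # copy ram[0] into the register
    ("add", 1),     # bit-fiddle: add 1 to the register
    ("store", 3),   # copy the register back into ram[3]
]

register = 0
for op, operand in program:
    if op == "load":
        register = ram[operand]
    elif op == "add":
        register = (register + operand) % 256   # stay within one byte
    elif op == "store":
        ram[operand] = register

print(ram)   # [83, 97, 109, 84] -- bytes moved and nudged, nothing "thought"
```

Real CPUs do this billions of times a second with a richer instruction set, but nothing about the *kind* of activity changes with the speed.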

A computer's input is nothing like seeing, smelling, tasting or hearing. A computer sees only long streams of bytes, and we use computer programs to shuffle things around in order to create a symbolic sense of what those bytes could mean as useful output to us humans. We literally write computer programs to convert bytes into human symbols, without any knowledge or understanding of what those symbols are. For example, the computer might juggle some bytes around and come up with this stream of bytes: 83 97 109, which in binary looks like 01010011 01100001 01101101. If you convert each of these numbers into symbols, then 83 we have defined - outside the computer! In a paper manual! A human convention! - as an upper-case S, 97 as a lower-case a, and 109 as a lower-case m. Sam. Three symbols which, according to one of our human alphabets, are the letters S, a and m, and together they create, to us humans, the word Sam. To the computer, however, it is three byte values of 83, 97 and 109. We - the humans seeing these symbols on a screen - are seeing the word Sam. The computer sees nothing at all.
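You can watch that convention at work in a couple of lines; the character-to-number table being looked up is ASCII, a human standard, not anything the machine knows:

```python
# The mapping between bytes and letters lives in a human convention (ASCII),
# not in the machine. These calls just look that convention up.
for ch in "Sam":
    print(ch, ord(ch), format(ord(ch), "08b"))
# S 83 01010011
# a 97 01100001
# m 109 01101101

print(bytes([83, 97, 109]).decode("ascii"))   # 'Sam' -- same table, reversed
```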

Now, a combination of clever software and a CPU might do some pretty cool stuff, and indeed we do, but don't confuse any of these for any human equivalents, or even think that the computer sees in symbols like we do. They just don't.

This also points to the problem of the constant translation that happens between humans and machines all the time, which we are completely oblivious to; the computer just manipulates numbers, and only we see the symbols! Get it? We - the humans, as we read it - see the symbols. The computer does not. We make computer programs that shift bytes into symbols. The computer doesn't. We create input that means something to us. The computer sees just numbers to crunch. We see the meaningful output, while the computer sees only numbers.

And it gets worse! The computer not only sees only numbers, it doesn't even friggin' see them! Remember I showed you the binary version of Sam? 01010011 01100001 01101101. Those, in a computer CPU, aren't even ones and zeros; the symbols one and zero are human constructs. Inside, the computer deals with the states of signal and non-signal, the on and off of electrical switches. The CPU reacts to the on and off states of bits (the individual components of a byte, the atom of the computing world) in a prescribed manner, and the computer programmer tells it exactly how to react to any given pattern that comes along.

And so the question becomes: what can you express using this incredibly constrained set of building blocks? Well, Gödel's incompleteness theorem is here to rain on our AI parade and piss on the logical positivists I mentioned earlier;

Any logical system powerful enough to express truth-statements can't be both complete and consistent. (My own paraphrase.) The most human example I can come up with is the statement "This statement is false." Chew on that one for a second. Not only does it hold great philosophical value (lots of deep and wonderful questions can be found there; see it you should!), but it also points to something an AI must be able to do easily: lie, or at least not frown when seeing a lie. We humans have no problem with the statement as such; we don't explode at the sight of it. Computers, however, deal with this differently. Each logical statement must be broken into computational statements, and when a statement like the one above comes along, you get something akin to "divide by zero", or: I give up! Does not compute! Because, frankly, for a computer, it must compute, or it can't do anything with it.
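Here's a sketch of why such statements are poison to a mechanical evaluator; the toy "evaluator" below is invented for illustration, and when asked for the truth value of a sentence defined as the negation of itself, it can only chase its own tail:

```python
import sys

# Toy truth evaluator: the liar sentence's truth value is defined as the
# negation of its own truth value, so evaluation never bottoms out.
def liar():
    return not liar()   # "this statement is false"

sys.setrecursionlimit(1000)   # keep the inevitable failure quick
try:
    liar()
except RecursionError:
    print("does not compute")   # the machine's version of giving up
```

A human shrugs at the sentence; the machine, forced to compute an answer, can only recurse until something breaks.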

And so, after all that, do we still think this is the stuff intelligence will be made from?

Me, the villain, and the Hollywood bullshit AI

There's a big problem in thinking that a superintelligence created in a supercomputer environment or on specialised computers somehow has control over all aspects of computers everywhere, by virtue of sharing the common word "computer". How often don't we see in movies how a superintelligence created in the lab all of a sudden roams the internet, controls the lights in a building, and makes any electronic equipment do its bidding? Now, we geeky programmers and system administrators laugh out loud at such scenes, as we know that it's hard to make many of these things work at all at the best of times, much less be controlled through a network.

Remember that the superintelligence is bound to the environment in which it can be, well, superintelligent. This means that it cannot all of a sudden travel the internet, run on other machines, and control things all over the shop. All things might be connected, but you are still bound by the constraints of those connections. A superintelligence that needs special computers to run can't transfer itself to any other computer and simply run there. No, it runs where it is superintelligent, and has to use the internet in just the same crude, shitty way the rest of us do. Just because it is a superintelligence doesn't mean it can download that porn flick any faster. It needs to hack into other computers and create trojans or sleepers there, and then direct the traffic back to itself, which is very much how we already do things. Sure, it might be able to create a network of drones more quickly and possibly more cleverly, but those drones aren't superintelligent. We all have to work with the tools we're given. And the superintelligence is bound to its environment and tools and constraints.

As stupid as some people might be with their computer security, it doesn't follow that a superintelligence can gain access to, for example, my home router, which I can access over the internet. We can pretend I have a secret lair in my basement, and at just the right moment, as the protagonist wants to exit my secret lair with my collection of chocolate eggs, it doesn't matter how friggin' super-intelligent this superintelligence is, or how fast it can do things; it still has to brute-force my password, with an increasing wait before each next try is allowed. Maybe it knows of some hack or vulnerability in that exact router's firmware, but given that I take security seriously and run firmware I've modified myself - like any good villain should! - I doubt this very much. Especially within the time it takes our protagonist to leave with her booty before the sleeping gas kicks in.
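The arithmetic of that increasing wait, sketched with made-up numbers: even at machine speed, the waiting, not the guessing, dominates.

```python
# Brute-force guessing against a login that doubles its lockout delay
# after every failure. All numbers are invented for illustration.
def time_to_guess(keyspace, initial_delay=1.0):
    """Seconds spent waiting out lockouts over `keyspace` wrong guesses."""
    total, delay = 0.0, initial_delay
    for _ in range(keyspace):
        total += delay
        delay = min(delay * 2, 3600)   # cap each wait at one hour
    return total

# Even a laughably small space of 100,000 candidate passwords:
print(time_to_guess(100_000) / 3600 / 24, "days of enforced waiting")
# The superintelligence can think as fast as it likes; the router won't.
```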

Hollywood sucks at depicting reality. Let's not forget that, people. Really, truly, absolutely friggin' sucks at it. Why are we letting our reason be so influenced by this shit?

There's one more thing I need to dig into before I wrap up this way too long screed. Every problem with the concept of AI we have dreamed up is anthropomorphic in nature; we expect something intelligent to be like the intelligence of us humans. We tend to think that if we create something, it needs to be judged in human terms. If we create an AI that's a bit of a dick, well, it might be a dick to us humans, but it may not be a dick to its own kind, or maybe not a dick to certain kinds of humans, or maybe a dick to dogs but not cats, or ... ? The options are endless. So why are we choosing only the "dick to humans" option as a big risk? Why are we so dead sure that one is the one that is likely to happen?

The fear-mongers want us to believe that we humans can pose a threat to a superintelligence's existence, and hence it will want to rid the world of us. Or, as Sam Harris says, we might not even be a threat, but maybe it determines that "what we humans want" is something that we, ultimately, don't want. Like, that all we want is pleasure, and hence the machine decides to put us all in vats and feed dopamine directly to our brains, making us happy and feeling pleasure - but is that really a life you'd like?

I don't know of any straw-man argument made of more hay. Why are we jumping to the worst possible scenario instead of anything pleasant? Do we have actual reasons for doing so, apart from it making a more thrilling Hollywood script? And I mean this absolutely seriously: is there any reason to go with one horrible scenario over any non-horrible one?

Secondly, if we truly create something superintelligent, why are we still assuming that we'd treat it, and it would treat us, as something utterly stupid? Why are we assuming that something superintelligent would ever even think bad thoughts? Or think badly of us? Or, if we humans are a threat to it and might shut it down, why wouldn't it just go "ok"? Why are we assuming that it will fight for survival at all? Why are we assuming that it wants more resources, that it wants to expand? Why are we assuming it will fight us, for whatever reason?

Because we're anthropomorphizing the shit out of it. Nuff said. Now stop it.


TL;DR: Don't be seduced by the Hollywood bullshit! Think a little deeper!

Bloody hell, I didn't expect this screed to become this long, and if you've read all the way here, I do apologize. I've seen so many Bostrom-inspired AI doomsday bullshit articles of late that I felt this post was necessary, for some balance from someone who, well, knows a few bits or bytes about the actual technology and issues at hand, including some philosophical insights that are perhaps more important than people realise. No, I'm not an expert in the field who travels the world in a shiny dress speaking at all the TED derivatives there are. If nothing else, I hope to stop a few.

I realize that each of these sections could be much, much longer, using far more technical jargon. I'm sure I could impress you with the names of special neural network simulations and rattle off histographical methods that would lure a lot of people into thinking my software is so smart. Darn it, I've done it before in real life: solving really hard and complex problems in pattern recognition and decision-making with cumulative histograms, clever dynamic threshold and filter techniques, and throwing filtered data at neural network simulations (Bayesian trees and cluster analysis, if I remember correctly). And sure, it looks impressive; it can fool people into thinking this is really intelligent software solving problems in intelligent ways.

It's not. It's friggin' stupid stuff that we mistake for intelligence because it was put together by humans who can instruct the computer to show symbolic results we humans can relate to, and so it looks intelligent. It's not AI. It's not weak or strong AI. It's just analytics.

It's only a model.
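For the curious, here's a minimal sketch of the kind of technique I'm describing, simplified beyond recognition from any real product and with invented numbers: a dynamic threshold derived from frame differences, which looks clever and understands exactly nothing.

```python
# Minimal motion-detection sketch: difference two "frames" (lists of pixel
# brightness values), derive a threshold from the difference statistics,
# and flag pixels above it.
def detect_motion(prev_frame, frame):
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    median = sorted(diffs)[len(diffs) // 2]
    threshold = max(3 * median, 10)   # dynamic threshold with a noise floor
    return [i for i, d in enumerate(diffs) if d > threshold]

prev_frame = [10, 10, 12, 11, 10, 10, 11, 12, 10, 11]   # quiet scene
frame      = [10, 11, 12, 11, 90, 95, 11, 12, 10, 11]   # something moved
print(detect_motion(prev_frame, frame))   # [4, 5] -- "motion" at those pixels
# Statistics over pixel differences, nothing more. No scene, no intruder,
# no understanding -- just numbers crossing a number.
```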

Also, I wish smart people like Sam Harris would at least try a little harder to see the technical issues with the whole strong AI concept, and if not the technical ones, then at least some of the philosophical ones. And I mean that in the nicest possible way;

I want people to engage more in the AI debate, but by virtue of some deeper thinking, not influenced by these technobabble dudes like Bostrom and their shiny future projections that are so close to science fiction that I, well, think they are pure science fiction.

Wednesday, June 15, 2011

The problem of knowledge through science and religion

I love being an online harpie, ploughing the rough dirt of these them tubes so it can later squirt new life. And my honest self proclaims ideas and thoughts about how to better harvest that which will be beneficial to all. I'm of simple farming stock, you see, like pretty much most of us, thinking hard about how to get the best yield, and one of the best means of doing so I have found to be engaging with people at the other end of whatever opinion spectrum I find myself on. I am a feminist, so I go talk to misogynists. I am a rationalist, so I go talking with irrational people. I am a misnomer tribulationist, so I go discuss existentialism with sarang optimetriloquists. It's a good thing to pursue your opposites if you are honest about your thirst for knowledge; fresh opinions you may not automatically agree with always make you re-examine your own. This process is gold.

However, every once in a while, as I engage in online discussions, mostly by commenting on blog posts (because it's cheap and fast), my feeble prose leads to bigger and bigger discussions that in the end undermine the argument and the discussion itself by their very nature. Sometimes the medium simply needs to change. This time I am a naturalist atheist, so I go discuss with Cory at Joshia Concept Ministries, a theologically inclined Christian of Calvinist leanings. The size and complexity of that online blog post discussion has become too big for the inner peace of comment systems. No, of course I could have ignored it and moved on, but it was about a two-step topic especially near and dear to me;

  • How do you know?

Yes, how do you know what you know? Or even, why do you know? How did you get the knowledge? How did you keep it? What's stopping you from losing it? That's step one. Second step is;

  • How do you know that your knowledge is true?

Indeed. Not surprisingly, as a huge science buff and hobbyist historian / occult philosopher of such, I can't begin to list the thousands and thousands of years of trying (and often failing) to create knowledge that can be separated from hearsay, opinion and outright lying. Knowledge is a dangerous sea of deception, where just because you call it knowledge you think it is true. You won't believe just how hard it has been to get bias out of the domain of knowledge, bias you often don't even think is there to distort the truth aspect of it. Thousands of years have passed just to get society at large to even dare question bias and authority in nuggets of knowledge, and there's still tons of general knowledge that simply isn't true. Not true by virtue of people actively lying, most of the time, but by how hard it is to just get rid of the stuff, to understand that anecdotal evidence is not evidence, that that story you heard is probably not true even if your grandma told it, and that even if we'd like there to be an Area 51 conspiracy involving aliens, that doesn't mean there is one. It's toxic. It infects everything around us.

One of the more pertinent concepts here is that crummy word Epistemology. (Don't worry; you follow the link, I'll wait here until you get back.) Lots of concepts enter into this subject, from Heisenberg's uncertainty principle to pragmatic vs. formal versions of what Truth is, mathematics and different orders of logic, basic tenets of philosophy and the state of mind, group think, cultural context and psychology, and possibly various notions of neuroscience and the distinctions (yes, plural) between consciousness and cognitive operations. I could go on, but let's keep it reasonably simple with our two questions above while we dig into the meat of the actual discussion. I'm here replying only to Cory's last reply, which hopefully will bring enough context (and if not, go and read the whole thing linked above. Again, I'll wait here for you). Let's also create some color: I am green, Cory is blue ;
My tone and venom are generally proportionate to the intelligence level of the argument. I re-read my post, however, and failed to see any venom or scathing tone in the post itself [...]. If the argument is dumb, then I can get a bit mean [...]. 
All of this is subjective, of course, and not humbling in the least. However, tone and venom are normally not something I disapprove of; in fact I can be quite the filth-bucket myself if I feel sufficiently self-justified and self-righteous about something. I love a good rant, too, often when directed at people I find stupid or misguided. However, I require one thing for it to be the Right Thing [TM] in my book;

A large degree of truth.

It's more than fine to lampoon and make fun of people's opinions, except when you do it without any real meat to it, without a certain degree of accepted truth (even if controversial). Racial or misogynist slurs are easy examples of such, and comedians know how to utilize this well. It may indeed be funny, at least for some, but there's a cut-off point where funny becomes unfunny, where the joke you told isn't so much a joke as a revealing truth about you that, in fact, might sit a bit uncomfortably with your audience.

So in our context, I thought Cory wanted to engage in intelligent discourse about the topic of knowledge. He even included words like epistemology, and went on a big spiel about methodological naturalism and metaphysical naturalism, which are big words in almost any normal conversation, and for me no more than an invitation to talk about these things a bit more deeply. Cory is a Christian claiming to have knowledge about these things, and I always find it interesting to hear what people of such a label have to say; I perk up and listen, especially considering the epistemologically weak ground I find they base their whole world model on (more on that below, of course).

However, claiming people have a huge amount of ignorance about something (not once, but several times), calling them out, requires that your comeback or argument has serious meat on its bones. When it doesn't, when it isn't obvious that you yourself have the required deep knowledge of the subject at hand, well, the message comes across as snarky, mean and, yes, venomous.

You write that someone is ignorant about methodological naturalism and metaphysical naturalism, splitting the two down the middle and creating two camps, by which you place constraints on the original statement so that it fits your argument. It's like William Lane Craig refusing to argue against a combatant's argument on the grounds that it is epistemological, insisting he will only address ontological ones, even when it doesn't matter which way you define certain arguments. In other words, an artificial split has been inserted into an argument so that it becomes easier to refute or dismiss, even if it was asked for by the other side and still a good argument. Let's keep that in mind while we look at the original statement you criticized ;
Evolution, Physics, Astronomy,etc. describe reality as it is, but do take some effort to learn – theists just want quick answers: god
And the meat of your post is this ;
Monica fails to make two important distinctions
No, no she doesn't. She doesn't have to define anything to make her statement. There's nothing wrong with it as it stands: no philosophical legs broken, no analytic breakdown of its various semantic parts required, no breach of empiric context, no violation of general knowledge. You may argue about the validity of "quick answers" (I know people who went to seminary for years, and those years were far from easy, and that would have been a far better reply), but you don't (at least not here; there's a possible jab at it later). What happens here is that you're reacting to something, and so you demand that she make some distinction (which may be your own requirement only, not the rest of the world's) in order for you to accept it. And that is very, very different from your bold assertion, which we'll look at next ;
The first distinction is between methodological naturalism and metaphysical naturalism. This is a mistake most atheists make.
No, this is a mistake theologians require of others, nothing more. In the field of natural philosophy (the original name for science, or Naturalism, if you prefer) you will find many, many different definitions, constraints and opposing views of what goes into or should stay outside that bucket we call Science. Let's make a few simplifications for us to follow ;
Methodological naturalism is a way of acquiring knowledge - the scientific process, if you like - where loops of observation, testing and evidence serve as a means of producing epistemic statements that approach some sense of Truth.
Metaphysical naturalism is the notion that there's nothing more than nature, that all states of consciousness and being are reducible to natural phenomena.
I don't think many dispute methodological naturalism as a brilliant means of getting to the truth of things. The amount of joint success and prosperity it has brought to the human context is beyond any alternative ever ventured, and I think you'd be quite foolish to claim otherwise (that's not an accusation), and I think most Christians (bar the Creationists and overly literally religious) are happy with this part.

However, it's the second one that is problematic. The theist will always argue that it's false, while the atheist will always argue it's true. You cannot claim it's a mistake atheists make; it is in fact the very thing for atheists to do - it is the definition of what it means to be an atheist. Unless you have evidence that shows it to be false, you're just projecting your own wishes onto a definition that doesn't include your opinion, and that's exactly what you're doing. The definition of metaphysical naturalism does not stand in the way of the original tweet and an acceptable notion of truth; only your opinion does.

But let's dig further. Next you brush against the point I was making at the top; you don't really care much about the distinctions you feel she needs to make;
The tweet is ignorant because theists are not lazy and we do not just punt everything we don’t know to God. We simply leave God as a possible explanation for things, especially things that appear to have no naturalistic explanation.
I could go on for days about the fallacies in that short statement, but I'll make it as short as I can. The main hint lies in this part, "We simply leave God as a possible explanation for things", which, when you look at it, should make us ask a couple of questions;

1. Simply? I would say whether a person believes in God or not totally and utterly changes that person's life and the lives of those around them, and it certainly changes their mode of thinking. These are not simple things. I understand that's not what you meant (certainly not completely), but you don't simply think God is a possibility; you assert that your god is responsible. And here's the clincher: in the presence of alternative explanations, discarding the pragmatic or natural ones requires you to provide some pretty good reasons and / or evidence to the contrary, and doing that is not simple.

2. "we do not just punt everything we don’t know to God". Yes, yes you do. Anything you don't understand (and this includes swaths of what theology itself comes up with), anything that seems to need or have a divine answer, you attribute to God, usually in mysterious ways. I'm not sure why you want to claim otherwise, as most of your posts do exactly this; invoke God in the holes of your knowledge. Don't fully understand why your god had to sacrifice himself to himself without it being a true sacrifice? There's god staring back at us. Not sure about the meaning of the trinity? Let theology fill the gaps with god. I'm sure your argument here is that theology isn't worthless, that it provides some degree of epistemics into the dogma of your choice, but I'm inclined to point to 2,000 years of historical Christian doctrine as a counter-example: Augustine is still perhaps the most important Christian escapologist of all time, and some 38,000 denominations have come through that doctrine (to be fair, not always through Augustine, but I dare say half of those denominations are directly linked to his thinking).
The universe itself, life, consciousness, and intelligence. None of these things have natural explanations, or has science finally explained how I can create an original thought and I missed the memo?
What, exactly, do you mean by a natural explanation? All of these things have natural explanations (that is, rooted in naturalism or natural philosophy), with tons of epistemic knowledge to back them up, reaching far, far beyond anything any religion could ever hope to come up with. What is your measuring stick? I'm starting to suspect that your 'natural explanation' isn't how scientists would use the term (or normal people, come to that), and that we have more a problem of language than of evidence or explanations.
What science can ever explain how I can just construct a story?
I'm puzzled by this assertion. Neuroscience, of course, the science of how the brain works. A very interesting field which I follow very, very closely. (It's no accident that some of the most high-profile atheists have degrees in or around neuroscience, like Sam Harris, Michael Shermer in experimental psychology, or PZ Myers by way of embryology and brain development.)

I suspect (but feel free to correct me) that what you mean is some level of explanation that we simply do not yet have, the edge of current scientific knowledge where the complexity of the brain reaches beyond what we are currently able to figure out. Note, however, that I do not accept God of the gaps, the notion that the holes in our knowledge can be filled with god; it has been proven wrong over and over throughout time. Why is it that every time science reaches a limit (of understanding, of instruments, of observation, whatever) the religious are quick to shout "God!"? I kinda understand that you don't agree with God of the gaps either, but when we really scrutinize theology, what apart from God of the gaps is there behind any important religious doctrine?
I don’t see how there could ever be a scientific explanation for Stephen King having a flash of inspiration, working with it, and then churning out Carrie, or The Dark Half, or the story collection Four Past Midnight.
Is this assertion based on your knowledge of neuroscience, or the lack of it? In other words, could it be that you "don't see how something can be" some way simply because you don't understand it? Rejecting a premise outright requires that you first have deep knowledge of the subject.
The point of my entire rant is that the metaphysical naturalist precludes even asking the question “why” by eliminating the supernatural on a mere definition rather than investigating it.
Okay, so we're finally coming to the meat of it: the never-ending why vs. how question. There are some glaring problems with this notion, which we'll soon see, but first we're heading back into knowledge land;
As to your second point, I am not attacking anything imaginary. These distinction most certainly exist. It is you, the metaphysical naturalist, who doesn’t want them to exist. As hard as you fight to rid even the possibility of anything supernatural existing, it is clearly your side that seeks to suppress any voice of reason from my side.
This paragraph is lush with problems, but you're simply missing the point; I wasn't claiming that the distinctions were imaginary, I said the constraints you impose through them were. I'm not fighting hard to keep the supernatural out of these categories; they are by their very definition supernatural-free (that's pretty much the definition of Natural). It is your job to provide evidence that there is such a super thing in addition. Don't put the burden of proof on those who don't see what sounds like your imaginary things.

Another problem is that you claim your supernatural domain seeps into and interacts directly with the natural world. At that point the supernatural enters the natural world, and it can be measured, tested and prodded to see if it indeed has any effect. And this is the problem: when we do measure, test and prod, we find nothing but that which exists in the natural world. This is the constraint you are trying to break down. I'm not suppressing your voice of "reason", and I'm not trying to get the supernatural out of the bucket of answers; I'm saying that you are making claims about a category that never shows up where you claim it acts. What does it even mean to be supernatural when the effect is claimed to be in the natural world? If its effect is in the natural world, it belongs in the testable domain of methodological naturalism.

In other words: what is this supernatural domain of which you speak? Seriously, what is it?
An argument from ontology would point you to the fact that this is how we came by our knowledge of the Big Bang, the quantum states of the universe (the negative gravitational energy balancing out the positive energy of its mass, etc.) and the red-shift evidence for the origin and expansion of the universe, how it began and how it probably will end. Your argument is just not very sharp; knowledge in science is a string of connected pieces of evidence and further knowledge, and trying to make everything black and white is not going to make you understand much, if anything. The understanding of something, almost anything, will lead to understanding of something else. There is no finite knowledge in science, only in religion.
I don’t have a clue what you’re tying to say with your jab on religion. 
It wasn't a jab. Finite knowledge, absolute truth, is what religion is all about. It says in the bible that X and Y are so (Was Jesus the son of God? Yes, or no? Very absolute, and not open to discussion), therefore it is so. You might argue that theology tries very hard to make biblical knowledge more plastic and flexible, but it certainly doesn't lead to unity among Christians (rather the opposite), and it seems only able to shift dogma in the very outer reaches of faith, hardly ever on its core parts; theology may challenge the meaning of Psalm 129, but no theology can alter the meaning of John 3:16 (although we atheists do, much to your chagrin and invocation of the dreaded 'you don't know theology!').

Within science, on the other hand, there is no dogma; it would be hunted down and rooted out of the system of knowledge very fast. Bias is shunned; opinion is bunk. If Einstein said something dumb, no scientist would claim otherwise. Newton was a genius who shaped much of modern math and physics, but he also meddled in alchemy; he's not remembered for the latter, nor did the latter become true or respected because of him. Opinion is truly bunk. Theories change with more evidence. Scientific knowledge is always moving, until theories become so undeniable that we call them laws. Even "laws", however, are nothing more than strong theories for which there is no counter-evidence; we're still open to them changing, and that is the beauty of science.
But here’s where I think you’re wrong with the rest of what you’re saying here: the Big Bang, quantum states, and red shift can tell us things like what happened at the moment time began, how the universe consists, and how old the universe might be. None of these pieces of evidence can explain why any of this is in motion. That is where theology comes into play.
I can only assume your "why this is in motion" has nothing to do with actual motion, but rather with the reason these things exist, yes? I'm sorry if I've missed some important sermon of late, but what does theology in fact tell us about the purpose of any of those things? Or even about why your god created humans, or anything else, or why he or she did X over Y?

The distinction between why and how is bunk and bogus, a linguistic construct. "Why does the Earth orbit the Sun?" has a ton of empirical knowledge and evidence behind it that doesn't discriminate on the word "Why?" If you're going to proclaim that theology ponders why something is, you need to explain what that actually means, what specific thing you require a Why for, and how theology answers it better than Science.

Why did I have chicken for lunch today? I have good empirical explanations for that. Why is there a universe? I have good empirical evidence for that, too. You may not like or understand those reasons, but I think this is again more a failure of communication and language than of the meat of the discussion. Theology doesn't answer any why's at all, when you think about it; it speculates, ponders, and tries to explain stuff within a framework that's already on shaky epistemic ground, one that only works internally and not when applied to science and the natural world. Theology is nothing short of speculation and opinion, and does not give us answers to Why. I dare you to provide Why questions that theology solves and Science has nothing on.
Some scientists probably are pondering the why, but philosophy gets them there, not science.
I'll let this stand as a testament to your knowledge of what science is. Like why it used to be called natural philosophy.
Science can inform or shed light on philosophical musings, but will never actually provide a why answer. God of the gaps covers “how”–we don’t know how DNA got there, so God did it! But that’s not what I’m saying here at all. I’m saying that we can know how DNA got here through science, but we won’t know why without philosophy or theology.
Show me the Why. What Why? What Why compels us to reach for theology? The more knowledge we accumulate about anything, the more its complexity gets funneled into a narrower and narrower space of explanation, and it matters not whether you call it a "why" or a "how" question.

No, what needs to happen here is that you come up with a category of questions that only theology can answer, and that isn't religion-specific (that is, questions about the world to which only religion has an answer), and that doesn't rely on the linguistic definition of "why" alone.
James Hannam’s blog is dedicated to debunking myths about the Middle Ages, especially the myth that the Church is anti-science. A longer, more involved primer on this would be The Victory of Reason by Rodney Stark. Here’s a post from a Christian lamenting the hostility between Christians and atheists over science; both use science to support their respective positions and it gets ugly. That’s not necessary.
Science and scientists do not care what other scientists think, feel, or opine as long as good science is performed. And good science is backed by evidence. Inside that process you can believe whatever you want, and Christian scientists - as well as atheist, Muslim, Hindu, Wiccan or any other belief-system scientists - are lauded for their science, not for their other thoughts on stuff. And that's the rub, isn't it? Science within the domain of science is what we all agree is good and well. This is the methodological naturalism we talked about earlier, the process that - as long as we all abide by it - creates good, trusted, evidence-based science, a uniting force for the betterment of mankind.

It's when the Christian scientist stops being a scientist that we say their purported science isn't science anymore. It's when their faulty logic or reasoning seeps into their science that we proclaim it wrong. And this is especially noticeable when scientific evidence and theories rub against religious dogma and doctrine. When we finally all agreed that the planet we live on is round like a ball, theology came along to tell us that the mountain in Matthew 4:1-11 really was an allegory, or that the expression "all the kingdoms of the world" was, or many other versions of the same move, trying to make the words of the bible fit with new scientific knowledge, no matter what is actually written in scripture. Christians then needed to make a choice: accept theology, or stop believing (because what compels us to accept that one part of the bible is true while some other part is allegory or just wrong? Surely Christians wouldn't pick and choose whatever suited them ...). This process happens in Christians all the time, especially among those who actually care about the truth rather than about their religious belief.

As to the original point, that many Christians were scientists and that science came out of a Christian society and culture: I'm sorry, but you need to back that up with some well-founded argument. All the evidence - the writings of thousands of scientists throughout history - laments, to varying degrees and with few exceptions, religious rejection of evidence as it appears. You cannot argue that Newton was a Christian by choice even if he believed in God, because in his time not believing was not an option. Societal tolerance of choosing your own religious beliefs - or none at all - is a fairly modern invention.
So the old canards that scientists are scientists despite their religious background, the Catholic Church suppressed science in the Middle Ages, and faith is antithetical to reason are just bogus.
Galileo Galilei's struggles were bogus? You're perpetuating a myth.
There are plenty of reasonable religious people who aren’t reasonable despite religion, but reasonable and religious. Strange to many atheists, I know. But it is true. And church history is filled with such folks.
You've got this backwards. Some good scientists are Christian, for sure, but a) they probably didn't become Christian after they became scientists, and b) we humans have a cunning ability to hold both rational and irrational thoughts at the same time. I have an irrational fear of darkness, but a rational stance on spiders. I have an irrational view of the meaning of love, but a rational way of dealing with it. If people with irrational beliefs are able to make compromises in their heads that do not interfere with their science, I won't protest; it's their own business.
“Indeed, only theology is capable of establishing why.” 
No, what you really mean is “Indeed, only theology is capable of establishing a religious-framed, dogma-based why.” 
Don’t tell me what I mean. 
As a rhetorical device, I think you understand well what that phrasing is supposed to mean;

  1. I disagree with what you're saying
  2. Here's a revision to emphasize where you're wrong
And as such, let me make this glaringly clear: theology only works if you accept the tenets of religious belief. If you say outright that everybody who doesn't subscribe to your faith-based model of thinking is wrong, and that your theology is the only path to truth (or to the nonsensical "why" questions), then you are far beyond arrogant and well past reasonable; you've entered the waters of the religious lunatic who can't tell the difference between an opinion and the evidence to back it up, or between a rational argument and a religious one.

Theology is making the impossible seem plausible in the light of contradictory evidence, and that is not something anyone should be proud of (not to mention the implications for just how loud and clear your god’s message in the bible really is, when it needs an army of theologians to explain and ponder and postulate and theorize and channel and project, and often just make up stuff, in order to make sense of the bible, and often to avoid looking too embarrassed about what it actually says ...)
And your rumination on theology is totally misguided. Theology is gaining knowledge about God, which we do through the study of nature and Scripture.
Where is the evidence of this? And what do you mean, knowledge about God? You've got the bible - a book, a finite set of scriptures - and a culture of religious belief and doctrine. From this you try to make compromises between the doctrine, the scripture and the real world, and this is called theology: explaining to people how these three fit together. Most of the time theology tries to explain scripture in line with developments in the real world, be they societal or scientific (e.g. how does the story of Job fit with the notion of unnecessary suffering or fairness? Or, what do we think about God's commands of genocide in the old testament versus the Geneva convention? [or, you know, just our ethical inner lives]), but where does this constitute knowledge about God? It is just your opinion! You may proclaim that it's the holy ghost that leads you down the garden path of this knowledge, which is, as you well know, another statement based on a severe lack of evidence, just like all "revealed truths."

You claim to understand epistemology, that thing I started with at the top, yet as soon as you find yourself on shaky epistemic ground, you shun it and leap straight for theology as if it were some kind of safe harbor. Theology is opinion. Prove me wrong.
Perspicuity of Scripture is a central tenet of Christianity, and (as both Indy David and I have been attempting to explain to Boz) Scripture is abundantly clear. I know this because even the dimmest atheist can turn on the TV and realize that televangelists like Joel Osteen and TD Jakes are full of it. 
I'm sure we agree that they're full of it, but they use the same methods as you (theology), the same source as you (scripture), within the same culture as you (doctrine), bent over the same epistemological anvil (faith).

How do you explain this? As far as I can tell, what sets you apart is your own opinions (which you may or may not attribute to the holy ghost or revealed truth, both of which are rife with epistemic problems) and rationalizations; it is the person you are that weighs the outcome of your theological process.
Because even the atheist, allegedly unable to grasp Scripture (Ehp 4:18), can still—based on Scripture!–see that these people are selling something different than what the Bible says. I’m betting you, or any atheist in this thread could refute Paula White using the Bible.
Oh, absolutely! But that's because we understand theology, and we can refute both her and you and any other Christian who has their own theology about something or other; the Bible is so full of contradictions, vague notions and concepts, stories and ideas that, frankly, almost any position on almost anything can be supported through it. What makes me different from Paula, with respect to you, is that I don't use faith as part of my reasoning; I don't allow irrationality in. Faith, however, unites the two of you far more strongly than it unites you and me, to put it that way. Theology is the point where you accept the irrational into critical analysis.
Christians agree far more often than we disagree. We may get to those same conclusions by different ways, but the point is we do get there. But so what? Does disunity necessarily disqualify from truth? Science isn’t unified at all, and changes quite frequently everything it says. Is fast food harmful or good for you? Scientific studies that support both can be found. Is Ida the missing link in human evolution? A best selling book said yes, but a panel of other scientists soon concluded no. What about global warming? I’ve seen both sides argued convincingly. What’s the point? That scientists argue methods and conclusions can’t be used to know objective truth. Yet, when theologians argue methods and conclusions about the Bible, it somehow becomes proof that theology is nonsense. What a fantastic double standard!
I don't think you understand science. Now, I'm not saying that to be mean or to score rhetorical points; I state it simply from reading your take on science. I've written before about the difference between science and scientists (and I seem to come back to that all the time): we humans are biased and fallible (hey, we agree on the basic tenet of sin, although we disagree wildly on what it is and why it is there). We are all tempted to make money, to gain power, to get our faces or names known, to be at the center of controversy, to lead the way, and the sad fact is that some of us humans are simply better humans than others, whether scientists, atheists, Christians, Muslims, Hindus, Buddhists, Wiccans or otherwise.

And isn't this what you say about the televangelists above, that you are a better Christian than them? There are good and bad scientists. But here's the difference: science, as methodological naturalism, has tons of methods and mechanisms in place to root out the bad and preserve the good, by focusing on evidence (the opposite of opinion), letting bad theories die away, and always being ready to update or revise current thinking in light of new evidence. A lot of the examples you bring up are just scientists acting badly (or stupidly), like Ida being some kind of missing link (no serious scientist believes in "missing links"; that's purely a media construct), which was ferociously debunked, not by clergy, or politicians, or normal people, or otherwise clever people, but by other scientists! This is how Science always wins out over scientists, and this distinction is vital to understand. No one fights a new claim in science as hard as other scientists. This is what leads to strong trust: falsifiability, predictability and evidence.
If we mutually respected one another and left the questions each is equipped to answer in the domain it is meant to stay (science = how; theology = why) there would be far less problems in this area. Problem is that scientists are trying to venture into areas that theology is better equipped to answer, and our fighting it is caricatured as our “knowing” that science will disprove God and wreck our faith. Nope. We just know that science cannot answer questions of why we are humans with the faculties we have, and moreover it can’t tell us a moral use of these faculties. Science has its place. But so does theology. One will never unseat the other.

Spoken like a true faith-based religious theologian. Your assertion that "science cannot answer questions of why we are humans with the faculties we have" is demonstrably false, and holding on to this nonsense is exactly why people like me need to engage with people like you; for some reason you have woven yourself into a model of thinking that is two steps removed from reality, where your language is semantically disjoint from any discourse that could happen between your lamented mutually respectful parties. If you insist on opinion having any truth value, then you've got the wrong idea about how to engage with rationality and - dare I say it? - the real world.

There's just too much space between us to make it to a rational level. The How vs. Why distinction is not real, theology is mere opinion to the rest of the real world (even if in your head it is truth, or valuable, or knowledge), science answers far more questions than you care to admit, and the categories you give each team are artificially disjointed by the constraints of faith.

Lastly, "Science has its place. But so does theology. One will never unseat the other" is damn true: Science proves itself again and again with actual results and empirical value, while theology helps the religious cope with the onslaught of scientific knowledge upon a frail, faith-based epistemic world view. Theology will never take over science, by virtue of being opinion not grounded in anything close to empiricism, and science will never take theology's place, by virtue of science's very definition;
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions.
Note those last two criteria: can you test it? Can you make predictions with it? No? Then it isn't science, and you can call it whatever fairy name you want. Like religion, and that's fine, but don't insert your own criteria into a debate, and don't serialize and split apart natural philosophy in order to shoot it down with theology. Calling that a straw-man argument is being far too polite.

How do you know that your knowledge is true? You claim that theology generates knowledge, but how do you know that knowledge is true? It has no value if it isn't true; in fact, I'd say it has some pretty clear disadvantages if it isn't, since it means your life is based on a fantasy. For every statement ever made from the Bible, how do you know your knowledge is true?

How do you know your knowledge is true? Methodological naturalism. QED.