Thursday, May 21, 2015

AI and bad thinking: Sam Harris (and others) on Artificial Intelligence


TL;DR: Don't be seduced by the Hollywood bullshit! Think a little deeper!


Bias and background

So, I mostly like Sam Harris. Sure, we disagree on a few things (like gun ownership, violence management, some politics, and his last interaction with Noam Chomsky was just terrible), but mostly I agree with him, and I consider him a philosophical middle-weight (which ranks well above most people who call themselves philosophers, btw; the heavy-weight category is only a handful) and a public debater on par with my hero Christopher Hitchens, albeit with a very different style. I would recommend people listen to him more carefully.

However, in his latest podcast episode, called "Ask me anything #1", someone asked him about his latest views on Artificial Intelligence (AI for short), and they weren't all positive. You see, he's lately been invited along to a conference on the matter of threats to human existence, and AI features very, very high on the list of potential human extinguishers. He names Elon Musk as the friend who invited him, and from the conference he refers to "people close to the development of AI" who all agree on the following:

AIs will get smarter or more advanced than human intelligence, will be able to modify and improve their own code, and will come to some negative conclusion about us puny humans, whereupon the next logical step is, of course, "Exterminate the humans!" I might be paraphrasing.

Harris didn't list who these people "close to the development" were, but I can probably rattle off a few names that might have spoken or been present there, like the electronics maker Ray Kurzweil (and I'll point to the latest of his books on the subject, "The Singularity is Near"), the philosopher Nick Bostrom (who lately paid a visit to my favourite philosophy podcast, The Partially Examined Life, where the episode with Bostrom has the best and possibly most fitting cartoon-version of Nick!), possibly Robert Li or Bill Gates and/or a bunch of other high-profile tech company big-wigs, and maybe some smaller characters like Luke Muehlhauser (who I have a great deal of respect for). There's tons more, because this is a fun subject!

Also, for reference, have a quick look through this page on AI takeover (AI has to take over something in order to wield power to kill us, of course), and I'll just casually note that half of that page consists of examples of these bad AIs from science fiction. Let that sink in for a few seconds.

Last, but not least, let me add that I used to develop AIs for a living, in the form of high-security surveillance systems using video motion detection, for prisons, nuclear power plants, and the like. And even though I don't work in that field any more, I still try to keep myself up to date with the goings-on in AI. I really love this stuff!

Types of AI

Ok, so to be fair, there are a number of different kinds of AI people talk about, and it's easy to get them all confused; they range from the simplest computer analytics of yesterday to the superbeings of the future. Within the AI community (there's no such thing, really, but let's make it one out of all the people who have some kind of interest in all things AI) we these days talk mainly of two types of AI:

Weak AI

This is the current state of affairs, with big computers crunching data from the real world, using analytics and neural network simulations to report to and interact with us in meaningful and helpful ways. Nobody really invokes the word "intelligence" for this one; it's only there for historical reasons. Sam Harris admittedly speaks of an AI that is somewhere between this one and the next. This is the one we all use as a platform to leap into the future of possibilities from ...

Strong AI

... and this is where we usually land; the super-intelligent, possibly conscious piece of software running on complex computers with super-human intellectual capabilities. I call this the Hollywood AI, but a lot of people call it "superintelligence" or "strong AI" or "conscious machines" or even "artificial general intelligence (AGI)" or somesuch. However, this one is so loosely defined as to defy intellectual discussion, as we'll soon see, and in many ways this is the one I protest the most. Shall we?

I realize that both of these types of AI use the "intelligence" moniker. For the first type, it's for historical reasons, as mentioned, since "weak AI" was the thing we used to call just AI. This is because when the terms were coined, there was little to no deep thinking or hard knowledge behind the issues and terms thrown around to discuss these things, and, I suspect, our perfectly vibrant human creative urges made us project into the future what incredible feats of software we could produce; hence they were thought of as intelligent back then.

But of course 30 years of insanely little progress in the "intelligence" department requires us to rethink our concepts a bit, and so the weak and strong monikers were invented to denote some kind of magic threshold between where we're up to and where we might be going. When "people in the know" are asked when they think this "strong AI" might happen, they project some time into the future, from "the next 5-10 years" (which is what experts have been claiming for the last 40 years, at least, but of course this time it will be true!) to roughly 50 years or so, depending on the expert in question.

They're so wrong.

Intelligence

Let's jump off the deep end; the biggest issue here is that nothing worth talking about in this "debate" is well-defined or even properly understood. Let's start at the very beginning with the following two questions:
  • What is artificial?
  • What is intelligence?
If we are to talk about AIs being either good or bad, or in some other state that can be considered a threat to human existence, we need to define what this thing is; otherwise it is pure speculation, or science fiction, and if there's anything these people really, really hate, it is being put into the bucket of science fiction. So I'm not going to do that. Yet.

Bostrom and Kurzweil (both mentioned earlier as big "strong AI" proponents) are fond of defining their way a bit out of it by saying the threat is called "super-human intelligence", or something to that effect. Now, if we poke around a bit, we might get something like this:
"A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind"
So it's a thing whose intelligence has surpassed ours, which begs a couple of questions:

1. What is intelligence? Well, it's neither here nor there; there are so many factors that come into it, and the definitions vary greatly across people and time, encompassing different models of values and epistemology. In short, we're not quite sure how to talk about it. Is it the stricter definitions of universal traits we're after, or the more folksy way of talking about the smartness of people (and things), or somewhere along that scale? Or outside it?

Consider this: think of all the people you know of who are really smart, and now list the things that make them smart. How many things are on your list? In fact, in how many different ways do we think smartness, or intelligence, comes into play? I have a friend who's really smart with money, but he's a bit of a dick. Does this mean we need to define dickishness as part of being smart? No one would say that, for sure, even if it just might be true. But then, I have another friend, and she's mathematically smart and really clever at crossword puzzles, yet struggles with depression.

So, are we to define up all the positive traits, and try to leave out all the bad? In my examples above we could easily define dickishness and depression as negative things we don't want, but keep in mind that these are broad, overarching, stereotypical and imprecise words we use to explain something far, far more complex underneath. There is no such thing as dickishness; it's a number of factors of a person's traits and properties that together add up to the term we use. The same with depression. Some of these traits and properties might be needed for intelligence to happen, like being obsessive with details or logic. Sure, that might lead to logical positivism (yeah, not an endorsement), or maybe true mathematical insight, or, you know, OCD. I'm not going to say that it is either positive or negative, because the point is that it can be both and neither.

Now the same with the word "intelligence." See the problem?

2. What does it mean for one intelligence to have surpassed another? Intelligence is a compound of a number of traits and properties of beings, like a big city where some buildings are tall and some are small, some are this color, others have that kind of roof, this one houses a lot of people but this one houses but a few, this one has computers in it, this is a library, this, that, this, that ... hundreds of buildings of various kinds. How do you compare Mumbai, India, to Rio, Brazil? How do you proclaim that one of them has surpassed the other? Sure, in people, or money, or the number of houses, or some other trivial data point, but surpassed as a whole?

If you don't think deeply about things, then sure, Mumbai has surpassed Rio, or the other way around, depending on your bias and information. Now what does that tell you? Well, it should give you a hint that "intelligence" is nothing more than a statistical average explained in vague terms we people use on each other. And that is no platform for a serious debate about whether one loosely defined thing is better or worse than some other loosely defined thing.

Building blocks of AI, or any I

Each "intelligence" consists of a set of building blocks of various sizes and capabilities. Let's dig into some common definitions of the kind traits we think an intelligence should have, and just briefly point out some complexities and issues along the way. And keep in mind that these kind of definitions are rife with fast and loose facts and definitions (although I would point out if they are completely wrong or very different from what these "near the AI development" people would actually say), so this is just to point out some surface problems before we get deeper down.

Let's for the sake of argument use this one:
"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do"
Reason - "[...] is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and changing or justifying practices, institutions, and beliefs based on new or existing information". There's almost too much to unpack in this one alone, but it points to consciousness, facts, beliefs, justifying practices and other epistemologically difficult problems, and, well, I'm not sure what we think our current technology is capable of doing with any of these. But it's pretty clear that we don't actually agree on what reason is and how it's supposed to work. The only thing we have really got down in terms of computer systems is the logic part; I'll talk a bit more about that later on when I get to mathematics and Gödel.

Plan - "[...] is one of the executive functions, it encompasses the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal."  Here we get into desires, thoughts, goals and actions. The latter two I can see being approached in an AI with some degree of complexity, but the former two are seriously hard nuts to crack; they imply non-fatalistic networks of information that affect other non-fatalistic networks in order to evaluate and reshape the information, not to mention some pretty complex handling of data that deals with this thing we call "memory." I'll talk more about that later.

Abstract thinking - "[...] in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. "An abstraction" is the product of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category" So, how does an AI today deal with concepts? We mostly use symbolic logic and languages to do AI, which is all fine and well, but these are representational in nature and backed by logic; they are, in other words, bound to what we already know in terms of computers and programming. They fall into mathematical pitfalls and don't offer us any solutions to the aforementioned hard problems, but most importantly they are guided and defined by the programmer. There is no intelligence here, only symbolic definitions defined elsewhere.

I could go on, but the point is that all of these are really, really hard problems to solve. Sure, hard doesn't mean impossible, but don't let that truism hide the fact that when we talk today about what AI can and can't do - and hey, that's an important point, because from what it can do now we project what it might be able to do in the future - we're really only scratching a limited amount of surface in very limited ways, and from this limited progress we extrapolate huge progress in the future.

There's also the deeper point that once you start parsing out what is actually involved in these things, the complexities stand out more, and I know for a fact that a lot of people "close to the development of AI" tend to forget this (often unknowingly), so it serves as a timely reminder.

Now, as a pointer back into that mystical and tedious land of philosophy, I want to remind us all that we have a slightly more difficult issue at hand; we're all engaging in language games. I refer you to Wittgenstein, one of my hero philosophers (yes, he's one of the few in the heavy-weight category), especially his later incarnation, where he rejected his earlier axiomatic and logical positivist attempts at epistemology. He's got a lot to say on a number of topics, of course, but here I'd like to riff a bit on the fact that we're all speaking a variety of languages, using a variety of words that mean different things in a variety of contexts, and that there are interpretations and translations between a variety of players (where the self, the You, plays several of those parts) in trying to figure out meaning and message, as well as the different interpretations you get from a variety of actions based on someone's conscious and subconscious workings. So, keep this in mind; I'll refer to Wittgenstein later on, but if you should read only one of the linked articles, Wittgenstein on language games is the one to go for. He was a really intelligent guy.

So, when I say "Wittgenstein was really intelligent", what is being said? Well, it depends. It's not that clear. Maybe I should write a book about it to clarify my position.

Superintelligence

Nick Bostrom has written a book like that, and his "Superintelligence" tries to a) define what it is, and b) convince us that the threat is real. I'll use this book as a shorthand way to refer to the people I listed further up. Sure, there are subtleties and variations, but for simplicity's sake I think it rather nicely represents them all in this debate, as a lot of them also happen to endorse the book.

In order for something to be superintelligent we need to know what intelligence is. However, I think it's reasonable to argue that we don't really have that knowledge, but let's again, for the sake of argument, give some slack to all these definitions, and maybe talk about what this thing is supposed to be like, according to Bostrom.

Bostrom says that it is "[...] an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills"

Let's compare that to the state of the AI art. What are we currently up to? Well, a huge proportion of it is recognising things. Just, you know, look at a picture (or a sound, or a stream, like a video, which was my field of expertise for many years) and tell me what you see. This is an important part of pattern recognition, and in my view sits at the core of the human empirical experience; if you can experience things and link this stream of input to symbols for further processing (like an input into a neural network, for example), then at least you have a framework for interaction, and a seed of something that might have epistemological grounding. (A lot of this talk is about images, but it applies equally well to sound and touch.) So far, so good.

So, how are they going? Well, not all that great, even though the machines are getting better at looking at a picture and giving you some keywords for what they recognise that thing to be. In clearly outlined images that are somewhat histographically easy on your filters and thresholds (I could rant for hours about just how shady [pun for the initiated] the techniques we use to trick normal people into thinking our software is clever, but that's a different post waiting to be written), sure, we get ... a few words for what might be in an image. Normally this is done by running millions of images through a neural network simulation, throwing statistical analysis in there for good measure, and if you think intelligence is more or less the same as statistics, then sure, maybe we've come a long way.
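To make concrete just how mechanical that keyword game is, here's a minimal sketch in Python - made-up reference data, made-up labels, nothing from any real system - of the sort of histogram trickery I'm describing. Nothing in it knows what a "cat" or a "forest" is; it just measures distances between arrays of numbers:

    # A toy "recogniser": compare an image's brightness histogram against stored
    # reference histograms and report the statistically closest label. There is
    # no comprehension here, only distances between arrays of numbers.
    import numpy as np

    def histogram(image, bins=16):
        # Flatten pixel values (0-255) into a normalised brightness histogram.
        counts, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
        return counts / counts.sum()

    # Hypothetical "training" references, one histogram per label.
    references = {
        "cat":    histogram(np.random.randint(0, 256, (64, 64))),
        "forest": histogram(np.random.randint(0, 256, (64, 64))),
    }

    def recognise(image):
        h = histogram(image)
        scores = {label: float(np.linalg.norm(h - ref)) for label, ref in references.items()}
        return min(scores, key=scores.get), scores

    label, scores = recognise(np.random.randint(0, 256, (64, 64)))
    print(label, scores)  # a keyword plus some distances; statistics, not understanding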

But the problem isn't even recognition in itself, but the opposite of that: not seeing anything at all. Show the computer a fuzzy image, or something out of focus, or something that's half in frame, or any of a million different "what if" contenders, and chances are things are not recognized, or, what's worse, recognized as something they are not. The same problem crops up again and again in various parts of machine recognition. Take, for example, autonomous cars, which Elon Musk (Sam's inviting friend) is getting himself heavily involved in. The problem there isn't making the car follow the rules, but knowing when to break them: say you and your car are in a situation where it makes sense to break the traffic rules in order to save lives (for example, a big boulder is heading right for you, and the only way to save your life is to swerve quickly across a double line). We humans are great at breaking rules. Computers, not at all.

I need to be a bit clearer on this point, though, namely that the rules are defined by us humans. The outcomes of those rules are judged through our human values and evaluations. Make no mistake about it: when we judge what is and isn't an AI, we're judging it through our very human notions of what an AI should be like. But intelligence isn't just a human thing, and therein lurks a big problem: how can we talk about superintelligence when we don't know what that, or even normal intelligence, is?

I'll also point out at this point that Bostrom mentions creativity, wisdom and social skills as being at the core of this superintelligence. That not a single AI experiment ever conducted comes even close to dealing with those three core concepts is telling. Because, frankly, they are really, really hard. Being creative is hard enough for humans, much less figuring out how a computer could do it too. And wisdom? Any person worth their epistemological salt would laugh very hard right now. (If this point needs further clarification after you've read up on knowledge and wisdom, let me know.) Finally, social skills. Apart from statistical methods of analysis and response, we have no AI that understands the social. Or, frankly, understands anything.

And that's the crux of the whole problem with AIs in general: they don't understand anything, they just analyse data and give humanly defined responses. Most people have higher expectations of an AI than simple analytics and logic, and I'd go as far as to say that we probably want a superintelligence to understand stuff. A lot of this whole discussion might be summarised in the following question: what do we really mean when we say "understand"?

Understanding

This time I want to start with a more head-on definition:
"1. a. To become aware of the nature and significance of; know or comprehend: She understands the difficulty involved. b. To become aware of the intended meaning of (a person or remark, for example): We understand what they're saying; we just disagree with it. When he began describing his eccentric theories, we could no longer understand him. c. To know and be tolerant or sympathetic toward: hoped that they would understand my complaint.
2. To know thoroughly by close contact or long experience with: That teacher understands children. I understand the basics of car repair.
3. a. To learn indirectly or infer, as from hearsay: I understand his departure was unexpected. Am I to understand you are staying the night? b. To assume to be or accept as agreed: It is understood that the fee will be $50.
4. To supply or add (words or a meaning, for example) mentally: The verb is understood at the end of the statement "Yes, let's.""
Let's make one thing very clear: to understand something is not the same as to remember something, not even remembering patterns associated with given symbols (which is where current AI is up to), or analysing it, or processing it. No, we're talking here about comprehension, meaning, intentions, awareness, tolerance, sympathy, learning, and other mental modes. Anyone close to "the development of AI" knows darn well that we've got nothing on any of these. At the moment it's all analytics, thresholds, symbolic logic, some fuzzy logic, and random-ish notions thrown in to make the systems a little more human, but nowhere in AI research is there a hint of, say, "sympathy" or "awareness." Maybe you could scratch the surface of some of the others, but by golly, we are so far removed from the problem that extrapolating progress from it is just, well, baffling to see. If people "near the development of AI" had prototypes of an AI that understands anything, Nobels and other accolades would already be placed firmly in their hands. They are not.

Let's take a look at the first example given; "She understands the difficulty involved." You and I, dear reader, understand what this means in broad terms. We don't know what she understands, but we understand what it means to understand this difficulty, because we understand and have experience with past difficulties. We know what a difficulty is. We understand that the generic 'difficulty' is somewhat linked to our own understanding of 'difficulty.' We understand these basic idioms easily.

So it is that an AI must also understand these basic idioms before we can even talk about what it means to understand, but analytics doesn't get you there. Analytics gets you from complex data to a parsing tree, to symbolic logic, to a statistical probability of options over that data; however, that's not understanding anything, because in this case there is no data. We don't have data, only a conceptual relationship between 'difficulty' and our experiences of such. And current AI doesn't really know what to do with no data. (An interjection here would be to make 'difficulty', as an abstract concept, operate as data, but there's a trap in classifying everything as data, as we shall soon see.)
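If that sounds abstract, here's a tiny sketch (hypothetical vocabulary, nothing more) of the analytics chain I just described: sentence in, tokens out, symbol IDs out. The word 'difficulty' becomes an integer the programmer assigned; nothing anywhere relates it to any experience of difficulty:

    # Sentence -> tokens -> symbol IDs. The IDs are human-assigned conventions.
    vocabulary = {"she": 0, "understands": 1, "the": 2, "difficulty": 3, "involved": 4}

    def parse(sentence):
        return [vocabulary.get(word, -1) for word in sentence.lower().split()]

    print(parse("She understands the difficulty involved"))  # [0, 1, 2, 3, 4]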

I need to invoke Wittgenstein again here (even though other philosophers have of course made similar points) and his notion of language games; just because you understand how a pawn in a game of chess moves doesn't mean you understand what the pawn really is, much less how to play chess. The language we use to explain what AI is all about (as we've done above) is fraught with errors from sprawling, unclear and inconsistent definitions of what we're talking about. And that's not the same as understanding what this thing called AI really is, or even how it's supposed to be an issue in the context of whatever our understanding of human nature is. Hmm, how can I explain this more clearly?

An AI is smart when it recognises what we expect it to recognise, or does what we expect it to do, and only then. If it does anything we don't expect or don't understand, we cannot proclaim to understand that unexpected thing. Was that clever or stupid? Well, in human terms it looked pretty stupid, but do we know for sure that a dog licking itself is a stupid thing to do? For a dog?

If a supposed AI recognises a forest of trees in a macro picture of dust particles, do we know for sure whether that is stupid or wrong, or it being creative, or having an imagination? This is a major problem of evaluation and how we can talk about something being a threat or otherwise. We judge everything through human values, and rather sterile and logical ones as such. Why is that at the core of intelligence?

The right stuff

It just might be that, technically, we can create an AI. Like Sam Harris, I'm a hardcore physicalist, so sure, it must be possible, at least when we build with "the right stuff", but that right there might just be the main issue in and of itself: the stuff we're currently building with (and this is the kind of stuff Moore's Law applies to, I might add) is bits and bytes and silicon and integers and binary memory and registers.

When we simulate a neuron in a computer, for example, as many AI researchers do when they try to use human modelling as a basis for AI, we really have no idea what we're doing! We are converting a biological analogue into a digitally filtered signal, and just patching things up in ways that "work", which may or may not be what's actually happening in nature. It doesn't matter at what speed the simulation does our bidding, or what capacity it has for capturing some phenomena; it just might be that there are some things that don't translate well into digital form.

The computer screen in front of you most likely can represent something like 2^24, or 16,777,216, color variations (24-bit color representation, possibly with another 8 bits of transparency), and for most people that's more than enough. We watch movies, do image manipulation and otherwise enjoy our colorful computing life using only these roughly 16 million colors. It looks awesome. This is because we humans have eyes that can discern around 10 million colors or shades thereof, again roughly speaking. However, how many colors are there, really, out there, in nature? How much are we not able to see?

The answer to that is: almost infinite, because the scale of colors is enormous, with almost infinitely fine gradations of difference in frequency. It's so large a spectrum, with such small gradients, that we don't have instruments to capture either its size or its depth. In other words, as far as our eyes are concerned we capture it all, but in reality we're seeing a tiny, tiny, infinitesimally small part of reality.

I suspect that what we capture in our computers in regards to AI is those 24 bits of richness; it's an infinitesimally small part of reality, so small as to render the attempt laughably futile. We make a 24-bit-deep AI, when in order for it to have the features we'd like it to have (or speculate it should have), we're off by such a large amount that it seems, well, pointless. Sure, we can re-create something that looks a bit like reality in our computer, but, really, it's a long, long way off. We're trying to build our AI out of something that simply can't build it.
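For the arithmetically inclined, here's the back-of-the-envelope version of the point as a sketch (the quantisation rule is just the standard 8-bits-per-channel one):

    # 24-bit colour gives 2**24 discrete values; quantising a continuous
    # intensity to 8 bits per channel collapses infinitely many physically
    # distinct shades into the same stored number.
    levels = 2 ** 24
    print(levels)  # 16777216

    def quantise(intensity):
        # intensity is a continuous value in [0.0, 1.0]; storage keeps only 256 steps
        return round(intensity * 255)

    print(quantise(0.500000), quantise(0.501234))  # 128 128 - the difference is gone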

Now don't get me wrong; I love AI concepts, and I used to make them for a living. I follow the field, as well as developments in computer languages and hardware. I even understand a good chunk of the saviour of every AI geek out there, quantum computing. But just like the bit has a problem, so does the qubit, and mostly it's because it's still a bit. Sure, it's faster and bigger and oozing with potential, but that potential is quickly gobbled up by the state of the art of quantum computing and our digital approach to logically building our tools. And there's the odd sensationalist new discovery that's going to make it all happen, even though there have been sensations like this since the beginning of computing. I'm not holding my breath. Moore's Law is not going to get us there; faster computers give us a much faster way to do, well, really dumb things.

Some HiFi enthusiasm is spot on

Do you know why there are some HiFi enthusiasts out there who still refuse to buy digital stereo receivers and amplifiers and persist in buying music on vinyl records? That's because the 16-bit (or 18 if you're fancy) resolution of most recorded music (although there's a slight move towards 24-bit resolutions these days) is missing something. It's hard to pin down, and I think even the most sophisticated enthusiast would be hard pressed to really pin it down. But there's ... something. Something is lost in the digital version. Some call it warmth, some might be referring to "quality" (and again we should read a little Wittgenstein, or maybe a bit more Pirsig on this one), but it is certainly ... something, somewhere deep in the filters and conversion from reality to the digital. Let this stand as an allegory of the problem: Bostrom hears the music, and it sounds like the real thing! It really does! So, it must be the real thing!

A similar argument is made between digital amplifiers and analogue tube amplifiers. Sure, the latter may be slightly more noisy, however the range, and warmth, and color, and depth, and all sorts of properties we associate with the good can also be found there. The digital is lesser. Flatter. A bit restrictive. Something, can't quite pin it down, they would surmise.

Another similar argument is made between CD (16-bit) sound and better 24-bit sound. And with receivers which capture radio signals along a digital vs. analogue spectrum. Or equalizers or pre-amps or capacitor conversion rates or the range of converters on both sides of the equation, in recording, mixing, mastering, and playback.
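To show what the digital version actually throws away, here's a minimal sketch of 16-bit quantisation (an idealised 440 Hz tone, standard CD parameters): sample a continuous waveform and round each sample to an integer step. The rounded-off remainder is the "something" that never makes it into the copy.

    import math

    def sample(t):
        return math.sin(2 * math.pi * 440 * t)  # an ideal, continuous 440 Hz tone

    rate, bits = 44100, 16
    scale = 2 ** (bits - 1) - 1
    errors = []
    for n in range(rate // 100):               # 10 ms of audio
        x = sample(n / rate)
        stored = round(x * scale)              # what ends up on the CD
        errors.append(abs(x - stored / scale)) # what got rounded away
    print(max(errors))  # worst-case rounding error per sample: small, but never zero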

P.S. Any odd cable with enough surface area to conduct properly will do, including $2 electrical cables. The monstrously expensive cables are one thing some HiFi enthusiasts get really, really wrong.

Sam Harris

Ok, so back in January 2015 Sam Harris wrote a piece on his blog which is like a prequel to what he said in his podcast. I want to dig out just the opening gambit of that, as it stands as an example of where all of this went wrong for him. His very first line reads
"It seems increasingly likely that we will one day build machines that possess superhuman intelligence."
How does he know this? At present we don't know what we can build in the future. We don't know the ingredients of intelligence, much less what it would mean to build a software program that contains it. Most of this I addressed earlier, but this initial premise is dubious at best.
"We need only continue to produce better computers"
My biggest objection here is the word "only", which makes it seem like all this wonderful newfangled AI needs is a faster computer, as opposed to new knowledge, better technology, deeper understanding and tons of research. I'll chalk this one up as the wrong word injected by Sam because it makes the sentence sound better.

He then goes on to say stuff in his first paragraph that I agree with, until we get to the end of it (which will be the last thing I'll quote; the rest of his post is built on the poorly reasoned bits quoted here):
"There is no reason to believe that a suitably advanced digital computer couldn’t do the same."
What, again? Now, I see your sneaky "suitable" word in there, from which you can shake off any criticism for all eternity, but ignoring that, I can think of plenty of reasons why a really advanced computer can't tackle the AI conundrum:

  • We don't know what we're making
  • We don't know how to make what we want
  • We don't know by what criteria we should judge it right
  • We don't know how computers could deal with things like values, empathy, creativity and so on
  • The hardware doesn't cut it
  • The software doesn't cut it
  • The software can't cross the analytics hump
  • It might be impossible

Just off the top of my head. So there are certainly some reasons to think otherwise.

The issue at hand here isn't that we can't imagine doing it; no, it's that we might not actually be able to do it. And no one "who knows about the development of AI" would tell you otherwise. Yes, we're making strides in analytics and neural network simulations, but these things have no bearing on what we associate with anything intelligent.

What does Harris fear? Let's grant him the "weak AI" moniker, and that that thing actually exists, and if it gets a little bit smarter, maybe that's something closer to what Harris is talking about? However, it's hard to pin down whether there really is a difference between Harris's AI and, say, a Bostrom superintelligence AI; both talk about how it will take over important human intellectual endeavours, how it might be able to solve complex issues, and how we're creating something close to a god. And how we had better make sure it's a good god, and not a bad one. Harris, for example, brings up the good ol' "it will try to ensure its own survival, and secure its computational resources" cliché, as if either of those two goals has to include all the scenarios from The Terminator and The Matrix wrapped up in one. (And I'll especially address the "computational resources" nonsense in the next section.) What, exactly, makes an AI choose the Hollywood solutions instead of, say, more realistic ones? Surely, it must think like a script writer.

Anthropomorphic issues and logical impossibilities

Recall from earlier that there's a big problem in evaluating an AI in non-human terms. And now, let's make it harder! To anthropomorphise something is to apply human traits to something that isn't human, often in trying to simplify or understand that thing, even when it's inappropriate. Our car is called Becky, and she needs a service. There, I did it. Now you try!

A computer's memory is nothing like our human memory. Sure, both concepts use the word "memory", but they are two absolutely different things. This is an example of how we create a piece of technology and give it an anthropomorphic name because there is some similarity. But to then expect that computer memory works the same way as human memory is a couple of levels up from a category error: computers don't remember anything; they have a limited amount of really fast storage that we call "memory" (RAM, technically), in addition to the slower but more abundant kind we call hard disks, etc.

A computer's CPU is nothing like a brain. We sometimes say that the CPU is the brain of the computer, but it isn't. It just plain isn't. A CPU has registers - slots of memory - into which you cram a piece of memory from somewhere else (usually from RAM; see above), do a bit-wise manipulation of that piece of memory (this is the essential part of what we know as software, or a computer program), and plonk it back into RAM or similar. Repeat this a few million times a second. That's a CPU: take bytes, fiddle with them, put them back.

A computer program - software, for short - is nothing like thinking or how the brain works. It's a long string of instructions on how to shuffle pieces of computer memory around. If this byte has this bit set, do this shuffling over here. Otherwise, do this shuffling over there. Move these bytes from here to there. Change those bytes according to this logical formula. There is nothing here except logical rules over bytes of data.
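Here's a toy sketch of that "take bytes, fiddle, put them back" loop - a hypothetical three-instruction machine I just made up, not any real CPU - to show that the whole show is bit manipulation with no meaning attached anywhere:

    # LOAD a byte into a register, XOR it with a constant, STORE it back.
    memory = bytearray(b"Sam")
    register = 0

    program = [("LOAD", 0), ("XOR", 0b00100000), ("STORE", 0)]  # flip one bit of byte 0

    for op, arg in program:
        if op == "LOAD":
            register = memory[arg]     # copy a byte from memory into the register
        elif op == "XOR":
            register ^= arg            # bit-wise fiddling
        elif op == "STORE":
            memory[arg] = register     # put it back

    print(memory)  # bytearray(b'sam') - a changed letter to us, changed bits to the machine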

A computer's input is nothing like seeing, smelling, tasting or hearing. A computer sees only long streams of bytes, and we use computer programs to try to shuffle things around in order to create a symbolic sense of what those bytes could mean as useful output to us humans. We literally write computer programs to convert bytes into human symbols, without any knowledge or understanding of what those symbols are. For example, the computer might juggle some bytes around and come up with this stream of bytes: 83 97 109, which in binary looks like 01010011 01100001 01101101. If you convert each of these numbers into symbols, then 83 we have defined - outside the computer! In a paper manual! A human convention! - as an upper-case S, 97 as a lower-case a, and 109 as a lower-case m. Sam. Three symbols which according to one of our human alphabets are the letters S, a and m, and which together create, to us humans, the word Sam. To the computer, however, it is three byte values of 83, 97 and 109. We - the humans seeing these symbols on a screen - are seeing the word Sam. The computer sees nothing at all.
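You can watch that convention at work in a few lines of Python (the interpreter applies the ASCII table for us at the very last step, which is exactly the point - the mapping lives with the humans, not in the numbers):

    values = [83, 97, 109]
    print([format(v, "08b") for v in values])  # ['01010011', '01100001', '01101101']
    print("".join(chr(v) for v in values))     # Sam - meaningful only to the human reading the screen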

Now, a combination of clever software and a CPU might do some pretty cool stuff, and indeed it does, but don't confuse any of this for any human equivalent, or even think that the computer sees in symbols like we do. It just doesn't.

This also points to the problem of the constant translation that happens between human and machine, all the time, which we are completely oblivious to; the computer just manipulates numbers, and only we see the symbols! Get it? We - the humans, as we read it - see the symbols. The computer does not. We make computer programs that shift bytes into symbols. The computer doesn't. We create input that means something to us. The computer sees just numbers to crunch. We see the meaningful output, while the computer sees only numbers.

And it gets worse! The computer not only sees only numbers, it doesn't even friggin' see them! Remember I showed you the binary version of Sam, 01010011 01100001 01101101? In a computer CPU these aren't even ones and zeros; the symbols one and zero are human constructs. Inside, the computer deals with the states of signal and non-signal, the on and off of electrical switches. The CPU reacts to the on and off states of bits (the individual components of a byte, the atom of the computing world) in a prescribed manner, and the computer programmer tells it exactly how to react to any given pattern that comes along.

And so the question becomes: what can you express using this incredibly constrained set of building blocks? Well, Gödel's incompleteness theorem is here to rain on our AI parade and piss on the logical positivists I mentioned earlier:

Any logical system powerful enough to express truth-statements can't be both complete and consistent. (My own paraphrase.) The most human example I can come up with is the statement "This statement is false." Chew on that one for a second. Not only does it hold great philosophical value (lots of deep and wonderful questions can be found there; see them you should!), but it also points to something an AI must be able to do easily: lie, or at least not frown when it sees one. We humans have no problem with the statement as such; we don't explode at the sight of it. Computers, however, deal with this differently. Each logical statement must be broken into computational statements, and when a statement like the one above comes along, you get something akin to "divide by zero", or: I give up! Does not compute! Because, frankly, for a computer, it must compute, or it can't do anything with it.
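As a toy illustration (it is emphatically not a proof of Gödel's theorem, just a demonstration of "does not compute"), here's what a naive, purely computational truth-evaluator does with the liar sentence:

    import sys
    sys.setrecursionlimit(100)

    def this_statement_is_true():
        # "This statement is false" == not (this statement is true)
        return not this_statement_is_true()

    try:
        this_statement_is_true()
    except RecursionError:
        print("does not compute")  # the human shrugs at the paradox; the evaluator just gives up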

And so, after all that, do we still think this is the stuff intelligence will be made from?

Me, the villain, and the Hollywood bullshit AI

There's a big problem in thinking that a superintelligence created in a supercomputer environment or on specialised computers somehow has control over all aspects of computers everywhere, by virtue of sharing the common word "computer". How often do we see in movies a superintelligence created in the lab suddenly roaming the internet, controlling the lights in a building, and making any electronic equipment do its bidding? Now, we geeky programmers and system administrators laugh out loud at such scenes, as we know it's hard to get many of these things to work at all at the best of times, much less be controlled through a network.

Remember that the superintelligence is bound to the environment in which it can be, well, superintelligent. This means that it cannot all of a sudden travel the internet, run on other machines, and control things all over the shop. All things might be connected, but you are still bound by the constraints of those connections. A superintelligence that needs special computers to run on can't transfer itself to any other computer and simply run there. No, it runs where it is superintelligent, and it has to use the internet in just the same crude, shitty way the rest of us do. Just because it is a superintelligence doesn't mean it can download that porn flick any faster. It needs to hack into other computers and plant trojans or sleepers there, and then direct the traffic back to itself, and this is very much how we already do things. Sure, it might be able to create a network of drones quicker and possibly smarter, but those drones aren't superintelligent. We all have to work with the tools we're given. And the superintelligence is bound to its environment and tools and constraints.

As stupid as some people might be with their computer security, it doesn't follow that a superintelligence can gain access to, say, my home router, which I can access over the internet. We can pretend I have a secret lair in my basement, and at just the right moment, as the protagonist wants to exit my secret lair with my collection of chocolate eggs, it doesn't matter how friggin' super-intelligent this superintelligence is, or how fast it can do things; it still has to brute-force guess my password, with an increasing wait before the router allows the next try. Maybe it knows of some hack or vulnerability in that exact router's firmware, but given that I take security seriously and run firmware I've modified myself - like any good villain should! - I doubt this very much. Especially within the time it takes our protagonist to leave with her booty before the sleeping gas kicks in.
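If you want the back-of-the-envelope version of why that increasing wait matters, here it is (the one-second starting delay and the doubling rule are mine, purely illustrative):

    # The router, not the attacker, sets the pace; cleverness doesn't help.
    wait, total = 1.0, 0.0
    for attempt in range(30):   # just 30 failed guesses
        total += wait
        wait *= 2               # the router doubles the delay after each failure
    print(f"{total / (3600 * 24 * 365):.1f} years of waiting after 30 wrong guesses")  # ~34 years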

Hollywood sucks at depicting reality. Let's not forget that, people. Really, truly, absolutely friggin' sucks at it. Why are we letting our reason be so influenced by this shit?

There's one more thing I need to dig into before I wrap up this way too long screed. Every problem with the concept of AI that we have dreamed up is anthropomorphic in nature; we expect something intelligent to be like the intelligence of us humans. We tend to think that if we create something, it needs to be judged in human terms. If we create an AI that's a bit of a dick, well, it might be a dick to us humans, but it may not be a dick to its own kind, or maybe not a dick to certain kinds of humans, or maybe a dick to dogs but not cats, or ... ? The options are endless. So why are we choosing only the "dick to humans" option as a big risk? Why are we so dead sure that that one is the one that is likely to happen?

The fear-mongers want us to believe that we humans can pose a threat to a superintelligence's existence, and hence that it will want to rid the world of us. Or, as Sam Harris says, we might not even be a threat, but maybe it determines that "what we humans want" is something that we, ultimately, don't want. Like, that all we want is pleasure, and hence the machine decides to put us all in vats and feed dopamine directly to our brains, making us happy and full of pleasure - but is that really a life you'd like?

I don't know of any straw-man argument made of more hay. Why are we jumping to the worst possible scenario instead of anything pleasant? Do we have actual reasons for doing so, apart from it making a more thrilling Hollywood script? And I mean this absolutely seriously: is there any reason to go with one horrible scenario over any non-horrible one?

Secondly, if we truly create something superintelligent, why are we still assuming that we treat it and it treats us as something utterly stupid? Why are we assuming that something superintelligent even ever would think bad thoughts? Or think badly of us? Or think that if we humans are a threat to it and we might shut it down, why wouldn't it just go, ok? Why are we assuming that it will fight for survival at all? Why are we assuming that it wants more resources, that it wants to expand? Why are we assuming it will fight us, for whatever reason?

Because we're anthropomorphizing the shit out of it. Nuff said. Now stop it.

Finally

TL;DR: Don't be seduced by the Hollywood bullshit! Think a little deeper!

Bloody hell, I didn't expect this screed to become this long, and if you've read all the way here, I do apologize. I've seen so many Bostrom-inspired AI doomsday bullshit articles of late that I felt this post was a necessity, for some balance from someone who, well, knows a few bits or bytes about the actual technology and issues at hand, including some philosophical insights that are perhaps more important than people realise. No, I'm not an expert in the field who travels the world in a shiny dress speaking at all the TED derivatives there are. If nothing else, I hope to stop a few of those articles.

I realize that each of these sections could be much, much longer, using far more technical jargon. I'm sure I could impress you with the names of special neural network simulations and rattle off histographical methods that would lure a lot of people into thinking my software is so smart. Darn it, I've done it before in real life: solving really hard and complex problems in pattern recognition and decision making with cumulative histograms, clever dynamic threshold and filter techniques, throwing filtered data at neural network simulations (Bayesian trees and cluster analysis, if I remember correctly), and sure, it looks impressive; it can fool people into thinking this is really intelligent software solving problems in intelligent ways.
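For the curious, here's roughly what that kind of pipeline boils down to, as a heavily simplified sketch (synthetic frames, a made-up decision rule, none of it from any actual product): difference two video frames, derive a dynamic threshold from the statistics of the difference, and flag "motion" if enough pixels exceed it. It looks clever in a demo; it's arithmetic on pixel values, nothing more.

    import numpy as np

    def detect_motion(prev_frame, frame, min_fraction=0.01):
        diff = np.abs(frame.astype(int) - prev_frame.astype(int))
        # "Dynamic threshold": three standard deviations above the mean difference.
        threshold = diff.mean() + 3 * diff.std()
        moving = diff > threshold
        return moving.mean() > min_fraction     # "motion" if enough pixels changed a lot

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 50, (120, 160))      # synthetic "background" frame
    curr = prev.copy()
    curr[40:80, 60:100] += 100                  # a bright blob appears
    print(detect_motion(prev, curr))            # True - flagged as motion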

It's not intelligent. It's friggin' stupid stuff that we mistake for intelligence because it was put together by humans who instructed the computer to show symbolic results we humans can relate to, so it looks intelligent. It's not AI. It's neither weak nor strong AI. It's just analytics.

It's only a model.

Also, I wish smart people like Sam Harris would at least try a little harder to see the technical issues with the whole strong AI concept, and if not the technical ones, then at least some of the philosophical ones. And I mean that in the nicest possible way:

I want people to engage more in the AI debate, but by virtue of some deeper thinking, not influenced by these technobabble dudes like Bostrom and their shiny future projections that are so close to science fiction that I, well, think they are pure science fiction.

Wednesday, March 7, 2012

Interactions with a thinking Christian

On Monday I posted a comment on the blog of Thinking Christian, in a post that is a reply of sorts to another post by evolutionary biologist Jerry Coyne (who, in turn, was taking Christian philosopher Alvin Plantinga to task for sloppy apologetics and argumentation). I was away yesterday with neither the time nor decent access to continue with the follow-ups, and when I returned there was just so much to dig into that I decided to place my replies here. So.

Update: Small changes towards the end to make sure I talk about models of thought and not the people in question.

Update 2: It pains me to say this, but the Thinking Christian has, well, banned me. I've pasted at the bottom a couple of banned responses. I'm very disappointed by this kind of shutting people up - I would never do that myself - but can hardly say I'm surprised.

My comment basically said two things;

1. By upping the complexity of one of the premises without adding further premises to define it, you are essentially begging the question. If I am to take the thinking Christian seriously - and, by 'thinking', I am assuming that logic, reason and deeper thinking than what is required for making dinner is involved - then each time something begs the question, we need to point this out, otherwise these arguments aren't sound. And so I pointed out that in order for Tom (that's the thinking Christian in question) to have a good reason to ridicule Coyne, we must all agree on what omnipotence actually means in this context. I will go into detail about this a bit later (and Tom asked me to specifically delve into it).

2. Tom used as part of his rhetoric that "Nothing can cause itself." Now, I admit that perhaps my wording here was a bit clumsy, but my asking for the logical grounds of that argument - even clumsily - still runs into the problem that believers in a thing that has always existed face when they pit their everlasting god against an 'everything else is caused' premise:

   We don't know that everything else is caused.

The usual example in this argument is that the universe had a beginning, therefore someone created it, therefore god. But there is a huge misunderstanding in lay people's understanding of what this beginning of the universe means, and the key to it lies in the fact that scientists - when they want to be a bit more precise than they are on some TV talk show - call this universe "the known universe." This is important: the word "universe" is what we use for what we see, and, indeed, what we see - mostly light, plus the bits of matter we can see in our solar system - can all be traced back to a common point in time from which its journey started some 13.7 billion years ago, and hence we say that this is the age of the known universe, and that even time might have started there.

But this is only the little part of it all that we can see. In fact, we can't even see all of that; for the first 380,000 years of the universe the heat was so high that light couldn't travel freely - electrons and nuclei would try to bond into atoms, and then get shredded apart again by the intense radiation. It was only after those 380,000 years, when the universe had cooled enough for atoms to form, that light could roam free and we could start to see anything. This border between what we can see and what lies beyond is called the cosmic microwave background, and it gives us a good map of how the universe - the known universe - further evolved, but that is quite out of scope for this post.

However, we - and I mean scientists, of course - can calculate back into those 380,000 years using powerful computers and simulators, all the way back through the slowing and expansion of the known universe and the inflation, all the way to a few Planck times after the Big Bang (which didn't bang), but no further, as our understanding of physics breaks down at that point. Beyond this point we don't know anything; however, this is not to say we will never know, only that this is the current limit of our knowledge.

When we consider that the universe is made up of positive and negative energy, and after weighing the universe - again, the known universe - and accounting for all that is in it, we are left with the fabulously funky result that the total energy of the known universe is precisely 0. Zero. Nothing. But more on that later, when we talk about whether something can come from nothing.

Quick aside, though, as this whole section stands as an answer to another commenter, JAD, who said "Again, there is nothing logically contradictory about the premise “that Something has always existed.”" Yes, you're right. However, the point - admittedly clumsily made - was that Christians refuse to give anything but their god the privilege of having always existed: matter, energy, empty space, or the universe (as opposed to merely the known universe). And they refuse this not because it isn't plausible or logical, but because they don't like it or don't agree with it. Anyway, onwards. The theory (and the facts) of the known universe says nothing about the materials that went into its formation, but we can make a pretty good case that there is stuff that has always existed. We just don't know.

Tom says: "I think you, like Dr. Coyne, would likely say that you place a high value on evidence."

And I dare say that all of us place a high value on evidence. Saying otherwise would be philosophical sophistry. Would you drink mercury? No, because evidence tells us you'd simply die. We all use evidence all the time, in walking out the door, surviving the landscape, interacting with the universe, and trying to take notice of what happens in it. I think it's probably fair to say that we all value evidence equally much, but that some of us are happy to replace evidence with faith when the evidence interferes with our world view and/or preconceived notions.

We need to be clear about this: everything that enters the scientific consensus is, for all practical purposes, true. The only way to shift its truth-value is to produce counter-evidence. (Btw, science is quite happy to accept counter-evidence. In fact, all good science that is part of the consensus was at some time counter-evidence. But I digress.) We need to treat the laws of physics and biology and cosmology and chemistry as proven true, because if you don't, the onus is on you to come up with that lovely counter-evidence. Being a kook or dreaming up an insane (but internally logical) platform of knowledge may work for the kook in question, but evidence is rooted in the concept of testing and re-testing the data to ridiculously rigid levels (and I mean that in a good way; those levels are ridiculously high for us normal lay folks, but in science they are the norm) before it is agreed by consensus.

Tom says: "There is nothing sophistical, my friend, about saying that omnipotence is an attribute of God that has to do with power; or that maximal power can do what maximal power can do, but that maximal power still can only do what power can do"

There are many problems to address here. I'll start with what Tom clarified in a further comment: "Given the hypothesis “The God of Christian theism exists,” and given the definition of God that comes with that ex hypothesi, can you defend Coyne’s position that such a God ought to be regarded as potentially being able to commit divine suicide?" This was his point A, with a direct follow-up question point B giving the context of the post itself, basically asking "Can the Christian god of Alvin Plantinga commit suicide?"

The answer is, unsurprising to many but mostly surprising to Christians, "why not?" Theology is a funny thing in that one can pick and choose from thousands of years of religious thinking about all sorts of issues, including suicide. Traditionally, suicide was regarded as the ultimate sin for Catholic reasons (Aquinas, and the impossibility of confessing the sin before death), but lots of people (including Augustine) would argue that it's a sin because it violates the sixth commandment.

But hang on a minute; the call to arms on the believing side is that their god cannot sin (it is illogical for their god to sin), therefore suicide is ruled out. But how? Where in the Bible does it say that suicide is a sin? It seems that the commandments - vague as they are on this topic - were rules written for man, not for god; even the most cursory reading of the bible will give you plenty of examples of this god killing people, or having people killed in his name, for a host of different reasons as he sees fit. William Lane Craig is on record proposing (and believing) that divine command theory makes anything god wishes and orders good, even if we mere humans might find it distasteful. Is that the yard-stick on our topic as well? Killing doesn't seem to be something god has an automatic, all-encompassing problem with, as opposed to, say, picking sticks for firewood on the sabbath, or taunting some guy because he's bald (god killed the stick-picker, and 42 children for the taunting, although this is admittedly low-hanging fruit). I think I could make a case that ‘Biblical morality’ is situational, based on the arbitrary whims of Yahweh, so why couldn't he whim a suicide?

But all of this is of course context, and beside the point. More to the point: has Tom got any reason why god can't? I know the concept seems rather absurd to a Christian, as they've been told all their lives that god is love and all life is sacred and god is everlasting and so on, but is there something concrete to point to? Suicide isn't mentioned in the bible as such (although there are people who die at the hands of actions they themselves started), and I can make a case that the tale of Samson's demise stands as an example of an approved suicide. If the conditions are right, why not?

There are various quotations about god's eternity, of course, and the everlasting love, which indeed suggest that he at least plans to hang around for a very long time. However, that doesn't give us a can't, only a won't. And that was my point; your god can't do something? Or just won't do something? And then, if that question gets answered, we move on to biblical morality, but more on that later.

(Update: Tom didn't say this; Doug did.)

Commenter Doug adds in a further comment: "I find it curious that Alexander references “arguments” (plural), when the neo-atheist arguments typically boil down to one:
1. If God exists, his primary concern would necessarily be to justify His existence to me (since I, the Rational Man [tm], demand it).
2. Inexplicably, God declines to play by my rules and jump through my hoops (to be known hereafter as “evidence”).
3. Therefore, God does not exist.
"

You know what I find curious? The sentence preceding the above reads "But my question to Alexander was of a different nature, addressing the unstated atheist preference: not for a definition per se but for a definition they like." And yet ... and yet, he proclaims that all atheist arguments boil down to the one he wrote above.

The curious part is the double standard. Isn't it obvious to you (i.e. Christians in this debate) that you yourself are giving us a definition that you like to swat? That argument up there? It isn't mine. This is called "building a straw man", where you dream up an argument you think your opponent has, or at least hope he has, and then easily burn it down because it's flawed, stupid and not really all that good. I don't really need to point out that it's a stupid argument, because, well, you made it so. It exists in your head, not in the real world.

So, let's look at the real world, which, incidentally, is a fun way to summarize the only argument atheists, in fact, do make:

"1. There is no scientific evidence for a god. 2. Therefore, there probably isn't any god."

Packed into that is, of course, also the finer point of science and probabilities (it seems a lot of misinformed people out there took Richard Dawkins to task last week or so for saying, er, exactly what he's been saying for years and years and has written a whole book about. Oh, the irony!), but it essentially comes down to one thing: evidence. And we talked about that at the very top of this post, so I won't reiterate my stance on it again.

JAD further writes something, well, interesting: "Why should I accept an eternally existing universe over an eternally existing transcendent Mind (God)? Which is the better explanation?"

Right. Which is the better explanation. Better. Explanation. At this point it is tempting to throw up my arms and just bail out, but I'll use it as a parting comment.

The question itself is easy enough to answer: "Because truth matters." It's not my problem if people like to believe things for whatever reason - like comfort, bliss, support, need and so on - as long as those beliefs don't affect the lives of others. If you can't grasp this argument, then imagine that your country was run by a militant Jainist (there's a hysterical joke in this definition if you care to look it up) and all meat eating was banned and ants had rights to eat your house over you getting rid of them. If nothing else, I believe strongly that we must adhere to the secular state as the only state in which we all can have freedom of religious belief (and if you think otherwise, then you shouldn't have the right to complain about other opinions, no matter how different from yours they might be; we share this world, and we are different, and none of that is going to change).

But dear JAD (and others who might be lost in this model of thought): there are glaring problems here. You don't mean "eternally existing transcendent Mind"; you mean the Christian god Yahweh, who is jealous of other non-existent gods, who gets angry, who smites, who interferes in the discourse of men, tells people to kill their own, gambles with people's lives as tokens in his play, who has a son who is also himself (and not), and who kills himself as a scapegoat for others so that those others can - by merely believing that this event happened the way it did and that the players were exactly who they claimed to be - get to a different non-physical place with non-physical bodies called heaven, where there's going to be singing and praising for all eternity to this transcendent mind that started it all and knew all along what would happen. Oh, and he can do anything, except those things the believers in this story find unfitting in their model of thought.

That god, that Yahweh of the talking bush and the virgin birth and the killing of the babies of peoples he didn't approve of, the god that orders an unskilled person to build a boat so big as to defy biological facts and the physical laws of nature, the god who has a personal relationship with everyone except those he doesn't, who loves all except those he doesn't - that is the god we're talking about here. (Update: The first comment on the post pointed out that this was a bit attacking. Well, this section about Yahweh is not meant to be attacking at all; it is meant to bring the actual properties of that specific god into a discussion that far too often drifts into a wishy-washy definition of what a god is. I've re-worded some of it to make that clearer.)

Let's not lose sight of what the consequences of your belief are. Your god is not some vague ethereal mind that cannot be defined and must remain in the mysterious space of transcendent mumbo-jumbo of avoidance, like some New Age hippie definition of love that always wins wars or whatever, man. Talking philosophy and reason and logic is all good and well, and is often a wonderful tool in sussing out a model from which truth can spring, but I don't think you have completely honest intentions of following the truth wherever it may lead you (i.e. changing your model of epistemology for a different model of epistemology that may or may not be better than yours at defining truth and scrutiny), and so the danger of bringing in this New Age version of god is a testament to the undefined, to the flaky, half-thought-through concepts that you build your argument on. I'm not saying this as a criticism of your god or your beliefs; I'm pointing out that when we make our arguments, the premises need to be rooted in reality rather than in some thought experiment that only has validity in our imagination. This is why I keep focusing on sophistry (as a challenge, mind you) and on the fact that I too often see double standards whenever you call up logic and reason. Sure, you can create logically consistent arguments within your model of thought itself - so can the guy in the asylum down the road who thinks he's Jesus or Napoleon or Elvis - but it is directly incompatible with science, as referenced at the beginning of this post. And when that happens, well, evidence is really what matters; how else are we to determine what is true if not through a joint, intersubjective process of tests, results and basic epistemology?

Have evidence? Then bring it; surely a few thousand years should be enough to dig up at least a smidgen of something that is claimed to be such an integral part of everything. Bring it, because it is important. We all value the truth, and we all should follow the truth wherever it takes us, but we need to be very honest about our knowledge and our epistemology. Convincing the unbelievers of something so fantastic and so integrated into everything should be easy, and yet it seems it is not. Any miracle would do. Even plausibility would do.

Otherwise it's just our opinion. Opinions are, well, interesting and worth listening to, but at some point those opinions need to have some connection with either reality or some other person who has her own opinions. Our arguments are shared opinions, and the more we can link them to objective logic, the easier it is for the other party to accept them as true. Beliefs are opinions sometimes shared with others. You might ask at this point if there is any harm in false beliefs, to which the best answer I've heard (from Red Neck, Blue Collar, Atheist) is "I believed the gun was empty."

Update 2: Banned comments
Two replies that didn't make it through moderation: one to Tom, another to Rodrigues.


@Tom:"In other words, you blew off the answer that was on the table for discussion. Totally ignored it."

No, I actually addressed it. Poorly. The first time was in the comment right after you gave your answer, the very next comment. It has to be said that it wasn't an "ok, I get what you say now" answer; I was distracted by the focus on proposition #4 in your original answer.

My answer in #73 was a summary of my problems with the propositions, not a direct response to your answer. It wasn't that I had ignored you, and I certainly didn't mean to skip a beat in the flurry of comments; I was re-stating why it is a problem. More importantly, I stated in my objection five (building on 3 and 4) that, as a springboard for talking about things that can be caused or not (as Melissa pointed out), causation needs a better treatment before the propositions can be used in our discussion. Yes, I should perhaps have clarified better what I meant here (and would have, had anyone asked) about the various definitions of causality (i.e. Aristotelian, Humean, etc. in philosophy vs. causality as meant in physics). The causality of a finite universe is obvious, but not so for an infinite one. I was waiting to hear why the unmoved mover was an acceptable one, and nature was not.

You then move to the "non-reply" comment, which was my response to Melissa's "I was going to reply to most of your objections but [...]", where she says that it's addressed elsewhere, and, by the way, number 5 is wrong. If she's not responding to the eight objections, then that's a non-reply. Not no reply, but a reply that doesn't really reply much - or, a clumsy way of saying I was hoping for more. But again, I can see how that snowballed down another poorly lit hill.

In the very next comment you say "Melissa referenced that. You’ve ignored it", but I didn't ignore it. I had addressed it - admittedly not in a perfect way, that's for sure - but claiming I had ignored it is, well, not true, and certainly not intentional.

But I'll say this: the weird part is that I dealt with this all the way back at the beginning, when I talked about the origin of the universe on my blog - the very definition of the universe, the known universe, the all-encompassing universe; three different universes that we need to figure out before we go on - to which you replied (on behalf of Doug, who agreed) that it is the known universe. Ever since then I've tried to explain that the four propositions are irrelevant once you want to talk about the supernatural, which comes in as soon as you make claims about it (this friggin' universe) needing to be caused. This is the main thrust of my objection, and as far as I can tell from reading through the comments, it's still not addressed.

With the definition of the universe you gave, you are granting your god the special privilege of being uncaused - a privilege you are not willing to allow for *any* version of the universe. From there we ventured into your four objections to why the universe can't be uncaused, so this whole flashback scene was a bit of a surprise. But here we are, and that's that.



@Rodrigues: "At any rate, for someone who directed "Oh my, how ridiculously stupid this is getting. I can’t tell you how much expletive juices are bubbling up through my spline in frustration, but; Yes, really, really, really, Really!" at Doug -- this is just an example, others could be gathered -- methinks this is a case of the pot calling the kettle black (not that it excuses me in anyway)."

So, did I call anyone anything? Was I personalizing anything there, beyond expressing my own feelings about the situation and using Doug's *own* words back at him for stylistic effect? Did I call him ignorant? Was it condescending? No, it was pure frustration on display, *my* frustration. I came into the debate with an agnostic attitude, explaining in fine detail that we (the human collective known as science) don't know a whole lot of stuff for sure, that I myself am agnostic and don't know a lot of stuff, and then - as if this were a surprise that somehow changes the game - I get called out as an agnostic.

Now, as far as frustration goes, yes, perhaps I could have phrased it differently, but I've not attacked the person or implied anything bad about Doug (if nothing else, I can say in hindsight that he's been one of the nicer ones). I stated that "this" is getting stupid, "this" being this whole "debate", the frustration being an example of what I see as a lack of effort on my opponents' part to understand what I'm saying. I've stated my agnostic leanings several times throughout this rigmarole, and now I'm being accused of being one? There's no way of getting through here; damned if I do, damned if I don't.
