Integrative AI

What I wish to explore today – in a rudimentary and truly exploratory way – is the idea of incorporated artificial intelligence, which I will explain in a minute, along with its philosophical, psychological and physiological implications and impediments. A couple of working definitions are necessary before we begin. Let ‘artificial intelligence’ refer to an entity capable of independent thought yet not of evolutionary genetic origin. When I speak of ‘incorporation’ or ‘integration’ I mean the inclusion of an artificially intelligent agent acting in close conjunction with a human actor. This, I believe, is not only a technological possibility but highly likely within the next one hundred years. By ‘integrated agent’, or simply ‘agent’, I mean the theoretical entity inferred from the above; the ‘actor’ or ‘host’ is the being in which that entity is embedded. I leave my definitions intentionally vague because I cannot pretend to know how the intricacies of development will proceed. It is still possible and useful to examine the implications without knowing the details of how it will come about, or even whether it will. Think of this essay as a diving board of unknown height into murky water of unknown depth.


What I wish to explore in these three areas can be condensed as follows: philosophically, the redefinition of humanity and the loss of self; psychologically, the risk of schizophrenia and other maladaptive distortions of reality, as well as intentional manipulation on the part of the agent; and physiologically, the likelihood of a host immune response or microbial infection and, again, the possibility of intentional manipulation of normal processes on the part of the agent. Buckle up – it’s going to be a shaky ride.

The definition of self is, of course, a philosophical conundrum with a long and multifaceted history. Animism would posit that the self is but a sliver of a whole, whereas the Cartesian definition founds the distinction of the human self from the non-human on cognitive ability. The existential self is an extrapolation and deviation from Cartesian thought via the works of Husserl, Heidegger, Sartre and, more recently, the likes of Derrida and Foucault. With these latter two especially I am woefully unfamiliar. Taking this terrible condensation of complex thought as a given, we can begin to explore how artificial intelligence itself, as well as the integrated actor we are considering today, interferes with what has become the classical definition of self. I am human. I emote, empathize, was born and will die. An artificial entity would not necessarily be subject to these conditions, but we may decide they are requisite for that entity to be considered human. That is a whole different debate, outside the limited scope we have adopted today. Granting that such an entity comes to exist and is capable of independent thought, what are the implications of its close engagement with a human actor? Specifically, the loss of self – especially within the post-Cartesian, post-existential framework of self – emerges as a problem when a potential self and not-self become so closely intertwined that it is nearly impossible to distinguish which, the agent or the actor, was the originator of any given thought. Yet if we take a broader definition of the self as a mere animistic entity amongst a myriad of ‘selves’ adopting every form imaginable – from subatomic particles to the cognitive reality of higher mammals – this conundrum simply disappears. The agent-actor self becomes but a new self, just as the animistic entities that make up a human combine into a larger animistic entity. Is this argument simply a flagrant use of deus ex machina? Perhaps, but it will suffice to negate the problem of the philosophical loss of self in our soft tirade.


In psychological terms it should not be much of a stretch to see the dangers of intentionally putting a voice in one’s head, though it does raise the issue of defining psychological illness itself. Most human guidelines of behaviour are normative. Though revelation would have us believe otherwise, there is no grand code of what is appropriate and what is not. Largely we simply look to others for how we should act, and consider ourselves in good standing so long as our fruit does not fall too far from the tree. When this definition fails we turn largely to a functional one: where a behaviour such as limited agoraphobia falls within or near normal ranges, we ask instead whether it interferes with the continued livelihood of the individual. For instance, if an urban dweller decides not to venture from their home at night for fear of violent assault, this is not necessarily a problem. If that same individual finds their social desires or their nourishment affected by this fear, it has clearly become one. As above, I have given a simple dichotomy to demonstrate that what may be a problem in one light is not always so under a simple expansion of definition. Is this mere mental trickery? Not if the definition which renders the problem obsolete is shown to be correct. That again is an issue for another day and more adept individuals.

Our next point – that of intentional manipulation – is one that a science fiction author could likely have a lot of fun with. That an agent could interfere with an actor, especially when given so direct a route of access, is not so much a problem of integrative AI as we are exploring it today as a characteristic of manipulation in general. If we ever attempt something so novel as integrating an independent agent into our own cognitive processes, it will undoubtedly raise all sorts of unforeseen consequences, many of them quite dangerous. Yet Isaac Asimov, the man who unwittingly coined the term robotics, did much to accommodate humanity to the inevitable technological realities of the twenty-first century. In his ‘Robot’ series Asimov extensively explored the possibility of sentient and empathic robotic beings. He was writing within a tradition that, since the time of Mary Shelley, had indulged itself in the monstrous possibilities of technology. His efforts, culminating in his ‘Foundation’ series, have arguably done more to ease advanced technology into our day-to-day lives than Steve Jobs or Bill Gates, however gladly they would take the credit. So our argument here is literary rather than dichotomous: fear is a natural human response, though not always a rational one. Often we must explore the fearful possibilities of something new and unknown before we can finally come around to its undeniable value in human progress and productivity.

As for the physiological problems inherent in the development we have been exploring, they are best left to those with adequate technical know-how. I for one am not convinced that the author’s interest in including them was anything more than alliterative. Shame on the author for such obvious sophistry.


To summarize our explorations: the problems raised here are largely problems of framing. The paradigms through which we view the world often go unquestioned, like unnoticed smudges on the glass as we enjoy a pleasant view. The interference of AI with our understanding of self is largely a product of philosophical developments within the last few hundred years. These developments are themselves artefacts of Western thought and, as an important addendum, are also rooted in Christian pedagogy. The same holds just as true for our post-Freudian, post-Jungian, cognitive understanding of behaviour and mental illness. Even if the wayward conjecture made above – that integrated AI is technologically probable within the next one hundred years – proves completely unfounded and untrue, our exploration of the cyborg has furnished some useful understanding. Roadblocks only stand in your way if you refuse to drive on the shoulder. Better yet, blaze a trail.
