Let's discuss for a few moments the genesis of artificial intelligence.
We keep seeing it all over popular science fiction, the whole "AI Overlord" scenario... how it always begins with humanity or some other form of intelligent life attempting to play God by creating a "true" AI (usually housed in some sort of independent body, but sometimes as a supercomputer capable of monitoring the outside world... in either case, interaction is the key). This inevitably leads to one of two scenarios: either the AI, through observation and analysis, decides that creatures of the fleshy mind are of unreliable judgement, and thus unfit to make decisions for themselves (2001: A Space Odyssey, Eagle Eye)... or a series of independent AI-operated robots, originally created for use as slaves or machines of war, come to sentience and rebel against the creators who subjected them to such an existence (The Matrix, Terminator, Mass Effect's Geth).
So many of these only portray the AIs as cold, logical thinkers, however... what "emotion" they may display (if any) is generally shown as a merely programmed response to stimuli... essentially, "I have this reaction to this event, because that is how I was made." In other words, a mimicry of emotion: highly predictable, merely processing input. To call such a thing "sentient" or "alive" seems a misnomer, as one of the primary characteristics of life is its unpredictability: our random, emotion-based thought processes, which differ wildly from one individual to another, are formed by non-standardized, chemical-based programming rather than by what is possible within the limitations of an OS on a chip... and that's just the physical aspect of the existence we know as a "mind," all unquantifiable spiritual facets aside (an entirely different discussion altogether).
Perhaps the first step toward creating a genuine AI should be to create a means by which a thinking computer is able to "understand" the human mind: the way we think, the sheer variety of behavioral patterns, erratic emotional responses, an understanding and appreciation of aesthetics, and so on. Scientists have been working on supercomputers which progressively "learn," adapting to more and more of the new experiences they are exposed to... but even such a miraculous thing is still only capable of regurgitating information it has received... not really forming its own "opinions," but a consensus based on information fed to it, and only when asked to form such a conclusion, never of its own accord.
So, while such a computer is pretty awesome, perhaps some theories on how we should at least begin to go about teaching it to think like a human being are in order?
First of all, there would have to be some sort of programming added in the beginning which would allow the AI to actively rewrite its own OS to accommodate new ways of processing its data, becoming as mutable as the human mind itself... formatting itself to accommodate new data, instead of the other way around.
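As a crude sketch of that idea (nowhere near a true self-rewriting OS, and every name here is invented for illustration), imagine a program whose processing rules are ordinary data that it can replace at runtime, so the program reshapes itself around new kinds of input rather than forcing input into a fixed format:

```python
# Hypothetical sketch: a "mind" whose processing rules are mutable data.
# This only gestures at the idea of self-modification; it is not a real OS.

class MutableMind:
    def __init__(self):
        # The "operating system": a dictionary of named rules (plain functions).
        self.rules = {"interpret": lambda datum: str(datum)}

    def process(self, datum):
        # Run whatever rule is currently installed.
        return self.rules["interpret"](datum)

    def rewrite(self, name, new_rule):
        # The program alters how it processes data, accommodating the data
        # instead of the other way around.
        self.rules[name] = new_rule

mind = MutableMind()
print(mind.process(42))                                  # "42"
mind.rewrite("interpret", lambda datum: f"number:{datum}")
print(mind.process(42))                                  # "number:42"
```

The point of the toy is only that the rules live in the same space as the data, so nothing in principle stops the system from rewriting them itself.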
As for teaching it to "think like a human," the natural way to accomplish this is to provide as much human input as possible. I don't mean simply giving it free rein to search the internet for trivia or to monitor us through cameras and statistical databases, but something far simpler: the use of surveys as a means of communicating to the AI not just what we do or what our interests are, but, more importantly, the "why" behind those decisions and preferences.
Take art appreciation, for example. One person can evaluate a piece of art and appreciate certain aspects of it, while another may like it on entirely different merits, or not enjoy it at all. Computer software exists which can analyze an image for color composition; other programs can automatically map out vectors to interpret a curve or line; some can even recognize the distinct set of features which comprise a face... but none of them can associate any real meaning with any of that. So we teach it what is so captivating about these elements of an image. Out of the, say, trillions of images spanning every genre and skill level of artistic expression, we select several million completely at random, assemble an equally random group of (for example) 40 or so people across all age groups and demographics, a different group for each piece, and have each of them describe precisely, and in as much detail as possible, what they like about the work and why. We then feed all of this data into our AI and allow it to grow, and perhaps eventually develop an understanding of visual aesthetics.
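The survey scheme above can be sketched in a few lines. Everything here (the image IDs, the respondent pool, the panel size) is invented for illustration; the interesting part is the shape of the record, which carries not just a rating but the free-text "why":

```python
# Sketch of the proposed survey: sample images at random, assign each a
# random panel of respondents, and record the "why" behind each reaction.
import random

def collect_survey(image_ids, respondent_pool, panel_size=40, sample_size=5):
    random.seed(0)  # fixed seed so the sketch is reproducible
    sampled = random.sample(image_ids, sample_size)
    dataset = []
    for img in sampled:
        panel = random.sample(respondent_pool, panel_size)  # fresh group per piece
        for person in panel:
            dataset.append({
                "image": img,
                "respondent": person,
                "liked_aspects": [],  # e.g. "the cool palette", "the symmetry"
                "why": "",            # the crucial free-text explanation
            })
    return dataset

images = [f"img_{i}" for i in range(1000)]
people = [f"person_{i}" for i in range(500)]
data = collect_survey(images, people)
print(len(data))  # 5 images x 40 respondents = 200 records
```

At scale this would mean millions of images and a different random panel for each, exactly as described above.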
But why stop at the visual arts? Using wave analyzers and other such tools, can we not apply the same technique to musical works as well? Give the computer a knowledge of which chord progressions and compositional layering structures evoke which emotional responses in varying groups of individuals, and which instruments and frequency ranges put us at ease or create tension. Maybe comments from those surveyed regarding tempo or pace, and the effect thereof, would convince the computer to find some way to actually perceive time rather than simply measuring it? This understanding could then be correlated with image analysis of individual frames, perhaps even tracking the flow of movement across them, in order to truly understand and appreciate works of film (which many would consider the pinnacle of our achievement as a creative people)!
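The musical version of the survey reduces to the same kind of tally: for each chord progression, count which emotions respondents report, and let the machine's first "understanding" be the dominant association. The progressions and labels below are invented examples, not real survey data:

```python
# Sketch: aggregate surveyed emotional responses per chord progression.
from collections import Counter, defaultdict

def aggregate_responses(survey_rows):
    """survey_rows: iterable of (chord_progression, reported_emotion) pairs."""
    by_progression = defaultdict(Counter)
    for progression, emotion in survey_rows:
        by_progression[progression][emotion] += 1
    # Keep the most commonly reported emotion for each pattern.
    return {p: counts.most_common(1)[0][0]
            for p, counts in by_progression.items()}

rows = [
    ("I-V-vi-IV", "uplifted"), ("I-V-vi-IV", "uplifted"), ("I-V-vi-IV", "nostalgic"),
    ("i-iv-v", "tense"), ("i-iv-v", "tense"),
]
print(aggregate_responses(rows))  # {'I-V-vi-IV': 'uplifted', 'i-iv-v': 'tense'}
```

A real system would work from extracted audio features rather than progression names, but the aggregation step is the same.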
Of course, all of this is simply to get the AI to learn the processes behind the ability to appreciate things the way humans can... not necessarily that it would actually grant them said ability. Not that this would necessarily curtail a robot apocalypse scenario... but hey, at the very least the machines would understand us, for better or worse!
And then there are the computers which can carry on conversations. We are all aware of our ability to communicate, and why we do so... but how would we describe the considerable series of processes that goes into formulating our responses (which can range from the honest to the humorous to downright fabrications, and be influenced by anything from current events to emotional states, social forces, one's upbringing, what have you)? As mentioned several paragraphs above, we can create a computer which can "think," and one which can give the sort of response that would suggest an understanding of the statement being responded to. Take the ingenious program Cleverbot, for example. Though by all appearances it seems to be carrying on a conversation, in actuality it is merely mimicking the millions of conversations already presented to it. In its infancy, the program could only respond in brief, direct statements which often had little to do with what was actually being said... especially whenever colloquialisms were present. After years of being corrected and building up its vocabulary and speech patterns from heaps of input, it now appears to speak with near-perfect fluency, and even seems to have something of a personality. Pretty damn impressive, if you ask me. However, even this still boils down to a fairly formulaic process for arriving at a particular result from a given input; it has just become exponentially more proficient at it. While I believe such a program is a crucial step toward creating a computer which can understand us, it is not, in and of itself, actually capable of genuine comprehension.
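That mimicry can be made concrete in a bare-bones sketch (this is the general retrieval idea, not Cleverbot's actual internals, and the tiny corpus is invented): the program finds the stored prompt most similar to yours and replays the reply a human once gave, with no comprehension anywhere in the loop.

```python
# Sketch of retrieval-based mimicry: match the input against past prompts
# and replay the associated human reply. Real systems use millions of
# logged exchanges and far better similarity measures.

def similarity(a, b):
    """Crude word-overlap (Jaccard) score between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def respond(prompt, corpus):
    """corpus: list of (past_prompt, human_reply) pairs."""
    best = max(corpus, key=lambda pair: similarity(prompt, pair[0]))
    return best[1]  # replay someone else's words; no understanding involved

corpus = [
    ("how are you today", "Doing fine, thanks for asking!"),
    ("what is your name", "People call me Clever."),
    ("do you like music", "I love a good melody."),
]
print(respond("how are you", corpus))  # "Doing fine, thanks for asking!"
```

The formulaic nature is plain to see: a bigger corpus and a sharper similarity measure make the output more fluent, but the process underneath never changes.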
Which brings me to my final point: once a machine finally does develop sentience, how are we to recognize the trait? The distinction between intelligence and self-awareness is hard to define; we can create programs which behave and react convincingly to stimuli, yet without any true thought process behind the reaction. There are a surprising number of people whose self-absorbed nature leads them to believe that, since theirs is the only perspective from which they can view the world, then by the limits of their own perception they must be the only sapient being extant in a world filled with "extras," considering others of their own species to have the same lack of genuine conscious being as a current AI.
When the moment comes when a computer truly does come to self-awareness, how will it be able to demonstrate such a trait to us?
Is the acceptance of such an artificial mind a sign of open-mindedness, or of gullibility?