A thought just occurred to me, which has given life to a new theory.
I was pondering the existence of Pangaea, the supercontinent of old which contained all of Earth's current landmass. I got to thinking, "How in the world does a planet of this size exist with such an uneven mass distribution?" Then I considered the (nigh-undetectable) slowing of our planet's rotation, and the phenomenon of tectonic shift. I then came to one conclusion:
Perhaps this planet's entire surface used to be covered by water. Once, a couple billion or so years ago, the Earth was actually part of a binary planet system. Something happened which caused the orbit of the less fortunate (and undoubtedly smaller) of the two planets to decay, pulling it gradually and inexorably into the larger planet's gravity well. As we know, as an orbiting object's path becomes tighter, it completes each orbit faster and faster (the shrinking orbit means a shorter path travelled at an ever higher speed; an orbit only stabilizes when the object's speed exactly matches the circular orbital velocity for its distance from the gravity well). The rotational speed of the planet may well have been augmented by the collision, the doomed moon imparting a "spin" from which the Earth is only now recovering. The largest part of the smaller planet's mass, smashed down into the crust, is what became the first landmass to stick out of the water: Pangaea... while the debris flung out into orbit as the smaller planet was torn to bits pre-impact would have formed a ring which orbited the Earth until it coalesced into a smaller version of what it once was, which we now know as the Moon. The phenomenon of tectonic shift (the mechanism by which the continents are now drifting about) is just the planet's means of redistributing this formerly uneven mass into something more uniform and spherical.
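For the curious, the claim that a decaying orbit gets faster falls straight out of Newtonian gravity. Here's a quick sketch of the idea (the numbers are stand-ins for an Earth-like body, nothing fitted to any actual proto-Earth):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the larger body (Earth-like), kg

def circular_orbit_speed(r):
    """Speed needed to hold a circular orbit of radius r (meters)."""
    return math.sqrt(G * M / r)

def orbital_period(r):
    """Time to complete one circular orbit of radius r."""
    return 2 * math.pi * r / circular_orbit_speed(r)

# As the orbit decays from 100,000 km down to 50,000 km, the body
# moves faster AND completes each lap sooner.
for r in (1.0e8, 5.0e7):
    print(f"r = {r:.0e} m: v = {circular_orbit_speed(r):.0f} m/s, "
          f"period = {orbital_period(r) / 3600:.1f} h")
```

So the tighter the orbit, the higher the speed and the shorter the lap: exactly the death spiral described above.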
Now imagine, if you will, an Earth which was never hit by its smaller half. It would have been a planet with somewhat lower gravity, covered in water (or perhaps ice, as the volcanic activity responsible for the first "greenhouse" layer which caused the initial warming of the atmosphere was in turn the result of the tectonic movement caused by the aforementioned impact... but let's be optimistic here and assume that our planet would have borne the capacity to support life regardless). A huge moon looms in the sky, more than large enough to (frequently) block out the sun... while the idea of a full lunar eclipse is laughable. Running with the assumption of life existing on the planet, without land providing a wider variety of environments for life to adapt to, birds and most insects, and probably mammals as well, would never have had reason to exist. The most advanced creatures would most likely be apex predators such as sharks (or their equivalent) and intelligent cephalopods like the octopus.
tl;dr: In an alternate universe, we are squid people.
The life and times of one Christopher McCurdy; artist and man of many hobbies.
Tuesday, February 01, 2011
Wednesday, January 26, 2011
The Drunken Philosopher #5: I Want a Robot Buddy
Let's discuss for a few moments the genesis of artificial intelligence.
We keep seeing it all over popular science-fiction, the whole "AI Overlord" scenario... how it always begins with humanity or some other form of intelligent life attempting to play God by creating a "true" AI (usually in some sort of independent body, but sometimes as a supercomputer which is capable of monitoring the outside world... in either case, interaction is the key). This inevitably leads to one of two scenarios: either the AI, through observation and analysis, decides that creatures of the fleshy mind are of unreliable judgement, and thus unfit to make decisions for themselves (2001: A Space Odyssey, Eagle Eye)... or a series of independent AI-operated robots, originally created for use as slaves or machines of war, comes to sentience and rebels against the creators who subjected them to such an existence (The Matrix, Terminator, Mass Effect's Geth).
So many of these only portray the AIs as cold, logical thinkers, however... what "emotion" they may display (if any) is generally shown as being merely a programmed response to stimuli... essentially, "I have this reaction to this event, because that is how I was made." In other words, a mimicry of emotion: highly predictable, merely processing input. To call such a thing "sentient" or "alive" seems a misnomer, as one of the primary characteristics of life is its unpredictability: our random, emotion-based thought processes, which differ wildly from one individual to another, are formed by non-standardized chemical-based programming rather than whatever is possible within the limitations of an OS on a chip... and that's just the physical aspect of the existence we know as a "mind," all unquantifiable spiritual facets aside (that's an entirely different discussion altogether).
Perhaps the first step toward creating a genuine AI should be to create a means by which a thinking computer is able to "understand" the human mind: the way we think, the sheer variety of behavioral patterns, erratic emotional response, an understanding and appreciation of aesthetics, etc. Scientists have been working on supercomputers which progressively "learn," adapting to more and more new experiences as they are exposed to them... but even such a miraculous thing is still only capable of regurgitating information it has received... not really forming its own "opinions," but a consensus based on information fed to it, and only when asked to form such a conclusion, never of its own accord.
So, while such a computer is pretty awesome, perhaps some theories on how we should at least begin to go about teaching it to think like a human being are in order?
First of all, some sort of programming would have to be built in from the beginning which would allow the AI to actively rewrite its own OS to accommodate new ways of processing its data, becoming as mutable as the human mind itself... formatting itself to accommodate new data, instead of the other way around.
As for teaching it to "think like a human," the natural way to accomplish this is to provide as much human input as possible. I don't mean simply giving it free rein to search throughout the internet for various trivia or to monitor us through cameras and statistical databases, but something far simpler: the use of surveys as a means of communicating to the AI not just what we do or what our interests are, but more importantly, the "why" behind such decisions and preferences.
Take art appreciation, for example. One person can evaluate a piece of art and appreciate certain aspects of it, while another may like it on entirely different merits, or not enjoy it at all. Computer software exists which can analyze an image for color composition; other programs can automatically map out vectors in order to interpret a curve or line; some can even recognize the distinct set of features which comprise a face... but none can associate any real meaning with any of that. So we teach it what is so captivating about these elements of an image. Out of the, say, trillions of images spanning a gamut of genres and skill levels of artistic expression, we select several million completely at random, assemble an equally random panel of (for example) 40 or so people across all age groups and demographics (a different panel for each piece), and have each person describe precisely, and in as much detail as possible, what they like about the work and why. We then feed all of this data into our AI and allow it to grow and, eventually, perhaps develop an understanding of visual aesthetics.
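Just to make the survey idea concrete, here's a toy sketch of what the collected data might look like and how it could be tallied. Everything here is hypothetical (the image IDs, the aspect labels, the responses); the point is only that each record carries both the *what* and the *why*:

```python
from collections import Counter

# Hypothetical survey records: (image_id, aspect_liked, stated_reason)
responses = [
    ("img_001", "color", "the warm palette feels nostalgic"),
    ("img_001", "composition", "the off-center subject creates tension"),
    ("img_002", "color", "high contrast grabs attention"),
    ("img_002", "linework", "confident curves read as energetic"),
    ("img_001", "color", "reminds me of autumn"),
]

def aspect_profile(records):
    """Tally which aspects respondents cite, per image.

    The free-text reasons would be the real payload for the AI;
    this only counts which aspects draw attention at all.
    """
    profile = {}
    for image, aspect, _reason in records:
        profile.setdefault(image, Counter())[aspect] += 1
    return profile

print(aspect_profile(responses))
```

A real system would of course have to digest the free-text reasons themselves, not just count labels... but even this crude tally already tells the machine that people respond to different images for different reasons.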
But why stop at just the visual arts? Using wave analyzers and other such tools, can we not apply the same technique to musical works as well? Give the computer a knowledge of which chord progressions and compositional layering structures evoke which emotional responses in varying groups of individuals, and which instruments and frequency ranges put us at ease or create tension. Maybe comments by those surveyed regarding tempo or pace, and the effect thereof, would push the computer to find some way to actually perceive time rather than simply measure it? Then this understanding could be correlated with image analysis of individual frames, perhaps even tracking the flow of movement across them, in order to truly understand and appreciate works of film (which many would consider to be the pinnacle of our achievement as a creative people)!
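The raw material for that kind of musical survey is surprisingly easy to pin down numerically. As one small illustration (the "bright"/"dark" mood labels are stand-ins for what surveyed listeners might actually report, not established fact), equal temperament gives every chord a precise frequency fingerprint a wave analyzer could extract:

```python
A4 = 440.0  # reference pitch, Hz

def note_freq(semitones_from_a4):
    """Equal-temperament frequency for a note offset from A4 in semitones."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Interval patterns (in semitones from the root) are the sort of feature
# that could be paired with listeners' reported emotional responses.
major_triad = [0, 4, 7]  # hypothetically tagged "bright, stable" by listeners
minor_triad = [0, 3, 7]  # hypothetically tagged "dark, melancholy"

for name, chord in (("major", major_triad), ("minor", minor_triad)):
    freqs = [round(note_freq(s), 1) for s in chord]
    print(name, freqs)
```

The single semitone separating the two triads is a tiny difference in frequency ratios, yet listeners reliably report it as a different mood entirely... which is exactly the kind of mapping the surveys would hand to the machine.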
Of course, all of this is simply to get the AI to learn the processes behind the human capacity for appreciation... not necessarily to grant it that capacity itself. Not that this would necessarily forestall a robot apocalypse scenario... but hey, at the very least the machines would understand us, for better or worse!
And then there are the computers which can carry on conversations. We are all of course aware of our ability to communicate, and why we do so... but how would we describe the considerable series of processes which goes into formulating our responses (which can range from the honest to the humorous to downright fabrications, and be influenced by anything from current events, emotional states, social forces, one's upbringing, what have you)? As mentioned several paragraphs above, we can create a computer which can "think," and one which can give the sort of response which would suggest an understanding of the statement being responded to. Take the ingenious program "Cleverbot," for example. Though by all appearances it seems to be carrying on a conversation, in actuality it's merely mimicking the millions of conversations already presented to it. In its infancy, the program could only respond in brief, direct statements which often had little to do with what was actually being said... especially whenever colloquialisms were present. After long enough being corrected and building up its vocabulary and speech patterns from heaps of input, it now appears to speak with perfect fluency, and even appears to have something of a personality. Pretty damn impressive, if you ask me. However, even this still boils down to a fairly formulaic process for arriving at a particular result from a given input; it has just become exponentially more proficient at it. While I believe that such a program is a crucial step toward creating a computer which can understand us, it in and of itself is not actually capable of genuine comprehension of the sort.
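That "formulaic process" can be boiled down to an almost embarrassingly small sketch. This is not how Cleverbot actually works internally (its method isn't public), just a minimal illustration of the retrieval-and-mimicry idea: reply with whatever answer followed the most similar remembered prompt, with zero understanding involved:

```python
# A toy retrieval "chatbot": it never understands anything, it just
# echoes the stored response to the closest remembered prompt.
memory = [
    ("hello", "hi there"),
    ("how are you", "pretty good, you?"),
    ("what is your name", "cleverbot, allegedly"),
]

def similarity(a, b):
    """Crude word-overlap score between two utterances (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reply(utterance):
    """Return the remembered response whose prompt best matches the input."""
    _best_prompt, best_response = max(
        memory, key=lambda pair: similarity(utterance, pair[0])
    )
    return best_response

print(reply("hello friend"))
print(reply("how are you doing"))
```

Grow `memory` by a few million real exchanges and swap the word-overlap score for something statistical, and you get the same trajectory the post describes: ever more convincing output from an ever more elaborate lookup, with comprehension nowhere in the loop.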
Which brings me to my final point: once a machine finally does develop sentience, how are we to actually recognize this trait? The distinction between intelligence and self-awareness is hard to define; we can create programs which behave and react convincingly to stimuli, yet without any true thought process going into the reaction. There are a surprising number of people whose self-absorbed nature leads them to believe that, since theirs is the only perspective from which they can view the world, then by the limits of their own perception they must be the only sapient being extant in a world filled with "extras," considering others of their own species to have the same lack of genuine conscious being as a current AI.
When the moment comes when a computer truly does come to self-awareness, how will it be able to demonstrate such a trait to us?
Is the acceptance of such an artificial mind a sign of open-mindedness, or of gullibility?
Monday, January 17, 2011
The Awesome post
I've come to the realization that I probably use the word "awesome" a bit too much. However, it's the only slang word for such a high degree of "great" which is still in use and thus not "unhip." Personally, I feel that such terms as "shit's so cash" or "so gangsta" are silly at best, and should only be used to ridicule the current state of slang and popular culture, much as "tubular" and "rad" have become for those who had the fortune of growing up in the 80s.
If I had to pick a scale of the slang I currently use for one-word descriptions of my liking for something, in progressive order from least to best, I suppose it would go something like this:
Neat <>
"Epic" goes in a different category; though it's often used in a tier above "awesome," I find it more appropriately defined as "worthy of retaining knowledge of (insert event/thing/quote) for future generations." Not necessarily for something specifically good or bad, but rather, as a superlative for "noteworthy."
So, for all the times that I say something's "awesome," please don't see my overuse of the word as a trite cliche, or as a parody of myself or my generation deployed for sarcastic purposes, but rather as a sincere expression of my opinion toward something, owing to the lack of variety in such (appropriate) words in the modern vernacular.
Stay awesome, everyone!