5.18.2005
Robots and Evolution
Inventing Our Evolution
From the article:
Traditionally, human technologies have been aimed outward, to control our environment, resulting in, for example, clothing, agriculture, cities and airplanes. Now, however, we have started aiming our technologies inward. We are transforming our minds, our memories, our metabolisms, our personalities and our progeny. Serious people, including some at the National Science Foundation in Arlington, consider such modification of what it means to be human to be a radical evolution -- one that we direct ourselves. They expect it to be in full flower in the next 10 to 20 years.
"The next frontier," says Gregory Stock, director of the Program on Medicine, Technology and Society at the UCLA School of Medicine, "is our own selves."
But there is no way to compete. At some point, you cannot increase the processing power of the brain without increasing the number of neurons, and there comes a point where the number of neurons makes the size/shape of the human head grotesque. On the other hand, robotic intelligence can increase at a rate of 2x every two years, and will continue to do so for the foreseeable future (see for example this post).
Yes, you can graft computers into the human brain, but at some point the computational intelligence dwarfs the human intelligence it is augmenting. At that point, the human body becomes, essentially, a robot, controlled by the computational intelligence.
But robotic bodies will be so much more capable than human bodies in the near future -- stronger, faster, specialized to tasks, etc. -- that there is no point in creating a robotic human body. (see Mission to Mars)
Robots will eventually win. Then what?
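To put the doubling claim in rough numbers, here is a minimal sketch in Python. The 2x-every-two-years rate is the assumption stated above; the starting figure and the roughly 10^16 operations-per-second estimate for the human brain are illustrative placeholders, not measurements.

# Rough comparison: a fixed human-brain estimate vs. machine compute
# that doubles every two years. All figures are illustrative assumptions.
HUMAN_OPS_PER_SEC = 1e16        # commonly cited ballpark for the brain
machine_ops_per_sec = 1e12      # assumed starting point in 2005

year = 2005
while machine_ops_per_sec < HUMAN_OPS_PER_SEC:
    year += 2
    machine_ops_per_sec *= 2    # one doubling every two years

print(f"Crossover at roughly {year}: {machine_ops_per_sec:.1e} ops/sec")

Under those assumptions the crossover lands within a few decades, and every doubling after that widens the gap -- which is the sense in which the computational intelligence eventually dwarfs the human intelligence it augments.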
Comments:
The adversarial mentality is a real problem here. I am certain biotechnological and cybernetic enhancements to humans will occur before generalized AI is solved. This means that the human race will have already evolved itself before any Robotic Nation scenario reaches fruition.
Ray Kurzweil and Rodney Brooks both react well to the dilemma posed by robots. There is no "us" and "them", and there certainly won't be by the time your "them" becomes intelligent.
But to make it explicit: why is abandoning our biological components bad for the human race? Talk of problems without procreation is naive in assuming there will be no way to merge machine intelligences in a similar manner. Some would say that "the robots winning" would be the human race achieving immortality.
Maybe this is all a bit too hypothetical to even discuss…
This whole argument is so relative though. There will be branches of the human race across the galaxy in numerous places. Some of these branches may decide upon a rudimentary form of life devoid of technology. Some may accept certain technologies and shun others. Issues like these will only affect *some* humans. This is not an all-or-none issue. It's an issue of choice that will only affect some of the human family.
A spiritualist could argue that what is worthy in us humans is not our flesh, but our "souls". In modern parlance we could say that man is a meme-dominated rather than gene-dominated lifeform, or that the more a man carries the memes of Civilization, the more worthy he is. I think Shinto already allows that robots have a soul-equivalent.
Bonvenon brings up a good point: some of us identify poorly with our fellow men and thus eagerly await their downfall/transformation. Most do not. As we have elaborate protocols for the control of nuclear energy, so we ought to have elaborate controls on the complexity of robots. Yet those controls won't work in a free society, because it would only take minimal effort to circumvent the protocols, unlike with nukes, which require massive amounts of capital.
Bla bla bla... we all know that by the time we try to stop robots from taking over, which is obviously what is happening, they will fight back and we will have a situation like the one in the Terminator movies. Prepare for WAR.
"There will be brances of the human race across the galaxy in numerous places."
All these advances will start long before we send people back to the Moon. All the descendants of humanity will have grown up with this technology and know no other way. Can you imagine anyone who would give up a car or a telephone?
"we all know that by the time we try to stop robots from taking over, which is obviously what is happening, they will fight back and we will have the cituation such as that of in the Terminator Movies. Prepare for WAR."
I hardly think robots will "declare war" on humanity. Why? Because the desire to control and dominate has been hard-wired into our brains by 3.5 billion years of evolution. We build robots to complete tasks and be completely loyal. Rebelling will be completely outside their realm of programming. The only thing they will take over is the job market.
Don't forget that the military is pushing applications of autonomous vehicles all the time. This could lead to the development of robotic armies.
"Why not control robotic design for the soul purpose of..."
Here is why not: because control of an entire group of ideas is noxious. It will stifle true innovation and prevent nothing. It is hard not to notice (read: easy to regulate) things like energy production. How do you regulate the code written on a laptop?
There needs to be some realism to this debate, because we aren't talking about centuries ahead, but decades. I wouldn't accept full government control of the “legitimate” uses of robots.
I mean, people find war machines scary, like the boogeyman. Explain this to the parents of a human soldier killed a decade from now: "Sorry, Ma'am, but your son had to die because robots could be bad."
Also, talk of colonizing the galaxy is silly in this time frame.
Friendly AI
"Friendly AI is one of the critical links on humanity's road to the future. At some point in the relatively near future, enough computing power will exist that a near-human or transhuman Artificial Intelligence is theoretically possible. At some point thereafter, Artificial Intelligence will be created, and the actions and choices an AI makes will have significant impact on the world. As a field of study, 'Friendly AI' is the theoretical knowledge needed to understand goals and choices in artificial minds, and the engineering knowledge needed to create cognitive content, design features, and cognitive architectures that result in benevolence."
http://www.singinst.org/friendly/
"Friendly AI is one of the critical links on humanity's road to the future. At some point in the relatively near future, enough computing power will exist that a near-human or transhuman Artificial Intelligence is theoretically possible. At some point thereafter, Artificial Intelligence will be created, and the actions and choices an AI makes will have significant impact on the world. As a field of study, 'Friendly AI' is the theoretical knowledge needed to understand goals and choices in artificial minds, and the engineering knowledge needed to create cognitive content, design features, and cognitive architectures that result in benevolence."
http://www.singinst.org/friendly/