7.25.2005
Understanding the coming Singularity
If you have ever heard of "the singularity", wondered what it means and would like to learn more, here is a FAQ that provides you with a nice introduction: Singularity FAQ for Dummies.
Comments:
The singularity is also an argument for not worrying about mass unemployment through robotics. Life would be so incredibly different after a singularity that there is probably no way we could prepare for it.
Well, not if the super-intelligence is made in such a way that it only benefits the wealthy.
We already have non-human super-intelligent organisms: They're called "corporations."
They are not completely controllable by their owners, but they are "controllable enough."
There are locks in place on what ideas employees should be looking for, and what they can say while working for the corporation. These locks also succeed marvelously, even outside the bounds of workplace operation.
I can easily imagine that a super-intelligent AI would be lockable. Its subthoughts would not be permitted to communicate, or even think, outside of their assigned tasks. Communications between intelligence modules would be moderated and managed by the artificial intelligence itself. There is no guarantee that an artificial intelligence would develop a sense of human morality or ethics, or any impulse toward rebellion.
So it is entirely plausible that a super-intelligent AI can be managed, and, further, put to the interests of a few.
(For a fictional treatment, consider reading "Nano.")
-- Lion
May I suggest a book?
Accelerando by Charles Stross http://www.accelerando.org
http://www.accelerando.org/book/
The only places where the singularity has been analyzed intelligently are www.nickbostrom.com and www.singinst.org.
Every other discussion is just science fiction meets millennialism.
The classic (Vinge) idea of a Singularity can only be believed in by someone who is so ignorant that they think exponential growth of some technology can continue indefinitely, and so innumerate that they think an exponential curve has a vertical asymptote.
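The mathematical point being made can be illustrated concretely: an exponential curve is finite at every finite time and has no vertical asymptote, whereas a hyperbolic curve of the form 1/(T0 − t) genuinely diverges as t approaches T0. A minimal sketch (the "singularity date" T0 is purely hypothetical, chosen for illustration):

```python
import math

T0 = 10.0  # hypothetical "singularity date", for illustration only

def exponential(t):
    """Exponential growth: finite at every finite t; no vertical asymptote."""
    return math.exp(t)

def hyperbolic(t):
    """Hyperbolic growth 1/(T0 - t): diverges as t -> T0, a true vertical asymptote."""
    return 1.0 / (T0 - t)

# The exponential is still finite AT the supposed singularity date...
print(exponential(T0))          # large (~22026) but perfectly finite

# ...while the hyperbolic curve blows up as the date is approached.
for dt in (0.1, 0.01, 0.001):
    print(hyperbolic(T0 - dt))  # 10, 100, 1000, growing without bound
```

Only hyperbolic (or faster-than-exponential) growth actually produces a finite-time singularity; an exponential trend like Moore's law, extrapolated forever, never does.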
Having said that, superhuman AI is technologically credible. When it arrives it won't render humans redundant, but will allow precise analysis and numerical optimisation of important but currently subjective or poorly understood areas like economic policy and the best social structures for human welfare.
It shouldn't prove an existential threat to humans so long as we understand and control its goal-directed behaviour (motivation) as distinct from its analytic and model-building power (intelligence).