12.21.2005

 

The End of Moore's Law?

The End of Moore's Law - Microchips are getting smaller - and that's the problem.

"Until recently, Moore's Law, the observation that the number of transistors on a microchip doubles every 18 months to two years, seemed a self-fulfilling prophecy. When Intel co-founder Gordon Moore issued his famous prediction 40 years ago, a chip could hold a few dozen transistors. Today, Intel can cram almost 1 billion transistors, each of which is less than 100 nanometers in size, on a single microchip. (One nanometer is 1 millionth of a millimeter?the equivalent of about 100 atoms.)"

It will be interesting to watch and see what the next breakthrough is.

Comments:
All estimates for working quantum computers that I've heard of place them FAR off in the future. Also, it may be that such a computer would end up being some kind of hybrid... in other words, a "traditional" chip CPU and for really heavy lifting, the "Q" chip (or whatever). I understand that quantum computing is actually not very efficient for simple tasks. There was an in-depth thread about this on Slashdot not too long ago. I understood about 3 sentences worth. ;)
 
"All estimates for working quantum computers that I've heard of place them FAR off in the future"

****

That might be true at today's rate of progress but the rate of progress is accelerating. For example, it was initially estimated that it would take over 100 years to sequence DNA.

People who predict the future usually assume the current rate of progress will stay the same. This results in overestimating the short term while underestimating the longer term.
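To make the DNA example concrete, here's a toy illustration (made-up numbers, not real sequencing figures) comparing a constant-rate forecast with a rate that doubles every two years:

# Toy illustration with made-up numbers (not real sequencing data):
# compare a constant-rate forecast with a rate that doubles every 2 years.

total_work = 3_000_000_000        # say, base pairs left to sequence
rate = 1_000_000                  # units of work finished in year one

# Constant-rate forecast: just divide.
print(f"At a constant rate: ~{total_work / rate:,.0f} years")   # ~3,000 years

# Accelerating forecast: the yearly rate doubles every 2 years.
done, years = 0, 0
while done < total_work:
    done += rate
    years += 1
    rate *= 2 ** 0.5              # doubling every 2 years = x1.41 per year
print(f"With the rate doubling every 2 years: ~{years} years")  # ~21 years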
 
How about some news?

http://dsc.discovery.com/news/briefs/20051219/awarerobot_tec.html?source=rss
 
Darn links... anyway---


Robot Demonstrates Self Awareness
By Tracy Staedter, Discovery News
Dec. 21, 2005— A new robot can recognize the difference between a mirror image of itself and another robot that looks just like it.

This so-called mirror image cognition is based on artificial nerve cell groups built into the robot's computer brain that give it the ability to recognize itself and acknowledge others.

The ground-breaking technology could eventually lead to robots able to express emotions.

Under development by Junichi Takeno and a team of researchers at Meiji University in Japan, the robot represents a big step toward developing self-aware robots and in understanding and modeling human self-consciousness.

"In humans, consciousness is basically a state in which the behavior of the self and another is understood," said Takeno.

Humans learn behavior during cognition and conversely learn to think while behaving, said Takeno.

To mimic this dynamic, a robot needs a common area in its neural network that is able to process information on both cognition and behavior.

Takeno and his colleagues built the robot with blue, red or green LEDs connected to artificial neurons in the region that light up when different information is being processed, based on the robot's behavior.

"The innovative part is the independent nodes in the hierarchical levels that can be linked and activated," said Thomas Bock of the Technical University of Munich in Germany.

For example, two red diodes illuminate when the robot is performing behavior it considers its own, while two green bulbs light up when the robot acknowledges behavior being performed by the other.

One blue LED flashes when the robot is both recognizing behavior in another robot and imitating it.

Imitation, said Takeno, requires both seeing a behavior in another and instantly transferring it to oneself, and it is the best evidence of consciousness.
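The article doesn't give implementation details, but the LED signaling it describes can be sketched roughly like this. This is a hypothetical illustration, not Takeno's actual neural architecture; the function and event names are invented for the example.

# Hypothetical sketch of the LED signalling described in the article --
# not Takeno's actual neural architecture. "own_command" is what the robot
# just decided to do; "observed_motion" is what it sees the robot in front
# of it (another robot, or its own mirror image) doing.

def leds_for_step(own_command, observed_motion):
    """Return which LEDs light up for one time step."""
    leds = []
    if own_command is not None:
        leds.append("red")     # behavior the robot considers its own
    if observed_motion is not None:
        leds.append("green")   # behavior it attributes to the other
    if own_command is not None and observed_motion == own_command:
        leds.append("blue")    # recognizing the other's behavior and matching it
    return leds

# The "self" robot moves forward and sees the other robot (or its own
# reflection) also moving forward, so the blue LED fires:
print(leds_for_step("forward", "forward"))   # ['red', 'green', 'blue']
print(leds_for_step("forward", "stop"))      # ['red', 'green']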

In one experiment, a robot representing the "self" was paired with an identical robot representing the "other."

When the self robot moved forward, stopped or backed up, the other robot did the same. The pattern of neurons firing and the subsequent flashes of blue light indicated that the self robot understood that the other robot was imitating its behavior.

In another experiment, the researchers placed the self robot in front of a mirror.

In this case, the self robot and the reflection (something it could interpret as another robot) moved forward and back at the same time. Although the blue lights fired, they did so less frequently than in other experiments.

In fact, 70 percent of the time, the robot understood that the mirror image was itself. Takeno's goal is to reach 100 percent in the coming year.
 