9.08.2005

 

Robot Programmers

Machines Better Programmers than Humans

From the article: Consider the very interesting point you reach where you can say, "Robot programmer, I would like you to write a program that is better at writing programs."

Comments:
A genetic algorithm is not the same thing as a generic automated programmer.

The whole problem in programming is that we lack a language rich and precise enough to transform program requirements directly into usable code. People have been working on this for some time, making it faster and more efficient to write relevant code.

A good example is automatic memory management in Java, which avoids the errors and wasted time that less experienced programmers run into in languages like C and C++. Does this mean Java is a hybrid system, half-human, half-robot? No. It means the humans have more powerful tools.


I cannot stress enough: genetic algorithms are not fully automated programming bots. A "coder" could become useless, but the role would be filled by a similar human who is simply responsible for ever-higher levels of design. Think of the progression from relays, to punch cards, to character-mapped displays, to assembler, to virtual machine environments like MATLAB or Java.

I would think the author of a few programming books could make this finer distinction, rather than parroting an article whose intended audience apparently might not know what an "algorithm" is.
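To make the distinction concrete, here is a minimal genetic algorithm sketch (a toy in Python; all names and sizes are illustrative, not from the article). Notice what the GA actually does: it searches for a bit string that maximizes a fixed, human-supplied fitness function. Nothing in the loop invents goals, representations, or programs — that part is still the human's job.

```python
import random

TARGET_LEN = 32

def fitness(bits):
    # "OneMax": count the 1s. A human defines what "good" means;
    # the GA only optimizes against this definition.
    return sum(bits)

def mutate(bits, rate=0.02):
    # Flip each bit with a small probability.
    return [(b ^ 1) if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover: splice two parents together.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # approaches the maximum of 32
```

Swap in a different fitness function and the same loop optimizes something else entirely; the search machinery is generic, but the goal never is.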
 
Interesting "algorithm", but I don't understand how significant it is for automating coding. Is there more to read?

Ivan is correct about computer languages lacking AI tools for coding. Our school looked at some 50 different languages for coding an AI system. We chose Visual Basic .NET and had to code our own AI tool kit! The big commercial vendors and hacker languages lack so much for coding modern AI. Even LISP and Prolog are limited AI tools!
 
I didn't say development tools to program AI are limited. I said tools to auto-develop are limited.

Any programming language worth its salt can simulate a universal Turing machine, and can thus express anything there is to program.

Part of the point, when people argue that AI takes more than increasing hardware speed, is that the algorithms themselves are unknown, not that we can't program them.
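The Turing-equivalence point above can be made concrete in a few lines of any general-purpose language. Here is a tiny Turing machine simulator (a sketch in Python; the machine and all names are illustrative): given enough tape and time, a loop like this can run any computation there is.

```python
def run(rules, tape, state="start", pos=0, max_steps=1000):
    # Sparse tape: unwritten cells read as the blank symbol "_".
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        # Each rule maps (state, symbol) -> (write, move, next state).
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: flip every bit, halting at the first blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "1011"))  # prints "0100"
```

Which is exactly why "can we express the program?" was never the hard part — knowing *which* program to express is.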
 
This isn't the headline, but...

One day there WILL be an AI that can improve itself exponentially. When that happens, you'd better have a hand on the plug!
 
Of course, bringing up Java also shows how slowly the automation of programming has developed. LISP provided automatic memory management in 1958. New languages and language features aren't readily accepted, as Java shows us: there's no feature in Java that wasn't in some language by 1974.

I wouldn't worry too much about human-level AI. We don't have a deep understanding of human intelligence yet, and thus really don't know how to build such an AI. However, we don't need that level of AI to automate most tasks, as Marshall Brain's Manna story points out.
 
I think genetic algorithms will likely play a large role in evolving general artificial intelligence. They will be used to discover and refine efficient learning algorithms for neural nets that are working on specific problems. For instance, by iterative testing among thousands of variants, a genetic algorithm could find the right ratios of excitatory and inhibitory synaptic strengths, an appropriate initial connectivity of the neurons, and effective conditions for altering the synaptic strengths at run-time. A neural net could thus evolve to, say, examine a random picture and judge the relative distances to various objects in it (by, say, finding the edges around objects and seeing how they overlap; this would probably use the results of lower-level neural nets, something like those in the V1 area of the brain).

Other genetic algorithms might find combinations of initial and dynamical conditions that allow a neural net to learn how to parse a sentence correctly, given a large sample. Yet others could then be evolved to combine the results of lower-level nets, so that, for instance, the written word "duck", the sound of someone saying "duck", and a picture of a small yellow bird with webbed feet all trigger the same "duck" data structure in the neural net.

In other words, it makes sense to evolve silicon-based brains in the same general way that organic brains evolved (only some 10 million times faster...). Of course, there may well be much faster techniques for certain problems, techniques that couldn't have been evolved with biological hardware; but clearly evolving the structure of a neural network, biasing subareas to learn various useful tasks, is sufficient to produce human-level intelligence. Theoretical and experimental neuroscience will help immensely with this: researchers are starting to develop techniques that allow simultaneous recording of signals from many neurons instead of just single neurons, and there has been a lot of progress in vision and natural language processing theory as well. These will give good starting points for the structures needed to achieve specific goals, which the GAs will then refine.
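The idea in this comment can be sketched very compactly. Below, a genetic algorithm evolves the weights of a tiny fixed-topology neural net to solve XOR, the classic non-linearly-separable toy problem; the comment's richer targets (excitatory/inhibitory ratios, connectivity, learning rules) would be evolved the same way. A minimal sketch with illustrative names and sizes, not a serious neuroevolution system:

```python
import math
import random

# The four XOR cases: inputs and target output.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
# 2 hidden units: 2 weights + 1 bias each, plus 2 output weights + 1 bias.
N_WEIGHTS = 9

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net(w, x1, x2):
    # A fixed 2-2-1 topology; only the weights w are evolved.
    h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
    h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over the XOR cases (higher is better, max 0).
    return -sum((net(w, *x) - y) ** 2 for x, y in CASES)

def evolve(pop_size=100, generations=300, sigma=0.5):
    pop = [[random.uniform(-2, 2) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]            # elitist selection
        # Children: Gaussian mutation of randomly chosen parents.
        pop = parents + [[wi + random.gauss(0, sigma)
                          for wi in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
for x, y in CASES:
    print(x, "->", round(net(best, *x), 3), "target", y)
```

The design choice worth noting is that no gradient is ever computed: selection plus mutation does all the work, which is exactly why the same loop can also tune things that gradients can't reach, like discrete connectivity or the learning rule itself.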
 
There are so many ways to go about constructing a general AI.

We could: Tag up the world, and program in (manually) all the pieces, and then network them all together, and then write negotiation procedures on top, so that services are found as needed.

We could: Reproduce human brains by manual simulation of neurons.

We could: Train behaviors genetically, until we find something that works.

We could: Actually figure out some general purpose learning algorithm. (Even though others have failed.)

We could: Train robots and AIs en masse, collaboratively. When a robot does something wrong, correct it, and distribute corrections.

We could: Do a combination of all of the above.

One day, we'll say, "Computer, make me a game." And it'll go: "Okay, how about- a little something like this?" It'll show us Tron, Light Cycles. "Nah, uh..." "Or this?" And it'll show us Snakes. "Hm, I guess I mean a Role Playing Game. Let's make a role playing game." "Okay, how about like this, or this, or this?"

And, we'll manipulate the game like playdough, using a variety of user interfaces to manipulate the game environment and describe the logic of the game.

And, I think it's coming sooner than people think: hundreds of years, they imagine, but I think it will take 20-40 years. This is because there are millions and millions of people working on these capabilities, and we are soon to be in the hundreds of millions, if not billions, as the rest of the world comes online, and as knowledge work becomes the only work.
 
© Copyright 2005 by Marshall Brain