The Open Android Project will bring robot programmers together to cooperatively build the mind of the first entertainment android. NOTE: a side effect of this project is that it can easily be adapted to create a virtual android that uses digital resources, such as those in a computer game, for its sensor and behavior tasks instead of real ones.
We believe that there may not be a grand unification theory of artificial intelligence, one that allows us to build a true android with thoughts and emotions. If we are wrong and there is such a theory, we don’t believe that it will be discovered within the next few decades.
Of course there will be breathtaking discoveries in neuroscience and cognitive research, but not a “holy grail” of A.I. that allows us to understand human thought well enough to build it piecemeal like we can currently build a robot.
In any circumstance, we don’t want to wait!
The intent of The Open Android Project is to create a logical framework of software and APIs that allows programmers to create plug-in behaviors, so that anyone can add a behavior and augment the intelligence of the android / robot. Each time a single plug-in is added, every member and their users immediately reap the fun and the benefit of the new plug-in’s functionality. With such a framework in place, it is easy to see how quickly an android can be created with thousands of fun behaviors that are shared by all.
The general framework will consist of these modules (subject to change):
- The stimulus module
- The behavior module
- The decision module
Each module will contain plug-in modules that perform tasks specific to that module.
The stimulus module is a collection of cooperating plug-in modules that analyze data from the android’s available sensors. Each plug-in can service any sensor it chooses, as long as it knows how to interface with it. The plug-in’s job, when called, is to process the incoming data and to produce one or more sensor tags as a response.
For example, suppose the android has vision sensors. Here are two possible plug-ins:
- John in the U.S.A. writes a sensor plug-in for the vision sensors that is responsible for identifying colored boxes. He defines his interface output as “recognize_box(color, size)”. Any other plug-in in the stimulus module or the other modules knows it can expect to get “recognize_box(color, size)” sensor tags from John’s plug-in.
- Rahul in India writes a sensor plug-in for the vision sensors that is responsible for identifying objects that move horizontally. He defines his interface output as “horz_movement(speed, direction)”. Any other plug-in in the stimulus module or the other modules knows it can expect to get “horz_movement(speed, direction)” sensor tags from Rahul’s plug-in.
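As a concrete illustration, a stimulus plug-in like John’s might look roughly like this. This is a minimal Python sketch; the class name, the `SensorTag` type, and the shape of the incoming frame are all hypothetical, since the project’s actual plug-in API is not yet defined:

```python
from dataclasses import dataclass


@dataclass
class SensorTag:
    """A named sensor tag with parameters, e.g. recognize_box(color, size)."""
    name: str
    params: dict


class BoxRecognizerPlugin:
    """Sketch of John's plug-in: turns vision-sensor data into
    recognize_box(color, size) sensor tags."""
    output_tag = "recognize_box"

    def process(self, frame):
        """Called by the stimulus module with incoming sensor data;
        returns zero or more sensor tags."""
        tags = []
        for box in self._find_boxes(frame):
            tags.append(SensorTag("recognize_box",
                                  {"color": box["color"], "size": box["size"]}))
        return tags

    def _find_boxes(self, frame):
        # Placeholder: a real plug-in would run image analysis on the
        # raw vision data here.
        return frame.get("boxes", [])
```

Other plug-ins never need to know how the boxes were found; they only depend on the “recognize_box(color, size)” tags that come out.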
The behavior module contains plug-ins that execute tasks using the android’s current hardware. The plug-ins in the behavior module are identified by their behavior tags. Here are some example plug-ins:
- Rahul in India writes a plug-in to move the android towards a desired object. He declares his plug-in with the “move(desired_object)” behavior tag. Any plug-in in the decision module knows that it can get the android to move towards a desired object with Rahul’s plug-in.
- John in the U.S.A. writes a plug-in to make the android speak. He declares his plug-in with the “speak(text, pitch, speed)” behavior tag.
- Amir in India writes a plug-in that lets the android pick up an object and throw it. He declares his plug-in with the “pick_up_and_throw(desired_object, direction)” behavior tag.
- Jukka in Finland writes a funny plug-in that makes the android shiver and cower like he is frightened. He declares his plug-in with the “scared(intensity)” behavior tag.
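In code, a behavior plug-in could be as simple as a class that declares its behavior tag and exposes an execute method. Again, this is a hypothetical Python sketch; the class, registry, and method names are illustrative, not a defined API:

```python
class ScaredBehavior:
    """Sketch of Jukka's plug-in, declared with the scared(intensity)
    behavior tag."""
    behavior_tag = "scared"

    def execute(self, intensity):
        # A real plug-in would drive the android's actuators; here we
        # just return a description of the action.
        return f"shiver and cower with intensity {intensity}"


# A simple registry mapping behavior tags to plug-ins, so that decision
# module plug-ins can look behaviors up by tag.
behavior_registry = {}


def register_behavior(plugin):
    behavior_registry[plugin.behavior_tag] = plugin


register_behavior(ScaredBehavior())
```

A decision plug-in would then only need the “scared” tag to trigger the behavior, without knowing anything about how the shivering is implemented.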
The decision module is responsible for analyzing the available (and desired) data from the sensor module and then executing one or more behaviors that are available in the behavior module. Here is an example of a decision module plug-in:
- Akahito in Japan writes a plug-in that looks for red boxes that are moving fast in front of the android, and then has the android act scared if it sees one. For example, his plug-in looks for John’s “recognize_box(color, size)” sensor tag and Rahul’s “horz_movement(speed, direction)” sensor tag. If the box’s color is red and the speed of the box is greater than 1 foot per second, he executes Jukka’s “scared(intensity)” behavior tag. In fact, a possible improvement would be to increase the intensity parameter in proportion to the speed of the moving box.
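Akahito’s plug-in could be sketched like this. This hypothetical Python sketch represents sensor tags as plain dictionaries and behaviors as a tag-to-plugin lookup; all names are illustrative, and it includes the suggested improvement of scaling intensity with speed:

```python
class RedBoxPanicDecision:
    """Sketch of Akahito's decision plug-in: acts scared when a red box
    moves faster than the threshold, with intensity scaled to speed."""
    SPEED_THRESHOLD = 1.0  # feet per second, as in the example above

    def decide(self, sensor_tags, behaviors):
        """Scan the sensor tags from the stimulus module and execute
        behaviors from the behavior module as needed."""
        boxes = [t for t in sensor_tags if t["name"] == "recognize_box"]
        moves = [t for t in sensor_tags if t["name"] == "horz_movement"]
        for box in boxes:
            for move in moves:
                if box["color"] == "red" and move["speed"] > self.SPEED_THRESHOLD:
                    # The suggested improvement: intensity grows in
                    # proportion to the speed of the moving box.
                    intensity = move["speed"] / self.SPEED_THRESHOLD
                    behaviors["scared"].execute(intensity)
```

Note that Akahito never touches the vision sensors or the actuators directly; his plug-in only combines tags that other people’s plug-ins already provide.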
Obviously there is no limit to the number of plug-ins that can be created and shared. As each plug-in is added, the number of behaviors that can be created and expressed increases exponentially, due to the larger number of possible combinations that can be synthesized from the plug-ins.
If you have any questions or want to talk about this project before it is ready for release, come to The Open Android Project forum.