Hi Andi,
I too have thought about using the human/computer
interface as sensors and actuators for an AI. This
would accomplish a few things:
1. Grounding of symbols in a real world that is mostly
symbolic and precise to begin with.
2. Behaviors and actions could be developed that are useful
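For what it's worth, here's a rough Python sketch of what such a
standard sensor/actuator API might look like. Everything here is
hypothetical, invented just to make the idea concrete: the class and
method names (`Sensor`, `Actuator`, `ScreenFrame`, `RecordingActuator`)
are not from any existing standard.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ScreenFrame:
    """One captured frame of the screen (hypothetical format)."""
    width: int
    height: int
    pixels: bytes  # raw RGB data, width * height * 3 bytes


class Sensor(ABC):
    """Sensory side of the interface: what the AI perceives."""
    @abstractmethod
    def read(self) -> ScreenFrame:
        """Return the current screen contents."""


class Actuator(ABC):
    """Motor side of the interface: what the AI can do."""
    @abstractmethod
    def key_press(self, key: str) -> None:
        """Simulate pressing a key."""

    @abstractmethod
    def mouse_move(self, x: int, y: int) -> None:
        """Simulate moving the mouse pointer."""


class RecordingActuator(Actuator):
    """Stub implementation that just logs actions, useful for testing
    an agent against the API without sending real input events."""
    def __init__(self) -> None:
        self.log: List[Tuple] = []

    def key_press(self, key: str) -> None:
        self.log.append(("key", key))

    def mouse_move(self, x: int, y: int) -> None:
        self.log.append(("move", x, y))
```

The point of splitting the abstract interface from implementations is
exactly the "freely available standard API" idea: anyone could write a
backend for their platform while agents program against the same
`Sensor`/`Actuator` contract.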
I wrote:
I just had a notion. The proper sensory input and motor output for an AI is
the computer screen (and sound input and regular keyboard and mouse input).
One thing that needs to exist is a freely available standard API for these
things, so people can work on them, plus implementations for the
Andrew Babian wrote:
I just had a notion. The proper sensory input and motor output for an AI is
the computer screen (and sound input and regular keyboard and mouse input).
One thing that needs to exist is a freely available standard API for these
things, so people can work on them, plus
I would dearly love to have some inter-compatible standards for robotics
interfaces. There have been a few attempts to define a standard in the
past, but none of them have been very successful so far. I remember that
a few years ago there was something called the Robotics Engineering Task
Force which