On 10/24/06, Bob Mottram <[EMAIL PROTECTED]> wrote:

On 23/10/06, Neil H. <[EMAIL PROTECTED]> wrote:
> > I think their stuff was also licensed to Sony for use on their
> > AIBO, before Sony axed their robotics products.
>
> Sony licensed the tech, but I think they only used it so that AIBO
> could visually recognize pre-printed patterns on cards, which would
> signal the AIBO to dance, return to the charging station, etc. SIFT is
> IMHO overkill for that kind of thing, and it's a pity they didn't do
> anything more interesting with it.


It's a shame they ditched AIBO and their other robots in development.  AIBO
users were rather unhappy about that.  Perhaps some other company will buy
the rights.

Not just end-users: a number of research labs also used AIBOs as a
robotics platform (I was in such a lab as an undergrad). They were
pretty nice to run software on, and it was more or less impossible to
get a robot with similar capabilities in that $1000 price range.

Somewhat surprisingly, the RoboCup four-legged league is still active,
with 24 teams qualifying to compete this year:
http://www.robocup2006.org/sixcms/detail.php?id=390&lang=en

I wonder how they keep their AIBOs operational over the years...

> Perhaps. To play devil's advocate, how well do you think stereo vision
> system would actually work for creating a 3D structure of a home
> environment? It seems that distinctive features in the home tend to be
> few and far between. Of course, the regions between distinctive
> features tend to be planar surfaces, so perhaps it isn't too bad.


Well this is exactly what I'm (unofficially) working on now.  From the
results I have at the moment I can say with confidence that it will be
possible to navigate a robot around a home environment using a pair of
stereo cameras, with the robot remaining within a 7cm position
tolerance.  That 7cm is just the raw localisation figure; after Kalman
filtering and sensor fusion with odometry the accuracy should be much
better.  You might think that there are not many features on walls, but
even in environments which people consider to be "blank" there are often
small imperfections or shading gradients which stereo algorithms can
pick up.  In real life few surfaces are perfectly uniform.
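
For concreteness, here is a minimal sketch of that kind of fusion step
as a 1D Kalman filter in Python.  Everything in it is an illustrative
assumption except the 7cm raw localisation figure quoted above:

  class Kalman1D:
      def __init__(self, x0=0.0, var0=1.0):
          self.x = x0      # position estimate (metres)
          self.var = var0  # variance of that estimate

      def predict(self, odom_delta, odom_var):
          # Odometry step: shifts the estimate and accumulates drift.
          self.x += odom_delta
          self.var += odom_var

      def update(self, z, meas_var):
          # Stereo localisation fix: pulls the estimate toward z,
          # weighted by the Kalman gain.
          gain = self.var / (self.var + meas_var)
          self.x += gain * (z - self.x)
          self.var *= (1.0 - gain)

  kf = Kalman1D()
  kf.predict(odom_delta=0.10, odom_var=0.01 ** 2)  # wheels say ~10cm moved
  kf.update(z=0.12, meas_var=0.07 ** 2)            # stereo fix, 7cm raw
  print(kf.x, kf.var)  # fused estimate, tighter than either source

The fused variance comes out below both the odometry drift and the raw
stereo variance, which is why repeated fixes should beat the raw 7cm
figure.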

With good localisation performance high quality mapping becomes
possible.  I can run the stereo algorithms at various levels of detail,
and use traditional occupancy grid methods (with a few tweaks) to build
up evidence in a probabilistic fashion.  The idea at the moment is to
have the localisation algorithms running in real time on low-res grids,
and to build a separate high quality model of the environment in a high
resolution grid more gradually, as a low priority background task.  Once
you have a good quality grid model it's then quite straightforward to
detect things like walls and furniture, and to simplify the data down to
a more efficient representation, similar to what you might find in a
game or an AGI sim.  You can also use the grid model in exactly the same
way that 2D background subtraction systems work (except in 3D) in order
to detect changes within the environment.
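
To make the grid update concrete, here is a sketch of the standard
log-odds occupancy update plus the 3D background-subtraction idea.  The
grid shape, sensor-model constants and change threshold are all
illustrative assumptions, not figures from the actual system:

  import numpy as np

  L_OCC, L_FREE = 0.85, -0.4   # log-odds evidence per observation
  L_MIN, L_MAX = -5.0, 5.0     # clamp so cells stay revisable

  grid = np.zeros((64, 64, 32))  # low-res 3D grid; 0 = unknown

  def integrate(hits, misses):
      # Accumulate evidence from one stereo frame: 'hits' are (x, y, z)
      # cells where a surface was triangulated, 'misses' are cells the
      # rays passed through on the way there.
      for c in hits:
          grid[c] = min(grid[c] + L_OCC, L_MAX)
      for c in misses:
          grid[c] = max(grid[c] + L_FREE, L_MIN)

  def changed_cells(reference, threshold=2.0):
      # 3D analogue of background subtraction: compare the live grid
      # against a previously built model of the static environment.
      return np.argwhere(np.abs(grid - reference) > threshold)

The high resolution grid built in the background task would work the
same way, just with a finer cell size and more frames of evidence.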

This is a pretty interesting approach. I'd love to see more details on
this in the future.

-- Neil
