On 21/03/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

* use a combination of lidar and camera input
* write code that took this combined input to make a 3D contour map of the perceived surfaces in the world
* use standard math transforms to triangulate this contour map
* use some AI heuristics (with feedback from the more general AI routines) to approximate sets of these little triangles by larger polygons
* finally, feed these larger polygons into the "polygon vision" module we have designed for NM in a sim-world context
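
To make the triangulate-and-simplify step above concrete, here is a rough sketch in Python. It assumes a depth image already fused from the lidar and camera, plus known pinhole intrinsics (fx, fy, cx, cy); all names and thresholds are illustrative, not NM's actual code.

    # Sketch: lift a fused depth image to 3D, triangulate, then greedily
    # merge adjacent triangles with similar normals into larger patches.
    import numpy as np
    from scipy.spatial import Delaunay

    def backproject(depth, fx, fy, cx, cy, step=8):
        """Sample the depth image on a grid and lift pixels to 3D points."""
        h, w = depth.shape
        vs, us = np.mgrid[0:h:step, 0:w:step]
        us, vs = us.ravel(), vs.ravel()
        z = depth[vs, us]
        valid = z > 0                       # drop pixels with no range return
        us, vs, z = us[valid], vs[valid], z[valid]
        x = (us - cx) * z / fx
        y = (vs - cy) * z / fy
        pixels = np.stack([us, vs], axis=1)
        points = np.stack([x, y, z], axis=1)
        return pixels, points

    def triangulate_and_group(pixels, points, normal_tol=0.95):
        """Delaunay-triangulate in image space, then group adjacent triangles
        whose 3D normals agree into larger near-planar patches."""
        tri = Delaunay(pixels)
        faces = tri.simplices                          # (n, 3) vertex indices
        p0, p1, p2 = (points[faces[:, i]] for i in range(3))
        n = np.cross(p1 - p0, p2 - p0)                 # per-triangle normals
        n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12

        patches, patch_id = [], -np.ones(len(faces), dtype=int)
        for seed in range(len(faces)):
            if patch_id[seed] >= 0:
                continue
            stack, members = [seed], []                # grow one patch
            patch_id[seed] = len(patches)
            while stack:
                f = stack.pop()
                members.append(f)
                for nb in tri.neighbors[f]:
                    if nb >= 0 and patch_id[nb] < 0 and np.dot(n[f], n[nb]) > normal_tol:
                        patch_id[nb] = len(patches)
                        stack.append(nb)
            patches.append(members)
        return faces, patches    # each patch is a candidate "larger polygon"

Each patch is a set of near-coplanar triangles; tracing its boundary loop would give the larger polygon to hand on to the polygon-vision module.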



This is very much the traditional machine vision approach, described by
Moravec and others and used with some success recently in the DARPA Grand
Challenge.  I'm following the same approach myself; conceptually it is a
straightforward application of standard engineering techniques.  The
logistics, though, are quite complicated, involving camera calibration,
correspondence matching and probabilistic spatial modelling, and I think the
sheer complexity (and drudgery) of the programming task is the reason why
few people have attempted it so far.  Building large-scale voxel models
that can be maintained efficiently enough for real-time use also involves
some fancy algorithms.
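
For the probabilistic spatial modelling part, the standard trick is a log-odds occupancy grid. Roughly (a generic sketch of the technique, with resolution and sensor-model parameters invented for illustration):

    # Sketch: dense log-odds occupancy grid updated by ray casting.
    import numpy as np

    class OccupancyGrid:
        def __init__(self, shape=(256, 256, 64), resolution=0.05,
                     l_hit=0.85, l_miss=-0.4, l_min=-4.0, l_max=4.0):
            self.logodds = np.zeros(shape, dtype=np.float32)
            self.res = resolution            # metres per voxel
            self.l_hit, self.l_miss = l_hit, l_miss
            self.l_min, self.l_max = l_min, l_max

        def _voxel(self, p):
            """Map a metric point to a (clamped) voxel index."""
            return tuple(np.clip((p / self.res).astype(int), 0,
                                 np.array(self.logodds.shape) - 1))

        def integrate_ray(self, origin, endpoint, n_steps=100):
            """Voxels along the ray become more likely free, the endpoint
            voxel more likely occupied."""
            for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
                v = self._voxel(origin + t * (endpoint - origin))
                self.logodds[v] = np.clip(self.logodds[v] + self.l_miss,
                                          self.l_min, self.l_max)
            v_hit = self._voxel(endpoint)
            self.logodds[v_hit] = np.clip(self.logodds[v_hit] + self.l_hit,
                                          self.l_min, self.l_max)

        def occupancy(self):
            """Convert log-odds back to occupancy probabilities in [0, 1]."""
            return 1.0 / (1.0 + np.exp(-self.logodds))

A real implementation would traverse voxels exactly (e.g. a 3D DDA) instead of oversampling along the ray, and would need a sparser representation than a dense array to scale, but the log-odds bookkeeping above is the core of it.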

I would agree that things start to become interesting at the polygon level,
but you still need to maintain an underlying voxel model of space, because
you can't calculate probability distributions accurately using polygons
alone.
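
To illustrate, the polygon level can be derived from the voxel model rather than replacing it, e.g. by periodically extracting the p = 0.5 isosurface as a mesh. The sketch below assumes scikit-image's marching_cubes (skimage >= 0.17) and the OccupancyGrid sketch above; again purely illustrative:

    # Sketch: derive a triangle mesh from the probabilistic voxel grid,
    # leaving the per-voxel probabilities intact underneath.
    import numpy as np
    from skimage import measure

    def extract_surface(grid, level=0.5):
        """Return mesh vertices (in metres) and triangular faces for the
        current occupancy estimate."""
        prob = grid.occupancy()                    # per-voxel P(occupied)
        verts, faces, normals, values = measure.marching_cubes(prob, level=level)
        return verts * grid.res, faces

The probabilities stay in the grid, where uncertainty can still be reasoned about, while the mesh (and any polygons simplified from it) is just a derived view for the higher-level routines.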
