Andrea said,

> I am trying to use blobfinder for landmark navigation in the Pyrobot
> simulator. I can not find any documentation, but I've determined
> experimentally that blob[0][2] ranges from 0-39 with 20 indicating the
> blob is centered in the camera view, blob[0][0] is related to the
> distance from the blob but is not in robot units and is not linear, and
> blob[0][4] is the mass, perhaps in numbers of pixels in the camera
> view. Can you tell me more precise information about these return
> values?

I assume you mean that you are using a Blobify filter on the simulated
camera, since the Pyrobot simulator doesn't yet support a "blobfinder"
interface (like the Stage simulator does).

There is a little bit of documentation on the Blobify filter at:

http://pyrorobotics.org/?page=PyroModuleVisionSystem

and

http://pyrorobotics.org/?page=PyroVisionSystemFunctions

> I have created a simple world with different colored boxes in each
> corner, and I'm trying to set the robot's position based on the
> location of one of the landmarks. I am trying to see if I can get
> better results than by using dead reckoning. I'd like to be able to
> calculate the robot's location based on either blob[0] or by utilizing
> the known mass of the landmark (in which case I'd like to know the
> height of the boxes) and comparing that to the mass returned. If there
> is a better way to do this, I am open to suggestions. I know I can use
> triangulation if the camera can see two landmarks, but I am hoping to
> avoid that.

The height of an obstacle in the camera view is based only on its
distance in the Pyrobot simulator (currently, many objects don't even
have heights, as they are represented as 2D data; we'll correct that
when we fully represent objects so that the same descriptions can be
used in 3D worlds, too). So, yes, you could look at the image and
compute an approximate distance to the obstacle by examining the
position and extent of the blob. This would also roughly work in the
real world. In the Pyrobot simulator, objects are guaranteed to be
centered on the horizon---half above and half below, as if the camera
were in a perfect position.
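Since apparent height scales inversely with distance under a pinhole
camera model, one calibration measurement is enough for a rough distance
estimate. This is only a sketch, not Pyrobot code: the constant K is
hypothetical and must be measured yourself, and blob_height_px is
whatever pixel height you extract from the blob data (e.g. lower edge
minus upper edge of the bounding box):

```python
def estimate_distance(blob_height_px, K=200.0):
    """Rough distance to a landmark from its apparent pixel height.

    K is a hypothetical calibration constant (robot-units * pixels):
    place the robot a known distance from a box, record the blob's
    pixel height, and set K = known_distance * observed_height.
    """
    if blob_height_px <= 0:
        return float("inf")  # blob not visible / degenerate reading
    return K / blob_height_px
```

With K = 200, a blob 20 pixels tall would put the landmark at distance
10 in whatever units you calibrated in.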

> I am also wondering if there is a way to look for blobs of multiple
> colors (not a range), or if I have to blobify on each color and call
> clearFilters(). Also, is there a way to get the world coordinates of
> the robots from the simulator to test the accuracy of the navigation?

You can run at least three blobify filters at once without them
interfering with each other, and possibly more. First, you can blobify
on each of Red, Green, and Blue. If you do that, you may not want to
draw a bounding box (that might interfere):

# blobify red, match between 255 and 255, sort on mass, return 1 blob
# and don't draw the bounding box:
self.robot.camera[0].addFilter("blobify",0,255,255,0,1,0)

# blobify green, match between 255 and 255, sort on mass, return 2 blobs
# and don't draw the bounding box:
self.robot.camera[0].addFilter("blobify",1,255,255,0,2,0)

BTW, if your boxes aren't Red, Green, or Blue to begin with, you can click
on them with the left (Red), middle (Green), or right (Blue) mouse buttons
to activate the appropriate filters, then blobify.
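For what it's worth, here is a rough sketch of how you might turn a
blob's x-center (which you observed to range over 0-39, with 20 meaning
centered) into a bearing, and combine that with a distance estimate and
a landmark's known world position to estimate the robot's position. The
field of view FOV_DEG is an assumed placeholder, not a documented
Pyrobot value, and the sign conventions may need flipping to match your
world frame:

```python
import math

FOV_DEG = 60.0      # assumed horizontal field of view -- measure yours
BLOB_X_RANGE = 40   # x-center was observed to range over 0-39

def blob_bearing(x_center):
    """Bearing to the blob in radians, relative to the camera axis.

    A positive offset means the blob is right of center; flip the sign
    if your world frame treats counterclockwise as positive.
    """
    offset = (x_center - (BLOB_X_RANGE - 1) / 2.0) / BLOB_X_RANGE
    return math.radians(offset * FOV_DEG)

def locate_robot(landmark_xy, distance, bearing, robot_heading):
    """Estimate the robot's world (x, y) from a single landmark.

    landmark_xy   -- known world coordinates of the landmark
    distance      -- estimated distance to the landmark
    bearing       -- bearing to the landmark relative to the robot
    robot_heading -- robot's world heading in radians (e.g. from dead
                     reckoning; needed because one landmark alone does
                     not fix both position and orientation)
    """
    lx, ly = landmark_xy
    angle = robot_heading + bearing  # world-frame direction to landmark
    return (lx - distance * math.cos(angle),
            ly - distance * math.sin(angle))
```

This uses only one landmark, as you hoped, but note it leans on the
heading estimate; seeing a second landmark and triangulating would
remove that dependence.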

Let us know if that doesn't help,

-Doug

> Thanks,
> Andrea


_______________________________________________
Pyro-users mailing list
[email protected]
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users