Kind of... this is work by Qin Lv of Colorado and colleagues on:
* matching in large collections of big, high-dimensional things, like images: http://www.cs.princeton.edu/cass/papers/mplsh_vldb07.pdf
* doing image search on a mobile device (though not in the same way as above): http://www.cs.colorado.edu/~lv/docs/mobisys09.pdf
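For readers unfamiliar with the general technique behind large-scale high-dimensional matching, here is a minimal sketch of locality-sensitive hashing with random hyperplanes. This is a generic illustration only, not the multi-probe LSH of the paper above; all names in it are mine.

```python
# Generic LSH sketch (random hyperplanes), not the paper's multi-probe variant.
import random

def make_hash(dim, n_bits, rng):
    """Generate n_bits random hyperplanes for a dim-dimensional space."""
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(vec):
        # Each bit records which side of a hyperplane the vector falls on;
        # nearby vectors tend to agree on most bits.
        return tuple(sum(p * v for p, v in zip(plane, vec)) >= 0
                     for plane in planes)
    return h

def build_index(vectors, h):
    """Bucket vector indices by their hash code."""
    index = {}
    for i, v in enumerate(vectors):
        index.setdefault(h(v), []).append(i)
    return index

def query(index, h, vec):
    """Return indices of candidates sharing the query's bucket."""
    return index.get(h(vec), [])
```

The point is that a query only inspects one bucket instead of the whole collection; multi-probe LSH extends this by also probing neighboring buckets to cut memory use.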
Qin has some students who will run an experiment using their techniques on a mobile device to determine location within their lab... I'll report results when available.
One of Qin's collaborators, Li Shang, points out that context information can be used to improve image matching: for example, a picture taken not long after an earlier picture can't be very far from it. Using this idea, one might have quite a large collection of images, giving fine-grained spatial coverage, yet on average not have to search very much of it.
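That pruning idea can be sketched as a pre-filter: bound the search radius by the elapsed time multiplied by a maximum travel speed, and only match against reference images within that radius. This is my own hypothetical illustration, not code from the papers; the field names and the walking-speed bound are assumptions.

```python
# Hypothetical context-based pruning for image matching.
# Assumes each reference image is tagged with a 2-D position in meters;
# the walking-speed bound is an illustrative assumption.
import math

MAX_SPEED_M_PER_S = 1.5  # assumed upper bound on walking speed

def prune_candidates(ref_images, last_pos, elapsed_s):
    """Keep only reference images reachable from last_pos within elapsed_s."""
    radius = MAX_SPEED_M_PER_S * elapsed_s
    return [img for img in ref_images
            if math.dist(img["pos"], last_pos) <= radius]
```

With a dense indoor image set, only the small reachable subset is then handed to the expensive matcher.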
Clayton Lewis
Professor of Computer Science
Scientist in Residence, Coleman Institute for Cognitive Disabilities
University of Colorado
http://www.cs.colorado.edu/~clayton
_______________________________________________________
fluid-work mailing list - [email protected]
To unsubscribe, change settings or access archives, see http://fluidproject.org/mailman/listinfo/fluid-work
