[ https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12993835#comment-12993835 ]

David Smiley commented on SOLR-2155:
------------------------------------

Bill,
It would be nice if the sorting didn't require a field separate from the 
geohash field, since the geohash field already has the data required. That 
was the main point of my criticism RE using a character to separate the 
values.  I know how to modify your code accordingly, but that's not really 
the interesting part of our conversation.
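
To illustrate the point, here is a minimal sketch (invented names; the 
standard base-32 geohash alphabet is assumed) of decoding a point back out 
of the geohash field, so a distance sort could reuse the field as-is:

public final class GeohashDecode {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    /** Returns {lat, lon} at the center of the geohash's lat-lon box. */
    public static double[] decode(String geohash) {
        double latMin = -90, latMax = 90, lonMin = -180, lonMax = 180;
        boolean evenBit = true; // bits alternate: lon, lat, lon, ...
        for (int i = 0; i < geohash.length(); i++) {
            int cd = BASE32.indexOf(geohash.charAt(i)); // 5 bits per char
            for (int bit = 4; bit >= 0; bit--) {
                int b = (cd >> bit) & 1;
                if (evenBit) { // longitude bit: halve the lon interval
                    double mid = (lonMin + lonMax) / 2;
                    if (b == 1) lonMin = mid; else lonMax = mid;
                } else {       // latitude bit: halve the lat interval
                    double mid = (latMin + latMax) / 2;
                    if (b == 1) latMin = mid; else latMax = mid;
                }
                evenBit = !evenBit;
            }
        }
        return new double[] { (latMin + latMax) / 2, (lonMin + lonMax) / 2 };
    }
}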

I am aware of how geodist() works and that your algorithm is conceptually 
very similar.  But just because geodist() works this way and was written by 
Solr committers doesn't make it fast.  It loads every field value into RAM (!) 
via Lucene's field cache and then does a brute-force (!) scan across all 
values to see if each point is within the shape (a haversine-based circle).  
Then, yes, it only sorts on the remainder.  Pretty simple.  Further evidence 
that this is suboptimal is the trend toward parallelizing the brute-force 
scan across multiple threads (AFAIK JTeam does this, and I believe geodist() 
is planned to as well, though I forget where I saw that).  The brute-force 
aspect of it is what I find most uninspiring; the RAM might not be so much of 
a problem, but still.
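
To make the critique concrete, here is a rough sketch of that brute-force 
pattern (illustrative only; this is not the actual geodist() source, and all 
names here are mine):

import java.util.BitSet;

class BruteForceSketch {
    static final double EARTH_RADIUS_KM = 6371.0087714; // mean earth radius

    static double haversineKm(double lat1, double lon1,
                              double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    // The "(!)" parts: per-document lat/lon arrays held in RAM (a la the
    // field cache), and every single document pays a haversine test.
    static BitSet filter(double[] lats, double[] lons,
                         double qLat, double qLon, double radiusKm) {
        BitSet matches = new BitSet(lats.length);
        for (int doc = 0; doc < lats.length; doc++) {
            if (haversineKm(lats[doc], lons[doc], qLat, qLon) <= radiusKm) {
                matches.set(doc);
            }
        }
        return matches;
    }
}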

I know you can't use a geohash for sorting (except as an approximation), but 
it can help filter the data set so that you don't compute haversine for 
points in geohash boxes that you know aren't within the queried box.  The 
smaller the fraction of all indexed points that fall within the queried box, 
the better the performance.  That's the central idea I present. And I'm not 
talking about precision loss. I have a month of other stuff to get to, then 
I can get to this, including benchmarks.
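
In sketch form, the pruning test is just this (invented names; assume 
coverPrefixes was computed from the query shape, along the lines of the 
GeoHashUtils additions described below):

import java.util.List;

class PrefixPruneSketch {
    // A point needs a haversine test only if its geohash sits under some
    // prefix whose grid square overlaps the query shape.
    static boolean candidate(String pointGeohash, List<String> coverPrefixes) {
        for (String prefix : coverPrefixes) {
            if (pointGeohash.startsWith(prefix)) {
                return true; // in a grid square overlapping the query
            }
        }
        return false; // provably outside the queried area: skip haversine
    }
}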


> Geospatial search using geohash prefixes
> ----------------------------------------
>
>                 Key: SOLR-2155
>                 URL: https://issues.apache.org/jira/browse/SOLR-2155
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: David Smiley
>         Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
> GeoHashPrefixFilter.patch, SOLR.2155.p2.patch
>
>
> There currently isn't a solution in Solr for doing geospatial filtering on 
> documents that have a variable number of points.  This scenario occurs when 
> there is location extraction (e.g. via a "gazetteer") occurring on free 
> text.  None, one, or many geospatial locations might be extracted from any 
> given document, and users want to limit their search results to those 
> occurring in a user-specified area.
> I've implemented this by furthering the GeoHash based work in Lucene/Solr 
> with a geohash prefix based filter.  A geohash refers to a lat-lon box on the 
> earth.  Each successive character added further subdivides the box into a 4x8 
> (or 8x4 depending on the even/odd length of the geohash) grid.  The first 
> step in this scheme is figuring out which geohash grid squares cover the 
> user's search query.  I've added various extra methods to GeoHashUtils (and 
> added tests) to assist in this purpose.  The next step is an actual Lucene 
> Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
> TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
> matching geohash grid is found, the points therein are compared against the 
> user's query to see if it matches.  I created an abstraction GeoShape 
> extended by subclasses named PointDistance... and CartesianBox.... to support 
> different queried shapes so that the filter need not care about these details.
> This work was presented at LuceneRevolution in Boston on October 8th.
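
A purely illustrative sketch of the enumeration strategy described above 
(class and helper names are invented, current Lucene spells the seek call 
seekCeil(), and the decode/match/collect step is elided):

import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class PrefixSeekSketch {
    static void visitCoveringCells(Terms terms, List<String> coverPrefixes)
            throws IOException {
        TermsEnum te = terms.iterator();
        for (String prefix : coverPrefixes) {
            BytesRef prefixRef = new BytesRef(prefix);
            // Skip straight to the first indexed geohash in this grid square.
            if (te.seekCeil(prefixRef) == TermsEnum.SeekStatus.END) {
                continue; // no terms at or after this prefix
            }
            for (BytesRef term = te.term();
                 term != null && startsWith(term, prefixRef);
                 term = te.next()) {
                // ...decode the point behind this term, test it against the
                // query GeoShape, and collect the docs that match...
            }
        }
    }

    static boolean startsWith(BytesRef term, BytesRef prefix) {
        if (term.length < prefix.length) return false;
        for (int i = 0; i < prefix.length; i++) {
            if (term.bytes[term.offset + i] != prefix.bytes[prefix.offset + i])
                return false;
        }
        return true;
    }
}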

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
