Robert:

It's much better to ask usage questions like this on the user's list
rather than in a comment on a JIRA; the user's list is seen by a much
wider audience. See the solr-user section here:
http://lucene.apache.org/solr/discussion.html

Quick answers:
1> You're better off chunking things up. But beware the "deep paging"
   problem: as you go farther and farther into the result list,
   responses slow down.
2> Top N is indeed based on score; see the scoring algorithm in the
   (Lucene) Similarity class. And it's just the top &rows documents.
   Their scores could be > 0.99999 or < 0.0000001, it's simply the top
   N. If two docs have exactly the same score, the tie is broken by
   the internal Lucene doc ID, so the results are consistent.
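
The chunking in 1> looks roughly like this. fetchPage below is a
hypothetical stand-in for a SolrJ call with start/rows set (I've
simulated it locally so the pattern runs on its own; real code would
read numFound from the first QueryResponse and page until it's covered):

```java
// Sketch of paging through all hits with start/rows. fetchPage is an
// illustrative placeholder, NOT the SolrJ API; it fakes a fixed result set.
import java.util.ArrayList;
import java.util.List;

public class ChunkedFetch {
    static final int NUM_FOUND = 10; // pretend this came back as numFound

    // Hypothetical stand-in for server.query(q) returning one page of doc ids
    static List<Integer> fetchPage(int start, int rows) {
        List<Integer> page = new ArrayList<>();
        for (int id = start; id < Math.min(start + rows, NUM_FOUND); id++) {
            page.add(id);
        }
        return page;
    }

    public static void main(String[] args) {
        int rows = 3; // &rows: page size
        List<Integer> all = new ArrayList<>();
        // Advance &start by &rows until every hit has been collected.
        for (int start = 0; start < NUM_FOUND; start += rows) {
            all.addAll(fetchPage(start, rows));
        }
        System.out.println(all.size()); // prints 10, i.e. every hit
    }
}
```

Note the deep-paging caveat from above still applies: each page is no
harder to *return*, but pages far into the list get slower to *find*.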

Why do you want to return all the docs anyway? Some kind of dump?
Perhaps there's a better way to do what you want.

But again, please move the discussion over to the user's list.

Best
Erick

On Wed, Oct 17, 2012 at 2:20 AM, Robert Tseng (JIRA) <j...@apache.org> wrote:
>
>     [ 
> https://issues.apache.org/jira/browse/SOLR-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477653#comment-13477653
>  ]
>
> Robert Tseng commented on SOLR-2155:
> ------------------------------------
>
> Hi All,
>
> New to Solr here!  I have a question for you all on gh_geofilt.  My documents 
> have rows of paths (think of a KML LineString), and I want to do a bounding 
> box check to see which of them fall within a box.  Each row basically has an 
> id field and a multivalued field describing the line with multiple points.
>
> What I want returned is all the lines that fall within the box, but I read 
> that Solr is not yet very good at returning a large number of hits.  Hence 
> the rows param to limit results to the top N rows.  My two questions are:
>
> 1. If I want to retrieve all rows, do I query twice from SolrJ?  Once to get 
> the number of hits so I can set the number of rows that grabs all rows in a 
> second call?  Or should I chunk up the query into multiple calls using the 
> start param as an offset?
>
> 2. If it's only returning the top N, is it based on score?  What is 
> considered a high score?  A row with the most hits in the box?  Closest to 
> the center?
>
>> Geospatial search using geohash prefixes
>> ----------------------------------------
>>
>>                 Key: SOLR-2155
>>                 URL: https://issues.apache.org/jira/browse/SOLR-2155
>>             Project: Solr
>>          Issue Type: Improvement
>>            Reporter: David Smiley
>>            Assignee: David Smiley
>>         Attachments: GeoHashPrefixFilter.patch, GeoHashPrefixFilter.patch, 
>> GeoHashPrefixFilter.patch, Solr2155-1.0.2-project.zip, 
>> Solr2155-1.0.3-project.zip, Solr2155-1.0.4-project.zip, 
>> Solr2155-for-1.0.2-3.x-port.patch, 
>> SOLR-2155_GeoHashPrefixFilter_with_sorting_no_poly.patch, 
>> SOLR.2155.p3.patch, SOLR.2155.p3tests.patch
>>
>>
>> {panel:title=NOTICE} The outcome of this issue is a plugin for Solr 3.x 
>> located here: https://github.com/dsmiley/SOLR-2155.  Look at the 
>> introductory readme and download the plugin .jar file.  Lucene 4's new 
>> spatial module is largely based on this code.  The Solr 4 glue for it should 
>> come very soon but as of this writing it's hosted temporarily at 
>> https://github.com/spatial4j.  For more information on using SOLR-2155 with 
>> Solr 3, see http://wiki.apache.org/solr/SpatialSearch#SOLR-2155  This JIRA 
>> issue is closed because it won't be committed in its current form.
>> {panel}
>> There currently isn't a solution in Solr for doing geospatial filtering on 
>> documents that have a variable number of points.  This scenario occurs when 
>> there is location extraction (i.e. via a "gazetteer") occurring on free text. 
>>  None, one, or many geospatial locations might be extracted from any given 
>> document and users want to limit their search results to those occurring in 
>> a user-specified area.
>> I've implemented this by furthering the GeoHash based work in Lucene/Solr 
>> with a geohash prefix based filter.  A geohash refers to a lat-lon box on 
>> the earth.  Each successive character added further subdivides the box into 
>> a 4x8 (or 8x4 depending on the even/odd length of the geohash) grid.  The 
>> first step in this scheme is figuring out which geohash grid squares cover 
>> the user's search query.  I've added various extra methods to GeoHashUtils 
>> (and added tests) to assist in this purpose.  The next step is an actual 
>> Lucene Filter, GeoHashPrefixFilter, that uses these geohash prefixes in 
>> TermsEnum.seek() to skip to relevant grid squares in the index.  Once a 
>> matching geohash grid is found, the points therein are compared against the 
>> user's query to see if it matches.  I created an abstraction GeoShape 
>> extended by subclasses named PointDistance... and CartesianBox.... to 
>> support different queried shapes so that the filter need not care about 
>> these details.
>> This work was presented at LuceneRevolution in Boston on October 8th.
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA administrators
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
