This issue started on java-user, but I am moving it to solr-dev:
http://www.lucidimagination.com/search/document/46481456bc214ccb/bitset_filter_arrayindexoutofboundsexception

I am using Solr trunk and building an RTree from stored document fields. This process worked fine until a recent change in 2.9 that uses a different document id strategy than the one I was used to.

In that thread, Yonik suggested:
- pop back to the top level from the sub-reader, if you really need a single set
- if a set-per-reader will work, then cache per segment (better for
incremental updates anyway)

I'm not quite sure what you mean by a "set-per-reader". Previously I was building a single RTree and reusing it until the last-modified time changed. That avoided rebuilding the structure every time a new reader was opened while the index itself was unchanged. I'm fine building a new RTree for each reader if that is required.

Is there any existing code that deals with this situation?
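For what it's worth, a "set-per-reader" cache can be as simple as a weak map keyed on the reader instance, so an entry is built lazily per segment and dropped when its reader is garbage collected. A minimal self-contained sketch (plain Java; `Object` stands in for `org.apache.lucene.index.IndexReader`, and `PerReaderCache`/`Builder` are hypothetical names, not Lucene API):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// One cached value per reader instance, built lazily on first request.
// WeakHashMap keys do not prevent readers from being garbage collected,
// so closed/discarded readers fall out of the cache automatically.
public class PerReaderCache<V> {
    public interface Builder<V> { V build(Object reader); }

    private final Map<Object, V> cache =
        Collections.synchronizedMap(new WeakHashMap<Object, V>());

    public V get(Object reader, Builder<V> builder) {
        V v = cache.get(reader);
        if (v == null) {
            v = builder.build(reader);  // e.g. build the RTree for this segment
            cache.put(reader, v);
        }
        return v;
    }
}
```

With incremental updates, only newly opened segment readers miss the cache, so unchanged segments keep their existing structures.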

- - - -

Yonik also suggested:

Relatively new in 2.9, you can pass null to enumerate over all non-deleted docs:
  TermDocs td = reader.termDocs(null);

It would probably be a lot faster to iterate over indexed values though.
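Spelled out, the full enumeration loop might look like this (untested sketch against the Lucene 2.9 API):

```java
// Passing null enumerates every non-deleted document in the reader.
TermDocs td = reader.termDocs(null);
try {
    while (td.next()) {
        int docId = td.doc();
        Document doc = reader.document(docId);  // stored fields for this doc
        // ... read the stored fields and feed them into the RTree ...
    }
} finally {
    td.close();
}
```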

If I iterate over the indexed values (from the FieldCache, I presume), then how do I get access to the document id?
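One pattern for this in the 2.9 API is to walk the field's terms with TermEnum and seek a TermDocs on each term to recover the matching doc ids (untested sketch; "location" is a hypothetical field name):

```java
// Walk the indexed terms of one field; for each term, enumerate the
// ids of the non-deleted documents containing it.
String field = "location";
TermEnum te = reader.terms(new Term(field, ""));
TermDocs td = reader.termDocs();
try {
    do {
        Term t = te.term();
        if (t == null || !t.field().equals(field)) break;  // past the field
        td.seek(te);
        while (td.next()) {
            int docId = td.doc();
            // ... add (t.text(), docId) to the RTree ...
        }
    } while (te.next());
} finally {
    te.close();
    td.close();
}
```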

- - - -

thanks for any pointers.

ryan
