[
https://issues.apache.org/jira/browse/SOLR-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457311#comment-13457311
]
Michael Garski commented on SOLR-2592:
--------------------------------------
The latest patch I submitted (SOLR-2592_r1384367.patch) still requires encoding
the value to be hashed within the unique id, such as with a composite id. I
lean toward keeping it simple by encoding the hashed value into the document's
unique id, and will defer to the committers to provide guidance on the approach
taken. At the end of the day, either solution addresses my need: ensuring that
related documents all land on the same shard, so that I can query a given
subset of shards rather than the entire collection.
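To make the idea concrete, here is a minimal sketch (not Solr's actual routing
code) of hashing a routing key embedded in a composite unique id such as
"user42!review1". The shard count, the "!" separator, and the MD5 choice are
all assumptions for illustration; the point is that documents sharing a prefix
hash to the same shard, so a query can target just that shard.

```python
# Hedged sketch of composite-id shard routing; not Solr's implementation.
import hashlib

NUM_SHARDS = 4  # hypothetical cluster size


def shard_for(doc_id: str) -> int:
    """Route on the prefix before '!' if present, else on the whole id."""
    route_key = doc_id.split("!", 1)[0]
    digest = hashlib.md5(route_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS


# Related documents (same prefix) land on the same shard:
assert shard_for("user42!review1") == shard_for("user42!review2")
```

With this scheme a search scoped to one user only needs to hit the single
shard that `shard_for("user42!...")` selects, rather than the entire
collection.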
I was not aware of SOLR-3133; however, it looks to be a duplicate of
SOLR-2656, which was committed on March 27, 2012, giving the realtime get
handler distributed support.
> Pluggable shard lookup mechanism for SolrCloud
> ----------------------------------------------
>
> Key: SOLR-2592
> URL: https://issues.apache.org/jira/browse/SOLR-2592
> Project: Solr
> Issue Type: New Feature
> Components: SolrCloud
> Affects Versions: 4.0-ALPHA
> Reporter: Noble Paul
> Assignee: Mark Miller
> Attachments: dbq_fix.patch, pluggable_sharding.patch,
> pluggable_sharding_V2.patch, SOLR-2592.patch, SOLR-2592_r1373086.patch,
> SOLR-2592_r1384367.patch, SOLR-2592_rev_2.patch,
> SOLR_2592_solr_4_0_0_BETA_ShardPartitioner.patch
>
>
> If the data in a cloud can be partitioned on some criteria (say range, hash,
> attribute value, etc.), it will be easy to narrow the search to a smaller
> subset of shards and, in effect, achieve more efficient search.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]