[jira] [Comment Edited] (SOLR-5473) Split clusterstate.json per collection and watch states selectively
[ https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094318#comment-14094318 ]

Noble Paul edited comment on SOLR-5473 at 8/25/14 9:49 PM:
---
bq. I don't mind that as an expert, unsupported override or something, but by and large I think this should be a system wide config, similar to legacyMode

+1

> Split clusterstate.json per collection and watch states selectively
>
>                 Key: SOLR-5473
>                 URL: https://issues.apache.org/jira/browse/SOLR-5473
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>              Labels: SolrCloud
>             Fix For: 5.0, 4.10
>
>         Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch,
> SOLR-5473-74_POC.patch, SOLR-5473-configname-fix.patch,
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch,
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch,
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch,
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch,
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_no_ui.patch,
> SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
> As defined in the parent issue, store the state of each collection under the
> /collections/collectionname/state.json node and watch state changes selectively.
> https://reviews.apache.org/r/24220/

--
This message was sent by Atlassian JIRA
(v6.2#6252)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
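The layout described in the issue summary (one /collections/collectionname/state.json node per collection, with each node watching only the collections it hosts) can be sketched as below. This is a minimal in-memory stand-in, not Solr's actual code: the names SelectiveWatcher, statePath, and registerWatches are illustrative, and a real node would register ZooKeeper watches rather than track paths in a set.

```java
import java.util.*;

// Hypothetical sketch of selective state watching as proposed in this issue.
// NOT Solr's implementation: class and method names are made up, and the Set
// below stands in for real ZooKeeper watch registrations.
class SelectiveWatcher {
    private final Set<String> hostedCollections;
    private final Set<String> watchedPaths = new HashSet<>();

    SelectiveWatcher(Collection<String> hosted) {
        this.hostedCollections = new HashSet<>(hosted);
    }

    // Per-collection state node, replacing the single shared /clusterstate.json.
    static String statePath(String collection) {
        return "/collections/" + collection + "/state.json";
    }

    // Watch only the collections this node is a member of; every other
    // collection's state would be fetched just in time from ZK when needed.
    void registerWatches(Collection<String> allCollections) {
        for (String c : allCollections) {
            if (hostedCollections.contains(c)) {
                watchedPaths.add(statePath(c));
            }
        }
    }

    boolean isWatching(String collection) {
        return watchedPaths.contains(statePath(collection));
    }
}
```

Under this split, a node hosting one collection out of hundreds keeps a single watch instead of being notified whenever any collection in the cluster changes.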
[jira] [Comment Edited] (SOLR-5473) Split clusterstate.json per collection and watch states selectively
[ https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062112#comment-14062112 ]

Noble Paul edited comment on SOLR-5473 at 7/15/14 2:27 PM:
---
bq. wouldn't say I am still convinced that caching till you fail is the same as watching

You are right, caching till you fail is just an optimization in CloudSolrServer. In my opinion the client has no business watching the state at all; the cost of an extra request per stale state is negligible IMHO.

bq. That's why I am saying that at least in the simplistic case this should be left to configuration – watch none, all, or selected.

Yes, I'm inclined to add this (selective watch) as an option which kicks in only if the number of collections is greater than a certain threshold (say 10). Below that threshold, all Solr nodes will watch all states.

To sum it up, my preference is:
# Have SolrJ do caching till it fails or till it times out (no watching whatsoever). Please enlighten me with a case where it is risky.
# Solr nodes should choose to watch all states or only selected ones based on the number of collections or a configurable cluster-wide property.
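The two preferences above, cache-till-you-fail in SolrJ and a collection-count threshold for watching everything, can be sketched roughly as follows. This is an illustrative sketch, not CloudSolrServer's real code: StatePolicy, watchAll, and request are assumed names, and the ZK fetch and the request attempt are stubbed out as functional parameters.

```java
import java.util.function.*;

// Rough sketch of the policy described in the comment above; NOT Solr's or
// CloudSolrServer's actual code. All names here are hypothetical.
class StatePolicy {
    // "say 10" from the comment: below this, watching everything is cheap.
    static final int WATCH_ALL_THRESHOLD = 10;

    // Nodes watch all states only while the collection count is small;
    // beyond the threshold, selective watching kicks in.
    static boolean watchAll(int collectionCount) {
        return collectionCount <= WATCH_ALL_THRESHOLD;
    }

    private String cachedState;

    // Cache till you fail: try the cached state first; if the request fails,
    // refetch once from ZK and retry. A stale state costs one extra request.
    String request(Supplier<String> fetchFromZk, Predicate<String> trySend) {
        if (cachedState == null) {
            cachedState = fetchFromZk.get();
        }
        if (trySend.test(cachedState)) {
            return cachedState;
        }
        cachedState = fetchFromZk.get(); // state was stale: refresh just in time
        if (trySend.test(cachedState)) {
            return cachedState;
        }
        throw new IllegalStateException("request failed even with fresh state");
    }
}
```

The design trade-off is that the client never holds a ZK watch: it pays at most one retry per stale state instead of keeping a long-lived notification channel open.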
[jira] [Comment Edited] (SOLR-5473) Split clusterstate.json per collection and watch states selectively
[ https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057659#comment-14057659 ]

Mark Miller edited comment on SOLR-5473 at 7/10/14 4:50 PM:
---
bq. ClusterState has no reference to ZkStateReader.

+1 on that part, but it doesn't seem to address much else, so I don't have too much to say.

{quote}
All changes will be visible in realtime. The point is nodes NEVER cache any states (only SolrJ does, SOLR-5474). Nodes watch the collections they are a member of. Other states are always fetched just in time from ZK.
{quote}

It sounds like what I said is an issue? You can easily be on a node in your cluster that doesn't host part of a collection. If you are using the admin UI to view your cluster and a node from another collection goes down, will that be reflected on the Solr admin UI you are using from a node that doesn't host part of that collection? I think this is a big deal if not, and nothing in the patch addresses these kinds of issues for users or developers. You are telling me all the behavior is the same? I don't believe that yet.