[jira] [Updated] (SOLR-3376) SolrCloud: Specifying shardId not working correctly, although the failures are inconsistent.
[ https://issues.apache.org/jira/browse/SOLR-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3376: -- Affects Version/s: (was: 4.0) Fix Version/s: 4.0 SolrCloud: Specifying shardId not working correctly, although the failures are inconsistent. Key: SOLR-3376 URL: https://issues.apache.org/jira/browse/SOLR-3376 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Erick Erickson Fix For: 4.0 I'm seeing some odd results when specifying the shardId parameter. I'm trying the 4-node, 2-shard example from the Wiki and specifying shardIds like this:

dir       shardId  start order  running ZK  port
example   1        1            y           8983
example2  2        2            y           7574
example3  1        3            y           8900
example4  2        4            y           7500

And I'm waiting a bit between starting the various examples to let ZK settle down. Once all of them were started, I looked at http://localhost:8983/solr/#/~cloud?view=graph to check out what that looks like (pretty cool IMO, especially since I didn't have to do it). The problem was that shard 2 only reported a single instance, while shard 1 showed the two instances I was expecting. I'm running with 3 embedded ZK instances, just for yucks. Interestingly, the node that didn't show up was the only node that was NOT running ZK. When I removed all the shardId parameters, nuked zoo_data from all directories and just started them up (with numShards=2 on the bootstrap ZK node), all 4 nodes showed up just fine. When starting with shardId specified and trying to go straight to the admin interface on the node that wasn't showing up, I'd get odd errors like "This interface requires that you activate the admin request handlers, add the following configuration to your solrconfig.xml". I also couldn't search directly on that machine; http://localhost:7574/solr/select?q=*:* returns a 404 error.
Command starting the server that's giving me trouble: java -Xmx1G -Djetty.port=7500 -DzkHost=localhost:9983,localhost:8574,localhost:9900 -DshardId=2 -jar start.jar Command for one that works fine: java -Xmx1G -Djetty.port=8900 -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -DshardId=1 -jar start.jar Sami Siren reports similar issues via e-mail conversation. Sami says that ZK 3.3.5 apparently (without exhaustive tests) fixed the problem for him, but when I tried ZK 3.3.5 I saw the same issue. Of course, with all the recent stuff with Ivy, I may have screwed up when/where the JARs were. So then I went back to ZK 3.3.4 and couldn't reproduce the problem. Which seems highly suspicious to me. It was failing every time before with 3.3.4, so it sounds like gremlins. And then I tried ZK 3.3.5 again (changed the ivy.xml in solrj, blew away ZK 3.3.4, rebuilt, removed zoo_data, recopied example to three other directories) and it works fine there too now. Sh. Mostly this is a placeholder to ensure we try this; I guarantee that sys admins will want to assign specific machines to specific shards, so this'll get used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-3355) Add shard name to SolrCore statistics
[ https://issues.apache.org/jira/browse/SOLR-3355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3355: -- Affects Version/s: (was: 4.0) Fix Version/s: 4.0 Add shard name to SolrCore statistics - Key: SOLR-3355 URL: https://issues.apache.org/jira/browse/SOLR-3355 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Michael Garski Assignee: Mark Miller Priority: Trivial Fix For: 4.0 Attachments: SOLR-3355.patch The JMX stats of the core do not expose the name of the shard it is hosting, which could be of use.
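Since the request is just to surface the shard name through JMX, here is a minimal plain-JMX sketch of the idea (the class, attribute, and ObjectName below are hypothetical, not Solr's actual MBean layout):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface; JMX derives the attribute "ShardName" from getShardName().
interface CoreStatsMBean {
    String getShardName();
}

// Stand-in for a per-core statistics bean; the shard name is supplied by the caller.
class CoreStats implements CoreStatsMBean {
    private final String shardName;
    CoreStats(String shardName) { this.shardName = shardName; }
    public String getShardName() { return shardName; }
}

public class ShardStatsDemo {
    // Register the bean and read the attribute back, as a JMX client would.
    public static String registerAndRead(String shardName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("solr:type=core,id=collA_slice1_shard1");
        server.registerMBean(new CoreStats(shardName), name);
        try {
            return (String) server.getAttribute(name, "ShardName");
        } finally {
            server.unregisterMBean(name);
        }
    }
}
```

A monitoring tool pointed at the platform MBean server would then see the shard name next to the core's other stats.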
[jira] [Updated] (SOLR-3347) deleteByQuery failing with SolrCloud without _version_ in schema.xml
[ https://issues.apache.org/jira/browse/SOLR-3347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3347: -- Fix Version/s: 4.0 deleteByQuery failing with SolrCloud without _version_ in schema.xml Key: SOLR-3347 URL: https://issues.apache.org/jira/browse/SOLR-3347 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Benson Margulies Fix For: 4.0 Attachments: 0001-Attempt-to-repro-problem-with-del-and-SolrCloud.patch, provision-and-start.sh, schema.xml, solrconfig.xml Distributed execution of deleteByQuery(\*:\*) depends on the existence of a field \_version\_ in the schema. The default schema has no comment on this field to indicate its importance or relevance to SolrCloud, and no message is logged nor error status returned when there is no such field. The code in DistributedUpdateProcessor just has an if statement that never does any local deleting without it. I don't know whether the intention was that this should work or not. If someone would clue me in, I'd make a patch for schema.xml to add comments, or a patch to D-U-P to add logging. If it was supposed to work, I'm probably not qualified to make the fix.
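For readers hitting this: the field in question is declared in the 4.x example schema roughly as follows (this assumes the stock long field type; check the type names in your own schema):

{code:xml}
<!-- Needed by SolrCloud's distributed update handling; removing it can
     silently break distributed deleteByQuery, as described above. -->
<field name="_version_" type="long" indexed="true" stored="true"/>
{code}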
[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting
[ https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2894: -- Affects Version/s: (was: 4.0) Fix Version/s: 4.0 Implement distributed pivot faceting Key: SOLR-2894 URL: https://issues.apache.org/jira/browse/SOLR-2894 Project: Solr Issue Type: Improvement Reporter: Erik Hatcher Fix For: 4.0 Attachments: SOLR-2894.patch Following up on SOLR-792, pivot faceting currently only supports undistributed mode. Distributed pivot faceting needs to be implemented.
[jira] [Updated] (SOLR-3139) StreamingUpdateSolrServer doesn't send UpdateRequest.getParams()
[ https://issues.apache.org/jira/browse/SOLR-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3139: -- Affects Version/s: (was: 4.0) Fix Version/s: 4.0 StreamingUpdateSolrServer doesn't send UpdateRequest.getParams() Key: SOLR-3139 URL: https://issues.apache.org/jira/browse/SOLR-3139 Project: Solr Issue Type: Bug Components: clients - java Reporter: Andrzej Bialecki Fix For: 4.0 CommonsHttpSolrServer properly encodes the request's SolrParams depending on GET or POST. However, StreamingUpdateSolrServer only looks at the params to determine whether they contain optimize/commit ops, and otherwise discards them. This is unexpected - it should properly encode and send SolrParams per request, in a similar way as CommonsHttpSolrServer does. Currently this bug prevents one from e.g. selecting a different update chain per request when using the streaming server.
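The encode-and-send behavior the report asks for can be sketched without any SolrJ classes; this hypothetical helper shows per-request URL encoding of a params map (a stand-in for SolrParams, not Solr's actual code path):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;

public class ParamEncodeDemo {
    // Build a URL-encoded query string from request params, the kind of
    // per-request encoding CommonsHttpSolrServer performs and
    // StreamingUpdateSolrServer currently skips.
    public static String toQueryString(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sb.length() == 0 ? "?" : "&");
            try {
                sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new AssertionError("UTF-8 is always supported", ex);
            }
        }
        return sb.toString();
    }
}
```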
[jira] [Updated] (SOLR-3273) 404 Not Found on action=PREPRECOVERY
[ https://issues.apache.org/jira/browse/SOLR-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3273: -- Priority: Minor (was: Major) 404 Not Found on action=PREPRECOVERY Key: SOLR-3273 URL: https://issues.apache.org/jira/browse/SOLR-3273 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Environment: Any Reporter: Per Steffensen Assignee: Mark Miller Priority: Minor We have an application based on a recent copy of 4.0-SNAPSHOT. We have a performance test setup where we performance test our application (and therefore indirectly Solr(Cloud)). When we run the performance test against a setup using SolrCloud without replication, everything seems to run very nicely for days. When we add replication to the setup, the same performance test shows some problems - which we will report (and maybe help fix) in distinct issues here in jira. About the setup - the setup is a little more complex than described below, but I believe the description will tell enough: We have two solr servers which we start from solr-install/example using this command (ZooKeepers have been started before) - we first start solr on server1, and then start solr on server2 after solr on server1 has finished starting up:
{code}
nohup java -Xmx4096m -Dcom.sun.management.jmxremote -DzkHost=server1:2181,server2:2181,server3:2181 -Dbootstrap_confdir=./myapp/conf -Dcollection.configName=myapp_conf -Dsolr.solr.home=./myapp -Djava.util.logging.config.file=logging.properties -jar start.jar > ./myapp/logs/stdout.log 2> ./myapp/logs/stderr.log
{code}
The ./myapp/solr.xml looks like this on server1:
{code:xml}
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/myapp" host="server1" hostPort="8983" hostContext="solr">
    <core name="collA_slice1_shard1" instanceDir="." dataDir="collA_slice1_data" collection="collA" shard="slice1" />
  </cores>
</solr>
{code}
The ./myapp/solr.xml looks like this on server2:
{code:xml}
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/myapp" host="server2" hostPort="8983" hostContext="solr">
    <core name="collA_slice1_shard2" instanceDir="." dataDir="collA_slice1_data" collection="collA" shard="slice1" />
  </cores>
</solr>
{code}
The first thing we observe is that Solr server1 (running collA_slice1_shard1) seems to start up nicely, but when Solr server2 (running collA_slice1_shard2) is started up later it quickly reports the following in its solr.log and keeps doing that for a long time:
{code}
SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: Not Found
request: http://server1:8983/solr/admin/cores?action=PREPRECOVERY&core=collA_slice1_shard1&nodeName=server2%3A8983_solr&coreNodeName=server2%3A8983_solr_collA_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
	at org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:40)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
	at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
	at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
	at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
{code}
Please note that we have changed a little bit the way errors are logged, but basically this means that Solr server2 gets a 404 Not Found on its request http://server1:8983/solr/admin/cores?action=PREPRECOVERY&core=collA_slice1_shard1&nodeName=server2%3A8983_solr&coreNodeName=server2%3A8983_solr_collA_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2 to Solr server1. It seems like there is no common agreement among the Solr servers on how/where to send those requests and how/where to listen for them. Regards, Per Steffensen
[jira] [Updated] (SOLR-3231) Add the ability to KStemmer to preserve the original token when stemming
[ https://issues.apache.org/jira/browse/SOLR-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3231: -- Affects Version/s: (was: 4.0) Fix Version/s: 4.0 Add the ability to KStemmer to preserve the original token when stemming Key: SOLR-3231 URL: https://issues.apache.org/jira/browse/SOLR-3231 Project: Solr Issue Type: Improvement Components: Schema and Analysis Reporter: Jamie Johnson Fix For: 4.0 Attachments: KStemFilter.patch While using the PorterStemmer, I found that there were often times that it was far too aggressive in its stemming. In my particular case it is unrealistic to provide a protected word list which captures all possible words which should not be stemmed. To avoid this I proposed a solution whereby we store the original token as well as the stemmed token so exact searches would always work. Based on discussions on the mailing list with Ahmet Arslan, I believe the attached patch to KStemmer provides the desired capabilities through a configuration parameter. This largely is a copy of the org.apache.lucene.wordnet.SynonymTokenFilter.
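The keep-the-original-token idea is independent of KStemmer itself. Here is a plain-Java sketch (no Lucene classes; the stemmer is a stand-in function) where the original token is kept alongside the stem whenever the stemmer changed it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class PreserveOriginalDemo {
    // Apply a stemmer to each token; when the stem differs from the input,
    // also emit the original token. In a real Lucene TokenFilter the original
    // would be injected with positionIncrement=0, i.e. at the same position.
    public static List<String> stemPreservingOriginal(List<String> tokens,
                                                      UnaryOperator<String> stemmer) {
        List<String> out = new ArrayList<>();
        for (String tok : tokens) {
            String stem = stemmer.apply(tok);
            out.add(stem);
            if (!stem.equals(tok)) {
                out.add(tok); // preserve the exact surface form for exact matches
            }
        }
        return out;
    }
}
```

With both forms in the index, an exact query for the unstemmed word still matches even when the stemmer would have collapsed it.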
[jira] [Updated] (LUCENE-3034) If you vary a setting per round and that setting is a long string, the report padding/columns break down.
[ https://issues.apache.org/jira/browse/LUCENE-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated LUCENE-3034: Fix Version/s: (was: 3.6) If you vary a setting per round and that setting is a long string, the report padding/columns break down. - Key: LUCENE-3034 URL: https://issues.apache.org/jira/browse/LUCENE-3034 Project: Lucene - Java Issue Type: Improvement Components: modules/benchmark Reporter: Mark Miller Assignee: Mark Miller Priority: Trivial Fix For: 4.0 This is especially noticeable if you vary a setting where the value is a fully specified class name - in this case, it would be nice if columns in each row still lined up.
[jira] [Updated] (SOLR-2949) QueryElevationComponent does not fully support distributed search
[ https://issues.apache.org/jira/browse/SOLR-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2949: -- Attachment: SOLR-2949.patch A short term fix + a fix for SOLR-3252 + a test. QueryElevationComponent does not fully support distributed search - Key: SOLR-2949 URL: https://issues.apache.org/jira/browse/SOLR-2949 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Mark Miller Priority: Minor Fix For: 3.6, 4.0 Attachments: SOLR-2949.patch The QueryElevationComponent does not fully support distributed search. Add tests and make a fix for it.
[jira] [Updated] (SOLR-2949) QueryElevationComponent does not fully support distributed search
[ https://issues.apache.org/jira/browse/SOLR-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2949: -- Attachment: SOLR-2949.patch A better, more working patch. QueryElevationComponent does not fully support distributed search - Key: SOLR-2949 URL: https://issues.apache.org/jira/browse/SOLR-2949 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Mark Miller Priority: Minor Fix For: 3.6, 4.0 Attachments: SOLR-2949.patch, SOLR-2949.patch The QueryElevationComponent does not fully support distributed search. Add tests and make a fix for it.
[jira] [Updated] (SOLR-3215) We should clone the SolrInputDocument before adding locally and then send that clone to replicas.
[ https://issues.apache.org/jira/browse/SOLR-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3215: -- Attachment: SOLR-3215.patch I'd like to commit this soon. We should clone the SolrInputDocument before adding locally and then send that clone to replicas. - Key: SOLR-3215 URL: https://issues.apache.org/jira/browse/SOLR-3215 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3215.patch If we don't do this, the behavior is a little unexpected. You cannot avoid having other processors always hit documents twice unless we support using multiple update chains. We have another issue open that should make this better, but I'd like to do this sooner than that. We are going to have to end up cloning anyway when we want to offer the ability to not wait for the local add before sending to replicas. Cloning with the current SolrInputDocument, SolrInputField apis is a little scary - there is an Object to contend with - but it seems we can pretty much count on that being a primitive that we don't have to clone?
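Assuming, as the comment above does, that field values are primitives/Strings needing no deep clone, the clone step can be sketched with a map standing in for SolrInputDocument (this is an illustration of the idea, not the attached patch):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocCloneDemo {
    // Copy the document structure (field map and each value list) so the copy
    // sent to replicas is isolated from processors mutating the local copy.
    // Values themselves are assumed immutable (primitives/Strings), so they
    // are shared rather than cloned.
    public static Map<String, List<Object>> cloneDoc(Map<String, List<Object>> doc) {
        Map<String, List<Object>> copy = new LinkedHashMap<>();
        for (Map.Entry<String, List<Object>> e : doc.entrySet()) {
            copy.put(e.getKey(), new ArrayList<>(e.getValue()));
        }
        return copy;
    }
}
```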
[jira] [Updated] (SOLR-3227) Solr Cloud should continue working when a logical shard goes down
[ https://issues.apache.org/jira/browse/SOLR-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3227: -- Summary: Solr Cloud should continue working when a logical shard goes down (was: Solr Cloud should continue working when a shard goes down) Solr Cloud should continue working when a logical shard goes down - Key: SOLR-3227 URL: https://issues.apache.org/jira/browse/SOLR-3227 Project: Solr Issue Type: New Feature Components: SolrCloud Affects Versions: 4.0 Reporter: Ranjan Bagchi I can start up a SolrCloud instance with one instance w/ zookeeper, then start a second instance defining a shard name in solr.xml; the second shard shows up in zookeeper and both indexes are searchable. However, if I bring the second server down, the first one stops working until I restart server #2. The desired behavior is that SolrCloud deregisters server #2 and the cloud remains searchable with only server #1's index.
[jira] [Updated] (SOLR-3213) Upgrade to commons-csv once it is released
[ https://issues.apache.org/jira/browse/SOLR-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3213: -- Fix Version/s: 4.0 Adding a fix version so that this has to ride the release 'push' train. Upgrade to commons-csv once it is released -- Key: SOLR-3213 URL: https://issues.apache.org/jira/browse/SOLR-3213 Project: Solr Issue Type: Task Components: Build Reporter: Uwe Schindler Fix For: 4.0 Since SOLR-3204 we have a jarjar'ed apache-solr-commons-csv-SNAPSHOT.jar file in the lib folder. Once version 1.0 of commons-csv is officially released, we should upgrade to that version, remove maven publishing and change the import statements to the official package name in the java files.
[jira] [Updated] (SOLR-3194) Attaching a commit to an update request results in too many commits on each node.
[ https://issues.apache.org/jira/browse/SOLR-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3194: -- Attachment: SOLR-3194.patch Attaching a commit to an update request results in too many commits on each node. - Key: SOLR-3194 URL: https://issues.apache.org/jira/browse/SOLR-3194 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3194.patch As reported to me by Alexey Serba, if you choose to pass a commit=true param with an update request, too many commits are asked for on each node. The problem is that it causes a local commit which triggers commits on the other nodes, and it also forwards on the commit=true param causing further commits.
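One shape a fix could take (hypothetical sketch, not the attached patch): the node that receives commit=true performs and distributes the commit itself, and strips the flag before forwarding the update so downstream nodes don't commit again. A map stands in for the request params:

```java
import java.util.HashMap;
import java.util.Map;

public class CommitForwardDemo {
    // Remove the commit flag from the params before forwarding an update to
    // other nodes; the receiving node distributes the commit exactly once.
    public static Map<String, String> paramsToForward(Map<String, String> received) {
        Map<String, String> forwarded = new HashMap<>(received);
        forwarded.remove("commit");
        return forwarded;
    }
}
```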
[jira] [Updated] (SOLR-3166) Allow bootstrapping multiple config sets from multi-core setups.
[ https://issues.apache.org/jira/browse/SOLR-3166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3166: -- Attachment: SOLR-3166.patch This patch allows you to pass the sys prop bootstrap_conf=true to upload the configs found for each core in the local solr.xml. Each conf set will be uploaded and named after the collection for that core. The core will also be set to use that config set. Essentially, this lets you easily bootstrap a multicore setup. Allow bootstrapping multiple config sets from multi-core setups. Key: SOLR-3166 URL: https://issues.apache.org/jira/browse/SOLR-3166 Project: Solr Issue Type: New Feature Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3166.patch
[jira] [Updated] (SOLR-3153) When a leader goes down he should ask replicas to sync in parallel rather than serially.
[ https://issues.apache.org/jira/browse/SOLR-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3153: -- Attachment: SOLR-3153.patch When a leader goes down he should ask replicas to sync in parallel rather than serially. Key: SOLR-3153 URL: https://issues.apache.org/jira/browse/SOLR-3153 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-3153.patch Need to finish this todo.
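The serial-to-parallel change can be sketched with plain java.util.concurrent primitives; syncWithReplica below is a stand-in for the real per-replica sync request, not Solr code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSyncDemo {
    // Stand-in for asking one replica to sync; always succeeds here.
    static boolean syncWithReplica(String replica) {
        return true;
    }

    // Submit all sync requests at once, then collect the results, so the
    // overall wait is bounded by the slowest replica rather than the sum.
    public static List<Boolean> syncAll(List<String> replicas) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, replicas.size()));
        try {
            List<Future<Boolean>> futures = new ArrayList<>();
            for (String r : replicas) {
                futures.add(pool.submit(() -> syncWithReplica(r)));
            }
            List<Boolean> results = new ArrayList<>();
            for (Future<Boolean> f : futures) {
                try {
                    results.add(f.get());
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```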
[jira] [Updated] (LUCENE-3806) Add a Download button to the Download webpage.
[ https://issues.apache.org/jira/browse/LUCENE-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated LUCENE-3806: Attachment: downloadbutton.png Add a Download button to the Download webpage. -- Key: LUCENE-3806 URL: https://issues.apache.org/jira/browse/LUCENE-3806 Project: Lucene - Java Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Attachments: downloadbutton.png
[jira] [Updated] (LUCENE-3806) Add a Download button to the Download webpage.
[ https://issues.apache.org/jira/browse/LUCENE-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated LUCENE-3806: Attachment: (was: downloadbutton.png) Add a Download button to the Download webpage. -- Key: LUCENE-3806 URL: https://issues.apache.org/jira/browse/LUCENE-3806 Project: Lucene - Java Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Attachments: downloadbutton.png
[jira] [Updated] (SOLR-3137) When solr.xml is persisted, you lose all system property substitution that was used.
[ https://issues.apache.org/jira/browse/SOLR-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3137: -- Attachment: SOLR-3137.patch Updated patch - close to done, I think. I don't handle properties because of some oddity I have not figured out - they appear to be stored un-sys-subbed, but then when written out they are subbed? I'm not sure they are that important to handle anyway. When solr.xml is persisted, you lose all system property substitution that was used. - Key: SOLR-3137 URL: https://issues.apache.org/jira/browse/SOLR-3137 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3137.patch, SOLR-3137.patch A lesser issue is that we also write out properties that were not originally in the file with the defaults they picked up.
[jira] [Updated] (SOLR-3138) Add node roles to core admin handler 'create core' and solrj.
[ https://issues.apache.org/jira/browse/SOLR-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3138: -- Attachment: SOLR-3138.patch Simple patch - I'll commit it shortly. Add node roles to core admin handler 'create core' and solrj. - Key: SOLR-3138 URL: https://issues.apache.org/jira/browse/SOLR-3138 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-3138.patch
[jira] [Updated] (SOLR-3131) details command fails when a replication is forced with a fetchIndex command on a non-slave server
[ https://issues.apache.org/jira/browse/SOLR-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3131: -- Affects Version/s: (was: 4.0) 3.5 Fix Version/s: 4.0 3.6 details command fails when a replication is forced with a fetchIndex command on a non-slave server -- Key: SOLR-3131 URL: https://issues.apache.org/jira/browse/SOLR-3131 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 3.5 Reporter: Tomás Fernández Löbbe Assignee: Mark Miller Priority: Minor Fix For: 3.6, 4.0 Attachments: SOLR-3131.patch Steps to reproduce the problem: 1) Start a master Solr instance (called A) 2) Start a Solr instance with the replication handler configured, but with no slave configuration (called B) 3) Issue the request http://B:port/solr/replication?command=fetchindex&masterUrl=http://A:port/solr/replication 4) While B is fetching the index, issue the request: http://B:port/solr/replication?command=details Expected behavior: See the replication details as usual.
Getting an exception instead:
java.lang.NullPointerException
	at org.apache.solr.handler.ReplicationHandler.isPollingDisabled(ReplicationHandler.java:447)
	at org.apache.solr.handler.ReplicationHandler.getReplicationDetails(ReplicationHandler.java:611)
	at org.apache.solr.handler.ReplicationHandler.handleRequestBody(ReplicationHandler.java:211)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1523)
	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:339)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:234)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
	at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.Server.handle(Server.java:326)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
	at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[jira] [Updated] (SOLR-3137) When solr.xml is persisted, you lose all system property substitution that was used.
[ https://issues.apache.org/jira/browse/SOLR-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3137: -- Attachment: SOLR-3137.patch Patch with the general idea of what I am thinking - we store the original solr.xml DOM structure as a field. We also store a mapping from SolrCore to original core name. We have to keep that up to date on core reload. Then when writing out the solr.xml file we can use both those data structures to see if we should use the original raw value, a new updated value, etc. When solr.xml is persisted, you lose all system property substitution that was used. - Key: SOLR-3137 URL: https://issues.apache.org/jira/browse/SOLR-3137 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3137.patch A lesser issue is that we also write out properties that were not originally in the file with the defaults they picked up.
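The idea in the comment above can be sketched as a small decision function. This is illustrative only - the class and method names are invented, not Solr's actual code: given the original raw attribute value, its resolved form, and the current runtime value, pick what to persist so that substitutions like ${jetty.port:8983} survive a round trip.

```java
// Hypothetical sketch of the persistence rule described in SOLR-3137.
// All names here are invented for illustration.
public class PersistValueChooser {
    /** Returns the value to write back into solr.xml for one attribute. */
    public static String choose(String originalRaw, String originalResolved, String currentValue) {
        // If the runtime value is unchanged, keep the raw form - it may
        // contain a ${property} reference that resolved to this value.
        if (currentValue == null || currentValue.equals(originalResolved)) {
            return originalRaw;
        }
        // The value changed at runtime (e.g. a renamed core): persist the
        // new literal value instead of the now-stale substitution.
        return currentValue;
    }

    public static void main(String[] args) {
        // Unchanged value: the substitution string is preserved.
        System.out.println(choose("${jetty.port:8983}", "8983", "8983"));
        // Changed value: the literal replacement wins.
        System.out.println(choose("core1", "core1", "renamedCore"));
    }
}
```

The same rule extends naturally to the SolrCore-to-original-name mapping the comment mentions: the map supplies `originalRaw`/`originalResolved` for the core name attribute.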
[jira] [Updated] (SOLR-3126) We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
[ https://issues.apache.org/jira/browse/SOLR-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3126: -- Attachment: SOLR-3126.patch Patch for this - I stop committing in the prep recovery cmd so that it can be used also in the sync case - in the replicate case, we do a prep recovery cmd then an explicit commit We should try to do a quick sync on std start up recovery before trying to do a full blown replication. --- Key: SOLR-3126 URL: https://issues.apache.org/jira/browse/SOLR-3126 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3126.patch, SOLR-3126.patch Just more efficient - especially on cluster shutdown/start where the replicas may all be up to date and match anyway.
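The recovery strategy described above - try a cheap peer sync first, fall back to full replication - can be sketched as a tiny decision routine. This is not Solr's actual recovery code; `attemptPeerSync` and `replicate` stand in for the real PeerSync and replication handler calls.

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch of "quick sync before full replication" (SOLR-3126).
public class StartupRecovery {
    public static String recover(BooleanSupplier attemptPeerSync, Runnable replicate) {
        // Cheap path first: if the replica's recent updates already match
        // the leader's, no index copy is needed. This is the common case on
        // a cluster shutdown/start where replicas are already up to date.
        if (attemptPeerSync.getAsBoolean()) {
            return "synced";
        }
        // Fall back to the expensive full-index replication.
        replicate.run();
        return "replicated";
    }

    public static void main(String[] args) {
        System.out.println(recover(() -> true, () -> {}));   // peer sync succeeded
        System.out.println(recover(() -> false, () -> {}));  // had to replicate
    }
}
```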
[jira] [Updated] (SOLR-3126) We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
[ https://issues.apache.org/jira/browse/SOLR-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3126: -- Attachment: SOLR-3126.patch Current WIP. Still trying to track down an issue around FullSolrCloudTest#brindDownShardIndexSomeDocsAndRecover We should try to do a quick sync on std start up recovery before trying to do a full blown replication. --- Key: SOLR-3126 URL: https://issues.apache.org/jira/browse/SOLR-3126 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3126.patch Just more efficient - especially on cluster shutdown/start where the replicas may all be up to date and match anyway.
[jira] [Updated] (SOLR-2993) Integrate WordBreakSpellChecker with Solr
[ https://issues.apache.org/jira/browse/SOLR-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2993: -- Component/s: (was: SolrCloud) Integrate WordBreakSpellChecker with Solr - Key: SOLR-2993 URL: https://issues.apache.org/jira/browse/SOLR-2993 Project: Solr Issue Type: Improvement Components: spellchecker Affects Versions: 4.0 Reporter: James Dyer Priority: Minor Fix For: 4.0 Attachments: SOLR-2993.patch A SpellCheckComponent enhancement, leveraging the WordBreakSpellChecker from LUCENE-3523: - Detect spelling errors resulting from misplaced whitespace without the use of shingle-based dictionaries. - Seamlessly integrate word-break suggestions with single-word spelling corrections from the existing FileBased-, IndexBased- or Direct- spell checkers. - Provide collation support for word-break errors including cases where the user has a mix of single-word spelling errors and word-break errors in the same query. - Provide shard support.
[jira] [Updated] (SOLR-2646) Integrate Solr benchmarking support into the Benchmark module
[ https://issues.apache.org/jira/browse/SOLR-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2646: -- Attachment: SOLR-2646.patch to trunk Integrate Solr benchmarking support into the Benchmark module - Key: SOLR-2646 URL: https://issues.apache.org/jira/browse/SOLR-2646 Project: Solr Issue Type: New Feature Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: Dev-SolrBenchmarkModule.pdf, SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch As part of my Buzzwords Solr perf talk, I did some work to allow some Solr benchmarking with the benchmark module. I'll attach a patch with the current work I've done soon - there is still a fair amount to clean up and fix - a couple hacks or three - but it's already fairly useful.
[jira] [Updated] (SOLR-3125) When a SolrCore registers in zk there may be stale leader state in the clusterstate.json.
[ https://issues.apache.org/jira/browse/SOLR-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3125: -- Attachment: SOLR-3125.patch a workaround - on registering, instead of getting the leader from the cluster state, we go right to the ephemeral zk node When a SolrCore registers in zk there may be stale leader state in the clusterstate.json. - Key: SOLR-3125 URL: https://issues.apache.org/jira/browse/SOLR-3125 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3125.patch This can be a problem when stopping the whole cluster and then starting it - a leader that has not yet shown up in the cluster state may see a stale leader and try to recover from it.
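The workaround above can be illustrated with a toy lookup. The names below are invented, and plain maps stand in for ZooKeeper: after a full cluster restart, the cached clusterstate.json may still name a leader that no longer exists, whereas an ephemeral znode vanishes with its session and so cannot be stale.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of SOLR-3125's workaround: consult the live ephemeral
// leader registration, not the cached cluster state. Invented names.
public class LeaderLookup {
    public static String currentLeader(String shard,
                                       Map<String, String> ephemeralLeaderNodes,
                                       Map<String, String> cachedClusterState) {
        // Ephemeral nodes exist only while the leader's session is alive,
        // so a lookup here cannot return a stale leader. A null result
        // means "no leader yet" and the caller should wait/retry instead
        // of trusting cachedClusterState (kept as a parameter only to show
        // what is deliberately NOT consulted).
        return ephemeralLeaderNodes.get(shard);
    }

    public static void main(String[] args) {
        Map<String, String> cached = new HashMap<>();
        cached.put("shard1", "http://oldhost:8983/solr"); // stale entry
        Map<String, String> live = new HashMap<>();       // nothing registered yet
        System.out.println(currentLeader("shard1", live, cached)); // null: wait, don't recover
    }
}
```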
[jira] [Updated] (SOLR-3117) CoreDescriptor attempts to use the name before checking if it is null
[ https://issues.apache.org/jira/browse/SOLR-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3117: -- Fix Version/s: 4.0 Assignee: Mark Miller CoreDescriptor attempts to use the name before checking if it is null - Key: SOLR-3117 URL: https://issues.apache.org/jira/browse/SOLR-3117 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 4.0 Reporter: Jamie Johnson Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: CoreDescriptor.patch In CoreDescriptor, when creating the cloudDesc, the name is accessed before checking if it is null. I believe it should be the following instead:
{code}
public CoreDescriptor(CoreContainer coreContainer, String name, String instanceDir) {
  this.coreContainer = coreContainer;
  this.name = name;
  if (name == null) {
    throw new RuntimeException("Core needs a name");
  }
  if (coreContainer != null && coreContainer.getZkController() != null) {
    this.cloudDesc = new CloudDescriptor();
    // cloud collection defaults to core name
    cloudDesc.setCollectionName(name.isEmpty() ? coreContainer.getDefaultCoreName() : name);
  }
  if (instanceDir == null) {
    throw new NullPointerException("Missing required 'instanceDir'");
  }
  instanceDir = SolrResourceLoader.normalizeDir(instanceDir);
  this.instanceDir = instanceDir;
  this.configName = getDefaultConfigName();
  this.schemaName = getDefaultSchemaName();
}
{code}
[jira] [Updated] (SOLR-2957) collection URLs in a cluster
[ https://issues.apache.org/jira/browse/SOLR-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2957: -- Attachment: SOLR-2957.patch I've implemented the first part - if you pass a core name in the URL that is not found, when in zk mode, we try using the core name as the collection name and we search the local instance for any cores that are in that collection, first looking for a leader. collection URLs in a cluster Key: SOLR-2957 URL: https://issues.apache.org/jira/browse/SOLR-2957 Project: Solr Issue Type: Sub-task Components: SolrCloud, update Reporter: Yonik Seeley Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-2957.patch In solrcloud, one can hit a URL of /collection1/select and get a distributed search over collection1. If we wish to maintain this, we'll need some more flexible URL to core mapping since there may be more than one core for the collection on a node, and the collection1 core on that node could go away altogether.
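The lookup described in the comment above can be modeled in a few lines. This is a hypothetical sketch, not Solr's actual dispatch code: if no core matches the path name directly, the name is treated as a collection name and a local core in that collection is chosen, preferring the leader.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Invented model of SOLR-2957's URL-to-core fallback.
public class CoreResolver {
    /**
     * @param coreToCollection local cores mapped to the collection they belong to
     * @param leaders          names of local cores that are shard leaders
     * @return the core to route the request to, or null if none matches
     */
    public static String resolve(String name,
                                 Map<String, String> coreToCollection,
                                 Set<String> leaders) {
        if (coreToCollection.containsKey(name)) {
            return name; // an exact core-name match always wins
        }
        String fallback = null;
        for (Map.Entry<String, String> e : coreToCollection.entrySet()) {
            if (e.getValue().equals(name)) {     // name matched as a collection
                if (leaders.contains(e.getKey())) {
                    return e.getKey();           // prefer a leader in the collection
                }
                fallback = e.getKey();           // remember any member as a fallback
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        Map<String, String> cores = new LinkedHashMap<>();
        cores.put("collection1_replica2", "collection1");
        cores.put("collection1_replica1", "collection1");
        Set<String> leaders = Collections.singleton("collection1_replica1");
        System.out.println(resolve("collection1", cores, leaders)); // the leader replica
    }
}
```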
[jira] [Updated] (SOLR-3104) Bad performance with distributed search when sort contains relevancy queries
[ https://issues.apache.org/jira/browse/SOLR-3104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3104: -- Attachment: SOLR-3104-3x.patch here is a 3x patch from my back merge Bad performance with distributed search when sort contains relevancy queries Key: SOLR-3104 URL: https://issues.apache.org/jira/browse/SOLR-3104 Project: Solr Issue Type: Improvement Components: search Affects Versions: 3.6 Reporter: XJ Wang Priority: Critical Fix For: 4.0 Attachments: SOLR-3104-3x.patch, SOLR-3104.patch So I found this issue when trying out distributed search with Solr 3.5 and noticed a big performance degradation for some queries compared to single-box search. After some query analysis and comparison, it turns out that shard queries with fsv=true are much slower than the same queries without fsv=true. Some examples are like 1200ms vs 200ms (start=0, rows=30, hits100). From the discussions with Yonik Seeley on the solr mailing list, it may be due to the fact that I'm using a lot of relevancy queries in sorting, but Solr is not retrieving those sort values efficiently. This is critical for us and prevents us from moving to distributed search. I believe users with scenarios like ours will also suffer from this issue. Any patch/idea is welcomed. Quote from Yonik Seeley on the solr-user mailing list: OK, so basically it's slow because functions with embedded relevancy queries are forward only - if you request the value for a docid previous to the last, we need to reboot the query (re-weight, ask for the scorer, etc). This means that for your 30 documents, that will require rebooting the query about 15 times (assuming that roughly half of the time the next docid will be less than the previous one). Unfortunately there's not much you can do externally... we need to implement optimizations at the Solr level for this.
[jira] [Updated] (SOLR-3108) Error in SolrCloud's replica lookup code when replicas are hosted in same Solr instance
[ https://issues.apache.org/jira/browse/SOLR-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3108: -- Attachment: SOLR-3108.patch Bruno's fix + a test. To make the test I also did a little work around the numShards handling so that you can pass it on core creation with the CoreAdminHandler. Sami, it would prob be good for you to review that a bit - I'm not sure if we can do that in a cleaner way or not? Error in SolrCloud's replica lookup code when replicas are hosted in same Solr instance Key: SOLR-3108 URL: https://issues.apache.org/jira/browse/SOLR-3108 Project: Solr Issue Type: Bug Components: SolrCloud Environment: Solr trunk as of today (r1241575) Reporter: Bruno Dumon Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3108.patch, SOLR-3108.patch There's a small bug in ZkStateReader.getReplicaProps() when you have multiple replicas of the same shard/slice hosted in one CoreContainer. Not that you would often do this, but I was playing around with shard replicas using just one Solr instance and noticed it. The attached patch should make it clear; the check on !coreNodeName.equals(filterNodeName) will always be false in such a case.
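The bug class described above is easy to show in isolation. The sketch below is not Solr's actual getReplicaProps() code - names are simplified - but it demonstrates the fix's principle: when excluding "this" replica from the result, the comparison must be against the full per-core node name, not a value that collapses to the same string for every core on the node.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Simplified illustration of the SOLR-3108 filter. Invented names.
public class ReplicaFilter {
    /** Returns every replica except the requesting core itself. */
    public static List<String> otherReplicas(Collection<String> coreNodeNames,
                                             String thisCoreNodeName) {
        List<String> out = new ArrayList<>();
        for (String name : coreNodeNames) {
            // Compare the full node+core identifier; comparing only the
            // node part would wrongly exclude sibling replicas hosted in
            // the same CoreContainer.
            if (!name.equals(thisCoreNodeName)) {
                out.add(name);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("host1:8983_solr_core1", "host1:8983_solr_core2");
        System.out.println(otherReplicas(all, "host1:8983_solr_core1")); // only core2
    }
}
```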
[jira] [Updated] (SOLR-3108) Error in SolrCloud's replica lookup code when replicas are hosted in same Solr instance
[ https://issues.apache.org/jira/browse/SOLR-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3108: -- Fix Version/s: 4.0 Error in SolrCloud's replica lookup code when replicas are hosted in same Solr instance Key: SOLR-3108 URL: https://issues.apache.org/jira/browse/SOLR-3108 Project: Solr Issue Type: Bug Components: SolrCloud Environment: Solr trunk as of today (r1241575) Reporter: Bruno Dumon Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3108.patch There's a small bug in ZkStateReader.getReplicaProps() when you have multiple replicas of the same shard/slice hosted in one CoreContainer. Not that you would often do this, but I was playing around with shard replicas using just one Solr instance and noticed it. The attached patch should make it clear; the check on !coreNodeName.equals(filterNodeName) will always be false in such a case.
[jira] [Updated] (SOLR-3080) We should consider removing shard info from Zk when you explicitly unload a SolrCore.
[ https://issues.apache.org/jira/browse/SOLR-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3080: -- Comment: was deleted (was: bq. Also I was not sure if the slice should be deleted when there are no more shards with that id (currently it is not deleted). Yeah, I think that is the right move for now - don't delete it.) We should consider removing shard info from Zk when you explicitly unload a SolrCore. - Key: SOLR-3080 URL: https://issues.apache.org/jira/browse/SOLR-3080 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-3080.patch
[jira] [Updated] (SOLR-3091) When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server.
[ https://issues.apache.org/jira/browse/SOLR-3091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3091: -- Component/s: SolrCloud When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server. - Key: SOLR-3091 URL: https://issues.apache.org/jira/browse/SOLR-3091 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3091.patch When a Solr instance that is not part of the quorum tries to start the Solr Zk server, no zkRun is set, and if not using localhost for the host, the ZK server fails to start as we cannot match it with any URL in the zkHost string. We shouldn't be trying to start the Zk Server at all - some bug that snuck in and was not spotted earlier because it didn't cause Solr to fail to start when using localhost addresses.
[jira] [Updated] (SOLR-3091) When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server.
[ https://issues.apache.org/jira/browse/SOLR-3091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3091: -- Attachment: SOLR-3091.patch When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server. - Key: SOLR-3091 URL: https://issues.apache.org/jira/browse/SOLR-3091 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3091.patch we try to match based on localhost by default - otherwise what zkRun is set to - if multiple hosts match, we go to port - however for a solr instance not part of the ensemble
[jira] [Updated] (SOLR-3091) When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server.
[ https://issues.apache.org/jira/browse/SOLR-3091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3091: -- Description: When a Solr instance that is not part of the quorum tries to start the Solr Zk server, no zkRun is set, and if not using localhost for the host, the ZK server fails to start as we cannot match it with any URL in the zkHost string. We shouldn't be trying to start the Zk Server at all - some bug that snuck in and was not spotted earlier because it didn't cause Solr to fail to start when using localhost addresses. (was: we try to match based on localhost by default - otherwise what zkRun is set to - if multiple hosts match, we go to port - however for a solr instance not part of the ensemble) When running in SolrCloud mode, whether an instance is supposed to be part of the quorum or not, it tries to start a local Solr ZK server. - Key: SOLR-3091 URL: https://issues.apache.org/jira/browse/SOLR-3091 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3091.patch When a Solr instance that is not part of the quorum tries to start the Solr Zk server, no zkRun is set, and if not using localhost for the host, the ZK server fails to start as we cannot match it with any URL in the zkHost string. We shouldn't be trying to start the Zk Server at all - some bug that snuck in and was not spotted earlier because it didn't cause Solr to fail to start when using localhost addresses.
[jira] [Updated] (SOLR-2622) ShowFileRequestHandler does not work in SolrCloud mode.
[ https://issues.apache.org/jira/browse/SOLR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2622: -- Summary: ShowFileRequestHandler does not work in SolrCloud mode. (was: ZkSolrResourceLoader does not support getConfigDir()) ShowFileRequestHandler does not work in SolrCloud mode. --- Key: SOLR-2622 URL: https://issues.apache.org/jira/browse/SOLR-2622 Project: Solr Issue Type: Bug Components: SolrCloud, web gui Environment: SVN-Revision: {{1139570}} Startup-Command: {{cd solr/example java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -jar start.jar}} Reporter: Stefan Matheis (steffkes) Assignee: Mark Miller Priority: Minor Fix For: 4.0 Requesting {{/solr/admin/file/?contentType=text/xml;charset=utf-8file=schema.xml}} generates an HTTP 500: {code}org.apache.solr.common.cloud.ZooKeeperException: ZkSolrResourceLoader does not support getConfigDir() - likely, what you are trying to do is not supported in ZooKeeper mode at org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:99) at org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:126) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582){code} It's not related to a specific file, requesting {{/solr/admin/file}} is enough.
[jira] [Updated] (SOLR-3041) Solrs using SolrCloud feature for having shared config in ZK, might not all start successfully when started for the first time simultaneously
[ https://issues.apache.org/jira/browse/SOLR-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3041: -- Fix Version/s: 4.0 Solrs using SolrCloud feature for having shared config in ZK, might not all start successfully when started for the first time simultaneously - Key: SOLR-3041 URL: https://issues.apache.org/jira/browse/SOLR-3041 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Environment: Exact version: https://builds.apache.org/job/Solr-trunk/1718/artifact/artifacts/apache-solr-4.0-2011-12-28_08-33-55.tgz Reporter: Per Steffensen Fix For: 4.0 Original Estimate: 96h Remaining Estimate: 96h Starting Solr like this: java -DzkHost=ZKs -Dbootstrap_confdir=./myproject/conf -Dcollection.configName=myproject_conf -Dsolr.solr.home=./myproject -jar start.jar When not already there (starting Solr for the first time), the content of ./myproject/conf will be copied by Solr into ZK. That process does not work very well in parallel, so if the content is not there and I start several Solrs simultaneously, one or more of them might not start successfully. I see exceptions like the ones shown below, and the Solrs throwing them will not work correctly afterwards. I know that there could be different workarounds, like making sure to always start one Solr and wait for a while before starting the rest of them, but I think we should really be more robust in these cases.
Regards, Per Steffensen exception example 1 (the znode causing the problem can be different than /configs/myproject_conf/protwords.txt) org.apache.solr.common.cloud.ZooKeeperException: at org.apache.solr.core.CoreContainer.initZooKeeper(CoreContainer.java:193) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:337) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:294) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:240) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:93) at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.start.Main.start(Main.java:441) at org.mortbay.start.Main.main(Main.java:119) Caused by: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /configs/myproject_conf/protwords.txt at org.apache.zookeeper.KeeperException.create(KeeperException.java:110) at org.apache.zookeeper.KeeperException.create(KeeperException.java:42) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:637) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:347) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:308) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:290)
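One common way to make a first-time parallel upload like this race-safe is to treat "node already exists" as success rather than failure. The sketch below is not Solr's or ZooKeeper's actual code - a ConcurrentMap stands in for ZooKeeper, and `putIfAbsent` plays the role of ZooKeeper's create(), whose NodeExistsException is what the stack trace above shows being propagated as a hard error.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative idempotent-create pattern for the race in SOLR-3041.
public class IdempotentCreate {
    // Stand-in for the ZooKeeper tree; keys are znode paths.
    static final ConcurrentMap<String, byte[]> zk = new ConcurrentHashMap<>();

    /**
     * Creates the path if absent. Returns true if this caller created it,
     * false if another node got there first - but never fails: a losing
     * racer simply proceeds, since the config it wanted to upload is there.
     */
    public static boolean makePath(String path, byte[] data) {
        return zk.putIfAbsent(path, data) == null;
    }

    public static void main(String[] args) {
        boolean first = makePath("/configs/myproject_conf/protwords.txt", new byte[0]);
        boolean second = makePath("/configs/myproject_conf/protwords.txt", new byte[0]);
        // One creator, one graceful no-op; neither Solr instance aborts startup.
        System.out.println(first + " " + second);
    }
}
```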
[jira] [Updated] (SOLR-3081) When using SolrCloud, warming queries are now defaulting to distrib=true.
[ https://issues.apache.org/jira/browse/SOLR-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3081: -- Attachment: SOLR-3081.patch Simple fix to default to distrib=false in the QuerySenderListener - I'll commit shortly. When using SolrCloud, warming queries are now defaulting to distrib=true. - Key: SOLR-3081 URL: https://issues.apache.org/jira/browse/SOLR-3081 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3081.patch We changed this default in general, but it seems warming queries still need to default to distrib=false.
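The shape of this kind of fix - layer a local default under configured params without overriding an explicit setting - can be sketched as follows. This is illustrative only, with invented names and a plain map standing in for Solr's request parameters; it is not the QuerySenderListener code itself.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of defaulting warming queries to distrib=false.
public class WarmingParams {
    /** Returns the params with distrib=false applied as a default only. */
    public static Map<String, String> withLocalDefault(Map<String, String> params) {
        Map<String, String> out = new HashMap<>(params);
        // A warming query should hit only the local index; fanning it out
        // across the cluster on every commit would be wasteful. An explicit
        // distrib setting in the config still wins.
        out.putIfAbsent("distrib", "false");
        return out;
    }

    public static void main(String[] args) {
        System.out.println(withLocalDefault(new HashMap<>()).get("distrib")); // defaulted
    }
}
```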
[jira] [Updated] (SOLR-3081) Default warming queries to distrib=false.
[ https://issues.apache.org/jira/browse/SOLR-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3081: -- Summary: Default warming queries to distrib=false. (was: When using SolrCloud, warming queries are now defaulting to distrib=true.) Default warming queries to distrib=false. - Key: SOLR-3081 URL: https://issues.apache.org/jira/browse/SOLR-3081 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3081.patch We changed this default in general, but it seems warming queries still need to default to distrib=false.
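Until a build with the fix is available, a possible workaround is to set distrib=false explicitly on each warming query in solrconfig.xml. A sketch, assuming the standard QuerySenderListener newSearcher configuration (the q value here is just an example):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <!-- keep warming local to this core rather than fanning out -->
      <str name="distrib">false</str>
    </lst>
  </arr>
</listener>
```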
[jira] [Updated] (SOLR-3082) If you use a lazy replication request handler, the commit listener will not be registered right away, and might miss tracking the last commit.
[ https://issues.apache.org/jira/browse/SOLR-3082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3082: -- Attachment: SOLR-3082.patch simple patch If you use a lazy replication request handler, the commit listener will not be registered right away, and might miss tracking the last commit. -- Key: SOLR-3082 URL: https://issues.apache.org/jira/browse/SOLR-3082 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3082.patch The result is that you might see version 0 as the latest commit version when it's really not.
[jira] [Updated] (SOLR-3075) Overseer does not check cloudstate for previously assigned shardId but generates a new one
[ https://issues.apache.org/jira/browse/SOLR-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3075: -- Fix Version/s: 4.0 Assignee: Mark Miller Overseer does not check cloudstate for previously assigned shardId but generates a new one -- Key: SOLR-3075 URL: https://issues.apache.org/jira/browse/SOLR-3075 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Reporter: Sami Siren Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-3075.patch Overseer does not check if a core has already been assigned a shardId before assigning it a new one - the same core could end up having multiple shardIds in CloudState.
[jira] [Updated] (SOLR-3063) Bug in LeaderElectionIntegrationTest
[ https://issues.apache.org/jira/browse/SOLR-3063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3063: -- Assignee: Mark Miller Bug in LeaderElectionIntegrationTest --- Key: SOLR-3063 URL: https://issues.apache.org/jira/browse/SOLR-3063 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Reporter: Sami Siren Assignee: Mark Miller Priority: Minor Attachments: SOLR-3063.patch #getLeader() tries to get leader props from a stale zkStateReader (one that has not set watches).
[jira] [Updated] (SOLR-3066) SolrIndexSearcher open/close imbalance in some of the new SolrCloud tests.
[ https://issues.apache.org/jira/browse/SOLR-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3066: -- Comment: was deleted (was: Thanks Sami - committing this now. Only comment is that we might want to at least log warnings on the '// XXX stop processing, exit' spots, at least for debugging purposes - I have not for the moment, but if you agree I will add them.) SolrIndexSearcher open/close imbalance in some of the new SolrCloud tests. -- Key: SOLR-3066 URL: https://issues.apache.org/jira/browse/SOLR-3066 Project: Solr Issue Type: Test Reporter: Mark Miller I have not been able to duplicate this test issue on my systems yet, but on jenkins, some tests that start and stop jetty instances during the test are having trouble cleaning up and can bleed into other tests. I'm working on isolating the reason for this - I seem to have been IP-banned from jenkins at the moment, but when I can ssh in there, I will be able to speed up the try/feedback loop some. I've spent a lot of time trying to duplicate this across 3 other systems, but I don't see the same issue anywhere but our jenkins server thus far.
[jira] [Updated] (SOLR-3065) Let overseer process cluster state changes asynchronously
[ https://issues.apache.org/jira/browse/SOLR-3065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3065: -- Fix Version/s: 4.0 Let overseer process cluster state changes asynchronously - Key: SOLR-3065 URL: https://issues.apache.org/jira/browse/SOLR-3065 Project: Solr Issue Type: Improvement Components: SolrCloud Affects Versions: 4.0 Reporter: Sami Siren Priority: Minor Fix For: 4.0 Attachments: SOLR-3065.patch Currently the overseer updates clusterstate.json on almost every change - one change at a time. This is not efficient when there are a lot of changes happening in a short period of time (for example, when a number of hosts are started at once). It would be better if changes were batched and published periodically instead.
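The improvement proposed above can be sketched as a queue that coalesces many incoming changes into one publish. This is a toy illustration of the batching idea, not Solr's Overseer code; all names here (BatchedPublisher, submit, publishBatch) are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchedPublisher {
    // Queue incoming state changes and publish them as one batch,
    // rather than writing clusterstate.json once per change.
    final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    int publishes = 0;

    void submit(String change) {
        pending.add(change); // cheap; no ZK write here
    }

    // Called periodically: drain everything queued so far, publish once.
    List<String> publishBatch() {
        List<String> batch = new ArrayList<>();
        pending.drainTo(batch);
        if (!batch.isEmpty()) {
            publishes++; // stands in for a single clusterstate.json write
        }
        return batch;
    }
}
```

With this shape, starting N hosts at once costs one cluster-state write per timer tick instead of N writes.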
[jira] [Updated] (SOLR-2358) Distributing Indexing
[ https://issues.apache.org/jira/browse/SOLR-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2358: -- Attachment: SOLR-2358.patch Okay, here is the patch - also requires new zookeeper and noggit jars. Distributing Indexing - Key: SOLR-2358 URL: https://issues.apache.org/jira/browse/SOLR-2358 Project: Solr Issue Type: New Feature Components: SolrCloud, update Reporter: William Mayor Priority: Minor Fix For: 4.0 Attachments: 2shard4server.jpg, SOLR-2358.patch, SOLR-2358.patch, apache-solr-noggit-r1211150.jar, zookeeper-3.3.4.jar The indexing side of SolrCloud - the goal of this issue is to provide durable, fault tolerant indexing to an elastic cluster of Solr instances.
[jira] [Updated] (SOLR-2358) Distributing Indexing
[ https://issues.apache.org/jira/browse/SOLR-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2358: -- Attachment: zookeeper-3.3.4.jar apache-solr-noggit-r1211150.jar Distributing Indexing - Key: SOLR-2358 URL: https://issues.apache.org/jira/browse/SOLR-2358 Project: Solr Issue Type: New Feature Components: SolrCloud, update Reporter: William Mayor Priority: Minor Fix For: 4.0 Attachments: 2shard4server.jpg, SOLR-2358.patch, SOLR-2358.patch, apache-solr-noggit-r1211150.jar, zookeeper-3.3.4.jar The indexing side of SolrCloud - the goal of this issue is to provide durable, fault tolerant indexing to an elastic cluster of Solr instances.
[jira] [Updated] (SOLR-2287) SolrCloud - Allow users to query by multiple, compatible collections
[ https://issues.apache.org/jira/browse/SOLR-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2287: -- Summary: SolrCloud - Allow users to query by multiple, compatible collections (was: (SolrCloud) Allow users to query by multiple collections) SolrCloud - Allow users to query by multiple, compatible collections Key: SOLR-2287 URL: https://issues.apache.org/jira/browse/SOLR-2287 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Soheb Mahmood Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-2287.patch, SOLR-2287.patch, SOLR-2287.patch This code fixes the todo items mentioned on the SolrCloud wiki: - optionally allow the user to query by collection - optionally allow the user to query by multiple collections (assuming schemas are compatible) We are going to put up a patch to see if anyone has any trouble with this code and/or if there are any comments on how to improve it. Unfortunately, as of now, we don't have a test class, as we are still working on it. We are sorry about this.
[jira] [Updated] (SOLR-2287) (SolrCloud) Allow users to query by multiple collections
[ https://issues.apache.org/jira/browse/SOLR-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2287: -- Fix Version/s: 4.0 (SolrCloud) Allow users to query by multiple collections Key: SOLR-2287 URL: https://issues.apache.org/jira/browse/SOLR-2287 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Soheb Mahmood Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-2287.patch, SOLR-2287.patch, SOLR-2287.patch This code fixes the todo items mentioned on the SolrCloud wiki: - optionally allow the user to query by collection - optionally allow the user to query by multiple collections (assuming schemas are compatible) We are going to put up a patch to see if anyone has any trouble with this code and/or if there are any comments on how to improve it. Unfortunately, as of now, we don't have a test class, as we are still working on it. We are sorry about this.
[jira] [Updated] (SOLR-3001) Documents dropping when using DistributedUpdateProcessor
[ https://issues.apache.org/jira/browse/SOLR-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3001: -- Fix Version/s: 4.0 Documents dropping when using DistributedUpdateProcessor --- Key: SOLR-3001 URL: https://issues.apache.org/jira/browse/SOLR-3001 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.0 Environment: Windows 7, Ubuntu Reporter: Rafał Kuć Assignee: Mark Miller Fix For: 4.0 I have a problem with distributed indexing in the solrcloud branch. I've set up a cluster with three Solr servers and I'm using DistributedUpdateProcessor to do the distributed indexing. What I've noticed is that when indexing with StreamingUpdateSolrServer or CommonsHttpSolrServer with a queue or list holding more than one document, documents seem to be dropped. I did some tests which tried to index 450k documents. If I sent the documents one by one, indexing executed properly and the three Solr instances held 450k documents (when summed up). However, when I tried to add documents in batches (for example with StreamingUpdateSolrServer and a queue of 1000), the shard I was sending the documents to held only a minimal number of documents (about 100) while the other shards held about 150k documents. Each Solr was started with a single core and in Zookeeper mode. An example solr.xml file:
{noformat}
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores defaultCoreName="collection1" adminPath="/admin/cores" zkClientTimeout="1" hostPort="8983" hostContext="solr">
    <core shard="shard1" instanceDir="." name="collection1" />
  </cores>
</solr>
{noformat}
The solrconfig.xml file on each of the shards contained the following entries:
{noformat}
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">distrib</str>
  </lst>
</requestHandler>
{noformat}
{noformat}
<updateRequestProcessorChain name="distrib">
  <processor class="org.apache.solr.update.processor.DistributedUpdateProcessorFactory" />
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
{noformat}
I found a solution, but I don't know if it is the proper one. I've modified the code that is responsible for handling the replicas in {{private List<String> setupRequest(int hash)}} of {{DistributedUpdateProcessorFactory}}. I've added the following code:
{noformat}
if (urls == null) {
  urls = new ArrayList<String>(1);
  urls.add(leaderUrl);
} else {
  if (!urls.contains(leaderUrl)) {
    urls.add(leaderUrl);
  }
}
{noformat}
after:
{noformat}
urls = getReplicaUrls(req, collection, shardId, nodeName);
{noformat}
If this is the proper approach I'll be glad to provide a patch with the modification. -- Regards Rafał Kuć Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch Lucene ecosystem search :: http://search-lucene.com/
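The reported fix boils down to guaranteeing the shard leader's URL is always part of the forwarding list. As a standalone sketch of that guard (the class and method names here are illustrative, not from the actual patch):

```java
import java.util.ArrayList;
import java.util.List;

public class LeaderUrls {
    // Ensure the leader URL is always included in the list of replica URLs
    // an update is forwarded to. Returns the (possibly newly created) list.
    static List<String> ensureLeaderIncluded(List<String> urls, String leaderUrl) {
        if (urls == null) {
            // no replicas found: forward to the leader alone
            urls = new ArrayList<String>(1);
            urls.add(leaderUrl);
        } else if (!urls.contains(leaderUrl)) {
            // replicas found but the leader was missing: add it
            urls.add(leaderUrl);
        }
        return urls;
    }
}
```

Without such a guard, a batch routed to a shard whose leader is absent from the replica list would silently skip the leader, which matches the document loss described above.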
[jira] [Updated] (SOLR-2358) Distributing Indexing
[ https://issues.apache.org/jira/browse/SOLR-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2358: -- Description: The indexing side of SolrCloud - the goal of this issue is to provide durable, fault tolerant indexing to an elastic cluster of Solr instances. (was: The first steps towards creating distributed indexing functionality in Solr) Distributing Indexing - Key: SOLR-2358 URL: https://issues.apache.org/jira/browse/SOLR-2358 Project: Solr Issue Type: New Feature Components: SolrCloud, update Reporter: William Mayor Priority: Minor Fix For: 4.0 Attachments: SOLR-2358.patch The indexing side of SolrCloud - the goal of this issue is to provide durable, fault tolerant indexing to an elastic cluster of Solr instances.
[jira] [Updated] (SOLR-2654) <lockType/> not used consistently in all places Directory objects are instantiated
[ https://issues.apache.org/jira/browse/SOLR-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2654: -- Fix Version/s: (was: 3.6) <lockType/> not used consistently in all places Directory objects are instantiated -- Key: SOLR-2654 URL: https://issues.apache.org/jira/browse/SOLR-2654 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Mark Miller Fix For: 4.0 Attachments: SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2654.patch, SOLR-2698.patch nipunb noted on the mailing list that when configuring solr to use an alternate <lockType/> (ie: simple), the stats for the SolrIndexSearcher list NativeFSLockFactory being used by the Directory. The problem seems to be that SolrIndexConfig is not consulted when constructing Directory objects used for IndexReaders (it's only used by SolrIndexWriter). I don't _think_ this is a problem in most cases (since the IndexReaders should all be readOnly in the core solr code), but plugins could attempt to use them in other ways. In general it seems like a really bad bug waiting to happen.
[jira] [Updated] (SOLR-1632) Distributed IDF
[ https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-1632: -- Attachment: SOLR-1632.patch I found this work hidden away in my eclipse workspace! It still has the thread-local stuff - either I had only thought about how I was going to remove it, or this was not the latest work; either way, it gives us a patch that applies to trunk, which is much better. There is still a fair amount to do, at minimum, to switch to using the new scoring stats. I started some really simple moves towards this (super baby step), so things don't compile at the moment. The patch should apply cleanly though. Distributed IDF --- Key: SOLR-1632 URL: https://issues.apache.org/jira/browse/SOLR-1632 Project: Solr Issue Type: New Feature Components: search Affects Versions: 1.5 Reporter: Andrzej Bialecki Attachments: SOLR-1632.patch, distrib-2.patch, distrib.patch Distributed IDF is a valuable enhancement for distributed search across non-uniform shards. This issue tracks the proposed implementation of an API to support this functionality in Solr.
[jira] [Updated] (SOLR-2700) transaction logging
[ https://issues.apache.org/jira/browse/SOLR-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2700: -- Component/s: SolrCloud transaction logging --- Key: SOLR-2700 URL: https://issues.apache.org/jira/browse/SOLR-2700 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Yonik Seeley Attachments: SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch, SOLR-2700.patch A transaction log is needed for durability of updates, for a more performant realtime-get, and for replaying updates to recovering peers.
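The core idea in the description above - append every update so a recovering peer can replay what it missed - can be shown with a toy in-memory log. This is an illustration of the concept only, not Solr's transaction log (the class and method names are hypothetical, and a real log would write to a file):

```java
import java.util.ArrayList;
import java.util.List;

public class TinyTLog {
    // An in-memory list stands in for a durable append-only file.
    private final List<String> log = new ArrayList<>();

    // Record an update; in a real log this would be an fsync'd file append.
    void append(String update) {
        log.add(update);
    }

    // Replay every update recorded at or after the given position,
    // e.g. for a peer that was down and is catching up.
    List<String> replayFrom(int position) {
        return new ArrayList<>(log.subList(position, log.size()));
    }
}
```

The same structure also serves realtime-get: the latest version of a document can be answered from the tail of the log before a commit makes it searchable.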
[jira] [Updated] (SOLR-2912) File descriptor leak in ShowFileRequestHandler.getFileContents()
[ https://issues.apache.org/jira/browse/SOLR-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2912: -- Fix Version/s: 4.0 3.6 File descriptor leak in ShowFileRequestHandler.getFileContents() Key: SOLR-2912 URL: https://issues.apache.org/jira/browse/SOLR-2912 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 3.2 Reporter: Michael Ryan Priority: Minor Fix For: 3.6, 4.0 There is a file descriptor leak in ShowFileRequestHandler.getFileContents() - the InputStream is not closed. This could cause a "Too many open files" error if the admin page is loaded a lot. I've only tested this on 3.2, but I think it affects all recent versions, including trunk.
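The general fix pattern for a leak like this is to close the stream on every code path, not just the success path. A hedged sketch (this is the generic pattern using modern try-with-resources syntax, not the committed patch; the class and method names are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class StreamClose {
    // Read the full contents and always release the file descriptor,
    // even if reading throws: try-with-resources closes s on exit.
    static String readAll(InputStream in) {
        try (InputStream s = in) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = s.read()) != -1) {
                sb.append((char) c);
            }
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

On Java 6 (the era of this report) the equivalent is an explicit close() in a finally block; either way the descriptor is released before the method returns.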
[jira] [Updated] (LUCENE-3034) If you vary a setting per round and that setting is a long string, the report padding/columns break down.
[ https://issues.apache.org/jira/browse/LUCENE-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated LUCENE-3034: Fix Version/s: (was: 3.1.1) 3.6 If you vary a setting per round and that setting is a long string, the report padding/columns break down. - Key: LUCENE-3034 URL: https://issues.apache.org/jira/browse/LUCENE-3034 Project: Lucene - Java Issue Type: Improvement Components: modules/benchmark Reporter: Mark Miller Assignee: Mark Miller Priority: Trivial Fix For: 3.6, 4.0 This is especially noticeable if you vary a setting where the value is a fully specified class name - in this case, it would be nice if columns in each row still lined up.
[jira] [Updated] (SOLR-2622) ZkSolrResourceLoader does not support getConfigDir()
[ https://issues.apache.org/jira/browse/SOLR-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2622: -- Priority: Minor (was: Major) Affects Version/s: (was: 4.0) Fix Version/s: 4.0 ZkSolrResourceLoader does not support getConfigDir() Key: SOLR-2622 URL: https://issues.apache.org/jira/browse/SOLR-2622 Project: Solr Issue Type: Bug Components: SolrCloud, web gui Environment: SVN-Revision: {{1139570}} Startup-Command: {{cd solr/example && java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -jar start.jar}} Reporter: Stefan Matheis (steffkes) Priority: Minor Fix For: 4.0 Requesting {{/solr/admin/file/?contentType=text/xml;charset=utf-8&file=schema.xml}} generates an HTTP 500:
{code}
org.apache.solr.common.cloud.ZooKeeperException: ZkSolrResourceLoader does not support getConfigDir() - likely, what you are trying to do is not supported in ZooKeeper mode
  at org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:99)
  at org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:126)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
  at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
  at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
  at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
  at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
  at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
  at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
  at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
  at org.mortbay.jetty.Server.handle(Server.java:326)
  at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
  at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
  at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
  at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
  at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
  at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
  at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}
It's not related to a specific file; requesting {{/solr/admin/file}} is enough.
[jira] [Updated] (SOLR-2799) SolrCloud reads its entire state from Zookeeper on every update instead of what has changed
[ https://issues.apache.org/jira/browse/SOLR-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2799: -- Fix Version/s: 4.0 Assignee: Mark Miller SolrCloud reads its entire state from Zookeeper on every update instead of what has changed --- Key: SOLR-2799 URL: https://issues.apache.org/jira/browse/SOLR-2799 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jamie Johnson Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: cloudstate.patch, cloudstate.patch Currently solrcloud reads the entire cloud state from ZK anytime an update is scheduled, which can be very inefficient with a large number of shards.
[jira] [Updated] (SOLR-2799) SolrCloud reads its entire state from Zookeeper on every update instead of what has changed
[ https://issues.apache.org/jira/browse/SOLR-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-2799: -- Description: Currently solrcloud reads the entire cloud state from ZK anytime an update is scheduled which can be very inefficient with a large number of shards. see discussion on user list: http://www.lucidimagination.com/search/document/54fa402cf3171bc3/solr_cloud_number_of_shard_limitation was:Currently solrcloud reads the entire cloud state from ZK anytime an update is scheduled which can be very inefficient with a large number of shards. SolrCloud reads its entire state from Zookeeper on every update instead of what has changed --- Key: SOLR-2799 URL: https://issues.apache.org/jira/browse/SOLR-2799 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jamie Johnson Assignee: Mark Miller Priority: Minor Fix For: 4.0 Attachments: SOLR-2799.patch, cloudstate.patch, cloudstate.patch Currently solrcloud reads the entire cloud state from ZK anytime an update is scheduled which can be very inefficient with a large number of shards. see discussion on user list: http://www.lucidimagination.com/search/document/54fa402cf3171bc3/solr_cloud_number_of_shard_limitation
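One way to avoid re-reading the whole state on every update is to cache the parsed state and refresh it only when its version has advanced (ZooKeeper znodes carry a version in their Stat). The sketch below illustrates that caching idea only; it is not the attached patch, and the names (CachedState, get, fetches) are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntFunction;

public class CachedState {
    // Cache the last parsed cluster state together with the znode version
    // it was read at; only re-fetch when the version changes.
    private String cached;
    private int cachedVersion = -1;
    final AtomicInteger fetches = new AtomicInteger(); // counts actual reads

    String get(int liveVersion, IntFunction<String> fetch) {
        if (liveVersion != cachedVersion) {
            // state actually changed: do the expensive read once
            cached = fetch.apply(liveVersion);
            cachedVersion = liveVersion;
            fetches.incrementAndGet();
        }
        return cached; // otherwise serve the cached copy
    }
}
```

With many shards, most scheduled updates then cost a version comparison instead of a full state read and parse.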