Re: 2B tests

2015-03-15 Thread Michael Wechner
What are the "2B tests"? I guess the entry point is

lucene/core/src/test/org/apache/lucene/index/Test2BTerms.java

or where would you start to learn more about these tests?

Thanks

Michael


On 15.03.15 at 21:58, Michael McCandless wrote:
> I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
> this is the command I run, for future reference:
>
>   ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
> -Dtests.workDir=/p/tmp
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>






[jira] [Commented] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362787#comment-14362787
 ] 

Shalin Shekhar Mangar commented on SOLR-7109:
-

Thanks for fixing the Java7 error, Yonik!

> Indexing threads stuck during network partition can put leader into down state
> --
>
> Key: SOLR-7109
> URL: https://issues.apache.org/jira/browse/SOLR-7109
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3, 5.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7109.patch, SOLR-7109.patch
>
>
> I found this recently while running some Jepsen tests. Some threads get stuck 
> on ZK operations for a long time in the 
> ZkController.updateLeaderInitiatedRecoveryState method, and when they wake up 
> they go ahead with setting the LIR state to down. But in the meantime, a new 
> leader has been elected, and sometimes you'd get into a state where the leader 
> itself is put into recovery, causing the shard to reject all writes.






[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362776#comment-14362776
 ] 

ASF subversion and git services commented on SOLR-7214:
---

Commit 1666876 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666876 ]

SOLR-7214: JSON Facet API

> JSON Facet API
> --
>
> Key: SOLR-7214
> URL: https://issues.apache.org/jira/browse/SOLR-7214
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-7214.patch
>
>
> Overview is here: http://heliosearch.org/json-facet-api/
> The structured nature of nested sub-facets is more naturally expressed in a 
> nested structure like JSON than in the flat structure that normal query 
> parameters provide.
> Goals:
> - First class JSON support
> - Easier programmatic construction of complex nested facet commands
> - Support a much more canonical response format that is easier for clients to 
> parse
> - First class analytics support
> - Support a cleaner way to do distributed faceting
> - Support better integration with other search features
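As a concrete illustration of the goals above, here is a minimal SolrJ sketch of sending a nested facet command through the json.facet request parameter described in the linked overview. The collection name, the cat/price fields, and the avg(price) sub-facet are assumptions for illustration only, not part of the patch.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JsonFacetSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    // a nested sub-facet expressed as JSON instead of flat facet.* parameters
    q.add("json.facet",
        "{top_cats:{type:terms, field:cat, limit:5, facet:{avg_price:\"avg(price)\"}}}");
    QueryResponse rsp = client.query(q);
    // JSON Facet API results come back under the "facets" key of the response
    System.out.println(rsp.getResponse().get("facets"));
    client.close();
  }
}
{code}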



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7126) Secure loading of runtime external jars

2015-03-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362774#comment-14362774
 ] 

Yonik Seeley edited comment on SOLR-7126 at 3/16/15 5:46 AM:
-

Reopening.  This test (TestCryptoKeys) has sometimes been failing for me.
I just saw a failure on Jenkins too.

I also just changed this to a blocker for 5.1...
Unless there is something inherently hard to test here, there should be no 
excuse for new tests being flaky.


was (Author: ysee...@gmail.com):
Reopening.  This test (TestCryptoKeys) has sometimes been failing for me.
I just saw a failure on Jenkins too.

> Secure loading of runtime external jars
> ---
>
> Key: SOLR-7126
> URL: https://issues.apache.org/jira/browse/SOLR-7126
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: security
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7126.patch, SOLR-7126.patch, SOLR-7126.patch
>
>
> We need to ensure that the jars loaded into Solr are trusted.
> We shall use simple PKI to protect the jars/config loaded into the system.
> The following are the steps involved for doing that.
> {noformat}
> #Step 1:
> # generate a 768-bit RSA private key, or whatever strength you need
> $ openssl genrsa -out priv_key.pem 768
> # store your private keys safely (with a password if possible)
> # output the public key portion in DER format (so that Java can read it)
> $ openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der
> #Step 2:
> # load the .DER files into ZK under /keys/exe
> #Step 3:
> # start all your servers with -Denable.runtime.lib=true
> #Step 4:
> # sign the sha1 digest of your jar with one of your private keys and get the
> base64 string of that signature.
> $ openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64
> #Step 5:
> # load your jars into the blob store (see SOLR-6787)
> #Step 6:
> # use the following command to add your jar to the classpath
> {noformat}
> {code}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "add-runtimelib" : {"name": "jarname" , "version":2 , 
> "sig":"mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0="
>  }// output of step 4. concatenate the lines 
> }' 
> {code}
> sig is the extra parameter that is nothing but the base64-encoded value of
> the jar's sha1 signature.
> If no keys are present, the jar is loaded without any checking.
> Before loading a jar from the blob store, each Solr node checks whether there
> are keys present in the keys directory. If yes, each jar's signature is
> verified against all the available public keys. If at least one succeeds, the
> jar is loaded into memory. If nothing succeeds, it is rejected.
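For readers following along, here is a minimal sketch of what that verification step amounts to in plain JCA terms. This is not Solr's actual implementation; the class and method names are made up, and it assumes Java 8 for java.util.Base64.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class JarSignatureCheck {
  /** Verifies an "openssl dgst -sha1 -sign" signature against a DER-encoded RSA public key. */
  public static boolean verify(Path jar, Path derPublicKey, String base64Sig) throws Exception {
    PublicKey pub = KeyFactory.getInstance("RSA")
        .generatePublic(new X509EncodedKeySpec(Files.readAllBytes(derPublicKey)));
    Signature sig = Signature.getInstance("SHA1withRSA");  // matches the sha1 + RSA signing above
    sig.initVerify(pub);
    sig.update(Files.readAllBytes(jar));                   // digest the jar bytes
    // the MIME decoder tolerates the line breaks produced by "openssl enc -base64"
    return sig.verify(Base64.getMimeDecoder().decode(base64Sig));
  }
}
{code}

A node holding several public keys would simply loop over them and accept the jar if any call returns true.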






[jira] [Updated] (SOLR-7126) Secure loading of runtime external jars

2015-03-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7126:
---
Priority: Blocker  (was: Major)

> Secure loading of runtime external jars
> ---
>
> Key: SOLR-7126
> URL: https://issues.apache.org/jira/browse/SOLR-7126
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: security
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7126.patch, SOLR-7126.patch, SOLR-7126.patch
>
>
> We need to ensure that the jars loaded into Solr are trusted.
> We shall use simple PKI to protect the jars/config loaded into the system.
> The following are the steps involved for doing that.
> {noformat}
> #Step 1:
> # generate a 768-bit RSA private key, or whatever strength you need
> $ openssl genrsa -out priv_key.pem 768
> # store your private keys safely (with a password if possible)
> # output the public key portion in DER format (so that Java can read it)
> $ openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der
> #Step 2:
> # load the .DER files into ZK under /keys/exe
> #Step 3:
> # start all your servers with -Denable.runtime.lib=true
> #Step 4:
> # sign the sha1 digest of your jar with one of your private keys and get the
> base64 string of that signature.
> $ openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64
> #Step 5:
> # load your jars into the blob store (see SOLR-6787)
> #Step 6:
> # use the following command to add your jar to the classpath
> {noformat}
> {code}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "add-runtimelib" : {"name": "jarname" , "version":2 , 
> "sig":"mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0="
>  }// output of step 4. concatenate the lines 
> }' 
> {code}
> sig is the extra parameter that is nothing but the base64-encoded value of
> the jar's sha1 signature.
> If no keys are present, the jar is loaded without any checking.
> Before loading a jar from the blob store, each Solr node checks whether there
> are keys present in the keys directory. If yes, each jar's signature is
> verified against all the available public keys. If at least one succeeds, the
> jar is loaded into memory. If nothing succeeds, it is rejected.






[jira] [Reopened] (SOLR-7126) Secure loading of runtime external jars

2015-03-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened SOLR-7126:


Reopening.  This test (TestCryptoKeys) has sometimes been failing for me.
I just saw a failure on Jenkins too.

> Secure loading of runtime external jars
> ---
>
> Key: SOLR-7126
> URL: https://issues.apache.org/jira/browse/SOLR-7126
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: security
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7126.patch, SOLR-7126.patch, SOLR-7126.patch
>
>
> We need to ensure that the jars loaded into Solr are trusted.
> We shall use simple PKI to protect the jars/config loaded into the system.
> The following are the steps involved for doing that.
> {noformat}
> #Step 1:
> # generate a 768-bit RSA private key, or whatever strength you need
> $ openssl genrsa -out priv_key.pem 768
> # store your private keys safely (with a password if possible)
> # output the public key portion in DER format (so that Java can read it)
> $ openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der
> #Step 2:
> # load the .DER files into ZK under /keys/exe
> #Step 3:
> # start all your servers with -Denable.runtime.lib=true
> #Step 4:
> # sign the sha1 digest of your jar with one of your private keys and get the
> base64 string of that signature.
> $ openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64
> #Step 5:
> # load your jars into the blob store (see SOLR-6787)
> #Step 6:
> # use the following command to add your jar to the classpath
> {noformat}
> {code}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "add-runtimelib" : {"name": "jarname" , "version":2 , 
> "sig":"mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0="
>  }// output of step 4. concatenate the lines 
> }' 
> {code}
> sig is the extra parameter that is nothing but the base64-encoded value of
> the jar's sha1 signature.
> If no keys are present, the jar is loaded without any checking.
> Before loading a jar from the blob store, each Solr node checks whether there
> are keys present in the keys directory. If yes, each jar's signature is
> verified against all the available public keys. If at least one succeeds, the
> jar is loaded into memory. If nothing succeeds, it is rejected.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 11985 - Failure!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11985/
Java: 32bit/jdk1.8.0_40-ea-b22 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCryptoKeys.test

Error Message:
{"error":{ "msg":"no such blob or version available: signedjar/1", 
"code":404}}

Stack Trace:
java.lang.AssertionError: {"error":{
"msg":"no such blob or version available: signedjar/1",
"code":404}}
at 
__randomizedtesting.SeedInfo.seed([76A35EDC90D32AA7:FEF761063E2F475F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestCryptoKeys.test(TestCryptoKeys.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:

[jira] [Commented] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362750#comment-14362750
 ] 

ASF subversion and git services commented on SOLR-7109:
---

Commit 1666863 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666863 ]

SOLR-7109: fix java7 compile error
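For context on the ZkController.java:173 compile failures in the Jenkins reports elsewhere in this digest: under javac 7 the diamond operator does not pick up its type arguments from the surrounding assignment when the new expression is nested inside another method call, so a field initialized via Collections.synchronizedMap(new HashMap<>()) compiles on Java 8 but not on Java 7. A minimal sketch of the failure mode and the usual fix (the field name and type arguments below are placeholders, not the actual ZkController declaration):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class DiamondOnJava7 {
  // Fails on javac 7 with "incompatible types": the diamond inside the nested call
  // infers HashMap<Object,Object>, so synchronizedMap returns Map<Object,Object>.
  // Java 8's target typing makes the same line compile.
  //
  //   private final Map<String, Object> contexts =
  //       Collections.synchronizedMap(new HashMap<>());

  // Spelling out the type arguments compiles on both Java 7 and Java 8:
  private final Map<String, Object> contexts =
      Collections.synchronizedMap(new HashMap<String, Object>());
}
{code}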

> Indexing threads stuck during network partition can put leader into down state
> --
>
> Key: SOLR-7109
> URL: https://issues.apache.org/jira/browse/SOLR-7109
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3, 5.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7109.patch, SOLR-7109.patch
>
>
> I found this recently while running some Jepsen tests. Some threads get stuck 
> on ZK operations for a long time in the 
> ZkController.updateLeaderInitiatedRecoveryState method, and when they wake up 
> they go ahead with setting the LIR state to down. But in the meantime, a new 
> leader has been elected, and sometimes you'd get into a state where the leader 
> itself is put into recovery, causing the shard to reject all writes.






[jira] [Updated] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-15 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6339:
-
Attachment: LUCENE-6339.patch

Updated Patch:
 - nuke AutomatonUtil
 - make CompletionAnalyzer immutable
 - add tests
 - minor fixes

> [suggest] Near real time Document Suggester
> ---
>
> Key: LUCENE-6339
> URL: https://issues.apache.org/jira/browse/LUCENE-6339
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Affects Versions: 5.0
>Reporter: Areek Zillur
>Assignee: Areek Zillur
> Fix For: 5.0
>
> Attachments: LUCENE-6339.patch, LUCENE-6339.patch, LUCENE-6339.patch
>
>
> The idea is to index documents with one or more *SuggestField*(s) and be able 
> to suggest documents with a *SuggestField* value that matches a given key.
> A SuggestField can be assigned a numeric weight to be used to score the 
> suggestion at query time.
> Document suggestion can be done on an indexed *SuggestField*. The document 
> suggester can filter out deleted documents in near real-time. The suggester 
> can filter out documents based on a Filter (note: may change to a non-scoring 
> query?) at query time.
> A custom postings format (CompletionPostingsFormat) is used to index 
> SuggestField(s) and perform document suggestions.
> h4. Usage
> {code:java}
>   // hook up custom postings format
>   // indexAnalyzer for SuggestField
>   Analyzer analyzer = ...
>   IndexWriterConfig config = new IndexWriterConfig(analyzer);
>   Codec codec = new Lucene50Codec() {
> PostingsFormat completionPostingsFormat = new 
> Completion50PostingsFormat();
> @Override
> public PostingsFormat getPostingsFormatForField(String field) {
>   if (isSuggestField(field)) {
> return completionPostingsFormat;
>   }
>   return super.getPostingsFormatForField(field);
> }
>   };
>   config.setCodec(codec);
>   IndexWriter writer = new IndexWriter(dir, config);
>   // index some documents with suggestions
>   Document doc = new Document();
>   doc.add(new SuggestField("suggest_title", "title1", 2));
>   doc.add(new SuggestField("suggest_name", "name1", 3));
>   writer.addDocument(doc);
>   ...
>   // open an nrt reader for the directory
>   DirectoryReader reader = DirectoryReader.open(writer, false);
>   // SuggestIndexSearcher is a thin wrapper over IndexSearcher
>   // queryAnalyzer will be used to analyze the query string
>   SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, 
> queryAnalyzer);
>   
>   // suggest 10 documents for "titl" on "suggest_title" field
>   TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
> {code}
> h4. Indexing
> Index analyzer set through *IndexWriterConfig*
> {code:java}
> SuggestField(String name, String value, long weight) 
> {code}
> h4. Query
> Query analyzer set through *SuggestIndexSearcher*.
> Hits are collected in descending order of the suggestion's weight 
> {code:java}
> // full options for TopSuggestDocs (TopDocs)
> TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)
> // full options for Collector
> // note: only collects does not score
> void suggest(String field, CharSequence key, int maxNumPerLeaf, Filter 
> filter, Collector collector)
> {code}
> h4. Analyzer
> *CompletionAnalyzer* can instead be used to wrap another analyzer to tune 
> suggest-field-only parameters. 
> {code:java}
> CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
> preservePositionIncrements, int maxGraphExpansions)
> {code}






[jira] [Updated] (LUCENE-6339) [suggest] Near real time Document Suggester

2015-03-15 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6339:
-
Description: 
The idea is to index documents with one or more *SuggestField*(s) and be able 
to suggest documents with a *SuggestField* value that matches a given key.
A SuggestField can be assigned a numeric weight to be used to score the 
suggestion at query time.

Document suggestion can be done on an indexed *SuggestField*. The document 
suggester can filter out deleted documents in near real-time. The suggester can 
filter out documents based on a Filter (note: may change to a non-scoring 
query?) at query time.

A custom postings format (CompletionPostingsFormat) is used to index 
SuggestField(s) and perform document suggestions.

h4. Usage
{code:java}
  // hook up custom postings format
  // indexAnalyzer for SuggestField
  Analyzer analyzer = ...
  IndexWriterConfig config = new IndexWriterConfig(analyzer);
  Codec codec = new Lucene50Codec() {
PostingsFormat completionPostingsFormat = new Completion50PostingsFormat();

@Override
public PostingsFormat getPostingsFormatForField(String field) {
  if (isSuggestField(field)) {
return completionPostingsFormat;
  }
  return super.getPostingsFormatForField(field);
}
  };
  config.setCodec(codec);
  IndexWriter writer = new IndexWriter(dir, config);
  // index some documents with suggestions
  Document doc = new Document();
  doc.add(new SuggestField("suggest_title", "title1", 2));
  doc.add(new SuggestField("suggest_name", "name1", 3));
  writer.addDocument(doc);
  ...
  // open an nrt reader for the directory
  DirectoryReader reader = DirectoryReader.open(writer, false);
  // SuggestIndexSearcher is a thin wrapper over IndexSearcher
  // queryAnalyzer will be used to analyze the query string
  SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, 
queryAnalyzer);
  
  // suggest 10 documents for "titl" on "suggest_title" field
  TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
{code}

h4. Indexing
Index analyzer set through *IndexWriterConfig*
{code:java}
SuggestField(String name, String value, long weight) 
{code}

h4. Query
Query analyzer set through *SuggestIndexSearcher*.
Hits are collected in descending order of the suggestion's weight 
{code:java}
// full options for TopSuggestDocs (TopDocs)
TopSuggestDocs suggest(String field, CharSequence key, int num, Filter filter)

// full options for Collector
// note: only collects does not score
void suggest(String field, CharSequence key, int maxNumPerLeaf, Filter filter, 
Collector collector)
{code}

h4. Analyzer
*CompletionAnalyzer* can instead be used to wrap another analyzer to tune 
suggest-field-only parameters. 
{code:java}
CompletionAnalyzer(Analyzer analyzer, boolean preserveSep, boolean 
preservePositionIncrements, int maxGraphExpansions)
{code}

  was:
The idea is to index documents with one or more *SuggestField*(s) and be able 
to suggest documents with a *SuggestField* value that matches a given key.
A SuggestField can be assigned a numeric weight to be used to score the 
suggestion at query time.

Document suggestion can be done on an indexed *SuggestField*. The document 
suggester can filter out deleted documents in near real-time. The suggester can 
filter out documents based on a Filter (note: may change to a non-scoring 
query?) at query time.

A custom postings format (CompletionPostingsFormat) is used to index 
SuggestField(s) and perform document suggestions.

h4. Usage
{code:java}
  // hook up custom postings format
  // indexAnalyzer for SuggestField
  Analyzer analyzer = ...
  IndexWriterConfig config = new IndexWriterConfig(analyzer);
  Codec codec = new Lucene50Codec() {
@Override
public PostingsFormat getPostingsFormatForField(String field) {
  if (isSuggestField(field)) {
return new 
CompletionPostingsFormat(super.getPostingsFormatForField(field));
  }
  return super.getPostingsFormatForField(field);
}
  };
  config.setCodec(codec);
  IndexWriter writer = new IndexWriter(dir, config);
  // index some documents with suggestions
  Document doc = new Document();
  doc.add(new SuggestField("suggest_title", "title1", 2));
  doc.add(new SuggestField("suggest_name", "name1", 3));
  writer.addDocument(doc);
  ...
  // open an nrt reader for the directory
  DirectoryReader reader = DirectoryReader.open(writer, false);
  // SuggestIndexSearcher is a thin wrapper over IndexSearcher
  // queryAnalyzer will be used to analyze the query string
  SuggestIndexSearcher indexSearcher = new SuggestIndexSearcher(reader, 
queryAnalyzer);
  
  // suggest 10 documents for "titl" on "suggest_title" field
  TopSuggestDocs suggest = indexSearcher.suggest("suggest_title", "titl", 10);
{code}

h4. Indexing
Index analyzer set through *IndexWriterConfig*
{code:java}
SuggestField(String name, String va

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2778 - Still Failing

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2778/

All tests passed

Build Log:
[...truncated 8540 lines...]
[javac] Compiling 756 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac]   required: Map
[javac]   found:Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:529:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:477:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:191:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:509:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:462:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:375:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:520:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1882:
 Compile failed; see the compiler error output for details.

Total time: 24 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2431
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 0.18 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362730#comment-14362730
 ] 

ASF subversion and git services commented on SOLR-7214:
---

Commit 1666856 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1666856 ]

SOLR-7214: JSON Facet API

> JSON Facet API
> --
>
> Key: SOLR-7214
> URL: https://issues.apache.org/jira/browse/SOLR-7214
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
> Attachments: SOLR-7214.patch
>
>
> Overview is here: http://heliosearch.org/json-facet-api/
> The structured nature of nested sub-facets is more naturally expressed in a 
> nested structure like JSON than in the flat structure that normal query 
> parameters provide.
> Goals:
> - First class JSON support
> - Easier programmatic construction of complex nested facet commands
> - Support a much more canonical response format that is easier for clients to 
> parse
> - First class analytics support
> - Support a cleaner way to do distributed faceting
> - Support better integration with other search features






[jira] [Commented] (LUCENE-6345) null check all term/fields in queries

2015-03-15 Thread Lee Hinman (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362714#comment-14362714
 ] 

Lee Hinman commented on LUCENE-6345:


I'm going to work on this.

Looking through the code, I see a mixture of:

{noformat}
Term t = Objects.requireNonNull(term);
{noformat}

As well as:

{noformat}
if (term == null) {
  throw new IllegalArgumentException("Term must not be null");
}
{noformat}

Any particular preference here? I think an explicit message is nicer, but I can 
go either way. If no one has an opinion about it, I'll pick one and go with it :)
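For comparison, the two styles mentioned above can be combined: Objects.requireNonNull accepts an explicit message, though it throws NullPointerException rather than IllegalArgumentException, which is part of the trade-off being discussed. A small sketch using a made-up query class (not an actual Lucene query):

{code:java}
import java.util.Objects;

final class ExampleTermQuery {
  private final String field;
  private final String term;

  ExampleTermQuery(String field, String term) {
    // one-liner with an explicit message; throws NullPointerException on null
    this.field = Objects.requireNonNull(field, "field must not be null");
    // explicit check; throws IllegalArgumentException instead
    if (term == null) {
      throw new IllegalArgumentException("term must not be null");
    }
    this.term = term;
  }
}
{code}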

> null check all term/fields in queries
> -
>
> Key: LUCENE-6345
> URL: https://issues.apache.org/jira/browse/LUCENE-6345
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> See the mail thread "is this lucene 4.1.0 bug in PerFieldPostingsFormat".
> If anyone seriously thinks adding a null check to ctor will cause measurable 
> slowdown to things like regexp or wildcards, they should have their head 
> examined.
> All queries should just check this crap in ctor and throw exceptions if 
> parameters are invalid.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b47) - Build # 11820 - Still Failing!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11820/
Java: 64bit/jdk1.9.0-ea-b47 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8615 lines...]
[javac] Compiling 756 source files to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types: Map cannot be converted to 
Map
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:529: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:477: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:191: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:509: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:462: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:375: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:520: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1882: 
Compile failed; see the compiler error output for details.

Total time: 44 minutes 54 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b47 
-XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 11819 - Still Failing!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11819/
Java: 32bit/jdk1.8.0_40-ea-b22 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8633 lines...]
[javac] Compiling 756 source files to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types: Map cannot be converted to 
Map
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:529: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:477: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:191: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:509: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:462: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:375: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:520: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1882: 
Compile failed; see the compiler error output for details.

Total time: 65 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_40-ea-b22 -server 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2777 - Still Failing

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2777/

All tests passed

Build Log:
[...truncated 8550 lines...]
[javac] Compiling 756 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac]   required: Map
[javac]   found:Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:529:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:477:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:191:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:509:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:462:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:375:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:520:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1882:
 Compile failed; see the compiler error output for details.

Total time: 25 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2431
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 0.21 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




Re: SegmentInfo.maxDoc() vs getDocCount()

2015-03-15 Thread Michael McCandless
On Sun, Mar 15, 2015 at 5:03 PM, Nicholas Knize  wrote:
> Had to make a minor change in ES to support this refactor (no biggie).
> Having fresh eyes and curiosity, I thought I'd ask what the reason is behind
> this variable name, 'maxDoc', if deletes are not taken into consideration?
> It's a bit of a confusing variable and method name.

You are right: maxDoc is really quite a ridiculous (yet historical,
legacy, been-in-Lucene-forever) name.  Lucene has historically used
"numDocs" to mean the number of live docs in the index, and "maxDoc" to mean
the number of live + deleted (but not yet merged away) docs.
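In reader terms the distinction looks like this; a minimal sketch against the 5.x IndexReader API (the index path taken from the command line is just for illustration):

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

public class DocCountsDemo {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      System.out.println("numDocs = " + reader.numDocs());         // live docs only
      System.out.println("maxDoc  = " + reader.maxDoc());          // live + deleted-but-not-yet-merged-away
      System.out.println("deleted = " + reader.numDeletedDocs());  // maxDoc - numDocs
    }
  }
}
{code}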

I'm personally API blind to both these names :)

Maybe we should fix it?

Mike McCandless

http://blog.mikemccandless.com




SegmentInfo.maxDoc() vs getDocCount()

2015-03-15 Thread Nicholas Knize
Had to make a minor change in ES to support this refactor (no biggie).
Having fresh eyes and curiosity, I thought I'd ask what the reason is behind
this variable name, 'maxDoc', if deletes are not taken into consideration?
It's a bit of a confusing variable and method name.


[jira] [Updated] (SOLR-7245) Temporary ZK election or connection loss should not stall indexing due to LIR

2015-03-15 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar updated SOLR-7245:

Attachment: SOLR-7245.patch

I need to think this through a bit more, but here's a starting patch which 
tries to have the update path make a best effort at updating ZK without 
stalling if disconnected. Comments welcome.

> Temporary ZK election or connection loss should not stall indexing due to LIR
> -
>
> Key: SOLR-7245
> URL: https://issues.apache.org/jira/browse/SOLR-7245
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Ramkumar Aiyengar
>Assignee: Ramkumar Aiyengar
>Priority: Minor
> Attachments: SOLR-7245.patch
>
>
> If there's a ZK election or connection loss, and the leader is unable to 
> reach a replica, it currently stalls until the ZK connection is 
> re-established, due to the LIR process. This shouldn't happen, and in some ways 
> regresses the work done in SOLR-5577.
> I will try to get to this, but if someone races me to it, feel free to.






[jira] [Assigned] (SOLR-7245) Temporary ZK election or connection loss should not stall indexing due to LIR

2015-03-15 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar reassigned SOLR-7245:
---

Assignee: Ramkumar Aiyengar

> Temporary ZK election or connection loss should not stall indexing due to LIR
> -
>
> Key: SOLR-7245
> URL: https://issues.apache.org/jira/browse/SOLR-7245
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Ramkumar Aiyengar
>Assignee: Ramkumar Aiyengar
>Priority: Minor
>
> If there's a ZK election or connection loss, and the leader is unable to 
> reach a replica, it currently stalls until the ZK connection is 
> re-established, due to the LIR process. This shouldn't happen, and in some ways 
> regresses the work done in SOLR-5577.
> I will try to get to this, but if someone races me to it, feel free to.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2012 - Still Failing!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2012/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8532 lines...]
[javac] Compiling 756 source files to 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/classes/java
[javac] 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac]   required: Map
[javac]   found:Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:529: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:477: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:61: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/extra-targets.xml:39: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build.xml:191: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/common-build.xml:509: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/common-build.xml:462: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/common-build.xml:375: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:520: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:1882: 
Compile failed; see the compiler error output for details.

Total time: 47 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0 
-XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362604#comment-14362604
 ] 

Uwe Schindler commented on SOLR-7217:
-

The patch looks almost fine, although I am not so happy about having the detection 
stuff inside the formData parser. Maybe the content-type detection should be 
part of the standard parser? I think we could move the detection part into the "if 
(isFormdata()) {...}" branch. That is also the place where we decide if it's 
"curl", so the detection should only happen there.

> Auto-detect HTTP body content-type
> --
>
> Key: SOLR-7217
> URL: https://issues.apache.org/jira/browse/SOLR-7217
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-7217.patch
>
>
> It's nice to be able to leave off the content-type specification when 
> hand-crafting a request (e.g. from the command line) and in our documentation 
> examples.
> For example:
> {code}
> curl http://localhost:8983/solr/query -d '
> {
>   query:"hero"
> }'
> {code}
> Note the missing 
> {code}
> -H 'Content-type:application/json'
> {code}
> that would otherwise be needed everywhere
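A minimal sketch of the kind of detection being discussed, independent of where it ends up living in Solr's request parsing: peek at the first non-whitespace byte of the body and guess the type. The class and method names are made up for illustration, and a real parser would buffer or push back the inspected bytes.

{code:java}
import java.io.IOException;
import java.io.InputStream;

public class BodyContentTypeSniffer {
  /** Guesses a content type from the first non-whitespace byte of the request body. */
  public static String guess(InputStream body) throws IOException {
    int b;
    while ((b = body.read()) != -1 && Character.isWhitespace(b)) {
      // skip leading whitespace
    }
    if (b == '{' || b == '[') {
      return "application/json";
    }
    if (b == '<') {
      return "text/xml";
    }
    // anything else (including an empty body) falls back to form data
    return "application/x-www-form-urlencoded";
  }
}
{code}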






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b22) - Build # 11818 - Still Failing!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11818/
Java: 64bit/jdk1.8.0_40-ea-b22 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 8639 lines...]
[javac] Compiling 756 source files to 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
[javac] 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types: Map cannot be converted to 
Map
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:529: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:477: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:191: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:509: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:462: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:375: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:520: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1882: 
Compile failed; see the compiler error output for details.

Total time: 38 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0_40-ea-b22 
-XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Solr-Artifacts-5.x - Build # 769 - Failure

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/769/

No tests ran.

Build Log:
[...truncated 12330 lines...]
[javac] Compiling 756 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build/solr-core/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac]   required: Map
[javac]   found:Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:453:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/common-build.xml:388:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:520:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/lucene/common-build.xml:1882:
 Compile failed; see the compiler error output for details.

Total time: 1 minute 58 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Solr-Artifacts-5.x #768
Archived 3 artifacts
Archive block size is 32768
Received 0 blocks and 37563361 bytes
Compression is 0.0%
Took 40 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2776 - Still Failing

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2776/

All tests passed

Build Log:
[...truncated 8559 lines...]
[javac] Compiling 756 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/java
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/cloud/ZkController.java:173:
 error: incompatible types
[javac]   private final Map electionContexts = 
Collections.synchronizedMap(new HashMap<>());
[javac] 
   ^
[javac]   required: Map
[javac]   found:Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:529:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:477:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:191:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:509:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:462:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:375:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:520:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1882:
 Compile failed; see the compiler error output for details.

Total time: 19 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2431
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 23 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 787 - Still Failing

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/787/

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:38317/c8n_1x3_commits_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:38317/c8n_1x3_commits_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([244FE66A6BD041BA:AC1BD9B0C52C2C42]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apa

[jira] [Commented] (LUCENE-6161) Applying deletes is sometimes dog slow

2015-03-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362553#comment-14362553
 ] 

Michael McCandless commented on LUCENE-6161:


Thanks for reporting this impressive speedup!

The attached log is hard to read because the timestamps are only "computer" 
readable (epoch seconds) ...
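
In case it helps, here is a throwaway filter for making such a log readable by eye. 
It is only a sketch and assumes an "epochSeconds message" layout per line (the actual 
format of the attached log is not shown here), so treat the parsing as an assumption:

{code:java}
// Hypothetical helper (not part of any patch): rewrite leading epoch-second
// timestamps as ISO-8601 instants so the log can be read by eye.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.time.Instant;

public class ReadableTimestamps {
  public static void main(String[] args) throws Exception {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    for (String line; (line = in.readLine()) != null; ) {
      int space = line.indexOf(' ');
      if (space > 0) {
        try {
          double seconds = Double.parseDouble(line.substring(0, space));
          // Convert (possibly fractional) epoch seconds to an ISO-8601 instant.
          Instant ts = Instant.ofEpochMilli((long) (seconds * 1000));
          System.out.println(ts + " " + line.substring(space + 1));
          continue;
        } catch (NumberFormatException ignored) {
          // Not a timestamp; fall through and echo the line unchanged.
        }
      }
      System.out.println(line);
    }
  }
}
{code}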

> Applying deletes is sometimes dog slow
> --
>
> Key: LUCENE-6161
> URL: https://issues.apache.org/jira/browse/LUCENE-6161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6161.patch, LUCENE-6161.patch, LUCENE-6161.patch, 
> LUCENE-6161.patch, LUCENE-6161.patch
>
>
> I hit this while testing various use cases for LUCENE-6119 (adding 
> auto-throttle to ConcurrentMergeScheduler).
> When I tested "always call updateDocument" (each add buffers a delete term), 
> with many indexing threads, opening an NRT reader once per second (forcing 
> all deleted terms to be applied), I see that 
> BufferedUpdatesStream.applyDeletes sometimes seems to take a long time, 
> e.g.:
> {noformat}
> BD 0 [2015-01-04 09:31:12.597; Lucene Merge Thread #69]: applyDeletes took 
> 339 msec for 10 segments, 117 deleted docs, 607333 visited terms
> BD 0 [2015-01-04 09:31:18.148; Thread-4]: applyDeletes took 5533 msec for 62 
> segments, 10989 deleted docs, 8517225 visited terms
> BD 0 [2015-01-04 09:31:21.463; Lucene Merge Thread #71]: applyDeletes took 
> 1065 msec for 10 segments, 470 deleted docs, 1825649 visited terms
> BD 0 [2015-01-04 09:31:26.301; Thread-5]: applyDeletes took 4835 msec for 61 
> segments, 14676 deleted docs, 9649860 visited terms
> BD 0 [2015-01-04 09:31:35.572; Thread-11]: applyDeletes took 6073 msec for 72 
> segments, 13835 deleted docs, 11865319 visited terms
> BD 0 [2015-01-04 09:31:37.604; Lucene Merge Thread #75]: applyDeletes took 
> 251 msec for 10 segments, 58 deleted docs, 240721 visited terms
> BD 0 [2015-01-04 09:31:44.641; Thread-11]: applyDeletes took 5956 msec for 64 
> segments, 15109 deleted docs, 10599034 visited terms
> BD 0 [2015-01-04 09:31:47.814; Lucene Merge Thread #77]: applyDeletes took 
> 396 msec for 10 segments, 137 deleted docs, 719914 visit
> {noformat}
> What this means is even though I want an NRT reader every second, often I 
> don't get one for up to ~7 or more seconds.
> This is on an SSD, machine has 48 GB RAM, heap size is only 2 GB.  12 
> indexing threads.
> As hideously complex as this code is, I think there are some inefficiencies, 
> but fixing them could be hard / make code even hairier ...
> Also, this code is mega-locked: holds IW's lock, holds BD's lock.  It blocks 
> things like merges kicking off or finishing...
> E.g., we pull the MergedIterator many times on the same set of sub-iterators. 
>  Maybe we can create the sorted terms up front and reuse that?
> Maybe we should go "term stride" (one term visits all N segments) not 
> "segment stride" (visit each segment, iterating all deleted terms for it).  
> Just iterating the terms to be deleted takes a sizable part of the time, and 
> we now do that once for every segment in the index.
> Also, the "isUnique" bit in LUCENE-6005 should help here, since if we know 
> the field is unique, we can stop seekExact once we found a segment that has 
> the deleted term, we can maybe pass false for removeDuplicates to 
> MergedIterator...
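
A very rough sketch of the "term stride" idea from the description above, against a 
recent Lucene API. The helper and its signature are invented for illustration (the 
real BufferedUpdatesStream also has to deal with delete generations, live docs and 
locking), so this only shows the shape of the loop, not the patch:

{code:java}
// Illustration only: walk the sorted deleted terms once ("term stride") and probe
// every segment per term, instead of replaying all terms for each segment.
import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.FixedBitSet;

class TermStrideDeletes {
  /** deleteTerms must be sorted; deletes.get(i) collects deleted docIDs for readers.get(i). */
  static void applyTermStride(String field,
                              Iterable<BytesRef> deleteTerms,
                              List<LeafReader> readers,
                              List<FixedBitSet> deletes) throws IOException {
    // One reusable TermsEnum per segment, created once for all terms:
    TermsEnum[] termsEnums = new TermsEnum[readers.size()];
    for (int i = 0; i < readers.size(); i++) {
      Terms terms = readers.get(i).terms(field);
      termsEnums[i] = terms == null ? null : terms.iterator();
    }
    for (BytesRef term : deleteTerms) {
      for (int i = 0; i < termsEnums.length; i++) {
        TermsEnum te = termsEnums[i];
        if (te != null && te.seekExact(term)) {
          PostingsEnum postings = te.postings(null, PostingsEnum.NONE);
          for (int doc = postings.nextDoc();
               doc != DocIdSetIterator.NO_MORE_DOCS;
               doc = postings.nextDoc()) {
            deletes.get(i).set(doc);  // liveDocs are ignored in this sketch
          }
          // If the field were known to be unique (LUCENE-6005), we could stop
          // probing further segments for this term here.
        }
      }
    }
  }
}
{code}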



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



2B tests

2015-03-15 Thread Michael McCandless
I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
this is the command I run, for future reference:

  ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
-Dtests.workDir=/p/tmp

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6360) TermsQuery should rewrite to a ConstantScoreQuery over a BooleanQuery when there are few terms

2015-03-15 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362542#comment-14362542
 ] 

Paul Elschot commented on LUCENE-6360:
--

I wonder whether a compressing DocIdSet could also help here.
EliasFanoDocIdSet uses an internal threshold for the number of matching docs, 
and above that threshold it changes itself to a bitset.
The tradeoff for this is not directly related to skipping because building the 
set requires all matching docs.
But a small compressing docidset skips/advances faster than a bitset.

Some of this can be estimated in advance by the doc frequencies of the terms 
involved.

To figure out the threshold(s), real-life test cases would be helpful.
Do you have some in mind already?



> TermsQuery should rewrite to a ConstantScoreQuery over a BooleanQuery when 
> there are few terms
> --
>
> Key: LUCENE-6360
> URL: https://issues.apache.org/jira/browse/LUCENE-6360
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
>
> TermsQuery helps when there are lot of terms from which you would like to 
> compute the union, but it is a bit harmful when you have few terms since it 
> cannot really skip: it always consumes all documents matching the underlying 
> terms.
> It would certainly help to rewrite this query to a ConstantScoreQuery over a 
> BooleanQuery when there are few terms in order to have actual skip support.
> As usual the hard part is probably to figure out the threshold. :)
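
For what it's worth, the rewrite described above is the easy half. A minimal sketch 
using the recent BooleanQuery.Builder API (the FEW_TERMS_THRESHOLD constant is a 
made-up placeholder; picking its value is exactly the open question):

{code:java}
// Sketch: for a small number of terms, a ConstantScoreQuery over a BooleanQuery of
// TermQuery clauses keeps real skipping support; otherwise keep the TermsQuery.
import java.util.Collection;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class FewTermsRewrite {
  // Placeholder threshold; the hard part is choosing this number.
  static final int FEW_TERMS_THRESHOLD = 16;

  /** Returns a skipping-friendly rewrite when there are few terms, else null. */
  static Query rewriteIfFewTerms(Collection<Term> terms) {
    if (terms.size() > FEW_TERMS_THRESHOLD) {
      return null;  // too many terms: stay with the filter-style TermsQuery
    }
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    for (Term t : terms) {
      builder.add(new TermQuery(t), BooleanClause.Occur.SHOULD);
    }
    return new ConstantScoreQuery(builder.build());
  }
}
{code}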



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6924) TestSolrConfigHandlerCloud fails frequently.

2015-03-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362541#comment-14362541
 ] 

Yonik Seeley commented on SOLR-6924:


Just hit one of these failures myself and thought it was my code... ugh.

> TestSolrConfigHandlerCloud fails frequently.
> 
>
> Key: SOLR-6924
> URL: https://issues.apache.org/jira/browse/SOLR-6924
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>
> I see this fail all the time. Usually something like:
> java.lang.AssertionError: Could not get expected value  P val for path 
> [response, params, y, p] full output {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_76) - Build # 11817 - Failure!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11817/
Java: 32bit/jdk1.7.0_76 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCryptoKeys.test

Error Message:
Could not get expected value  
'nKmpxWH7XBlGuf51wEyIabN+HrkmFa/2sKJFIC/SeCKa1+txQxgO8vuekTGXymksq9b3K8Hs2+KsK3c9zTYORA=='
 for path 'overlay/runtimeLib/signedjar/sig' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "overlay":{ 
"znodeVersion":2, "requestHandler":{"/runtime":{ "name":"/runtime", 
"class":"org.apache.solr.core.RuntimeLibReqHandler", 
"runtimeLib":true}}, "runtimeLib":{"signedjar":{ 
"name":"signedjar", "version":1, 
"sig":"QKqHtd37QN02iMW9UEgvAO9g9qOOuG5vEBNkbUsN7noc2hhXKic/ABFIOYJA9PKw61mNX2EmNFXOcO3WClYdSw=="

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'nKmpxWH7XBlGuf51wEyIabN+HrkmFa/2sKJFIC/SeCKa1+txQxgO8vuekTGXymksq9b3K8Hs2+KsK3c9zTYORA=='
 for path 'overlay/runtimeLib/signedjar/sig' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":2,
"requestHandler":{"/runtime":{
"name":"/runtime",
"class":"org.apache.solr.core.RuntimeLibReqHandler",
"runtimeLib":true}},
"runtimeLib":{"signedjar":{
"name":"signedjar",
"version":1,

"sig":"QKqHtd37QN02iMW9UEgvAO9g9qOOuG5vEBNkbUsN7noc2hhXKic/ABFIOYJA9PKw61mNX2EmNFXOcO3WClYdSw=="
at 
__randomizedtesting.SeedInfo.seed([14303402B4110E74:9C640BD81AED638C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:399)
at org.apache.solr.cloud.TestCryptoKeys.test(TestCryptoKeys.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesO

[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-03-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362511#comment-14362511
 ] 

Shalin Shekhar Mangar commented on SOLR-7191:
-

This is a very broad issue so there are likely to be multiple problems and 
their solutions. It's probably best to start splitting out individual changes 
into their own sub-tasks so that each can be reviewed and committed 
individually.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-15 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7199:
-
Attachment: (was: SOLR-7199.patch)

> core loading should succeed irrespective of errors in loading certain 
> components
> 
>
> Key: SOLR-7199
> URL: https://issues.apache.org/jira/browse/SOLR-7199
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
> Attachments: SOLR-7199.patch
>
>
> If a certain component has an error, the core fails to load completely. 
> This was fine in standalone mode: we could always restart the node after 
> making corrections. In SolrCloud, the collection is totally gone and there is 
> no way to resurrect it using any commands. If the core is loaded, I can at 
> least use config commands to correct those mistakes.
> In short, Solr should make a best effort to keep the core available 
> with whatever components are available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-15 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7199:
-
Attachment: SOLR-7199.patch

> core loading should succeed irrespective of errors in loading certain 
> components
> 
>
> Key: SOLR-7199
> URL: https://issues.apache.org/jira/browse/SOLR-7199
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
> Attachments: SOLR-7199.patch
>
>
> If a certain component has an error, the core fails to load completely. 
> This was fine in standalone mode: we could always restart the node after 
> making corrections. In SolrCloud, the collection is totally gone and there is 
> no way to resurrect it using any commands. If the core is loaded, I can at 
> least use config commands to correct those mistakes.
> In short, Solr should make a best effort to keep the core available 
> with whatever components are available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362505#comment-14362505
 ] 

Shalin Shekhar Mangar commented on SOLR-7247:
-

Shard splitting does work with a custom route field. See SOLR-5246

> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection.
> This suggests that the intended behaviour is to hash documents on an 
> alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc or 
> params) and it is used in different ways by different functionality.
> In my opinion it should have overloads instead of weak internal logic: one 
> overload with "doc" and "collection", and another one with "id", "params" and 
> "collection".
> In any case, if "\_route_" is not available via "params", then "collection" 
> should be mandatory, and when a routeField is configured, "doc" should be 
> mandatory as well.
> This will break index splitting, but it will preserve the coherence of the data.
> With a routeField configured, I noticed that DeleteCommand is broken (it 
> passes only "id" and "params" to sliceHash), as is SolrIndexSplitter (it 
> passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case either the entire document should be passed during splitting (the 
> "doc" parameter) or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362501#comment-14362501
 ] 

Noble Paul commented on SOLR-7247:
--

[~yo...@apache.org] Do you mean to say that it is impossible, or very 
difficult, to make routing work on an alternate field?

If it is broken, we need to fix it. Other data stores let you route on 
secondary fields.

> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection.
> This suggests that the intended behaviour is to hash documents on an 
> alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc or 
> params) and it is used in different ways by different functionality.
> In my opinion it should have overloads instead of weak internal logic: one 
> overload with "doc" and "collection", and another one with "id", "params" and 
> "collection".
> In any case, if "\_route_" is not available via "params", then "collection" 
> should be mandatory, and when a routeField is configured, "doc" should be 
> mandatory as well.
> This will break index splitting, but it will preserve the coherence of the data.
> With a routeField configured, I noticed that DeleteCommand is broken (it 
> passes only "id" and "params" to sliceHash), as is SolrIndexSplitter (it 
> passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case either the entire document should be passed during splitting (the 
> "doc" parameter) or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7223) Tooltips admin panel get switched midway edismax

2015-03-15 Thread Jelle Janssens (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362498#comment-14362498
 ] 

Jelle Janssens commented on SOLR-7223:
--

Thanks! :)

> Tooltips admin panel get switched midway edismax
> 
>
> Key: SOLR-7223
> URL: https://issues.apache.org/jira/browse/SOLR-7223
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.10.1
>Reporter: Jelle Janssens
>Priority: Trivial
> Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png
>
>
> When hovering over the tooltips in SOLR admin, in the edismax section, the 
> tooltip gets switched from being set on the input box to the label. This 
> happens between "bf" and "uf".  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-03-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-7191:
---

Assignee: Shalin Shekhar Mangar

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-7109.
-
Resolution: Fixed
  Assignee: Shalin Shekhar Mangar

Thanks everyone!

> Indexing threads stuck during network partition can put leader into down state
> --
>
> Key: SOLR-7109
> URL: https://issues.apache.org/jira/browse/SOLR-7109
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3, 5.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7109.patch, SOLR-7109.patch
>
>
> I found this recently while running some Jepsen tests. I found that some 
> threads get stuck on zk operations for a long time in 
> ZkController.updateLeaderInitiatedRecoveryState method and when they wake up 
> they go ahead with setting the LIR state to down. But in the mean time, new 
> leader has been elected and sometimes you'd get into a state where the leader 
> itself is put into recovery causing the shard to reject all writes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362494#comment-14362494
 ] 

ASF subversion and git services commented on SOLR-7109:
---

Commit 1666826 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666826 ]

SOLR-7109: Indexing threads stuck during network partition can put leader into 
down state

> Indexing threads stuck during network partition can put leader into down state
> --
>
> Key: SOLR-7109
> URL: https://issues.apache.org/jira/browse/SOLR-7109
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3, 5.0
>Reporter: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7109.patch, SOLR-7109.patch
>
>
> I found this recently while running some Jepsen tests. I found that some 
> threads get stuck on zk operations for a long time in 
> ZkController.updateLeaderInitiatedRecoveryState method and when they wake up 
> they go ahead with setting the LIR state to down. But in the mean time, new 
> leader has been elected and sometimes you'd get into a state where the leader 
> itself is put into recovery causing the shard to reject all writes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362493#comment-14362493
 ] 

ASF subversion and git services commented on SOLR-7109:
---

Commit 1666825 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1666825 ]

SOLR-7109: Indexing threads stuck during network partition can put leader into 
down state

> Indexing threads stuck during network partition can put leader into down state
> --
>
> Key: SOLR-7109
> URL: https://issues.apache.org/jira/browse/SOLR-7109
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3, 5.0
>Reporter: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7109.patch, SOLR-7109.patch
>
>
> I found this recently while running some Jepsen tests. I found that some 
> threads get stuck on zk operations for a long time in 
> ZkController.updateLeaderInitiatedRecoveryState method and when they wake up 
> they go ahead with setting the LIR state to down. But in the mean time, new 
> leader has been elected and sometimes you'd get into a state where the leader 
> itself is put into recovery causing the shard to reject all writes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-15 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7199:
-
Attachment: (was: SOLR-7199.patch)

> core loading should succeed irrespective of errors in loading certain 
> components
> 
>
> Key: SOLR-7199
> URL: https://issues.apache.org/jira/browse/SOLR-7199
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
> Attachments: SOLR-7199.patch
>
>
> If a certain component has an error, the core fails to load completely. 
> This was fine in standalone mode: we could always restart the node after 
> making corrections. In SolrCloud, the collection is totally gone and there is 
> no way to resurrect it using any commands. If the core is loaded, I can at 
> least use config commands to correct those mistakes.
> In short, Solr should make a best effort to keep the core available 
> with whatever components are available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5994) Add Jetty configuration to serve JavaDocs

2015-03-15 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362486#comment-14362486
 ] 

Alexandre Rafalovitch commented on SOLR-5994:
-

Yes, one of the project ideas I have is to make a downloadable collection that 
includes version-specific, always-compressed javadocs that are better than what 
is already available, has a full-blown Solr search index, and so on.

Part of the technology for this has already been tested for 
http://www.solr-start.com/javadoc/solr-lucene/index.html (notice the search 
box), but it needs to become downloadable and possibly take advantage of 
recent Velocity improvements in Solr for the interface design.


> Add Jetty configuration to serve JavaDocs 
> --
>
> Key: SOLR-5994
> URL: https://issues.apache.org/jira/browse/SOLR-5994
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, web gui
>Affects Versions: 4.7
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: javadoc
> Attachments: SOLR-5994.patch, javadoc-jetty-context.xml
>
>
> It's possible to add another context file for Jetty that will serve Javadocs 
> from the server.
> This avoids some Javascript issues, makes the documentation more visible, and 
> opens the door for better integration in the future.
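
Not the attached javadoc-jetty-context.xml (which is the context-file approach the 
issue describes), but a minimal embedded-Jetty sketch of the same idea; the /javadoc 
context path, port and "docs" directory below are assumptions for illustration:

{code:java}
// Minimal Jetty 9-style sketch: serve a static javadoc directory under /javadoc.
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.ContextHandler;
import org.eclipse.jetty.server.handler.ResourceHandler;

public class ServeJavadocs {
  public static void main(String[] args) throws Exception {
    ResourceHandler resources = new ResourceHandler();
    resources.setDirectoriesListed(true);
    resources.setResourceBase("docs");   // assumed location of the built javadocs

    ContextHandler context = new ContextHandler("/javadoc");
    context.setHandler(resources);

    Server server = new Server(8080);    // any free port works
    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}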



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7223) Tooltips admin panel get switched midway edismax

2015-03-15 Thread Thanatos (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thanatos updated SOLR-7223:
---
Attachment: SOLR-7223.patch

True, that is a bit redundant but it might accommodate all users.
Updated the patch.

> Tooltips admin panel get switched midway edismax
> 
>
> Key: SOLR-7223
> URL: https://issues.apache.org/jira/browse/SOLR-7223
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.10.1
>Reporter: Jelle Janssens
>Priority: Trivial
> Attachments: SOLR-7223.patch, SOLR-7223.patch, SOLR-7223.png
>
>
> When hovering over the tooltips in SOLR admin, in the edismax section, the 
> tooltip gets switched from being set on the input box to the label. This 
> happens between "bf" and "uf".  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-15 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7199:
-
Attachment: SOLR-7199.patch

Even if a component fails to load, core loading will succeed, but 
requests involving that component will fail.
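
As a generic illustration of that fail-soft pattern (the class and method names below 
are invented; this is not the actual patch, which works on Solr's own plugin 
registries): remember the init failure instead of failing the whole core, and only 
surface it when the broken component is actually requested.

{code:java}
// Generic sketch of fail-soft component loading: core startup records the failure
// and keeps going; using the failed component later rethrows the original cause.
import java.util.HashMap;
import java.util.Map;

class FailSoftRegistry<T> {
  interface Factory<C> { C create() throws Exception; }

  private final Map<String, T> components = new HashMap<>();
  private final Map<String, Exception> initFailures = new HashMap<>();

  void register(String name, Factory<T> factory) {
    try {
      components.put(name, factory.create());
    } catch (Exception e) {
      initFailures.put(name, e);  // do not abort loading; remember why it failed
    }
  }

  T get(String name) {
    Exception failure = initFailures.get(name);
    if (failure != null) {
      throw new IllegalStateException("Component '" + name + "' failed to initialize", failure);
    }
    return components.get(name);
  }
}
{code}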

> core loading should succeed irrespective of errors in loading certain 
> components
> 
>
> Key: SOLR-7199
> URL: https://issues.apache.org/jira/browse/SOLR-7199
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
> Attachments: SOLR-7199.patch, SOLR-7199.patch
>
>
> If a certain component has an error, the core fails to load completely. 
> This was fine in standalone mode: we could always restart the node after 
> making corrections. In SolrCloud, the collection is totally gone and there is 
> no way to resurrect it using any commands. If the core is loaded, I can at 
> least use config commands to correct those mistakes.
> In short, Solr should make a best effort to keep the core available 
> with whatever components are available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2775 - Still Failing

2015-03-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2775/

4 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:15621/_/t/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:15621/_/t/collection1
at 
__randomizedtesting.SeedInfo.seed([31DF846F78339C1:8B49C79C597F5439]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:594)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:556)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:604)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:565)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.Statem

[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362477#comment-14362477
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1666817 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666817 ]

SOLR-6770: reformat - fix bad indentation and funky formatting

> Add/edit param sets and use them in Requests
> 
>
> Key: SOLR-6770
> URL: https://issues.apache.org/jira/browse/SOLR-6770
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch
>
>
> Make it possible to define paramsets and use them directly in requests
> example
> {code}
> curl http://localhost:8983/solr/collection1/config/params -H 
> 'Content-type:application/json'  -d '{
> "set" : {"x": {
>   "a":"A val",
>   "b": "B val"}
>},
> "set" : {"y": {
>"x":"X val",
>"Y": "Y val"}
>},
> "update" : {"y": {
>"x":"X val modified"}
>},
> "delete" : "z"
> }'
> #do a GET to view all the configured params
> curl http://localhost:8983/solr/collection1/config/params
> #or  GET with a specific name to get only one set of params
> curl http://localhost:8983/solr/collection1/config/params/x
> {code}
> This data will be stored in conf/params.json
> This is used at request time; adding/editing params will not result in a core 
> reload and will have no impact on performance. 
> example usage http://localhost/solr/collection/select?useParams=x,y
> or it can be directly configured with a request handler as follows
> {code}
> 
> {code}
>  {{useParams}} specified in request overrides the one specified in 
> {{requestHandler}}
> A more realistic example
> {code}
> curl http://localhost:8983/solr/collection1/config/params -H 
> 'Content-type:application/json'  -d '{
> "set":{"query":{
> "defType":"edismax",
> "q.alt":"*:*",
> "rows":10,
> "fl":"*,score"  },
>   "facets":{
> "facet":"on",
> "facet.mincount": 1
>   },
>  "velocity":{
>"wt": "velocity",
>"v.template":"browse",
>"v.layout": "layout"
>  }
> }
> }
> {code}
>  and use all of them directly in a requesthandler
> {code:xml}
>useParams="query,facets,velocity,browse"/>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6770) Add/edit param sets and use them in Requests

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362474#comment-14362474
 ] 

ASF subversion and git services commented on SOLR-6770:
---

Commit 1666816 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1666816 ]

SOLR-6770: reformat - fix bad indentation and funky formatting

> Add/edit param sets and use them in Requests
> 
>
> Key: SOLR-6770
> URL: https://issues.apache.org/jira/browse/SOLR-6770
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6770.patch, SOLR-6770.patch, SOLR-6770.patch
>
>
> Make it possible to define paramsets and use them directly in requests
> example
> {code}
> curl http://localhost:8983/solr/collection1/config/params -H 
> 'Content-type:application/json'  -d '{
> "set" : {"x": {
>   "a":"A val",
>   "b": "B val"}
>},
> "set" : {"y": {
>"x":"X val",
>"Y": "Y val"}
>},
> "update" : {"y": {
>"x":"X val modified"}
>},
> "delete" : "z"
> }'
> #do a GET to view all the configured params
> curl http://localhost:8983/solr/collection1/config/params
> #or  GET with a specific name to get only one set of params
> curl http://localhost:8983/solr/collection1/config/params/x
> {code}
> This data will be stored in conf/params.json
> This is used at request time; adding/editing params will not result in a core 
> reload and will have no impact on performance. 
> example usage http://localhost/solr/collection/select?useParams=x,y
> or it can be directly configured with a request handler as follows
> {code}
> 
> {code}
>  {{useParams}} specified in request overrides the one specified in 
> {{requestHandler}}
> A more realistic example
> {code}
> curl http://localhost:8983/solr/collection1/config/params -H 
> 'Content-type:application/json'  -d '{
> "set":{"query":{
> "defType":"edismax",
> "q.alt":"*:*",
> "rows":10,
> "fl":"*,score"  },
>   "facets":{
> "facet":"on",
> "facet.mincount": 1
>   },
>  "velocity":{
>"wt": "velocity",
>"v.template":"browse",
>"v.layout": "layout"
>  }
> }
> }
> {code}
>  and use all of them directly in a requesthandler
> {code:xml}
>useParams="query,facets,velocity,browse"/>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6955) TestBlobHandler Failure.

2015-03-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362429#comment-14362429
 ] 

Noble Paul edited comment on SOLR-6955 at 3/15/15 5:11 PM:
---

Actually it is very easily reproduced in the Thai locale

{noformat}
 ant test  -Dtestcase=TestBlobHandler   -Dtests.locale=th_TH_TH_#u-nu-thai 
-Dtests.timezone=Asia/Hovd -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}


was (Author: noble.paul):
Actually it is very easily reproduced in thai lacale

{noformat}
 ant test  -Dtestcase=TestBlobHandler   -Dtests.locale=th_TH_TH_#u-nu-thai 
-Dtests.timezone=Asia/Hovd -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}

> TestBlobHandler Failure.
> 
>
> Key: SOLR-6955
> URL: https://issues.apache.org/jira/browse/SOLR-6955
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>Priority: Minor
>
> I'm not sure this fail is that common, but I see this test fail from time to 
> time.
> Latest:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBlobHandler 
> -Dtests.method=testDistribSearch -Dtests.seed=FABDED257D0E2B4A 
> -Dtests.slow=true -Dtests.locale=th_TH_TH_#u-nu-thai 
> -Dtests.timezone=America/Juneau -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 19.6s J7 | TestBlobHandler.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> {responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FABDED257D0E2B4A:7B5B633D0A514B76]:0)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:96)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doTest(TestBlobHandler.java:200)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4550 - Still Failing!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4550/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 586FE98B107E1814-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Sol

[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362437#comment-14362437
 ] 

Yonik Seeley commented on SOLR-7247:


You are correct.  I argued against adding routeField to compositeIdRouter, but 
some people like the flexibility (even if it's incomplete, broken, and doesn't 
make anyone's life easier).

I'll continue to recommend that users not route on anything other than the 
"id" field when using compositeId routing.

It's simple: if you have some existing external doc id you want to use 
(existing_id), *and* you have another field you want to partition documents on 
(my_route_field), then simply make the Solr "id" value equal to 
my_route_field!existing_id, i.e. the value of my_route_field, then the 
compositeId "!" separator, then the external id.
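
For illustration only, a minimal sketch of composing such an id on the client 
side (the values are hypothetical):

{code}
public class CompositeIdExample {
  public static void main(String[] args) {
    // Hypothetical example values: the route key and the existing external id.
    String myRouteFieldValue = "customerA";
    String existingId = "doc-12345";
    // compositeId routing hashes the part before '!' to choose the shard,
    // so every doc sharing the same prefix lands on the same shard.
    String solrId = myRouteFieldValue + "!" + existingId;
    System.out.println(solrId); // -> customerA!doc-12345 (index this as the Solr "id")
  }
}
{code}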


> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection. This suggests that the intended behaviour is to hash 
> documents on an alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc, or 
> params) and it is used in different ways by different functionality.
> In my opinion it should have overloads instead of weak internal logic: one 
> overload with "doc" and "collection", and another with "id", "params" and "collection".
> In any case, if "\_route_" is not available from "params", "collection" should 
> be mandatory, and when a routeField is used, "doc" should be mandatory as well.
> This will break index splitting, but it will preserve the coherence of the data.
> If I configure a routeField, I notice that DeleteCommand is broken (it passes 
> only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed based on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case the entire document should be passed during splitting (the "doc" 
> parameter), or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.7.0_76) - Build # 95 - Failure!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/95/
Java: 64bit/jdk1.7.0_76 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([8C5C44D4E47A78AF]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:620)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.rest.TestManagedResourceStorage: 1) Thread[id=8804, 
name=OverseerHdfsCoreFailoverThread-93482971290206211-188.138.97.18:_-n_00,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:136)
 at java.lang.Thread.run(Thread.java:745)2) Thread[id=8807, 
name=searcherExecutor-4826-thread-1, state=WAITING, 
group=TGRP-TestManagedResourceStorage] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=8808, 
name=Thread-3360, state=WAITING, group=TGRP-TestManagedResourceStorage] 
at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
org.apache.solr.core.CloserThread.run(CoreContainer.java:905)4) 
Thread[id=8810, name=coreZkRegister-4820-thread-1, state=WAITING, 
group=TGRP-TestManagedResourceStorage] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(Li

[jira] [Updated] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Paolo Cappuccini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Cappuccini updated SOLR-7247:
---
Description: 
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

In my opinion it should have overloads instead of weak internal logic: one 
overload with "doc" and "collection", and another with "id", "params" and 
"collection".

In any case, if "\_route_" is not available from "params", "collection" should 
be mandatory, and when a routeField is used, "doc" should be mandatory as well.

This will break index splitting, but it will preserve the coherence of the data.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" parameter.

  was:
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

In my opinion it should have overloads instead of weak internal logic: one 
overload with "doc" and "collection", and another with "id", "params" and 
"collection".

In any case, if "\_route_" is not available from "params", "collection" should 
be mandatory, and when a routeField is used, "doc" should be mandatory as well.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" parameter.


> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection. This suggests that the intended behaviour is to hash 
> documents on an alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc, or 
> params) and it is used in different ways by different functionality.
> In my opinion it should have overloads instead of weak internal logic: one 
> overload with "doc" and "collection", and another with "id", "params" and "collection".
> In any case, if "\_route_" is not available from "params", "collection" should 
> be mandatory, and when a routeField is used, "doc" should be mandatory as well.
> This will break index splitting, but it will preserve the coherence of the data.
> If I configure a routeField, I notice that DeleteCommand is broken (it passes 
> only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed based on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case the entire document should be passed during splitting (the "doc" 
> parameter), or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Paolo Cappuccini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Cappuccini updated SOLR-7247:
---
Description: 
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

In my opinion it should have overloads instead of weak internal logic: one 
overload with "doc" and "collection", and another with "id", "params" and 
"collection".

In any case, if "\_route_" is not available from "params", "collection" should 
be mandatory, and when a routeField is used, "doc" should be mandatory as well.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" parameter.

  was:
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" parameter.


> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection. This suggests that the intended behaviour is to hash 
> documents on an alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc, or 
> params) and it is used in different ways by different functionality.
> In my opinion it should have overloads instead of weak internal logic: one 
> overload with "doc" and "collection", and another with "id", "params" and "collection".
> In any case, if "\_route_" is not available from "params", "collection" should 
> be mandatory, and when a routeField is used, "doc" should be mandatory as well.
> If I configure a routeField, I notice that DeleteCommand is broken (it passes 
> only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed based on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case the entire document should be passed during splitting (the "doc" 
> parameter), or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Paolo Cappuccini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Cappuccini updated SOLR-7247:
---
Description: 
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" parameter.

  was:
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" field.


> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection. This suggests that the intended behaviour is to hash 
> documents on an alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc, or 
> params) and it is used in different ways by different functionality.
> If I configure a routeField, I notice that DeleteCommand is broken (it passes 
> only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed based on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case the entire document should be passed during splitting (the "doc" 
> parameter), or params should be built with the proper "\_route_" parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Paolo Cappuccini (JIRA)
Paolo Cappuccini created SOLR-7247:
--

 Summary: sliceHash for compositeIdRouter is not coherent with 
routing
 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini


In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "_route_" field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-15 Thread Paolo Cappuccini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Cappuccini updated SOLR-7247:
---
Description: 
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "\_route_" field.

  was:
In CompositeIdRouter, the sliceHash function checks the routeField configured 
for the collection. This suggests that the intended behaviour is to hash 
documents on an alternative field instead of the id field.

But the signature of this method is very general (it can take an id, a doc, or 
params) and it is used in different ways by different functionality.

If I configure a routeField, I notice that DeleteCommand is broken (it passes 
only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").

Either specifying a routeField for compositeIdRouter should be forbidden, or the 
related functionality should be implemented so that documents can be hashed 
based on the routeField.

For DeleteCommand the workaround is to specify the "_route_" param in the 
request, but for index splitting no workaround is possible.

In that case the entire document should be passed during splitting (the "doc" 
parameter), or params should be built with the proper "_route_" field.


> sliceHash for compositeIdRouter is not coherent with routing
> 
>
> Key: SOLR-7247
> URL: https://issues.apache.org/jira/browse/SOLR-7247
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Paolo Cappuccini
>
> In CompositeIdRouter, the sliceHash function checks the routeField configured 
> for the collection. This suggests that the intended behaviour is to hash 
> documents on an alternative field instead of the id field.
> But the signature of this method is very general (it can take an id, a doc, or 
> params) and it is used in different ways by different functionality.
> If I configure a routeField, I notice that DeleteCommand is broken (it passes 
> only "id" and "params" to sliceHash), as is SolrIndexSplitter (it passes only "id").
> Either specifying a routeField for compositeIdRouter should be forbidden, or 
> the related functionality should be implemented so that documents can be 
> hashed based on the routeField.
> For DeleteCommand the workaround is to specify the "_route_" param in the 
> request, but for index splitting no workaround is possible.
> In that case the entire document should be passed during splitting (the "doc" 
> parameter), or params should be built with the proper "\_route_" field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6892) Improve the way update processors are used and make it simpler

2015-03-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362432#comment-14362432
 ] 

Noble Paul commented on SOLR-6892:
--

I have added info-level logging that shows exactly the order of the 
UpdateProcessors used.

> Improve the way update processors are used and make it simpler
> --
>
> Key: SOLR-6892
> URL: https://issues.apache.org/jira/browse/SOLR-6892
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6892.patch
>
>
> The current update processor chain is rather cumbersome and we should be able 
> to use the updateprocessors without a chain.
> The scope of this ticket is 
> * A new tag {{}}  becomes a toplevel tag and it will be 
> equivalent to the {{}} tag inside 
> {{}} . The only difference is that it should 
> require a {{name}} attribute. The {{}} tag will 
> continue to exist and it should be possible to define {{}} inside 
> as well . It should also be possible to reference a named URP in a chain.
> * processors will be added to the request by name. Example: 
> {{processor=a,b,c}}, {{post-processor=x,y,z}}. This creates an implicit 
> chain of the named URPs in the order they are specified
> * There are multiple request parameters supported by an update request
> ** processor : this chain is executed at the leader right before 
> LogUpdateProcessorFactory + DistributedUpdateProcessorFactory. The replicas 
> will not execute it.
> ** post-processor : this chain is executed right before RunUpdateProcessor on 
> all replicas, including the leader
> * What happens to the update.chain parameter? {{update.chain}} will be 
> honored. The implicit chain is created by merging the update.chain and the 
> request params: {{post-processor}} will be inserted right before 
> {{RunUpdateProcessorFactory}} in the chain, and {{processor}} will be 
> inserted right before LogUpdateProcessorFactory / DistributedUpdateProcessorFactory
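
As a hedged sketch of how the proposed parameters could be sent from SolrJ; the 
URP names a,b,c / x,y,z and the URL are hypothetical, and the parameters 
themselves are only the proposal described above, not a released API:

{code}
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class NamedUrpRequest {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    req.setParam("processor", "a,b,c");       // proposed: runs on the leader before distrib
    req.setParam("post-processor", "x,y,z");  // proposed: runs on every replica before RunUpdateProcessor
    req.process(server);
    server.commit();
    server.shutdown();
  }
}
{code}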



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6955) TestBlobHandler Failure.

2015-03-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362429#comment-14362429
 ] 

Noble Paul commented on SOLR-6955:
--

Actually it is very easily reproduced in the Thai locale

{noformat}
 ant test  -Dtestcase=TestBlobHandler   -Dtests.locale=th_TH_TH_#u-nu-thai 
-Dtests.timezone=Asia/Hovd -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}

> TestBlobHandler Failure.
> 
>
> Key: SOLR-6955
> URL: https://issues.apache.org/jira/browse/SOLR-6955
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>Priority: Minor
>
> I'm not sure this fail is that common, but I see this test fail from time to 
> time.
> Latest:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBlobHandler 
> -Dtests.method=testDistribSearch -Dtests.seed=FABDED257D0E2B4A 
> -Dtests.slow=true -Dtests.locale=th_TH_TH_#u-nu-thai 
> -Dtests.timezone=America/Juneau -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 19.6s J7 | TestBlobHandler.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> {responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FABDED257D0E2B4A:7B5B633D0A514B76]:0)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:96)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doTest(TestBlobHandler.java:200)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6955) TestBlobHandler Failure.

2015-03-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362420#comment-14362420
 ] 

Yonik Seeley commented on SOLR-6955:


http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11812/testReport/junit/org.apache.solr.handler/TestBlobHandler/doBlobHandlerTest/

{code}
Error Message

{responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}}
Stacktrace

java.lang.AssertionError: {responseHeader={status=0, QTime=1}, 
response={numFound=0, start=0, docs=[]}}
at 
__randomizedtesting.SeedInfo.seed([6F0E7027CA3B4F73:8FCF527571D73981]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:97)
{code}

> TestBlobHandler Failure.
> 
>
> Key: SOLR-6955
> URL: https://issues.apache.org/jira/browse/SOLR-6955
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Noble Paul
>Priority: Minor
>
> I'm not sure this fail is that common, but I see this test fail from time to 
> time.
> Latest:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBlobHandler 
> -Dtests.method=testDistribSearch -Dtests.seed=FABDED257D0E2B4A 
> -Dtests.slow=true -Dtests.locale=th_TH_TH_#u-nu-thai 
> -Dtests.timezone=America/Juneau -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 19.6s J7 | TestBlobHandler.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> {responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FABDED257D0E2B4A:7B5B633D0A514B76]:0)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:96)
>[junit4]>  at 
> org.apache.solr.handler.TestBlobHandler.doTest(TestBlobHandler.java:200)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2011 - Failure!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2011/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

38 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.test

Error Message:
Could not get expected value  
'org.apache.solr.search.function.NvlValueSourceParser' for path 
'config/valueSourceParser/cu/class' full output: {   "responseHeader":{ 
"status":0, "QTime":1},   "config":{ "znodeVersion":0, 
"luceneMatchVersion":"org.apache.lucene.util.Version:5.1.0", 
"updateHandler":{   "class":"solr.DirectUpdateHandler2",   
"autoCommmitMaxDocs":-1,   "indexWriterCloseWaitsForMerges":true,   
"openSearcher":true,   "commitIntervalLowerBound":-1,   
"commitWithinSoftCommit":true,   "autoCommit":{ "maxDocs":-1,   
  "maxTime":-1, "commitIntervalLowerBound":-1},   
"autoSoftCommit":{ "maxDocs":-1, "maxTime":-1}}, "query":{  
 "useFilterForSortedQuery":false,   "queryResultWindowSize":1,   
"queryResultMaxDocsCached":2147483647,   "enableLazyFieldLoading":false,
   "maxBooleanClauses":1024,   "":{ "name":"fieldValueCache",   
  "initialSize":"10", "size":"1", "showItems":"-1"}}, 
"requestHandler":{   "standard":{ "name":"standard", 
"class":"solr.StandardRequestHandler"},   "/admin/file":{ 
"name":"/admin/file", "class":"solr.admin.ShowFileRequestHandler",  
   "invariants":{"hidden":"bogus.txt"}},   "/update":{ 
"name":"/update", 
"class":"org.apache.solr.handler.UpdateRequestHandler", "defaults":{}}, 
  "/update/json":{ "name":"/update/json", 
"class":"org.apache.solr.handler.UpdateRequestHandler", 
"defaults":{"update.contentType":"application/json"}},   "/update/csv":{
 "name":"/update/csv", 
"class":"org.apache.solr.handler.UpdateRequestHandler", 
"defaults":{"update.contentType":"application/csv"}},   
"/update/json/docs":{ "name":"/update/json/docs", 
"class":"org.apache.solr.handler.UpdateRequestHandler", "defaults":{
   "update.contentType":"application/json",   
"json.command":"false"}},   "/config":{ "name":"/config", 
"class":"org.apache.solr.handler.SolrConfigHandler", "defaults":{}},
   "/schema":{ "name":"/schema", 
"class":"org.apache.solr.handler.SchemaHandler", "defaults":{}},   
"/replication":{ "name":"/replication", 
"class":"org.apache.solr.handler.ReplicationHandler", "defaults":{}},   
"/get":{ "name":"/get", 
"class":"org.apache.solr.handler.RealTimeGetHandler", "defaults":{  
 "omitHeader":"true",   "wt":"json",   "indent":"true"}},   
"/admin/luke":{ "name":"/admin/luke", 
"class":"org.apache.solr.handler.admin.LukeRequestHandler", 
"defaults":{}},   "/admin/system":{ "name":"/admin/system", 
"class":"org.apache.solr.handler.admin.SystemInfoHandler", 
"defaults":{}},   "/admin/mbeans":{ "name":"/admin/mbeans", 
"class":"org.apache.solr.handler.admin.SolrInfoMBeanHandler", 
"defaults":{}},   "/admin/plugins":{ "name":"/admin/plugins",   
  "class":"org.apache.solr.handler.admin.PluginInfoHandler", 
"defaults":{}},   "/admin/threads":{ "name":"/admin/threads",   
  "class":"org.apache.solr.handler.admin.ThreadDumpHandler", 
"defaults":{}},   "/admin/properties":{ "name":"/admin/properties", 
"class":"org.apache.solr.handler.admin.PropertiesRequestHandler",   
  "defaults":{}},   "/admin/logging":{ "name":"/admin/logging", 
"class":"org.apache.solr.handler.admin.LoggingHandler", 
"defaults":{}},   "/admin/ping":{ "name":"/admin/ping", 
"class":"org.apache.solr.handler.PingRequestHandler", "defaults":{},
 "invariants":{   "echoParams":"all",   
"q":"solrpingquery"}},   "/admin/segments":{ 
"name":"/admin/segments", 
"class":"org.apache.solr.handler.admin.SegmentsInfoRequestHandler", 
"defaults":{}}}, "valueSourceParser":{"cu":{ "name":"cu", 
"class":"org.apache.solr.core.CountUsageValueSourceParser"}}, 
"directoryFactory":{   "name":"DirectoryFactory",   
"class":"org.apache.solr.core.MockDirectoryFactory",   
"solr.hdfs.blockcache.enabled":true,   
"solr.hdfs.blockcache.blocksperbank":1024,   "solr.hdfs.home":"",   
"solr.hdfs.confdir":"",   "solr.hdfs.blockcache.global":"false"}, 
"updateRequestProcessorChain":[   { "name":"nodistrib", 
"":[   {"class":"solr.NoOpDistributingUpdateProcessorFactory"}, 
  {"class":"solr.RunUpdateProcessorFactory"

[jira] [Resolved] (SOLR-7246) Speed up BasicZkTest, TestManagedResourceStorage

2015-03-15 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar resolved SOLR-7246.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

> Speed up BasicZkTest, TestManagedResourceStorage
> 
>
> Key: SOLR-7246
> URL: https://issues.apache.org/jira/browse/SOLR-7246
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Ramkumar Aiyengar
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-7246.patch
>
>
> Currently {{AbstractZkTestCase}} implementations wait for a full ZK timeout 
> at shutdown since the ZK server is shut down before the core. This can be 
> sped up by a minute or so for each test case by ensuring the core is shut 
> down before the ZK server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7246) Speed up BasicZkTest, TestManagedResourceStorage

2015-03-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362325#comment-14362325
 ] 

ASF subversion and git services commented on SOLR-7246:
---

Commit 1666788 from [~andyetitmoves] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666788 ]

SOLR-7246: Speed up BasicZkTest, TestManagedResourceStorage

> Speed up BasicZkTest, TestManagedResourceStorage
> 
>
> Key: SOLR-7246
> URL: https://issues.apache.org/jira/browse/SOLR-7246
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Ramkumar Aiyengar
>Priority: Minor
> Attachments: SOLR-7246.patch
>
>
> Currently {{AbstractZkTestCase}} implementations wait for a full ZK timeout 
> at shutdown since the ZK server is shut down before the core. This can be 
> sped up by a minute or so for each test case by ensuring the core is shut 
> down before the ZK server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_76) - Build # 4438 - Failure!

2015-03-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4438/
Java: 32bit/jdk1.7.0_76 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'response/params/watched/x' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   
"response":{"znodeVersion":-1}}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 
'response/params/watched/x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{"znodeVersion":-1}}
at 
__randomizedtesting.SeedInfo.seed([44CEFB5B592ADE9B:9C83D60CAEF77B3B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:399)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.

[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-15 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362286#comment-14362286
 ] 

Xu Zhang commented on SOLR-6350:


Thanks so much Hoss :) .

I think I fixed the NaN errors, which were caused by my patch not computing 
percentile stats with the new trunk change.  This also shows the edge case: 
when a user asks for percentiles over an empty document set, we return NaN.

{quote}
I didn't do any in-depth review yet, but I did sprinkle some nocommits around 
related to things that jumped out at me while resolving conflicts
{quote}
Some of the work still seems quite dirty to me; I will spend some time 
improving it. For example, we have a test case that exercises all stats 
combinations, and right now I just exclude percentiles from it, which is quite awful.

Another thing: I haven't done much performance testing around this. There are 
plenty of parameters for TDigest; I just picked a default value and 
ArrayDigest.

{quote}
I didn't notice any distributed test, not sure if that's something still 
needing done, or if that was just because of a mistake in creating the patch 
file and "new files" weren't included.
{quote}
I just added 4 simple test cases to TestDistributedSearch.java. I probably 
missed them in my last patch; they are back now.

{quote}
BTW: no need to put your name as a suffix in the patch filename – convention is 
to just name the patch after the jira. The only reason to worry about 
distinguishing the file names is in cases where you explicitly posting a 
"variant" patch (ie: a strawman that you feel/know is broken and shouldn't be 
taken seriously long term, an alternative proposal to some other existing 
patch, an orthogonal patch only containing tests or some other independent 
changes, etc...)
{quote}
Ha, thanks a lot. 


> Percentiles in StatsComponent
> -
>
> Key: SOLR-6350
> URL: https://issues.apache.org/jira/browse/SOLR-6350
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
> SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch
>
>
> Add an option to compute user specified percentiles when computing stats
> Example...
> {noformat}
> stats.field={!percentiles='1,2,98,99,99.999'}price
> {noformat}
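
For reference, a sketch of building the same request with SolrJ; only the 
local-param syntax shown above is assumed, and the "price" field comes from the 
example:

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class PercentileStatsParams {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.set("stats", true);
    q.set("stats.field", "{!percentiles='1,2,98,99,99.999'}price");
    System.out.println(q);  // prints the encoded stats parameters
  }
}
{code}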



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6350) Percentiles in StatsComponent

2015-03-15 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6350:
---
Attachment: SOLR-6350.patch

This patch incorporates the latest trunk changes and should fix the NaN errors.

> Percentiles in StatsComponent
> -
>
> Key: SOLR-6350
> URL: https://issues.apache.org/jira/browse/SOLR-6350
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
> SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch
>
>
> Add an option to compute user specified percentiles when computing stats
> Example...
> {noformat}
> stats.field={!percentiles='1,2,98,99,99.999'}price
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org