[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2388 - Still Failing

2014-12-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2388/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([22D23199875D3546:A334BF81F002557A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-12-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259328#comment-14259328
 ] 

ASF subversion and git services commented on SOLR-3711:
---

Commit 1648041 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1648041 ]

SOLR-3711: Truncate long strings in /browse field facets

 Velocity: Break or truncate long strings in facet output
 

 Key: SOLR-3711
 URL: https://issues.apache.org/jira/browse/SOLR-3711
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: /browse
 Fix For: 5.0, Trunk

 Attachments: SOLR-3711.patch


 In Solritas /browse GUI, if facets contain very long strings (such as 
 content-type tend to do), currently the too long text runs over the main 
 column and it is not pretty.
 Perhaps inserting a Soft Hyphen &shy; 
 (http://en.wikipedia.org/wiki/Soft_hyphen) at position N in very long terms 
 is a solution?
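The soft-hyphen idea above can be sketched as a small helper; this is a hypothetical illustration of the proposal, not the actual Solr/Velocity code. U+00AD is the soft hyphen, which browsers treat as an optional break point.

```java
// Hypothetical sketch of the proposed soft-hyphen approach: insert
// U+00AD every N characters into a term that contains no whitespace,
// so a browser may break the otherwise unwrappable string.
public class SoftHyphen {
    public static String insertEvery(String term, int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < term.length(); i++) {
            // add a soft hyphen before every n-th character (not at position 0)
            if (i > 0 && i % n == 0) {
                sb.append('\u00AD');
            }
            sb.append(term.charAt(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // a content-type-like value with no whitespace
        System.out.println(insertEvery("application/vnd.openxmlformats", 10));
    }
}
```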



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2018 - Failure!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2018/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerStatusTest.testDistribSearch

Error Message:
Error from server at http://127.0.0.1:51400: reload the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
from server at http://127.0.0.1:51400: reload the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([758330C202FA3706:F465BEDA75A5573A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:566)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:213)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:209)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.invokeCollectionApi(AbstractFullDistribZkTestBase.java:1867)
at 
org.apache.solr.cloud.OverseerStatusTest.doTest(OverseerStatusTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-12-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259330#comment-14259330
 ] 

ASF subversion and git services commented on SOLR-3711:
---

Commit 1648042 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1648042 ]

SOLR-3711: Truncate long strings in /browse field facets







[jira] [Commented] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-12-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259331#comment-14259331
 ] 

ASF subversion and git services commented on SOLR-3711:
---

Commit 1648043 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1648043 ]

SOLR-3711: Truncate long strings in /browse field facets







[jira] [Commented] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-12-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259332#comment-14259332
 ] 

ASF subversion and git services commented on SOLR-3711:
---

Commit 1648044 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1648044 ]

SOLR-3711: Truncate long strings in field facets in generic facet_fields.vm







[jira] [Resolved] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-12-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-3711.

Resolution: Fixed

When a string has no whitespace it does not wrap (like content_type, as Jan 
mentioned); this has been fixed on both trunk and 5x.
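The commit messages above say the fix truncates long strings in the /browse field facets. A minimal sketch of such a truncation helper follows; the names are hypothetical illustrations, not the actual facet_fields.vm / VelocityResponseWriter code.

```java
// Hypothetical sketch of truncating a long facet value for display,
// appending an ellipsis when the value exceeds the limit. Illustration
// only, not the actual Solr /browse template logic.
public class FacetDisplay {
    public static String truncate(String value, int maxLen) {
        if (value == null || value.length() <= maxLen) {
            return value;
        }
        return value.substring(0, maxLen) + "...";
    }

    public static void main(String[] args) {
        // a long content_type value that would otherwise overflow the column
        System.out.println(truncate(
            "application/vnd.openxmlformats-officedocument.wordprocessingml.document", 20));
    }
}
```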

 Velocity: Break or truncate long strings in facet output
 

 Key: SOLR-3711
 URL: https://issues.apache.org/jira/browse/SOLR-3711
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: /browse
 Fix For: 5.0, Trunk

 Attachments: SOLR-3711.patch


 In Solritas /browse GUI, if facets contain very long strings (such as 
 content-type tend to do), currently the too long text runs over the main 
 column and it is not pretty.
 Perhaps inserting a Soft Hyphen shy; 
 (http://en.wikipedia.org/wiki/Soft_hyphen) at position N in very long terms 
 is a solution?






[jira] [Updated] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-3055:

Attachment: SOLR-3055-2.patch
SOLR-3055-1.patch

 Use NGramPhraseQuery in Solr
 

 Key: SOLR-3055
 URL: https://issues.apache.org/jira/browse/SOLR-3055
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis, search
Reporter: Koji Sekiguchi
Priority: Minor
 Attachments: SOLR-3055-1.patch, SOLR-3055-2.patch, SOLR-3055.patch


 Solr should use NGramPhraseQuery when searching with default slop on n-gram 
 field.






[jira] [Commented] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259343#comment-14259343
 ] 

Tomoko Uchida commented on SOLR-3055:
-

Again, I think there are three strategies for implementation.

1. Embed gram size information in the TokenStream by adding a new attribute (taken by the 
first patch)
  - Pros: fully integrated with Lucene, so applications do not have to write 
additional code to optimize n-gram based phrase queries
  - Pros: no configuration is needed because the query parser creates 
NGramPhraseQuery automatically
  - Pros: probably the simplest to implement
  - Cons: there might be some kind of conflict with other attributes? 

2. NGramTokenizer exposes gramSize for later use, and Solr's query parser 
creates NGramPhraseQuery
  - Pros: no effect on Lucene's default behavior
  - Pros: no configuration is needed because the query parser creates 
NGramPhraseQuery automatically
  - Cons: extra code is needed to use NGramPhraseQuery in each query parser

3. Add a gramSize (or similar) attribute to schema.xml, and Solr's query 
parser creates NGramPhraseQuery using the user-supplied gramSize
  - Pros: no effect on Lucene's or Solr's default behavior
  - Cons: a new configuration attribute will be introduced
  - Cons: what happens if the user gives a gramSize value inconsistent with the 
minGramSize or maxGramSize given to NGramTokenizer? It could be problematic.

I attach two patches, one (SOLR-3055-1.patch) for strategy 1 and the other 
(SOLR-3055-2.patch) for strategy 2.
Reviews / suggestions will be appreciated.
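For context on why these strategies pay off at all: a phrase of consecutive n-grams is highly redundant, and conceptually the optimization keeps only every n-th gram plus the final one. A standalone sketch of that position selection follows; it is my illustration of the idea, not Lucene's actual NGramPhraseQuery rewrite code.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: given the number of consecutive grams in a phrase
// and the gram size n, the phrase is already fully constrained by the
// grams at positions 0, n, 2n, ... plus the last gram, so the
// intermediate terms can be dropped from the phrase query.
public class NGramPositions {
    public static List<Integer> keptPositions(int termCount, int gramSize) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < termCount; i++) {
            // keep every gramSize-th position, and always the last position
            if (i % gramSize == 0 || i == termCount - 1) {
                kept.add(i);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // e.g. a 9-char term as 2-grams yields 8 gram terms in the phrase
        System.out.println(keptPositions(8, 2));
    }
}
```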







[jira] [Updated] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-3055:

Attachment: schema.xml

 Use NGramPhraseQuery in Solr
 

 Key: SOLR-3055
 URL: https://issues.apache.org/jira/browse/SOLR-3055
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis, search
Reporter: Koji Sekiguchi
Priority: Minor
 Attachments: SOLR-3055-1.patch, SOLR-3055-2.patch, SOLR-3055.patch, 
 schema.xml


 Solr should use NGramPhraseQuery when searching with default slop on n-gram 
 field.






[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259352#comment-14259352
 ] 

Upayavira commented on SOLR-5507:
-

I've started working on this. I suspect it could be easier than I thought.

See: https://github.com/upayavira/solr-angular-ui/tree/solr5507 in 
solr/webapp/web. If you load http://localhost:8983/solr/index.html, you'll see 
the new UI, which is currently extremely bare-bones, just supporting some basic 
service calls. Next task is to get the wrapper HTML up and working, then make 
paging work, then start working through each page one at a time.

to [~erickerickson]: Your comments are good. My aim is to replicate the 
functionality of the existing UI, at least to get a reasonable distance into 
that, and then allow us to think about what sort of UI we really want. For 
example, if you start up a Solr with no cores or collections, you should be 
prompted with a page offering to create one for you. No idea when 5.0 is due to 
arrive, but I'm gonna try and run quickly with this in the hope that we can 
have some funky new UI features to help make 5.0 more special - working 
alongside the bin/solr stuff.

 Admin UI - Refactoring using AngularJS
 --

 Key: SOLR-5507
 URL: https://issues.apache.org/jira/browse/SOLR-5507
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Attachments: SOLR-5507.patch


 At the LSR in Dublin, I talked again to [~upayavira], and this time we 
 talked about refactoring the existing UI using AngularJS: providing (more 
 internal) structure and whatnot.
 He has already started working on the refactoring, so this is more a 'tracking' 
 issue about the progress he/we make there.
 Will extend this issue with a bit more context & additional information, with 
 thoughts about the possible integration in the existing UI and more.






[jira] [Commented] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259351#comment-14259351
 ] 

Tomoko Uchida commented on SOLR-3055:
-

I performed a brief benchmark with JMeter against Solr 5.0.0 trunk and strategy 1.
There seems to be a significant performance gain for n-gram based phrase queries.

- Hardware : MacBook Pro, 2.8GHz Intel Core i5
- Java version : 1.7.0_71
- Solr version : 5.0.0 SNAPSHOT / 5.0.0 SNAPSHOT with SOLR-3055-1.patch
- Java heap : 500MB
- Documents : Wikipedia (Japanese) 10 docs
- Solr config : attached solrconfig.xml (query result cache disabled)
- Schema : attached schema.xml (NGramTokenizer's maxGramSize=3, minGramSize=2)
- Queries : python, javascript, windows, プログラミング, インターネット, スマートフォン 
(Japanese)
- JMeter scenario : execute each of the 6 queries above 1000 times (i.e. perform 6000 
queries)
- JMeter threads : 1

To warm up, I ran the JMeter scenario twice for both settings. 
The 2nd round results are:

|| Solr || Avg. response time || Throughput ||
| 5.0.0-SNAPSHOT | 7msec | 137.8/sec |
| 5.0.0-SNAPSHOT with patch-1 | 4msec | 201.3/sec |
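As a quick arithmetic check on the table above, the reported throughput gain works out to roughly a 1.46x speedup:

```java
// Quick arithmetic check on the benchmark table above: relative
// throughput gain of the patched build over the baseline.
public class SpeedupCheck {
    public static double speedup(double baselineQps, double patchedQps) {
        return patchedQps / baselineQps;
    }

    public static void main(String[] args) {
        double s = speedup(137.8, 201.3);
        System.out.printf("speedup: %.2fx%n", s); // ~1.46x
    }
}
```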








[jira] [Updated] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-3055:

Attachment: solrconfig.xml

 Use NGramPhraseQuery in Solr
 

 Key: SOLR-3055
 URL: https://issues.apache.org/jira/browse/SOLR-3055
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis, search
Reporter: Koji Sekiguchi
Priority: Minor
 Attachments: SOLR-3055-1.patch, SOLR-3055-2.patch, SOLR-3055.patch, 
 schema.xml, solrconfig.xml


 Solr should use NGramPhraseQuery when searching with default slop on n-gram 
 field.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11817 - Still Failing!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11817/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([E000F7765D0F8513:61E6796E2A50E52F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Created] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6892:


 Summary: Make update processors toplevel components 
 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul


The current update processor chain is rather cumbersome, and we should be able 
to use update processors without a chain.

The scope of this ticket is:
* The updateProcessor tag becomes a toplevel tag, equivalent to the 
processor tag inside updateRequestProcessorChain. The only difference is 
that it requires a {{name}} attribute
* Any update request will be able to pass a param {{processor=a,b,c}}, where 
a,b,c are names of update processors. A just-in-time chain will be created from 
those update processors
* Some built-in update processors (wherever possible) will be predefined with 
standard names and can be used directly in requests
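The {{processor=a,b,c}} behavior proposed above can be sketched as follows; the names and types here are illustrative only, not Solr's actual UpdateRequestProcessorFactory API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposal: parse a request param like
// processor=a,b,c and assemble a just-in-time chain from toplevel
// processors registered by name. Runnable stands in for the real
// processor type purely for illustration.
public class AdHocChain {
    public static List<String> parseProcessorParam(String param) {
        List<String> names = new ArrayList<>();
        for (String name : param.split(",")) {
            String trimmed = name.trim();
            if (!trimmed.isEmpty()) {
                names.add(trimmed);
            }
        }
        return names;
    }

    // Look up each named processor in a registry; fail fast on an unknown name.
    public static List<Runnable> buildChain(String param, Map<String, Runnable> registry) {
        List<Runnable> chain = new ArrayList<>();
        for (String name : parseProcessorParam(param)) {
            Runnable p = registry.get(name);
            if (p == null) {
                throw new IllegalArgumentException("unknown update processor: " + name);
            }
            chain.add(p);
        }
        return chain;
    }

    public static void main(String[] args) {
        // hypothetical processor names, just to show the parsing
        System.out.println(parseProcessorParam("dedupe, signature ,logging"));
    }
}
```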






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1974 - Failure!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1974/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: 
true)

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribCursorPagingTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:60232/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:60232/collection1
at 
__randomizedtesting.SeedInfo.seed([E5428420FCAD8091:64A40A388BF2E0AD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:576)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:213)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:209)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.indexDoc(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.DistribCursorPagingTest.doRandomSortsOnLargeIndex(DistribCursorPagingTest.java:573)
at 
org.apache.solr.cloud.DistribCursorPagingTest.doTest(DistribCursorPagingTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
  

[jira] [Created] (LUCENE-6140) simplify inflater usage in deflate CompressionMode

2014-12-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6140:
---

 Summary: simplify inflater usage in deflate CompressionMode
 Key: LUCENE-6140
 URL: https://issues.apache.org/jira/browse/LUCENE-6140
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6140.patch

This currently loops-n-grows the output byte[]. But we always decompress the 
whole block (we don't emit flushes or anything to allow otherwise) and ignore 
offset/length until the end, and we know the uncompressed size up-front... we 
can just call inflate one time.
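A minimal, self-contained sketch of the one-shot idea using java.util.zip (illustrative only; the class and helper names are invented here, and this is not the actual Lucene patch, which works on the codec's internal buffers):

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class OneShotInflate {

    // Decompress a whole block with a single inflate() call, given that the
    // caller already knows the exact uncompressed length up front.
    static byte[] inflateAll(byte[] compressed, int uncompressedLen)
            throws DataFormatException {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed);
            byte[] out = new byte[uncompressedLen];
            int n = inflater.inflate(out);  // one call, no loop-and-grow
            if (n != uncompressedLen || !inflater.finished()) {
                throw new DataFormatException("unexpected uncompressed size");
            }
            return out;
        } finally {
            inflater.end();
        }
    }

    // Round-trip helper so the sketch is easy to exercise.
    static String roundTrip(String text) throws Exception {
        byte[] original = text.getBytes("UTF-8");
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] buf = new byte[original.length + 64];  // ample for short input
        int compressedLen = deflater.deflate(buf);
        deflater.end();
        byte[] compressed = Arrays.copyOf(buf, compressedLen);
        return new String(inflateAll(compressed, original.length), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("the quick brown fox"));
    }
}
```

The key point is that the output buffer is allocated once at the known size, so there is no grow-and-copy loop.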






[jira] [Updated] (LUCENE-6140) simplify inflater usage in deflate CompressionMode

2014-12-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6140:

Attachment: LUCENE-6140.patch







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259384#comment-14259384
 ] 

Erik Hatcher commented on SOLR-6892:


While this does open up some custom power, I'm curious what use cases you see 
for letting the indexing client specify the processors. It is good that 
processors become first-class components that can be composed into update 
processor chains when (eventually) creating a chain via an API, but I can see 
using individual processors from the /update call causing problems, such as 
skipping the log processor and then not being able to see exactly what 
happened.

 Make update processors toplevel components 
 ---

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 The current update processor chain is rather cumbersome and we should be able 
 to use the updateprocessors without a chain.
 The scope of this ticket is 
 * updateProcessor tag becomes a toplevel tag and it will be equivalent to 
 the processor tag inside updateRequestProcessorChain . The only 
 difference is that it should require a {{name}} attribute
 * Any update request will be able  to pass a param {{processor=a,b,c}} , 
 where a,b,c are names of update processors. A just in time chain will be 
 created with those update processors
 * Some in built update processors (wherever possible) will be predefined with 
 standard names and can be directly used in requests 






[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259410#comment-14259410
 ] 

Alexandre Rafalovitch commented on SOLR-6892:
-

This use case does not feel strong enough to be *Major*. Are there specific 
business use cases that really cannot be solved with pre-defined chains?

Also, a lot of URPs take parameters. The proposal above does not seem to allow 
that. And what about DistributedUpdateProcessor, and the fact that chains 
allow items to be specified both before and after it?

Also consider troubleshooting. It needs to be very clear what was applied to 
the content as it came in. How would one find out whether a chain was applied 
incorrectly?

Finally, what are "in built" update processors? Built-in? So far, the vast 
majority of them are built-in, as in shipped with Solr, and have their own 
class names. Do you mean some standard *chains* could be pre-built and named? 
Do you have a good example? I would say these arguments apply much more to 
analyzer chains (I'd love to see those built in), but I am not sure about URPs.







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259409#comment-14259409
 ] 

Noble Paul commented on SOLR-6892:
--

The configuration is complex today; we need to make it less voodoo. Let's look 
at the purpose of an update processor: it is just a transformer for incoming 
documents. Let's apply the transformers in the order they are specified, let 
the system take care of the rest, and avoid surprises.
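The ordered-composition model described here can be sketched in a few lines (the registry, names, and string-transformer stand-ins are invented for illustration; this is not Solr's actual API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class JustInTimeChain {
    // Hypothetical registry of named document transformers; in the proposal,
    // each name would map to a configured update processor.
    static final Map<String, UnaryOperator<String>> REGISTRY = new LinkedHashMap<>();
    static {
        REGISTRY.put("trim", String::trim);
        REGISTRY.put("lower", s -> s.toLowerCase());
    }

    // Build a chain on the fly from a processor=a,b,c style parameter and
    // apply the transformers in exactly the order the request specified.
    static String apply(String processorParam, String doc) {
        String result = doc;
        for (String name : processorParam.split(",")) {
            UnaryOperator<String> p = REGISTRY.get(name);
            if (p == null) {
                throw new IllegalArgumentException("unknown processor: " + name);
            }
            result = p.apply(result);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(apply("trim,lower", "  Hello WORLD  "));  // hello world
    }
}
```

Order matters: {{processor=trim,lower}} and {{processor=lower,trim}} build different chains, which is exactly the "apply in the order specified" semantics.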







[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #800: POMs out of sync

2014-12-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/800/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([2E8E4104C674FD50]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:105)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([4F86EA603418726E]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
at __randomizedtesting.SeedInfo.seed([4F86EA603418726E]:0)




Build Log:
[...truncated 53939 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:552: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:204: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 385 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259413#comment-14259413
 ] 

Alexandre Rafalovitch commented on SOLR-6892:
-

 Let's apply the transformers in the order they are specified and let the 
 system take care of the rest and avoid surprises

Actually, having code hidden somewhere inside the system doing the non-trivial 
thing is what will create surprises. Right now, the user can look at the XML 
file and step through the cross-references to see what actually happened. 
Moving to on-the-fly, case-by-case behavior will *increase* the surprises. So 
the proposal and the reasoning are not quite aligned here.

Things like pre-defined names for standard components could decrease 
surprises. The rest of the proposal does not.







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259419#comment-14259419
 ] 

Noble Paul commented on SOLR-6892:
--

The configuration is not going away. We will still have the individual URPs 
specified and configured. The point is, the chain does nothing extra. 
Specifying the URP list at request time is no more complex than deciding the 
chain name. It is not taking power away but adding the power to mix and match 
at request time.







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259422#comment-14259422
 ] 

Alexandre Rafalovitch commented on SOLR-6892:
-

So, would the better explanation be that you have the option of 
pre-configuring and naming individual items on the stack, and then composing 
them either into a pre-existing stack (effectively with aliases) or 
dynamically on the fly?

So the addressable unit becomes an individual pre-configured URP (atom) as 
opposed to the full stack (molecule)?

That would make more sense, though you still need to be super clear on what 
becomes hidden from the XML file. For example, there should be an easy way to 
query all the pre-configured components. One of the issues with ElasticSearch 
is that it is hard to tell what those symbolic (analyzer chain) names 
correspond to, as they are hardcoded somewhere deep within it.







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259429#comment-14259429
 ] 

Noble Paul commented on SOLR-6892:
--

If there are components that need little or no configuration, they can be made 
implicitly available with well-known names. Other components which require 
configuration will have to be configured in XML.
But your explanation is correct: we are changing the atomic unit from a chain 
to a URP.







[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259430#comment-14259430
 ] 

Yonik Seeley commented on SOLR-6892:


We shouldn't let too much implementation leak into the interface.  
DistribUpdateProcessor, etc, are much more implementation than interface.  For 
example, should one need to know that DistribUpdateProcessor is needed for 
atomic updates?  What if it's split into two processors in the future?  
Likewise for schemaless - it's currently implemented as a whole bunch of 
processors, but I could see it moving to a single processor in the future.  
It's implementation.  People should not be specifying this stuff on requests.

bq. For example, there should be an easy way to query all the pre-configured 
components.

Perhaps that's all this feature should be... a way to add additional named 
processors to the chain.  That should be relatively safe.







[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11818 - Still Failing!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11818/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   collection1:{ replicationFactor:1, shards:{   
shard1:{ range:8000-, state:active,   
  replicas:{core_node2:{ core:collection1, 
base_url:http://127.0.0.1:34946/j_ed/ky;, 
node_name:127.0.0.1:34946_j_ed%2Fky, state:active,  
   leader:true}}},   shard2:{ range:0-7fff, 
state:active, replicas:{   core_node1:{ 
core:collection1, base_url:http://127.0.0.1:60865/j_ed/ky;,  
   node_name:127.0.0.1:60865_j_ed%2Fky, 
state:active, leader:true},   core_node3:{
 core:collection1, 
base_url:http://127.0.0.1:40088/j_ed/ky;, 
node_name:127.0.0.1:40088_j_ed%2Fky, state:active, 
router:{name:compositeId}, maxShardsPerNode:1, 
autoAddReplicas:false, autoCreated:true},   control_collection:{  
   replicationFactor:1, shards:{shard1:{ 
range:8000-7fff, state:active, 
replicas:{core_node1:{ core:collection1, 
base_url:http://127.0.0.1:42361/j_ed/ky;, 
node_name:127.0.0.1:42361_j_ed%2Fky, state:active,  
   leader:true, router:{name:compositeId}, 
maxShardsPerNode:1, autoAddReplicas:false, 
autoCreated:true},   c8n_1x2:{ replicationFactor:2, 
shards:{shard1:{ range:8000-7fff, 
state:active, replicas:{   core_node1:{ 
core:c8n_1x2_shard1_replica2, 
base_url:http://127.0.0.1:42361/j_ed/ky;, 
node_name:127.0.0.1:42361_j_ed%2Fky, state:active,  
   leader:true},   core_node2:{ 
core:c8n_1x2_shard1_replica1, 
base_url:http://127.0.0.1:60865/j_ed/ky;, 
node_name:127.0.0.1:60865_j_ed%2Fky, state:recovering,  
   router:{name:compositeId}, maxShardsPerNode:1, 
autoAddReplicas:false}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  collection1:{
replicationFactor:1,
shards:{
  shard1:{
range:8000-,
state:active,
replicas:{core_node2:{
core:collection1,
base_url:http://127.0.0.1:34946/j_ed/ky;,
node_name:127.0.0.1:34946_j_ed%2Fky,
state:active,
leader:true}}},
  shard2:{
range:0-7fff,
state:active,
replicas:{
  core_node1:{
core:collection1,
base_url:http://127.0.0.1:60865/j_ed/ky;,
node_name:127.0.0.1:60865_j_ed%2Fky,
state:active,
leader:true},
  core_node3:{
core:collection1,
base_url:http://127.0.0.1:40088/j_ed/ky;,
node_name:127.0.0.1:40088_j_ed%2Fky,
state:active,
router:{name:compositeId},
maxShardsPerNode:1,
autoAddReplicas:false,
autoCreated:true},
  control_collection:{
replicationFactor:1,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{core_node1:{
core:collection1,
base_url:http://127.0.0.1:42361/j_ed/ky;,
node_name:127.0.0.1:42361_j_ed%2Fky,
state:active,
leader:true,
router:{name:compositeId},
maxShardsPerNode:1,
autoAddReplicas:false,
autoCreated:true},
  c8n_1x2:{
replicationFactor:2,
shards:{shard1:{
range:8000-7fff,
state:active,
replicas:{
  core_node1:{
core:c8n_1x2_shard1_replica2,
base_url:http://127.0.0.1:42361/j_ed/ky;,
node_name:127.0.0.1:42361_j_ed%2Fky,
state:active,
leader:true},
  core_node2:{
core:c8n_1x2_shard1_replica1,
base_url:http://127.0.0.1:60865/j_ed/ky;,
node_name:127.0.0.1:60865_j_ed%2Fky,
state:recovering,
router:{name:compositeId},
maxShardsPerNode:1,
autoAddReplicas:false}}
at 
__randomizedtesting.SeedInfo.seed([BF21A17D56384635:3EC72F6521672609]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1940)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:247)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 

[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259437#comment-14259437
 ] 

Noble Paul commented on SOLR-6892:
--

Yes, Yonik. The default URP chain must be immutable. This is about adding URPs 
before that chain.
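That constraint can be sketched as follows (the chain contents are invented stand-in names, not Solr's real processor list; this is only an illustration of "prepend only, default tail immutable"):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PrependToDefaultChain {
    // Hypothetical immutable default chain (stand-ins for the log,
    // distributed-update and run-update processors).
    static final List<String> DEFAULT_CHAIN =
        Collections.unmodifiableList(Arrays.asList("log", "distrib", "run"));

    // Request-time processors are only ever prepended; the default tail
    // stays intact, so implementation details cannot be skipped.
    static List<String> effectiveChain(String processorParam) {
        List<String> chain = new ArrayList<>(Arrays.asList(processorParam.split(",")));
        chain.addAll(DEFAULT_CHAIN);
        return chain;
    }

    public static void main(String[] args) {
        System.out.println(effectiveChain("dedupe,trim"));
        // [dedupe, trim, log, distrib, run]
    }
}
```

This addresses Yonik's concern: a request can add named processors but can never remove or reorder the implementation-defined tail of the chain.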







[jira] [Commented] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2014-12-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259438#comment-14259438
 ] 

Per Steffensen commented on SOLR-6810:
--

bq. Since dqa.forceSkipGetIds is always true for this new algorithm then 
computing the set X is not necessary and we can just directly fetch all return 
fields from individual shards and return the response to the user. Is that 
correct?

This is what happens by default with the new algorithm. But dqa.forceSkipGetIds 
is not always true: it is true by default, but you can explicitly set it to 
false by sending dqa.forceSkipGetIds=false in your request. So basically there 
are four options:
* old alg without dqa.forceSkipGetIds or with dqa.forceSkipGetIds=false 
(default before SOLR-6810, and currently also after SOLR-6810)
* old alg with dqa.forceSkipGetIds=true (same as with distrib.singlePass=true 
before SOLR-6810)
* new alg without dqa.forceSkipGetIds or with dqa.forceSkipGetIds=true (does as 
you describe above)
* new alg with dqa.forceSkipGetIds=false (does as described in the JavaDoc you 
quoted)

The JavaDoc descriptions describe how the alg works WITHOUT dqa.forceSkipGetIds 
switched on. But dqa.forceSkipGetIds is switched on for the new alg by default. 
The JavaDoc for ShardParams.DQA.FORCE_SKIP_GET_IDS_PARAM describes how the two 
algs are altered when running with dqa.forceSkipGetIds=true. The thing is that 
you need to know this part as well to understand how the new alg works by 
default.

bq. I think the DefaultProvider and DefaultDefaultProvider aren't necessary? We 
can just keep a single static ShardParams.getDQA(SolrParams params) method and 
modify it if we ever need to change the default.

Well, I would prefer to keep ShardParams.DQA.get(params) instead of having a 
ShardParams.getDQA(params); I think it provides better context. But I will 
survive if you want to change it.
DefaultProvider is supposed to isolate the default decisions. 
DefaultDefaultProvider is an implementation that calculates the out-of-the-box 
defaults. It could be done directly in ShardParams.DQA.get, but I like to 
structure things. I have to admit, though, that the main reason I added the 
DefaultProvider was that it makes it easier to change the default decisions 
when running the test suite. I would like to randomly select the DQA used for 
every single query fired across the entire test suite; this way we get very 
thorough test coverage of both algs. Having the option of changing the 
DefaultProvider made it very easy to achieve this in SolrTestCaseJ4:
{code}
private static DQA.DefaultProvider testDQADefaultProvider =
    new DQA.DefaultProvider() {
      @Override
      public DQA getDefault(SolrParams params) {
        // Select the DQA to use at random
        int algNo = Math.abs(random().nextInt() % ShardParams.DQA.values().length);
        return DQA.values()[algNo];
      }
    };
{code}
{code}
DQA.setDefaultProvider(testDQADefaultProvider);
{code}

bq. If a user wants to change the default, the dqa can be set in the defaults 
section of the search handler.

I know it is a matter of opinion, but in my mind the best place to deal with 
the default for DQA is in the code that deals with DQA - not somewhere else. 
This gives much better isolation and makes the code easier to understand. You 
can essentially navigate to ShardParams.DQA and read the code and JavaDoc and 
understand everything about DQAs. You do not have to know that there is a 
decision about the default in the SearchHandler. But if you want to change 
that, it is ok with me.

bq. Why do we need the switchToTestDQADefaultProvider() and 
switchToOriginalDQADefaultProvider() methods? You are already applying the DQA 
for each request so why is the switch necessary?

No, I am not applying the DQA for each request. I trust you understand why I 
want to run with a randomized DQA across the entire test-suite - this is why I 
introduced the testDQADefaultProvider. In tests that explicitly deal with 
testing DQA stuff, in some cases I want to switch to the real DefaultProvider, 
because some of those tests are actually testing out-of-the-box 
default-behaviour, e.g. the verifyForceSkipGetIds tests in 
DistributedQueryComponentOptimizationTest. It is also needed in 
DistributedExpandComponentTest until SOLR-6813 has been solved.

bq. There's still the ShardParams.purpose field which you added in SOLR-6812 
but I removed it. I still think it is unnecessary for purpose to be sent to 
shard. Is it necessary for this patch or is it just an artifact from SOLR-6812?

You are right. It is a mistake that I did not remove ShardParams.purpose.

bq. Did you benchmark it against the current algorithm for other kinds of 
use-cases as well (3-5 shards, small number of rows)? Not asking for id can 
speed up responses there too I think.

I did not do any concrete benchmarking for other requests. We have changed our 
DQA 

[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259440#comment-14259440
 ] 

Erick Erickson commented on SOLR-5507:
--

[~upayavira] Grabbed the code from your fork and compiled. The index.html URL 
gives me just a JSON response, which is fine if that's what you expect - I know 
it's still very early going. I'm wondering, though, whether I'm missing a 
library or something, in which case I'll just be patient ;)

Here's a thought though. IMO, the biggest shortcoming of the current UI is that 
it's not SolrCloud-friendly. What do you think about prioritizing spiffy new 
SolrCloud-friendly stuff before replicating the current functionality? True, 
people would be flipping back and forth between the two for a while, but spiffy 
new cloud stuff would add functionality...

It's up to the people doing the work of course, this is just a comment from the 
peanut gallery, people doing the work get to decide ;)...

Either way, the infrastructure needs to be in place first I'd guess. Thanks for 
taking this on!

Whee! Here we go!

 Admin UI - Refactoring using AngularJS
 --

 Key: SOLR-5507
 URL: https://issues.apache.org/jira/browse/SOLR-5507
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Attachments: SOLR-5507.patch


 On the LSR in Dublin, I've talked again to [~upayavira] and this time we 
 talked about refactoring the existing UI using AngularJS: providing (more, 
 internal) structure and what not ;)
 He already started working on the refactoring, so this is more a 'tracking' 
 issue about the progress he/we make there.
 Will extend this issue with a bit more context & additional information, w/ 
 thoughts about the possible integration in the existing UI and more (:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2014-12-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259438#comment-14259438
 ] 

Per Steffensen edited comment on SOLR-6810 at 12/27/14 6:09 PM:


bq. Since dqa.forceSkipGetIds is always true for this new algorithm then 
computing the set X is not necessary and we can just directly fetch all return 
fields from individual shards and return the response to the user. Is that 
correct?

This is what happens by default with the new algorithm. But dqa.forceSkipGetIds 
is not always true. It is true by default, but you can explicitly set it to 
false by sending dqa.forceSkipGetIds=false in your request. So basically there 
are four options
* old alg without dqa.forceSkipGetIds or with dqa.forceSkipGetIds=false 
(default before SOLR-6810, and currently also after SOLR-6810)
* old alg with dqa.forceSkipGetIds=true (same as with distrib.singlePass=true 
before SOLR-6810)
* new alg without dqa.forceSkipGetIds or with dqa.forceSkipGetIds=true (does as 
you describe above)
* new alg with dqa.forceSkipGetIds=false (does as described in the JavaDoc you 
quoted)

The JavaDoc descriptions describe how the alg works WITHOUT dqa.forceSkipGetIds 
switched on. But dqa.forceSkipGetIds is switched on for the new alg by default. 
The JavaDoc for ShardParams.DQA.FORCE_SKIP_GET_IDS_PARAM describes how the two 
algs are altered when running with dqa.forceSkipGetIds=true. The thing is that 
you need to know this part as well to understand how the new alg works by 
default.

bq. I think the DefaultProvider and DefaultDefaultProvider aren't necessary? We 
can just keep a single static ShardParams.getDQA(SolrParams params) method and 
modify it if we ever need to change the default.

Well, I would prefer to keep ShardParams.DQA.get(params) instead of having a 
ShardParams.getDQA(params) - I think it gives better context. But I will 
survive if you want to change it.
DefaultProvider is supposed to isolate the default decisions. 
DefaultDefaultProvider is an implementation that calculates the out-of-the-box 
defaults. It could be done directly in ShardParams.DQA.get, but I like to 
structure things. I have to admit, though, that the main reason I added the 
DefaultProvider was that it makes it easier to change the default decisions 
made when running the test-suite. I would like to randomly select the DQA to 
be used for every single query fired across the entire test-suite. This way we 
will have very thorough test coverage of both algs. Having the option of 
changing the DefaultProvider made it very easy to achieve this in 
SolrTestCaseJ4:
{code}
private static DQA.DefaultProvider testDQADefaultProvider =
    new DQA.DefaultProvider() {
      @Override
      public DQA getDefault(SolrParams params) {
        // Select the DQA randomly; nextInt(bound) gives a uniform
        // index in [0, bound) without the abs/modulo gymnastics
        return DQA.values()[random().nextInt(ShardParams.DQA.values().length)];
      }
    };
{code}
{code}
DQA.setDefaultProvider(testDQADefaultProvider);
{code}

bq. If a user wants to change the default, the dqa can be set in the defaults 
section of the search handler.

I know it is a matter of opinion, but in my mind the best place to deal with 
the default for DQA is in the code that deals with DQA - not somewhere else. 
This gives much better isolation and makes the code easier to understand. You 
can essentially navigate to ShardParams.DQA and read the code and JavaDoc and 
understand everything about DQAs. You do not have to know that there is a 
decision about the default in the SearchHandler. But if you want to change 
that, it is ok with me.

bq. Why do we need the switchToTestDQADefaultProvider() and 
switchToOriginalDQADefaultProvider() methods? You are already applying the DQA 
for each request so why is the switch necessary?

No, I am not applying the DQA for each request. I trust you understand why I 
want to run with a randomized DQA across the entire test-suite - this is why I 
introduced the testDQADefaultProvider. In tests that explicitly deal with 
testing DQA stuff, in some cases I want to switch to the real DefaultProvider, 
because some of those tests are actually testing out-of-the-box 
default-behaviour, e.g. the verifyForceSkipGetIds tests in 
DistributedQueryComponentOptimizationTest. It is also needed in 
DistributedExpandComponentTest until SOLR-6813 has been solved.

bq. There's still the ShardParams.purpose field which you added in SOLR-6812 
but I removed it. I still think it is unnecessary for purpose to be sent to 
shard. Is it necessary for this patch or is it just an artifact from SOLR-6812?

You are right. It is a mistake that I did not remove ShardParams.purpose.

bq. Did you benchmark it against the current algorithm for other kinds of 
use-cases as well (3-5 shards, small number of rows)? Not asking for id can 
speed up responses there too I think.

I did not do any concrete 

[jira] [Commented] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2014-12-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259445#comment-14259445
 ] 

Per Steffensen commented on SOLR-6810:
--

bq.  IMO, one shouldn't have to look at the patch to figure out what it's 
trying to do.

Seems reasonable. The way things change is IMHO fairly well documented in the 
JavaDocs of ShardParams.DQA, so I will just steal from there:
* Old DQA (FIND_ID_RELEVANCE_FETCH_BY_IDS)
{code}
/**
 * Algorithm:
 * - Shard-queries 1) Ask, by forwarding the outer query, each shard for id
 *   and relevance of the (up to) #rows most relevant matching documents
 * - Find among those id/relevances the #rows ids with the highest global
 *   relevances (let's call this set of ids X)
 * - Shard-queries 2) Ask, by sending ids, each shard to return the
 *   documents from set X that it holds
 * - Return the fetched documents to the client
 */
...
// Default: do not force skip get-ids phase
{code}
* New DQA (FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS)
{code}
/**
 * Algorithm:
 * - Shard-queries 1) Ask, by forwarding the outer query, each shard for
 *   the relevance of the (up to) #rows most relevant matching documents
 * - Find among those relevances the #rows highest global relevances.
 *   Note for each shard S how many entries (docs_among_most_relevant(S))
 *   it has among the #rows globally highest relevances
 * - Shard-queries 2) Ask, by forwarding the outer query, each shard S for
 *   id and relevance of the (up to) #docs_among_most_relevant(S) most
 *   relevant matching documents
 * - Find among those id/relevances the #rows ids with the highest global
 *   relevances (let's call this set of ids X)
 * - Shard-queries 3) Ask, by sending ids, each shard to return the
 *   documents from set X that it holds
 * - Return the fetched documents to the client
 *
 * Advantages:
 * Asking for data from the store (id in shard-queries 1) of
 * FIND_ID_RELEVANCE_FETCH_BY_IDS) can be expensive, so you sometimes want
 * to ask for data from as few documents as possible.
 * The main purpose of this algorithm is to limit the rows asked for in
 * shard-queries 2) compared to shard-queries 1) of
 * FIND_ID_RELEVANCE_FETCH_BY_IDS.
 * Let's call the number of rows asked for by the outer request outer-rows.
 * Shard-queries 2) will never ask for data from more than outer-rows
 * documents in total across all involved shards. Shard-queries 1) of
 * FIND_ID_RELEVANCE_FETCH_BY_IDS will ask each shard for data from
 * outer-rows documents, so in the worst case, if each shard contains
 * outer-rows matching documents, you will fetch data for
 * (number of shards involved) * outer-rows documents.
 * Using FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS becomes more
 * beneficial the more
 * - shards are involved
 * - and/or matching documents each shard holds
 */
...
// Default: force skip get-ids phase. In this algorithm there is really
// never any reason not to skip it
{code}
* dqa.forceSkipGetIds
{code}
/** Request parameter to force skipping the get-ids phase of the distributed
 * query. Value: true or false.
 * Even if you do not force it, the system might choose to do it anyway.
 * Skipping the get-ids phase means:
 * - FIND_ID_RELEVANCE_FETCH_BY_IDS: fetch entire documents in shard-queries
 *   1) and skip shard-queries 2)
 * - FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS: fetch entire
 *   documents in shard-queries 2) and skip shard-queries 3)
 */
{code}
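The advantage described in the JavaDoc can be put into numbers. A rough worst-case sketch (my own arithmetic derived from the description above, assuming every shard has at least outer-rows matching hits - not actual Solr code):
{code}
// Worst-case count of documents whose stored fields are touched, derived
// from the JavaDoc above. Illustrative arithmetic only.
class DqaWorstCase {
    // FIND_ID_RELEVANCE_FETCH_BY_IDS: shard-queries 1) read the id for up to
    // outer-rows docs on every shard; the get-ids phase then fetches the
    // outer-rows winning documents themselves
    static long oldAlgStoreReads(int shards, int outerRows) {
        return (long) shards * outerRows + outerRows;
    }

    // FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS (with the default
    // dqa.forceSkipGetIds=true): shard-queries 1) read no stored fields, and
    // shard-queries 2) are capped at outer-rows documents across all shards
    static long newAlgStoreReads(int shards, int outerRows) {
        return outerRows;
    }
}
{code}
For 200 shards and rows=1000 that is 201,000 store reads in the worst case for the old algorithm against 1,000 for the new one.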

 Faster searching limited but high rows across many shards all with many hits
 

 Key: SOLR-6810
 URL: https://issues.apache.org/jira/browse/SOLR-6810
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, performance
 Attachments: branch_5x_rev1642874.patch, branch_5x_rev1642874.patch, 
 branch_5x_rev1645549.patch


 Searching limited but high rows across many shards all with many hits is 
 slow.
 E.g.
 * Query from outside client: q=something&rows=1000
 * Resulting in sub-requests to each shard something a-la this
 ** 1) q=something&rows=1000&fl=id,score
 ** 2) Request the full documents with ids in the global-top-1000 found among 
 the top-1000 from each shard
 What does the subject mean?
 * limited but high rows means 1000 in the example above
 * many shards means 200-1000 in our case
 * all with many hits means that each of the shards has a significant 
 number of hits on the query
 The problem grows with all three factors above.
 Doing such a query on our system takes between 5 min and 1 hour - depending on 
 a lot of things. It ought to be much faster, so let's make it so.
 Profiling shows that the problem is that it takes lots of 

[jira] [Commented] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2014-12-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259447#comment-14259447
 ] 

Per Steffensen commented on SOLR-6810:
--

TestDistributedQueryAlgorithm.testDocReads shows very well exactly how the 
number of store accesses is reduced
{code}
// Test the number of documents read from store using
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS vs
// FIND_ID_RELEVANCE_FETCH_BY_IDS. This demonstrates the advantage of
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS over
// FIND_ID_RELEVANCE_FETCH_BY_IDS (and vice versa)
private void testDocReads() throws Exception {
  for (int startValue = 0; startValue <= MAX_START; startValue++) {
    // FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS
    // (assuming skipGetIds used - the default)
    // Only reads data (required fields) from store for
    // rows + (#shards * start) documents across all shards.
    // This can be optimized to become only rows.
    // Only reads the data once
    testDQADocReads(ShardParams.DQA.FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS,
        startValue, ROWS, ROWS + (startValue * jettys.size()),
        ROWS + (startValue * jettys.size()));

    // FIND_ID_RELEVANCE_FETCH_BY_IDS (assuming skipGetIds not used - the default)
    // Reads data (ids only) from store for (rows + startValue) documents per
    // shard, i.e. (rows + startValue) * #shards in total.
    // Besides that, reads data (required fields) for rows documents across all shards
    testDQADocReads(ShardParams.DQA.FIND_ID_RELEVANCE_FETCH_BY_IDS,
        startValue, ROWS, (ROWS + startValue) * jettys.size(),
        ROWS + ((ROWS + startValue) * jettys.size()));
  }
}
{code}
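Plugging concrete (made-up) numbers into the expected-count expressions from the test makes the difference tangible. A small standalone sketch of the same formulas:
{code}
// The expected-count formulas from testDocReads, extracted so they can be
// evaluated for sample values. Illustrative only - not the actual test code.
class DocReadCounts {
    // new alg: reads required fields once, for rows + (#shards * start) docs
    static int newAlgReads(int rows, int shards, int start) {
        return rows + shards * start;
    }

    // old alg: reads ids for (rows + start) docs on each shard
    static int oldAlgIdReads(int rows, int shards, int start) {
        return (rows + start) * shards;
    }

    // old alg total: the id reads plus required fields for the rows winners
    static int oldAlgTotalReads(int rows, int shards, int start) {
        return rows + oldAlgIdReads(rows, shards, start);
    }
}
{code}
For rows=10, 4 shards and start=2 this gives 18 store reads for the new algorithm against 58 (48 of them id-only) for the old one.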

 Faster searching limited but high rows across many shards all with many hits
 

 Key: SOLR-6810
 URL: https://issues.apache.org/jira/browse/SOLR-6810
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, performance
 Attachments: branch_5x_rev1642874.patch, branch_5x_rev1642874.patch, 
 branch_5x_rev1645549.patch


 Searching limited but high rows across many shards all with many hits is 
 slow.
 E.g.
 * Query from outside client: q=something&rows=1000
 * Resulting in sub-requests to each shard something a-la this
 ** 1) q=something&rows=1000&fl=id,score
 ** 2) Request the full documents with ids in the global-top-1000 found among 
 the top-1000 from each shard
 What does the subject mean?
 * limited but high rows means 1000 in the example above
 * many shards means 200-1000 in our case
 * all with many hits means that each of the shards has a significant 
 number of hits on the query
 The problem grows with all three factors above.
 Doing such a query on our system takes between 5 min and 1 hour - depending on 
 a lot of things. It ought to be much faster, so let's make it so.
 Profiling shows that the problem is that it takes lots of time to access the 
 store to get ids for (up to) 1000 docs (value of the rows parameter) per shard. 
 With 1000 shards that is up to 1 million ids that have to be fetched. There is 
 really no good reason to ever read information from the store for more than 
 the overall top-1000 documents that have to be returned to the client.
 For further detail see the mail-thread Slow searching limited but high rows 
 across many shards all with high hits started 13/11-2014 on 
 dev@lucene.apache.org






[jira] [Comment Edited] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2014-12-27 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259447#comment-14259447
 ] 

Per Steffensen edited comment on SOLR-6810 at 12/27/14 6:35 PM:


TestDistributedQueryAlgorithm.testDocReads shows very well exactly how the 
number of store accesses is reduced
{code}
// Test the number of documents read from store using
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS vs
// FIND_ID_RELEVANCE_FETCH_BY_IDS. This demonstrates the advantage of
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS over
// FIND_ID_RELEVANCE_FETCH_BY_IDS (and vice versa)
private void testDocReads() throws Exception {
  for (int startValue = 0; startValue <= MAX_START; startValue++) {
    // FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS
    // (assuming skipGetIds used - the default)
    // Only reads data (required fields) from store for
    // rows + (#shards * start) documents across all shards.
    // This can be optimized to become only rows.
    // Only reads the data once
    testDQADocReads(ShardParams.DQA.FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS,
        startValue, ROWS, ROWS + (startValue * jettys.size()),
        ROWS + (startValue * jettys.size()));

    // FIND_ID_RELEVANCE_FETCH_BY_IDS (assuming skipGetIds not used - the default)
    // Reads data (ids only) from store for (rows + startValue) documents per
    // shard, i.e. (rows + startValue) * #shards in total.
    // Besides that, reads data (required fields) for rows documents across all shards
    testDQADocReads(ShardParams.DQA.FIND_ID_RELEVANCE_FETCH_BY_IDS,
        startValue, ROWS, (ROWS + startValue) * jettys.size(),
        ROWS + ((ROWS + startValue) * jettys.size()));
  }
}
{code}
{code}
private void testDQADocReads(ShardParams.DQA dqa, int start, int rows,
    int expectedUniqueIdCount, int expectedTotalCount) {
  ...
}
{code}


was (Author: steff1193):
TestDistributedQueryAlgorithm.testDocReads shows very well exactly how the 
number of store accesses is reduced
{code}
// Test the number of documents read from store using
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS vs
// FIND_ID_RELEVANCE_FETCH_BY_IDS. This demonstrates the advantage of
// FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS over
// FIND_ID_RELEVANCE_FETCH_BY_IDS (and vice versa)
private void testDocReads() throws Exception {
  for (int startValue = 0; startValue <= MAX_START; startValue++) {
    // FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS
    // (assuming skipGetIds used - the default)
    // Only reads data (required fields) from store for
    // rows + (#shards * start) documents across all shards.
    // This can be optimized to become only rows.
    // Only reads the data once
    testDQADocReads(ShardParams.DQA.FIND_RELEVANCE_FIND_IDS_LIMITED_ROWS_FETCH_BY_IDS,
        startValue, ROWS, ROWS + (startValue * jettys.size()),
        ROWS + (startValue * jettys.size()));

    // FIND_ID_RELEVANCE_FETCH_BY_IDS (assuming skipGetIds not used - the default)
    // Reads data (ids only) from store for (rows + startValue) documents per
    // shard, i.e. (rows + startValue) * #shards in total.
    // Besides that, reads data (required fields) for rows documents across all shards
    testDQADocReads(ShardParams.DQA.FIND_ID_RELEVANCE_FETCH_BY_IDS,
        startValue, ROWS, (ROWS + startValue) * jettys.size(),
        ROWS + ((ROWS + startValue) * jettys.size()));
  }
}
{code}

 Faster searching limited but high rows across many shards all with many hits
 

 Key: SOLR-6810
 URL: https://issues.apache.org/jira/browse/SOLR-6810
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, performance
 Attachments: branch_5x_rev1642874.patch, branch_5x_rev1642874.patch, 
 branch_5x_rev1645549.patch


 Searching limited but high rows across many shards all with many hits is 
 slow.
 E.g.
 * Query from outside client: q=something&rows=1000
 * Resulting in sub-requests to each shard something a-la this
 ** 1) q=something&rows=1000&fl=id,score
 ** 2) Request the full documents with ids in the global-top-1000 found among 
 the top-1000 from each shard
 What does the subject mean?
 * limited but high rows means 1000 in the example above
 * many shards means 200-1000 in our case
 * all with many hits means that each of the shards has a significant 
 number of hits on the query
 The problem grows with all three factors above.
 Doing such a query on our system takes between 5 min and 1 hour - depending on 
 a lot of things. It ought to be much faster, so let's make it so.
 Profiling shows that the problem is that it takes lots of time to access the 
 store to get ids for (up to) 1000 docs (value of the rows parameter) per shard. 
 With 1000 shards that is up to 1 million ids that have to be fetched. There is 
 really no good reason to ever read information from 

[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259449#comment-14259449
 ] 

Alexandre Rafalovitch commented on SOLR-6892:
-

bq. The default urp chain must be immutable

Careful with that one. There are sometimes valid reasons for putting an URP 
*after* DistributedUpdateProcessor. I believe it is usually connected with 
accessing stored content during an atomic update. We don't want to completely 
lose that flexibility.

Also, a debugging URP may need to be the last item in the chain.

 Make update processors toplevel components 
 ---

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 The current update processor chain is rather cumbersome and we should be able 
 to use the updateprocessors without a chain.
 The scope of this ticket is 
 * The updateProcessor tag becomes a toplevel tag and will be equivalent to 
 the processor tag inside updateRequestProcessorChain. The only 
 difference is that it should require a {{name}} attribute
 * Any update request will be able to pass a param {{processor=a,b,c}}, 
 where a,b,c are names of update processors. A just-in-time chain will be 
 created with those update processors
 * Some built-in update processors (wherever possible) will be predefined with 
 standard names and can be directly used in requests 






[jira] [Created] (LUCENE-6141) simplify stored fields bulk merge logic

2014-12-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6141:
---

 Summary: simplify stored fields bulk merge logic
 Key: LUCENE-6141
 URL: https://issues.apache.org/jira/browse/LUCENE-6141
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6141.patch

The current logic checks that the same chunk size and compression algorithm were 
used, but this is obsolete as it no longer iterates chunks.

We only need to check that the format version is the same.

This also allows for bulk merging across compression algorithms. A good use 
case is BEST_SPEED -> BEST_COMPRESSION for archiving.
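The simplification could be sketched like this (names invented for illustration - this is not the actual Lucene reader code):

{code}
// Hypothetical sketch of the simplified compatibility check: once bulk merge
// no longer iterates chunks, only the format version must match; chunk size
// and compression mode (BEST_SPEED vs BEST_COMPRESSION) may differ.
final class StoredFieldsSegmentInfo {
    final int formatVersion;
    final int chunkSize;           // no longer relevant for bulk merge
    final String compressionMode;  // no longer relevant for bulk merge

    StoredFieldsSegmentInfo(int formatVersion, int chunkSize, String compressionMode) {
        this.formatVersion = formatVersion;
        this.chunkSize = chunkSize;
        this.compressionMode = compressionMode;
    }

    static boolean canBulkMerge(StoredFieldsSegmentInfo a, StoredFieldsSegmentInfo b) {
        return a.formatVersion == b.formatVersion; // the only remaining requirement
    }
}
{code}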







[jira] [Updated] (LUCENE-6141) simplify stored fields bulk merge logic

2014-12-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6141:

Attachment: LUCENE-6141.patch

simple patch.

 simplify stored fields bulk merge logic
 ---

 Key: LUCENE-6141
 URL: https://issues.apache.org/jira/browse/LUCENE-6141
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6141.patch


 The current logic checks that the same chunk size and compression algorithm 
 were used, but this is obsolete as it no longer iterates chunks.
 We only need to check that the format version is the same.
 This also allows for bulk merging across compression algorithms. A good use 
 case is BEST_SPEED -> BEST_COMPRESSION for archiving.






[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259455#comment-14259455
 ] 

Jack Krupansky commented on SOLR-5507:
--

This issue has gotten confused. Please clarify the summary and description to 
inform readers whether the intention is:

1. Simply refactor the implementation to make the code more maintainable and 
extensible.
2. Add features to the existing UI to cater to advanced users.
3. Revamp the UI itself to cater to new and novice users.
4. Replace the existing UI or supplement it with two UIs, one for novices 
(guiding them through processes) and one for experts (accessing more features 
more easily).

IOW, what are the requirements here?

I'm not opposed to any of the above, but the original issue summary and 
description seemed more focused on the internal implementation rather than the 
externals of a new UI.


 Admin UI - Refactoring using AngularJS
 --

 Key: SOLR-5507
 URL: https://issues.apache.org/jira/browse/SOLR-5507
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Attachments: SOLR-5507.patch


 On the LSR in Dublin, I've talked again to [~upayavira] and this time we 
 talked about refactoring the existing UI using AngularJS: providing (more, 
 internal) structure and what not ;)
 He already started working on the refactoring, so this is more a 'tracking' 
 issue about the progress he/we make there.
 Will extend this issue with a bit more context & additional information, w/ 
 thoughts about the possible integration in the existing UI and more (:






[jira] [Updated] (SOLR-6888) Decompressing documents on first-pass distributed queries to get docId is inefficient, use indexed values instead?

2014-12-27 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6888:
-
Summary: Decompressing documents on first-pass distributed queries to get 
docId is inefficient, use indexed values instead?  (was: Find a way to avoid 
decompressing entire blocks for just the docId/uniqueKey)

 Decompressing documents on first-pass distributed queries to get docId is 
 inefficient, use indexed values instead?
 --

 Key: SOLR-6888
 URL: https://issues.apache.org/jira/browse/SOLR-6888
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0, Trunk
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-6888-hacktiming.patch


 Assigning this to myself to just not lose track of it, but I won't be working 
 on this in the near term; anyone feeling ambitious should feel free to grab 
 it.
 Note, docId used here is whatever is defined for uniqueKey...
 Since Solr 4.1, the compression/decompression process is based on 16K blocks 
 and is automatic, and not configurable. So, to get a single stored value one 
 must decompress an entire 16K block. At least.
 For SolrCloud (and distributed processing in general), we make two trips, one 
 to get the doc id and score (or other sort criteria) and one to return the 
 actual data.
 The first pass here requires that we return the top N docIDs and sort 
 criteria, which means that each and every sub-request has to unpack at least 
 one 16K block (and sometimes more) to get just the doc ID. So if we have 20 
 shards and only want 20 rows, 95% of the decompression cycles will be wasted. 
 Not to mention all the disk reads.
 It seems like we should be able to do better than that. Can we argue that doc 
 ids are 'special' and should be cached somehow? Let's discuss what this would 
 look like. I can think of a couple of approaches:
 1. Since doc IDs are special, can we say that for this purpose returning 
 the indexed version is OK? We'd need to return the actual stored value when 
 the full doc was requested, but for the sub-request only what about returning 
 the indexed value instead of the stored one? On the surface I don't see a 
 problem here, but what do I know? Storing these as DocValues seems useful in 
 this case.
 1a. A variant is treating numeric docIds specially since the indexed value 
 and the stored value should be identical. And DocValues here would be useful 
 it seems. But this seems an unnecessary specialization if 1 is implemented 
 well.
 2. We could cache individual doc IDs, although I'm not sure what use that 
 really is. Would maintaining the cache overwhelm the savings of not 
 decompressing? I really don't like this idea, but am throwing it out there. 
 Doing this from stored data up front would essentially mean decompressing 
 every doc so that seems untenable to try up-front.
 3. We could maintain an array[maxDoc] that held document IDs, perhaps lazily 
 initializing it. I'm not particularly a fan of this either, doesn't seem like 
 a Good Thing. I can see lazy loading being almost, but not quite totally, 
 useless, i.e. a hit ratio near 0, especially since it'd be thrown out on 
 every openSearcher.
 Really, the only one of these that seems viable is 1/1a. The others would 
 all involve decompressing the docs anyway to get the ID, and I suspect that 
 caching would be of very limited usefulness. I guess 1's viability hinges 
 on whether, for internal use, the indexed form of DocId is interchangeable 
 with the stored value.
 Or are there other ways to approach this? Or isn't it something to really 
 worry about?
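The cost at issue can be reproduced with plain java.util.zip: when many small documents are packed into one compressed block, fetching a single field still means inflating the entire block. A minimal sketch (JDK only; the block size and record layout are illustrative, not Lucene's actual stored-fields format):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BlockCost {
    /** Bytes that must be inflated to read a single doc id out of one packed block. */
    static int inflatedSizeForOneId() {
        // Pack many small "documents" into one block, stored-fields style.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 400; i++) {
            sb.append("id=doc").append(i).append(";body=some other stored fields here\n");
        }
        byte[] block = sb.toString().getBytes(StandardCharsets.UTF_8);

        Deflater deflater = new Deflater();
        deflater.setInput(block);
        deflater.finish();
        byte[] compressed = new byte[block.length];
        int clen = deflater.deflate(compressed);
        deflater.end();

        // To read just the first doc's id, the whole block gets inflated anyway.
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed, 0, clen);
            byte[] out = new byte[block.length];
            return inflater.inflate(out);   // == block.length: every doc was decompressed
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            inflater.end();
        }
    }

    public static void main(String[] args) {
        System.out.println("inflated " + inflatedSizeForOneId() + " bytes to read one id");
    }
}
```

This is why options 2 and 3 above buy little: any cache populated from stored fields has already paid the full per-block decompression, whereas 1/1a sidesteps the stored path entirely.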



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4410 - Failure!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4410/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC (asserts: true)

No tests ran.

Build Log:
[...truncated 10399 lines...]
FATAL: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected 
termination of the channel
at hudson.remoting.Request.abort(Request.java:295)
at hudson.remoting.Channel.terminate(Channel.java:814)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:69)
at ..remote call to Windows VBOX(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1356)
at hudson.remoting.Request.call(Request.java:171)
at hudson.remoting.Channel.call(Channel.java:751)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:179)
at com.sun.proxy.$Proxy75.join(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:979)
at hudson.Launcher$ProcStarter.join(Launcher.java:388)
at hudson.tasks.Ant.perform(Ant.java:217)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:770)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:533)
at hudson.model.Run.execute(Run.java:1759)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:89)
at hudson.model.Executor.run(Executor.java:240)
Caused by: java.io.IOException: Unexpected termination of the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.init(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.init(ObjectInputStreamEx.java:40)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)




[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.7.0_67) - Build # 189 - Failure!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/189/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseParallelGC (asserts: 
false)

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MultiThreadedOCPTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.MultiThreadedOCPTest: 1) Thread[id=2548, 
name=OverseerThreadFactory-1562-thread-1, state=TIMED_WAITING, group=Overseer 
collection creation process.] at java.lang.Thread.sleep(Native Method)  
   at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1383)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=2391, 
name=OverseerThreadFactory-1374-thread-5, state=TIMED_WAITING, group=Overseer 
collection creation process.] at java.lang.Thread.sleep(Native Method)  
   at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1509)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.MultiThreadedOCPTest: 
   1) Thread[id=2548, name=OverseerThreadFactory-1562-thread-1, 
state=TIMED_WAITING, group=Overseer collection creation process.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1383)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=2391, name=OverseerThreadFactory-1374-thread-5, 
state=TIMED_WAITING, group=Overseer collection creation process.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1509)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([FC9A7ACF5EBFAA20]:0)


FAILED:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration 
node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because 
creating the ephemeral registration node in ZooKeeper failed
at 
__randomizedtesting.SeedInfo.seed([FC9A7ACF5EBFAA20:F892F53C4C1A4501]:0)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:150)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at 

[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259493#comment-14259493
 ] 

Upayavira commented on SOLR-5507:
-

The way I see it, this ticket is about changing the underlying infrastructure 
to be one that is more amenable to extension.

Any other features/extensions that this should make possible will occur within 
their own tickets.

Whether we go for a complete rewrite and then add new features, or do a 
partial rewrite, who knows - but as you are suggesting, [~jkrupan], this ticket 
relates merely to the feature-for-feature rewrite.  All I ask, though, is 
that you forgive the occasional burst of ebullient enthusiasm!

 Admin UI - Refactoring using AngularJS
 --

 Key: SOLR-5507
 URL: https://issues.apache.org/jira/browse/SOLR-5507
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Attachments: SOLR-5507.patch


 At the LSR in Dublin, I've talked again to [~upayavira], and this time we 
 talked about refactoring the existing UI using AngularJS: providing (more 
 internal) structure and whatnot.
 He already started working on the refactoring, so this is more a 'tracking' 
 issue about the progress he/we make there.
 Will extend this issue with a bit more context & additional information, w/ 
 thoughts about the possible integration in the existing UI and more (:






[jira] [Commented] (LUCENE-6140) simplify inflater usage in deflate CompressionMode

2014-12-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259502#comment-14259502
 ] 

Adrien Grand commented on LUCENE-6140:
--

+1 good catch

 simplify inflater usage in deflate CompressionMode
 --

 Key: LUCENE-6140
 URL: https://issues.apache.org/jira/browse/LUCENE-6140
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6140.patch


 This currently loops-n-grows the output byte[]. But we always decompress the 
 whole block (we don't emit flushes or anything to allow otherwise), we ignore 
 offset/length until the end, and we know the uncompressed size up front... we 
 can just call inflate one time.
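The one-shot pattern, in plain java.util.zip terms, assuming the uncompressed length is known up front as it is for a stored-fields block. This is a sketch of the idea, not the actual Lucene patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class OneShotInflate {

    /** Decompress a block with a single inflate() call, given its known uncompressed size. */
    static byte[] inflateAll(byte[] compressed, int compressedLen, int uncompressedSize) {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed, 0, compressedLen);
            // All input is present and the output buffer is exactly the right
            // size, so no grow-and-retry loop is needed.
            byte[] out = new byte[uncompressedSize];
            int n = inflater.inflate(out);
            if (n != uncompressedSize) {
                throw new IllegalStateException("expected " + uncompressedSize + " bytes, got " + n);
            }
            return out;
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            inflater.end();
        }
    }

    public static void main(String[] args) {
        byte[] original = "the quick brown fox jumps over the lazy dog".getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] compressed = new byte[original.length + 64];
        int clen = deflater.deflate(compressed);
        deflater.end();

        byte[] restored = inflateAll(compressed, clen, original.length);
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```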






[jira] [Commented] (LUCENE-6141) simplify stored fields bulk merge logic

2014-12-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259503#comment-14259503
 ] 

Adrien Grand commented on LUCENE-6141:
--

+1 let's also add a test?

 simplify stored fields bulk merge logic
 ---

 Key: LUCENE-6141
 URL: https://issues.apache.org/jira/browse/LUCENE-6141
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6141.patch


 The current logic checks that the same chunk size and compression algorithm 
 were used, but this is obsolete as merging no longer iterates chunks.
 We only need to check that the format version is the same.
 This also allows for bulk merging across compression algorithms. A good use 
 case is BEST_SPEED -> BEST_COMPRESSION for archiving.
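In sketch form, the change reduces a three-way eligibility check to a version check. The method and parameter names below are illustrative, not the actual Lucene code:

```java
public class BulkMergeCheck {
    // Old rule (sketch): same format version AND same chunk size AND same
    // compression mode were all required before chunks could be bulk-copied.
    static boolean oldCanBulkMerge(int v1, int v2, int chunk1, int chunk2,
                                   String mode1, String mode2) {
        return v1 == v2 && chunk1 == chunk2 && mode1.equals(mode2);
    }

    // New rule (sketch): the format version alone decides, since merging no
    // longer iterates chunks. This is what permits merging a BEST_SPEED
    // segment into a BEST_COMPRESSION index.
    static boolean newCanBulkMerge(int v1, int v2) {
        return v1 == v2;
    }

    public static void main(String[] args) {
        // Same format version but different compression: previously rejected.
        System.out.println(oldCanBulkMerge(5, 5, 16384, 16384, "BEST_SPEED", "BEST_COMPRESSION"));
        System.out.println(newCanBulkMerge(5, 5));
    }
}
```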






[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259519#comment-14259519
 ] 

Yonik Seeley commented on SOLR-6892:


bq. This is about adding URPs before that chain.

Dude, I'm not psychic ;-)  I didn't see that anywhere in this issue before now.

 Make update processors toplevel components 
 ---

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 The current update processor chain is rather cumbersome, and we should be able 
 to use the update processors without a chain.
 The scope of this ticket is:
 * the updateProcessor tag becomes a toplevel tag, equivalent to the 
 processor tag inside updateRequestProcessorChain. The only difference is 
 that it should require a {{name}} attribute
 * any update request will be able to pass a param {{processor=a,b,c}}, 
 where a,b,c are names of update processors. A just-in-time chain will be 
 created from those update processors
 * some built-in update processors (wherever possible) will be predefined with 
 standard names and can be used directly in requests
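A hedged sketch of what the proposal might look like in solrconfig.xml; the element usage and processor names here are illustrative of the ticket's description, not a final syntax:

```xml
<!-- Toplevel, named update processors (proposed): equivalent to a
     <processor> inside an <updateRequestProcessorChain>, but addressable
     individually by name. -->
<updateProcessor name="dedupe"
                 class="solr.processor.SignatureUpdateProcessorFactory"/>
<updateProcessor name="logit"
                 class="solr.LogUpdateProcessorFactory"/>
```

A request such as /update?processor=dedupe,logit would then assemble a just-in-time chain from the named processors.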






[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259524#comment-14259524
 ] 

Upayavira commented on SOLR-5507:
-

The code on github ( https://github.com/upayavira/solr-angular-ui/tree/solr5507 
in solr/webapp/web ) just got way better. The paging is there (in principle), 
and the first page - the index page, with the graphs on the right - is 
implemented. This should be a feature-for-feature match.

I shall just work down, one page at a time now.







[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259525#comment-14259525
 ] 

Alexandre Rafalovitch commented on SOLR-5507:
-

Do you know how you are planning to address the admin-extra pages? They are 
useful but had issues with global style resets, so were not used much.







[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259529#comment-14259529
 ] 

Upayavira commented on SOLR-5507:
-

Personally, I hope they will go away quietly. Beyond that, I haven't thought 
about it yet. This will be much more extensible, so perhaps we just allow 
people to add new pages as and when.

What have people used them for, and what are 'global style resets'?







[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2014-12-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259532#comment-14259532
 ] 

Alexandre Rafalovitch commented on SOLR-5507:
-

Well, the benefit was that they were a file within the collection config, so 
that was - theoretically - an easy way to do a collection-specific add-on, 
including extra pages in the menu tree. I am not sure having the Admin UI in 
AngularJS will by itself solve the same use case, unless you build in some 
fancy magic router.

As to the global style resets: the default CSS was, AFAIK, resetting all the 
styles (headers, etc.). So, if you just wanted a quick admin-extra page, your 
font was set to 12 points and so were all your headers, which was fairly 
painful. The solution would have been to scope the CSS reset so it did not 
affect the included extra pages. But nobody ever worked on it. I am not 
sure there was even a JIRA.
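For reference, the scoping fix being described would look roughly like this in CSS. The selectors are illustrative; the admin UI's actual markup and rules differ:

```css
/* Global reset (the problem): clobbers headings inside included
   admin-extra pages as well as the admin UI's own markup. */
h1, h2, h3, p {
  margin: 0;
  font-size: 12px;
}

/* Scoped reset (the fix): only applies inside the admin UI's own
   container, so admin-extra content keeps its own styling. */
#solr-admin h1, #solr-admin h2, #solr-admin h3, #solr-admin p {
  margin: 0;
  font-size: 12px;
}
```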







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2391 - Failure

2014-12-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2391/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([CA56402266BDD46A:4BB0CE3A11E2B456]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259553#comment-14259553
 ] 

Koji Sekiguchi commented on SOLR-3055:
--

Hi Uchida-san, thank you for your effort in reworking this issue!

Based on your observations (pros and cons), I like the 1st strategy. If you 
agree, why don't you add test cases for that one? Also, don't we need to 
consider other n-gram type Tokenizers and even TokenFilters, such 
as NGramTokenFilter and CJKBigramFilter?

I also think there is a restriction when minGramSize != maxGramSize. If it's 
not significant, I think we can examine the restriction separately from this 
issue, because we rarely set different values for those when searching CJK 
words. But we use NGramTokenizer with a fixed gram size a lot for searching 
CJK words, and we could get a nice performance gain from the patch, as you've 
shown us.
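The optimization under discussion can be sketched in plain Java: with a fixed gram size n, a phrase over consecutive n-grams is, I believe, fully constrained by every n-th gram plus the last one, so the intermediate terms are redundant and can be dropped from the phrase query. This sketch shows only that term-selection rule; the actual NGramPhraseQuery operates on PhraseQuery objects:

```java
import java.util.ArrayList;
import java.util.List;

public class NGramPhraseSketch {
    /** Split text into consecutive n-grams, the way a fixed-size NGramTokenizer would. */
    static List<String> ngrams(String text, int n) {
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + n <= text.length(); i++) {
            grams.add(text.substring(i, i + n));
        }
        return grams;
    }

    /**
     * Keep only the grams a phrase match actually needs: every n-th one,
     * plus the last one (so the tail of the text is still constrained).
     * Rendered as "gram@position", since positions must be preserved.
     */
    static List<String> optimizedPhraseTerms(List<String> grams, int n) {
        List<String> terms = new ArrayList<>();
        for (int i = 0; i < grams.size(); i++) {
            if (i % n == 0 || i == grams.size() - 1) {
                terms.add(grams.get(i) + "@" + i);
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        List<String> grams = ngrams("ABCDE", 2);            // [AB, BC, CD, DE]
        System.out.println(grams);
        System.out.println(optimizedPhraseTerms(grams, 2)); // [AB@0, CD@2, DE@3]
    }
}
```

The fixed-gram-size restriction shows up here directly: the selection rule needs a single n, which is why minGramSize != maxGramSize complicates the optimization.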

 Use NGramPhraseQuery in Solr
 

 Key: SOLR-3055
 URL: https://issues.apache.org/jira/browse/SOLR-3055
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis, search
Reporter: Koji Sekiguchi
Priority: Minor
 Attachments: SOLR-3055-1.patch, SOLR-3055-2.patch, SOLR-3055.patch, 
 schema.xml, solrconfig.xml


 Solr should use NGramPhraseQuery when searching with default slop on n-gram 
 field.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1975 - Still Failing!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1975/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC (asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:62168

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:62168
at 
__randomizedtesting.SeedInfo.seed([C7F2BDE2B52A7CC4:461433FAC2751CF8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:578)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:213)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:209)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:151)
at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-3055) Use NGramPhraseQuery in Solr

2014-12-27 Thread Tomoko Uchida (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259560#comment-14259560
 ] 

Tomoko Uchida commented on SOLR-3055:
-

Thank you for your response.

I will add test code and an updated patch that considers other Tokenizers / 
TokenFilters.

My patch seems to work well for both cases, minGramSize == maxGramSize and 
minGramSize != maxGramSize, but it is not optimized for maxGramSize.
In the minGramSize != maxGramSize case, using maxGramSize for the optimization 
would give the best performance improvement; we can examine that (maybe in 
another issue). In practice, we often set a fixed gram size for CJK words as 
you pointed out, so I think the patch is beneficial even if it is not 
optimized for maxGramSize.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259566#comment-14259566
 ] 

Jack Krupansky commented on SOLR-6892:
--

Issue type should be Improvement, not Bug, right?

 Make update processors toplevel components 
 ---

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul

 The current update processor chain is rather cumbersome, and we should be able 
 to use update processors without a chain.
 The scope of this ticket is:
 * The updateProcessor tag becomes a toplevel tag, equivalent to the 
 processor tag inside updateRequestProcessorChain. The only 
 difference is that it requires a {{name}} attribute.
 * Any update request can pass a param {{processor=a,b,c}}, 
 where a, b, c are names of update processors. A just-in-time chain will be 
 created from those update processors.
 * Some built-in update processors (wherever possible) will be predefined with 
 standard names and can be used directly in requests.
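The just-in-time chain idea above can be sketched roughly as follows. This is a hypothetical illustration, not Solr's actual API: processors are modeled as simple document transforms in a top-level registry, and a request param like processor=a,b,c selects and composes them on the fly. All names here (REGISTRY, buildChain) are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

/**
 * Hypothetical sketch of a just-in-time update processor chain:
 * named processors registered top-level, composed per request
 * from a comma-separated param. Not Solr code.
 */
public class JitChainSketch {
    // Top-level registry of named processors; a processor is modeled
    // as a string transform for the purposes of this sketch.
    static final Map<String, UnaryOperator<String>> REGISTRY = new LinkedHashMap<>();

    /** Builds a chain from a param value like "a,b,c". */
    static List<UnaryOperator<String>> buildChain(String processorParam) {
        List<UnaryOperator<String>> chain = new ArrayList<>();
        for (String name : processorParam.split(",")) {
            UnaryOperator<String> p = REGISTRY.get(name.trim());
            if (p == null) {
                throw new IllegalArgumentException("unknown processor: " + name);
            }
            chain.add(p);
        }
        return chain;
    }

    public static void main(String[] args) {
        REGISTRY.put("trim", String::trim);
        REGISTRY.put("lower", s -> s.toLowerCase());
        String doc = "  Hello  ";
        for (UnaryOperator<String> p : buildChain("trim,lower")) {
            doc = p.apply(doc); // apply each processor in request order
        }
        System.out.println(doc); // prints "hello"
    }
}
```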






[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259567#comment-14259567
 ] 

Jack Krupansky commented on SOLR-6892:
--

It might be instructive to look at how the search handler deals with search 
components, and possibly to rationalize the two handlers so that there 
is a little more commonality in how lists of components/processors are 
specified. For example, consider first, last, and full processor lists: 
that is, be able to specify a list of processors to apply before the 
solrconfig-specified list, after it, or to completely replace the 
solrconfig-specified list of processors.
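The first/last/full semantics suggested here could look something like the following. This is a speculative sketch of the merge rule only, with invented names, not an actual Solr or proposed API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Speculative sketch of first/last/full processor-list semantics:
 * "first" is prepended to the configured chain, "last" is appended,
 * and a non-null "full" replaces the configured chain entirely.
 * Names are hypothetical.
 */
public class ProcessorListMerge {
    static List<String> effectiveChain(List<String> configured,
                                       List<String> first,
                                       List<String> last,
                                       List<String> full) {
        if (full != null) {
            return full; // full list completely replaces the configured chain
        }
        List<String> merged = new ArrayList<>(first);
        merged.addAll(configured); // solrconfig-specified processors in the middle
        merged.addAll(last);
        return merged;
    }

    public static void main(String[] args) {
        List<String> configured = Arrays.asList("dedupe", "log", "run");
        System.out.println(effectiveChain(configured,
                Arrays.asList("timestamp"), Arrays.asList("audit"), null));
        // prints [timestamp, dedupe, log, run, audit]
    }
}
```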




[jira] [Commented] (SOLR-6892) Make update processors toplevel components

2014-12-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259573#comment-14259573
 ] 

Noble Paul commented on SOLR-6892:
--

Thanks, everyone. The ticket is currently short on details; I hope to update 
it with finer details soon.




[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 191 - Failure!

2014-12-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/191/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseSerialGC 
(asserts: true)

1 tests failed.
FAILED:  org.apache.solr.cloud.TestDistribDocBasedVersion.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:51557/g/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:51557/g/collection1
	at __randomizedtesting.SeedInfo.seed([DD969F037AE737E9:5C70111B0DB857D5]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:564)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
	at org.apache.solr.cloud.TestDistribDocBasedVersion.vadd(TestDistribDocBasedVersion.java:259)
	at org.apache.solr.cloud.TestDistribDocBasedVersion.doTestDocVersions(TestDistribDocBasedVersion.java:190)
	at org.apache.solr.cloud.TestDistribDocBasedVersion.doTest(TestDistribDocBasedVersion.java:102)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:871)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at