[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40) - Build # 4605 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4605/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([F910913F8B0DB153]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:496)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
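The assertion above comes from SolrTestCaseJ4.endTrackingSearchers, which compares how many SolrIndexSearcher instances the suite opened against how many it closed. A minimal sketch of that bookkeeping pattern (simplified and hypothetical; not Solr's actual implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the leak bookkeeping behind endTrackingSearchers:
// every searcher open bumps one counter, every close bumps the other, and
// suite teardown asserts that the two counts match.
public class SearcherTracker {
    private final AtomicLong opens = new AtomicLong();
    private final AtomicLong closes = new AtomicLong();

    void onOpen()  { opens.incrementAndGet(); }
    void onClose() { closes.incrementAndGet(); }

    /** Returns null when balanced, or a message in the "opens=N closes=M" style otherwise. */
    static String check(long opens, long closes) {
        return opens == closes
                ? null
                : "ERROR: SolrIndexSearcher opens=" + opens + " closes=" + closes;
    }

    String endTracking() { return check(opens.get(), closes.get()); }

    public static void main(String[] args) {
        SearcherTracker t = new SearcherTracker();
        t.onOpen(); t.onOpen(); t.onClose();   // one searcher never closed
        System.out.println(t.endTracking());   // ERROR: SolrIndexSearcher opens=2 closes=1
    }
}
```

An "opens=51 closes=50" report, as in this failure, means exactly one searcher was left unclosed somewhere in the suite.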


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=3438, name=searcherExecutor-2475-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=3438, name=searcherExecutor-2475-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([F910913F8B0DB153]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=3438, name=searcherExecutor-2

[jira] [Commented] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread Shalin Shekhar Mangar (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512278#comment-14512278 ]

Shalin Shekhar Mangar commented on SOLR-7469:
-

Thanks Hoss!

> check-licenses happily ignoring incorrect start.jar.sha1, current 
> start.jar.sha1 on trunk is out of date.
> -
>
> Key: SOLR-7469
> URL: https://issues.apache.org/jira/browse/SOLR-7469
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7469.patch
>
>
> as of r1675948, "ant clean jar-checksums" results in a modified 
> solr/licenses/start.jar.sha1 ...
> {noformat}
> hossman@frisbee:~/lucene/dev$ svn diff
> Index: solr/licenses/start.jar.sha1
> ===
> --- solr/licenses/start.jar.sha1  (revision 1675948)
> +++ solr/licenses/start.jar.sha1  (working copy)
> @@ -1 +1 @@
> -24e798bde886e1430978ece6c4aa90d781e2da30
> +b91b72f9167cce4c1caea0f8363fd9984456e34d
> {noformat}
> ...so apparently the version of start.jar we're fetching from ivy & using in 
> solr changed at some point w/o the SHA1 being updated?
> apparently because "check-licenses" is explicitly ignoring start.jar...
> {noformat}
> 
> 
> {noformat}
> ...this is seriously messed up.  we need to fix this.
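For context, the verification at issue boils down to hashing the fetched jar bytes and comparing the hex digest against the recorded .sha1 file. A self-contained sketch of that core check (illustrative only; the real work lives in the check-licenses and jar-checksums Ant targets):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Compute a hex SHA-1 digest and compare it against a recorded checksum,
// the way a licenses/*.sha1 file is checked against the fetched jar bytes.
public class Sha1Check {
    static String sha1Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-1 is mandatory in every JRE", e);
        }
    }

    // Checksum files may differ in case and trailing whitespace; normalize both.
    static boolean matches(byte[] jarBytes, String recordedSha1) {
        return sha1Hex(jarBytes).equalsIgnoreCase(recordedSha1.trim());
    }

    public static void main(String[] args) {
        byte[] content = "abc".getBytes(StandardCharsets.UTF_8);
        // Canonical FIPS-180 test vector for SHA-1("abc"):
        System.out.println(sha1Hex(content)); // a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```

A check that skips a jar entirely, as check-licenses did for start.jar, never reaches this comparison, which is how the stale .sha1 went unnoticed.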



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7464) Cluster status API call should be served by the node which it was called from

2015-04-24 Thread Shalin Shekhar Mangar (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512275#comment-14512275 ]

Shalin Shekhar Mangar commented on SOLR-7464:
-

How do we prevent stale responses? Earlier, if a node was not connected to ZK, 
it would return a 5xx error response, but it looks like with this patch an old 
cluster state will be returned?

> Cluster status API call should be served by the node which it was called from
> -
>
> Key: SOLR-7464
> URL: https://issues.apache.org/jira/browse/SOLR-7464
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7464.patch
>
>
> Currently the cluster status API call has to go to the 
> OverseerCollectionProcessor to serve the request. We should serve the 
> request directly from the node the call was made to.
> That way we will end up putting fewer tasks into the overseer.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40) - Build # 4725 - Still Failing!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4725/
Java: 64bit/jdk1.8.0_40 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testFieldSometimesMissingFromSegment

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([1FDFF344FEFA8793:E7510981EC4BE40]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testFieldSometimesMissingFromSegment(TestPostingsHighlighter.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 6783 lines...]
   [junit4] Suite: org.apache.lucene.search.postingshighlight.TestPostingsHighlighter
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestPostingsHighlighter -Dtests.method=testFieldSometimesMissingFromSegment -Dtests.seed=1FDFF344FEFA8793 -Dtests.slow=true -Dtests.locale=no_NO_NY -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.08s J0 | TestPostingsHighlighter.testFieldSometimesMissingF

[jira] [Assigned] (SOLR-7471) Stop requiring docValues for interval faceting

2015-04-24 Thread JIRA

 [ https://issues.apache.org/jira/browse/SOLR-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomás Fernández Löbbe reassigned SOLR-7471:
---

Assignee: Tomás Fernández Löbbe

> Stop requiring docValues for interval faceting
> --
>
> Key: SOLR-7471
> URL: https://issues.apache.org/jira/browse/SOLR-7471
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-7471.patch
>
>
> Will use fieldCache if docValues are not present






[jira] [Commented] (SOLR-7471) Stop requiring docValues for interval faceting

2015-04-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512259#comment-14512259 ]

ASF subversion and git services commented on SOLR-7471:
---

Commit 1675995 from [~tomasflobbe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675995 ]

SOLR-7471: Stop requiring docValues for interval faceting

> Stop requiring docValues for interval faceting
> --
>
> Key: SOLR-7471
> URL: https://issues.apache.org/jira/browse/SOLR-7471
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-7471.patch
>
>
> Will use fieldCache if docValues are not present






[jira] [Commented] (SOLR-7471) Stop requiring docValues for interval faceting

2015-04-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512257#comment-14512257 ]

ASF subversion and git services commented on SOLR-7471:
---

Commit 1675994 from [~tomasflobbe] in branch 'dev/trunk'
[ https://svn.apache.org/r1675994 ]

SOLR-7471: Stop requiring docValues for interval faceting

> Stop requiring docValues for interval faceting
> --
>
> Key: SOLR-7471
> URL: https://issues.apache.org/jira/browse/SOLR-7471
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-7471.patch
>
>
> Will use fieldCache if docValues are not present






[jira] [Comment Edited] (SOLR-7377) SOLR Streaming Expressions

2015-04-24 Thread Joel Bernstein (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512247#comment-14512247 ]

Joel Bernstein edited comment on SOLR-7377 at 4/25/15 4:53 AM:
---

Ok, I see what you're getting at.

I'd been considering a variation on comparators that just determine equality. 
This is because some of the stream implementations like the UniqueStream and 
ReducerStream only need a way to determine Tuple equality. So I had already 
done a little research and turned up this:

http://www.learnabout-electronics.org/Digital/dig43.php

You'll see a section on Equality Comparators.

So, I'm thinking a simpler name like FieldComp would do the trick.

No need to change just yet. Feel free to finish up the patch using the 
EqualToComparator. Before we do the commit we'll go through and finalize the 
names.



 



was (Author: joel.bernstein):
Ok, I see what you're getting at.

I'd been considering a variation on comparators that just determine equality. 
This is because some of the stream implementations like the UniqueStream and 
ReducerStream only need a way to determine Tuple equality. So I had already 
done a little research and turned up this:

http://www.learnabout-electronics.org/Digital/dig43.php

You'll see a section on Equality Comparators.

So, I'm thinking a simpler name like FieldComp would do the trick.

No need to change just yet. Feel free to finish up the patch using the 
EqualToComparator. Before we do the commit we're go through and finalize the 
names.



 


> SOLR Streaming Expressions
> --
>
> Key: SOLR-7377
> URL: https://issues.apache.org/jira/browse/SOLR-7377
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7377.patch
>
>
> It would be beneficial to add an expression-based interface to Streaming API 
> described in SOLR-7082. Right now that API requires streaming requests to 
> come in from clients as serialized bytecode of the streaming classes. The 
> suggestion here is to support string expressions which describe the streaming 
> operations the client wishes to perform. 
> {code:java}
> search(collection1, q=*:*, fl="id,fieldA,fieldB", sort="fieldA asc")
> {code}
> With this syntax in mind, one can now express arbitrarily complex stream 
> queries with a single string.
> {code:java}
> // merge two distinct searches together on common fields
> merge(
>   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc, a_s asc")
> // find top 20 unique records of a search
> top(
>   n=20,
>   unique(
> search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f desc"),
> over="a_f desc"),
>   sort="a_f desc")
> {code}
> The syntax would support
> 1. Configurable expression names (eg. via solrconfig.xml one can map "unique" 
> to a class implementing a Unique stream class) This allows users to build 
> their own streams and use as they wish.
> 2. Named parameters (of both simple and expression types)
> 3. Unnamed, type-matched parameters (to support requiring N streams as 
> arguments to another stream)
> 4. Positional parameters
> The main goal here is to make streaming as accessible as possible and define 
> a syntax for running complex queries across large distributed systems.
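As a rough illustration of how such string expressions could be turned into a tree, here is a tiny recursive-descent reader for the name(arg, key=value, nested(...)) shape used above. This is a hypothetical sketch, not the parser proposed in the patch:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal recursive-descent reader for expressions shaped like
//   name(arg, key=value, nested(...), key="quoted, string")
// Simple arguments are kept as raw strings; arguments that are themselves
// calls are parsed into child StreamExpr nodes.
public class StreamExpr {
    final String name;
    final List<Object> args = new ArrayList<>(); // String or StreamExpr

    private StreamExpr(String name) { this.name = name; }

    static StreamExpr parse(String s) {
        s = s.trim();
        int open = s.indexOf('(');
        StreamExpr e = new StreamExpr(s.substring(0, open).trim());
        String body = s.substring(open + 1, s.lastIndexOf(')'));
        for (String raw : splitTopLevel(body)) {
            String arg = raw.trim();
            if (!arg.isEmpty()) {
                e.args.add(isCall(arg) ? parse(arg) : arg);
            }
        }
        return e;
    }

    // Split on commas that sit outside quotes and outside nested parentheses.
    private static List<String> splitTopLevel(String body) {
        List<String> parts = new ArrayList<>();
        int depth = 0, start = 0;
        boolean quoted = false;
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            if (c == '"') quoted = !quoted;
            else if (!quoted && c == '(') depth++;
            else if (!quoted && c == ')') depth--;
            else if (!quoted && c == ',' && depth == 0) {
                parts.add(body.substring(start, i));
                start = i + 1;
            }
        }
        parts.add(body.substring(start));
        return parts;
    }

    // An argument is a nested call when it looks like identifier(...).
    private static boolean isCall(String arg) {
        int p = arg.indexOf('(');
        return p > 0 && arg.endsWith(")")
                && arg.substring(0, p).matches("[A-Za-z_][A-Za-z0-9_]*");
    }

    public static void main(String[] args) {
        StreamExpr e = parse(
            "top(n=20, unique(search(collection1, q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f desc\"), over=\"a_f desc\"), sort=\"a_f desc\")");
        System.out.println(e.name + " has " + e.args.size() + " arguments"); // top has 3 arguments
    }
}
```

A real implementation would additionally resolve each function name to a stream class (the configurable mapping described in point 1) and distinguish named from positional parameters, but the quote- and depth-aware split above is the structural core.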






[jira] [Commented] (SOLR-7377) SOLR Streaming Expressions

2015-04-24 Thread Joel Bernstein (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512247#comment-14512247 ]

Joel Bernstein commented on SOLR-7377:
--

Ok, I see what you're getting at.

I'd been considering a variation on comparators that just determine equality. 
This is because some of the stream implementations like the UniqueStream and 
ReducerStream only need a way to determine Tuple equality. So I had already 
done a little research and turned up this:

http://www.learnabout-electronics.org/Digital/dig43.php

You'll see a section on Equality Comparators.

So, I'm thinking a simpler name like FieldComp would do the trick.

No need to change just yet. Feel free to finish up the patch using the 
EqualToComparator. Before we do the commit we'll go through and finalize the 
names.



 


> SOLR Streaming Expressions
> --
>
> Key: SOLR-7377
> URL: https://issues.apache.org/jira/browse/SOLR-7377
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7377.patch
>
>
> It would be beneficial to add an expression-based interface to Streaming API 
> described in SOLR-7082. Right now that API requires streaming requests to 
> come in from clients as serialized bytecode of the streaming classes. The 
> suggestion here is to support string expressions which describe the streaming 
> operations the client wishes to perform. 
> {code:java}
> search(collection1, q=*:*, fl="id,fieldA,fieldB", sort="fieldA asc")
> {code}
> With this syntax in mind, one can now express arbitrarily complex stream 
> queries with a single string.
> {code:java}
> // merge two distinct searches together on common fields
> merge(
>   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc, a_s asc")
> // find top 20 unique records of a search
> top(
>   n=20,
>   unique(
> search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f desc"),
> over="a_f desc"),
>   sort="a_f desc")
> {code}
> The syntax would support
> 1. Configurable expression names (eg. via solrconfig.xml one can map "unique" 
> to a class implementing a Unique stream class) This allows users to build 
> their own streams and use as they wish.
> 2. Named parameters (of both simple and expression types)
> 3. Unnamed, type-matched parameters (to support requiring N streams as 
> arguments to another stream)
> 4. Positional parameters
> The main goal here is to make streaming as accessible as possible and define 
> a syntax for running complex queries across large distributed systems.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2221 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2221/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (24 > 20) - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (24 > 20) - we expect it can happen, but shouldn't easily
at __randomizedtesting.SeedInfo.seed([FED46E22BB56560B:768051F815AA3BF3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.

[jira] [Updated] (SOLR-7471) Stop requiring docValues for interval faceting

2015-04-24 Thread JIRA

 [ https://issues.apache.org/jira/browse/SOLR-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomás Fernández Löbbe updated SOLR-7471:

Attachment: SOLR-7471.patch

> Stop requiring docValues for interval faceting
> --
>
> Key: SOLR-7471
> URL: https://issues.apache.org/jira/browse/SOLR-7471
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-7471.patch
>
>
> Will use fieldCache if docValues are not present






[jira] [Created] (SOLR-7471) Stop requiring docValues for interval faceting

2015-04-24 Thread JIRA
Tomás Fernández Löbbe created SOLR-7471:
---

 Summary: Stop requiring docValues for interval faceting
 Key: SOLR-7471
 URL: https://issues.apache.org/jira/browse/SOLR-7471
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe


Will use fieldCache if docValues are not present






[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 258 - Still Failing

2015-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/258/

No tests ran.

Build Log:
[...truncated 52354 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL "file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.02 sec (5.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.2.0-src.tgz...
   [smoker] 28.2 MB in 0.04 sec (678.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.2.0.tgz...
   [smoker] 64.8 MB in 0.15 sec (429.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.2.0.zip...
   [smoker] 74.6 MB in 0.11 sec (687.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5777 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5777 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 208 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (87.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.2.0-src.tgz...
   [smoker] 36.0 MB in 0.05 sec (668.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.2.0.tgz...
   [smoker] 126.4 MB in 0.16 sec (783.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.2.0.zip...
   [smoker] 133.0 MB in 0.15 sec (895.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.2.0.tgz...
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/server/lib/ext/javax.servlet-api-3.0.1.jar: it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.2.0.tgz...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1533, in <module>
   [smoker] main()
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1478, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1522, in smokeTest
   [smoker] u

[jira] [Commented] (SOLR-7377) SOLR Streaming Expressions

2015-04-24 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512165#comment-14512165
 ] 

Dennis Gove commented on SOLR-7377:
---

I was thinking that all comparators, no matter their implemented comparison 
logic, return one of three basic values when comparing A and B. 

1. A and B are logically equal to each other
2. A is logically before B
3. A is logically after B

The implemented comparison logic is then wholly dependent on what one might be 
intending to use the comparator for. For example, EqualToComparator's 
implemented comparison logic will return that A and B are logically equal if 
they are in fact equal to each other. Its logically before/after response 
depends on the sort order (ascending or descending) but is basically deciding 
if A is less than B or if A is greater than B.

One could, if they wanted to, create a comparator returning that two dates are 
logically equal to each other if they occur within the same week. Or a 
comparator returning that two numbers are logically equal if their values are 
within the same logarithmic order of magnitude. So on and so forth.

My thinking is that comparators determine the logical comparison and make no 
assumption on what that implemented logic is. This leaves open the possibility 
of implementing other comparators for given situations as they arise.
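As an illustration of the "same week" idea above, here is a minimal sketch of such a comparator (purely illustrative — the class name and the use of java.time are my own, not part of the SOLR-7377 patch or its comparator API). It returns the same three basic values: equal when two dates fall in the same ISO week, otherwise before/after.

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;
import java.util.Comparator;

// Hypothetical example: two dates are "logically equal" if they fall in
// the same ISO week; otherwise one is logically before/after the other.
public class SameWeekComparator implements Comparator<LocalDate> {
    private static final WeekFields WF = WeekFields.ISO;

    @Override
    public int compare(LocalDate a, LocalDate b) {
        // Compare the week-based year first, then the week within that year.
        int yearCmp = Integer.compare(a.get(WF.weekBasedYear()), b.get(WF.weekBasedYear()));
        if (yearCmp != 0) {
            return yearCmp; // A is logically before/after B
        }
        return Integer.compare(a.get(WF.weekOfWeekBasedYear()), b.get(WF.weekOfWeekBasedYear()));
    }

    public static void main(String[] args) {
        SameWeekComparator c = new SameWeekComparator();
        // Mon 2015-04-20 and Fri 2015-04-24 are in the same ISO week
        System.out.println(c.compare(LocalDate.of(2015, 4, 20), LocalDate.of(2015, 4, 24)) == 0);
        // Fri 2015-04-17 is in the previous week, so logically before
        System.out.println(c.compare(LocalDate.of(2015, 4, 17), LocalDate.of(2015, 4, 20)) < 0);
    }
}
```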

> SOLR Streaming Expressions
> --
>
> Key: SOLR-7377
> URL: https://issues.apache.org/jira/browse/SOLR-7377
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7377.patch
>
>
> It would be beneficial to add an expression-based interface to Streaming API 
> described in SOLR-7082. Right now that API requires streaming requests to 
> come in from clients as serialized bytecode of the streaming classes. The 
> suggestion here is to support string expressions which describe the streaming 
> operations the client wishes to perform. 
> {code:java}
> search(collection1, q=*:*, fl="id,fieldA,fieldB", sort="fieldA asc")
> {code}
> With this syntax in mind, one can now express arbitrarily complex stream 
> queries with a single string.
> {code:java}
> // merge two distinct searches together on common fields
> merge(
>   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc, a_s asc")
> // find top 20 unique records of a search
> top(
>   n=20,
>   unique(
> search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f desc"),
> over="a_f desc"),
>   sort="a_f desc")
> {code}
> The syntax would support
> 1. Configurable expression names (eg. via solrconfig.xml one can map "unique" 
> to a class implementing a Unique stream class) This allows users to build 
> their own streams and use as they wish.
> 2. Named parameters (of both simple and expression types)
> 3. Unnamed, type-matched parameters (to support requiring N streams as 
> arguments to another stream)
> 4. Positional parameters
> The main goal here is to make streaming as accessible as possible and define 
> a syntax for running complex queries across large distributed systems.
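To make the quoted proposal concrete, the core parsing task is splitting an expression into its function name and its top-level arguments while respecting nesting and quoted strings. The sketch below is purely illustrative — the class and method names are invented for this example and are not taken from the attached patch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: recover the function name and raw top-level
// arguments of an expression like top(n=20, unique(search(c1))). A real
// implementation would then map names to stream classes (e.g. via
// solrconfig.xml) and recurse into nested expressions.
public class StreamExpressionSketch {

    public static String name(String expr) {
        return expr.substring(0, expr.indexOf('(')).trim();
    }

    public static List<String> args(String expr) {
        String body = expr.substring(expr.indexOf('(') + 1, expr.lastIndexOf(')'));
        List<String> out = new ArrayList<>();
        int depth = 0, start = 0;
        boolean quoted = false;
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            if (c == '"') {
                quoted = !quoted;              // ignore delimiters inside quotes
            } else if (!quoted && c == '(') {
                depth++;                       // entering a nested expression
            } else if (!quoted && c == ')') {
                depth--;
            } else if (!quoted && c == ',' && depth == 0) {
                out.add(body.substring(start, i).trim());
                start = i + 1;                 // comma at depth 0 separates args
            }
        }
        out.add(body.substring(start).trim());
        return out;
    }
}
```

For example, `args("top(n=20, unique(search(c1)))")` yields the two top-level arguments `n=20` and `unique(search(c1))`, leaving the nested expression intact for recursive parsing.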



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7377) SOLR Streaming Expressions

2015-04-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511911#comment-14511911
 ] 

Joel Bernstein edited comment on SOLR-7377 at 4/25/15 1:33 AM:
---

Was reviewing the new Comparator package. Curious about the name 
EqualToComparator. This makes it sound like it's only used to determine 
equality. But it's also being used in certain situations to determine sort 
order. Since an equality comparator makes sense in certain situations, like 
with the ReducerStream, does it make sense to have two Comparator 
implementations? 


was (Author: joel.bernstein):
Was reviewing the new Comparator package. Curious about the name 
EquatToComparator. This makes it sound like it's only used to determine 
equality. But it's also being used in certain situations to determine sort 
order. Since an equality comparator makes sense in certain situations, like 
with the ReducerStream, does it make sense to have two Comparator 
implementations? 







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3006 - Failure

2015-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3006/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (35 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (35 > 20) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([A125E0DF8E2549:88F51A3A717248B1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.Thr

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12437 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12437/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testFieldSometimesMissingFromSegment

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([9DDE29D30953DA68:8C74CA0FE96DE3BB]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testFieldSometimesMissingFromSegment(TestPostingsHighlighter.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 6833 lines...]
   [junit4] Suite: 
org.apache.lucene.search.postingshighlight.TestPostingsHighlighter
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestPostingsHighlighter 
-Dtests.method=testFieldSometimesMissingFromSegment 
-Dtests.seed=9DDE29D30953DA68 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=pt_PT -Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.10s J2 | 
TestPostingsHighlighter.testField

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4724 - Still Failing!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4724/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([709987D71C07735C:EA6DFA35829DEF60]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:794)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:787)
... 40 more




Build Log:
[...truncated 9550 lines...]
   [junit4] Suite: org.apache.solr.update.AutoC

[jira] [Commented] (SOLR-7463) Stop forcing MergePolicy's "NoCFSRatio" based on the IWC "useCompoundFile" configuration

2015-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512100#comment-14512100
 ] 

Tomás Fernández Löbbe commented on SOLR-7463:
-

You are right, it's the subclasses that are setting the ratio to 0.1, 
{{TieredMergePolicy}} and {{LogMergePolicy}}. I still think it's better to let 
those classes set their defaults.
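For reference, the shape of configuration under discussion would look roughly like the fragment below — a hedged sketch of a solrconfig.xml {{indexConfig}} section that passes the value through to the merge policy's setter. Exact element names and defaults may differ from what the final patch implements.

```xml
<!-- Sketch only: setting noCFSRatio explicitly on the merge policy,
     rather than having it forced from useCompoundFile. -->
<indexConfig>
  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <!-- invoked via setNoCFSRatio(); TieredMergePolicy defaults to 0.1 -->
    <double name="noCFSRatio">0.1</double>
  </mergePolicy>
</indexConfig>
```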

> Stop forcing MergePolicy's "NoCFSRatio" based on the IWC "useCompoundFile" 
> configuration
> 
>
> Key: SOLR-7463
> URL: https://issues.apache.org/jira/browse/SOLR-7463
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-7463.patch
>
>
> Let users specify this value via setter in the solrconfig.xml, and use 
> Lucene's default if unset (0.1). 
> Document "noCFSRatio" in the ref guide.






[jira] [Commented] (SOLR-7463) Stop forcing MergePolicy's "NoCFSRatio" based on the IWC "useCompoundFile" configuration

2015-04-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512086#comment-14512086
 ] 

Yonik Seeley commented on SOLR-7463:


bq.  and use Lucene's default if unset (0.1)

I see this in trunk's MergePolicy:
{code}
  protected static final double DEFAULT_NO_CFS_RATIO = 1.0;
{code}
Is 0.1 set somewhere else?







[jira] [Updated] (SOLR-7463) Stop forcing MergePolicy's "NoCFSRatio" based on the IWC "useCompoundFile" configuration

2015-04-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-7463:

Attachment: SOLR-7463.patch

Here is the patch I'm proposing. I'll give people a couple of days to comment.
I think this change could go to trunk and 5.x.







[jira] [Comment Edited] (SOLR-7470) jvm/filesystem dependent failures in SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory traversal order dependency

2015-04-24 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512061#comment-14512061
 ] 

Hoss Man edited comment on SOLR-7470 at 4/25/15 12:00 AM:
--

Attached patch:
* fixes the test to index the xml files in a deterministic order dictated by 
the test seed.
* fixes mem.xml to use a float price value.

If you apply the patch, and then revert the mem.xml change, anyone -- 
regardless of filesystem -- should be able to see the test reliably fail with 
this seed...

{noformat}
ant test  -Dtestcase=SolrCloudExampleTest 
-Dtests.method=testLoadDocsIntoGettingStartedCollection 
-Dtests.seed=2AD197B874223638:C770325C8CC3DD07 -Dtests.slow=true 
-Dtests.locale=es_PE -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{noformat}

...i'm still hammering on the test to -verify there are no- _attempt to 
identify_ other file orderings that trigger similar bugs.



was (Author: hossman):

Attached patch:
* fixes the test to index the xml files in a deterministic order dictated by 
the test seed.
* fixes mem.xml to use float price value.

If you apply the patch, and then revert the mem.xml change, anyone -- 
regardless of filesystem -- should be able to see the test reliably fail with 
this seed...

{noformat}
ant test  -Dtestcase=SolrCloudExampleTest 
-Dtests.method=testLoadDocsIntoGettingStartedCollection 
-Dtests.seed=2AD197B874223638:C770325C8CC3DD07 -Dtests.slow=true 
-Dtests.locale=es_PE -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{noformat}

...i'm still hammering on the test to verify there are no other file orderings 
that trigger similar bugs.


> jvm/filesystem dependent failures in 
> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory 
> traversal order dependency
> --
>
> Key: SOLR-7470
> URL: https://issues.apache.org/jira/browse/SOLR-7470
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.2
>
> Attachments: SOLR-7470.patch
>
>
> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection currently fails 
> 100% reliably on my laptop, regardless of seed with a root cause of...
> {noformat}
>[junit4]   2> 35968 T60 N:127.0.0.1:57372_ C:gettingstarted S:shard1 
> R:core_node1 c:gettingstarted_shard1_replica2 C15 oasc.SolrException.log 
> ERROR org.apache.solr.common.SolrException: ERROR: [doc=VS1GB400C3] Error 
> adding field 'price'='74.99' msg=For input string: "74.99"
>[junit4]   2>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:178)
>[junit4]   2>  at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:79)
>[junit4]   2>  at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>[junit4]   2>  at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>[junit4]   2>  at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> {noformat}
> This test uses "data_driven_schema_configs" and indexes all of the \*.xml 
> files in "example/exampledocs".
> Two factors seem to be contributing to the reason this fails consistently 
> for me (but not for jenkins or anyone else i've seen complain)...
> # The {{File.listFiles(FileFilter)}} method is used to iterate over the files
> # The "mem.xml" file has an integer price value: {{<field name="price">185</field>}}
> {{listFiles}} is documented that "There is no guarantee that the name strings 
> in the resulting array will appear in any specific order" and evidently with 
> my filesystem + JVM they come back in a consistent order everytime, which 
> just so happens to put mem.xml in front of any other file that also has a 
> "price" field.






[jira] [Updated] (SOLR-7470) jvm/filesystem dependent failures in SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory traversal order dependency

2015-04-24 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7470:
---
Attachment: SOLR-7470.patch


Attached patch:
* fixes the test to index the xml files in a deterministic order dictated by 
the test seed.
* fixes mem.xml to use a float price value.

If you apply the patch, and then revert the mem.xml change, anyone -- 
regardless of filesystem -- should be able to see the test reliably fail with 
this seed...

{noformat}
ant test  -Dtestcase=SolrCloudExampleTest 
-Dtests.method=testLoadDocsIntoGettingStartedCollection 
-Dtests.seed=2AD197B874223638:C770325C8CC3DD07 -Dtests.slow=true 
-Dtests.locale=es_PE -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{noformat}

...i'm still hammering on the test to verify there are no other file orderings 
that trigger similar bugs.


> jvm/filesystem dependent failures in 
> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory 
> traversal order dependency
> --
>
> Key: SOLR-7470
> URL: https://issues.apache.org/jira/browse/SOLR-7470
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.2
>
> Attachments: SOLR-7470.patch
>
>
> SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection currently fails 
> 100% reliably on my laptop, regardless of seed with a root cause of...
> {noformat}
>[junit4]   2> 35968 T60 N:127.0.0.1:57372_ C:gettingstarted S:shard1 
> R:core_node1 c:gettingstarted_shard1_replica2 C15 oasc.SolrException.log 
> ERROR org.apache.solr.common.SolrException: ERROR: [doc=VS1GB400C3] Error 
> adding field 'price'='74.99' msg=For input string: "74.99"
>[junit4]   2>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:178)
>[junit4]   2>  at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:79)
>[junit4]   2>  at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>[junit4]   2>  at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>[junit4]   2>  at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> {noformat}
> This test uses "data_driven_schema_configs" and indexes all of the \*.xml 
> files in "example/exampledocs".
> Two factors seem to be contributing to the reason this fails consistently 
> for me (but not for jenkins or anyone else I've seen complain)...
> # The {{File.listFiles(FileFilter)}} method is used to iterate over the files
> # The "mem.xml" file has an integer price value: {{<field name="price">185</field>}}
> The {{listFiles}} javadoc warns that "There is no guarantee that the name strings 
> in the resulting array will appear in any specific order", and evidently with 
> my filesystem + JVM they come back in a consistent order every time, which 
> just so happens to put mem.xml in front of any other file that also has a 
> "price" field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7470) jvm/filesystem dependent failures in SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory traversal order dependency

2015-04-24 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7470:
--

 Summary: jvm/filesystem dependent failures in 
SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection -- directory 
traversal order dependency
 Key: SOLR-7470
 URL: https://issues.apache.org/jira/browse/SOLR-7470
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 5.2


SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection currently fails 
100% reliably on my laptop, regardless of seed with a root cause of...

{noformat}
   [junit4]   2> 35968 T60 N:127.0.0.1:57372_ C:gettingstarted S:shard1 
R:core_node1 c:gettingstarted_shard1_replica2 C15 oasc.SolrException.log ERROR 
org.apache.solr.common.SolrException: ERROR: [doc=VS1GB400C3] Error adding 
field 'price'='74.99' msg=For input string: "74.99"
   [junit4]   2>at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:178)
   [junit4]   2>at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:79)
   [junit4]   2>at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
   [junit4]   2>at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
   [junit4]   2>at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
{noformat}

This test uses "data_driven_schema_configs" and indexes all of the \*.xml files 
in "example/exampledocs".

Two factors seem to be contributing to the reason this fails consistently 
for me (but not for jenkins or anyone else I've seen complain)...

# The {{File.listFiles(FileFilter)}} method is used to iterate over the files
# The "mem.xml" file has an integer price value: {{<field name="price">185</field>}}

The {{listFiles}} javadoc warns that "There is no guarantee that the name strings 
in the resulting array will appear in any specific order", and evidently with my 
filesystem + JVM they come back in a consistent order every time, which just so 
happens to put mem.xml in front of any other file that also has a "price" field.
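A deterministic-ordering fix along the lines described can be sketched as follows; the class and method names are illustrative, not the actual SOLR-7470 patch:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch (names assumed, not the actual patch): since
// File.listFiles() guarantees no ordering, sort the results explicitly
// so every filesystem/JVM indexes the example docs in the same order.
public class DeterministicListing {

    // Sort an array of files by name, in place, for a stable traversal order.
    static File[] sortByName(File[] files) {
        Arrays.sort(files, Comparator.comparing(File::getName));
        return files;
    }

    // List *.xml files in a directory in a deterministic order.
    static File[] listXmlFilesSorted(File dir) {
        File[] files = dir.listFiles((d, name) -> name.endsWith(".xml"));
        return files == null ? new File[0] : sortByName(files);
    }
}
```

With a sorted listing, mem.xml's position relative to the other docs no longer depends on the filesystem, so any schema-guessing bug it triggers reproduces everywhere.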







[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 257 - Still Failing

2015-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/257/

No tests ran.

Build Log:
[...truncated 52433 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
"file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.02 sec (8.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.2.0-src.tgz...
   [smoker] 28.2 MB in 0.04 sec (684.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.2.0.tgz...
   [smoker] 64.8 MB in 0.09 sec (687.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.2.0.zip...
   [smoker] 74.6 MB in 0.13 sec (552.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5777 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5777 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 208 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (68.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.2.0-src.tgz...
   [smoker] 36.0 MB in 0.07 sec (485.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.2.0.tgz...
   [smoker] 126.4 MB in 0.21 sec (595.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.2.0.zip...
   [smoker] 133.0 MB in 0.15 sec (904.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.2.0/server/lib/ext/javax.servlet-api-3.0.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-5.2.0.tgz...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1533, in 
   [smoker] main()
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1478, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1522, in smokeTest
   [smoker] u

[jira] [Resolved] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-7469.

   Resolution: Fixed
Fix Version/s: Trunk

> check-licenses happily ignoring incorrect start.jar.sha1, current 
> start.jar.sha1 on trunk is out of date.
> -
>
> Key: SOLR-7469
> URL: https://issues.apache.org/jira/browse/SOLR-7469
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7469.patch
>
>
> as of r1675948, "ant clean jar-checksums" results in a modified 
> solr/licenses/start.jar.sha1 ...
> {noformat}
> hossman@frisbee:~/lucene/dev$ svn diff
> Index: solr/licenses/start.jar.sha1
> ===
> --- solr/licenses/start.jar.sha1  (revision 1675948)
> +++ solr/licenses/start.jar.sha1  (working copy)
> @@ -1 +1 @@
> -24e798bde886e1430978ece6c4aa90d781e2da30
> +b91b72f9167cce4c1caea0f8363fd9984456e34d
> {noformat}
> ...so apparently the version of start.jar we're fetching from ivy & using in 
> solr changed at some point w/o the SHA1 being updated?
> apparently because "check-licenses" is explicitly ignoring start.jar...
> {noformat}
> 
> 
> {noformat}
> ...this is seriously messed up.  we need to fix this.






[jira] [Commented] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511933#comment-14511933
 ] 

ASF subversion and git services commented on SOLR-7469:
---

Commit 1675969 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675969 ]

SOLR-7469: Fix check-licenses to accurately detect if start.jar.sha1 is 
incorrect (merge r1675968)

> check-licenses happily ignoring incorrect start.jar.sha1, current 
> start.jar.sha1 on trunk is out of date.
> -
>
> Key: SOLR-7469
> URL: https://issues.apache.org/jira/browse/SOLR-7469
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: 5.2
>
> Attachments: SOLR-7469.patch
>
>
> as of r1675948, "ant clean jar-checksums" results in a modified 
> solr/licenses/start.jar.sha1 ...
> {noformat}
> hossman@frisbee:~/lucene/dev$ svn diff
> Index: solr/licenses/start.jar.sha1
> ===
> --- solr/licenses/start.jar.sha1  (revision 1675948)
> +++ solr/licenses/start.jar.sha1  (working copy)
> @@ -1 +1 @@
> -24e798bde886e1430978ece6c4aa90d781e2da30
> +b91b72f9167cce4c1caea0f8363fd9984456e34d
> {noformat}
> ...so apparently the version of start.jar we're fetching from ivy & using in 
> solr changed at some point w/o the SHA1 being updated?
> apparently because "check-licenses" is explicitly ignoring start.jar...
> {noformat}
> 
> 
> {noformat}
> ...this is seriously messed up.  we need to fix this.






[jira] [Commented] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511923#comment-14511923
 ] 

ASF subversion and git services commented on SOLR-7469:
---

Commit 1675968 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1675968 ]

SOLR-7469: Fix check-licenses to accurately detect if start.jar.sha1 is 
incorrect

> check-licenses happily ignoring incorrect start.jar.sha1, current 
> start.jar.sha1 on trunk is out of date.
> -
>
> Key: SOLR-7469
> URL: https://issues.apache.org/jira/browse/SOLR-7469
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: 5.2
>
> Attachments: SOLR-7469.patch
>
>
> as of r1675948, "ant clean jar-checksums" results in a modified 
> solr/licenses/start.jar.sha1 ...
> {noformat}
> hossman@frisbee:~/lucene/dev$ svn diff
> Index: solr/licenses/start.jar.sha1
> ===
> --- solr/licenses/start.jar.sha1  (revision 1675948)
> +++ solr/licenses/start.jar.sha1  (working copy)
> @@ -1 +1 @@
> -24e798bde886e1430978ece6c4aa90d781e2da30
> +b91b72f9167cce4c1caea0f8363fd9984456e34d
> {noformat}
> ...so apparently the version of start.jar we're fetching from ivy & using in 
> solr changed at some point w/o the SHA1 being updated?
> apparently because "check-licenses" is explicitly ignoring start.jar...
> {noformat}
> 
> 
> {noformat}
> ...this is seriously messed up.  we need to fix this.






[jira] [Updated] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7469:
---
Attachment: SOLR-7469.patch

Patch for trunk...

It stops ignoring start.jar in check-licenses so we get accurate SHA1 checks, 
and leverages additional-filters to recognize that start.jar comes from Jetty 
and uses the Jetty license.

> check-licenses happily ignoring incorrect start.jar.sha1, current 
> start.jar.sha1 on trunk is out of date.
> -
>
> Key: SOLR-7469
> URL: https://issues.apache.org/jira/browse/SOLR-7469
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: 5.2
>
> Attachments: SOLR-7469.patch
>
>
> as of r1675948, "ant clean jar-checksums" results in a modified 
> solr/licenses/start.jar.sha1 ...
> {noformat}
> hossman@frisbee:~/lucene/dev$ svn diff
> Index: solr/licenses/start.jar.sha1
> ===
> --- solr/licenses/start.jar.sha1  (revision 1675948)
> +++ solr/licenses/start.jar.sha1  (working copy)
> @@ -1 +1 @@
> -24e798bde886e1430978ece6c4aa90d781e2da30
> +b91b72f9167cce4c1caea0f8363fd9984456e34d
> {noformat}
> ...so apparently the version of start.jar we're fetching from ivy & using in 
> solr changed at some point w/o the SHA1 being updated?
> apparently because "check-licenses" is explicitly ignoring start.jar...
> {noformat}
> 
> 
> {noformat}
> ...this is seriously messed up.  we need to fix this.






[jira] [Created] (SOLR-7469) check-licenses happily ignoring incorrect start.jar.sha1, current start.jar.sha1 on trunk is out of date.

2015-04-24 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7469:
--

 Summary: check-licenses happily ignoring incorrect start.jar.sha1, 
current start.jar.sha1 on trunk is out of date.
 Key: SOLR-7469
 URL: https://issues.apache.org/jira/browse/SOLR-7469
 Project: Solr
  Issue Type: Task
Reporter: Hoss Man
Assignee: Hoss Man
Priority: Blocker
 Fix For: 5.2


as of r1675948, "ant clean jar-checksums" results in a modified 
solr/licenses/start.jar.sha1 ...

{noformat}
hossman@frisbee:~/lucene/dev$ svn diff
Index: solr/licenses/start.jar.sha1
===
--- solr/licenses/start.jar.sha1(revision 1675948)
+++ solr/licenses/start.jar.sha1(working copy)
@@ -1 +1 @@
-24e798bde886e1430978ece6c4aa90d781e2da30
+b91b72f9167cce4c1caea0f8363fd9984456e34d
{noformat}

...so apparently the version of start.jar we're fetching from ivy & using in 
solr changed at some point w/o the SHA1 being updated?

apparently because "check-licenses" is explicitly ignoring start.jar...

{noformat}


{noformat}

...this is seriously messed up.  we need to fix this.
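The missing verification amounts to recomputing the jar's SHA1 and comparing it to the checked-in .sha1 file. A minimal sketch (class and method names illustrative, not the actual check-licenses change):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch, not the actual check-licenses fix: recompute a jar's
// SHA1 and compare it against the recorded .sha1 file.
public class Sha1Check {

    // Hex-encode the SHA-1 digest of the given bytes.
    static String sha1Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is always present in the JRE
        }
    }

    // Returns true when the jar's actual SHA1 matches the recorded one.
    static boolean matches(Path jar, Path sha1File) throws IOException {
        String actual = sha1Hex(Files.readAllBytes(jar));
        String expected = new String(Files.readAllBytes(sha1File)).trim();
        return actual.equals(expected);
    }
}
```

The point of the issue is that a check like this must actually run for start.jar rather than being excluded, so a stale .sha1 fails the build instead of passing silently.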







[jira] [Commented] (SOLR-7377) SOLR Streaming Expressions

2015-04-24 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511911#comment-14511911
 ] 

Joel Bernstein commented on SOLR-7377:
--

Was reviewing the new Comparator package. Curious about the name 
EquatToComparator. This makes it sound like it's only used to determine 
equality. But it's also being used in certain situations to determine sort 
order. Since an equality comparator makes sense in certain situations, like 
with the ReducerStream, does it make sense to have two Comparator 
implementations? 
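The two comparison roles being distinguished here could be captured as separate types. This is a sketch of the idea only; the names are illustrative and not from the actual SOLR-7377 patch:

```java
import java.util.Comparator;

// Illustrative only: separating "same group?" from "which comes first?"
// as two distinct comparison roles.
interface EqualityComparator<T> {
    boolean isEqual(T a, T b);
}

public class Comparators {
    // An ordering comparator can be adapted into an equality check,
    // but an equality check cannot recover an ordering.
    static <T> EqualityComparator<T> fromOrdering(Comparator<T> ordering) {
        return (a, b) -> ordering.compare(a, b) == 0;
    }
}
```

The asymmetry is the design point: a stream like ReducerStream only needs the equality role, while a sorting stream needs the full ordering, which argues for keeping the two contracts distinct.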

> SOLR Streaming Expressions
> --
>
> Key: SOLR-7377
> URL: https://issues.apache.org/jira/browse/SOLR-7377
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Dennis Gove
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7377.patch
>
>
> It would be beneficial to add an expression-based interface to the Streaming 
> API described in SOLR-7082. Right now that API requires streaming requests to 
> come in from clients as serialized bytecode of the streaming classes. The 
> suggestion here is to support string expressions which describe the streaming 
> operations the client wishes to perform. 
> {code:java}
> search(collection1, q=*:*, fl="id,fieldA,fieldB", sort="fieldA asc")
> {code}
> With this syntax in mind, one can now express arbitrarily complex stream 
> queries with a single string.
> {code:java}
> // merge two distinct searches together on common fields
> merge(
>   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc, a_s asc")
> // find top 20 unique records of a search
> top(
>   n=20,
>   unique(
> search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f desc"),
> over="a_f desc"),
>   sort="a_f desc")
> {code}
> The syntax would support
> 1. Configurable expression names (eg. via solrconfig.xml one can map "unique" 
> to a class implementing a Unique stream class) This allows users to build 
> their own streams and use as they wish.
> 2. Named parameters (of both simple and expression types)
> 3. Unnamed, type-matched parameters (to support requiring N streams as 
> arguments to another stream)
> 4. Positional parameters
> The main goal here is to make streaming as accessible as possible and define 
> a syntax for running complex queries across large distributed systems.






[jira] [Resolved] (LUCENE-6449) NullPointerException in PostingsHighlighter

2015-04-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6449.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

Thanks Roman!

> NullPointerException in PostingsHighlighter
> ---
>
> Key: LUCENE-6449
> URL: https://issues.apache.org/jira/browse/LUCENE-6449
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1
>Reporter: Roman Khmelichek
>Assignee: Michael McCandless
> Fix For: Trunk, 5.2
>
> Attachments: postingshighlighter.patch
>
>
> In case an index segment does not have any docs with the field requested for 
> highlighting indexed, there should be a null check immediately following this 
> line (in PostingsHighlighter.java):
> Terms t = r.terms(field);
> Looks like the null check was moved in the 5.1 release and this is 
> occasionally causing a NullPointerException in my near-realtime searcher.






[jira] [Commented] (LUCENE-6449) NullPointerException in PostingsHighlighter

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511901#comment-14511901
 ] 

ASF subversion and git services commented on LUCENE-6449:
-

Commit 1675966 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675966 ]

LUCENE-6449: fix NullPointerException when one segment is missing the 
highlighted field in its postings

> NullPointerException in PostingsHighlighter
> ---
>
> Key: LUCENE-6449
> URL: https://issues.apache.org/jira/browse/LUCENE-6449
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1
>Reporter: Roman Khmelichek
>Assignee: Michael McCandless
> Attachments: postingshighlighter.patch
>
>
> In case an index segment does not have any docs with the field requested for 
> highlighting indexed, there should be a null check immediately following this 
> line (in PostingsHighlighter.java):
> Terms t = r.terms(field);
> Looks like the null check was moved in the 5.1 release and this is 
> occasionally causing a NullPointerException in my near-realtime searcher.






[jira] [Commented] (LUCENE-6449) NullPointerException in PostingsHighlighter

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511900#comment-14511900
 ] 

ASF subversion and git services commented on LUCENE-6449:
-

Commit 1675965 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1675965 ]

LUCENE-6449: fix NullPointerException when one segment is missing the 
highlighted field in its postings

> NullPointerException in PostingsHighlighter
> ---
>
> Key: LUCENE-6449
> URL: https://issues.apache.org/jira/browse/LUCENE-6449
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1
>Reporter: Roman Khmelichek
>Assignee: Michael McCandless
> Attachments: postingshighlighter.patch
>
>
> In case an index segment does not have any docs with the field requested for 
> highlighting indexed, there should be a null check immediately following this 
> line (in PostingsHighlighter.java):
> Terms t = r.terms(field);
> Looks like the null check was moved in the 5.1 release and this is 
> occasionally causing a NullPointerException in my near-realtime searcher.






[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511847#comment-14511847
 ] 

Ishan Chattopadhyaya commented on SOLR-7468:


This patch should be applied after the SOLR-7274 patch is applied/committed. Some 
details and discussion regarding the Kerberos plugin are in SOLR-7274.

> Kerberos authentication module
> --
>
> Key: SOLR-7468
> URL: https://issues.apache.org/jira/browse/SOLR-7468
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-7468.patch
>
>
> SOLR-7274 introduces a pluggable authentication framework. This issue 
> provides a Kerberos plugin implementation.






[jira] [Updated] (SOLR-7468) Kerberos authentication module

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7468:
---
Attachment: SOLR-7468.patch

> Kerberos authentication module
> --
>
> Key: SOLR-7468
> URL: https://issues.apache.org/jira/browse/SOLR-7468
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-7468.patch
>
>
> SOLR-7274 introduces a pluggable authentication framework. This issue 
> provides a Kerberos plugin implementation.






[jira] [Updated] (SOLR-7274) Pluggable authentication module in Solr

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7274:
---
Attachment: SOLR-7274.patch

Splitting out the framework (here) and the Kerberos plugin (SOLR-7468). Here's 
a patch with just the plugin framework.

> Pluggable authentication module in Solr
> ---
>
> Key: SOLR-7274
> URL: https://issues.apache.org/jira/browse/SOLR-7274
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Anshum Gupta
> Attachments: SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
> SOLR-7274.patch
>
>
> It would be good to have Solr support different authentication protocols.
> To begin with, it'd be good to have support for kerberos and basic auth.






[jira] [Created] (SOLR-7468) Kerberos authentication module

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-7468:
--

 Summary: Kerberos authentication module
 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya


SOLR-7274 introduces a pluggable authentication framework. This issue provides 
a Kerberos plugin implementation.






[jira] [Resolved] (SOLR-7467) upgrade tdigest (3.1)

2015-04-24 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-7467.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

> upgrade tdigest (3.1)
> -
>
> Key: SOLR-7467
> URL: https://issues.apache.org/jira/browse/SOLR-7467
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: Trunk, 5.2
>
>
> The jar is a drop-in replacement; it just needs a trivial change to 
> ivy-versions.properties and a replacement of the jar's sha1 file






[jira] [Commented] (SOLR-7467) upgrade tdigest (3.1)

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511827#comment-14511827
 ] 

ASF subversion and git services commented on SOLR-7467:
---

Commit 1675963 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675963 ]

SOLR-7467: Upgrade t-digest to 3.1 (merge r1675949)

> upgrade tdigest (3.1)
> -
>
> Key: SOLR-7467
> URL: https://issues.apache.org/jira/browse/SOLR-7467
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> The jar is a drop-in replacement; it just needs a trivial change to 
> ivy-versions.properties and a replacement of the jar's sha1 file






[jira] [Comment Edited] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511795#comment-14511795
 ] 

Uwe Schindler edited comment on LUCENE-6450 at 4/24/15 9:30 PM:


bq. I'm curious about the precisionstep as well. The extra terms should give 
range queries a huge speedup if we can use them.

The problem is the following: the NRQ approach only works for ranges (at the 
bounds of the range we use higher-precision terms, but in the center we use 
lower-precision terms). Here, the range query only filters out terms that can 
never match; for the terms that are in the range we still have to check 
whether they fall inside the bbox, and that check needs full precision, so we 
cannot use the lower-precision terms.

bq. Great stuff! Should this be used as the underlying implementation for 
Solr's LatLonType (which currently does not have multi-valued support)? Any 
downsides for the single-valued case?

The problem is: with a large bbox and many distinct points you have to visit 
many terms, because this does not use the trie algorithm of NRQ. It extends 
NRQ, but not the algorithm: it is a standard TermRangeQuery with some extra 
filtering, and it does not even seek the terms enum! So for the single-value 
case I would always prefer the 2 NRQ queries on lat and lon separately. In the 
worst case (a bbox over the whole earth) you have to visit *all* terms and get 
their postings => more or less the same as a default term range.

One workaround would be: if we used Hilbert curves, we could calculate the 
quadratic box around the center of the bbox that is representable as a single 
numeric range (one where no post-filtering is needed). This range could be 
executed by the default NRQ algorithm using shifted values, and for the 
remaining area around it we would visit only the high-precision terms. With 
the current Morton/Z-curve we cannot do this. So if we don't fix this now, we 
should definitely put this into the sandbox, so we have the chance to change 
the algorithm.

Another alternative is to just use plain NRQ (ideally also with more locality 
using Hilbert curves) and post-filter the actual results (using doc values). 
This would also be preferable for polygons.

The current implementation is not usable for large bounding boxes covering 
many different positions! E.g. in my case (PANGAEA), we have lat/lon 
coordinates around the whole world, including the poles, and scientists 
generally select large bboxes... It is perfectly fine for searching for shops 
in towns, of course :-)
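For readers unfamiliar with the Morton/Z-curve encoding under discussion, here is a minimal bit-interleaving sketch. It is illustrative only, with assumed names, and is not Lucene's actual GeoPointField code:

```java
// Illustrative Morton (Z-order) encoding sketch; not Lucene's GeoPointField code.
public class Morton {
    // Interleave the bits of x (even bit positions) and y (odd bit positions)
    // into a single 64-bit Z-order code.
    static long interleave(int x, int y) {
        long code = 0L;
        for (int i = 0; i < 32; i++) {
            code |= ((long) ((x >>> i) & 1)) << (2 * i);
            code |= ((long) ((y >>> i) & 1)) << (2 * i + 1);
        }
        return code;
    }
}
```

Nearby points tend to share high-order prefix bits of the code, which is what makes prefix and range tricks attractive; the comment above explains why a bbox query over such codes still needs full-precision terms for the final containment check.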



> Add simple encoded GeoPointField type to core

[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511796#comment-14511796
 ] 

Ishan Chattopadhyaya commented on LUCENE-6450:
--

+1, this looks good! Just skimmed through the patch, though.

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch, LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511795#comment-14511795
 ] 

Uwe Schindler commented on LUCENE-6450:
---

bq. I'm curious about the precisionstep as well. The extra terms should give 
range queries a huge speedup if we can use them.

The problem is the following: the NRQ approach only works for ranges (at the 
bounds of the range we use higher-precision terms, but in the center we use 
lower-precision terms). The problem here is that we use the actual range only 
to filter out terms which can never match. But for those that are in the range 
we still have to check whether they are in the bbox. To do this, we need full 
precision, so we cannot use lower-precision terms.

bq. Great stuff! Should this be used as the underlying implementation for 
Solr's LatLonType (which currently does not have multi-valued support)? Any 
downsides for the single-valued case?

The problem is: if you have a large bbox and many distinct points, you have to 
visit many terms, because this does not use the trie algorithm of NRQ. It 
extends NRQ, but does not use the algorithm; it is a standard TermRangeQuery 
with some extra filtering. It does not even seek the terms enum! So for the 
single-value case I would always prefer the 2 NRQ queries. In the worst case 
(a bbox covering the whole earth), you have to visit *all* terms and get their 
postings => more or less the same as a default term range.

One workaround would be: if we used Hilbert curves, we could calculate the 
quadratic box around the center of the bbox that is representable as a single 
numeric range (one where no post-filtering is needed). This range could be 
executed by the default NRQ algorithm using shifted values. For the remaining 
area around it we would visit only the high-precision terms. With the current 
Morton/Z-curve we cannot do this. So if we don't fix this now, we should 
definitely put this into the sandbox, so we have the chance to change the 
algorithm.

Another alternative is to just use plain NRQ (ideally also with more locality 
using Hilbert curves) and post-filter the actual results (using doc values). 
This would also be preferable for polygons.

The current implementation is not usable for large bounding boxes covering 
many different positions! E.g., in my case (PANGAEA), we have lat/lon 
coordinates around the whole world, including the poles, and scientists 
generally select large bboxes... It is perfectly fine for searching for shops 
in towns, of course :-)

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch, LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






Re: svn commit: r1671240 [1/2] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/cloud/ core/src/java/org/apache/solr/cloud/overseer/ core/src/java/org/apache/solr/core/ core/src/java/org/

2015-04-24 Thread Chris Hostetter

Hey Shai,

The CHANGES.txt additions you made here (and for SOLR-7325) are only in 
the "Upgrading" section.  

Specific CHANGES entries (with credit to contributors) should always exist 
in one of the "Detailed Change List" sections (in this case, probably 
either "New Features" or "Other Changes"), and then in the Upgrading 
section there should be a summary/reference to those CHANGES, specifically 
worded as it pertains to/impacts users who upgrade and what they need to know.

Some examples of a "good" CHANGES.txt to model after when you think 
a change impacts people who upgrade...

https://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?r1=1592076&r2=1592075&pathrev=1592076
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?r1=1641819&r2=1641818&pathrev=1641819
https://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?r1=1646660&r2=1646659&pathrev=1646660



: Date: Sat, 04 Apr 2015 07:02:21 -
: From: sh...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: svn commit: r1671240 [1/2] - in /lucene/dev/trunk/solr: ./
: core/src/java/org/apache/solr/cloud/
: core/src/java/org/apache/solr/cloud/overseer/
: core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/handler/
:  core/src/java/org/apache/solr/handl...
: 
: Author: shaie
: Date: Sat Apr  4 07:02:20 2015
: New Revision: 1671240
: 
: URL: http://svn.apache.org/r1671240
: Log:
: SOLR-7336: Add State enum to Replica
: 
: Modified:
: lucene/dev/trunk/solr/CHANGES.txt
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/CloudDescriptor.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/LeaderInitiatedRecoveryThread.java
: lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/Overseer.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/OverseerAutoReplicaFailoverThread.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionProcessor.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/ZkController.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/cloud/overseer/ReplicaMutator.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/JarRepository.java
: lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/ZkContainer.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/servlet/ZookeeperInfoServlet.java
: 
lucene/dev/trunk/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java
: lucene/dev/trunk/solr/core/src/java/org/apache/solr/util/SolrCLI.java
: lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/AssignTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/CustomCollectionTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/DeleteInactiveReplicaTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/DeleteLastCustomShardedReplicaTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/LeaderInitiatedRecoveryOnCommitTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverUtilsTest.java
: 
lucene/dev/trunk/solr/core/src/test/org/apache/solr/cloud/SyncSliceTest.java
: 
lucene/dev/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java
: 
lucene/dev/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/request/CoreAdminRequest.java
: 
lucene/dev/trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterStateUtil.java
: 
lucene/dev/tru

[jira] [Commented] (SOLR-7221) ConcurrentUpdateSolrServer does not work with HttpClientBuilder (4.3.1)

2015-04-24 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511783#comment-14511783
 ] 

Ishan Chattopadhyaya commented on SOLR-7221:


bq. Is there a related ticket to migrate HttpSolrServer to the new httpclient 
4.3 api?
HttpClient has already been upgraded to 4.4.1, but it still uses the 
deprecated APIs rather than the HttpClientBuilder interface. Here's the 
related issue: SOLR-5604.

> ConcurrentUpdateSolrServer does not work with HttpClientBuilder (4.3.1)
> ---
>
> Key: SOLR-7221
> URL: https://issues.apache.org/jira/browse/SOLR-7221
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 4.9
>Reporter: g00glen00b
>  Labels: httpclient, solrj
>
> I recently found out about the {{ConcurrentUpdateSolrServer}} and I'm trying 
> to switch from the {{HttpSolrServer}} for batch processing.
> However, our Solr is protected with basic authentication, so we're using a 
> custom {{HttpClient}} that sends the credentials with it.
> This works fine with {{HttpSolrServer}}, but not with 
> {{ConcurrentUpdateSolrServer}}.  The {{ConcurrentUpdateSolrServer}} uses 
> {{this.server.setFollowRedirects(false)}}, but this triggers {{getParams()}} 
> on the {{HttpClient}}, which throws an {{UnsupportedOperationException}} when 
> you use the {{InternalHttpClient}}, which is the default type when using the 
> {{HttpClientBuilder}}.
> The stack trace produced is:
>  {code}
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.http.impl.client.InternalHttpClient.getParams(InternalHttpClient.java:206)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.setFollowRedirects(HttpClientUtil.java:267)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.setFollowRedirects(HttpSolrServer.java:658)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:124)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:115)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:105)
> {code}
> It's annoying of course, and I don't know who is to be "blamed". I reported 
> it here anyway because the {{getParams()}} method is deprecated.
> I'm using SolrJ 4.9, but I also noticed that it's not working on 4.7, 4.8, 
> or any other version using HttpClient 4.3.






[jira] [Resolved] (SOLR-7454) Solr 5.1 does not use SOLR_JAVA_MEM in solr.in.sh

2015-04-24 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar resolved SOLR-7454.
-
   Resolution: Duplicate
Fix Version/s: 5.2
 Assignee: Ramkumar Aiyengar

> Solr 5.1 does not use SOLR_JAVA_MEM in solr.in.sh
> -
>
> Key: SOLR-7454
> URL: https://issues.apache.org/jira/browse/SOLR-7454
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
> Environment: CentOS 7
>Reporter: Kyle Barnhart
>Assignee: Ramkumar Aiyengar
> Fix For: 5.2
>
>
> Solr 5.0 uses the SOLR_JAVA_MEM in solr.in.sh while Solr 5.1 does not.
> I was able to resolve this by commenting out line 1262 in bin/solr
> {code}
> SOLR_JAVA_MEM=()
> {code}
> and setting SOLR_JAVA_MEM in bin/solr.in.sh
> {code}
> SOLR_JAVA_MEM=('-Xms512m' '-Xmx512m')
> {code}
> instead of
> {code}
> SOLR_JAVA_MEM="-Xms512m -Xmx512m"
> {code}
>  
> This could probably be resolved with something like the following at lines 
> 1262 to 1269 in bin/solr
> {code}
> if [ -z "$SOLR_JAVA_MEM" ]; then
>   if [ "$SOLR_HEAP" != "" ]; then
> SOLR_JAVA_MEM=("-Xms$SOLR_HEAP" "-Xmx$SOLR_HEAP")
>   else
> SOLR_JAVA_MEM=('-Xms512m' '-Xmx512m')
>   fi
> else
>   SOLR_JAVA_MEM=($SOLR_JAVA_MEM)
> fi
> {code}
> instead of
> {code}
> SOLR_JAVA_MEM=()
> if [ "$SOLR_HEAP" != "" ]; then
>   SOLR_JAVA_MEM=("-Xms$SOLR_HEAP" "-Xmx$SOLR_HEAP")
> fi
> if [ -z "$SOLR_JAVA_MEM" ]; then
>   SOLR_JAVA_MEM=('-Xms512m' '-Xmx512m')
> fi
> {code}
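The string-vs-array distinction behind this bug can be seen in a minimal bash 
sketch (the variable name is reused from bin/solr, and the reset line mirrors 
what the 5.1 script does; the echoed counts are just for illustration):

```shell
# solr.in.sh sets the option string (the Solr 5.0-era form):
SOLR_JAVA_MEM="-Xms512m -Xmx512m"

# bin/solr in 5.1 then unconditionally resets the variable to an empty
# array, silently discarding the user's setting:
SOLR_JAVA_MEM=()
echo "after reset: ${#SOLR_JAVA_MEM[@]} element(s)"

# the proposed fix keeps a non-empty value, and re-splits a string form
# into an array so it expands into separate JVM arguments:
SOLR_JAVA_MEM="-Xms512m -Xmx512m"
if [ -z "$SOLR_JAVA_MEM" ]; then
  SOLR_JAVA_MEM=('-Xms512m' '-Xmx512m')
else
  SOLR_JAVA_MEM=($SOLR_JAVA_MEM)
fi
echo "after fix: ${#SOLR_JAVA_MEM[@]} element(s)"
```

With the fix applied, the string from solr.in.sh ends up as a two-element 
array, matching what the rest of bin/solr expects.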






[jira] [Commented] (SOLR-7467) upgrade tdigest (3.1)

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511755#comment-14511755
 ] 

ASF subversion and git services commented on SOLR-7467:
---

Commit 1675949 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1675949 ]

SOLR-7467: Upgrade t-digest to 3.1

> upgrade tdigest (3.1)
> -
>
> Key: SOLR-7467
> URL: https://issues.apache.org/jira/browse/SOLR-7467
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> The jar is a drop-in replacement; we just need a trivial change to 
> ivy-versions.properties and to replace the jar's sha1 file






[jira] [Commented] (SOLR-7467) upgrade tdigest (3.1)

2015-04-24 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511750#comment-14511750
 ] 

Hoss Man commented on SOLR-7467:



Beyond some basic bug fixes, 3.1 includes 2 notable changes...

* a new static {{TDigest.createDigest()}} factory method to create "a TreeDigest 
of whichever type is the currently recommended type"
** I looked into switching to this to minimize the code changes needed in Solr 
as the library itself improves in future versions, but it only helps for 
creating empty instances -- when attempting to merge the byte[] data from 
multiple shards, you still have to know the concrete TDigest implementation 
used to know which "fromBytes" method to call -- so it's not really useful to 
us yet (I filed https://github.com/tdunning/t-digest/issues/52)

* new MergingDigest implementation
** this looks interesting and might be worth switching to in the future, but 
based on the comments in the class-level javadocs about more testing being 
needed, and since the {{TDigest.createDigest()}} method mentioned above still 
uses {{AVLTreeDigest}}, we should probably just leave our code alone and keep 
using {{AVLTreeDigest}} in Solr for now.



> upgrade tdigest (3.1)
> -
>
> Key: SOLR-7467
> URL: https://issues.apache.org/jira/browse/SOLR-7467
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> The jar is a drop-in replacement; we just need a trivial change to 
> ivy-versions.properties and to replace the jar's sha1 file






[jira] [Created] (SOLR-7467) upgrade tdigest (3.1)

2015-04-24 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7467:
--

 Summary: upgrade tdigest (3.1)
 Key: SOLR-7467
 URL: https://issues.apache.org/jira/browse/SOLR-7467
 Project: Solr
  Issue Type: Task
Reporter: Hoss Man
Assignee: Hoss Man



The jar is a drop-in replacement; we just need a trivial change to 
ivy-versions.properties and to replace the jar's sha1 file







[jira] [Assigned] (LUCENE-6449) NullPointerException in PostingsHighlighter

2015-04-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-6449:
--

Assignee: Michael McCandless

> NullPointerException in PostingsHighlighter
> ---
>
> Key: LUCENE-6449
> URL: https://issues.apache.org/jira/browse/LUCENE-6449
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1
>Reporter: Roman Khmelichek
>Assignee: Michael McCandless
> Attachments: postingshighlighter.patch
>
>
> If an index segment does not contain any docs with the field requested for 
> highlighting, there should be a null check immediately following this 
> line (in PostingsHighlighter.java):
> Terms t = r.terms(field);
> It looks like the null check was moved in the 5.1 release, and this 
> occasionally causes a NullPointerException in my near-real-time searcher.






[jira] [Assigned] (SOLR-7464) Cluster status API call should be served by the node which it was called from

2015-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-7464:
---

Assignee: Varun Thacker

> Cluster status API call should be served by the node which it was called from
> -
>
> Key: SOLR-7464
> URL: https://issues.apache.org/jira/browse/SOLR-7464
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7464.patch
>
>
> Currently the cluster status API call has to go to the 
> OverseerCollectionProcessor to serve the request. We should serve the 
> request directly from the node where the call was made.
> That way we will end up putting fewer tasks into the overseer.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40) - Build # 12263 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12263/
Java: 64bit/jdk1.8.0_40 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=6013, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=6014, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=6011, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)4) Thread[id=6015, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=6016, 
name=NioSocketAcceptor-1, state=RUNNABLE, group=TGRP-SaslZkACLProviderTest] 
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.select(NioSocketAcceptor.java:234)
 at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:417)
 at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=6012, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 

[jira] [Updated] (LUCENE-6445) Highlighter TokenSources simplification; just one getAnyTokenStream()

2015-04-24 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6445:
-
Attachment: LUCENE-6445_TokenSources_simplification.patch

Attached patch.
The 2nd method name is actually "getTermVectorTokenStreamOrNull", and I decided 
that positions on the term vector needn't be a hard requirement.  

The patch adds a test for the maxStartOffset behavior. The javadocs for these 
two methods are quite complete, including a warning about multi-valued fields. 
Solr now calls one of these with maxStartOffset, so it will benefit. 
Updating all the test calls was a bit tedious.

Also, this highlighter module now depends on analysis-common for the 
LimitTokenOffsetFilter.

> Highlighter TokenSources simplification; just one getAnyTokenStream()
> -
>
> Key: LUCENE-6445
> URL: https://issues.apache.org/jira/browse/LUCENE-6445
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE-6445_TokenSources_simplification.patch
>
>
> The Highlighter "TokenSources" class has quite a few utility methods 
> pertaining to getting a TokenStream from either term vectors or analyzed 
> text.  I think it's too much:
> * some go to term vectors, some don't.  But if you don't want to go to term 
> vectors, then it's quite easy for the caller to invoke the Analyzer for the 
> field value, and to get that field value.
> * Some methods return null, some never null; I forget which at a glance.
> * Some methods read the Document (to get a field value) from the IndexReader, 
> some don't.  Furthermore, it's not an ideal place to get the doc since your 
> app might be using an IndexSearcher with a document cache (e.g. 
> SolrIndexSearcher).
> * None of the methods accept a Fields instance from term vectors as a 
> parameter.  Based on how Lucene's term vector format works, this is a 
> performance trap if you don't re-use an instance across fields on the 
> document that you're highlighting.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b54) - Build # 12434 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12434/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (38 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (38 > 20) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([3288972F2B56F348:BADCA8F585AA9EB0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

Re: Running 5.1.0 test-suite via maven

2015-04-24 Thread Dawid Weiss
1. Does it fail when you run it via ant (ant build)?
2. The problem is very likely in the dependencies declared in Maven (as
compared to what the Ivy dependencies declare).

Dawid

On Fri, Apr 24, 2015 at 10:24 AM, Per Steffensen  wrote:
> I have checked out tag lucene_solr_5_1_0 and try to run the test-suite using
> maven
>   ant get-maven-poms
>   cd maven-build/
>   mvn -N -Pbootstrap -DskipTests install
>   mvn -Dmaven.test.failure.ignore=true test
>
> The following tests fail running the test-suite this way using maven
> * SuggestFieldTest - all test methods fail
> * hadoop.MorphlineBasicMiniMRTest and hadoop.MorphlineGoLiveMiniMRTest  -
> setupClass fails
> * TestCoreDiscovery.testCoreDirCantRead - test fails
>
> Now trying to run in Eclipse
>   ant eclipse
>   Open Eclipse and import project
> Running the failing tests
> * SuggestFieldTest - test passes
> * hadoop.MorphlineBasicMiniMRTest and hadoop.MorphlineGoLiveMiniMRTest -
> setupClass fails
> * TestCoreDiscovery.testCoreDirCantRead - test passes
>
> Can anyone explain this? I would expect a green test-suite for a released 
> solr/lucene.
> What might be wrong with the SuggestFieldTest and TestCoreDiscovery tests run 
> through maven (fail), compared to running them through Eclipse (pass)?
> Why do the Morphline tests fail (both maven and Eclipse)?
> Is this the correct way to run tests using maven? Is there another way the 
> test-suite is usually run (e.g. using ant)?
>
> Thanks in advance.
>
> Regards, Per Steffensen
> - fail stacktraces
> ---
> * SuggestFieldTest fails using maven (not Eclipse) like this:
> java.lang.IllegalArgumentException: An SPI class of type
> org.apache.lucene.codecs.PostingsFormat with name 'completion' does not
> exist.  You need to add the corresponding JAR file supporting this SPI to
> your classpath.  The current classpath supports the following names:
> [MockRandom, RAMOnly, LuceneFixedGap, LuceneVarGapFixedInterval,
> LuceneVarGapDocFreqInterval, TestBloomFilteredLucenePostings, Asserting,
> Lucene50, BlockTreeOrds, BloomFilter, Direct, FSTOrd50, FST50, Memory,
> SimpleText]
> at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:111)
> at
> org.apache.lucene.codecs.PostingsFormat.forName(PostingsFormat.java:100)
> at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:255)
> at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:336)
> at
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:104)
> at org.apache.lucene.index.SegmentReader.(SegmentReader.java:65)
> at
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:132)
> at
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:184)
> at
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:429)
> at
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:343)
> at
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:280)
> at
> org.apache.lucene.search.suggest.document.SuggestFieldTest.testReturnedDocID(SuggestFieldTest.java:459)
>
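The 'completion' lookup above goes through the META-INF/services SPI mechanism (Lucene's NamedSPILoader reads the same service metadata), so if the suggest module's jar is missing from the Maven test classpath, no provider registers that name. A stdlib-only illustration of that failure mode, with a hypothetical NamedFormat interface standing in for PostingsFormat:

```java
import java.util.ServiceLoader;

// Hedged, stdlib-only sketch of the SPI failure above: ServiceLoader finds
// providers via META-INF/services entries on the classpath. With no jar
// registering a provider, lookup by name simply finds nothing - analogous
// to the missing 'completion' PostingsFormat. NamedFormat is hypothetical.
public class SpiLookupDemo {
    public interface NamedFormat { String name(); }

    public static void main(String[] args) {
        boolean found = false;
        for (NamedFormat f : ServiceLoader.load(NamedFormat.class)) {
            if ("completion".equals(f.name())) {
                found = true;
            }
        }
        // No jar on this classpath registers a NamedFormat provider,
        // so the named lookup fails.
        System.out.println(found ? "found" : "not found");
    }
}
```

This is consistent with Dawid's diagnosis: the ant/Ivy classpath pulls in the jar that carries the service entry, while the generated Maven poms apparently do not.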
> * The Morphline tests fail (both maven and Eclipse) like this:
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> java.lang.NoClassDefFoundError:
> org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryWriter
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createRMApplicationHistoryWriter(ResourceManager.java:357)
> at
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:468)
> at
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:989)
> at
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:255)
> at
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at
> org.apache.solr.hadoop.hack.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:200)
> at
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at
> org.apache.hadoop.service.CompositeService.serviceStart(Comp

[jira] [Updated] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2015-04-24 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6453:
-
Attachment: LUCENE-6453.patch

This time with upHeap returning void instead of boolean, since the return value is not used.

> Specialize SpanPositionQueue similar to DisiPriorityQueue
> -
>
> Key: LUCENE-6453
> URL: https://issues.apache.org/jira/browse/LUCENE-6453
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6453.patch, LUCENE-6453.patch
>
>
> Inline the position comparison function






[jira] [Commented] (SOLR-7466) Allow optional leading wildcards in complexphrase

2015-04-24 Thread Andy hardin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511629#comment-14511629
 ] 

Andy hardin commented on SOLR-7466:
---

I am currently working on a plugin for our usage which will be a version of 
ComplexPhraseQParser with {{lparser.setAllowLeadingWildcard(true);}}.  I'd still 
like to see the option added to ComplexPhraseQParser itself, as shown in my 
example.  I also have a patch in the works, but want to make sure I have good 
test coverage for it.
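For illustration, the matching semantics requested in the issue (each wildcarded term in the quoted phrase must match the token at the same position) can be mimicked with plain java.util.regex. This is a sketch of the expected behavior only, not ComplexPhraseQParser's implementation:

```java
import java.util.regex.Pattern;

// Stdlib-only sketch of the requested semantics for
// {!complexphrase allowLeadingWildcard=true} "j* *th": each term pattern,
// leading or trailing wildcard alike, must match the token at the same
// position in the phrase. Illustrative only - not the Solr implementation.
public class WildcardPhraseDemo {
    static boolean phraseMatches(String phrase, String text) {
        String[] pats = phrase.toLowerCase().split("\\s+");
        String[] toks = text.toLowerCase().split("\\s+");
        if (pats.length != toks.length) return false;
        for (int i = 0; i < pats.length; i++) {
            // translate the query wildcard '*' into regex '.*'
            String re = pats[i].replace("*", ".*");
            if (!Pattern.matches(re, toks[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(phraseMatches("j* *th", "John Smith"));   // true
        System.out.println(phraseMatches("j* *th", "Jim Smith"));    // true
        System.out.println(phraseMatches("j* *th", "John Schmitt")); // false
    }
}
```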

> Allow optional leading wildcards in complexphrase
> -
>
> Key: SOLR-7466
> URL: https://issues.apache.org/jira/browse/SOLR-7466
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Andy hardin
>  Labels: complexPhrase, query-parser, wildcards
>
> Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms 
> in a phrase, but does not allow leading wildcards.  I would like the option 
> to be able to search for terms with both trailing and leading wildcards.  
> For example with:
> {!complexphrase allowLeadingWildcard=true} "j* *th"
> would match "John Smith", "Jim Smith", but not "John Schmitt"






[jira] [Created] (SOLR-7466) Allow optional leading wildcards in complexphrase

2015-04-24 Thread Andy hardin (JIRA)
Andy hardin created SOLR-7466:
-

 Summary: Allow optional leading wildcards in complexphrase
 Key: SOLR-7466
 URL: https://issues.apache.org/jira/browse/SOLR-7466
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 4.8
Reporter: Andy hardin


Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms 
in a phrase, but does not allow leading wildcards.  I would like the option to 
be able to search for terms with both trailing and leading wildcards.  

For example with:
{!complexphrase allowLeadingWildcard=true} "j* *th"
would match "John Smith", "Jim Smith", but not "John Schmitt"






[jira] [Updated] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2015-04-24 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6453:
-
Attachment: LUCENE-6453.patch

> Specialize SpanPositionQueue similar to DisiPriorityQueue
> -
>
> Key: LUCENE-6453
> URL: https://issues.apache.org/jira/browse/LUCENE-6453
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6453.patch
>
>
> Inline the position comparison function






[jira] [Updated] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2015-04-24 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6453:
-
Attachment: (was: LUCENE-6453.patch)

> Specialize SpanPositionQueue similar to DisiPriorityQueue
> -
>
> Key: LUCENE-6453
> URL: https://issues.apache.org/jira/browse/LUCENE-6453
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: Trunk, 5.x
>
>
> Inline the position comparison function






[jira] [Updated] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2015-04-24 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-6453:
-
Attachment: LUCENE-6453.patch

> Specialize SpanPositionQueue similar to DisiPriorityQueue
> -
>
> Key: LUCENE-6453
> URL: https://issues.apache.org/jira/browse/LUCENE-6453
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6453.patch
>
>
> Inline the position comparison function






[jira] [Created] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2015-04-24 Thread Paul Elschot (JIRA)
Paul Elschot created LUCENE-6453:


 Summary: Specialize SpanPositionQueue similar to DisiPriorityQueue
 Key: LUCENE-6453
 URL: https://issues.apache.org/jira/browse/LUCENE-6453
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Paul Elschot
Priority: Minor
 Fix For: Trunk, 5.x


Inline the position comparison function






[jira] [Updated] (SOLR-7465) Flesh out solr/example/files

2015-04-24 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-7465:
---
Summary: Flesh out solr/example/files  (was: nonsensical 
solr/example/files/README.txt needs fixed/removed)

> Flesh out solr/example/files
> 
>
> Key: SOLR-7465
> URL: https://issues.apache.org/jira/browse/SOLR-7465
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
>
> this README.txt file that's actually some sort of bizarre shell script exists 
> on trunk in an otherwise empty directory...
> https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup
> file added by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652721
> all of the other files in this directory were removed by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652759






[jira] [Updated] (SOLR-7465) nonsensical solr/example/files/README.txt needs fixed/removed

2015-04-24 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-7465:
---
 Priority: Minor  (was: Major)
Fix Version/s: 5.2
   Issue Type: Task  (was: Bug)

> nonsensical solr/example/files/README.txt needs fixed/removed
> -
>
> Key: SOLR-7465
> URL: https://issues.apache.org/jira/browse/SOLR-7465
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 5.2
>
>
> this README.txt file that's actually some sort of bizarre shell script exists 
> on trunk in an otherwise empty directory...
> https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup
> file added by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652721
> all of the other files in this directory were removed by this commit: 
> https://svn.apache.org/viewvc?view=revision&revision=1652759






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4723 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4723/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:
There should be 3 documents because there should be two id=1 docs due to 
overwrite=false expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: There should be 3 documents because there should be 
two id=1 docs due to overwrite=false expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([B991998ABE83A57F:31C5A650107FC887]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testOverwriteOption(CloudSolrClientTest.java:171)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48

[jira] [Created] (SOLR-7465) nonsensical solr/example/files/README.txt needs fixed/removed

2015-04-24 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7465:
--

 Summary: nonsensical solr/example/files/README.txt needs 
fixed/removed
 Key: SOLR-7465
 URL: https://issues.apache.org/jira/browse/SOLR-7465
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erik Hatcher


this README.txt file that's actually some sort of bizarre shell script exists on 
trunk in an otherwise empty directory...

https://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/files/README.txt?view=markup

file added by this commit: 
https://svn.apache.org/viewvc?view=revision&revision=1652721
all of the other files in this directory were removed by this commit: 
https://svn.apache.org/viewvc?view=revision&revision=1652759







[jira] [Resolved] (LUCENE-6451) Support non-static methods in the Javascript compiler

2015-04-24 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst resolved LUCENE-6451.

   Resolution: Fixed
Fix Version/s: 5.2
   Trunk
 Assignee: Ryan Ernst

Thanks Jack!

> Support non-static methods in the Javascript compiler
> -
>
> Key: LUCENE-6451
> URL: https://issues.apache.org/jira/browse/LUCENE-6451
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
>Assignee: Ryan Ernst
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: LUCENE-6451.patch, LUCENE-6451.patch
>
>
> Allow methods such as date.getMonth() or string.getOrdinal() to be added in 
> the same way expression variables are now (forwarded to the bindings for 
> processing).  This change will only allow non-static methods that have zero 
> arguments due to current limitations in the architecture, and to keep the 
> change simple.






[jira] [Commented] (LUCENE-6451) Support non-static methods in the Javascript compiler

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511469#comment-14511469
 ] 

ASF subversion and git services commented on LUCENE-6451:
-

Commit 1675927 from [~rjernst] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675927 ]

LUCENE-6451: Expressions now support bindings keys that look like zero arg 
functions (merged r1675926)

> Support non-static methods in the Javascript compiler
> -
>
> Key: LUCENE-6451
> URL: https://issues.apache.org/jira/browse/LUCENE-6451
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
>Priority: Minor
> Attachments: LUCENE-6451.patch, LUCENE-6451.patch
>
>
> Allow methods such as date.getMonth() or string.getOrdinal() to be added in 
> the same way expression variables are now (forwarded to the bindings for 
> processing).  This change will only allow non-static methods that have zero 
> arguments due to current limitations in the architecture, and to keep the 
> change simple.






[jira] [Commented] (LUCENE-6451) Support non-static methods in the Javascript compiler

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511463#comment-14511463
 ] 

ASF subversion and git services commented on LUCENE-6451:
-

Commit 1675926 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1675926 ]

LUCENE-6451: Expressions now support bindings keys that look like zero arg 
functions

> Support non-static methods in the Javascript compiler
> -
>
> Key: LUCENE-6451
> URL: https://issues.apache.org/jira/browse/LUCENE-6451
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
>Priority: Minor
> Attachments: LUCENE-6451.patch, LUCENE-6451.patch
>
>
> Allow methods such as date.getMonth() or string.getOrdinal() to be added in 
> the same way expression variables are now (forwarded to the bindings for 
> processing).  This change will only allow non-static methods that have zero 
> arguments due to current limitations in the architecture, and to keep the 
> change simple.






[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Scott Dawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511462#comment-14511462
 ] 

Scott Dawson commented on SOLR-7462:


Shawn, Erick - Thanks. I'll follow your instructions and report back when I 
have some test results.

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, getStringChars does not return the closing
>     // single quote or double quote, so try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }
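A stdlib-only illustration of the off-by-one described above: with `>=`, a backing array that ends exactly at start + size still passes the check, and the subsequent read of array[start + size] throws ArrayIndexOutOfBoundsException; a strict `>` excludes that case. The peekAfter helper is a hypothetical stand-in for the parser's lookahead read, not the Solr code:

```java
// Minimal sketch of the bounds check discussed in SOLR-7462. The parser
// peeks at the char just past the string's contents; array.length must be
// strictly greater than start + size for that read to be in bounds.
public class BoundsCheckDemo {
    // hypothetical stand-in for the parser's lookahead read
    static Character peekAfter(char[] array, int start, int size) {
        if (array.length > start + size) { // corrected: strict '>'
            return array[start + size];
        }
        return null; // nothing past the string to peek at
    }

    public static void main(String[] args) {
        char[] exact = {'a', 'b', 'c'};      // length == start + size
        char[] roomy = {'a', 'b', 'c', '"'}; // one extra char available

        // with the original '>=' check, the 'exact' case would have
        // attempted array[3] and thrown ArrayIndexOutOfBoundsException
        System.out.println(peekAfter(exact, 0, 3)); // null
        System.out.println(peekAfter(roomy, 0, 3));
    }
}
```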






[jira] [Updated] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6450:
---
Attachment: LUCENE-6450.patch

Updated patch attached. It adds the experimental tag and moves the GeoPointField 
& Query classes to the sandbox.

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch, LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.
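The morton (z-order) hashing mentioned above can be sketched with plain bit twiddling: interleave the bits of two 32-bit values into one long so that nearby (x, y) pairs share long prefixes. The bit widths and lack of lat/lon scaling below are illustrative only, not Lucene's actual GeoPointField encoding:

```java
// Hedged sketch of morton (z-order) interleaving: spread() inserts a zero
// bit between each bit of a 32-bit value, and interleave() merges two
// spread values into a single long term. Illustrative, not Lucene's code.
public class MortonDemo {
    // spread the lower 32 bits of v so a zero bit separates each one
    static long spread(long v) {
        v &= 0xFFFFFFFFL;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFL;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFL;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0FL;
        v = (v | (v << 2))  & 0x3333333333333333L;
        v = (v | (v << 1))  & 0x5555555555555555L;
        return v;
    }

    // x occupies the even bit positions, y the odd ones
    static long interleave(int x, int y) {
        return spread(x) | (spread(y) << 1);
    }

    public static void main(String[] args) {
        // 0b11 interleaved with 0b00 gives 0b0101 = 5
        System.out.println(interleave(3, 0)); // 5
    }
}
```

Because the interleaved long preserves spatial locality in its prefix bits, range queries over the encoded terms (as the NumericRangeQuery pre-filter described above) can cheaply narrow the candidate set before the exact math runs.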






[jira] [Commented] (LUCENE-6451) Support non-static methods in the Javascript compiler

2015-04-24 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511429#comment-14511429
 ] 

Ryan Ernst commented on LUCENE-6451:


This looks good, thanks for the additional comment and tests.  I will commit 
shortly.

> Support non-static methods in the Javascript compiler
> -
>
> Key: LUCENE-6451
> URL: https://issues.apache.org/jira/browse/LUCENE-6451
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
>Priority: Minor
> Attachments: LUCENE-6451.patch, LUCENE-6451.patch
>
>
> Allow methods such as date.getMonth() or string.getOrdinal() to be added in 
> the same way expression variables are now (forwarded to the bindings for 
> processing).  This change will only allow non-static methods that have zero 
> arguments due to current limitations in the architecture, and to keep the 
> change simple.






[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511414#comment-14511414
 ] 

Erick Erickson commented on SOLR-7462:
--

Scott:

There are some fairly detailed instructions here: 
http://wiki.apache.org/solr/HowToContribute. It's actually surprisingly easy to 
build Solr from a checkout. You need the subversion command line (there are Git 
repos too) and ant.

The first build will take a while as a bunch of stuff has to be brought in by 
ivy.

Note that the "one true build system" uses subversion and ant. The git and 
maven variants should work too but are there for convenience.

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String , the getStringChars do not return the closing
>     // single quote or double quote
>     //so, try to capture that
>     if(chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if(next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }
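The off-by-one the reporter describes is easy to reproduce with a plain array (hypothetical variable names, not the Solr class itself): reading `buf[start + size]` is only valid when `length > start + size`, so a `>=` guard admits exactly the out-of-bounds case.

```java
public class OffByOneDemo {
    public static void main(String[] args) {
        char[] buf = {'x', 'a', 'b'};  // backing array, length 3
        int start = 1, size = 2;       // the string occupies buf[1..2]

        // Buggy guard from the report: ">=" also passes when
        // start + size == buf.length, where buf[start + size] is one
        // past the end and throws ArrayIndexOutOfBoundsException.
        boolean buggyGuard = buf.length >= start + size;  // true here

        // Corrected guard: only read the trailing char when it exists.
        boolean fixedGuard = buf.length > start + size;   // false here

        System.out.println("buggy=" + buggyGuard + " fixed=" + fixedGuard);
        if (fixedGuard) {
            char next = buf[start + size];  // safe: index < length
            System.out.println("trailing char: " + next);
        }
    }
}
```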






[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511391#comment-14511391
 ] 

Michael McCandless commented on LUCENE-6450:


Thanks [~nknize], new patch looks great ... but can you add 
@lucene.experimental to all class-level javadocs so users know the index format 
is subject to change?

I think these classes really do belong in core: they cover the "common case" 
for spatial search.  But maybe we should start with sandbox for now since we 
may make changes that break the index format?

E.g. I think we should find a way to make use of index-time prefix terms (auto 
prefix or numeric field), because with the patch now we will visit O(N) terms 
and O(N) docs in the common case (no docs have exactly the same geo point), but 
if we can use prefix terms, we visit O(log(N)) terms and the same O(N) docs.  
The default block postings format is a far more efficient decode than the block 
terms dict, so offloading the work from terms dict -> postings should be a big 
win (and the post-filtering work would be unchanged, but would have to use doc 
values not the term).

We could do smart things in that case, e.g. carefully pick which prefix terms 
to make use of because they are 100% contained by the shape, and then OR that 
with another query that matches the "edge cells" that must do post-filtering.

Maybe we try a different space filling curve, e.g. I think Hilbert curves would 
be good since they have better spatial locality?  They do have higher 
index-time cost to encode, which is fine, and if we have to cutover to doc 
values for post-filtering anyway (if we use the prefix terms) then we wouldn't 
need to pay a Hilbert decode cost at search time.

But this all should come later: I think this patch is a huge step forward 
already.

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511388#comment-14511388
 ] 

Nicholas Knize commented on LUCENE-6450:


It certainly could be used as the implementation for the LatLonType.  Might be 
worthwhile exploring as a separate issue?

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Updated] (SOLR-7464) Cluster status API call should be served by the node which it was called from

2015-04-24 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7464:

Attachment: SOLR-7464.patch

Simple patch where cluster status is returned from the collections handler. I 
moved all the methods used to get the status into a separate class called 
ClusterStatusAction because the handler was getting too big.

> Cluster status API call should be served by the node which it was called from
> -
>
> Key: SOLR-7464
> URL: https://issues.apache.org/jira/browse/SOLR-7464
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Minor
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7464.patch
>
>
> Currently the cluster status api call has to go to the 
> OverseerCollectionProcessor to serve the request. We should directly serve 
> the request from the node where the call was made to. 
> That way we will end up putting lesser tasks into the overseer.






[jira] [Resolved] (SOLR-7406) Support DV implementation in range faceting

2015-04-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-7406.
-
   Resolution: Fixed
Fix Version/s: 5.2

> Support DV implementation in range faceting
> ---
>
> Key: SOLR-7406
> URL: https://issues.apache.org/jira/browse/SOLR-7406
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7406.patch, SOLR-7406.patch, SOLR-7406.patch, 
> SOLR-7406.patch
>
>
> interval faceting has a different implementation than range faceting based on 
> DocValues API. This is sometimes faster and doesn't rely on filters / filter 
> cache.
> I'm planning to add a "method" parameter that would allow users to choose 
> between the current implementation ("filter"?) and the DV-based 
> implementation ("dv"?). The result for both methods should be the same, but 
> performance may vary.
> Default should continue to be "filter".






[jira] [Created] (SOLR-7464) Cluster status API call should be served by the node which it was called from

2015-04-24 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7464:
---

 Summary: Cluster status API call should be served by the node 
which it was called from
 Key: SOLR-7464
 URL: https://issues.apache.org/jira/browse/SOLR-7464
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Fix For: Trunk, 5.2


Currently the cluster status API call has to go to the 
OverseerCollectionProcessor to serve the request. We should serve the request 
directly from the node where the call was made.

That way we will end up putting fewer tasks into the overseer.






[jira] [Commented] (SOLR-7461) StatsComponent, calcdistinct, ability to disable distinctValues while keeping countDistinct

2015-04-24 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511324#comment-14511324
 ] 

Hoss Man commented on SOLR-7461:


As noted in SOLR-6349...

bq. i think the best approach would be to leave "calcDistinct" alone as it is 
now but deprecate/discourage it and move towards adding an entirely new stats 
option for computing an approximated count using hyperloglog (i opened a new 
issue for this: SOLR-6968)

...the problem is that the "exact" count returned by calcDistinct today 
requires that all distinctValues be aggregated (from all shards in a distrib 
setup) and dumped into a giant Set in memory.  returning the distinctValues may 
seem cumbersome to clients, but not returning them would just mask how painful 
this feature is on the server side, and the biggest problems with it (notably 
server OOMs) wouldn't go away, they'd just be harder to understand.

so i'm generally opposed to adding more flags to _hide_ what is, in my opinion, 
a broken "feature" and instead aim to move on and implement a better version of 
it (hopefully within the next week or so)
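The HyperLogLog alternative referenced above (SOLR-6968) can be illustrated with a toy sketch. This is not Solr code, just a minimal assumed implementation showing why the approach sidesteps the giant in-memory Set: memory stays a fixed array of 1024 one-byte registers no matter how many distinct values are added, at the cost of a few percent counting error.

```java
import java.util.stream.LongStream;

public class HllSketch {
    static final int P = 10, M = 1 << P;  // 2^10 = 1024 registers
    final byte[] reg = new byte[M];

    // splitmix64 finalizer: a well-mixed 64-bit hash of the input.
    static long hash(long x) {
        x += 0x9E3779B97F4A7C15L;
        x = (x ^ (x >>> 30)) * 0xBF58476D1CE4E5B9L;
        x = (x ^ (x >>> 27)) * 0x94D049BB133111EBL;
        return x ^ (x >>> 31);
    }

    void add(long value) {
        long h = hash(value);
        int idx = (int) (h >>> (64 - P));  // top P bits pick a register
        // rank = 1-based position of the first set bit in the remaining bits
        // (the OR-ed sentinel bit caps the rank when those bits are all zero)
        byte rank = (byte) (Long.numberOfLeadingZeros((h << P) | (1L << (P - 1))) + 1);
        if (rank > reg[idx]) reg[idx] = rank;  // keep only the max per register
    }

    double estimate() {
        double sum = 0;
        for (byte r : reg) sum += Math.pow(2, -r);
        double alpha = 0.7213 / (1 + 1.079 / M);  // standard HLL bias constant
        return alpha * M * M / sum;               // harmonic-mean estimator
    }

    public static void main(String[] args) {
        HllSketch hll = new HllSketch();
        LongStream.range(0, 100_000).forEach(hll::add);  // 100k distinct values
        System.out.printf("estimate=%.0f (true=100000, memory=%d bytes)%n",
                hll.estimate(), M);
    }
}
```

With 1024 registers the typical relative error is around 3%, and a distributed merge is just a per-register max across shards, so no shard ever ships its value set over the wire.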

> StatsComponent, calcdistinct, ability to disable distinctValues while keeping 
> countDistinct
> ---
>
> Key: SOLR-7461
> URL: https://issues.apache.org/jira/browse/SOLR-7461
> Project: Solr
>  Issue Type: Improvement
>Reporter: James Andres
>  Labels: statscomponent
>
> When using calcdistinct with large amounts of data the distinctValues field 
> can be extremely large. In cases where the countDistinct is only required it 
> would be helpful if the server did not return distinctValues in the response.
> I'm no expert, but here are some ideas for how the syntax could look.
> {code}
> # Both countDistinct and distinctValues are returned, along with all other 
> stats
> stats.calcdistinct=true&stats.field=myfield
> # Only countDistinct and distinctValues are returned
> stats.calcdistinct=true&stats.field={!countDistinct=true 
> distinctValues=true}myfield
> # Only countDistinct is returned
> stats.calcdistinct=true&stats.field={!countDistinct=true}myfield
> # Only distinctValues is returned
> stats.calcdistinct=true&stats.field={!distinctValues=true}myfield
> {code}






[jira] [Created] (SOLR-7463) Stop forcing MergePolicy's "NoCFSRatio" based on the IWC "useCompoundFile" configuration

2015-04-24 Thread JIRA
Tomás Fernández Löbbe created SOLR-7463:
---

 Summary: Stop forcing MergePolicy's "NoCFSRatio" based on the IWC 
"useCompoundFile" configuration
 Key: SOLR-7463
 URL: https://issues.apache.org/jira/browse/SOLR-7463
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe


Let users specify this value via setter in the solrconfig.xml, and use Lucene's 
default if unset (0.1). 
Document "noCFSRatio" in the ref guide.






[jira] [Commented] (LUCENE-6373) Complete two phase doc id iteration support for Spans

2015-04-24 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511305#comment-14511305
 ] 

Paul Elschot commented on LUCENE-6373:
--

My pleasure, thanks for benchmarking.
A last review found an unused variable in nextStartPosition(), sorry:
{code}
diff --git 
a/lucene/core/src/java/org/apache/lucene/search/spans/SpanOrQuery.java 
b/lucene/core/src/java/org/apache/l
index eca3635..9d0d09a 100644
--- a/lucene/core/src/java/org/apache/lucene/search/spans/SpanOrQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/spans/SpanOrQuery.java
@@ -288,8 +288,6 @@ public class SpanOrQuery extends SpanQuery implements 
Cloneable {
 
   @Override
   public int nextStartPosition() throws IOException {
-DisiWrapper topDocSpans = byDocQueue.top();
-assert topDocSpans.doc != NO_MORE_DOCS;
 if (topPositionSpans == null) {
   byPositionQueue.clear();
   fillPositionQueue(); // fills byPositionQueue at first position
{code}


> Complete two phase doc id iteration support for Spans
> -
>
> Key: LUCENE-6373
> URL: https://issues.apache.org/jira/browse/LUCENE-6373
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Paul Elschot
> Fix For: Trunk, 5.2
>
> Attachments: LUCENE-6373-SpanOr.patch, LUCENE-6373.patch, 
> LUCENE-6373.patch, LUCENE-6737-SpanOr-oneTestFails.patch, 
> SpanPositionQueue.java
>
>
> Spin off from LUCENE-6308, see comments there from about 23 March 2015.






[jira] [Commented] (LUCENE-6452) TestBooleanMinShouldMatch.testRandomQueries test failure

2015-04-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511299#comment-14511299
 ] 

Michael McCandless commented on LUCENE-6452:


+1

> TestBooleanMinShouldMatch.testRandomQueries test failure
> 
>
> Key: LUCENE-6452
> URL: https://issues.apache.org/jira/browse/LUCENE-6452
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6452.patch
>
>
> This is because scoring differences exceed the delta (hardcoded as 1e-5 in 
> queryutils).
> First, clean up the assert so it's debuggable.
> Then, compute the score the same way in conjunctionscorer as in disjunctions 
> and minshouldmatch.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2219 - Failure!

2015-04-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2219/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (26 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (26 > 20) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([1E39F98622562135:966DC65C8CAA4CCD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAda

[jira] [Commented] (LUCENE-6452) TestBooleanMinShouldMatch.testRandomQueries test failure

2015-04-24 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511263#comment-14511263
 ] 

Ryan Ernst commented on LUCENE-6452:


+1 to the patch.

> TestBooleanMinShouldMatch.testRandomQueries test failure
> 
>
> Key: LUCENE-6452
> URL: https://issues.apache.org/jira/browse/LUCENE-6452
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6452.patch
>
>
> This is because scoring differences exceed the delta (hardcoded as 1e-5 in 
> queryutils).
> First, clean up the assert so it's debuggable.
> Then, compute the score the same way in conjunctionscorer as in disjunctions 
> and minshouldmatch.






[jira] [Resolved] (SOLR-7311) Add some infrastructure and tests to make sure Solr works well in the face of Name Node high availability and failover.

2015-04-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-7311.
---
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

> Add some infrastructure and tests to make sure Solr works well in the face of 
> Name Node high availability and failover.
> ---
>
> Key: SOLR-7311
> URL: https://issues.apache.org/jira/browse/SOLR-7311
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: Trunk, 5.2
>
> Attachments: SOLR-7311.patch
>
>







[jira] [Commented] (SOLR-7436) Solr stops printing stacktraces in log and output

2015-04-24 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511257#comment-14511257
 ] 

Hoss Man commented on SOLR-7436:


I'm having a hard time wrapping my head around this.

can you provide some more details please?  

* description of solr setup? (single node? cluster? jvm version?)
* configs used?
* steps to reproduce?
* definition of "a short while" ?

> Solr stops printing stacktraces in log and output
> -
>
> Key: SOLR-7436
> URL: https://issues.apache.org/jira/browse/SOLR-7436
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
> Environment: Local 5.1
>Reporter: Markus Jelsma
>
> After a short while, Solr suddenly stops printing stacktraces in the log and 
> output. 
> {code}
> 251043 [qtp1121454968-17] INFO  org.apache.solr.core.SolrCore.Request  [   
> suggests] - [suggests] webapp=/solr path=/select 
> params={q=*:*&fq={!collapse+field%3Dquery_digest}&fq={!collapse+field%3Dresult_digest}}
>  status=500 QTime=3 
> 251043 [qtp1121454968-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  [ 
>   suggests] - null:java.lang.NullPointerException
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)

[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511254#comment-14511254
 ] 

Shawn Heisey commented on SOLR-7462:


Either will work, so I'd use the Java version found in your target environment, 
as that will provide the best possible optimizations in the compiled code.

If you go into the "solr" directory of the source checkout and type "ant 
package", that will create SNAPSHOT packages similar to what you can download 
from the official mirrors.  When I did this on a branch_5x snapshot, the 
following files were in the package directory, relative to the solr directory 
where I ran the command:

{noformat}
[solr@bigindy5 solr]$ ls -al package/
total 338116
drwxrwxr-x  4 solr solr  4096 Apr 24 09:45 .
drwxrwxr-x 17 solr solr  4096 Apr 24 09:34 ..
drwxrwxr-x  2 solr solr   135 Apr 24 09:45 changes
-rw-rw-r--  1 solr solr138455 Apr 24 09:46 KEYS
drwxrwxr-x  2 solr solr 6 Apr 16 17:46 maven
-rw-rw-r--  1 solr solr  37775529 Apr 24 09:38 solr-5.2.0-SNAPSHOT-src.tgz
-rw-rw-r--  1 solr solr62 Apr 24 09:38 solr-5.2.0-SNAPSHOT-src.tgz.md5
-rw-rw-r--  1 solr solr70 Apr 24 09:38 solr-5.2.0-SNAPSHOT-src.tgz.sha1
-rw-rw-r--  1 solr solr 150488544 Apr 24 09:45 solr-5.2.0-SNAPSHOT.tgz
-rw-rw-r--  1 solr solr58 Apr 24 09:45 solr-5.2.0-SNAPSHOT.tgz.md5
-rw-rw-r--  1 solr solr66 Apr 24 09:45 solr-5.2.0-SNAPSHOT.tgz.sha1
-rw-rw-r--  1 solr solr 157786320 Apr 24 09:45 solr-5.2.0-SNAPSHOT.zip
-rw-rw-r--  1 solr solr58 Apr 24 09:45 solr-5.2.0-SNAPSHOT.zip.md5
-rw-rw-r--  1 solr solr66 Apr 24 09:45 solr-5.2.0-SNAPSHOT.zip.sha1
{noformat}

You probably want to check out tags/lucene_solr_5_1_0 so you can be sure that 
the code you're starting with is identical to the version you have now.

http://wiki.apache.org/solr/HowToContribute#Contributing_Code_.28Features.2C_Bug_Fixes.2C_Tests.2C_etc29


> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String , the getStringChars do not return the closing
>     // single quote or double quote
>     //so, try to capture that
>     if(chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if(next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }






[jira] [Commented] (SOLR-7311) Add some infrastructure and tests to make sure Solr works well in the face of Name Node high availability and failover.

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511249#comment-14511249
 ] 

ASF subversion and git services commented on SOLR-7311:
---

Commit 1675891 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1675891 ]

SOLR-7311: Add some infrastructure and tests to make sure Solr works well in 
the face of Name Node high availability and failover.

> Add some infrastructure and tests to make sure Solr works well in the face of 
> Name Node high availability and failover.
> ---
>
> Key: SOLR-7311
> URL: https://issues.apache.org/jira/browse/SOLR-7311
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-7311.patch
>
>







[jira] [Updated] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6450:
---
Affects Version/s: Trunk

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.
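For readers unfamiliar with morton hashing, the interleaving step can be sketched as below. This is an illustrative Z-order encoding under an assumed quantization of 32 bits per axis, not the patch's actual implementation; the names spread and encode are invented for the sketch.

```java
public class MortonSketch {
    // Spread the low 32 bits of v so each occupies every other (even) position.
    static long spread(long v) {
        v &= 0xFFFFFFFFL;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFL;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFL;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0FL;
        v = (v | (v << 2))  & 0x3333333333333333L;
        v = (v | (v << 1))  & 0x5555555555555555L;
        return v;
    }

    // Quantize each axis to 32 bits, then interleave lon (even bit positions)
    // and lat (odd bit positions) into a single long term.
    static long encode(double lat, double lon) {
        long latEnc = (long) ((lat + 90.0)  / 180.0 * 0xFFFFFFFFL);
        long lonEnc = (long) ((lon + 180.0) / 360.0 * 0xFFFFFFFFL);
        return spread(lonEnc) | (spread(latEnc) << 1);
    }

    public static void main(String[] args) {
        System.out.println(Long.toBinaryString(spread(0b11L))); // prints "101"
        System.out.println(Long.toHexString(encode(32.9482, -96.4538)));
    }
}
```

Because nearby points share high-order interleaved bits, ranges of these longs approximate spatial cells, which is what lets a NumericRangeQuery-style first pass cheaply narrow the candidate terms.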






[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511247#comment-14511247
 ] 

Yonik Seeley commented on LUCENE-6450:
--

Great stuff!  Should this be used as the underlying implementation for Solr's 
LatLonType (which currently does not have multi-valued support)?  Any downsides 
for the single-valued case?

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: Trunk, 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Updated] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6450:
---
Description: 
At the moment all spatial capabilities, including basic point based indexing 
and querying, require the lucene-spatial module. The spatial module, designed 
to handle all things geo, requires dependency overhead (s4j, jts) to provide 
spatial rigor for even the most simplistic spatial search use-cases (e.g., 
lat/lon bounding box, point in poly, distance search). This feature trims the 
overhead by adding a new GeoPointField type to core along with 
GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
field is intended as a straightforward lightweight type for the most basic geo 
point use-cases without the overhead. 

The field uses simple bit twiddling operations (currently morton hashing) to 
encode lat/lon into a single long term.  The queries leverage simple 
multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
candidate terms deferring the more expensive mathematics to the smaller 
candidate sets.

  was:
At the moment all spatial capabilities, including basic point based indexing 
and querying, require the lucene-spatial module. The spatial module, designed 
to handle all things geo, requires dependency overhead (s4j, jts) to provide 
spatial rigor for even the most simplistic spatial search use-cases (e.g., 
lat/lon bounding box, point in poly, distance search). This feature trims the 
overhead by adding a new GeoPointField type to core along with 
GeoBoundingBoxQuery, GeoPolygonQuery, and GeoDistanceQuery classes to the 
.search package. This field is intended as a straightforward lightweight type 
for the most basic geo point use-cases without the overhead. 

The field uses simple bit twiddling operations (currently morton hashing) to 
encode lat/lon into a single long term.  The queries leverage simple 
multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
candidate terms deferring the more expensive mathematics to the smaller 
candidate sets.


> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery and GeoPolygonQuery classes to the .search package. This 
> field is intended as a straightforward lightweight type for the most basic 
> geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Commented] (SOLR-7311) Add some infrastructure and tests to make sure Solr works well in the face of Name Node high availability and failover.

2015-04-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511187#comment-14511187
 ] 

ASF subversion and git services commented on SOLR-7311:
---

Commit 1675883 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1675883 ]

SOLR-7311: Add some infrastructure and tests to make sure Solr works well in 
the face of Name Node high availability and failover.

> Add some infrastructure and tests to make sure Solr works well in the face of 
> Name Node high availability and failover.
> ---
>
> Key: SOLR-7311
> URL: https://issues.apache.org/jira/browse/SOLR-7311
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-7311.patch
>
>







[jira] [Updated] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6450:
---
Attachment: LUCENE-6450-TRUNK.patch
LUCENE-6450-5x.patch

Feedback incorporated.  Adding patch against branch_5x and trunk.

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450-5x.patch, LUCENE-6450-TRUNK.patch, 
> LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery, GeoPolygonQuery, and GeoDistanceQuery classes to the 
> .search package. This field is intended as a straightforward lightweight type 
> for the most basic geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Updated] (LUCENE-6452) TestBooleanMinShouldMatch.testRandomQueries test failure

2015-04-24 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6452:

Attachment: LUCENE-6452.patch

> TestBooleanMinShouldMatch.testRandomQueries test failure
> 
>
> Key: LUCENE-6452
> URL: https://issues.apache.org/jira/browse/LUCENE-6452
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-6452.patch
>
>
> This is because scoring differences exceed the delta (hardcoded as 1e-5 in 
> queryutils).
> First, clean up the assert so it's debuggable.
> Then, compute score the same way in conjunctionscorer as disjunctions and 
> minshouldmatch.






[jira] [Commented] (LUCENE-6452) TestBooleanMinShouldMatch.testRandomQueries test failure

2015-04-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511184#comment-14511184
 ] 

Robert Muir commented on LUCENE-6452:
-

Here is the seed on branch_5x:
{noformat}
ant test  -Dtestcase=TestBooleanMinShouldMatch -Dtests.method=testRandomQueries 
-Dtests.seed=2A16410BE4B0FDA7 -Dtests.slow=true -Dtests.locale=hu_HU 
-Dtests.timezone=America/Curacao -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8 -Dtests.verbose=true
{noformat}

> TestBooleanMinShouldMatch.testRandomQueries test failure
> 
>
> Key: LUCENE-6452
> URL: https://issues.apache.org/jira/browse/LUCENE-6452
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> This is because scoring differences exceed the delta (hardcoded as 1e-5 in 
> queryutils).
> First, clean up the assert so it's debuggable.
> Then, compute score the same way in conjunctionscorer as disjunctions and 
> minshouldmatch.






[jira] [Created] (LUCENE-6452) TestBooleanMinShouldMatch.testRandomQueries test failure

2015-04-24 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6452:
---

 Summary: TestBooleanMinShouldMatch.testRandomQueries test failure
 Key: LUCENE-6452
 URL: https://issues.apache.org/jira/browse/LUCENE-6452
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


This is because scoring differences exceed the delta (hardcoded as 1e-5 in 
queryutils).

First, clean up the assert so it's debuggable.
Then, compute score the same way in conjunctionscorer as disjunctions and 
minshouldmatch.
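A toy illustration of the failure mode (this is not Lucene's scorer code; the arrays stand in for clause scores): float addition is not associative, so summing the same clause scores in a different order can diverge by far more than a 1e-5 delta.

```java
public class FloatSumOrder {
    // Naive left-to-right float accumulation, as a scorer might do per clause.
    static float sumLeftToRight(float[] xs) {
        float acc = 0f;
        for (float x : xs) acc += x;
        return acc;
    }

    public static void main(String[] args) {
        // Same three "clause scores", two different summation orders.
        float[] clauseScores = { 1e8f, 1f, -1e8f };
        float[] reordered    = { 1e8f, -1e8f, 1f };
        // The ulp of float at 1e8 is 8, so 1e8f + 1f rounds back to 1e8f
        // and the 1f is lost entirely in the first ordering.
        System.out.println(sumLeftToRight(clauseScores)); // prints 0.0
        System.out.println(sumLeftToRight(reordered));    // prints 1.0
    }
}
```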






[jira] [Commented] (LUCENE-6450) Add simple encoded GeoPointField type to core

2015-04-24 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511182#comment-14511182
 ] 

Nicholas Knize commented on LUCENE-6450:


bq. ds: particularly once a point-radius (circle) query is added. Did you 
forget or are you planning to add that in the future? The other super-common 
use-case is distance sorting...

I'm working on adding both point-radius query and distance sorting. I wanted to 
get the first version out for initial feedback. Seemed to work out nicely with 
all of the great suggestions so far.

bq. uwe: so shifted terms will never appear. I just have the question (why do 
this?).

Space-filling curves are highly sensitive to precision - especially the morton 
or lebesgue curve since they don't do a great job of preserving locality. 
Indexing reduced precision terms can lead to (potentially significant) false 
positives (with nonlinear error). Here's a great bl.ock visualizing the range 
query from a bounding box over a morton curve:  
http://bl.ocks.org/jaredwinick/raw/5073432/  For an average example: encoding 
32.9482, -96.4538 with step = 32 results in two terms/geo points that are >500m 
apart. The error gets worse as this precision step is lowered. With the single 
high precision encoded term the error is 1e-7 decimal degrees.

bq. mm: Is there a requirement that these poly points are clockwise or 
counter-clockwise order or something?

There is. The points have to be cw or ccw and the polygon cannot be 
self-crossing.  It won't throw any exceptions, it just won't behave as 
expected. I went ahead and updated the javadoc comment to make sure that is 
clear.

bq. mm: For GeoPolygonQuery, why do we have public factory method that takes 
the bbox? Shouldn't this be private (bbox is computed from the polygon's 
points)? Or is this for expert usage or something?

The idea here is that polygons can contain a significant number of points, and 
users may already have the BBox (cached or otherwise precomputed). I thought 
this provided a nice way to save unnecessary processing if the caller can 
provide the bbox. 

bq. ds: Have you thought about a way to use GeoPointFieldType with 
pre-projected data

Yes, this can potentially be left as an enhancement but the intent is to have 
this apply to the most basic use cases. So I'm curious as to what the others 
think about adding this capability or just leaving that to the spatial module. 

bq. ds: GeoPointFieldType has DocValues enabled yet I see that these queries 
don't use that; or did I miss something?

Not using them yet. The intent was to use them for sorting.

bq. ds: I would love to see some randomized testing of round-trip encode-decode 
of the morton numbers.

Agree.  I'll be adding randomized testing for sure.
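A randomized round-trip test along those lines could be sketched as follows. The encode/decode pair is a hypothetical stand-in for the patch's morton hashing (32 bits per axis, names invented for the sketch); the point is the property being checked: decode(encode(lat, lon)) must land within quantization error of the input.

```java
import java.util.Random;

public class MortonRoundTripTest {
    static long spread(long v) { // move the low 32 bits into even positions
        v &= 0xFFFFFFFFL;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFL;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFL;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0FL;
        v = (v | (v << 2))  & 0x3333333333333333L;
        v = (v | (v << 1))  & 0x5555555555555555L;
        return v;
    }

    static long squash(long v) { // inverse of spread: compact the even bits
        v &= 0x5555555555555555L;
        v = (v | (v >>> 1))  & 0x3333333333333333L;
        v = (v | (v >>> 2))  & 0x0F0F0F0F0F0F0F0FL;
        v = (v | (v >>> 4))  & 0x00FF00FF00FF00FFL;
        v = (v | (v >>> 8))  & 0x0000FFFF0000FFFFL;
        v = (v | (v >>> 16)) & 0x00000000FFFFFFFFL;
        return v;
    }

    static long encode(double lat, double lon) {
        long latEnc = (long) ((lat + 90.0)  / 180.0 * 0xFFFFFFFFL);
        long lonEnc = (long) ((lon + 180.0) / 360.0 * 0xFFFFFFFFL);
        return spread(lonEnc) | (spread(latEnc) << 1);
    }

    static double[] decode(long code) {
        double lon = (double) squash(code)       / 0xFFFFFFFFL * 360.0 - 180.0;
        double lat = (double) squash(code >>> 1) / 0xFFFFFFFFL * 180.0 - 90.0;
        return new double[] { lat, lon };
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int i = 0; i < 100_000; i++) {
            double lat = rnd.nextDouble() * 180.0 - 90.0;
            double lon = rnd.nextDouble() * 360.0 - 180.0;
            double[] back = decode(encode(lat, lon));
            // 32 bits per axis keeps quantization error under ~1e-7 degrees
            if (Math.abs(back[0] - lat) > 1e-6 || Math.abs(back[1] - lon) > 1e-6) {
                throw new AssertionError("round-trip drift at " + lat + "," + lon);
            }
        }
        System.out.println("round-trip ok");
    }
}
```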

> Add simple encoded GeoPointField type to core
> -
>
> Key: LUCENE-6450
> URL: https://issues.apache.org/jira/browse/LUCENE-6450
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 5.x
>Reporter: Nicholas Knize
>Priority: Minor
> Attachments: LUCENE-6450.patch
>
>
> At the moment all spatial capabilities, including basic point based indexing 
> and querying, require the lucene-spatial module. The spatial module, designed 
> to handle all things geo, requires dependency overhead (s4j, jts) to provide 
> spatial rigor for even the most simplistic spatial search use-cases (e.g., 
> lat/lon bounding box, point in poly, distance search). This feature trims the 
> overhead by adding a new GeoPointField type to core along with 
> GeoBoundingBoxQuery, GeoPolygonQuery, and GeoDistanceQuery classes to the 
> .search package. This field is intended as a straightforward lightweight type 
> for the most basic geo point use-cases without the overhead. 
> The field uses simple bit twiddling operations (currently morton hashing) to 
> encode lat/lon into a single long term.  The queries leverage simple 
> multi-phase filtering that starts by leveraging NumericRangeQuery to reduce 
> candidate terms deferring the more expensive mathematics to the smaller 
> candidate sets.






[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Scott Dawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511155#comment-14511155
 ] 

Scott Dawson commented on SOLR-7462:


Shawn - no, I haven't tried patching it myself. I haven't built Solr before so 
I'll do some research on what is required...

Our target environment is Java 1.8. Should I build with 1.8 or 1.7?

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, the getStringChars do not return the closing single quote or double quote
>     // so, try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }






[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Scott Dawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511141#comment-14511141
 ] 

Scott Dawson commented on SOLR-7462:


A little more info about what I'm doing when the exception occurs... This 
happens sporadically when I'm indexing custom JSON:
$ curl 
'http://localhost:8983/solr/struct-json/update/json/docs?split=/&f=/**&srcField=display_json'
 -H 'Content-Type:application/json' --data-binary @tg.json

Here's the full stacktrace:
java.lang.ArrayIndexOutOfBoundsException:
at org.apache.solr.util.RecordingJSONParser.getStringChars(RecordingJSONParser.java:61)
at org.noggit.JSONParser.getString(JSONParser.java:1017)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:513)
at org.apache.solr.common.util.JsonRecordReader.parseArrayFieldValue(JsonRecordReader.java:565)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:526)
at org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:384)
at org.apache.solr.common.util.JsonRecordReader$Node.access$300(JsonRecordReader.java:154)
at org.apache.solr.common.util.JsonRecordReader$Node$1Wrapper.walk(JsonRecordReader.java:345)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:529)
at org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:384)
at org.apache.solr.common.util.JsonRecordReader$Node.access$300(JsonRecordReader.java:154)
at org.apache.solr.common.util.JsonRecordReader$Node$1Wrapper.walk(JsonRecordReader.java:345)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:529)
at org.apache.solr.common.util.JsonRecordReader.parseArrayFieldValue(JsonRecordReader.java:565)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:526)
at org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:384)
at org.apache.solr.common.util.JsonRecordReader$Node.access$300(JsonRecordReader.java:154)
at org.apache.solr.common.util.JsonRecordReader$Node$1Wrapper.walk(JsonRecordReader.java:345)
at org.apache.solr.common.util.JsonRecordReader.parseSingleFieldValue(JsonRecordReader.java:529)
at org.apache.solr.common.util.JsonRecordReader$Node.handleObjectStart(JsonRecordReader.java:384)
at org.apache.solr.common.util.JsonRecordReader$Node.parse(JsonRecordReader.java:295)
at org.apache.solr.common.util.JsonRecordReader$Node.access$200(JsonRecordReader.java:154)
at org.apache.solr.common.util.JsonRecordReader.streamRecords(JsonRecordReader.java:138)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleSplitMode(JsonLoader.java:205)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:122)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:110)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:73)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:103)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.

[jira] [Commented] (SOLR-7462) ArrayIndexOutOfBoundsException in RecordingJSONParser.java

2015-04-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511136#comment-14511136
 ] 

Shawn Heisey commented on SOLR-7462:


This does look like an off-by-one error.

Have you tried patching the source code as you have described, compiling it, 
and using it to see if it fixes the problem?

> ArrayIndexOutOfBoundsException in RecordingJSONParser.java
> --
>
> Key: SOLR-7462
> URL: https://issues.apache.org/jira/browse/SOLR-7462
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Scott Dawson
>
> With Solr 5.1 I'm getting an occasional fatal exception during indexing. It's 
> an ArrayIndexOutOfBoundsException at line 61 of 
> org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see 
> below), it seems obvious that the if-statement at line 60 should use a 
> greater-than sign instead of greater-than-or-equals.
>   @Override
>   public CharArr getStringChars() throws IOException {
>     CharArr chars = super.getStringChars();
>     recordStr(chars.toString());
>     position = getPosition();
>     // if reading a String, the getStringChars do not return the closing single quote or double quote
>     // so, try to capture that
>     if (chars.getArray().length >= chars.getStart() + chars.size()) { // line 60
>       char next = chars.getArray()[chars.getStart() + chars.size()]; // line 61
>       if (next == '"' || next == '\'') {
>         recordChar(next);
>       }
>     }
>     return chars;
>   }





