[jira] [Updated] (LUCENE-7123) deduplicate/cleanup spatial distance

2016-03-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7123:

Attachment: LUCENE-7123.patch

Updated patch: caught some typos and comment fixes while reviewing.

> deduplicate/cleanup spatial distance
> 
>
> Key: LUCENE-7123
> URL: https://issues.apache.org/jira/browse/LUCENE-7123
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7123.patch, LUCENE-7123.patch
>
>
> Currently there is a bit of a mess here: SloppyMath.haversin is slightly 
> different from GeoDistanceUtils.haversin; the latter is actually slightly 
> faster and uses a simple fixed earth diameter (which makes calculations 
> easier too).
> But one of these returns meters and the other kilometers. Furthermore, 
> lucene/spatial now uses some sin/tan functions that were added to SloppyMath 
> with some accuracy guarantees (which are untested, and not quite correct). 
> Lucene/spatial queries also inconsistently mix the two different functions 
> together for various purposes, and this just causes headaches. Its tests did 
> this recently too.
> We need to clean this up, otherwise users will be confused, e.g. they will 
> see different results from expressions than from queries and not understand why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7123) deduplicate/cleanup spatial distance

2016-03-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7123:

Attachment: LUCENE-7123.patch

Here is a patch:
* adds haversinMeters to SloppyMath and cuts everything over to it.
* adds a test that this is within 0.01MM of the actual haversin result (if you 
were to use slower trig functions).
* moves the sin/tan stuff out to the GeoUtils class that uses it; there is a 
TODO to remove these further.

I did benchmarking and testing as well. I think it's ready.
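For reference, the haversine-in-meters approach described above can be sketched like this (a minimal illustration only, not the actual SloppyMath code; the earth-radius constant, class name, and method signature are assumptions):

```java
public class HaversinSketch {
    // Mean earth radius in meters (an assumed constant; Lucene may use a
    // different value).
    private static final double EARTH_MEAN_RADIUS_METERS = 6_371_008.7714;

    /** Haversine great-circle distance, always returned in meters. */
    public static double haversinMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_MEAN_RADIUS_METERS * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // One degree of latitude is roughly 111 km; printed in meters.
        System.out.println(haversinMeters(0, 0, 1, 0));
    }
}
```

Using a single fixed radius and always returning meters is what avoids the meters-vs-kilometers mismatch described in the issue.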

> deduplicate/cleanup spatial distance
> 
>
> Key: LUCENE-7123
> URL: https://issues.apache.org/jira/browse/LUCENE-7123
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7123.patch
>
>
> Currently there is a bit of a mess here: SloppyMath.haversin is slightly 
> different from GeoDistanceUtils.haversin; the latter is actually slightly 
> faster and uses a simple fixed earth diameter (which makes calculations 
> easier too).
> But one of these returns meters and the other kilometers. Furthermore, 
> lucene/spatial now uses some sin/tan functions that were added to SloppyMath 
> with some accuracy guarantees (which are untested, and not quite correct). 
> Lucene/spatial queries also inconsistently mix the two different functions 
> together for various purposes, and this just causes headaches. Its tests did 
> this recently too.
> We need to clean this up, otherwise users will be confused, e.g. they will 
> see different results from expressions than from queries and not understand why.






[jira] [Created] (LUCENE-7123) deduplicate/cleanup spatial distance

2016-03-20 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7123:
---

 Summary: deduplicate/cleanup spatial distance
 Key: LUCENE-7123
 URL: https://issues.apache.org/jira/browse/LUCENE-7123
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Currently there is a bit of a mess here: SloppyMath.haversin is slightly 
different from GeoDistanceUtils.haversin; the latter is actually slightly 
faster and uses a simple fixed earth diameter (which makes calculations easier 
too).

But one of these returns meters and the other kilometers. Furthermore, 
lucene/spatial now uses some sin/tan functions that were added to SloppyMath 
with some accuracy guarantees (which are untested, and not quite correct). 
Lucene/spatial queries also inconsistently mix the two different functions 
together for various purposes, and this just causes headaches. Its tests did 
this recently too.

We need to clean this up, otherwise users will be confused, e.g. they will see 
different results from expressions than from queries and not understand why.






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_72) - Build # 194 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/194/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data/index.20160321071509618,
 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data/index.20160321071509518,
 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data/index.20160321071509618,
 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data/index.20160321071509518,
 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_54A299F53E2D3FD7-001/solr-instance-019/./collection1/data]
 expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([54A299F53E2D3FD7:A3D177ADF8C59031]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:818)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1248)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: [JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 192 - Failure!

2016-03-20 Thread Steve Rowe
Sorry - forgot to run precommit :( - thanks Uwe! - Steve

> On Mar 20, 2016, at 7:20 PM, Uwe Schindler  wrote:
> 
> I fixed this by using the slf4j pattern, enforced by forbidden-apis + the ant 
> task "validate" ("-validate-source-patterns" regex).
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
>> -Original Message-
>> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
>> Sent: Monday, March 21, 2016 12:02 AM
>> To: sar...@gmail.com; rjer...@apache.org; rm...@apache.org;
>> dev@lucene.apache.org
>> Subject: [JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) -
>> Build # 192 - Failure!
>> Importance: Low
>> 
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/192/
>> Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseG1GC -XX:-CompactStrings
>> 
>> All tests passed
>> 
>> Build Log:
>> [...truncated 37040 lines...]
>> -check-forbidden-all:
>> [forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.8
>> [forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.8
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.awt.Component#getPeer() [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.awt.Font#getPeer() [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.awt.MenuComponent#getPeer() [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.awt.Toolkit#getFontPeer(java.lang.String,int) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.jar.Pack200$Packer#addPropertyChangeListener(java.beans.Proper
>> tyChangeListener) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.jar.Pack200$Packer#removePropertyChangeListener(java.beans.Pro
>> pertyChangeListener) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.jar.Pack200$Unpacker#addPropertyChangeListener(java.beans.Pro
>> pertyChangeListener) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.jar.Pack200$Unpacker#removePropertyChangeListener(java.beans.
>> PropertyChangeListener) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.logging.LogManager#addPropertyChangeListener(java.beans.Prope
>> rtyChangeListener) [signature ignored]
>> [forbidden-apis] WARNING: Method not found while parsing signature:
>> java.util.logging.LogManager#removePropertyChangeListener(java.beans.Pr
>> opertyChangeListener) [signature ignored]
>> [forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.4
>> [forbidden-apis] Reading API signatures: /home/jenkins/workspace/Lucene-
>> Solr-6.x-Linux/lucene/tools/forbiddenApis/base.txt
>> [forbidden-apis] Reading API signatures: /home/jenkins/workspace/Lucene-
>> Solr-6.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
>> [forbidden-apis] Reading API signatures: /home/jenkins/workspace/Lucene-
>> Solr-6.x-Linux/lucene/tools/forbiddenApis/solr.txt
>> [forbidden-apis] Loading classes to check...
>> [forbidden-apis] Scanning classes for violations...
>> [forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use 
>> slf4j
>> classes instead]
>> [forbidden-apis]   in org.apache.solr.schema.DocValuesTest
>> (DocValuesTest.java, field declaration of 'log')
>> [forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use 
>> slf4j
>> classes instead]
>> [forbidden-apis]   in org.apache.solr.schema.DocValuesTest
>> (DocValuesTest.java:40)
>> [forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use 
>> slf4j
>> classes instead]
>> [forbidden-apis]   in org.apache.solr.schema.DocValuesTest
>> (DocValuesTest.java:467)
>> [forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use 
>> slf4j
>> classes instead]
>> [forbidden-apis]   in org.apache.solr.schema.DocValuesTest
>> (DocValuesTest.java:525)
>> [forbidden-apis] Scanned 3109 (and 1968 related) class file(s) for forbidden
>> API invocations (in 1.92s), 4 error(s).
>> 
>> BUILD FAILED
>> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:740: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:117: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build.xml:347: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/common-
>> build.xml:509: Check for forbidden API calls failed, see log.
>> 
>> Total time: 65 minutes 56 seconds
>> Build step 'Invoke Ant' marked build as failure
>> Archiving artifacts
>> [WARNINGS] Skipping publisher since build result is FAILURE
>> 

[JENKINS] Lucene-Solr-Tests-master - Build # 1028 - Still Failing

2016-03-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1028/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([ADC3EFD1112A9B86]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=10195, name=searcherExecutor-4489-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=10195, name=searcherExecutor-4489-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([ADC3EFD1112A9B86]:0)


FAILED:  

[jira] [Commented] (SOLR-3191) field exclusion from fl

2016-03-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203648#comment-15203648
 ] 

Erick Erickson commented on SOLR-3191:
--

I ran this patch over 1,000 times "last night" (well 4 days ago) and everything 
was fine.

FWIW

> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
> Attachments: SOLR-3191.patch, SOLR-3191.patch, SOLR-3191.patch, 
> SOLR-3191.patch, SOLR-3191.patch
>
>
> I think it would be useful to add a way to exclude fields from the Solr 
> response. If I have, for example, 100 stored fields and I want to return all 
> of them but one, it would be handy to list just the one field I want to 
> exclude instead of the 99 fields for inclusion through fl.
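The proposed exclusion semantics could be sketched roughly like this (an illustration only; the "-field" prefix and the parsing here are the ticket's proposal plus my own simplifications, not released Solr behavior):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class FlExclusionSketch {
    /**
     * Apply a simplified fl: entries starting with "-" name fields to drop;
     * everything else in the stored document is returned.
     */
    static Map<String, Object> applyFl(Map<String, Object> doc, String fl) {
        Set<String> excluded = Arrays.stream(fl.split(","))
                .map(String::trim)
                .filter(s -> s.startsWith("-"))
                .map(s -> s.substring(1))
                .collect(Collectors.toSet());
        Map<String, Object> out = new LinkedHashMap<>(doc);
        out.keySet().removeAll(excluded);
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", "1");
        doc.put("title", "a title");
        doc.put("body", "a large stored field");
        // Exclude just "body" instead of listing every other field.
        System.out.println(applyFl(doc, "*,-body").keySet()); // [id, title]
    }
}
```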






[jira] [Commented] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203639#comment-15203639
 ] 

ASF subversion and git services commented on SOLR-8878:
---

Commit 5a40ae030574aa2141d807a24f10b8d8ab4548db in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a40ae0 ]

SOLR-8878: Remove debugging


> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master, 6.0
>
> Attachments: SOLR-8878.patch, SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup works fine if the runInterval is longer than one second and if it 
> never changes. But with the TopicStream, you want a variable run rate: if the 
> TopicStream's latest run returned documents, the next run should be 
> immediate, but if it returned zero documents, you'd want to sleep for a 
> period of time before starting the next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key/value pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and, if the 
> sleepMillis key/value pair is present, adjust its run rate accordingly.
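The sleepMillis mechanism described above can be sketched roughly as follows (illustrative only; the class, method, and plain-Map tuple representation are assumptions, not Solr's actual streaming API):

```java
import java.util.HashMap;
import java.util.Map;

public class DaemonLoopSketch {
    /**
     * Decide how long the daemon should sleep before its next run: the inner
     * stream may override the configured interval via a "sleepMillis" entry
     * in its EOF tuple.
     */
    static long nextSleepMillis(Map<String, Object> eofTuple,
                                long defaultRunIntervalMillis) {
        Object sleep = eofTuple.get("sleepMillis");
        if (sleep instanceof Number) {
            return ((Number) sleep).longValue(); // inner stream sets the rate
        }
        return defaultRunIntervalMillis;         // fall back to runInterval
    }

    public static void main(String[] args) {
        Map<String, Object> eof = new HashMap<>();
        eof.put("EOF", true);
        // No override present: fall back to the configured one-second interval.
        System.out.println(nextSleepMillis(eof, 1000L)); // 1000
        // Inner stream returned documents and wants an immediate rerun.
        eof.put("sleepMillis", 0L);
        System.out.println(nextSleepMillis(eof, 1000L)); // 0
    }
}
```

The point of routing the hint through the EOF tuple is that the daemon never needs to know why the inner stream wants a faster or slower rerun; it just honors the number.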






[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Fix Version/s: 6.0
   master

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master, 6.0
>
> Attachments: SOLR-8878.patch, SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup works fine if the runInterval is longer than one second and if it 
> never changes. But with the TopicStream, you want a variable run rate: if the 
> TopicStream's latest run returned documents, the next run should be 
> immediate, but if it returned zero documents, you'd want to sleep for a 
> period of time before starting the next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key/value pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and, if the 
> sleepMillis key/value pair is present, adjust its run rate accordingly.






[jira] [Commented] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-20 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203634#comment-15203634
 ] 

Joel Bernstein commented on SOLR-8878:
--

This ticket has some nice API changes for both the DaemonStream and the 
TopicStream, so I'd like to backport this prior to the 6.0 release.

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master, 6.0
>
> Attachments: SOLR-8878.patch, SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup works fine if the runInterval is longer than one second and if it 
> never changes. But with the TopicStream, you want a variable run rate: if the 
> TopicStream's latest run returned documents, the next run should be 
> immediate, but if it returned zero documents, you'd want to sleep for a 
> period of time before starting the next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key/value pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and, if the 
> sleepMillis key/value pair is present, adjust its run rate accordingly.






[jira] [Commented] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203632#comment-15203632
 ] 

ASF subversion and git services commented on SOLR-8878:
---

Commit f86ac58a5a4f1268e118c2cd7d2ec9192d91da6e in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f86ac58 ]

SOLR-8878: Allow the DaemonStream run rate be controlled by the internal stream


> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Attachments: SOLR-8878.patch, SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup works fine if the runInterval is longer than one second and if it 
> never changes. But with the TopicStream, you want a variable run rate: if the 
> TopicStream's latest run returned documents, the next run should be 
> immediate, but if it returned zero documents, you'd want to sleep for a 
> period of time before starting the next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key/value pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream and, if the 
> sleepMillis key/value pair is present, adjust its run rate accordingly.






[jira] [Commented] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-20 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203621#comment-15203621
 ] 

Yonik Seeley commented on SOLR-8865:


Sorry for not getting to this earlier, busy weekend and now I'm sick...

bq. Is there something we can do better for avoiding the double addition of a 
dv field in the toSolrDoc method?

Why/how does the double addition happen?


> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8865.patch, SOLR-8865.patch, SOLR-8865.patch, 
> SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true, is not retrieved with realtime-get.






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 193 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/193/
Java: 32bit/jdk-9-jigsaw-ea+110 -client -XX:+UseParallelGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6474, 
name=testExecutor-2867-thread-1, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6474, name=testExecutor-2867-thread-1, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:34579
at __randomizedtesting.SeedInfo.seed([97BB640442BEF917]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@9-ea/ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@9-ea/ThreadPoolExecutor.java:632)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:34579
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(java.base@9-ea/Native Method)
at 
java.net.SocketInputStream.socketRead(java.base@9-ea/SocketInputStream.java:116)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:170)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11304 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_97BB640442BEF917-001/init-core-data-001
   [junit4]   2> 795459 INFO  
(SUITE-UnloadDistributedZkTest-seed#[97BB640442BEF917]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 795462 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[97BB640442BEF917]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 795462 INFO  (Thread-2086) [] o.a.s.c.ZkTestServer 

[JENKINS] Lucene-Solr-Tests-master - Build # 1027 - Failure

2016-03-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1027/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([CB751DD040BCDCA8:71A772A8C39232BD]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:325)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10611 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203598#comment-15203598
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 6ec8c635bf5853dfb229f89cb2818749c1cfe8ce in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ec8c63 ]

SOLR-445: cleanup some simple nocommits


> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> 
>   
> 1
>   
>   
> 2
> I_AM_A_BAD_DATE
>   
>   
> 3
>   
> 
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
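
The "Option 2" behavior described above (log a per-document error, return that
information to the caller, and continue on to doc 3) can be sketched as follows.
This is an illustrative stand-in using plain Java collections and date parsing,
not Solr's actual update-handler API; the `TolerantBatch` class, `addBatch`
method, and document shape are all hypothetical:

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "Option 2": process each document in the batch,
// record per-document failures, and continue instead of aborting mid-batch.
public class TolerantBatch {
    public static List<String> addBatch(List<Map<String, String>> docs) {
        List<String> errors = new ArrayList<>();
        for (Map<String, String> doc : docs) {
            try {
                // Date parsing stands in for real index-time field validation.
                LocalDate.parse(doc.getOrDefault("date", "2000-01-01"));
                // index(doc) would happen here on success
            } catch (DateTimeParseException e) {
                // Record the failure and keep going with the next document.
                errors.add("doc " + doc.get("id") + ": " + e.getMessage());
            }
        }
        return errors; // batch completes; caller inspects per-doc errors
    }

    public static void main(String[] args) {
        List<Map<String, String>> docs = List.of(
            Map.of("id", "1"),
            Map.of("id", "2", "date", "I_AM_A_BAD_DATE"),
            Map.of("id", "3"));
        List<String> errors = addBatch(docs);
        System.out.println(errors.size() + " error(s); batch completed");
        // prints: 1 error(s); batch completed
    }
}
```

Running `main` reports one error for doc 2 while docs 1 and 3 are still
processed, which is the tolerant behavior the issue asks for.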



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8872) ChaosMonkey depends on AbstractFullDistribZkTestBase, can't be used with MiniSolrCloudCluster

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203597#comment-15203597
 ] 

ASF subversion and git services commented on SOLR-8872:
---

Commit 1aa1ba3b3af69cad65b7a411ca88e120a418a598 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1aa1ba3 ]

SOLR-445: harden & add logging to test

also rename since chaos monkey isn't going to be involved (due to SOLR-8872)


> ChaosMonkey depends on AbstractFullDistribZkTestBase, can't be used with 
> MiniSolrCloudCluster
> -
>
> Key: SOLR-8872
> URL: https://issues.apache.org/jira/browse/SOLR-8872
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>
> After burning a bunch of hours/brain cells trying to tweak ChaosMonkey so 
> that i could use it in a MiniSolrCloud test, i finally just gave up and am 
> filing this issue as a TODO for the future.
> ChaosMonkey's functionality is directly tied to AbstractFullDistribZkTestBase 
> internals (notably the CloudJettyRunner inner class, and the shardToJetty and 
> shardToLeaderJetty Maps).
> Someone smarter than me will have to spend some time figuring out how to 
> untangle this stuff if we ever want to support using CHaosMonkey with 
> MiniSolrCloud.






[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203599#comment-15203599
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 21c0fe690dc4e968e484ee906632a50bf0273786 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21c0fe6 ]

SOLR-445: harden the ToleratedUpdateError API to hide implementation details





[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203589#comment-15203589
 ] 

ASF subversion and git services commented on SOLR-7339:
---

Commit 6ebf61535e90d264755ba72eea9ce51ea89703ff in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ebf615 ]

SOLR-7339: Upgrade to Jetty 9.3.8.v20160314


> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: SOLR-7339-jetty-9.3.8.patch, 
> SOLR-7339-jetty-9.3.8.patch, SOLR-7339-revert.patch, SOLR-7339.patch, 
> SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?






[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203594#comment-15203594
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit aeda8dc4ae881c4ec405d70dcbf1d0b2c30871b7 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aeda8dc ]

SOLR-445: fix test bugs, and put in a stupid work around for SOLR-8862





[jira] [Commented] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to know

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203595#comment-15203595
 ] 

ASF subversion and git services commented on SOLR-8862:
---

Commit aeda8dc4ae881c4ec405d70dcbf1d0b2c30871b7 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aeda8dc ]

SOLR-445: fix test bugs, and put in a stupid work around for SOLR-8862


> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to know which servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a solr node startup, and as a result probably shouldn't be used 
> by {{CloudSolrClient}} (or other "smart" clients) for deciding what servers 
> are fair game for requests.
> we should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what i'm seeing in a new 
> SolrCloudTestCase subclass i'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact, that (as far as i 
> can tell at first glance) MiniSolrCloudCluster's constructor is suppose to 
> block until all the servers are live..
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}






[jira] [Commented] (SOLR-8745) Deprecate ZkStateReader.updateClusterState(), remove uses

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203590#comment-15203590
 ] 

ASF subversion and git services commented on SOLR-8745:
---

Commit 4fbfeb01230429b073039b4d16b8871c1854f413 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4fbfeb0 ]

SOLR-8745: Move CHANGES.txt entry to 6.1


> Deprecate ZkStateReader.updateClusterState(), remove uses
> -
>
> Key: SOLR-8745
> URL: https://issues.apache.org/jira/browse/SOLR-8745
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Shalin Shekhar Mangar
>  Labels: patch, performance, solrcloud
> Fix For: master, 6.1
>
> Attachments: SOLR-8745.patch, SOLR-8745.patch
>
>
> Forcing a full ZK cluster state update creates a lot of unnecessary work and 
> load at scale.  We need to deprecate and remove existing callers.
> - The one at the start of ClusterStateUpdater thread is fine, it's a one-time 
> thing.
> - The one in OverseerCollectionMessageHandler is getting removed in SOLR-8722
> - The rest of them can be replaced with a version that only updates a single 
> collection; not everything!
> Patch will be forthcoming.






[jira] [Commented] (SOLR-8866) UpdateLog should throw an exception when serializing unknown types

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203591#comment-15203591
 ] 

ASF subversion and git services commented on SOLR-8866:
---

Commit a22099a3986de1f36f926b4e106827c5308708b0 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a22099a ]

SOLR-8866: UpdateLog now throws an error if it can't serialize a field value


> UpdateLog should throw an exception when serializing unknown types
> --
>
> Key: SOLR-8866
> URL: https://issues.apache.org/jira/browse/SOLR-8866
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: SOLR_8866_UpdateLog_show_throw_for_unknown_types.patch
>
>
> When JavaBinCodec encounters a class it doesn't have explicit knowledge of 
> how to serialize, nor does it implement the {{ObjectResolver}} interface, it 
> currently serializes the object as the classname, colon, then toString() of 
> the object.
> This may appear innocent but _not_ throwing an exception hides bugs.  One 
> example is that the UpdateLog, which uses JavaBinCodec, to save a document.  
> The result is that this bad value winds up there, gets deserialized as a 
> String in PeerSync (which uses /get) and then this value pretends to be a 
> suitable value to the final document in the leader.  But of course it isn't.
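
The failure mode described in SOLR-8866 above can be sketched with a toy codec:
the lenient fallback silently turns an unknown object into a
"ClassName:toString()" String that round-trips without complaint, while the
strict variant fails fast. This is an illustrative sketch, not JavaBinCodec's
real code; `serializeLenient` and `serializeStrict` are hypothetical names:

```java
import java.util.Date; // stands in for any type the codec has no handler for

// Hypothetical sketch contrasting silent fallback serialization with
// fail-fast serialization of unknown types.
public class SilentSerialization {
    // Fallback behavior similar in spirit to the pre-fix codec:
    // quietly degrade the object to a descriptive String.
    static Object serializeLenient(Object o) {
        return o.getClass().getName() + ":" + o.toString();
    }

    // Post-fix behavior: refuse to serialize types without a handler.
    static Object serializeStrict(Object o) {
        throw new IllegalArgumentException(
            "cannot serialize unknown type: " + o.getClass().getName());
    }

    public static void main(String[] args) {
        Object original = new Date(0);
        Object roundTripped = serializeLenient(original);
        // The value silently became a String; downstream code that expected
        // a Date never finds out, which is exactly how the bug hides.
        System.out.println(roundTripped instanceof String); // true
        System.out.println(roundTripped.equals(original));  // false
        try {
            serializeStrict(original);
        } catch (IllegalArgumentException expected) {
            System.out.println("strict codec failed fast");
        }
    }
}
```

The lenient path is how a bad value could end up masquerading as a String in
PeerSync; the strict path surfaces the bug at write time instead.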






[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203592#comment-15203592
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 8cc0a38453b389bdb031d78ad638b76dfa27f2d5 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cc0a38 ]

SOLR-445: Merge branch 'master' into jira/SOLR-445





[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203596#comment-15203596
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 1aa1ba3b3af69cad65b7a411ca88e120a418a598 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1aa1ba3 ]

SOLR-445: harden & add logging to test

also rename since chaos monkey isn't going to be involved (due to SOLR-8872)





[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203593#comment-15203593
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 8cc0a38453b389bdb031d78ad638b76dfa27f2d5 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cc0a38 ]

SOLR-445: Merge branch 'master' into jira/SOLR-445





[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 466 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/466/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 36599 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.8
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.8
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.4
[forbidden-apis] Reading API signatures: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Reading API signatures: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/forbiddenApis/solr.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning classes for violations...
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest (DocValuesTest.java, 
field declaration of 'log')
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:40)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:467)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:525)
[forbidden-apis] Scanned 3110 (and 1969 related) class file(s) for forbidden 
API invocations (in 2.29s), 4 error(s).

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:740: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:117: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build.xml:347: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/common-build.xml:509:
 Check for forbidden API calls failed, see log.

Total time: 83 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-6.x - Build # 89 - Failure

2016-03-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/89/

All tests passed

Build Log:
[...truncated 36668 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.8
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.8
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.4
[forbidden-apis] Reading API signatures: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Reading API signatures: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/tools/forbiddenApis/solr.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning classes for violations...
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest (DocValuesTest.java, 
field declaration of 'log')
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:40)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:467)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:525)
[forbidden-apis] Scanned 3109 (and 1968 related) class file(s) for forbidden 
API invocations (in 1.43s), 4 error(s).

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:740: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:117: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build.xml:347: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/common-build.xml:509:
 Check for forbidden API calls failed, see log.

Total time: 67 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

RE: [JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 192 - Failure!

2016-03-20 Thread Uwe Schindler
I fixed this by using the slf4j pattern, enforced by forbidden-apis + the ant 
task "validate" ("-validate-source-patterns" regex).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> Sent: Monday, March 21, 2016 12:02 AM
> To: sar...@gmail.com; rjer...@apache.org; rm...@apache.org;
> dev@lucene.apache.org
> Subject: [JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) -
> Build # 192 - Failure!
> Importance: Low
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/192/
> Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseG1GC -XX:-CompactStrings
> 
> All tests passed
> 
> Build Log:
> [...truncated 37040 lines...]
> [...forbidden-apis output trimmed: four java.util.logging violations in
> org.apache.solr.schema.DocValuesTest...]
> 
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:740: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:117: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build.xml:347: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/common-
> build.xml:509: Check for forbidden API calls failed, see log.
> 
> Total time: 65 minutes 56 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203563#comment-15203563
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit 8c0271cbb8ff82dd3c218bcbad5834905e489273 in lucene-solr's branch 
refs/heads/branch_6_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c0271c ]

SOLR-8082: Fix forbidden APIs


> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get built for single-valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the covers) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
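The thread doesn't state the root cause, but a common pitfall for docValues-based numeric queries is comparing raw IEEE-754 bits, which inverts the order of negative floats. A minimal Python sketch of Lucene-style sortable bits (modeled on NumericUtils.sortableFloatBits; names and values here are illustrative, not taken from the patch) shows the difference:

```python
import struct

def float_to_raw_bits(f):
    # Signed 32-bit view of the IEEE-754 single-precision encoding.
    return struct.unpack(">i", struct.pack(">f", f))[0]

def sortable_float_bits(bits):
    # For negatives (sign bit set), flip the lower 31 bits so that
    # signed-int order matches numeric float order; positives pass
    # through unchanged (mirrors Lucene's NumericUtils.sortableFloatBits).
    return bits ^ ((bits >> 31) & 0x7FFFFFFF)

vals = [-4.3, -1.0, 0.0, 1.5, 4.3]
# Raw bit order misplaces negatives: larger magnitude -> larger bits.
raw_order = sorted(vals, key=float_to_raw_bits)        # [-1.0, -4.3, 0.0, 1.5, 4.3]
# Sortable bit order agrees with numeric order.
sortable_order = sorted(vals, key=lambda f: sortable_float_bits(float_to_raw_bits(f)))
```

A query implementation that forgets this transform for single-valued fields would match positive values but silently miss negative ones, which is consistent with the symptoms above.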



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203562#comment-15203562
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit 1d98753e2e3b4a30b799c4a15dcbea25c279979e in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d98753 ]

SOLR-8082: Fix forbidden APIs





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203561#comment-15203561
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit b2a4003d4c91d2e7e8f46b546bf1ac988b95ad3f in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b2a4003 ]

SOLR-8082: Fix forbidden APIs





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 21 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/21/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 36603 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.8
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.8
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.4
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Reading API signatures: 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/tools/forbiddenApis/solr.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning classes for violations...
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest (DocValuesTest.java, 
field declaration of 'log')
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:40)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:467)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:525)
[forbidden-apis] Scanned 3109 (and 1968 related) class file(s) for forbidden 
API invocations (in 2.61s), 4 error(s).

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:740: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:117: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build.xml:347: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/common-build.xml:509: 
Check for forbidden API calls failed, see log.

Total time: 92 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 192 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/192/
Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseG1GC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 37040 lines...]
-check-forbidden-all:
[forbidden-apis] Reading bundled API signatures: jdk-unsafe-1.8
[forbidden-apis] Reading bundled API signatures: jdk-deprecated-1.8
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.awt.Component#getPeer() [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.awt.Font#getPeer() [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.awt.MenuComponent#getPeer() [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.awt.Toolkit#getFontPeer(java.lang.String,int) [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.jar.Pack200$Packer#addPropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.jar.Pack200$Packer#removePropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.jar.Pack200$Unpacker#addPropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.jar.Pack200$Unpacker#removePropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.logging.LogManager#addPropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] WARNING: Method not found while parsing signature: 
java.util.logging.LogManager#removePropertyChangeListener(java.beans.PropertyChangeListener)
 [signature ignored]
[forbidden-apis] Reading bundled API signatures: commons-io-unsafe-2.4
[forbidden-apis] Reading API signatures: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/forbiddenApis/base.txt
[forbidden-apis] Reading API signatures: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/forbiddenApis/servlet-api.txt
[forbidden-apis] Reading API signatures: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/forbiddenApis/solr.txt
[forbidden-apis] Loading classes to check...
[forbidden-apis] Scanning classes for violations...
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest (DocValuesTest.java, 
field declaration of 'log')
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:40)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:467)
[forbidden-apis] Forbidden class/interface use: java.util.logging.** [Use slf4j 
classes instead]
[forbidden-apis]   in org.apache.solr.schema.DocValuesTest 
(DocValuesTest.java:525)
[forbidden-apis] Scanned 3109 (and 1968 related) class file(s) for forbidden 
API invocations (in 1.92s), 4 error(s).

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:740: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:117: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build.xml:347: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/common-build.xml:509: Check 
for forbidden API calls failed, see log.

Total time: 65 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-8878) Allow the DaemonStream run rate be controlled by the internal stream

2016-03-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8878:
-
Attachment: SOLR-8878.patch

Added a StreamingTest for a compound DaemonStream, TopicStream.

> Allow the DaemonStream run rate be controlled by the internal stream
> 
>
> Key: SOLR-8878
> URL: https://issues.apache.org/jira/browse/SOLR-8878
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Attachments: SOLR-8878.patch, SOLR-8878.patch
>
>
> Currently the DaemonStream sleeps for one second and then checks the 
> runInterval param to determine if it needs to rerun the internal stream.
> This setup works fine if the runInterval is longer than one second and if 
> it never changes. But with the TopicStream, you want a variable run rate. For 
> example, if the TopicStream's latest run returned documents, the next run 
> should be immediate. But if the TopicStream's latest run returned zero 
> documents, then you'd want to sleep for a period of time before starting the 
> next run.
> This ticket allows the internal stream to control the DaemonStream run rate 
> by adding a *sleepMillis* key-pair to the EOF Tuple. After each run the 
> DaemonStream will check the EOF Tuple from the internal stream, and if the 
> sleepMillis key-pair is present it will adjust its run rate accordingly.
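The contract described above is small enough to sketch. This is a hedged Python mock-up, not the Solr implementation: the sleepMillis key and the one-second default come from the ticket, while the function names and the 2000 ms back-off value are illustrative assumptions.

```python
DEFAULT_SLEEP_MILLIS = 1000  # the fixed one-second default described above

def next_sleep_millis(eof_tuple, default_millis=DEFAULT_SLEEP_MILLIS):
    # The internal stream may attach "sleepMillis" to its EOF tuple;
    # fall back to the fixed default otherwise.
    return eof_tuple.get("sleepMillis", default_millis)

def topic_eof_tuple(num_docs_returned):
    # Hypothetical TopicStream policy from the description: rerun
    # immediately when the last run returned documents, otherwise
    # back off (2000 ms chosen arbitrarily) before the next run.
    return {"EOF": True, "sleepMillis": 0 if num_docs_returned > 0 else 2000}
```

A daemon loop would call next_sleep_millis on the EOF tuple of each run and sleep that long before rerunning the internal stream.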



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Remote file systems solrcloud

2016-03-20 Thread Christopher Luciano
Hi, I'd like to know if anyone has had much success with remote file systems or 
NAS-type storage backends for SolrCloud. Right now we are using local disk in 
a Mesos cluster, but it would be great if we could use something more tolerant 
to failure. We have to perform some hacks to pin instances to machines in our 
Mesos cluster.

We have considered systems like Ceph. We've heard from some of our storage 
experts that Lucene has certain issues with these types of parallel 
filesystems. Has anyone had experience with this or something like glusterfs?


Sent from my iPad
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-03-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: (was: SOLR-8208.patch)

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time-join specific, so let's allow 
> specifying any query and parameters for it; let's call it a sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\] ?
> I suppose we can allow specifying a subquery parameter prefix:
> {code}
> ..&fl=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..&subq1.q={!term f=child_id 
> v=$subq1.row.id}&subq1.rows=3&subq1.sort=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for the subquery: {{subq1.q}} turns 
> into {{q}} for the subquery, {{subq1.rows}} into {{rows}}
> * {{fromIndex=othercore}} is an optional param that allows running the 
> subquery on another core, like it works for query-time join
> * the trickiest one is referencing a document field from subquery 
> parameters; here I propose to use the local param {{v}} and param dereference 
> {{v=$param}}, so every document field implicitly introduces a parameter for 
> the subquery, $\{paramPrefix\}row.$\{fieldName\}; thus the subquery above is a 
> term query on child_id with this row's id. Presumably we can drop "row." in 
> the middle (reducing to v=$subq1.id), until someone deals with {{rows}}, 
> {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it will be quite slow; it handles only the search result page, not the 
> entire result set. 
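The prefix-shifting rule in the first bullet can be sketched directly. This is an illustrative Python helper, not code from the patch (the function name is mine):

```python
def shift_params(params, prefix):
    # Keep only the keys carrying the subquery prefix and strip it,
    # e.g. with prefix "subq1.": "subq1.q" -> "q", "subq1.rows" -> "rows".
    # Everything else (the parent request's own params) is dropped.
    return {k[len(prefix):]: v
            for k, v in params.items()
            if k.startswith(prefix)}
```

With prefix "subq1.", a parent request carrying both its own q and subq1.q yields an isolated parameter map for the subquery, which sidesteps the escaping problem described above.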



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-03-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: SOLR-8208.patch

I moved the patch to the existing closeables. Now I'm looking into the change 
in DocStreamer and trying to avoid it.

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it will be quite slow; on the other hand, it handles only the search result page, not the 
> entire result set. 
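The prefix-shifting rule described above ({{subq1.q}} turns into {{q}} for the subquery) can be sketched as a small helper. This is only an illustration of the proposed semantics, not Solr's actual implementation; the function name and the dict-based params are assumptions:

```python
def shift_params(params, prefix):
    """Select the parameters carrying `prefix` and strip it,
    producing the effective parameter set for the subquery."""
    return {k[len(prefix):]: v for k, v in params.items() if k.startswith(prefix)}

# "subq1.q" turns into "q", "subq1.rows" into "rows"; unprefixed
# params belong to the main query and are not passed through.
main = {"q": "*:*", "fl": "id",
        "subq1.q": "{!term f=child_id v=$subq1.row.id}", "subq1.rows": "3"}
sub = shift_params(main, "subq1.")
```

With this shifting in place, several independent subqueries can coexist in one request simply by giving each its own prefix.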



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8082.
--
Resolution: Fixed

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues-based queries get built for single-valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the covers) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}






[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203525#comment-15203525
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit c3d0276b2f50ca6c1b8dd1298fc2e214c4020dbf in lucene-solr's branch 
refs/heads/branch_6_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3d0276 ]

SOLR-8082: Can't query against negative float or double values when 
indexed='false' docValues='true' multiValued='false'








[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203523#comment-15203523
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit 2668ff5abb16e867f9f770d1da65457161b52cda in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2668ff5 ]

SOLR-8082: Can't query against negative float or double values when 
indexed='false' docValues='true' multiValued='false'








[jira] [Commented] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203521#comment-15203521
 ] 

ASF subversion and git services commented on SOLR-8082:
---

Commit 49d5ec02a2015ddd80059d46788b723b25cb5491 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=49d5ec0 ]

SOLR-8082: Can't query against negative float or double values when 
indexed='false' docValues='true' multiValued='false'








[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203518#comment-15203518
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit a600e6f2dd121104ddf0f28cb0120544703e9455 in lucene-solr's branch 
refs/heads/branch_6_0 from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a600e6f ]

LUCENE-7118: Move numDims check before modulo numDims


> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.0
>
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and it causes code duplication in most Point classes, 
> which need a {{pack()}} that encodes to byte[] for the indexer but an 
> {{encode()}} or similar that makes a multi-D byte[][] just for this query.
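The flat-array alternative is straightforward: with a fixed number of bytes per dimension, dimension {{i}} of a packed value lives at offset {{i * bytesPerDim}}, so a single byte[] serves both the indexer and the query. A sketch of the idea in Python (names are illustrative, not Lucene's API):

```python
BYTES_PER_DIM = 4  # e.g. a 32-bit encoded value per dimension

def pack(dims):
    """Concatenate per-dimension encodings into one flat byte string."""
    assert all(len(d) == BYTES_PER_DIM for d in dims)
    return b''.join(dims)

def get_dim(packed, i):
    """Slice dimension i back out of the packed value."""
    return packed[i * BYTES_PER_DIM:(i + 1) * BYTES_PER_DIM]

point = pack([b'\x00\x00\x00\x01', b'\xff\xff\xff\xfe'])
```

Because the packed form is one contiguous value, equality and hashing can operate on plain bytes, sidestepping the byte[][] pitfalls entirely.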






[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-8082:
-
Fix Version/s: 6.1
   master







[jira] [Assigned] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-8082:


Assignee: Steve Rowe







[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203517#comment-15203517
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit ab821d3a087f400206f903800f1167679dccb9ac in lucene-solr's branch 
refs/heads/branch_6x from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab821d3 ]

LUCENE-7118: Move numDims check before modulo numDims








[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203516#comment-15203516
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit 7f9c4d886dbef50e31fda38d77d73a5bd3b7b8be in lucene-solr's branch 
refs/heads/master from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7f9c4d8 ]

LUCENE-7118: Move numDims check before modulo numDims








[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203514#comment-15203514
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit d92561e6ed0dad5cbd0488a05f2cea569ff6d5e3 in lucene-solr's branch 
refs/heads/branch_6_0 from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d92561e ]

LUCENE-7118: Fix packed points upper/lower bound length check








[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203513#comment-15203513
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit 38673877aef4f89dbb437249eaf201f963477f80 in lucene-solr's branch 
refs/heads/branch_6x from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3867387 ]

LUCENE-7118: Fix packed points upper/lower bound length check








[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203510#comment-15203510
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit 54e662b6dab9f495109b3e56aab23383aad3fb4f in lucene-solr's branch 
refs/heads/master from [~rjernst]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=54e662b ]

LUCENE-7118: Fix packed points upper/lower bound length check








[jira] [Resolved] (LUCENE-7117) PointRangeQuery.hashCode is inconsistent

2016-03-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7117.
-
Resolution: Fixed

Nick, I merged your tests in with the fix in LUCENE-7118. Thanks for adding 
them!

> PointRangeQuery.hashCode is inconsistent
> 
>
> Key: LUCENE-7117
> URL: https://issues.apache.org/jira/browse/LUCENE-7117
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7117.patch
>
>
> Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce different values 
> for the same query.
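The inconsistency is characteristic of hashing a {{byte[][]}}: in Java, an array's own {{hashCode}} is identity-based, so a shallow {{Arrays.hashCode}} over the outer array mixes inner-array identities, and two equal queries built from distinct inner arrays hash differently ({{Arrays.deepHashCode}}, or hashing the flattened packed bytes as LUCENE-7118 enables, avoids this). A Python sketch of the contrast, using {{id()}} to stand in for Java's identity hash (illustrative only):

```python
def identity_style_hash(arrays):
    # Mimics a shallow Arrays.hashCode over elements that hash by
    # identity: equal contents, distinct objects -> different hashes.
    h = 1
    for a in arrays:
        h = 31 * h + id(a)
    return h

def deep_hash(arrays):
    # Mimics Arrays.deepHashCode: depends only on the contents.
    h = 1
    for a in arrays:
        h = 31 * h + hash(bytes(a))
    return h

q1 = [bytearray(b'\x01\x02'), bytearray(b'\x03\x04')]
q2 = [bytearray(b'\x01\x02'), bytearray(b'\x03\x04')]  # equal contents, distinct objects
```

Any content-based hash restores the hashCode/equals contract; the identity-based one silently violates it.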






[jira] [Resolved] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7118.
-
   Resolution: Fixed
Fix Version/s: 6.0
   master







[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203508#comment-15203508
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit d8c4e6977b0e93c538813f1db1dd67fcfc199356 in lucene-solr's branch 
refs/heads/branch_6_0 from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8c4e69 ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.






[jira] [Commented] (LUCENE-7117) PointRangeQuery.hashCode is inconsistent

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203507#comment-15203507
 ] 

ASF subversion and git services commented on LUCENE-7117:
-

Commit d8c4e6977b0e93c538813f1db1dd67fcfc199356 in lucene-solr's branch 
refs/heads/branch_6_0 from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8c4e69 ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> PointRangeQuery.hashCode is inconsistent
> 
>
> Key: LUCENE-7117
> URL: https://issues.apache.org/jira/browse/LUCENE-7117
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7117.patch
>
>
> Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce different values 
> for the same query.






[jira] [Closed] (SOLR-5616) Make grouping code use response builder needDocList

2016-03-20 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-5616.
-
Resolution: Fixed

> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Erick Erickson
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, 
> false) ){
> ...
> }
> {code}
> this is ugly because any new component that needs a docList from grouped 
> results will need to modify QueryComponent to add another check to this 
> if-statement. Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally this boolean is never actually used, as for non-grouped results 
> the docList always gets generated.
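The flag-based check being proposed can be sketched as follows; this is a hypothetical condensation, not Solr's actual classes:

```java
// Instead of QueryComponent enumerating every component that needs a docList
// (highlighting, debug, MLT, ...), components set a single flag on the
// response builder and the grouping code consults only that flag.
public class NeedDocListDemo {
    static class ResponseBuilder {
        boolean doHighlights;
        boolean debug;
        boolean needDocList;

        // Existing consumers still work; new components just set needDocList
        // instead of growing the if-condition in QueryComponent.
        boolean isNeedDocList() { return needDocList || doHighlights || debug; }
    }

    public static void main(String[] args) {
        ResponseBuilder rb = new ResponseBuilder();
        rb.needDocList = true; // e.g. a new component requests a docList
        System.out.println(rb.isNeedDocList()); // true
    }
}
```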






[jira] [Closed] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-03-20 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-8599.
-
Resolution: Fixed

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below). 
> However, any exception thrown in the constructor of SolrZooKeeper or the 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the zookeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}
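Point 1 above (bounded retries) could be sketched as follows; {{ZkFactory}} and {{reconnect}} are stand-ins for illustration, not Solr's real connection API:

```java
import java.util.concurrent.TimeUnit;

public class ReconnectWithRetry {
    interface ZkFactory { AutoCloseable connect() throws Exception; }

    // Retry the (possibly throwing) constructor a bounded number of times,
    // with exponential backoff, instead of giving up forever after the
    // first failure (e.g. a transient UnknownHostException from DNS).
    static AutoCloseable reconnect(ZkFactory factory, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                return factory.connect();
            } catch (Exception e) {
                last = e;
                TimeUnit.MILLISECONDS.sleep(100L << attempt); // backoff
            }
        }
        throw last; // surface the failure instead of silently staying broken
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        AutoCloseable zk = reconnect(() -> {
            if (++calls[0] < 3) throw new Exception("transient DNS failure");
            return () -> {}; // trivial stand-in for the real client
        }, 5);
        System.out.println(calls[0]); // 3: failed twice, succeeded on third try
    }
}
```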






[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203500#comment-15203500
 ] 

Ryan Ernst commented on LUCENE-7118:


Shouldn't this line in PointRangeQuery:

{quote}
+if (upperPoint.length != upperPoint.length) {
{quote}

Be checking {{lowerPoint.length != upperPoint.length}}?
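A quick illustration (with made-up values) of why the quoted self-comparison can never fire:

```java
public class DeadCheckDemo {
    public static void main(String[] args) {
        byte[] lowerPoint = {1};
        byte[] upperPoint = {1, 2, 3};
        // Comparing upperPoint's length to itself is dead code; it stays
        // false even when the two points genuinely disagree on length:
        System.out.println(upperPoint.length != upperPoint.length); // false
        // The intended check catches the mismatch:
        System.out.println(lowerPoint.length != upperPoint.length); // true
    }
}
```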

> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.






[jira] [Commented] (LUCENE-7117) PointRangeQuery.hashCode is inconsistent

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203498#comment-15203498
 ] 

ASF subversion and git services commented on LUCENE-7117:
-

Commit 51b109620be7d565ae816f9e327813798c001611 in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=51b1096 ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> PointRangeQuery.hashCode is inconsistent
> 
>
> Key: LUCENE-7117
> URL: https://issues.apache.org/jira/browse/LUCENE-7117
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7117.patch
>
>
> Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce different values 
> for the same query.






[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203499#comment-15203499
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit 51b109620be7d565ae816f9e327813798c001611 in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=51b1096 ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.






[jira] [Commented] (LUCENE-7117) PointRangeQuery.hashCode is inconsistent

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203492#comment-15203492
 ] 

ASF subversion and git services commented on LUCENE-7117:
-

Commit e1a1dbfabcc9defb22ba091be1633a31a2810ab8 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1a1dbf ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> PointRangeQuery.hashCode is inconsistent
> 
>
> Key: LUCENE-7117
> URL: https://issues.apache.org/jira/browse/LUCENE-7117
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7117.patch
>
>
> Like LUCENE-7085 {{PointRangeQuery.hashCode}} can produce different values 
> for the same query.






[jira] [Commented] (LUCENE-7118) Remove multidimensional arrays from PointRangeQuery

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203493#comment-15203493
 ] 

ASF subversion and git services commented on LUCENE-7118:
-

Commit e1a1dbfabcc9defb22ba091be1633a31a2810ab8 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1a1dbf ]

LUCENE-7117, LUCENE-7118: Remove multidimensional arrays from PointRangeQuery


> Remove multidimensional arrays from PointRangeQuery
> ---
>
> Key: LUCENE-7118
> URL: https://issues.apache.org/jira/browse/LUCENE-7118
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7118.patch
>
>
> This use of byte[][] has caused two bugs: LUCENE-7085 and LUCENE-7117.
> It is not necessary, and causes code duplication in most Point classes 
> because they have to have a {{pack()}} that encodes to byte[] for the indexer 
> but a {{encode()}} or similar that makes multi-D byte[][] for just this query.






[jira] [Commented] (SOLR-8870) AngularJS Query tab breaks through proxy

2016-03-20 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203431#comment-15203431
 ] 

Upayavira commented on SOLR-8870:
-

Looks good to me (not tried it though)

> AngularJS Query tab breaks through proxy
> 
>
> Key: SOLR-8870
> URL: https://issues.apache.org/jira/browse/SOLR-8870
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.5
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: 404-error, angularjs, encoding, newdev
> Attachments: SOLR-8870.patch
>
>
> The AngularJS Query tab generates a request URL on this form: 
> http://localhost:8983/solr/techproducts%2Fselect?_=1458291250691=on=ram=json
>  Notice the urlencoded {{%2Fselect}} part.
> This works well locally with Jetty, but a customer has httpd as a proxy in 
> front, and we get a 404 error since the web server does not parse {{%2F}} as 
> a path separator and thus does not match the proxy rules for select.
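Independently of any UI-side fix, the proxy itself can often be told not to decode {{%2F}}. A hypothetical httpd fragment (the {{AllowEncodedSlashes NoDecode}} value requires httpd 2.2.18 or later; host and paths here are illustrative):

```apache
# Pass %2F through to Jetty untouched so /solr/techproducts%2Fselect
# still matches the proxy rules for select.
AllowEncodedSlashes NoDecode
ProxyPass        /solr http://localhost:8983/solr nocanon
ProxyPassReverse /solr http://localhost:8983/solr
```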






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5723 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5723/
Java: 64bit/jdk1.8.0_72 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:64900//collection1: 
java.lang.NullPointerException  at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:105)
  at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:753)
  at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:736)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:420)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:462)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:518)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
 at java.lang.Thread.run(Thread.java:745) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:64900//collection1: 
java.lang.NullPointerException
at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:105)
at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:753)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:736)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:420)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 

[jira] [Created] (SOLR-8879) Wrong number of matches is returned when group cache limit is exceeded and some results are filtered by a post filter

2016-03-20 Thread Erez Michalak (JIRA)
Erez Michalak created SOLR-8879:
---

 Summary: Wrong number of matches is returned when group cache 
limit is exceeded and some results are filtered by a post filter
 Key: SOLR-8879
 URL: https://issues.apache.org/jira/browse/SOLR-8879
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 5.3.1
Reporter: Erez Michalak


When the group cache limit is exceeded (see the warning "The grouping cache is 
active, but not used because it exceeded the max cache limit of %d percent" at 
the Grouping class), and some of the results are filtered by a post filter, the 
number of matches in the response is wrong (it doesn't take the post filter into 
account). 

Seems like this can be fixed if the following lines are added after the warning 
and before searchWithTimeLimiter:
{code}
if (pf.postFilter != null) {
  pf.postFilter.setLastDelegate(secondPhaseCollectors);
  secondPhaseCollectors = pf.postFilter;
}
{code}
(because exceeding the cache limit should work exactly the same as working with 
no cache at all) 






[jira] [Commented] (SOLR-5616) Make grouping code use response builder needDocList

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203381#comment-15203381
 ] 

ASF subversion and git services commented on SOLR-5616:
---

Commit fecdec6c85f6180f00e870ca8ec14058d30a1fae in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fecdec6 ]

SOLR-5616: Simplifies grouping code to use ResponseBuilder.needDocList() to 
determine if it needs to generate a doc list for grouped results.


> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Erick Erickson
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, 
> false) ){
> ...
> }
> {code}
> this is ugly because any new component that needs a docList from grouped 
> results will need to modify QueryComponent to add another check to this 
> if-statement. Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally this boolean is never actually used, as for non-grouped results 
> the docList always gets generated.






[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203380#comment-15203380
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit e3b785a906d6f93e04f2cb45c436516158af0425 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3b785a ]

SOLR-8599: Improved the tests for this issue to avoid changing a variable to 
non-final


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below). 
> However, any exception thrown in the constructor of SolrZooKeeper or the 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the zookeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16278 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16278/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:
--> http://127.0.0.1:44504/collection1:java.util.concurrent.ExecutionException: 
java.io.IOException: --> http://127.0.0.1:34360/collection1/: An exception has 
occurred on the server, refer to server log for details.

Stack Trace:
java.io.IOException: --> 
http://127.0.0.1:44504/collection1:java.util.concurrent.ExecutionException: 
java.io.IOException: --> http://127.0.0.1:34360/collection1/: An exception has 
occurred on the server, refer to server log for details.
at 
__randomizedtesting.SeedInfo.seed([4E2F6420A5FAE254:E96BDC84C841F1ED]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:201)
at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2425)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:238)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203355#comment-15203355
 ] 

Shawn Heisey commented on SOLR-6806:


Side proposal, which I'm willing to forget about: Consider putting the main DIH 
jar back into WEB-INF/lib, and possibly even moving its code into core.  The 
DIH-extras code and jars probably still belong in contrib.

I think that for a lot of users, DIH was the primary reason that they seriously 
investigated Solr.  I know it was a primary consideration when I first started. 
 For users like that, DIH is not "extra" functionality, it's part of their core 
usage.  The main DIH jar is also very small, slightly larger than 200KB.


> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2016-03-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203353#comment-15203353
 ] 

Shawn Heisey commented on SOLR-6806:


One idea that would take care of almost every concern created by splitting 
artifacts in the first place:  Put *all* of the pieces that we extract from the 
main artifact (except docs, which gets its own) into a "solr-extras" artifact.  
It would be large, but there would be a lot less "if x then y" documentation 
required.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 966 - Still Failing

2016-03-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/966/

6 tests failed.
FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)


FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingStoredFieldsFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingStoredFieldsFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)


FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D8DE6A97EC13DDB8]:0)




Build Log:
[...truncated 2660 lines...]
   [junit4] Suite: 
org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat
   [junit4]   2> 3 20, 2016 12:06:42 ?? 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat
   [junit4]   2>1) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1245)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1319)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:601)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:450)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:243)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:354)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:10)
   [junit4]   2>2) Thread[id=602, 
name=SUITE-TestAssertingDocValuesFormat-seed#[D8DE6A97EC13DDB8], 
state=RUNNABLE, group=TGRP-TestAssertingDocValuesFormat]
   [junit4]   2> at java.lang.Thread.getStackTrace(Thread.java:1552)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:688)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:685)
   [junit4]   2> at java.security.AccessController.doPrivileged(Native 
Method)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getStackTrace(ThreadLeakControl.java:685)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getThreadsWithTraces(ThreadLeakControl.java:701)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.formatThreadStacksFull(ThreadLeakControl.java:681)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.access$1000(ThreadLeakControl.java:64)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:414)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:681)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:140)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:591)
   [junit4]   2>3) Thread[id=603, 
name=TEST-TestAssertingDocValuesFormat.testRamBytesUsed-seed#[D8DE6A97EC13DDB8],
 state=RUNNABLE, group=TGRP-TestAssertingDocValuesFormat]
   [junit4]   2> at java.lang.Throwable.fillInStackTrace(Native Method)
   [junit4]   2> at 

[jira] [Updated] (SOLR-8644) ArrayIndexOutOfBoundsException in BlockJoinFieldFacetAccumulator

2016-03-20 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-8644:
---
Attachment: SOLR-8644.patch

The issue is reproduced with a unit test. It looks like BlockJoinFacetComponent 
conflicts with the exclusions defined for a regular facet.

> ArrayIndexOutOfBoundsException in BlockJoinFieldFacetAccumulator
> 
>
> Key: SOLR-8644
> URL: https://issues.apache.org/jira/browse/SOLR-8644
> Project: Solr
>  Issue Type: Sub-task
>  Components: faceting
>Reporter: Ilya Kasnacheev
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8644.patch
>
>
> Not sure I can provide any minimal example, but possibly it's easier to fix 
> than describe.
> {code}
> http://localhost:8983/solr/core0/bjqfacet?q={!parent+which%3Dtype_s:parent+v%3D$cq}={!term+f%3DBRAND_s+tag%3Drbrand}Nike=true=type_s:child+AND+SIZE_s:XL={!ex%3Drbrand}BRAND_s=disp_clr
> {code}
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.solr.search.join.BlockJoinFieldFacetAccumulator$SortedIntsAggDocIterator.nextDoc(BlockJoinFieldFacetAccumulator.java:117)
> at 
> org.apache.solr.search.join.BlockJoinFieldFacetAccumulator.updateCountsWithMatchedBlock(BlockJoinFieldFacetAccumulator.java:143)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.countFacets(BlockJoinFacetCollector.java:119)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:106)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1161)
> {code}
> Again it only shows up in BlockJoinFacetComponent, not in 
> BlockJoinDocSetFacetComponent
> The error is at bottom of result:
> {code}
> java.lang.ArrayIndexOutOfBoundsException
> 500
> {code}






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 189 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/189/
Java: 32bit/jdk-9-jigsaw-ea+110 -client -XX:+UseSerialGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8246, 
name=testExecutor-4113-thread-9, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8246, name=testExecutor-4113-thread-9, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40939/gi/ym
at __randomizedtesting.SeedInfo.seed([FC8C22E475E054E]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@9-ea/ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@9-ea/ThreadPoolExecutor.java:632)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:40939/gi/ym
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(java.base@9-ea/Native Method)
at 
java.net.SocketInputStream.socketRead(java.base@9-ea/SocketInputStream.java:116)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:170)
at 
java.net.SocketInputStream.read(java.base@9-ea/SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11469 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_FC8C22E475E054E-001/init-core-data-001
   [junit4]   2> 950305 INFO  
(SUITE-UnloadDistributedZkTest-seed#[FC8C22E475E054E]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /gi/ym
   [junit4]   2> 950307 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[FC8C22E475E054E]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 950308 INFO  (Thread-2383) [] 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 465 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/465/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'null' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":4, "params":{   "x":{ "a":"A val", 
"b":"B val", "_appends_":{"add":"first"}, 
"_invariants_":{"fixed":"f"}, "":{"v":1}},   "y":{ "p":"P 
val", "q":"Q val", "":{"v":2}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'null' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":4,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"_appends_":{"add":"first"},
"_invariants_":{"fixed":"f"},
"":{"v":1}},
  "y":{
"p":"P val",
"q":"Q val",
"":{"v":2}
at 
__randomizedtesting.SeedInfo.seed([3F4498CCD03B06F5:B710A7167EC76B0D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:264)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Closed] (SOLR-8666) Add header to SearchHandler to indicate whether solr is connected to zk

2016-03-20 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-8666.
-
   Resolution: Fixed
Fix Version/s: master

> Add header to SearchHandler to indicate whether solr is connected to zk
> 
>
> Key: SOLR-8666
> URL: https://issues.apache.org/jira/browse/SOLR-8666
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master
>
> Attachments: SOLR-8666.patch
>
>
> Currently, Solr update requests error out if a ZooKeeper check fails; however, 
> SearchHandler does not do these checks. As a result, if a request is sent to 
> a node which should be part of a SolrCloud but is not connected to ZooKeeper 
> and thinks that it is Active, it's possible the response is composed of stale 
> data. 
> The purpose of this header is to allow the client to decide whether or not 
> the result data should be considered valid.
> This patch also returns the {{zkConnected}} header in the ping handler to 
> allow external health checks to use this information. 
> See [SOLR-8599] for an example of when this situation can arise. 
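A client can act on the header described above roughly as follows. This is a minimal sketch under stated assumptions: the header is surfaced as a boolean named {{zkConnected}} (the name from this issue), and the plain {{Map}} stand-in below is illustrative, not the SolrJ API.

```java
import java.util.Map;

// Illustrative client-side policy (the Map stand-in is not the SolrJ API):
// treat results as possibly stale when the zkConnected header is false.
class ZkConnectedCheck {
  static boolean resultsTrustworthy(Map<String, Object> responseHeader) {
    Object zk = responseHeader.get("zkConnected");
    // An absent header (e.g. non-cloud mode) is accepted here; a stricter
    // client could instead require an explicit true.
    return !Boolean.FALSE.equals(zk);
  }
}
```

A health check built on the ping handler could apply the same test to decide whether a node should stay in a load balancer's rotation.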






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-20 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203322#comment-15203322
 ] 

Dawid Weiss commented on LUCENE-7122:
-

If you do keep it inside one class, it'll get so hairy nobody will be able to 
understand it. Subclassing/interfaces may cause the call sites to degenerate 
into polymorphic calls, leading to inefficiencies.

I would personally leave it as is, without trying to optimize, but if it's 
really a gain worth the effort then I'd keep a separate class for it (fully 
optimized for a particular use case, without multiple layers of conditional 
logic).

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.
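The space saving in the description can be sketched as follows: when every value has the same width, the offset of value {{i}} is just {{i * width}}, so the per-value length int disappears entirely. This is an illustrative sketch only; the class and method names are hypothetical, not the actual Lucene patch.

```java
// Illustrative sketch (hypothetical names, not the actual LUCENE-7122 patch):
// with fixed-width values, offsets are computed rather than stored.
class FixedWidthBytesArray {
  private final int width; // every appended value must have exactly this many bytes
  private byte[] bytes = new byte[0];
  private int count = 0;

  FixedWidthBytesArray(int width) {
    this.width = width;
  }

  void append(byte[] value) {
    if (value.length != width) {
      throw new IllegalArgumentException("expected " + width + " bytes, got " + value.length);
    }
    // Naive growth for clarity; a real implementation would amortize this.
    byte[] grown = new byte[bytes.length + width];
    System.arraycopy(bytes, 0, grown, 0, bytes.length);
    System.arraycopy(value, 0, grown, bytes.length, width);
    bytes = grown;
    count++;
  }

  byte[] get(int index) {
    // The offset is index * width, not looked up: the 4-byte-per-value saving.
    byte[] out = new byte[width];
    System.arraycopy(bytes, index * width, out, 0, width);
    return out;
  }

  int size() {
    return count;
  }
}
```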






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203308#comment-15203308
 ] 

Robert Muir commented on LUCENE-7122:
-

Can we not make any more crazy abstractions here, please?

It's already screwed up how many of these BytesRefArray-type abstractions we 
have: far too many, and none are justified.


> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 20 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/20/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([674F4802A00CDE77:BF0267D17BD7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (LUCENE-7121) BKDWriter should not store ords when documents are single valued

2016-03-20 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7121.

Resolution: Fixed

> BKDWriter should not store ords when documents are single valued
> 
>
> Key: LUCENE-7121
> URL: https://issues.apache.org/jira/browse/LUCENE-7121
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7121.patch
>
>
> Since we now have stats for points fields, it's easy to know up front whether 
> the field you are about to build a BKD tree for is single valued or not.
> If it is single valued, we can optimize space by not storing the ordinal to 
> identify a point, since its docID also uniquely identifies it.
> This saves 4 bytes per point, which for the 1D case is non-trivial (12 bytes 
> down to 8 bytes per doc), and even for the 2D case is good reduction (16 
> bytes down to 12 bytes per doc).
> This is an optimization ... I won't push it into 6.0.
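The byte accounting quoted above can be reproduced directly. This is an illustrative calculation, not code from the patch; it assumes 4-byte dimensions as for lat/lon points.

```java
// Illustrative arithmetic only (not code from the patch): per-point heap cost
// in BKDWriter's temporary storage, assuming 4-byte (int-width) dimensions.
class BkdPointCost {
  static int bytesPerPoint(int numDims, boolean singleValued) {
    int packedValue = numDims * Integer.BYTES;   // 4 bytes per dimension
    int docId = Integer.BYTES;                   // 4 bytes
    int ord = singleValued ? 0 : Integer.BYTES;  // dropped when single-valued
    return packedValue + docId + ord;
  }
}
```

Plugging in the cases from the description: 1D goes from 12 to 8 bytes per doc, and 2D from 16 to 12.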






[jira] [Commented] (LUCENE-7121) BKDWriter should not store ords when documents are single valued

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203290#comment-15203290
 ] 

ASF subversion and git services commented on LUCENE-7121:
-

Commit cc01ba39cd98bf2274f2ac98c44bcd60028697b3 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cc01ba3 ]

LUCENE-7121: don't write ord for single-valued points, saving 4 bytes per point


> BKDWriter should not store ords when documents are single valued
> 
>
> Key: LUCENE-7121
> URL: https://issues.apache.org/jira/browse/LUCENE-7121
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7121.patch
>
>
> Since we now have stats for points fields, it's easy to know up front whether 
> the field you are about to build a BKD tree for is single valued or not.
> If it is single valued, we can optimize space by not storing the ordinal to 
> identify a point, since its docID also uniquely identifies it.
> This saves 4 bytes per point, which for the 1D case is non-trivial (12 bytes 
> down to 8 bytes per doc), and even for the 2D case is good reduction (16 
> bytes down to 12 bytes per doc).
> This is an optimization ... I won't push it into 6.0.
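The space accounting in the quoted description can be sketched with a small, hypothetical helper (assuming 4-byte dimensions, a 4-byte docID, and a 4-byte ord, as the issue's 12→8 and 16→12 numbers imply; this is illustrative, not Lucene's actual BKDWriter code):

```java
public class BkdRecordSize {
    // Bytes per point in the BKD writer's intermediate records: packed value
    // plus docID, plus an ord when the field is multi-valued (docID alone no
    // longer identifies the point uniquely in that case).
    static int bytesPerPoint(int numDims, int bytesPerDim, boolean singleValued) {
        int size = numDims * bytesPerDim + Integer.BYTES; // packed value + docID
        if (!singleValued) {
            size += Integer.BYTES; // ord to tie the point back to its doc entry
        }
        return size;
    }

    public static void main(String[] args) {
        // 1D int field: 12 bytes down to 8 per doc; 2D: 16 down to 12.
        System.out.println(bytesPerPoint(1, 4, false)); // 12
        System.out.println(bytesPerPoint(1, 4, true));  // 8
        System.out.println(bytesPerPoint(2, 4, false)); // 16
        System.out.println(bytesPerPoint(2, 4, true));  // 12
    }
}
```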






[jira] [Commented] (LUCENE-7121) BKDWriter should not store ords when documents are single valued

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203287#comment-15203287
 ] 

ASF subversion and git services commented on LUCENE-7121:
-

Commit d392940092187ba88be0d2b0882c23800f44a74e in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d392940 ]

LUCENE-7121: don't write ord for single-valued points, saving 4 bytes per point


> BKDWriter should not store ords when documents are single valued
> 
>
> Key: LUCENE-7121
> URL: https://issues.apache.org/jira/browse/LUCENE-7121
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7121.patch
>
>
> Since we now have stats for points fields, it's easy to know up front whether 
> the field you are about to build a BKD tree for is single valued or not.
> If it is single valued, we can optimize space by not storing the ordinal to 
> identify a point, since its docID also uniquely identifies it.
> This saves 4 bytes per point, which for the 1D case is non-trivial (12 bytes 
> down to 8 bytes per doc), and even for the 2D case is good reduction (16 
> bytes down to 12 bytes per doc).
> This is an optimization ... I won't push it into 6.0.






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_72) - Build # 188 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/188/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=7366, 
name=testExecutor-3377-thread-2, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7366, name=testExecutor-3377-thread-2, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:39077
at __randomizedtesting.SeedInfo.seed([E21E9796849CC2C2]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:39077
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11374 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_E21E9796849CC2C2-001/init-core-data-001
   [junit4]   2> 868968 INFO  
(SUITE-UnloadDistributedZkTest-seed#[E21E9796849CC2C2]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 868969 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[E21E9796849CC2C2]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 868969 INFO  (Thread-2131) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 868969 INFO  (Thread-2131) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3153 - Failure!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3153/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:53613/_/b/c8n_1x3_lf_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:53613/_/b/c8n_1x3_lf_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([9C08B46F1EE4FBB9:145C8BB5B0189641]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:648)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:609)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:595)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:174)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203273#comment-15203273
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 2ea8d4cd7b39e675ebc2cad6a9bfa983872599a0 in lucene-solr's branch 
refs/heads/branch_6_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2ea8d4c ]

SOLR-8874: Update Maven config to correctly set tests.disableHdfs


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error 

[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203271#comment-15203271
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 0f60ce61eb9d28910c5934929d6d300f047ed1ce in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f60ce6 ]

SOLR-8874: Update Maven config to correctly set tests.disableHdfs


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error 

[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203270#comment-15203270
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 3a4e1d114219e0f9a28cf49c51ed9928913d2cb3 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3a4e1d1 ]

SOLR-8874: Update Maven config to correctly set tests.disableHdfs


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error <<< 

[jira] [Comment Edited] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203239#comment-15203239
 ] 

Uwe Schindler edited comment on LUCENE-7114 at 3/20/16 10:52 AM:
-

bq. Bugs get fixed by the developers quickly, but then take weeks to find their 
way into a build, yet the builds are still horrendously unstable, so whats the 
point of all the delay?

I think this should again be brought up on the Hotspot mailing list. I already 
did this on the core-dev list, but the people there are different and they 
don't have these problems. Non-Hotspot fixes get into the EA builds quite fast.

The problem is the Hotspot fixes. As noted in my article about FOSDEM (which 
was titled "How free is OpenJDK" in the German version; see 
https://jaxenter.com/java-9-steals-spotlight-open-jdk-project-takes-back-123721.html),
 the problems in Hotspot are more complicated. Other non-Oracle developers, 
e.g. from SAP, were also complaining about the processes inside Hotspot and 
how patches get accepted. There is also a separate test suite that is not 
public! Until a fix passes that suite, it does not get into the EA builds, and 
so on. To me it looks like there is a lot of additional bureaucratic "quality" 
control behind the scenes that delays inclusion in the EA builds. It also 
looks like changes are merged several times across other branches (check the 
JIRA about that). The frightening fact is: these "quality" checks don't ensure 
quality, as we all see. So it would be better to remove that bureaucracy and 
let the community test as soon as a patch is committed to the repository. We 
should mention that over and over at conferences, in discussions, and on the 
mailing lists.


was (Author: thetaphi):
bq. Bugs get fixed by the developers quickly, but then take weeks to find their 
way into a build, yet the builds are still horrendously unstable, so whats the 
point of all the delay?

I think this should again be brought up on the Hotspot mailing list. I already 
did this on the core-dev list, but the people there are different and they 
don't have the problems. Non-Hotspot fixes get it into the EA builds quite fast.

The problem are Hotspot fixes. As noted in my article about FOSDEM (which was 
titled "How free is OpenJDK in German"; see 
https://jaxenter.com/java-9-steals-spotlight-open-jdk-project-takes-back-123721.html),
 the problems in Hotspot are more complicated. Also other non-Oracle 
developers, e.g. from SAP, were complaining about the processes inside Hotspot 
and how patches get accepted. There is also a separate test suite that is not 
public! Before this one not passes, it does not get into EA builds and so on. 
To me it looks like there is a lot of additional bureaucratic "quality" control 
behind the scenes that delay inclusion into the EA builds. It looks like the 
stuff is merged several times to other branches (check out JIRA about that). 
The frightening fact is: these "quality" checks don't ensure quality, as we all 
see. So it would be better to remove that bureaucracy and let the community 
test as soon as the patch is committed to repository. We should mention that 
over an over on conferences, discussions, and the mailing lists.

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far i see these failing with french and portuguese. It may be a hotspot 
> issue, as these tests stem more than 10,000 words.






[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203239#comment-15203239
 ] 

Uwe Schindler commented on LUCENE-7114:
---

bq. Bugs get fixed by the developers quickly, but then take weeks to find their 
way into a build, yet the builds are still horrendously unstable, so whats the 
point of all the delay?

I think this should again be brought up on the Hotspot mailing list. I already 
did this on the core-dev list, but the people there are different and they 
don't have these problems. Non-Hotspot fixes get into the EA builds quite fast.

The problem is the Hotspot fixes. As noted in my article about FOSDEM (which 
was titled "How free is OpenJDK" in the German version; see 
https://jaxenter.com/java-9-steals-spotlight-open-jdk-project-takes-back-123721.html),
 the problems in Hotspot are more complicated. Other non-Oracle developers, 
e.g. from SAP, were also complaining about the processes inside Hotspot and 
how patches get accepted. There is also a separate test suite that is not 
public! Until a fix passes that suite, it does not get into the EA builds, and 
so on. To me it looks like there is a lot of additional bureaucratic "quality" 
control behind the scenes that delays inclusion in the EA builds. It also 
looks like changes are merged several times across other branches (check the 
JIRA about that). The frightening fact is: these "quality" checks don't ensure 
quality, as we all see. So it would be better to remove that bureaucracy and 
let the community test as soon as a patch is committed to the repository. We 
should mention that over and over at conferences, in discussions, and on the 
mailing lists.

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far I see these failing with French and Portuguese. It may be a HotSpot 
> issue, as these tests stem more than 10,000 words.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-03-20 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8208:
---
Attachment: SOLR-8208.diff

Hi Mikhail,

I updated the patch against the latest code in trunk. All tests pass now.

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time join specific, so let's allow 
> specifying any query and parameters for it; let's call it a sub-query. But it 
> might be problematic to escape sub-query parameters, including local ones, 
> e.g. what if the sub-query needs to specify its own doctransformer in =\[..\] ?
> I suppose we can allow specifying a sub-query parameter prefix:
> {code}
> ..=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..={!term f=child_id 
> v=$subq1.row.id}=3=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for the sub-query: {{subq1.q}} 
> becomes {{q}} for the sub-query, {{subq1.rows}} becomes {{rows}}
> * {{fromIndex=othercore}} is an optional param that allows running the 
> sub-query on another core, like query-time join does
> * the trickiest one is referencing a document field from sub-query 
> parameters; here I propose to use the local param {{v}} and param dereferencing 
> {{v=$param}}, so every document field implicitly introduces a parameter for the 
> sub-query $\{paramPrefix\}row.$\{fieldName\}; thus the above sub-query is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it will be quite slow, but it handles only the search result page, 
> not the entire result set. 
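For illustration, the proposed syntax might look like this as a full request. This is a hypothetical sketch following the proposal above, not any shipped Solr API; the collection names, field names, and values are invented:

```shell
# Hypothetical request per the proposal: fetch parent docs, and for each row
# run a sub-query whose parameters are shifted from the subq1. prefix
# (subq1.q -> q, subq1.rows -> rows); $subq1.row.id refers to the parent's id.
curl 'http://localhost:8983/solr/parents/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fl=id,[subquery paramPrefix=subq1. fromIndex=othercore],score' \
  --data-urlencode 'subq1.q={!term f=child_id v=$subq1.row.id}' \
  --data-urlencode 'subq1.rows=3' \
  --data-urlencode 'subq1.fl=price'
```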






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203224#comment-15203224
 ] 

Uwe Schindler commented on LUCENE-7122:
---

Maybe a common base class?
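For what it's worth, a minimal sketch of that shape (all names hypothetical, not Lucene's actual classes): a shared base owns the byte storage, and subclasses differ only in how they locate a value's offset and length.

```java
import java.util.Arrays;

// Hypothetical sketch, not Lucene code: a common base class, with a
// fixed-width subclass that needs no per-value length int. A
// variable-width subclass would keep an int[] of offsets instead.
abstract class BaseBytesArray {
    protected byte[] bytes = new byte[16];
    protected int count;

    // Grow the backing array so that `end` bytes fit.
    protected void grow(int end) {
        if (end > bytes.length) {
            bytes = Arrays.copyOf(bytes, Math.max(end, bytes.length * 2));
        }
    }

    abstract int offset(int index);   // where value `index` starts
    abstract int length(int index);   // how long it is

    byte[] get(int index) {
        int off = offset(index);
        return Arrays.copyOfRange(bytes, off, off + length(index));
    }
}

class FixedWidthBytesArray extends BaseBytesArray {
    private final int width;

    FixedWidthBytesArray(int width) { this.width = width; }

    void append(byte[] value) {
        if (value.length != width) throw new IllegalArgumentException("wrong width");
        grow((count + 1) * width);
        System.arraycopy(value, 0, bytes, count * width, width);
        count++;
    }

    // Offsets are pure arithmetic; no per-value bookkeeping on heap.
    @Override int offset(int index) { return index * width; }
    @Override int length(int index) { return width; }
}

public class Demo {
    public static void main(String[] args) {
        FixedWidthBytesArray arr = new FixedWidthBytesArray(2);
        arr.append(new byte[] {1, 2});
        arr.append(new byte[] {3, 4});
        System.out.println(arr.get(1)[0]); // 3
    }
}
```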

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203198#comment-15203198
 ] 

Michael McCandless commented on LUCENE-7122:


bq. I'd say do create a separate class...

This (forking {{BytesRefArray}}) has its costs too, and I thought keeping a 
single class and having it handle its storage more efficiently was the lesser 
evil.

But I'll explore the forking option ...
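The savings are easy to put rough numbers on; a back-of-envelope sketch, where the point count is an assumption for illustration only:

```java
// Back-of-envelope: heap held by the per-value length ints that a
// fixed-width layout would make unnecessary. The point count is a
// hypothetical figure, not from the issue.
public class LengthOverhead {
    public static void main(String[] args) {
        long points = 100_000_000L;  // assumed number of indexed points
        long perValueLengthInt = 4;  // bytes: one int per value today
        long savedMb = points * perValueLengthInt / (1024 * 1024);
        System.out.println(savedMb + " MB saved"); // 381 MB saved
    }
}
```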

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.






[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203196#comment-15203196
 ] 

Uwe Schindler commented on LUCENE-7114:
---

bq. Maybe a good idea for the openjdk project, stop adding features for a bit 
and get things stabilized so the jdk works at all. We can't test anymore!

I agree! Especially compact strings: that is the most useless and risky patch 
of them all. When I see it, I feel like I'm back in the ASCII days of the 1980s.

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far I see these failing with French and Portuguese. It may be a HotSpot 
> issue, as these tests stem more than 10,000 words.






[jira] [Commented] (SOLR-8876) Morphlines tests fail with Java 9 (Jigsaw)

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203191#comment-15203191
 ] 

ASF subversion and git services commented on SOLR-8876:
---

Commit 19b4168b3f6ef7c6614ece040948cab9a05be32b in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=19b4168 ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Morphlines tests fail with Java 9 (Jigsaw)
> --
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce
>Reporter: Uwe Schindler
>
> They fail with: "No command builder registered for name".
> I added an assume through SOLR-8874.






[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203190#comment-15203190
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 19b4168b3f6ef7c6614ece040948cab9a05be32b in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=19b4168 ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error <<< 
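The failure above boils down to deep reflection across module boundaries. A minimal standalone probe of the same mechanism (hypothetical class, not Lucene or RamUsageEstimator code), using trySetAccessible so it reports the outcome instead of throwing:

```java
import java.lang.reflect.Field;

// Hypothetical probe, not Lucene code: attempt deep reflection into
// java.base the way RamUsageEstimator does. On a JDK that strongly
// encapsulates its internals, trySetAccessible (Java 9+) returns false,
// where plain setAccessible would throw InaccessibleObjectException.
public class DeepReflectionProbe {
    public static void main(String[] args) throws Exception {
        Field value = String.class.getDeclaredField("value");
        // The result depends on the JDK version and --add-opens flags,
        // so no fixed output is claimed here.
        System.out.println("accessible=" + value.trySetAccessible());
    }
}
```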

[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203192#comment-15203192
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 2e06790bdcf0b26fcc8ecf518284432153dd6a7c in lucene-solr's branch 
refs/heads/branch_6_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e06790 ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error <<< 

[jira] [Commented] (SOLR-8876) Morphlines tests fail with Java 9 (Jigsaw)

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203193#comment-15203193
 ] 

ASF subversion and git services commented on SOLR-8876:
---

Commit 2e06790bdcf0b26fcc8ecf518284432153dd6a7c in lucene-solr's branch 
refs/heads/branch_6_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e06790 ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Morphlines tests fail with Java 9 (Jigsaw)
> --
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce
>Reporter: Uwe Schindler
>
> They fail with: "No command builder registered for name".
> I added an assume through SOLR-8874.






[jira] [Commented] (SOLR-8876) Morphlines tests fail with Java 9 (Jigsaw)

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203189#comment-15203189
 ] 

ASF subversion and git services commented on SOLR-8876:
---

Commit 91424ae9633b2f382799691693dd4ce8ed216cb8 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=91424ae ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Morphlines tests fail with Java 9 (Jigsaw)
> --
>
> Key: SOLR-8876
> URL: https://issues.apache.org/jira/browse/SOLR-8876
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - MapReduce
>Reporter: Uwe Schindler
>
> They fail with: "No command builder registered for name".
> I added an assume through SOLR-8874.






[jira] [Commented] (SOLR-8874) Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests

2016-03-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203188#comment-15203188
 ] 

ASF subversion and git services commented on SOLR-8874:
---

Commit 91424ae9633b2f382799691693dd4ce8ed216cb8 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=91424ae ]

SOLR-8874, SOLR-8876: Disable more Hadoop tests with Java 9


> Add fixes and workaround for Java 9 Jigsaw (Module System) to Solr tests
> 
>
> Key: SOLR-8874
> URL: https://issues.apache.org/jira/browse/SOLR-8874
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master, 6.0, 6.1
>
> Attachments: SOLR-8874.patch, SOLR-8874.patch, SOLR-8874.patch, 
> SOLR-8874.patch
>
>
> We now have one more week to prepare our build for Java 9 Jigsaw. The next 
> Java 9 EA build will now contain the new Java 9 module system. From that time 
> on, it is no longer possible to test Java 9 unless we fix remaining bugs. 
> Currently Solr does not pass at all, because almost every test fails because 
> the RAMUsageEstimator tries to look into objects in static field where the 
> internals were hidden by Java 9:
> {noformat}
>[junit4] ERROR   0.00s | SolrRequestParserTest (suite) <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: Unable to 
> access 'private final sun.nio.fs.WindowsFileSystem sun.nio.fs
> .WindowsPath.fs' to estimate memory usage
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([C6C2FAD07A66283B]:0)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:602)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.j
> ava:127)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>[junit4]>at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>[junit4]>at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>[junit4]>at 
> java.lang.Thread.run(java.base@9-ea/Thread.java:804)
>[junit4]> Caused by: java.lang.reflect.InaccessibleObjectException: 
> Unable to make member of class sun.nio.fs.WindowsPath access
> ible:  module java.base does not export sun.nio.fs to unnamed module @436813f3
>[junit4]>at 
> sun.reflect.Reflection.throwInaccessibleObjectException(java.base@9-ea/Reflection.java:420)
>[junit4]>at 
> java.lang.reflect.AccessibleObject.checkCanSetAccessible(java.base@9-ea/AccessibleObject.java:174)
>[junit4]>at 
> java.lang.reflect.Field.checkCanSetAccessible(java.base@9-ea/Field.java:170)
>[junit4]>at 
> java.lang.reflect.Field.setAccessible(java.base@9-ea/Field.java:164)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:597)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$3.run(RamUsageEstimator.java:594)
>[junit4]>at 
> java.security.AccessController.doPrivileged(java.base@9-ea/Native Method)
>[junit4]>at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:594)
>[junit4]>... 13 more
>[junit4] Completed [1/1 (1!)] in 8.46s, 12 tests, 1 error <<< 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16275 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16275/
Java: 32bit/jdk-9-jigsaw-ea+110 -client -XX:+UseParallelGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
1) Thread[id=17, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest] at 
jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
 at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
 at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=17, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([143F319AE52DC18B]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=17, 
name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest] at 
jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
 at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
 at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=17, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at 
java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([143F319AE52DC18B]:0)


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
No command builder registered for name: separateAttachments near: { # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J0/temp/solr.hadoop.MorphlineMapperTest_143F319AE52DC18B-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 28 "separateAttachments" : {} }

Stack Trace:
org.kitesdk.morphline.api.MorphlineCompilationException: No command builder 
registered for name: separateAttachments near: {
# 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J0/temp/solr.hadoop.MorphlineMapperTest_143F319AE52DC18B-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 28
"separateAttachments" : {}
}
at 
__randomizedtesting.SeedInfo.seed([143F319AE52DC18B:DE4B41968E1288FA]:0)
at 
org.kitesdk.morphline.base.AbstractCommand.buildCommand(AbstractCommand.java:281)
at 
org.kitesdk.morphline.base.AbstractCommand.buildCommandChain(AbstractCommand.java:249)
at org.kitesdk.morphline.stdlib.Pipe.<init>(Pipe.java:46)
at org.kitesdk.morphline.stdlib.PipeBuilder.build(PipeBuilder.java:40)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 17 - Still Failing

2016-03-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/17/

2 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testTargetCollectionNotAvailable

Error Message:
Timeout while trying to assert replication errors

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert replication errors
at 
__randomizedtesting.SeedInfo.seed([7FFD6E506C30D409:99693423676FB921]:0)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testTargetCollectionNotAvailable(CdcrReplicationDistributedZkTest.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16274 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16274/
Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseG1GC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest:
   1) Thread[id=19, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest:
   1) Thread[id=19, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([5B3B037167D7E1BA]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=19, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=19, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
at jdk.internal.misc.Unsafe.park(java.base@9-ea/Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@9-ea/LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1063)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@9-ea/AbstractQueuedSynchronizer.java:1356)
at java.util.concurrent.CountDownLatch.await(java.base@9-ea/CountDownLatch.java:278)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
No command builder registered for name: separateAttachments near: { # /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_5B3B037167D7E1BA-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf: 28 "separateAttachments" : {} }

Stack Trace:
org.kitesdk.morphline.api.MorphlineCompilationException: No command builder registered for name: separateAttachments near: {
# /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_5B3B037167D7E1BA-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf: 28
"separateAttachments" : {}
}
at __randomizedtesting.SeedInfo.seed([5B3B037167D7E1BA:914F737D0CE8A8CB]:0)
at org.kitesdk.morphline.base.AbstractCommand.buildCommand(AbstractCommand.java:281)
at org.kitesdk.morphline.base.AbstractCommand.buildCommandChain(AbstractCommand.java:249)
at org.kitesdk.morphline.stdlib.Pipe.<init>(Pipe.java:46)
at org.kitesdk.morphline.stdlib.PipeBuilder.build(PipeBuilder.java:40)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 19 - Still Failing!

2016-03-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/19/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 seconds
at __randomizedtesting.SeedInfo.seed([4FB7C79581C7ECEA:C7E3F84F2F3B8112]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
