[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14304 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14304/
Java: 64bit/jdk1.9.0-ea-b78 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=10622, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=10626, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   3) Thread[id=10623, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   4) Thread[id=10625, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   5) Thread[id=10624, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 
   1) Thread[id=10622, name=apacheds, state=WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=10626, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGR

[jira] [Commented] (LUCENE-6815) Should DisjunctionScorer advance more lazily?

2015-09-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14909065#comment-14909065
 ] 

David Smiley commented on LUCENE-6815:
--

Cool idea!

> Should DisjunctionScorer advance more lazily?
> -
>
> Key: LUCENE-6815
> URL: https://issues.apache.org/jira/browse/LUCENE-6815
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
>
> Today if you call DisjunctionScorer.advance(X), it will try to advance all 
> sub scorers to X. However, if DisjunctionScorer is being intersected with 
> another scorer (which is almost always the case as we use BooleanScorer for 
> top-level disjunctions), we could stop as soon as we find one matching sub 
> scorer, and only advance the remaining sub scorers when freq() or score() is 
> called. 
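
To make the intent concrete, here is a small standalone sketch of the lazy-advance idea. It is not the actual Lucene Scorer/DisjunctionScorer API; the interface and class names are illustrative only, and freq() would be handled the same way as score().

import java.util.List;

class LazyDisjunctionSketch {
  interface SubIterator {
    int docID();              // current doc, or Integer.MAX_VALUE when exhausted
    int advance(int target);  // move to the first doc >= target and return it
    float score();
  }

  private final List<SubIterator> subs;
  private int doc = -1;

  LazyDisjunctionSketch(List<SubIterator> subs) {
    this.subs = subs;
  }

  /** Lazy advance: stop as soon as one sub matches target; leave the others behind. */
  int advance(int target) {
    int min = Integer.MAX_VALUE;
    for (SubIterator s : subs) {
      int d = (s.docID() < target) ? s.advance(target) : s.docID();
      if (d == target) {
        return doc = target;  // one matching sub is enough to report a hit at 'target'
      }
      min = Math.min(min, d);
    }
    return doc = min;         // no sub matched target exactly; report the next candidate
  }

  /** Only when the score is actually needed are the lagging subs caught up to 'doc'. */
  float score() {
    float total = 0f;
    for (SubIterator s : subs) {
      if (s.docID() < doc) {
        s.advance(doc);       // deferred work from advance()
      }
      if (s.docID() == doc) {
        total += s.score();
      }
    }
    return total;
  }
}

The early exit in advance() is what saves work when the disjunction is intersected with another clause; the catch-up loop in score() pays that cost only for documents that actually survive the intersection.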



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 286 - Still Failing

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/286/

No tests ran.

Build Log:
[...truncated 52463 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (10.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 28.2 MB in 0.04 sec (717.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 65.2 MB in 0.09 sec (688.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 75.6 MB in 0.11 sec (705.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5904 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5904 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 209 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.3.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1416, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1361, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1399, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 728, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
 line 1354, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:527:
 exec returned: 1

Total time: 38 minutes 7 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14014 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14014/
Java: 64bit/jdk1.9.0-ea-b78 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=769, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=773, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   3) Thread[id=771, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   4) Thread[id=772, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)
   5) Thread[id=770, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 
   1) Thread[id=769, name=apacheds, state=WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=773, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerb

[jira] [Updated] (SOLR-8086) Add support for SELECT DISTINCT queries to the SQL interface

2015-09-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8086:
-
Description: 
This ticket will add the SELECT DISTINCT query to the SQL interface.

There will be a Map/Reduce implementation using the UniqueStream and a JSON 
Facet API implementation using the FacetStream. SQL clients will be able to 
switch between Map/Reduce and JSON Facet API using the *aggregationMode* 
[map_reduce or facet] http param introduced in SOLR-7903.


  was:
This ticket will add the SELECT DISTINCT query to the SQL interface.

There will be a Map/Reduce implementation using the UniqueStream and a JSON 
Facet API implementation using the FacetStream. A flag will be added to the 
FacetStream to only add the distinct terms to the tuples and ignore the facet 
counts.

SQL clients will be able to switch between Map/Reduce and JSON Facet API using 
the *aggregationMode* [map_reduce or facet] http param introduced in SOLR-7903.



> Add support for SELECT DISTINCT queries to the SQL interface
> 
>
> Key: SOLR-8086
> URL: https://issues.apache.org/jira/browse/SOLR-8086
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> This ticket will add the SELECT DISTINCT query to the SQL interface.
> There will be a Map/Reduce implementation using the UniqueStream and a JSON 
> Facet API implementation using the FacetStream. SQL clients will be able to 
> switch between Map/Reduce and JSON Facet API using the *aggregationMode* 
> [map_reduce or facet] http param introduced in SOLR-7903.
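
For illustration, a minimal client-side sketch of issuing such a statement over HTTP, assuming the /sql handler and the stmt/aggregationMode parameters introduced for the SQL interface (SOLR-7903). The host, collection, and field names are hypothetical.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SelectDistinctExample {
  public static void main(String[] args) throws Exception {
    String stmt = URLEncoder.encode(
        "SELECT DISTINCT fieldA FROM collection1 ORDER BY fieldA ASC",
        StandardCharsets.UTF_8.name());
    // aggregationMode=map_reduce would route the same statement through the UniqueStream;
    // aggregationMode=facet uses the JSON Facet API implementation instead.
    String url = "http://localhost:8983/solr/collection1/sql"
        + "?stmt=" + stmt + "&aggregationMode=facet";

    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // distinct values come back as a tuple stream
      }
    }
  }
}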



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8098) Immutable ConfigSets can still change in ZK

2015-09-25 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8098:


 Summary: Immutable ConfigSets can still change in ZK
 Key: SOLR-8098
 URL: https://issues.apache.org/jira/browse/SOLR-8098
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3, Trunk
Reporter: Gregory Chanan


I don't think this is necessarily a bug, just writing it down here and tracking 
it so we have a reference.

Came up with an interesting case when I was testing Immutable ConfigSets.  I 
had defined a managed-schema example as Immutable and was checking that all the 
files defined as part of the ConfigSet were present in ZooKeeper.  
Occasionally, schema.xml was missing, which was surprising to me because I 
wasn't able to modify the schema (it was Immutable).  It turns out that managed 
schema renames schema.xml when a collection is created using the schema, even 
though the schema itself doesn't need to be modified.

In theory we could build some smarts into ManagedIndexSchema so it knows it's 
immutable and can avoid doing the rename, but it doesn't seem like we can 
handle this in general: any user-written schema can do all sorts of schema 
modifications under the covers and decide not to check immutability.
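
For what it's worth, a standalone sketch of the "skip the rename when immutable" idea, using a local directory and hypothetical names rather than the real ZooKeeper-backed ManagedIndexSchemaFactory code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SchemaUpgradeSketch {
  static void maybeUpgradeToManagedSchema(Path configSetDir, boolean configSetIsImmutable)
      throws IOException {
    if (configSetIsImmutable) {
      // Skip the rename entirely: an immutable ConfigSet should not change,
      // even as a side effect of creating a collection that uses it.
      return;
    }
    Path schemaXml = configSetDir.resolve("schema.xml");
    if (Files.exists(schemaXml)) {
      // Existing behavior (roughly): schema.xml is renamed once the schema becomes managed.
      Files.move(schemaXml, configSetDir.resolve("schema.xml.bak"),
          StandardCopyOption.REPLACE_EXISTING);
    }
  }
}

As noted above, though, this only covers the built-in managed schema; a custom schema implementation could still mutate the ConfigSet without ever consulting the immutable flag.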



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2015-09-25 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-8097:
--

 Summary: Implement a builder pattern for constructing a Solrj 
client
 Key: SOLR-8097
 URL: https://issues.apache.org/jira/browse/SOLR-8097
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: Trunk
Reporter: Hrishikesh Gadre
Priority: Minor


Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors, 
as follows:

public CloudSolrClient(String zkHost)
public CloudSolrClient(String zkHost, HttpClient httpClient)
public CloudSolrClient(Collection zkHosts, String chroot)
public CloudSolrClient(Collection zkHosts, String chroot, HttpClient httpClient)
public CloudSolrClient(String zkHost, boolean updatesToLeaders)
public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient httpClient)

This becomes problematic whenever an additional parameter is introduced, since 
each new parameter requires yet more constructors. Instead it would be helpful 
to provide a SolrClient builder that supplies default values and lets callers 
override specific parameters.
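
As a rough illustration (the builder name and the with...() methods here are made up, not an existing SolrJ API), something along these lines would let new options be added without new constructor permutations:

import java.util.ArrayList;
import java.util.Collection;
import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class CloudSolrClientBuilder {
  private final Collection<String> zkHosts = new ArrayList<>();
  private String chroot = null;          // default: no chroot
  private HttpClient httpClient = null;  // default: let SolrJ create its own

  public CloudSolrClientBuilder withZkHost(String zkHost) {
    zkHosts.add(zkHost);
    return this;
  }

  public CloudSolrClientBuilder withChroot(String chroot) {
    this.chroot = chroot;
    return this;
  }

  public CloudSolrClientBuilder withHttpClient(HttpClient httpClient) {
    this.httpClient = httpClient;
    return this;
  }

  public CloudSolrClient build() {
    // Delegates to one of the constructors listed above; a future option only
    // needs another with...() method here rather than another constructor.
    return new CloudSolrClient(zkHosts, chroot, httpClient);
  }
}

A caller would then write something like new CloudSolrClientBuilder().withZkHost("zk1:2181").build(), overriding only the parameters it cares about.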



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 804 - Still Failing

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/804/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test

Error Message:
Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all live and active
at 
__randomizedtesting.SeedInfo.seed([2F57DB516E201E99:A703E48BC0DC7361]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.testBasics(SharedFSAutoReplicaFailoverTest.java:238)
at 
org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test(SharedFSAutoReplicaFailoverTest.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRun

[JENKINS] Lucene-Solr-5.3-Linux (32bit/jdk1.7.0_80) - Build # 252 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/252/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([A6F89DC09DFF476A:4FA226F80366D7C2]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:rows=20&version=2.2&q=id:2&start=0&qt=standard
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10882 lines...]
   [junit4] Suite: org.ap

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14301 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14301/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([C101A9817F500A8A:6645112512EB1933]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplication(CdcrReplicationHandlerTest.java:86)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State

Re: Cross-node joins

2015-09-25 Thread Erick Erickson
yeah, the streaming stuff is pretty bleeding-edge but pretty cool.

Your understanding is accurate; the pathological case is the reason
it hasn't been implemented in core Solr. I suppose you could do exactly
what you outlined, just with two queries (a rough SolrJ sketch of that
two-query approach is below).

As for SOLR-4905, why would this affect sharding for your main collection?
The groups collection is just a separate collection; I don't see why you
think it would affect sharding of the main collection. That probably just
means I don't understand your problem...

Best,
Erick
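
For concreteness, a rough SolrJ sketch of the two-query approach, using the field and collection names from Scott's example; the ZooKeeper address is a placeholder, and error handling, paging, and the size of the resulting term list are all glossed over:

import java.util.List;
import java.util.stream.Collectors;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TwoStepJoin {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181/solr")) {
      // Step 1: run the "join" side as a facet-only query over the groups collection
      // to collect the matching GroupId terms.
      SolrQuery groupQuery = new SolrQuery("GroupPermission:admin")
          .setRows(0)
          .setFacet(true)
          .addFacetField("GroupId")
          .setFacetLimit(-1);   // pathological case: this term list can explode
      client.setDefaultCollection("groups");
      QueryResponse groups = client.query(groupQuery);
      List<String> groupIds = groups.getFacetField("GroupId").getValues().stream()
          .filter(c -> c.getCount() > 0)
          .map(FacetField.Count::getName)
          .collect(Collectors.toList());

      // Step 2: normal query against the users collection, filtered by the term list.
      SolrQuery userQuery = new SolrQuery("*:*")
          .addFilterQuery("{!terms f=UserGroupIds}" + String.join(",", groupIds));
      client.setDefaultCollection("users");
      QueryResponse users = client.query(userQuery);
      System.out.println("matched users: " + users.getResults().getNumFound());
    }
  }
}

Whether this beats a true distributed join depends entirely on how many distinct GroupId values match, which is exactly the pathological case mentioned above.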

On Fri, Sep 25, 2015 at 12:42 PM, Scott Blum  wrote:

> Yep, we looked at that, but unfortunately the frequency of group updates
> and number of users would make it infeasible to reindex all group members any
> time a group changes.
>
> On Fri, Sep 25, 2015 at 3:36 PM, Alexandre Rafalovitch  > wrote:
>
>> How often do the group characteristics change? Because you might be
>> better off flattening this at the index time. As in.
>> Users->characteristics, rather than Users->Groups->characteristics.
>> And update the users when the group characteristics change. And if
>> characteristics are non-stored but only indexed or - better-yet? -
>> docvalues, you will not pay much for it with space either.
>>
>> Regards,
>>Alex.
>> 
>> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
>> http://www.solr-start.com/
>>
>>
>> On 25 September 2015 at 15:30, Scott Blum  wrote:
>> > Hi Erick,
>> >
>> > Thanks for the thoughtful reply!
>> >
>> > The context is essentially that I have Groups and Users, and a User can
>> > belong to multiple groups.  So if I need to do a query like "Find all
>> Users
>> > who are members of a Group, for which the Group has certain
>> > characteristics", then I need to do something like {!join from=GroupId
>> > to=UserGroupIds}GroupPermission:admin.  We've already sharded our corpus
>> > such that any given user and that user's associate data have to be on
>> the
>> > same core, but we can't shard the groups that way, since a user could
>> belong
>> > to multiple groups.
>> >
>> > Thanks for the pointer to SOLR-4905, that would probably work for us,
>> as we
>> > could put all the group docs into a separate collection, replicate it
>> > everywhere, and do local cross-collection joins.  My main worry there is
>> > that having to shard our data in such a way to support this one case
>> would
>> > be a lot of extra operational work over time, and lock us into a pretty
>> > proscriptive data architecture just to solve this one issue.
>> >
>> > SOLR-7090 is closer to what I was hoping for.  Perhaps I could do
>> something
>> > to help that effort.  I didn't realize that existed, I've been looking
>> at
>> > LUCENE-3759 and wondering how to make that go.
>> >
>> >> In essence, This Is A Hard Problem in the Solr world to
>> >> make performant. You'd have to get all of the data from the "from"
>> >> core across the wire to the "to" node, potentially this would
>> >> be the entire corpus.
>> >
>> >
>> > Hopefully it wouldn't be that bad?  My understanding of how queries are
>> > really processed is pretty naive, but I'm imagining that if you have a
>> top
>> > level query containing a collection-wide join, you'd make one
>> distributed
>> > request (to all shards) to resolve the  join into a term query, then a
>> > second one to process the top level request, sending the term list out
>> of
>> > each shard.  I get that there's a pathological case there where the
>> number
>> > of terms explodes, but in theory this wouldn't be too different from
>> > something you do from a client:
>> >
>> > 1) Run the join query as a facet query.  Instead of retrieving any docs,
>> > just facet the "from" field to get a term list.
>> > 2) Run a normal query with the resulting term list.
>> >
>> >>
>> >> You might look at some of the Streaming Aggregation stuff, that
>> >> has some capabilities here too.
>> >
>> >
>> > That's on my radar too.   I did start reading about it, but it looked
>> like
>> > joins were still Work-In-Progress (SOLR-7584), and at any rate the
>> streaming
>> > stuff seems so bleeding edge to me (the only doc I've been able to find
>> on
>> > it is from heliosearch) that I was daunted.
>> >
>> > Thanks!
>> > Scott
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 30 - Still Failing

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/30/

No tests ran.

Build Log:
[...truncated 53041 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.03 sec (5.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.1-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (677.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.1.tgz...
   [smoker] 65.7 MB in 0.09 sec (740.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.1.zip...
   [smoker] 75.9 MB in 0.11 sec (699.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.3.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 80 - Still Failing!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/80/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([ACB8A307C3BFB052:91600D2BFB51EE22]:0)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestAuthenticationFramework

Error Message:
98 threads leaked from SUITE scope at org.apache.solr.cloud.TestAuthenticationFramework:
   1) Thread[id=1996, name=Scheduler-604061645, state=TIMED_WAITING, group=TGRP-TestAuthenticationFramework]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)

[jira] [Created] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-09-25 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-6817:
---

 Summary: ComplexPhraseQueryParser.ComplexPhraseQuery does not 
display slop in toString()
 Key: LUCENE-6817
 URL: https://issues.apache.org/jira/browse/LUCENE-6817
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Priority: Trivial
 Fix For: Trunk


This one is quite simple (I think): ComplexPhraseQuery doesn't display the 
slop factor, which can be confusing when, for example, the result of parsing 
is dumped to logs.

I'm heading for a weekend out of office in a few hours... so in the spirit of 
not committing and running away ( :) ), if anybody wishes to tackle this, go 
ahead.
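
For anyone picking this up, a standalone sketch of the intended output (not the actual ComplexPhraseQuery code); Lucene's convention elsewhere is to render slop as a trailing "~N":

class ComplexPhraseToStringSketch {
  // Illustrative only: append the slop the way PhraseQuery-style toString() output does.
  static String render(String phraseText, int slopFactor) {
    StringBuilder sb = new StringBuilder("\"").append(phraseText).append("\"");
    if (slopFactor > 0) {
      sb.append('~').append(slopFactor);   // e.g. "jakarta apache"~3
    }
    return sb.toString();
  }
}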




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 78 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/78/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 10815 lines...]
2015-09-25 20:24:12
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode):

"pumper-watchdog" #421 daemon prio=5 os_prio=64 tid=0x02b5e000 
nid=0x1a8 waiting on condition [0x80ffa71fe000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
com.carrotsearch.ant.tasks.junit4.LocalSlaveStreamHandler$3.run(LocalSlaveStreamHandler.java:126)

"pumper-events" #420 daemon prio=5 os_prio=64 tid=0x01871800 nid=0x1a7 
waiting on condition [0x80ffa67fc000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
com.carrotsearch.ant.tasks.junit4.TailInputStream.read(TailInputStream.java:61)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
- locked <0xcc991638> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonReader.fillBuffer(JsonReader.java:1300)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonReader.nextQuotedValue(JsonReader.java:1030)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonReader.nextString(JsonReader.java:827)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.TypeAdapters$25.read(TypeAdapters.java:646)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.TypeAdapters$25.read(TypeAdapters.java:642)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.Streams.parse(Streams.java:44)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.TreeTypeAdapter.read(TreeTypeAdapter.java:54)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:103)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:196)
at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.fromJson(Gson.java:810)
at 
com.carrotsearch.ant.tasks.junit4.events.Deserializer.deserialize(Deserializer.java:31)
at 
com.carrotsearch.ant.tasks.junit4.LocalSlaveStreamHandler.pumpEvents(LocalSlaveStreamHandler.java:210)
at 
com.carrotsearch.ant.tasks.junit4.LocalSlaveStreamHandler$2.run(LocalSlaveStreamHandler.java:112)

"pumper-stderr" #419 daemon prio=5 os_prio=64 tid=0x02209800 nid=0x1a6 
runnable [0x80ffa85fe000]
   java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:233)
at 
java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:648)
at org.apache.tools.ant.taskdefs.StreamPumper.run(StreamPumper.java:132)
at java.lang.Thread.run(Thread.java:745)

"pumper-stdout" #418 daemon prio=5 os_prio=64 tid=0x01d60800 nid=0x1a5 
runnable [0x80ffa75fe000]
   java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:255)
at 
java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:657)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
- locked <0xcc8d2d30> (a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.tools.ant.taskdefs.StreamPumper.run(StreamPumper.java:132)
at java.lang.Thread.run(Thread.java:745)

"pool-30-thread-2" #412 prio=5 os_prio=64 tid=0x02b1b800 nid=0x1a0 in 
Object.wait() [0x80ffa81fe000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.lang.UNIXProcess.waitFor(UNIXProcess.java:396)
- locked <0xcc8d2cf0> (a java.lang.UNIXProcess)
at org.apache.tools.ant.taskdefs.Execute.waitFor(Execute.java:586)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:516)
at 
com.carrotsearch.ant.tasks.junit4.JUnit4.forkProcess(JUnit4.java:1628)
at 
com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1448)
at com

Re: Cross-node joins

2015-09-25 Thread Scott Blum
Yep, we looked at that, but unfortunately the frequency of group updates
and number of users would make it infeasible to reindex all group members any
time a group changes.

On Fri, Sep 25, 2015 at 3:36 PM, Alexandre Rafalovitch 
wrote:

> How often do the group characteristics change? Because you might be
> better off flattening this at the index time. As in.
> Users->characteristics, rather than Users->Groups->characteristics.
> And update the users when the group characteristics change. And if
> characteristics are non-stored but only indexed or - better-yet? -
> docvalues, you will not pay much for it with space either.
>
> Regards,
>Alex.
> 
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 25 September 2015 at 15:30, Scott Blum  wrote:
> > Hi Erick,
> >
> > Thanks for the thoughtful reply!
> >
> > The context is essentially that I have Groups and Users, and a User can
> > belong to multiple groups.  So if I need to do a query like "Find all
> Users
> > who are members of a Group, for which the Group has certain
> > characteristics", then I need to do something like {!join from=GroupId
> > to=UserGroupIds}GroupPermission:admin.  We've already sharded our corpus
> > such that any given user and that user's associate data have to be on the
> > same core, but we can't shard the groups that way, since a user could
> belong
> > to multiple groups.
> >
> > Thanks for the pointer to SOLR-4905, that would probably work for us, as
> we
> > could put all the group docs into a separate collection, replicate it
> > everywhere, and do local cross-collection joins.  My main worry there is
> > that having to shard our data in such a way to support this one case
> would
> > be a lot of extra operational work over time, and lock us into a pretty
> > proscriptive data architecture just to solve this one issue.
> >
> > SOLR-7090 is closer to what I was hoping for.  Perhaps I could do
> something
> > to help that effort.  I didn't realize that existed, I've been looking at
> > LUCENE-3759 and wondering how to make that go.
> >
> >> In essence, This Is A Hard Problem in the Solr world to
> >> make performant. You'd have to get all of the data from the "from"
> >> core across the wire to the "to" node, potentially this would
> >> be the entire corpus.
> >
> >
> > Hopefully it wouldn't be that bad?  My understanding of how queries are
> > really processed is pretty naive, but I'm imagining that if you have a
> top
> > level query containing a collection-wide join, you'd make one distributed
> > request (to all shards) to resolve the  join into a term query, then a
> > second one to process the top level request, sending the term list out of
> > each shard.  I get that there's a pathological case there where the
> number
> > of terms explodes, but in theory this wouldn't be too different from
> > something you do from a client:
> >
> > 1) Run the join query as a facet query.  Instead of retrieving any docs,
> > just facet the "from" field to get a term list.
> > 2) Run a normal query with the resulting term list.
> >
> >>
> >> You might look at some of the Streaming Aggregation stuff, that
> >> has some capabilities here too.
> >
> >
> > That's on my radar too.   I did start reading about it, but it looked
> like
> > joins were still Work-In-Progress (SOLR-7584), and at any rate the
> streaming
> > stuff seems so bleeding edge to me (the only doc I've been able to find
> on
> > it is from heliosearch) that I was daunted.
> >
> > Thanks!
> > Scott
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-25 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908558#comment-14908558
 ] 

Sean Xie commented on SOLR-7883:


The error is on the mlt handler; MLT as a search component is working fine, but when 
using the MLT handler:

/mlt?

adding facet=on seems to throw the exception.
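
For reference, a minimal SolrJ sketch of the two requests from the report (the core URL 
and an /mlt handler registered in solrconfig.xml are assumptions about the local setup):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class MltFacetRepro {
  public static void main(String[] args) throws Exception {
    // Assumed core URL; adjust to the local install.
    try (SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrQuery q = new SolrQuery("id:item1");
      q.setRequestHandler("/mlt");     // the MoreLikeThis request handler
      q.set("mlt.fl", "content");
      solr.query(q);                   // works

      q.setFacet(true);                // reported to trigger the NPE shown below
      solr.query(q);
    }
  }
}
{code}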

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cross-node joins

2015-09-25 Thread Alexandre Rafalovitch
How often do the group characteristics change? Because you might be
better off flattening this at index time, as in
Users->characteristics, rather than Users->Groups->characteristics.
And update the users when the group characteristics change. And if
characteristics are non-stored but only indexed or - better-yet? -
docvalues, you will not pay much for it with space either.
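
(A minimal SolrJ sketch of that flattening, with made-up field names and core URL; the
idea is just that the group's characteristics are written directly onto each member's
user document and refreshed whenever the group changes:)

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class FlattenGroups {
  public static void main(String[] args) throws Exception {
    try (SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/users")) {
      SolrInputDocument user = new SolrInputDocument();
      user.addField("id", "user-42");
      // Group characteristics copied onto the user doc (indexed/docValues, not stored):
      user.addField("GroupPermission", "admin");
      solr.add(user);
      solr.commit();
    }
  }
}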

Regards,
   Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 25 September 2015 at 15:30, Scott Blum  wrote:
> Hi Erick,
>
> Thanks for the thoughtful reply!
>
> The context is essentially that I have Groups and Users, and a User can
> belong to multiple groups.  So if I need to do a query like "Find all Users
> who are members of a Group, for which the Group has certain
> characteristics", then I need to do something like {!join from=GroupId
> to=UserGroupIds}GroupPermission:admin.  We've already sharded our corpus
> such that any given user and that user's associated data have to be on the
> same core, but we can't shard the groups that way, since a user could belong
> to multiple groups.
>
> Thanks for the pointer to SOLR-4905, that would probably work for us, as we
> could put all the group docs into a separate collection, replicate it
> everywhere, and do local cross-collection joins.  My main worry there is
> that having to shard our data in such a way to support this one case would
> be a lot of extra operational work over time, and lock us into a pretty
> prescriptive data architecture just to solve this one issue.
>
> SOLR-7090 is closer to what I was hoping for.  Perhaps I could do something
> to help that effort.  I didn't realize that existed, I've been looking at
> LUCENE-3759 and wondering how to make that go.
>
>> In essence, This Is A Hard Problem in the Solr world to
>> make performant. You'd have to get all of the data from the "from"
>> core across the wire to the "to" node, potentially this would
>> be the entire corpus.
>
>
> Hopefully it wouldn't be that bad?  My understanding of how queries are
> really processed is pretty naive, but I'm imagining that if you have a top
> level query containing a collection-wide join, you'd make one distributed
> request (to all shards) to resolve the  join into a term query, then a
> second one to process the top level request, sending the term list out of
> each shard.  I get that there's a pathological case there where the number
> of terms explodes, but in theory this wouldn't be too different from
> something you do from a client:
>
> 1) Run the join query as a facet query.  Instead of retrieving any docs,
> just facet the "from" field to get a term list.
> 2) Run a normal query with the resulting term list.
>
>>
>> You might look at some of the Streaming Aggregation stuff, that
>> has some capabilities here too.
>
>
> That's on my radar too.   I did start reading about it, but it looked like
> joins were still Work-In-Progress (SOLR-7584), and at any rate the streaming
> stuff seems so bleeding edge to me (the only doc I've been able to find on
> it is from heliosearch) that I was daunted.
>
> Thanks!
> Scott
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cross-node joins

2015-09-25 Thread Scott Blum
Hi Erick,

Thanks for the thoughtful reply!

The context is essentially that I have Groups and Users, and a User can
belong to multiple groups.  So if I need to do a query like "Find all Users
who are members of a Group, for which the Group has certain
characteristics", then I need to do something like {!join from=GroupId
to=UserGroupIds}GroupPermission:admin.  We've already sharded our corpus
such that any given user and that user's associated data have to be on the
same core, but we can't shard the groups that way, since a user could
belong to multiple groups.

Thanks for the pointer to SOLR-4905
, that would probably work
for us, as we could put all the group docs into a separate collection,
replicate it everywhere, and do local cross-collection joins.  My main
worry there is that having to shard our data in such a way to support this
one case would be a lot of extra operational work over time, and lock us
into a pretty prescriptive data architecture just to solve this one issue.

SOLR-7090  is closer to
what I was hoping for.  Perhaps I could do something to help that effort.
I didn't realize that existed, I've been looking at LUCENE-3759
 and wondering how to
make that go.

In essence, This Is A Hard Problem in the Solr world to
> make performant. You'd have to get all of the data from the "from"
> core across the wire to the "to" node, potentially this would
> be the entire corpus.
>

Hopefully it wouldn't be that bad?  My understanding of how queries are
really processed is pretty naive, but I'm imagining that if you have a top
level query containing a collection-wide join, you'd make one distributed
request (to all shards) to resolve the  join into a term query, then a
second one to process the top level request, sending the term list out of
each shard.  I get that there's a pathological case there where the number
of terms explodes, but in theory this wouldn't be too different from
something you do from a client:

1) Run the join query as a facet query.  Instead of retrieving any docs,
just facet the "from" field to get a term list.
2) Run a normal query with the resulting term list.
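
(A rough SolrJ sketch of that two-step client-side approach, using the field names from
this thread and the stock {!terms} query parser; the class name and core URLs are made
up, and as noted above the term list can explode in the pathological case:)

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TwoStepJoin {
  public static void main(String[] args) throws Exception {
    try (SolrClient groups = new HttpSolrClient("http://localhost:8983/solr/groups");
         SolrClient users  = new HttpSolrClient("http://localhost:8983/solr/users")) {
      // 1) Run the "join" side as a facet-only query to collect the matching GroupIds.
      SolrQuery step1 = new SolrQuery("GroupPermission:admin");
      step1.setRows(0);
      step1.setFacet(true);
      step1.addFacetField("GroupId");
      step1.setFacetLimit(-1);
      step1.setFacetMinCount(1);
      QueryResponse rsp = groups.query(step1);

      StringBuilder groupIds = new StringBuilder();
      for (FacetField.Count c : rsp.getFacetField("GroupId").getValues()) {
        if (groupIds.length() > 0) groupIds.append(',');
        groupIds.append(c.getName());
      }

      // 2) Run the real query filtered by the resulting term list.
      SolrQuery step2 = new SolrQuery("*:*");
      step2.addFilterQuery("{!terms f=UserGroupIds}" + groupIds);
      users.query(step2);
    }
  }
}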


> You might look at some of the Streaming Aggregation stuff, that
> has some capabilities here too.
>

That's on my radar too.   I did start reading about it, but it looked like
joins were still Work-In-Progress (SOLR-7584
), and at any rate the
streaming stuff seems so bleeding edge to me (the only doc I've been able
to find on it is from heliosearch) that I was daunted.

Thanks!
Scott


[jira] [Comment Edited] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-25 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908504#comment-14908504
 ] 

Sean Xie edited comment on SOLR-7883 at 9/25/15 6:53 PM:
-

Ran into the same exception. Could you please share how to do the prepare step 
of the FacetComponent when configuring the Request Handler?


was (Author: seanxie):
Ran into the same exception. Could you please share how to do the prepare step 
of the FacetComponent when configuring the Request Handler.

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-25 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908504#comment-14908504
 ] 

Sean Xie commented on SOLR-7883:


Ran into the same exception. Could you please share how to do the prepare step 
of the FacetComponent when configuring the Request Handler.

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6168) enhance collapse QParser so that "group head" documents can be selected by more complex sort options

2015-09-25 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908446#comment-14908446
 ] 

David Boychuck commented on SOLR-6168:
--

Does it make sense to include SOLR-6345 in this work?

> enhance collapse QParser so that "group head" documents can be selected by 
> more complex sort options
> 
>
> Key: SOLR-6168
> URL: https://issues.apache.org/jira/browse/SOLR-6168
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.7.1, 4.8.1
>Reporter: Umesh Prasad
>Assignee: Joel Bernstein
> Attachments: CollapsingQParserPlugin-6168.patch.1-1stcut, 
> SOLR-6168-group-head-inconsistent-with-sort.patch, SOLR-6168.patch, 
> SOLR-6168.patch, SOLR-6168.patch
>
>
> The fundamental goal of this issue is to add additional support to the 
> CollapseQParser so that as an alternative to the existing min/max localparam 
> options, more robust sort syntax can be used to sort on multiple criteria 
> when selecting the "group head" documents used to represent each collapsed 
> group.
> Since support for arbitrary, multi-clause, sorting is almost certainly going 
> to require more RAM than the existing min/max functionality, this new 
> functionality should be in addition to the existing min/max localparam 
> implementation, not a replacement of it.
> (NOTE: early comments made in this jira may be confusing in historical 
> context due to the way this issue was originally filed as a bug report)
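
For illustration, the difference in local-param syntax might look like the following
(field names are invented, and the sort= form is the syntax this issue proposes, not
something available in the affected versions):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class CollapseSortSketch {
  static SolrQuery existingMinMax() {
    // Existing group-head selection: a single min/max numeric criterion.
    return new SolrQuery("*:*").addFilterQuery("{!collapse field=storeId max=price}");
  }

  static SolrQuery proposedSort() {
    // Proposed multi-clause selection (illustrative syntax only).
    return new SolrQuery("*:*").addFilterQuery("{!collapse field=storeId sort='inStock desc, price asc'}");
  }
}
{code}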



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cross-node joins

2015-09-25 Thread Alexandre Rafalovitch

I had some performance issues, so I decided to update core Lucene
code. Now I have 3 problems:
*) performance issues
*) hard to debug code
*) annoyed Solr developers whose use cases I did not consider

Not that I am reading random JIRAs or anything.


Seriously though, for those trying to step-through the code, it might
be worth checking out something like Chronon
http://chrononsystems.com/,
https://www.jetbrains.com/idea/help/debugging-with-chronon.html . I
played with it as part of the IntelliJ Ultimate plugin (no extra charge) and
it was quite interesting. Though when run as a plugin, it has to start
Jetty/Solr in-process, which might be harder (but not impossible) with
Solr 5. I am hoping to revisit and blog about how to do that at some
point soon.

Regards,
   Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 25 September 2015 at 13:23, Erick Erickson  wrote:
> Well, let's start by backing up a bit. What is the
> problem you're trying to solve that needs cross-node
> joins? Trying to be sure this isn't an XY problem.
>
> There is some work in this area; see:
> https://issues.apache.org/jira/browse/SOLR-7090 (not committed)
> and
> https://issues.apache.org/jira/browse/SOLR-4905
>
> Neither one really does what you want; the second one provides some
> capability, but there are restrictions.
>
> In essence, This Is A Hard Problem in the Solr world to
> make performant. You'd have to get all of the data from the "from"
> core across the wire to the "to" node, potentially this would
> be the entire corpus.
>
> You might look at some of the Streaming Aggregation stuff, that
> has some capabilities here too.
>
> Best,
> Erick
>
>
> On Fri, Sep 25, 2015 at 10:05 AM, Scott Blum  wrote:
>>
>> Hi team,
>>
>> Understanding the scalability limitations, I wanted to work on cross-node
>> joins.  I've been staring at the JoinQuery code (and tried stepping through
>> a lot of it in a debugger) but it's been rough going to understand.
>>
>> Is there anyone who might be able to help me understand what's going on,
>> or offer ideas and suggestions?  I asked on IRC but haven't had much luck
>> yet.
>>
>> Thanks!
>> Scott
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-09-25 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908365#comment-14908365
 ] 

Gregory Chanan commented on SOLR-6915:
--

Great.  I seem to recall that the latest releases weren't compatible with 
whatever MiniKDC was expecting, so we may need Hadoop MiniKDC to adopt and 
release those changes first.

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.1, Trunk
>
> Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
> tests-failures.txt
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a kerberos environment.   In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with zookeeper 
> via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).
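
For context, a tiny sketch of the kind of SASL-only ACL such a provider would hand out,
using the plain ZooKeeper API rather than Solr's provider interface (the "solr"
principal name is just an example):

{code}
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class SaslAclSketch {
  // All permissions for the SASL-authenticated "solr" principal, nothing for anyone else.
  static List<ACL> solrOnlyAcls() {
    return Collections.singletonList(new ACL(ZooDefs.Perms.ALL, new Id("sasl", "solr")));
  }
}
{code}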



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cross-node joins

2015-09-25 Thread Erick Erickson
Well, let's start by backing up a bit. What is the
problem you're trying to solve that needs cross-node
joins? Trying to be sure this isn't an XY problem.

There is some work in this area; see:
https://issues.apache.org/jira/browse/SOLR-7090 (not committed)
and
https://issues.apache.org/jira/browse/SOLR-4905

Neither one really does what you want; the second one provides some
capability, but there are restrictions.

In essence, This Is A Hard Problem in the Solr world to
make performant. You'd have to get all of the data from the "from"
core across the wire to the "to" node, potentially this would
be the entire corpus.

You might look at some of the Streaming Aggregation stuff, that
has some capabilities here too.

Best,
Erick


On Fri, Sep 25, 2015 at 10:05 AM, Scott Blum  wrote:

> Hi team,
>
> Understanding the scalability limitations, I wanted to work on cross-node
> joins.  I've been staring at the JoinQuery code (and tried stepping through
> a lot of it in a debugger) but it's been rough going to understand.
>
> Is there anyone who might be able to help me understand what's going on,
> or offer ideas and suggestions?  I asked on IRC but haven't had much luck
> yet.
>
> Thanks!
> Scott
>
>


Cross-node joins

2015-09-25 Thread Scott Blum
Hi team,

Understanding the scalability limitations, I wanted to work on cross-node
joins.  I've been staring at the JoinQuery code (and tried stepping through
a lot of it in a debugger) but it's been rough going to understand.

Is there anyone who might be able to help me understand what's going on, or
offer ideas and suggestions?  I asked on IRC but haven't had much luck yet.

Thanks!
Scott


[JENKINS] Lucene-Solr-5.3-Linux (64bit/jdk1.7.0_80) - Build # 249 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/249/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.DistributedTermsComponentTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.handler.component.DistributedTermsComponentTest: 1) 
Thread[id=7523, name=searcherExecutor-2880-thread-1, state=WAITING, 
group=TGRP-DistributedTermsComponentTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=7514, 
name=qtp2087422927-7514, state=RUNNABLE, 
group=TGRP-DistributedTermsComponentTest] at 
java.util.WeakHashMap.get(WeakHashMap.java:471) at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:101)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:219)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:453)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
 at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) 
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)  
   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)   
  at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)   
  at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)  
   at org.eclipse.jetty.server.Server.handle(Server.java:499) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) 
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.handler.component.DistributedTermsComponentTest: 
   1) Thread[id=7523, name=searcherExecutor-2880-thread-1, state=WAITING, 
group=TGRP-DistributedTermsComponentTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=7514, name=qtp2087422927-7514, state=RUNNABLE, 
group=TGRP-DistributedTermsComponentTest]
at java.util.WeakHashMap.get(WeakHashMap.java:471)
at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:101)
at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCa

[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030-test.patch

Ok, I have found my problem with the test. It needs an FS directory.
This patch is a simplified test for this issue.
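
To make the gap concrete, a hedged SolrJ sketch of an update that names a non-default
chain (the chain name and core URL are hypothetical); per the issue below, this
per-request parameter is what the transaction log does not record, so a replay falls
back to the default chain:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class UpdateChainSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");

      UpdateRequest req = new UpdateRequest();
      req.setParam("update.chain", "my-custom-chain");  // hypothetical chain from solrconfig.xml
      req.add(doc);
      req.process(solr);
      solr.commit();
    }
  }
}
{code}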

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch, SOLR-8030-test.patch, 
> SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2015-09-25 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908063#comment-14908063
 ] 

Yonik Seeley commented on SOLR-8096:


Once again, UnInvertedField was not part of the lucene FieldCache.  It was a 
Solr class cached in SolrIndexSearcher (via fieldValueCache), did not implement 
the DocValues API, etc.  The *lucene* FieldCache was made package protected (an 
implementation detail) so one would need to access it via DocValues.  That's 
what the issue was about.

bq. So the committers decided to step forward and remove the top-level 
facetting (which was long overdue).

Where was this discussion?  I see nothing about it on LUCENE-5666.
And of course I would have given a -1 to such a change for being dogmatic over 
practical and not caring about our users.

bq. I was informed about the changes mentioned here

Where did this discussion take place? I can't find it in any public forum.

bq. I was always in favour of removing those top-level facetting algorithms. So 
they still have my strong +1.

With no benchmarking of how the replacement performed?  No option to use the 
old method if a user *wanted* to? Without any public discussion of the impacts? 
 Without any note in Solr's CHANGES?

So you were strongly for the change, but you knew I'd most likely be against 
it, right (based on previous discussions about top-level data structures)?



> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, Trunk
>Reporter: Yonik Seeley
>Priority: Critical
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was *secretly removed* as part of LUCENE-5666, 
> causing severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time  
> ||...|| Percent of index being faceted
> ||num_unique_values|| 10% || 50% || 90% ||
> |10   | 351.17%   | 1587.08%  | 3057.28% |
> |100  | 158.10%   | 203.61%   | 1421.93% |
> |1000 | 143.78%   | 168.01%   | 1325.87% |
> |10000| 137.98%   | 175.31%   | 1233.97% |
> |100000   | 142.98%   | 159.42%   | 1252.45% |
> |1000000  | 255.15%   | 165.17%   | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time, when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order

2015-09-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908027#comment-14908027
 ] 

Michael McCandless commented on LUCENE-6305:


bq. So I don't think we need to make it optional, we can just enable it all the 
time?

+1

> BooleanQuery.equals should ignore clause order
> --
>
> Key: LUCENE-6305
> URL: https://issues.apache.org/jira/browse/LUCENE-6305
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6305.patch, LUCENE-6305.patch
>
>
> BooleanQuery.equals is sensitive to the order in which clauses have been 
> added. So for instance "+A +B" would be considered different from "+B +A" 
> although it generates the same matches and scores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908024#comment-14908024
 ] 

Uwe Schindler commented on SOLR-7183:
-

DIRAPI-219 is now solved. Looks like a bugfix release was done!?

> SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
> ---
>
> Key: SOLR-7183
> URL: https://issues.apache.org/jira/browse/SOLR-7183
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Gregory Chanan
> Fix For: 5.2
>
> Attachments: SOLR-7183.patch
>
>
> SaslZkACLProviderTest has this blacklist of locales...
> {code}
>   // These Locales don't generate dates that are compatible with Hadoop MiniKdc.
>   protected final static List<String> brokenLocales =
>     Arrays.asList(
>       "th_TH_TH_#u-nu-thai",
>       "ja_JP_JP_#u-ca-japanese",
>       "hi_IN");
> {code}
> ..but this list is incomplete -- notably because it only focuses on one 
> specific Thai variant, and then does a string Locale.toString() comparison.  
> so at a minimum {{-Dtests.locale=th_TH}} also fails - i suspect there are 
> other variants that will fail as well
> * if there is a bug in "Hadoop MiniKdc" then that bug should be filed in 
> jira, and there should be Solr jira that refers to it -- the Solr jira URL 
> needs to be included here in the test case so developers in the future can 
> understand the context and have some idea of if/when the third-party lib bug 
> is fixed
> * if we need to work around some Locales because of this bug, then Locale 
> comparisons need to be based on whatever aspects of the Locale are actually 
> problematic
> see for example SOLR-6387 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676&r2=1618675&pathrev=1618676
> Or SOLR-6991 + TIKA-1526 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708&r2=1653707&pathrev=1653708
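
One possible shape of the aspect-based comparison suggested above, purely as a sketch:
it keys on Locale.getLanguage() instead of Locale.toString(), and treating every Thai,
Japanese and Hindi locale as broken is an assumption that is deliberately broader than
the original list.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class BrokenLocaleSketch {
  // Assumption: the MiniKdc date problem follows the locale's language,
  // not the exact Locale.toString() spelling of one variant.
  private static final Set<String> BROKEN_LANGUAGES =
      new HashSet<>(Arrays.asList("th", "ja", "hi"));

  static boolean isBrokenLocale(Locale locale) {
    return BROKEN_LANGUAGES.contains(locale.getLanguage());
  }
}
{code}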



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908023#comment-14908023
 ] 

Uwe Schindler commented on SOLR-6915:
-

DIRAPI-219 is now solved. Looks like a bugfix release was done.

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.1, Trunk
>
> Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
> tests-failures.txt
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a kerberos environment.   In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with zookeeper 
> via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6766) Make index sorting a first-class citizen

2015-09-25 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6766:
-
Attachment: LUCENE-6766.patch

Here is a first prototype that:
 - moves sorting logic from misc to core
 - removes SortingMergePolicy
 - adds an "indexSort" parameter to IndexWriterConfig and SegmentInfo, with 
null meaning that the index order is unspecified
 - SimpleTextCodec (de)serializes this indexSort parameter, other codecs 
ignore it for now
 - refactors a bit the doc ID remapping logic in IndexWriter when there have 
been deletions while some segments were being merged

Open question: how should we serialize the SortField objects? Should we have a 
fixed list of supported SortField parameters or should we allow SortField 
parameters to serialize themselves?

There are lots of things we could do on the search side, but for now I'd like 
to focus on the indexing side and making sure the sort order of segments is 
easily accessible.
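
A sketch of what the indexing-side configuration described above might look like; the
setIndexSort-style setter and the "timestamp" field are assumptions based on this
patch's description, not a released API at the time of this comment:

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class IndexSortSketch {
  static IndexWriterConfig sortedConfig() {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    // Proposed: record the index-time sort order on the config (and in each SegmentInfo).
    iwc.setIndexSort(new Sort(new SortField("timestamp", SortField.Type.LONG)));
    return iwc;
  }
}
{code}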

> Make index sorting a first-class citizen
> 
>
> Key: LUCENE-6766
> URL: https://issues.apache.org/jira/browse/LUCENE-6766
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6766.patch
>
>
> Today index sorting is a very expert feature. You need to use a custom merge 
> policy, custom collectors, etc. I would like to explore making it a 
> first-class citizen so that:
>  - the sort order could be configured on IndexWriterConfig
>  - segments would record the sort order that was used to write them
>  - IndexSearcher could automatically early terminate when computing top docs 
> on a sort order that is a prefix of the sort order of a segment (and if the 
> user is not interested in totalHits).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14297 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14297/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 5756 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk1.8.0_60/jre/bin/java -XX:-UseCompressedOops 
-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=8481460B3C792C28 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.0.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=6.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/test/J2
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=UTF-8 -classpath 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/classes/test:/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/test-framework/classes/java:/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/classes/java:/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/classes/java:/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/junit-4.10.jar:/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.1.17.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/home/jenkins/tools/java/64bit/jdk1.8.0_60/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.1.17.jar
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/test/temp/junit4-J2-20150925_121444_151.events
 
@/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/test/temp/junit4-J2-20150925_121444_151.suites
 -stdin
   [

[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order

2015-09-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907964#comment-14907964
 ] 

Adrien Grand commented on LUCENE-6305:
--

I don't think we can order queries in a deterministic way in 
BooleanQuery.Builder given that queries are not comparable? So we need the 
MultiSet?

I wrote a simple micro-benchmark and was able to create more than 150k 
100-clause boolean queries per second on a single core, which is much faster 
than a typical per-core search request throughput. So I don't think we need to 
make it optional, we can just enable it all the time?
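
A rough sketch of the multiset idea (counting identical clauses in a hash map so that
equality ignores insertion order); this is an illustration, not the attached patch:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;

public class ClauseMultisetSketch {
  // "+A +B" and "+B +A" produce the same clause counts, so they compare equal.
  static Map<BooleanClause, Integer> toMultiset(BooleanQuery q) {
    Map<BooleanClause, Integer> counts = new HashMap<>();
    for (BooleanClause clause : q.clauses()) {
      Integer prev = counts.get(clause);
      counts.put(clause, prev == null ? 1 : prev + 1);
    }
    return counts;
  }

  static boolean clausesEqualIgnoringOrder(BooleanQuery a, BooleanQuery b) {
    return toMultiset(a).equals(toMultiset(b));
  }
}
{code}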

> BooleanQuery.equals should ignore clause order
> --
>
> Key: LUCENE-6305
> URL: https://issues.apache.org/jira/browse/LUCENE-6305
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6305.patch, LUCENE-6305.patch
>
>
> BooleanQuery.equals is sensitive to the order in which clauses have been 
> added. So for instance "+A +B" would be considered different from "+B +A" 
> although it generates the same matches and scores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6785) Consider merging Query.rewrite() into Query.createWeight()

2015-09-25 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6785:
-
Attachment: LUCENE-6785-alt.patch

Here is a patch for the alternative idea (lucene-core only).

> Consider merging Query.rewrite() into Query.createWeight()
> --
>
> Key: LUCENE-6785
> URL: https://issues.apache.org/jira/browse/LUCENE-6785
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-6785-alt.patch, LUCENE-6785.patch, 
> LUCENE-6785.patch
>
>
> Prompted by the discussion on LUCENE-6590.
> Query.rewrite() is a bit of an oddity.  You call it to create a query for a 
> specific IndexSearcher, and to ensure that you get a query implementation 
> that has a working createWeight() method.  However, Weight itself already 
> encapsulates the notion of a per-searcher query.
> You also need to repeatedly call rewrite() until the query has stopped 
> rewriting itself, which is a bit trappy - there are a few places (in 
> highlighting code for example) that just call rewrite() once, rather than 
> looping round as IndexSearcher.rewrite() does.  Most queries don't need to be 
> called multiple times, however, so this seems a bit redundant.  And the ones 
> that do currently return un-rewritten queries can be changed simply enough to 
> rewrite them.
> Finally, in pretty much every case I can find in the codebase, rewrite() is 
> called purely as a prelude to createWeight().  This means, in the case of, for 
> example, large BooleanQueries, we end up cloning the whole query structure, 
> only to throw it away immediately.
> I'd like to try removing rewrite() entirely, and merging the logic into 
> createWeight(), simplifying the API and removing the trap where code only 
> calls rewrite once.  What do people think?
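
For reference, the fixed-point loop described above looks roughly like this (a
paraphrase of what IndexSearcher.rewrite() does, not a copy of it):

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;

public class RewriteLoopSketch {
  // Keep rewriting until the query stops changing; calling rewrite() just once
  // is the trap described above.
  static Query fullyRewrite(Query original, IndexReader reader) throws IOException {
    Query query = original;
    for (Query rewritten = query.rewrite(reader); rewritten != query;
         rewritten = query.rewrite(reader)) {
      query = rewritten;
    }
    return query;
  }
}
{code}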



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3547 - Failure

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3547/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.TestReplicaProperties.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:36767/ps_vc, http://127.0.0.1:43234/ps_vc, 
http://127.0.0.1:59886/ps_vc, http://127.0.0.1:54226/ps_vc, 
http://127.0.0.1:58665/ps_vc]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:36767/ps_vc, 
http://127.0.0.1:43234/ps_vc, http://127.0.0.1:59886/ps_vc, 
http://127.0.0.1:54226/ps_vc, http://127.0.0.1:58665/ps_vc]
at 
__randomizedtesting.SeedInfo.seed([18DFA9AB1D7F22D9:908B9671B3834F21]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicaPropertiesBase.doPropertyAction(ReplicaPropertiesBase.java:51)
at 
org.apache.solr.cloud.TestReplicaProperties.clusterAssignPropertyTest(TestReplicaProperties.java:183)
at 
org.apache.solr.cloud.TestReplicaProperties.test(TestReplicaProperties.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Commented] (LUCENE-6815) Should DisjunctionScorer advance more lazily?

2015-09-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907890#comment-14907890
 ] 

Adrien Grand commented on LUCENE-6815:
--

Indeed. Another cost that would be interesting to take into account is the cost 
of matching a Scorer (LUCENE-6276) so that we try to match the cheapest scorers 
first.

> Should DisjunctionScorer advance more lazily?
> -
>
> Key: LUCENE-6815
> URL: https://issues.apache.org/jira/browse/LUCENE-6815
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
>
> Today if you call DisjunctionScorer.advance(X), it will try to advance all 
> sub scorers to X. However, if DisjunctionScorer is being intersected with 
> another scorer (which is almost always the case as we use BooleanScorer for 
> top-level disjunctions), we could stop as soon as we find one matching sub 
> scorer, and only advance the remaining sub scorers when freq() or score() is 
> called. 
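To make the difference concrete, here is a rough sketch (not Lucene's actual DisjunctionScorer code) of eager vs. lazy advancing over plain DocIdSetIterators; the class and method names are illustrative only.

{code}
import java.io.IOException;
import java.util.List;

import org.apache.lucene.search.DocIdSetIterator;

final class LazyDisjunctionSketch {

  // Eager: advance every sub-iterator to >= target, even though one match would suffice.
  static int eagerAdvance(List<DocIdSetIterator> subs, int target) throws IOException {
    int min = DocIdSetIterator.NO_MORE_DOCS;
    for (DocIdSetIterator sub : subs) {
      int doc = sub.docID() < target ? sub.advance(target) : sub.docID();
      min = Math.min(min, doc);
    }
    return min;
  }

  // Lazy (the idea in this issue): stop as soon as one sub lands on the target;
  // the remaining subs are only advanced later, when freq() or score() is needed.
  static int lazyAdvance(List<DocIdSetIterator> subs, int target) throws IOException {
    int best = DocIdSetIterator.NO_MORE_DOCS;
    for (DocIdSetIterator sub : subs) {
      int doc = sub.docID() < target ? sub.advance(target) : sub.docID();
      if (doc == target) {
        return target; // found a match, leave the rest untouched for now
      }
      best = Math.min(best, doc);
    }
    return best;
  }
}
{code}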



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6744) fl renaming / alias of uniqueKey field generates null pointer exception in SolrCloud configuration

2015-09-25 Thread laigood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907886#comment-14907886
 ] 

laigood commented on SOLR-6744:
---

I have this problem in 5.2.1

> fl renaming / alias of uniqueKey field generates null pointer exception in 
> SolrCloud configuration
> --
>
> Key: SOLR-6744
> URL: https://issues.apache.org/jira/browse/SOLR-6744
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
> Environment: Multiple replicas on SolrCloud config.  This specific 
> example uses a 4-shard, 3-replicas-per-shard config.  This bug does NOT exist 
> when the query is handled by a single core.
>Reporter: Garth Grimm
>Priority: Minor
>
> If trying to rename the uniqueKey field using 'fl' in a distributed query 
> (i.e. SolrCloud config), an NPE is thrown.
> The workaround is to redundantly request the uniqueKey field, once with the 
> desired alias, and once with the original name.
> Example...
> http://localhost:8983/solr/cloudcollection/select?q=*%3A*&wt=xml&indent=true&fl=key:id
> Work around:
> http://localhost:8983/solr/cloudcollection/select?q=*%3A*&wt=xml&indent=true&fl=key:id&fl=id
> Error w/o work around...
> {code}
> status=500  QTime=11  params: q=*:*, indent=true, fl=key:id, wt=xml  trace: java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:745)
> 500
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucen

[jira] [Updated] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-09-25 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6780:
---
Attachment: LUCENE-6780-heap-used-hack.patch

I was concerned about the peak heap used for "worst case" queries here, so I 
hacked up a quick patch (attached) to test this:

{noformat}
GeoPointInBBoxQuery: field=point: Lower Left: 
[-170.79577068315723,-88.3524701239041] Upper Right: 
[115.75692731020496,51.78004322487766]
  --> 940,015 terms = 34,065,696 bytes

GeoPointDistanceQuery: field=point: Center: 
[-95.87683480747508,-83.99672364681616] Distance: 826889.911703281 meters]
  --> 179,562 terms = 12,446,344 bytes
{noformat}

The patch just records over time the largest number of terms created by the 
query, and then I ran the test for many iterations.

I think this is too high, e.g. too many of these queries in flight at once can 
mean an unexpected OOME.

But I think before we address this we should address the correctness issues 
(the failing seeds for {{TestGeoUtils.testGeoRelations}}).

Maybe, to fix these, we could place soft limits on how large each query is 
allowed to be?  Meaning, a user who is willing to accept more error, or to use 
more heap, can increase the limit, but by default the limit protects the more 
common use case with smaller shapes.
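A hedged sketch of what such a soft limit could look like; the class, field, and method names below are hypothetical and not part of the existing GeoPointDistanceQuery/GeoPointInBBoxQuery code.

{code}
// Hypothetical budget on term expansion: once the limit is spent, the caller can
// either fail the query or fall back to a coarser (more error, less heap) expansion.
final class TermExpansionBudget {
  private final int maxTerms; // default tuned for "common" shapes; user-adjustable
  private int used;

  TermExpansionBudget(int maxTerms) {
    this.maxTerms = maxTerms;
  }

  // Called once per term the query materializes; false means the budget is exhausted.
  boolean tryAddTerm() {
    if (used >= maxTerms) {
      return false;
    }
    used++;
    return true;
  }
}
{code}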


> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780-heap-used-hack.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8096) Major faceting performance regressions

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907870#comment-14907870
 ] 

Uwe Schindler edited comment on SOLR-8096 at 9/25/15 10:06 AM:
---

bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance regressions.

Hi, the removal was not "secret". Removal of FieldCache from Lucene (and its 
replacement by UninvertingReader) was discussed on the issue tracker, although 
interest by Solr people was small. I think this is the main issue here. 
Sometimes it would be good to have Solr committers taking part in discussions 
on Lucene issues. If you want to make Solr better, you should also help in 
making Lucene better!

The old field cache was also put into a separate module (with the new 
DocValues-emulating API), because we (Lucene committers) knew that Solr still 
uses it. Sure, we could have used UninvertingReader on top of 
SlowCompositeReaderWrapper, but this would bring other slowness! So the 
committers decided to step forward and remove the top-level faceting (which 
was long overdue).

It was announced in several talks about Lucene 5 that FieldCache was removed 
and all faceting in Solr was implicitly changed to only use per-segment field 
caches (e.g., see my talk @ fosdem 2015, JAX 2015, or berlinbuzzwords - around 
one of the last slides). Maybe a changes entry about this should also have been 
added to the Solr CHANGES.txt, but this was forgotten.

The CHANGES.txt entry about this is quoted below; the first line mentions that 
faceting in Solr is involved. Any Solr committer could have looked into the 
code and brought up complaints about those changes in the issue tracker, even 
after this commit was done:

{quote}
* LUCENE-5666: Change uninverted access (sorting, faceting, grouping, etc)
  to use the DocValues API instead of FieldCache. For FieldCache functionality,
  use UninvertingReader in lucene/misc (or implement your own FilterReader).
  UninvertingReader is more efficient: supports multi-valued numeric fields,
  detects when a multi-valued field is single-valued, reuses caches
  of compatible types (e.g. SORTED also supports BINARY and SORTED_SET access
  without insanity).  "Insanity" is no longer possible unless you explicitly 
want it. 
  Rename FieldCache* and DocTermOrds* classes in the search package to 
DocValues*. 
  Move SortedSetSortField to core and add SortedSetFieldSource to queries/, 
which
  takes the same selectors. Add helper methods to DocValues.java that are 
better 
  suited for search code (never return null, etc).  (Mike McCandless, Robert 
Muir)
{quote}

So everybody was informed.

bq. The people who did this are elasticsearch employees. That is one way to 
deal with Solr's faster faceting!

This is speculation and really bad behaviour on an open-source issue tracker. 
We should discuss technical stuff here, not make any assumptions about 
what people intend to do. This statement was posted by a person 
([~mmurphy3141]) whom I never met in person, and who really seldom took part in 
Lucene/Solr discussions at all. So I don't think we should count on that. It is 
also bad behaviour to accuse committers of sabotage on Twitter: 
https://twitter.com/mmurphy3141/status/647254551356162048; please don't do 
this. I would ask to remove this tweet, thanks.

I was informed about the changes mentioned here and I strongly agree with the 
committers behind LUCENE-5666. I was always in favour of removing those 
top-level faceting algorithms. So they still have my strong +1. Among my Solr 
customers I have seen nobody who complained about slow top-level faceting 
recently (because I told them a long time ago to no longer use those outdated 
top-level algorithms if they have dynamic indexes). Of course I don't know 
about people using static indexes.

The right thing to do for Solr people would be to remove that top-level stuff 
completely. This no longer fits the new reader structure (composite and 
atomic/leaf readers) of Lucene 3 (with API cleanups to better reflect the new 
structure in Lucene 4). Lucene 3 has been retired for several years already! So 
there was a long time to fix Solr's faceting to move away from top-level. 
People with static indexes can still force-merge their index and will have the 
same performance with the new algorithms.

Please keep in mind that it took about half a year until the first person 
recognized a problem like this, which makes me think that only a few people are 
using those mostly-static indexes. 

*We should work on this issue to fix it, not accuse people, thanks!*


was (Author: thetaphi):
bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing s

[jira] [Comment Edited] (SOLR-8096) Major faceting performance regressions

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907870#comment-14907870
 ] 

Uwe Schindler edited comment on SOLR-8096 at 9/25/15 10:04 AM:
---

bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance regressions.

Hi, the removal was not "secret". Removal of FieldCache from Lucene (and its 
replacement by UninvertingReader) was discussed on the issue tracker, although 
interest by Solr people was small. I think this is the main issue here. 
Sometimes it would be good to have Solr committers taking part in discussions 
on Lucene issues. If you want to make Solr better, you should also help in 
making Lucene better!

The old field cache was also put into a separate module (with the new 
DocValues-emulating API), because we (Lucene committers) knew that Solr still 
uses it. Sure, we could have used UninvertingReader on top of 
SlowCompositeReaderWrapper, but this would bring other slowness! So the 
committers decided to step forward and remove the top-level faceting (which 
was long overdue).

It was announced in several talks about Lucene 5 that FieldCache was removed 
and all faceting in Solr was implicitly changed to only use per-segment field 
caches (e.g., see my talk @ fosdem 2015, JAX 2015, or berlinbuzzwords - around 
one of the last slides). Maybe a changes entry about this should also have been 
added to the Solr CHANGES.txt, but 

The CHANGES.txt entry about this is quoted below; the first line mentions that 
faceting in Solr is involved. Any Solr committer could have looked into the 
code and brought up complaints about those changes in the issue tracker, even 
after this commit was done:

{quote}
* LUCENE-5666: Change uninverted access (sorting, faceting, grouping, etc)
  to use the DocValues API instead of FieldCache. For FieldCache functionality,
  use UninvertingReader in lucene/misc (or implement your own FilterReader).
  UninvertingReader is more efficient: supports multi-valued numeric fields,
  detects when a multi-valued field is single-valued, reuses caches
  of compatible types (e.g. SORTED also supports BINARY and SORTED_SET access
  without insanity).  "Insanity" is no longer possible unless you explicitly 
want it. 
  Rename FieldCache* and DocTermOrds* classes in the search package to 
DocValues*. 
  Move SortedSetSortField to core and add SortedSetFieldSource to queries/, 
which
  takes the same selectors. Add helper methods to DocValues.java that are 
better 
  suited for search code (never return null, etc).  (Mike McCandless, Robert 
Muir)
{quote}

So everybody was informed.

bq. The people who did this are elasticsearch employees. That is one way to 
deal with Solr's faster faceting!

This is speculation and really bad behaviour on an open-source issue tracker. 
We should discuss technical stuff here, not make any assumptions about 
what people intend to do. This statement was posted by a person 
([~mmurphy3141]) whom I never met in person, and who really seldom took part in 
Lucene/Solr discussions at all. So I don't think we should count on that. It is 
also bad behaviour to accuse committers of sabotage on Twitter: 
https://twitter.com/mmurphy3141/status/647254551356162048; please don't do 
this. I would ask to remove this tweet, thanks.

I was informed about the changes mentioned here and I strongly agree with the 
committers behind LUCENE-5666. I was always in favour of removing those 
top-level faceting algorithms. So they still have my strong +1. Among my Solr 
customers I have seen nobody who complained about slow top-level faceting 
recently (because I told them a long time ago to no longer use those outdated 
top-level algorithms if they have dynamic indexes). Of course I don't know 
about people using static indexes.

The right thing to do for Solr people would be to remove that top-level stuff 
completely. This no longer fits the new reader structure (composite and 
atomic/leaf readers) of Lucene 3 (with API cleanups to better reflect the new 
structure in Lucene 4). Lucene 3 has been retired for several years already! So 
there was a long time to fix Solr's faceting to move away from top-level. 
People with static indexes can still force-merge their index and will have the 
same performance with the new algorithms.

Please keep in mind that it took about half a year until the first person 
recognized a problem like this, which makes me think that only a few people are 
using those mostly-static indexes. 

*We should work on this issue to fix it, not accuse people, thanks!*


was (Author: thetaphi):
bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance r

[jira] [Commented] (LUCENE-6785) Consider merging Query.rewrite() into Query.createWeight()

2015-09-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907871#comment-14907871
 ] 

Adrien Grand commented on LUCENE-6785:
--

Sorry for the late reply, I was on vacation and just returned yesterday. 
Overall I'm torn on this patch: I like it a lot from a usability 
perspective (I really hate how you need to call rewrite in a loop today before 
calling createWeight), but Query.rewrite was our only opportunity to perform 
some kind of query optimization, and it's gone now. I can try the 
alternative I mentioned above on lucene-core to see how things fit together.

> Consider merging Query.rewrite() into Query.createWeight()
> --
>
> Key: LUCENE-6785
> URL: https://issues.apache.org/jira/browse/LUCENE-6785
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-6785.patch, LUCENE-6785.patch
>
>
> Prompted by the discussion on LUCENE-6590.
> Query.rewrite() is a bit of an oddity.  You call it to create a query for a 
> specific IndexSearcher, and to ensure that you get a query implementation 
> that has a working createWeight() method.  However, Weight itself already 
> encapsulates the notion of a per-searcher query.
> You also need to repeatedly call rewrite() until the query has stopped 
> rewriting itself, which is a bit trappy - there are a few places (in 
> highlighting code for example) that just call rewrite() once, rather than 
> looping round as IndexSearcher.rewrite() does.  Most queries don't need to be 
> called multiple times, however, so this seems a bit redundant.  And the ones 
> that do currently return un-rewritten queries can be changed simply enough to 
> rewrite them.
> Finally, in pretty much every case I can find in the codebase, rewrite() is 
> called purely as a prelude to createWeight().  This means that, in the case of 
> large BooleanQueries for example, we end up cloning the whole query structure, 
> only to throw it away immediately.
> I'd like to try removing rewrite() entirely, and merging the logic into 
> createWeight(), simplifying the API and removing the trap where code only 
> calls rewrite once.  What do people think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907870#comment-14907870
 ] 

Uwe Schindler commented on SOLR-8096:
-

bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance regressions.

Hi, the removal was not "secret". Removal of FieldCache from Lucene (and its 
replacement by UninvertingReader) was discussed on the issue tracker, although 
interest by Solr people was small. I think this is the main issue here. 
Sometimes it would be good to have Solr committers taking part in discussions 
on Lucene issues. If you want to make Solr better, you should also help in 
making Lucene better!

The old field cache was also put into a separate module (with the new 
DocValues-emulating API), because we (Lucene committers) knew that Solr still 
uses it. Sure, we could have used UninvertingReader on top of 
SlowCompositeReaderWrapper, but this would bring other slowness! So the 
committers decided to step forward and remove the top-level faceting (which 
was long overdue).

It was announced in several talks about Lucene 5 that FieldCache was removed 
and all faceting in Solr was implicitly changed to only use per-segment field 
caches (e.g., see my talk @ fosdem 2015, JAX 2015, or berlinbuzzwords - around 
one of the last slides). Maybe a changes entry about this should also have been 
added to the Solr CHANGES.txt, but 

The CHANGES.txt entry about this is quoted below; the first line mentions that 
faceting in Solr is involved. Any Solr committer could have looked into the 
code and brought up complaints about those changes in the issue tracker, even 
after this commit was done:

{quote}
* LUCENE-5666: Change uninverted access (sorting, faceting, grouping, etc)
  to use the DocValues API instead of FieldCache. For FieldCache functionality,
  use UninvertingReader in lucene/misc (or implement your own FilterReader).
  UninvertingReader is more efficient: supports multi-valued numeric fields,
  detects when a multi-valued field is single-valued, reuses caches
  of compatible types (e.g. SORTED also supports BINARY and SORTED_SET access
  without insanity).  "Insanity" is no longer possible unless you explicitly 
want it. 
  Rename FieldCache* and DocTermOrds* classes in the search package to 
DocValues*. 
  Move SortedSetSortField to core and add SortedSetFieldSource to queries/, 
which
  takes the same selectors. Add helper methods to DocValues.java that are 
better 
  suited for search code (never return null, etc).  (Mike McCandless, Robert 
Muir)
{quote}


bq. The people who did this are elasticsearch employees. That is one way to 
deal with Solr's faster faceting!

This is speculation and really bad behaviour on an open-source issue tracker. 
We should discuss technical stuff here, not make any assumptions about 
what people intend to do. This statement was posted by a person 
([~mmurphy3141]) whom I never met in person, and who really seldom took part in 
Lucene/Solr discussions at all. So I don't think we should count on that. It is 
also bad behaviour to accuse committers of sabotage on Twitter: 
https://twitter.com/mmurphy3141/status/647254551356162048; please don't do 
this. I would ask to remove this tweet, thanks.

I was informed about the changes mentioned here and I strongly agree with the 
committers behind LUCENE-5666. I was always in favour of removing those 
top-level faceting algorithms. So they still have my strong +1. Among my Solr 
customers I have seen nobody who complained about slow top-level faceting 
(because I told them a long time ago to no longer use those outdated top-level 
algorithms if they have dynamic indexes).

The right thing to do for Solr people would be to remove that top-level stuff 
completely. This no longer fits the new reader structure (composite and 
atomic/leaf readers) of Lucene 3 (with API cleanups to better reflect the new 
structure in Lucene 4). Lucene 3 has been retired for several years already! So 
there was a long time to fix Solr's faceting to move away from top-level. 
People with static indexes can still force-merge their index and will have the 
same performance with the new algorithms.

Please keep in mind that it took about half a year until the first person 
recognized a problem like this, which makes me think that only a few people are 
using those mostly-static indexes. 

*We should work on this issue to fix it, not accuse people, thanks!*

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, Trunk
>Reporter: Yonik Seeley
>Priority: Criti

[jira] [Comment Edited] (SOLR-8096) Major faceting performance regressions

2015-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907870#comment-14907870
 ] 

Uwe Schindler edited comment on SOLR-8096 at 9/25/15 9:49 AM:
--

bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance regressions.

Hi, the removal was not "secret". Removal of FieldCache from Lucene (and its 
replacement by UninvertingReader) was discussed on the issue tracker, although 
interest by Solr people was small. I think this is the main issue here. 
Sometimes it would be good to have Solr committers taking part in discussions 
on Lucene issues. If you want to make Solr better, you should also help in 
making Lucene better!

The old field cache was also put into a separate module (with the new 
DocValues-emulating API), because we (Lucene committers) knew that Solr still 
uses it. Sure, we could have used UninvertingReader on top of 
SlowCompositeReaderWrapper, but this would bring other slowness! So the 
committers decided to step forward and remove the top-level faceting (which 
was long overdue).

It was announced in several talks about Lucene 5 that FieldCache was removed 
and all faceting in Solr was implicitly changed to only use per-segment field 
caches (e.g., see my talk @ fosdem 2015, JAX 2015, or berlinbuzzwords - around 
one of the last slides). Maybe a changes entry about this should also have been 
added to the Solr CHANGES.txt, but 

The CHANGES.txt entry about this is quoted below; the first line mentions that 
faceting in Solr is involved. Any Solr committer could have looked into the 
code and brought up complaints about those changes in the issue tracker, even 
after this commit was done:

{quote}
* LUCENE-5666: Change uninverted access (sorting, faceting, grouping, etc)
  to use the DocValues API instead of FieldCache. For FieldCache functionality,
  use UninvertingReader in lucene/misc (or implement your own FilterReader).
  UninvertingReader is more efficient: supports multi-valued numeric fields,
  detects when a multi-valued field is single-valued, reuses caches
  of compatible types (e.g. SORTED also supports BINARY and SORTED_SET access
  without insanity).  "Insanity" is no longer possible unless you explicitly 
want it. 
  Rename FieldCache* and DocTermOrds* classes in the search package to 
DocValues*. 
  Move SortedSetSortField to core and add SortedSetFieldSource to queries/, 
which
  takes the same selectors. Add helper methods to DocValues.java that are 
better 
  suited for search code (never return null, etc).  (Mike McCandless, Robert 
Muir)
{quote}

So everybody was informed.

bq. The people who did this are elasticsearch employees. That is one way to 
deal with Solr's faster faceting!

This is speculation and really bad behaviour on an open-source issue tracker. 
We should discuss technical stuff here, not make any assumptions about 
what people intend to do. This statement was posted by a person 
([~mmurphy3141]) whom I never met in person, and who really seldom took part in 
Lucene/Solr discussions at all. So I don't think we should count on that. It is 
also bad behaviour to accuse committers of sabotage on Twitter: 
https://twitter.com/mmurphy3141/status/647254551356162048; please don't do 
this. I would ask to remove this tweet, thanks.

I was informed about the changes mentioned here and I strongly agree with the 
committers behind LUCENE-5666. I was always in favour of removing those 
top-level faceting algorithms. So they still have my strong +1. Among my Solr 
customers I have seen nobody who complained about slow top-level faceting 
(because I told them a long time ago to no longer use those outdated top-level 
algorithms if they have dynamic indexes).

The right thing to do for Solr people would be to remove that top-level stuff 
completely. This no longer fits the new reader structure (composite and 
atomic/leaf readers) of Lucene 3 (with API cleanups to better reflect the new 
structure in Lucene 4). Lucene 3 has been retired for several years already! So 
there was a long time to fix Solr's faceting to move away from top-level. 
People with static indexes can still force-merge their index and will have the 
same performance with the new algorithms.

Please keep in mind that it took about half a year until the first person 
recognized a problem like this, which makes me think that only a few people are 
using those mostly-static indexes. 

*We should work on this issue to fix it, not accuse people, thanks!*


was (Author: thetaphi):
bq. Use of the highly optimized faceting that Solr had for multi-valued fields 
over relatively static indexes was secretly removed as part of LUCENE-5666, 
causing severe performance regressions.

Hi, the removal was not "secret". Removal of FieldCache f

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1551: POMs out of sync

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1551/

No tests ran.

Build Log:
[...truncated 25261 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:791: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:290: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:416:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2162:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/analysis/build.xml:122:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1656:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:574:
 Error deploying artifact 'org.apache.lucene:lucene-analyzers-uima:jar': Error 
installing artifact's metadata: Error while deploying metadata: Failed to 
transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-analyzers-uima/maven-metadata.xml.sha1.
 Return code is: 502

Total time: 11 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (LUCENE-6816) MinShouldMatchSumScorer should support approximations

2015-09-25 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6816:


 Summary: MinShouldMatchSumScorer should support approximations
 Key: LUCENE-6816
 URL: https://issues.apache.org/jira/browse/LUCENE-6816
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


This is a tricky yet interesting improvement.

The approximation would return a document as soon as {{min_should_match}} 
sub-approximations match. Then, when matching, it would iterate over the 
approximations that are on the current document, and if {{matches()}} fails, it 
would try to replace the current approximation with one from the tail 
(approximations that are behind the current document).
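A rough sketch of the second (confirmation) phase described above; the Sub interface and the collections are illustrative stand-ins, not the real MinShouldMatchSumScorer internals.

{code}
import java.io.IOException;
import java.util.Deque;
import java.util.List;

final class MinShouldMatchSketch {

  // Illustrative stand-in for a sub scorer's two-phase approximation.
  interface Sub {
    int advance(int target) throws IOException;
    boolean matches() throws IOException; // potentially expensive second phase
  }

  // Count confirmed matches among the subs positioned on `doc`; when one fails,
  // try to pull a replacement from the tail (subs still behind `doc`).
  static boolean confirm(List<Sub> onCurrentDoc, Deque<Sub> tail, int doc, int minShouldMatch)
      throws IOException {
    int matched = 0;
    for (Sub sub : onCurrentDoc) {
      if (sub.matches()) {
        if (++matched >= minShouldMatch) {
          return true; // enough confirmed matches, stop early
        }
      } else {
        while (!tail.isEmpty()) {
          Sub candidate = tail.poll();
          if (candidate.advance(doc) == doc && candidate.matches()) {
            if (++matched >= minShouldMatch) {
              return true;
            }
            break; // replacement found, move on to the next sub on this doc
          }
          // candidate overshot or failed to match; a real scorer would push it
          // back into the disjunction's heap, here we simply drop it
        }
      }
    }
    return matched >= minShouldMatch;
  }
}
{code}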



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7190) Remove unused UninvertedField

2015-09-25 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907832#comment-14907832
 ] 

Mikhail Khludnev commented on SOLR-7190:


bq. According to Shalin's comment, what will happen now if a request has 
facet.method=fc and we don't have docValues for the field of interest for the 
faceting calculations?

{{UninvertingReader}} builds a docValues-*like* data structure in heap
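For context, a minimal sketch of how a field without docValues can still be exposed through the docValues API via UninvertingReader (lucene/misc); the field names are made up, and the exact Type constants should be checked against the Lucene version in use.

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.uninverting.UninvertingReader;

final class UninvertExample {
  // The wrapped reader exposes the listed fields as doc values, backed by a
  // heap-resident structure uninverted from the postings (the cost noted above).
  static DirectoryReader uninvert(DirectoryReader reader) throws IOException {
    Map<String, UninvertingReader.Type> mapping = new HashMap<>();
    mapping.put("category", UninvertingReader.Type.SORTED_SET_BINARY); // multi-valued string field
    mapping.put("price", UninvertingReader.Type.INTEGER);              // single-valued numeric field
    return UninvertingReader.wrap(reader, mapping);
  }
}
{code}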

> Remove unused UninvertedField
> -
>
> Key: SOLR-7190
> URL: https://issues.apache.org/jira/browse/SOLR-7190
> Project: Solr
>  Issue Type: Task
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 5.2, Trunk
>
>
> I was surprised to find that UninvertedField is no longer used in Solr. The 
> only references to UninvertedField are from the fieldValueCache inside 
> SolrIndexSearcher, and that itself is not used anywhere in SolrIndexSearcher 
> except for initialization and regeneration. I can't trace when Solr stopped 
> using this class, but in any case, we should remove it.
> On a related note, Lucene's DocTermOrds has a copy of the class-level 
> javadocs of UninvertedField (which extends DocTermOrds). This was done in 
> LUCENE-5666.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7190) Remove unused UninvertedField

2015-09-25 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907797#comment-14907797
 ] 

Alessandro Benedetti commented on SOLR-7190:


According to Shalin's comment, what will happen now if a request has 
facet.method=fc and we don't have docValues for the field of interest for the 
faceting calculations?

Have I misread the message? DocValues should not be mandatory, right?
Is there now another way to access the algorithm without docValues?

> Remove unused UninvertedField
> -
>
> Key: SOLR-7190
> URL: https://issues.apache.org/jira/browse/SOLR-7190
> Project: Solr
>  Issue Type: Task
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 5.2, Trunk
>
>
> I was surprised to find that UninvertedField is no longer used in Solr. The 
> only references to UninvertedField are from the fieldValueCache inside 
> SolrIndexSearcher, and that itself is not used anywhere in SolrIndexSearcher 
> except for initialization and regeneration. I can't trace when Solr stopped 
> using this class, but in any case, we should remove it.
> On a related note, Lucene's DocTermOrds has a copy of the class-level 
> javadocs of UninvertedField (which extends DocTermOrds). This was done in 
> LUCENE-5666.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 79 - Failure!

2015-09-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/79/
Java: multiarch/jdk1.7.0 -d64 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.update.HardAutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([7CB78DFEF0FCCF5A:C665E28673D2214F]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.update.HardAutoCommitTest.testCommitWithin(HardAutoCommitTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:start=0&rows=20&qt=standard&q=id:529&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10542 lines...]
   [junit4] S

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 330 - Still Failing

2015-09-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/330/

No tests ran.

Build Log:
[...truncated 52980 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.03 sec (5.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.4.0-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (794.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.4.0.tgz...
   [smoker] 66.2 MB in 0.08 sec (830.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.4.0.zip...
   [smoker] 76.5 MB in 0.09 sec (823.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.4.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.4.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.4.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 211 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 211 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.3.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?