[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15216 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15216/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseParallelGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:58927/p_ew/c/awholynewcollection_0: non ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:58927/p_ew/c/awholynewcollection_0: non ok status: 500, message:Server Error
	at __randomizedtesting.SeedInfo.seed([CA4F90A4381D77E8:421BAF7E96E11A10]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:520)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 701 - Still Failing

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/701/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([41017B4938DCBABE]:0)
at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453)
	at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225)
	at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
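The "opens=51 closes=50" failure comes from the test harness counting every SolrIndexSearcher opened and closed during the suite and asserting that the counts match at teardown. A minimal illustrative sketch of that bookkeeping follows; the class and method names here are hypothetical, not the actual SolrTestCaseJ4 code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the bookkeeping behind "opens=51 closes=50":
// count resource opens and closes, and fail the suite when they disagree.
public class SearcherTracker {
    private final AtomicInteger opens = new AtomicInteger();
    private final AtomicInteger closes = new AtomicInteger();

    public void onOpen()  { opens.incrementAndGet(); }
    public void onClose() { closes.incrementAndGet(); }

    // Called at suite teardown, analogous to endTrackingSearchers().
    public void endTracking() {
        int o = opens.get(), c = closes.get();
        if (o != c) {
            throw new AssertionError("ERROR: SolrIndexSearcher opens=" + o + " closes=" + c);
        }
    }

    public static void main(String[] args) {
        SearcherTracker t = new SearcherTracker();
        t.onOpen();
        t.onClose();
        t.endTracking();          // balanced: passes
        t.onOpen();               // one searcher opened but never closed
        try {
            t.endTracking();
            throw new IllegalStateException("leak not detected");
        } catch (AssertionError expected) {
            System.out.println(expected.getMessage()); // ERROR: SolrIndexSearcher opens=2 closes=1
        }
    }
}
```

The real check is stricter (it also dumps the stack of the unclosed searcher), but the failure condition is this simple counter mismatch.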


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=8407, name=searcherExecutor-3431-thread-1, state=WAITING, group=TGRP-TestLazyCores]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
	at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=8407, name=searcherExecutor-3431-thread-1, state=WAITING, group=TGRP-TestLazyCores]
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
	at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([41017B4938DCBABE]:0)
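The leaked thread is parked in LinkedBlockingQueue.take(), which is the idle state of a ThreadPoolExecutor worker: this is what an ExecutorService looks like when it is never shut down. A standalone sketch (not Solr code; the thread name merely echoes the leak report) reproducing that state and the shutdown() fix:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Standalone illustration of the leak pattern above: an ExecutorService
// whose worker idles in LinkedBlockingQueue.take() until shutdown().
public class ExecutorLeakDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService searcherExecutor = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "searcherExecutor-demo"));
        searcherExecutor.submit(() -> {}).get();  // task finishes; worker stays parked

        Thread worker = findThread("searcherExecutor-demo");
        // Without a shutdown, this thread would be reported as leaked
        // (state typically WAITING, parked in the queue's take()).
        System.out.println("before shutdown, alive=" + worker.isAlive());

        searcherExecutor.shutdown();              // the fix: release the idle worker
        searcherExecutor.awaitTermination(5, TimeUnit.SECONDS);
        worker.join(5000);
        System.out.println("after shutdown, alive=" + worker.isAlive());
    }

    static Thread findThread(String name) {
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (name.equals(t.getName())) return t;
        }
        throw new IllegalStateException("thread not found: " + name);
    }
}
```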


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=8407, name=searcherExecutor-3431-thread-1, state=WAITING, group=TGRP-TestLa

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 408 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/408/

No tests ran.

Build Log:
[...truncated 53104 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (10.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.0-src.tgz...
   [smoker] 28.7 MB in 0.04 sec (766.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.tgz...
   [smoker] 66.2 MB in 0.09 sec (753.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.zip...
   [smoker] 76.6 MB in 0.10 sec (786.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (22.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.0-src.tgz...
   [smoker] 37.4 MB in 0.46 sec (80.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.tgz...
   [smoker] 130.1 MB in 2.08 sec (62.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.zip...
   [smoker] 137.9 MB in 2.39 sec (57.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.0.tgz...
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from /

[jira] [Updated] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8419:
---
Attachment: SOLR_8419.patch

The attached patch:
* Fixes the invalid/confusing response in the distributed single-pass
situation.
* Removes {{uniqueKeyFieldName}} as a key in the TV response NamedList. Okay, I
didn't have to do this, but it seemed totally out of place;
HighlightComponent & DebugComponent don't do this.
* Adds a test that fails without these changes -- the distrib.singlePass case.

The changes also allow for an eventual refactoring of the common code in
finishStage (the loop filling {{arr}}), which is the part affected by the
distrib.singlePass bug in 3 search components. I won't do that refactoring
here, though; I'll do it in SOLR-8059.

Assuming tests pass I'll commit this in a couple days.

> TermVectorComponent distributed-search issues
> -
>
> Key: SOLR-8419
> URL: https://issues.apache.org/jira/browse/SOLR-8419
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.5
>
> Attachments: SOLR_8419.patch
>
>
> TermVectorComponent has supported distributed-search since SOLR-3229 added it.  
> Unlike most other components, this one tries to support schemas without a 
> UniqueKey.  However, its logic for attempting to do this was made faulty with 
> the introduction of distrib.singlePass, and furthermore this part wasn't 
> tested anyway.  In this issue I want to remove support for schemas lacking a 
> UniqueKey with this component (only for distributed-search).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8419:
---
Issue Type: Bug  (was: Improvement)

> TermVectorComponent distributed-search issues
> -
>
> Key: SOLR-8419
> URL: https://issues.apache.org/jira/browse/SOLR-8419
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.5
>
>
> TermVectorComponent has supported distributed-search since SOLR-3229 added it.  
> Unlike most other components, this one tries to support schemas without a 
> UniqueKey.  However, its logic for attempting to do this was made faulty with 
> the introduction of distrib.singlePass, and furthermore this part wasn't 
> tested anyway.  In this issue I want to remove support for schemas lacking a 
> UniqueKey with this component (only for distributed-search).  






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15215 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15215/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([C2A9D8A16D754120]:0)




Build Log:
[...truncated 11024 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestMiniSolrCloudClusterSSL_C2A9D8A16D754120-001/init-core-data-001
   [junit4]   2> 274168 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 274170 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 274170 INFO  (Thread-776) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 274170 INFO  (Thread-776) [] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 274270 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.ZkTestServer start zk server on port:59148
   [junit4]   2> 274270 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 274270 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 274273 INFO  (zkCallback-183-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@111416 name:ZooKeeperConnection Watcher:127.0.0.1:59148 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 274273 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 274273 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 274273 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 274277 INFO  (SUITE-TestMiniSolrCloudClusterSSL-seed#[C2A9D8A16D754120]-worker) [] o.a.s.c.c.SolrZkClient makePath: /solr/clusterprops.json
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-1) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-2) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-4) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-5) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274292 INFO  (jetty-launcher-182-thread-3) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@5f0f78{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@ee8c81{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@19ef8f4{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-5) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@101d8d2{/solr,null,AVAILABLE}
   [junit4]   2> 274295 INFO  (jetty-launcher-182-thread-4) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@fb41ff{/solr,null,AVAILABLE}
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-3) [] o.e.j.u.s.SslContextFactory x509=X509@f0c41(solrtest,h=[],w=[]) for SslContextFactory@d5c651(file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore)
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-5) [] o.e.j.u.s.SslContextFactory x509=X509@19cdbe7(solrtest,h=[],w=[]) for SslContextFactory@962662(file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore)
   [junit4]   2> 274302 INFO  (jetty-launcher-182-thread-4) [] o.e.j.u.s.SslContextFactory x509=X509@228d94(solrtest,h=[],w=[]) for SslContextFactory@1ac5673(file:///home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/worksp

Re: JSON "fields" vs defaults

2015-12-15 Thread Jack Krupansky
In a normal query multiple fl parameters are additive, but they
collectively override whatever fl parameter(s) may have been specified in
"defaults", right? I mean, that's why Solr has "appends" in addition to
"defaults", right?

Ah, but I see the JSON Request API doc says "Multi-valued elements like
fields and filter are appended", which seems to imply that the "defaults"
section will be treated as if it were "appends", at least for how "fields"
is handled.

See:
https://cwiki.apache.org/confluence/display/solr/JSON+Request+API

Filter seems to make sense for this auto-appends mode, but fields/fl doesn't
seem to benefit from appending; treating the defaults section in the
traditional overriding manner seems preferable, I think.

-- Jack Krupansky

On Tue, Dec 15, 2015 at 9:06 PM, Yonik Seeley  wrote:

> Multiple "fl" parameters are additive, so it would make sense that
> "fields" is also (for fl and field in the same request).  If that's
> true for "fl" as a default and "fl" as a query param, then it seems
> like that should be true for the other variants.
>
> If "fl" as a query param and "fl" in a JSON params block don't act the
> same, that should probably be a bug?
>
> -Yonik
>
>
> On Tue, Dec 15, 2015 at 7:55 PM, Jack Krupansky
>  wrote:
> > Yonik? The doc is weak in this area. In fact, I see a comment on it from
> > Cassandra directed to you to verify the JSON to parameter mapping. It
> would
> > be nice to have a clear statement of the semantics for JSON "fields"
> > parameter and how it may or may not interact with the Solr fl parameter.
> >
> > -- Jack Krupansky
> >
> > On Thu, Dec 10, 2015 at 3:55 PM, Ryan Josal  wrote:
> >>
> >> I didn't see a Jira open in this, so I wanted to see if it's expected.
> If
> >> you pass "fields":[...] in a SOLR JSON API request, it does not override
> >> what's the default in the handler config.  I had fl=* as a default, so
> I saw
> >> "fields" have no effect, while "params":{"fl":...} worked as expected.
> >> After stepping through the debugger I noticed it was just appending
> "fields"
> >> at the end of everything else (including after solr config appends, if
> it
> >> makes a difference).
> >>
> >> If this is not expected I will create a Jira and maybe have time to
> >> provide a patch.
> >>
> >> Ryan
>
>
>

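For context, the two mechanisms being contrasted in this thread look roughly like the following in a request-handler configuration (the handler and values here are hypothetical, shown only to illustrate the semantics): parameters under "defaults" apply only when the request omits them, while parameters under "appends" are added to whatever the request supplies.

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- used only when the request supplies no fl of its own -->
    <str name="fl">*</str>
  </lst>
  <lst name="appends">
    <!-- added to every request's filter queries -->
    <str name="fq">inStock:true</str>
  </lst>
</requestHandler>
```

The behavior Ryan reports is that a JSON body such as {"query": "*:*", "fields": ["id", "name"]} has its "fields" appended to the fl=* default rather than overriding it, so the default wins and "fields" appears to have no effect.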

[jira] [Comment Edited] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059442#comment-15059442
 ] 

Joel Bernstein edited comment on SOLR-8191 at 12/16/15 3:37 AM:


Sure, I should have some time to review this ticket and SOLR-8190 tomorrow.


was (Author: joel.bernstein):
Sure, I should have some time review this ticket and SOLR-8190 tomorrow.

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






[jira] [Commented] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059442#comment-15059442
 ] 

Joel Bernstein commented on SOLR-8191:
--

Sure, I should have some time review this ticket and SOLR-8190 tomorrow.

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






[jira] [Commented] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-15 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059422#comment-15059422
 ] 

Jason Gerlowski commented on SOLR-8191:
---

After a little more attention, I think it'd probably be safer to null-check in 
{{constructStreams}} as well (see above).  That said, I'm not sure whether there 
was a rationale for avoiding this so far. [~risdenk], was there a reason you 
hadn't done this in your initial patch?

In any case, the patch does fix the NPEs exposed by closing the streams in the 
Streaming tests, so in that respect it looks good to me.

Can someone with more familiarity with this part of the codebase (ideally 
someone willing to consider merging this) please take a look at this patch?

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.
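A hedged sketch of the kind of null-guarded close() being discussed; the field names follow the issue description, but this is not the actual CloudSolrStream source:

```java
import java.io.IOException;
import java.util.List;

// Illustrative only: guard both fields so calling close() before the
// stream was ever opened cannot throw NullPointerException.
public class SafeCloseSketch {
    private AutoCloseable cloudSolrClient;       // null if never opened
    private List<AutoCloseable> solrStreams;     // null before open()

    public void close() throws IOException {
        if (solrStreams != null) {
            for (AutoCloseable s : solrStreams) {
                try { s.close(); } catch (Exception e) { throw new IOException(e); }
            }
        }
        if (cloudSolrClient != null) {
            try { cloudSolrClient.close(); } catch (Exception e) { throw new IOException(e); }
        }
    }

    public static void main(String[] args) throws IOException {
        // Without the null checks this pattern is exactly the reported NPE.
        new SafeCloseSketch().close();
        System.out.println("closed cleanly");
    }
}
```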






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1047 - Still Failing

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1047/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
KeeperErrorCode = ConnectionLoss for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /clusterstate.json
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
	at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
	at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
	at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
	at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
	at org.apache.solr.common.cloud.ZkStateReader.refreshLegacyClusterState(ZkStateReader.java:478)
	at org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:258)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForCollectionToDisappear(AbstractDistribZkTestBase.java:199)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.assertCollectionNotExists(AbstractFullDistribZkTestBase.java:1772)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deleteCollectionRemovesStaleZkCollectionsNode(CollectionsAPIDistributedZkTest.java:198)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:165)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOve

Re: JSON "fields" vs defaults

2015-12-15 Thread Yonik Seeley
Multiple "fl" parameters are additive, so it would make sense that
"fields" is additive as well (for "fields" and "fl" in the same request).
If that's true for "fl" as a default and "fl" as a query param, then it
seems like it should be true for the other variants.

If "fl" as a query param and "fl" in a JSON params block don't act the
same, that should probably be considered a bug.

-Yonik


On Tue, Dec 15, 2015 at 7:55 PM, Jack Krupansky
 wrote:
> Yonik? The doc is weak in this area. In fact, I see a comment on it from
> Cassandra directed to you to verify the JSON to parameter mapping. It would
> be nice to have a clear statement of the semantics for JSON "fields"
> parameter and how it may or may not interact with the Solr fl parameter.
>
> -- Jack Krupansky
>
> On Thu, Dec 10, 2015 at 3:55 PM, Ryan Josal  wrote:
>>
>> I didn't see a Jira open on this, so I wanted to check whether it's expected.
>> If you pass "fields":[...] in a Solr JSON API request, it does not override
>> the default in the handler config.  I had fl=* as a default, so I saw
>> "fields" have no effect, while "params":{"fl":...} worked as expected.
>> After stepping through the debugger I noticed it was just appending "fields"
>> after everything else (including after solrconfig appends, if that
>> makes a difference).
>>
>> If this is not expected I will create a Jira and maybe have time to
>> provide a patch.
>>
>> Ryan
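
For concreteness, the behavior described above can be reproduced with a JSON
Request API body along these lines (the collection, handler defaults, and
field names here are hypothetical, not taken from the thread):

```json
{
  "query": "*:*",
  "fields": ["id", "name"],
  "params": { "fl": "id,score" }
}
```

With fl=* configured as a handler default, the report above is that "fields"
is appended after the configured default (so the default still wins), while
the "fl" inside the "params" block overrides it the way a query parameter
would.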

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2015-12-15 Thread Arcadius Ahouansou (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arcadius Ahouansou updated SOLR-7964:
-
Attachment: SOLR_7964.patch

- Fixed the code to allow highlighting while building the final Solr response;
the Lucene response remains unchanged.

- Did some clean-up from LUCENE-6004 so that {{lookup()}} is no longer
overridden for highlighting in {{AnalyzingInfixLookupFactory.java}} and
{{BlendedInfixLookupFactory.java}}. Instead, the highlighted field is set while
building the final Solr response, leaving the Lucene response unchanged. This
avoids an unnecessary loop and data copy.

- All tests are passing.

[~mikemccand], could you please be kind enough to have a look when you have the
chance?

Thank you very much.

 

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




SortingLeafReader and IndexWriter.addIndexes

2015-12-15 Thread John Wang
Hi folks,

I am interested in using SortingLeafReader to sort my index. According
to the examples, calling IndexWriter.addIndexes on the wrapping
SortingLeafReader would do the trick.

In recent releases, the IndexWriter.addIndexes API only takes a
CodecReader. Is there another way to do index sorting?

Appreciate any help.

Thanks,
-John
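
One possible route (a sketch, not verified against a specific release): a
LeafReader such as SortingLeafReader (in the lucene-misc module) can be adapted
to the CodecReader that the newer addIndexes signature requires via
SlowCodecReaderWrapper. The sort field name here is a made-up example:

```java
import org.apache.lucene.index.*;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.Directory;

// Sketch: copy an existing index into sortedDir, sorted by a hypothetical
// "timestamp" field. Assumes Lucene 5.x with lucene-misc on the classpath.
void sortIndex(Directory srcDir, Directory sortedDir, IndexWriterConfig cfg) throws Exception {
  Sort sort = new Sort(new SortField("timestamp", SortField.Type.LONG));
  try (DirectoryReader reader = DirectoryReader.open(srcDir);
       IndexWriter writer = new IndexWriter(sortedDir, cfg)) {
    for (LeafReaderContext ctx : reader.leaves()) {
      LeafReader sorted = SortingLeafReader.wrap(ctx.reader(), sort);
      // SlowCodecReaderWrapper adapts a plain LeafReader to the CodecReader
      // that addIndexes(CodecReader...) now expects.
      writer.addIndexes(SlowCodecReaderWrapper.wrap(sorted));
    }
  }
}
```

As the class name suggests, the wrapper is slow (it re-encodes everything
through the codec APIs), but that is inherent to sorting an existing index
this way.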


[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 260 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/260/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live nodes:[]collections:{c1=DocCollection(c1)={"shards":{"shard1":{"parent":null, "range":null, "state":"active", "replicas":{"core_node1":{"base_url":"http://127.0.0.1/solr", "node_name":"node1", "core":"core1", "roles":"", "state":"down", "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr",
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
    at __randomizedtesting.SeedInfo.seed([F137852DF4029A5B:992986C11692C015]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.verifyReplicaStatus(AbstractDistribZkTestBase.java:237)
    at org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1262)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15214 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15214/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
    at __randomizedtesting.SeedInfo.seed([371390BD3BD05494:BF47AF67952C396C]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.SyncSliceTest.waitTillAllNodesActive(SyncSliceTest.java:239)
    at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:167)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)


[jira] [Updated] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2015-12-15 Thread Arcadius Ahouansou (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arcadius Ahouansou updated SOLR-7964:
-
Description: 
When using the new suggester context filtering query param 
{{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
{{suggest.highlight=true}} has no effect.


  was:
When using the new suggester context filtering query param {{suggest.cfq}} 
introduced in SOLR-7888, the param {{suggest.highlight=true}} has no effect.

This is a bug that needs to be addressed here


> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.






Re: JSON "fields" vs defaults

2015-12-15 Thread Jack Krupansky
Yonik? The doc is weak in this area. In fact, I see a comment on it from
Cassandra directed to you to verify the JSON to parameter mapping. It would
be nice to have a clear statement of the semantics for JSON "fields"
parameter and how it may or may not interact with the Solr fl parameter.

-- Jack Krupansky

On Thu, Dec 10, 2015 at 3:55 PM, Ryan Josal  wrote:

> I didn't see a Jira open on this, so I wanted to check whether it's expected.
> If you pass "fields":[...] in a Solr JSON API request, it does not override
> the default in the handler config.  I had fl=* as a default, so I
> saw "fields" have no effect, while "params":{"fl":...} worked as expected.
> After stepping through the debugger I noticed it was just appending
> "fields" after everything else (including after solrconfig
> appends, if that makes a difference).
>
> If this is not expected I will create a Jira and maybe have time to
> provide a patch.
>
> Ryan
>


[jira] [Commented] (SOLR-8191) CloudSolrStream close method NullPointerException

2015-12-15 Thread Jason Gerlowski (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059213#comment-15059213 ]

Jason Gerlowski commented on SOLR-8191:
---

The current patch still applies cleanly.  Fixing this NPE might not be hugely
important on its own, but this bug is blocking SOLR-8190, which would be a
nice little improvement IMO.

Looking at {{CloudSolrStream}} a little closer though, it seems odd to perform 
a null-check on {{close()}}, but not any of the other places that 
{{cloudSolrClient}} is used.  For instance, check out the protected 
{{constructStreams()}} method, which is invoked on each call to {{open()}}.

Those are just my observations at a glance.  I'm not very familiar with the 
SolrJ code, so maybe this isn't actually an issue.  Just wanted to mention it.  
I'm going to tinker around with this more tonight to see if I can learn more.
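
To make the discussion concrete, the kind of null-guard being debated would
look roughly like this (a hypothetical excerpt; the actual CloudSolrStream
fields and close() body may differ):

```java
// Hypothetical illustration of a null-safe close(); not the actual
// CloudSolrStream source. Guards both fields mentioned in the ticket, so
// close() is safe even when the stream was never opened.
public void close() throws IOException {
  if (solrStreams != null) {
    for (TupleStream solrStream : solrStreams) {
      solrStream.close();
    }
  }
  if (cloudSolrClient != null) {
    cloudSolrClient.close();
  }
}
```

The observation above still applies: the same fields are dereferenced without
guards elsewhere (e.g. in constructStreams() via open()), so a close()-only
fix may be incomplete.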

> CloudSolrStream close method NullPointerException
> -
>
> Key: SOLR-8191
> URL: https://issues.apache.org/jira/browse/SOLR-8191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8191.patch
>
>
> CloudSolrStream doesn't check if cloudSolrClient or solrStreams is null 
> yielding a NullPointerException in those cases when close() is called on it.






[jira] [Created] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-15 Thread Nirmala Venkatraman (JIRA)
Nirmala Venkatraman created SOLR-8422:
-

 Summary: Basic Authentication plugin is not working correctly in 
solrcloud
 Key: SOLR-8422
 URL: https://issues.apache.org/jira/browse/SOLR-8422
 Project: Solr
  Issue Type: Bug
  Components: Authentication
Affects Versions: 5.3.1
 Environment: Solrcloud
Reporter: Nirmala Venkatraman


I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node solrcloud
with basic auth configured on sgdsolar1/2/3/4/7, listening on port 8984. We
have 64 collections, each having 2 replicas distributed across the 5 servers in
the solr cloud. A sample screenshot of the collection/shard locations is shown
below:

Step 1 - Our solr indexing tool sends a request to any one of the solr
servers in the solrcloud, and the request lands on a server which doesn't
have the collection.
Here is the request sent by the indexing tool to sgdsolar1, which includes the
correct BasicAuth credentials:

Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has
collection1, but no basic auth header is passed along.

As a result, sgdsolar2 throws a 401 error back to the source server sgdsolar1
and all the way back to the solr indexing tool:
9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
/solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
 HTTP/1.1" 401 366

2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
r:core_node1 x:collection1_shard1_replica1] 
o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. failed 
permission 
org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall USER_REQUIRED 
auth header null context : userPrincipal: [null] type: [READ], collections: 
[collection1,], Path: [/get] path : /get params 
:fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!

Step 3 - In another solrcloud , if the indexing tool sends the solr get request 
to the server that has the collection1, I see that basic authentication working 
as expected.

I double-checked and both the sgdsolar1 and sgdsolar2 servers have the patched
solr-core and solr-solrj jar files under the solr-webapp folder, provided via
the earlier patches that Anshum/Noble worked on:
SOLR-8167 fixes the POST issue
SOLR-8326 fixes PKIAuthenticationPlugin
SOLR-8355






[jira] [Updated] (SOLR-8190) Implement Closeable on TupleStream

2015-12-15 Thread Jason Gerlowski (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gerlowski updated SOLR-8190:
--
Attachment: SOLR-8190.patch

Would be nice to get this rolling again.  To keep it up to date, I've updated 
the patch to apply cleanly off of trunk.

Tests still fail due to the NPE addressed in the (unresolved) SOLR-8191.

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.
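
As an illustration of what the quoted description enables, once TupleStream
implements Closeable a caller could do something like the following (a sketch;
the constructor arguments and the process() helper are hypothetical):

```java
// Hypothetical usage: with TupleStream implementing Closeable, the stream
// is closed by try-with-resources even if open() or read() throws.
try (CloudSolrStream stream = new CloudSolrStream(zkHost, "collection1", params)) {
  stream.open();
  Tuple tuple = stream.read();
  while (!tuple.EOF) {
    process(tuple);          // hypothetical per-tuple handler
    tuple = stream.read();
  }
}
```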






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundent chroot specified

2015-12-15 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059140#comment-15059140 ]

Hoss Man commented on SOLR-8421:


tomas: sure, and maybe (in a weird situation) that's a totally valid and 
intended chroot, but having some better logging about which chroot is used, and 
maybe warning if the chroot looks suspicious, would help.

(Ideally we could connect w/o the chroot ourselves first, and log a very 
explicit warning if that path doesn't exist -- noting exactly what path is 
being attempted -- but i'm not sure how nicely that type of approach plays with 
the various ZK security models that i know people are / have-been working on 
supporting)

> improve error message when zkHost with multiple hosts and redundent chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it work with multiple nodes and chroot specified.
> {panel}






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundent chroot specified

2015-12-15 Thread JIRA

[ https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059131#comment-15059131 ]

Tomás Fernández Löbbe commented on SOLR-8421:
-

I thought about it. I believe the bad error message is because it's treating
everything after the first slash as the chroot ("/test,localhost:2182/test").
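
If the intent is a single chroot shared by all hosts, the standard ZooKeeper
connect-string form is to append the chroot once, after the final host:port
pair, so presumably the working invocation here would be:

```
./bin/solr -f -c -z localhost:2181,localhost:2182/test
```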

> improve error message when zkHost with multiple hosts and redundent chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it work with multiple nodes and chroot specified.
> {panel}






[jira] [Commented] (SOLR-8421) improve error message when zkHost with multiple hosts and redundent chroot specified

2015-12-15 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059132#comment-15059132 ]

Hoss Man commented on SOLR-8421:


At a minimum, after parsing the zkhosts string but before any zk connections
are attempted, we could heuristically look at the chroot we've parsed and
log a WARN if it looks like it mistakenly contains other host:port pairs.
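
Such a heuristic could be as simple as flagging a parsed chroot that contains
characters which only make sense in a host list. A minimal, self-contained
sketch (the class and method names are made up):

```java
// Hypothetical heuristic: a legitimate chroot is a plain znode path, so a
// ',' or ':' inside it suggests the user repeated the chroot per host.
public class ChrootCheck {
  public static boolean looksLikeRepeatedChroot(String chroot) {
    return chroot.contains(",") || chroot.contains(":");
  }

  public static void main(String[] args) {
    // zkHost=localhost:2181/test,localhost:2182/test parses to this chroot:
    System.out.println(looksLikeRepeatedChroot("/test,localhost:2182/test")); // true
    System.out.println(looksLikeRepeatedChroot("/test"));                     // false
  }
}
```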

> improve error message when zkHost with multiple hosts and redundent chroot 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port  in 
> the zkhosts string, the error they get is confusing.
> we should try to improve the error/logging to make it more evident what the 
> problem is
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And getting error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> Node "/test" exists in zookeeper. And both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> works fine.
> But I cannot get it work with multiple nodes and chroot specified.
> {panel}






[jira] [Updated] (SOLR-8421) improve error message when zkHost with multiple hosts and redundent chroot specified

2015-12-15 Thread Hoss Man (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated SOLR-8421:
---
Description: 
If a user mistakenly tries to specify the chroot on every zk host:port  in the 
zkhosts string, the error they get is confusing.

we should try to improve the error/logging to make it more evident what the 
problem is

{panel:title=initial bug report from user}
I'm trying to run Solr Cloud with following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And getting error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

Node "/test" exists in zookeeper. And both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

works fine.

But I cannot get it work with multiple nodes and chroot specified.
{panel}

  was:
I'm trying to run Solr Cloud with following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And getting error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

Node "/test" exists in zookeeper. And both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

works fine.

But I cannot get it work with multiple nodes and chroot specified.

 Issue Type: Improvement  (was: Bug)
Summary: improve error message when zkHost with multiple hosts and 
redundant chroot specified  (was: zkHost with chroot and multiple hosts not 
working)

> improve error message when zkHost with multiple hosts and redundant 
> specified
> 
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> If a user mistakenly tries to specify the chroot on every zk host:port in 
> the zkhosts string, the error they get is confusing.
> We should improve the error message and logging to make the problem more 
> evident.
> {panel:title=initial bug report from user}
> I'm trying to run Solr Cloud with the following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And I get this error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> The node "/test" exists in ZooKeeper, and both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> work fine.
> But I cannot get it to work with multiple nodes and a chroot specified.
> {panel}






[jira] [Reopened] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-8421:


Why don't we repurpose this issue to make the error message and logging 
clearer about what's happening in these cases?

> zkHost with chroot and multiple hosts not working
> -
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> I'm trying to run Solr Cloud with the following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And I get this error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> The node "/test" exists in ZooKeeper, and both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> work fine.
> But I cannot get it to work with multiple nodes and a chroot specified.






[jira] [Closed] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe closed SOLR-8421.
---
Resolution: Not A Problem

The chroot needs to be added only once, after the list of hosts, not after 
each host. See 
https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-ZooKeeperchroot
Please use the users list for this kind of question.
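To illustrate the point (this is a hedged sketch only, with hypothetical names, not Solr's actual ZkHost parsing code): a connect string may carry at most one chroot, appended after the last host:port in the comma-separated list. A validator like the one below could detect the redundant-chroot mistake and produce the clearer error that SOLR-8421 asks for.

```python
# Hypothetical sketch, NOT Solr's implementation: validate a zkHost string of
# the form "host1:port1,host2:port2/chroot" and reject a chroot attached to
# any host other than the last one.

def check_zk_host(zk_host: str) -> str:
    """Return the chroot ('' if none) for a well-formed zkHost string.

    Raises ValueError when a chroot appears on a host that is not the last
    one in the list, suggesting the corrected form.
    """
    hosts = zk_host.split(",")
    chroot = ""
    for i, host in enumerate(hosts):
        if "/" in host:
            addr, path = host.split("/", 1)
            if i != len(hosts) - 1:
                # Build a corrected suggestion by stripping every chroot and
                # re-appending it once, after the final host.
                bare = ",".join(h.split("/", 1)[0] for h in hosts)
                raise ValueError(
                    "chroot '/%s' must appear only once, after the last host "
                    "in the list, e.g. %s/%s" % (path, bare, path))
            chroot = "/" + path
    return chroot
```

With this check, `localhost:2181/test,localhost:2182/test` fails fast with an actionable message instead of the confusing "znode doesn't exist" error, while `localhost:2181,localhost:2182/test` is accepted.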

> zkHost with chroot and multiple hosts not working
> -
>
> Key: SOLR-8421
> URL: https://issues.apache.org/jira/browse/SOLR-8421
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
> Environment: Solr Cloud
>Reporter: Dmitry Myaskovskiy
>
> I'm trying to run Solr Cloud with the following command:
> {code}
> ./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
> {code}
> And I get this error:
> {code}
> 749  ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
> {code}
> The node "/test" exists in ZooKeeper, and both:
> {code}
> ./bin/solr -f -c -z localhost:2181,localhost:2182
> {code}
> {code}
> ./bin/solr -f -c -z localhost:2181/test
> {code}
> work fine.
> But I cannot get it to work with multiple nodes and a chroot specified.






[jira] [Commented] (SOLR-7996) Evaluate moving SolrIndexSearcher creation logic to a factory

2015-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059047#comment-15059047
 ] 

Tomás Fernández Löbbe commented on SOLR-7996:
-

[~jej2003] (in reply to [this 
email|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201512.mbox/%3CCAL3VrCereW7L7xjDab7B8nkd0Xw9-HfPH_BoX=cn-_byb9r...@mail.gmail.com%3E]),
 some time ago I worked on a SolrSearcherFactory as part of SOLR-5621 (a more 
ambitious Jira than this one). The idea now is slightly different, but maybe it 
helps; something similar is what I had in mind when I created this Jira (along 
with making the factory configurable).
Maybe we should also move the "wrapReader" method from SolrIndexSearcher to the 
factory?
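The factory idea above can be sketched in miniature (hypothetical names throughout; this is not the SOLR-5621 patch, just an illustration of the pattern being discussed): searcher construction, including the reader-wrapping hook, moves out of the core into a pluggable factory that subclasses can override.

```python
# Illustrative sketch only: a configurable searcher factory that owns both
# searcher creation and the "wrapReader"-style hook.

class Searcher:
    """Stand-in for a SolrIndexSearcher-like object."""
    def __init__(self, core_name, reader):
        self.core_name = core_name
        self.reader = reader

class SearcherFactory:
    """Default factory; subclasses override wrap_reader() or create()."""

    def wrap_reader(self, reader):
        # Hook analogous to reader wrapping: decorate the raw index
        # reader before the searcher is built. Default: no-op.
        return reader

    def create(self, core_name, reader):
        # All searcher construction funnels through here, so the core
        # no longer needs to know how searchers are built.
        return Searcher(core_name, self.wrap_reader(reader))
```

A core configured with a custom factory subclass could then change wrapping or construction behavior for advanced use cases without touching core-creation logic, which is the unit-testing and extensibility win the issue describes.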

> Evaluate moving SolrIndexSearcher creation logic to a factory
> -
>
> Key: SOLR-7996
> URL: https://issues.apache.org/jira/browse/SOLR-7996
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>
> Moving this logic away from SolrCore is already a win, plus it should make it 
> easier to unit test and extend for advanced use cases.
> See discussion here: http://search-lucene.com/m/eHNlWNCtoeLwQp 






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2889 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2889/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9021, name=Thread-3402, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9021, name=Thread-3402, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: 
IOException occured when talking to server at: 
http://127.0.0.1:59676/collection2_shard4_replica1
at __randomizedtesting.SeedInfo.seed([A900DE3E4B8F9D2D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:635)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:982)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:642)
Caused by: org.apache.solr.client.solrj.SolrServerException: IOException 
occured when talking to server at: 
http://127.0.0.1:59676/collection2_shard4_replica1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:589)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient$2.call(CloudSolrClient.java:608)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient$2.call(CloudSolrClient.java:605)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.NoHttpResponseException: 127.0.0.1:59676 failed to 
respond
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
... 11 more


FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, test, 
score],okFieldNames=[null, id, test, score]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=(globs=[],fields=[[id, test, score],okFieldNames=[null, id, test, 
s

[jira] [Created] (SOLR-8421) zkHost with chroot and multiple hosts not working

2015-12-15 Thread Dmitry Myaskovskiy (JIRA)
Dmitry Myaskovskiy created SOLR-8421:


 Summary: zkHost with chroot and multiple hosts not working
 Key: SOLR-8421
 URL: https://issues.apache.org/jira/browse/SOLR-8421
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.4
 Environment: Solr Cloud
Reporter: Dmitry Myaskovskiy


I'm trying to run Solr Cloud with the following command:

{code}
./bin/solr -f -c -z localhost:2181/test,localhost:2182/test
{code}

And I get this error:

{code}
749  ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified in 
ZkHost but the znode doesn't exist. localhost:2181/test,localhost:2182/test
{code}

The node "/test" exists in ZooKeeper, and both:

{code}
./bin/solr -f -c -z localhost:2181,localhost:2182
{code}

{code}
./bin/solr -f -c -z localhost:2181/test
{code}

work fine.

But I cannot get it to work with multiple nodes and a chroot specified.






[jira] [Updated] (SOLR-8395) query-time join (with scoring) for numeric fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8395:
---
Attachment: SOLR-8395.patch

It nearly shocked me: the first patch with multivalued fields ("uid_ls_dv", 
"rel_from_ls_dv") works out of the box even without LUCENE-5868!
The answer is in 
[TrieField.createFields()|https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/schema/TrieField.java#L725]:
 for multivalued docValues numerics, Solr creates SortedSetDocValues encoded 
as numbers, and that works fine as-is. See also SOLR-7878. 
Thus, the only way to break the test is to use single-valued docValues fields. 
That's what I did in [^SOLR-8395.patch]. Now it fails:
{code}
java.lang.IllegalStateException: unexpected docvalues type NUMERIC for field 
'rel_to_l_dv' (expected one of [SORTED, SORTED_SET]). Use UninvertingReader or 
index with docvalues.
..
at org.apache.lucene.index.DocValues.checkField(DocValues.java:208)
at org.apache.lucene.index.DocValues.getSortedSet(DocValues.java:306)
at 
org.apache.lucene.search.join.DocValuesTermsCollector.lambda$1(DocValuesTermsCollector.java:59)
at ..
at 
org.apache.lucene.search.join.JoinUtil.createJoinQuery(JoinUtil.java:146)
..
org.apache.solr.search.join.TestScoreJoinQPNoScore.testJoinNumeric(TestScoreJoinQPNoScore.java:71)
{code}

If you are going to work on this, please make sure both ints and longs are 
covered; I see one more trick in TrieField.createFields(). 
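The failure quoted above comes from a docValues type check: a single-valued numeric field carries NUMERIC docValues, while the join consumer asks for a SORTED or SORTED_SET view. A small sketch (mirroring the observable behavior of Lucene's DocValues.checkField, not its actual code) shows the shape of that check:

```python
# Hedged sketch, not Lucene source: reproduce the kind of type check that
# raises "unexpected docvalues type NUMERIC ... (expected one of
# [SORTED, SORTED_SET])" for single-valued numeric docValues fields.

def get_sorted_set(field_name, actual_type):
    """Return a sorted-set view marker, or raise if the stored docValues
    type cannot be read as SORTED/SORTED_SET."""
    expected = ("SORTED", "SORTED_SET")
    if actual_type not in expected:
        raise RuntimeError(
            "unexpected docvalues type %s for field '%s' (expected one of "
            "%s). Use UninvertingReader or index with docvalues."
            % (actual_type, field_name, list(expected)))
    return "ok"
```

So the multivalued fields pass (their docValues are SORTED_SET), while a single-valued field like "rel_to_l_dv" (NUMERIC) trips the check, which is exactly what the patch's new test exercises.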
  

> query-time join (with scoring) for numeric fields
> -
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test to the Solr 
> NoScore suite. Alongside that, we can set the _multipleValues_ parameter 
> based on the _fromField_ cardinality declared in the schema.






[jira] [Comment Edited] (SOLR-8374) Issue with _text_ field in schema file

2015-12-15 Thread Romit Singhai (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058928#comment-15058928
 ] 

Romit Singhai edited comment on SOLR-8374 at 12/15/15 10:07 PM:


Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2 as the 
comments in the schema.xml file are confusing.


was (Author: romits):
Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2 as the 
comments the schema.xml file are confusing.

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In data_driven_schema_configs, the warning says that the _text_ field can be 
> removed if not needed. The Hadoop indexer then fails to index data, as the 
> ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ 
> optional), as _text_ adds significantly to the index size even if not needed.






[jira] [Commented] (SOLR-8374) Issue with _text_ field in schema file

2015-12-15 Thread Romit Singhai (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058928#comment-15058928
 ] 

Romit Singhai commented on SOLR-8374:
-

Hi Varun,

This information will be useful for people using Solr 5.2.1 on HDP2.3.2, as 
the comments in the schema.xml file are confusing.

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In data_driven_schema_configs, the warning says that the _text_ field can be 
> removed if not needed. The Hadoop indexer then fails to index data, as the 
> ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ 
> optional), as _text_ adds significantly to the index size even if not needed.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14915 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14915/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=[],fields=[score, [test, id],okFieldNames=[null, score, test, 
id]],reqFieldNames=[id,...> but was:<...s=[],fields=[score, [id, 
test],okFieldNames=[null, score, id, test]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=[],fields=[score, [test, 
id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but 
was:<...s=[],fields=[score, [id, test],okFieldNames=[null, score, id, 
test]],reqFieldNames=[id,...>
at 
__randomizedtesting.SeedInfo.seed([B7F092E8551B055A:66D125E96A0168F2]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLea

[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058884#comment-15058884
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720257 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720257 ]

SOLR-8388: ReturnFieldsTest.testToString() fix (don't assume ordering within 
sets' values) (merge in revision 1720253 from trunk)
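The principle behind that fix can be shown in a few lines (an illustrative sketch, not the actual ReturnFieldsTest code): when toString() renders an unordered set, a test should assert on set membership rather than on the rendered string, whose element order can vary between JVMs and runs.

```python
# Sketch: the two strings below differ only in the iteration order of an
# unordered set's elements, exactly like the ReturnFieldsTest failures above.
fields_a = "fields=[score, test, id]"
fields_b = "fields=[score, id, test]"

def parse_fields(rendered):
    """Extract the bracketed field names as a set, discarding order."""
    inner = rendered[rendered.index("[") + 1 : rendered.index("]")]
    return {name.strip() for name in inner.split(",")}

# Fragile: fields_a == fields_b is False, so a string assertion flaps.
# Robust: comparing parsed sets ignores ordering.
assert parse_fields(fields_a) == parse_fields(fields_b)
```

A direct `assertEquals(fields_a, fields_b)` on the rendered strings is exactly the kind of assertion the Jenkins runs above caught failing nondeterministically.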

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2944 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2944/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:52435/solr/testSolrCloudCollection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:52435/solr/testSolrCloudCollection]
at 
__randomizedtesting.SeedInfo.seed([5F9367AFC2610A47:B140D97F478D30A3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuite

[jira] [Resolved] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8372.

   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before ReplicationStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date. 
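The data-loss window described above can be modeled as a tiny state machine (a simplified hypothetical sketch with invented names, not Solr's UpdateLog): canceling recovery drops buffered updates and returns the log to active mode, so updates arriving before the retry are recorded without any gap flag even though the replica never caught up.

```python
# Hedged sketch of the scenario, not Solr code: buffered entries would carry
# a gap flag; after a canceled recovery the log is ACTIVE again, so new
# entries look like normal, gap-free updates.

FLAG_GAP = 0x1

class UpdateLog:
    def __init__(self):
        self.state = "ACTIVE"
        self.entries = []  # list of (update, flags)

    def buffer_updates(self):
        # Recovery via index replication begins: start buffering.
        self.state = "BUFFERING"

    def drop_buffered_updates(self):
        # Recovery canceled: the problematic transition straight back
        # to ACTIVE, with no marker that recovery was incomplete.
        self.state = "ACTIVE"

    def add(self, update):
        flags = FLAG_GAP if self.state == "BUFFERING" else 0
        self.entries.append((update, flags))

log = UpdateLog()
log.buffer_updates()        # replica starts recovering
log.drop_buffered_updates() # recovery canceled
log.add("doc1")             # leader update arrives before the retry
# "doc1" is logged with flags=0: after a restart it looks like a normal
# update and can wrongly make this replica appear most up to date.
```

This is why the fix matters: without some marker of the aborted recovery, log inspection after a bounce cannot distinguish the stale replica from a healthy one.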






[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 700 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/700/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([3EDFC1D20AC92D22]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([3EDFC1D20AC92D22]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=8586, name=searcherExecutor-3600-thread-1, state=WAITING, group=TGRP-TestLa

[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058828#comment-15058828
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720253 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1720253 ]

SOLR-8388: ReturnFieldsTest.testToString() fix (don't assume ordering within 
sets' values)
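The pitfall behind this fix is that Java's HashSet leaves iteration order unspecified, so any assertion against a set's toString() rendering is inherently fragile. A minimal plain-Java illustration (not the actual Solr test code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetOrderDemo {
    public static void main(String[] args) {
        // Two sets populated in different insertion orders...
        Set<String> a = new HashSet<>(Arrays.asList("score", "test", "id"));
        Set<String> b = new HashSet<>(Arrays.asList("id", "test", "score"));

        // ...are equal as sets, so asserting set equality is stable:
        System.out.println(a.equals(b)); // true

        // ...but the order in which elements appear in a.toString()
        // is an implementation detail the spec does not guarantee,
        // which is exactly the kind of assertion that flaps.
        System.out.println(a);
    }
}
```

Comparing the expected and actual values as sets (or sorting both before comparison) removes the ordering assumption entirely.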

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058814#comment-15058814
 ] 

ASF subversion and git services commented on SOLR-8372:
---

Commit 1720251 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720251 ]

SOLR-8372: continue buffering if recovery is canceled/failed

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before ReplicationStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date. 
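The data-loss window described above can be modeled as a toy state machine. This is an illustrative sketch only: the class and method names mirror the description (startBuffering, dropBufferedUpdates) but are invented, and Solr's real UpdateLog/RecoveryStrategy are far richer.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the pre-fix recovery/buffering window.
class ToyUpdateLog {
    enum State { ACTIVE, BUFFERING }
    State state = State.ACTIVE;
    final List<String> tlog = new ArrayList<>();

    void startBuffering() { state = State.BUFFERING; }

    // Pre-fix behavior: canceling recovery drops buffered updates
    // and flips straight back to ACTIVE.
    void dropBufferedUpdates() { state = State.ACTIVE; }

    void applyUpdate(String u) {
        // In ACTIVE mode the update is logged as a normal entry (no gap
        // flag), even though the replica never finished recovering.
        tlog.add(state == State.ACTIVE ? u + " [normal]" : u + " [buffered]");
    }
}

public class RecoveryWindowDemo {
    public static void main(String[] args) {
        ToyUpdateLog ulog = new ToyUpdateLog();
        ulog.startBuffering();        // recovery begins
        ulog.dropBufferedUpdates();   // recovery canceled -> back to ACTIVE
        ulog.applyUpdate("doc42");    // leader update arrives in the window
        // The entry looks like any normal update, so after a restart this
        // stale replica can appear more up to date than it really is.
        System.out.println(ulog.tlog);
    }
}
```

The committed fix closes the window by keeping the log in BUFFERING state when recovery is canceled or fails, so updates landing before the retry never masquerade as normal entries.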






[jira] [Commented] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058808#comment-15058808
 ] 

ASF subversion and git services commented on SOLR-8372:
---

Commit 1720250 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1720250 ]

SOLR-8372: continue buffering if recovery is canceled/failed

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before ReplicationStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date. 






[jira] [Resolved] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-7730.

Resolution: Fixed

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> at org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460)
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() (line 136) 
> and SlowCompositeReaderWrapper.getSortedDocValues() (line 174): before 
> returning composite doc values, SCWR merges the per-segment field infos, 
> which is expensive, yet once the FieldInfos are merged it checks *only* the 
> docvalue type in them. This dv type check can be done far more cheaply on a 
> per-segment basis.
> This patch gets some performance gain for those who count DV facets in Solr.
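The per-segment alternative the issue describes looks roughly like this in Lucene 5.x terms. This is a sketch against the public LeafReader API, not the committed patch; `searcher` and `field` are placeholders, and it is a fragment rather than a complete program:

```
// Sketch: read sorted-set doc values segment by segment instead of going
// through SlowCompositeReaderWrapper, which merges FieldInfos on each call.
for (LeafReaderContext leaf : searcher.getIndexReader().leaves()) {
    SortedSetDocValues dv = leaf.reader().getSortedSetDocValues(field);
    if (dv == null) {
        continue; // this segment has no doc values for the field
    }
    // count facets for this segment only; ordinals are segment-local, so a
    // real implementation must map them to global ords before aggregating
}
```

The null check per leaf replaces the expensive merged-FieldInfos docvalue-type check, which is the gain the patch targets.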






[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058798#comment-15058798
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720248 from m...@apache.org in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1720248 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> at org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460)
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() (line 136) 
> and SlowCompositeReaderWrapper.getSortedDocValues() (line 174): before 
> returning composite doc values, SCWR merges the per-segment field infos, 
> which is expensive, yet once the FieldInfos are merged it checks *only* the 
> docvalue type in them. This dv type check can be done far more cheaply on a 
> per-segment basis.
> This patch gets some performance gain for those who count DV facets in Solr.






[jira] [Commented] (LUCENE-6926) Take matchCost into account for MUST_NOT clauses

2015-12-15 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058762#comment-15058762
 ] 

Paul Elschot commented on LUCENE-6926:
--

I tried implementing this NOT wrapper, but it is not feasible: the nextDoc() 
implementation would have to do a linear scan for as long as the wrapped 
iterator keeps returning consecutive docs. So it might be nice in theory, but 
it will not perform well.

That means I can't easily improve on the latest patch; it looks good, and core 
tests pass here.

> Take matchCost into account for MUST_NOT clauses
> 
>
> Key: LUCENE-6926
> URL: https://issues.apache.org/jira/browse/LUCENE-6926
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6926.patch, LUCENE-6926.patch
>
>
> ReqExclScorer potentially has two TwoPhaseIterators to check: the one for the 
> positive clause and the one for the negative clause. It should leverage the 
> match cost API to check the least costly one first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8395) query-time join (with scoring) for numeric fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8395:
---
Summary: query-time join (with scoring) for numeric fields  (was: 
query-time join for numeric fields)

> query-time join (with scoring) for numeric fields
> -
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch
>
>
> since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test in the Solr 
> NoScore suite. Alongside that, we can set the _multipleValues_ parameter 
> based on the _fromField_ cardinality declared in the schema.






[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058747#comment-15058747
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720241 from m...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720241 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> at org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460)
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() (line 136) 
> and SlowCompositeReaderWrapper.getSortedDocValues() (line 174): before 
> returning composite doc values, SCWR merges the per-segment field infos, 
> which is expensive, yet once the FieldInfos are merged it checks *only* the 
> docvalue type in them. This dv type check can be done far more cheaply on a 
> per-segment basis.
> This patch gets some performance gain for those who count DV facets in Solr.






[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058739#comment-15058739
 ] 

ASF subversion and git services commented on SOLR-7730:
---

Commit 1720239 from m...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720239 ]

SOLR-7730: mention in 5.4.0's Optimizations

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index, we see an unnecessary hotspot:
> {code}
> at org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460)
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() (line 136) 
> and SlowCompositeReaderWrapper.getSortedDocValues() (line 174): before 
> returning composite doc values, SCWR merges the per-segment field infos, 
> which is expensive, yet once the FieldInfos are merged it checks *only* the 
> docvalue type in them. This dv type check can be done far more cheaply on a 
> per-segment basis.
> This patch gets some performance gain for those who count DV facets in Solr.






[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058733#comment-15058733
 ] 

Mark Miller commented on SOLR-8415:
---

bq. Docs should go on the wiki somewhere.

I'd start looking around 
https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 5344 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5344/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, test, score],okFieldNames=[null, id, test, score]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, test, score],okFieldNames=[null, id, test, score]],reqFieldNames=[id,...>
at __randomizedtesting.SeedInfo.seed([C9C9D225B44E3C25:18E865248B54518D]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRu

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3838 - Failure

2015-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3838/

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, test],okFieldNames=[null, id, score, test]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, test],okFieldNames=[null, id, score, test]],reqFieldNames=[id,...>
at __randomizedtesting.SeedInfo.seed([6B722657F6CE495C:BA539156C9D424F4]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang

[jira] [Updated] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Attachment: 0001-Fix-overflow-in-date-statistics.patch

One line fix, plus tests.

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add 
> a cast to double 
> sumOfSquares += ( (double)value * value * count);






[jira] [Issue Comment Deleted] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Comment: was deleted

(was: One line fix, plus tests.)

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add 
> a cast to double 
> sumOfSquares += ( (double)value * value * count);






[jira] [Created] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-15 Thread Tom Hill (JIRA)
Tom Hill created SOLR-8420:
--

 Summary: Date statistics: sumOfSquares overflows long
 Key: SOLR-8420
 URL: https://issues.apache.org/jira/browse/SOLR-8420
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 5.4
Reporter: Tom Hill
Priority: Minor


The values for Dates are large enough that squaring them overflows a "long" 
field. This should be converted to a double. 

StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add a 
cast to double 

sumOfSquares += ( (double)value * value * count);
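The overflow is easy to reproduce with plain Java: a date stored as epoch milliseconds is around 1.45e12 in 2015, and its square (~2.1e24) is far beyond Long.MAX_VALUE (~9.2e18). A minimal demonstration of the failure and of the proposed cast (illustrative code, not the Solr patch itself):

```java
public class SumOfSquaresOverflow {
    public static void main(String[] args) {
        // Epoch milliseconds for a date in 2015 -- about 1.45e12.
        long value = 1_450_000_000_000L;
        int count = 1;

        // Buggy: evaluated entirely in 64-bit long arithmetic; the true
        // square (~2.1e24) exceeds Long.MAX_VALUE, so the result wraps.
        long overflowed = value * value * count;

        // Fixed, as the issue suggests: casting the first operand promotes
        // the whole expression to double before any multiplication happens.
        double correct = (double) value * value * count;

        System.out.println("long arithmetic:   " + overflowed); // wrapped garbage
        System.out.println("double arithmetic: " + correct);    // ~2.1025E24
    }
}
```

Only the first operand needs the cast: Java's binary numeric promotion then widens the remaining long and int operands to double automatically.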






[jira] [Created] (SOLR-8419) TermVectorComponent distributed-search issues

2015-12-15 Thread David Smiley (JIRA)
David Smiley created SOLR-8419:
--

 Summary: TermVectorComponent distributed-search issues
 Key: SOLR-8419
 URL: https://issues.apache.org/jira/browse/SOLR-8419
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.5


TermVectorComponent has supported distributed search since SOLR-3229 added it.  
Unlike most other components, this one tries to support schemas without a 
UniqueKey.  However, its logic for attempting this was made faulty by the 
introduction of distrib.singlePass, and furthermore this code path wasn't 
tested in any way.  In this issue I want to remove this component's support 
for schemas lacking a UniqueKey (only for distributed search).  






[jira] [Commented] (SOLR-3229) TermVectorComponent does not return terms in distributed search

2015-12-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058606#comment-15058606
 ] 

Hoss Man commented on SOLR-3229:


I don't remember this issue at all, and w/o digging into its history or 
looking at the commits, I'm going to reply simply to this sentence...

bq. UniqueKey should be required for distributed-search to get TV info back. 

I have no objection to this.  If that's not how it works now, then I'm 
surprised; and if I'm responsible for the code/decision in question, then my 
suspicion is that it's simply because this issue predated SolrCloud and most of 
the other current "rules" regarding "distributed search" -- back when it was a 
query-time-only concept and people manually partitioned their shards.  It 
certainly pre-dates distrib.singlePass.

Open/link a new JIRA w/whatever changes you think make sense to the existing 
functionality.

> TermVectorComponent does not return terms in distributed search
> ---
>
> Key: SOLR-3229
> URL: https://issues.apache.org/jira/browse/SOLR-3229
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.0-ALPHA
> Environment: Ubuntu 11.10, openjdk-6
>Reporter: Hang Xie
>Assignee: Hoss Man
>  Labels: patch
> Fix For: 4.0, Trunk
>
> Attachments: SOLR-3229.patch, TermVectorComponent.patch
>
>
> TermVectorComponent does not return terms in distributed search: 
> distributedProcess() incorrectly uses the Solr unique key to do subrequests, 
> while process() expects Lucene document ids. Also, parameters are transferred 
> in a different format, so distributed search returns no results.






[jira] [Updated] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8415:

Attachment: SOLR-8415.patch

Attaching a new patch that includes some tests for converting both ways between 
secure and non-secure nodes.

Docs should go on the wiki somewhere. I'll write them up as soon as somebody 
gives me a nudge to help find a good home for them.

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without ZK ACLs, but we don't have a 
> great way to switch between the two modes. The most common use case, I 
> imagine, would be upgrading from an old version that did not support ACLs to 
> a new version that does, and wanting to protect all of the existing content 
> in ZK; but it is conceivable that a user might want to remove ACLs as well.






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058575#comment-15058575
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Thanks Mark. I don't think I'll use automated scripts; I'll most likely put 
together something that translates raw history revision-by-revision (cleaning 
up the local SVN dump first). It can take a long time, given it's a one-time 
conversion. I realize it's mind-bending, but let's see if it works. I'll need 
some time to work through it; these are huge files.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>
> Goals:
> - selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
> perhaps other binaries),
> - *preserve* history of all core sources. So svn log IndexWriter has to go 
> all the way back to when Doug was young and pretty. Ooops, he's still 
> pretty of course.
> - provide a way to link git history with svn revisions. I would, ideally, 
> include a "imported from svn:rev XXX" in the commit log message.
> - annotate release tags and branches. I don't care much about interim 
> branches -- they are not important to me (please speak up if you think 
> otherwise).
> Non goals
> - no need to preserve "exact" history from SVN (the project may skip JARs, 
> etc.). Ability to build ancient versions is not an issue.






[jira] [Commented] (SOLR-8410) the "read" permission must include all 'read' paths

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058549#comment-15058549
 ] 

ASF subversion and git services commented on SOLR-8410:
---

Commit 1720223 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720223 ]

SOLR-8410: Add all read paths to 'read' permission in 
RuleBasedAuthorizationPlugin

> the "read" permission must include all 'read' paths
> ---
>
> Key: SOLR-8410
> URL: https://issues.apache.org/jira/browse/SOLR-8410
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8410.patch
>
>
> In {{RuleBasedAuthorizedPlugin}} "read" permission should also include the 
> following paths
> * /browse
> * /export
> * /spell
> * /suggest
> * /tvrh
> * /terms
> * /clustering 
> * /elevate






[jira] [Updated] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8388:
--
Attachment: SOLR-8388-part3of2.patch

Fix the {{ReturnFieldsTest.testToString}} test added by part2of2 (the 
stringified fields include sets, and the test incorrectly assumed a particular 
ordering for the sets' values).
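The pitfall described above is generic Java behavior rather than anything Solr-specific: HashSet iteration order is unspecified, so a test must not compare a stringified set against one fixed ordering. A minimal illustration (invented field names, not the actual test code):

```java
import java.util.HashSet;
import java.util.Set;

public class SetOrderPitfall {
    public static void main(String[] args) {
        Set<String> fields = new HashSet<>();
        fields.add("id");
        fields.add("score");
        fields.add("title");

        // Fragile: toString() order depends on hash buckets, not insertion
        // order, and may differ across JVM versions or set contents.
        System.out.println(fields);

        // Stable: order-insensitive assertions.
        System.out.println(fields.containsAll(Set.of("id", "score", "title"))); // true
    }
}
```

Asserting containment (or comparing Set objects for equality) keeps the test stable regardless of iteration order.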

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch, 
> SOLR-8388-part3of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[jira] [Commented] (SOLR-8410) the "read" permission must include all 'read' paths

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058567#comment-15058567
 ] 

ASF subversion and git services commented on SOLR-8410:
---

Commit 1720226 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720226 ]

SOLR-8410: Add all read paths to 'read' permission in 
RuleBasedAuthorizationPlugin

> the "read" permission must include all 'read' paths
> ---
>
> Key: SOLR-8410
> URL: https://issues.apache.org/jira/browse/SOLR-8410
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8410.patch
>
>
> In {{RuleBasedAuthorizedPlugin}} "read" permission should also include the 
> following paths
> * /browse
> * /export
> * /spell
> * /suggest
> * /tvrh
> * /terms
> * /clustering 
> * /elevate






[jira] [Commented] (SOLR-3229) TermVectorComponent does not return terms in distributed search

2015-12-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058557#comment-15058557
 ] 

David Smiley commented on SOLR-3229:


[~hossman] HighlightComponent, DebugComponent, and TermVectorComponent have a 
very similar bit of code in their finishStage() method. HighlightComponent & 
DebugComponent's version was recently found to be buggy -- SOLR-8060 and 
SOLR-8059.  The Highlight side was recently fixed and I'm about to do the same 
for the Debug side.  But I'd like to refactor out some common lines of code 
between all 3 to ease maintenance.  However, the TV side has this odd bit 
where, if it can't look up the shard doc by its unique key, it adds it to the 
response anyway (~line 458).  _I would rather we remove this; I think it's not 
something we should support.  UniqueKey should be required for 
distributed-search to get TV info back_.  The code that's here now incorrectly 
assumes that if it was unable to look up the key in resultIds, it's because 
the schema has no uniqueKey.  But another reason is simply that it's a 
distrib.singlePass distributed search (related to the 2 bugs I'm looking at in 
the Highlight & Debug components).  Do you support my recommendation?

> TermVectorComponent does not return terms in distributed search
> ---
>
> Key: SOLR-3229
> URL: https://issues.apache.org/jira/browse/SOLR-3229
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.0-ALPHA
> Environment: Ubuntu 11.10, openjdk-6
>Reporter: Hang Xie
>Assignee: Hoss Man
>  Labels: patch
> Fix For: 4.0, Trunk
>
> Attachments: SOLR-3229.patch, TermVectorComponent.patch
>
>
> TermVectorComponent does not return terms in distributed search: 
> distributedProcess() incorrectly uses the Solr unique key to do subrequests, 
> while process() expects Lucene document ids. Also, parameters are transferred 
> in a different format, so distributed search returns no results.






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
Let's just make some JIRA issues. I'm not worried about volunteers for any
of it yet, just a direction we agree upon. Once we know where we are going,
we generally don't have a big volunteer problem. We haven't heard from Uwe
yet, but it really does seem like moving to Git makes the most sense.

I'm certainly willing to spend some free time on this.

- Mark

On Tue, Dec 15, 2015 at 1:22 PM Dawid Weiss  wrote:

>
> Oh, just for completeness -- moving to git is not just about the version
> management, it's also:
>
> 1) all the scripts that currently do validations, etc.
> 2) what to do with svn:* properties
> 3) what to do with empty folders (not available in git).
>
> I don't volunteer to solve these :)
>
> Dawid
>
>
> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
> wrote:
>
>>
>> Ok, give me some time and I'll see what I can achieve. Now that I
>> actually wrote an SVN dump parser (validator and serializer) things are
>> under much better control...
>>
>> I'll try to achieve the following:
>>
>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
>> and perhaps other binaries),
>> 2) *preserve* history of all core sources. So svn log IndexWriter has to
>> go back all the way back to when Doug was young and pretty. Ooops, he's
>> still pretty of course.
>> 3) provide a way to link git history with svn revisions. I would,
>> ideally, include a "imported from svn:rev XXX" in the commit log message.
>> 4) annotate release tags and branches. I don't care much about interim
>> branches -- they are not important to me (please speak up if you think
>> otherwise).
>>
>> Dawid
>>
>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>
>>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>>> a move to git. I don't care if we disagree about JARs, I trust he will
>>> do a good job and that is more important.
>>>
>>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>>> wrote:
>>> >
>>> > It's not true that nobody is working on this. I have been working on
>>> the SVN
>>> > dump in the meantime. You would not believe how incredibly complex the
>>> > process of processing that (remote) dump is. Let me highlight a few key
>>> > issues:
>>> >
>>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>>> git.
>>> > The history is a mess. Trunk, branches, tags -- all change paths at
>>> various
>>> > points in history. Entire projects are copied from *outside* the
>>> official
>>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>>> > example).
>>> >
>>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>>> commits.
>>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>>> > commits. I think the git-svn sync crashes due to the sheer number of
>>> (empty)
>>> > commits in between actual changes.
>>> >
>>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>>> > patch, for example, but there are others (the second largest is 190 MB,
>>> > the third is 136 MB).
>>> >
>>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>>> mirrored
>>> > locally (including empty interim commits to cater for svn:mergeinfos)
>>> is 4G.
>>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>>> Mahout)
>>> > then I bet the entire history can fit in 1G total. Of course stripping
>>> JARs
>>> > is also doable.
>>> >
>>> > 5) There is lots of junk at the main SVN path so you can't just
>>> version the
>>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>>> of the
>>> > resulting folder is enormous -- I terminated the checkout after I
>>> reached
>>> > over 20 gigs. Well, technically you *could* do it, it'd preserve
>>> perfect
>>> > history, but I wouldn't want to git co a past version that checks out
>>> all
>>> > the tags, branches, etc. This has to be mapped in a sensible way.
>>> >
>>> > What I think is that all the above makes (straightforward) conversion
>>> to git
>>> > problematic. Especially moving paths are a problem -- how to mark tags/
>>> > branches, where the main line of development is, etc. This conversion
>>> would
>>> > have to be guided and hand-tuned to make sense. This effort would only
>>> pay
>>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>>> > script is fine for keeping short-term history.
>>> >
>>> > Dawid
>>> >
>>> > P.S. Either the SVN repo at Apache is broken or the SVN is broken,
>>> which
>>> > makes processing SVN history even more fun. This dump indicates Tika
>>> being
>>> > moved from the incubator to Lucene:
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> https://svn.apache.org/repos/asf/ >
>>> > out
>>> >
>>> > But when you dump just Lucene's subpath, the output is broken (last
>>> > changeset in the file is an invalid changeset, it carries no target):
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> > https://svn.apache.org/repos/asf/lucene > out
>>> >
>>> >
>>> >

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058546#comment-15058546
 ] 

Mark Miller commented on LUCENE-6933:
-

For some reference, here is a wiki page managing Maven's migration to git: 
https://cwiki.apache.org/confluence/display/MAVEN/Git+Migration

Here is one of the infra JIRAs: 
https://issues.apache.org/jira/browse/INFRA-5266 "Migrate Maven subprojects to 
git (surefire, scm, wagon)"

Not all of it is directly relatable to us, but it's an entry point into the 
INFRA tickets for a past migration.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>
> Goals:
> - selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
> perhaps other binaries),
> - *preserve* history of all core sources. So svn log IndexWriter has to go 
> all the way back to when Doug was young and pretty. Ooops, he's still 
> pretty of course.
> - provide a way to link git history with svn revisions. I would, ideally, 
> include a "imported from svn:rev XXX" in the commit log message.
> - annotate release tags and branches. I don't care much about interim 
> branches -- they are not important to me (please speak up if you think 
> otherwise).
> Non goals
> - no need to preserve "exact" history from SVN (the project may skip JARs, 
> etc.). Ability to build ancient versions is not an issue.






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5474 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5474/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/mmdq/u", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/mmdq/u",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([95DDF873C6DFF776:4D90D524310252D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.Test

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Robert Muir
If Dawid is volunteering to sort out this mess, +1 to let him make it
a move to git. I don't care if we disagree about JARs, I trust he will
do a good job and that is more important.

On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss  wrote:
>
> It's not true that nobody is working on this. I have been working on the SVN
> dump in the meantime. You would not believe how incredibly complex the
> process of processing that (remote) dump is. Let me highlight a few key
> issues:
>
> 1) There is no "one" Lucene SVN repository that can be transferred to git.
> The history is a mess. Trunk, branches, tags -- all change paths at various
> points in history. Entire projects are copied from *outside* the official
> Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
> example).
>
> 2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
> ASF's commit history in which those 50k commits live is 1.8 *million*
> commits. I think the git-svn sync crashes due to the sheer number of (empty)
> commits in between actual changes.
>
> 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
> patch, for example, but there are others (the second largest is 190 MB, the
> third is 136 MB).
>
> 4) The size of JARs is really not an issue. The entire SVN repo I mirrored
> locally (including empty interim commits to cater for svn:mergeinfos) is 4G.
> If you strip the stuff like javadocs and side projects (Nutch, Tika, Mahout)
> then I bet the entire history can fit in 1G total. Of course stripping JARs
> is also doable.
>
> 5) There is lots of junk at the main SVN path so you can't just version the
> top-level folder. If you wanted to checkout /asf/lucene then the size of the
> resulting folder is enormous -- I terminated the checkout after I reached
> over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
> history, but I wouldn't want to git co a past version that checks out all
> the tags, branches, etc. This has to be mapped in a sensible way.
>
> What I think is that all the above makes (straightforward) conversion to git
> problematic. Especially moving paths are a problem -- how to mark tags/
> branches, where the main line of development is, etc. This conversion would
> have to be guided and hand-tuned to make sense. This effort would only pay
> for itself if we move to git, otherwise I don't see the benefit. Paul's
> script is fine for keeping short-term history.
>
> Dawid
>
> P.S. Either the SVN repo at Apache is broken or the SVN is broken, which
> makes processing SVN history even more fun. This dump indicates Tika being
> moved from the incubator to Lucene:
>
> svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/ >
> out
>
> But when you dump just Lucene's subpath, the output is broken (last
> changeset in the file is an invalid changeset, it carries no target):
>
> svnrdump dump -r 712381 --incremental
> https://svn.apache.org/repos/asf/lucene > out
>
>
>
> On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:
>>
>> If we move to git, stripping out jars seems to be an independent decision?
>> Can you even strip out jars and preserve history (i.e. not change
>> hashes and invalidate everyone's forks/clones)?
>> I did run across this:
>>
>> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>>
>> -Yonik
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>




[jira] [Created] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-15 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-6933:
---

 Summary: Create a (cleaned up) SVN history in git
 Key: LUCENE-6933
 URL: https://issues.apache.org/jira/browse/LUCENE-6933
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss


Goals:
- selectively drop unnecessary stuff from history (cms/, javadocs/, JARs and 
perhaps other binaries),
- *preserve* history of all core sources. So svn log IndexWriter has to go all 
the way back to when Doug was young and pretty. Ooops, he's still pretty of 
course.
- provide a way to link git history with svn revisions. I would, ideally, 
include a "imported from svn:rev XXX" in the commit log message.
- annotate release tags and branches. I don't care much about interim branches 
-- they are not important to me (please speak up if you think otherwise).

Non goals
- no need to preserve "exact" history from SVN (the project may skip JARs, 
etc.). Ability to build ancient versions is not an issue.






[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning

2015-12-15 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8393:
---
Attachment: SOLR-8393.patch

Fix disappearing collections when using the collection param (one should not 
be able to modify clusterState's getCollections() result...)

> Component for Solr resource usage planning
> --
>
> Key: SOLR-8393
> URL: https://issues.apache.org/jira/browse/SOLR-8393
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, 
> SOLR-8393.patch
>
>
> One question that keeps coming back is how much disk and RAM is needed to 
> run Solr. The most common response is that it highly depends on your data. 
> While true, it makes for frustrated users trying to plan their deployments. 
> The idea I'm proposing is to create a new component that will attempt to 
> extrapolate resources needed in the future by looking at resources currently 
> used. By adding a parameter for the target number of documents, current 
> resource figures are scaled by the ratio of target to current document count.
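The proposed ratio-based extrapolation can be sketched in a few lines (hypothetical names and figures, not the patch's actual API):

```java
public class ResourceEstimator {
    // Linear extrapolation as described: scale current usage by the ratio of
    // target documents to current documents. Index growth is rarely perfectly
    // linear, so treat the result as a rough planning estimate.
    static double extrapolate(double currentUsage, long currentDocs, long targetDocs) {
        if (currentDocs <= 0) {
            throw new IllegalArgumentException("currentDocs must be positive");
        }
        return currentUsage * ((double) targetDocs / currentDocs);
    }

    public static void main(String[] args) {
        double diskBytes = 50e9;                 // 50 GB observed for 10M docs
        long currentDocs = 10_000_000L;
        long targetDocs  = 40_000_000L;
        System.out.println(extrapolate(diskBytes, currentDocs, targetDocs)); // 2.0E11 (200 GB)
    }
}
```

The same ratio would apply per-resource (disk, heap, off-heap), each measured from the running node.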






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
I know that, but I meant historical checkouts -- and if you add fake files
you're altering history :)

D.

On Tue, Dec 15, 2015 at 7:24 PM, Mike Drob  wrote:

> 3 is typically solved by adding a .gitignore or .gitkeep file in what
> would be an empty directory, if the directory itself is important.
>
>
> On Tue, Dec 15, 2015 at 12:21 PM, Dawid Weiss 
> wrote:
>
>>
>> Oh, just for completeness -- moving to git is not just about the version
>> management, it's also:
>>
>> 1) all the scripts that currently do validations, etc.
>> 2) what to do with svn:* properties
>> 3) what to do with empty folders (not available in git).
>>
>> I don't volunteer to solve these :)
>>
>> Dawid
>>
>>
>> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
>> wrote:
>>
>>>
>>> Ok, give me some time and I'll see what I can achieve. Now that I
>>> actually wrote an SVN dump parser (validator and serializer) things are
>>> under much better control...
>>>
>>> I'll try to achieve the following:
>>>
>>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/,
>>> JARs and perhaps other binaries),
>>> 2) *preserve* history of all core sources. So svn log IndexWriter has to
>>> go back all the way back to when Doug was young and pretty. Ooops, he's
>>> still pretty of course.
>>> 3) provide a way to link git history with svn revisions. I would,
>>> ideally, include a "imported from svn:rev XXX" in the commit log message.
>>> 4) annotate release tags and branches. I don't care much about interim
>>> branches -- they are not important to me (please speak up if you think
>>> otherwise).
>>>
>>> Dawid
>>>
>>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>>
 If Dawid is volunteering to sort out this mess, +1 to let him make it
 a move to git. I don't care if we disagree about JARs, I trust he will
 do a good job and that is more important.

 On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
 wrote:
 >
 > It's not true that nobody is working on this. I have been working on
 the SVN
 > dump in the meantime. You would not believe how incredibly complex the
 > process of processing that (remote) dump is. Let me highlight a few
 key
 > issues:
 >
 > 1) There is no "one" Lucene SVN repository that can be transferred to
 git.
 > The history is a mess. Trunk, branches, tags -- all change paths at
 various
 > points in history. Entire projects are copied from *outside* the
 official
 > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator,
 for
 > example).
 >
 > 2) The history of commits to Lucene's subpath of the SVN is ~50k
 commits.
 > ASF's commit history in which those 50k commits live is 1.8 *million*
 > commits. I think the git-svn sync crashes due to the sheer number of
 (empty)
 > commits in between actual changes.
 >
 > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
 > patch, for example, but there are others (the second largest is
 190megs, the
 > third is 136 megs).
 >
 > 4) The size of JARs is really not an issue. The entire SVN repo I
 mirrored
 > locally (including empty interim commits to cater for svn:mergeinfos)
 is 4G.
 > If you strip the stuff like javadocs and side projects (Nutch, Tika,
 Mahout)
 > then I bet the entire history can fit in 1G total. Of course
 stripping JARs
 > is also doable.
 >
 > 5) There is lots of junk at the main SVN path so you can't just
 version the
 > top-level folder. If you wanted to checkout /asf/lucene then the size
 of the
 > resulting folder is enormous -- I terminated the checkout after I
 reached
 > over 20 gigs. Well, technically you *could* do it, it'd preserve
 perfect
 > history, but I wouldn't want to git co a past version that checks out
 all
 > the tags, branches, etc. This has to be mapped in a sensible way.
 >
 > What I think is that all the above makes (straightforward) conversion
 to git
 > problematic. Especially moving paths are a problem -- how to mark
 tags/
 > branches, where the main line of development is, etc. This conversion
 would
 > have to be guided and hand-tuned to make sense. This effort would
 only pay
 > for itself if we move to git, otherwise I don't see the benefit.
 Paul's
 > script is fine for keeping short-term history.
 >
 > Dawid
 >
 > P.S. Either the SVN repo at Apache is broken or SVN itself is broken,
 which
 > makes processing SVN history even more fun. This dump indicates Tika
 being
 > moved from the incubator to Lucene:
 >
 > svnrdump dump -r 712381 --incremental
 https://svn.apache.org/repos/asf/ >
 > out
 >
 > But when you dump just Lucene's subpath, the output is broken (last
 > changeset in the file is an invalid changeset, it carries no target):
 >
 > svnrdump dump 

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mike Drob
3 is typically solved by adding a .gitignore or .gitkeep file in what would
be an empty directory, if the directory itself is important.
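For anyone unfamiliar with the trick: git tracks files, not directories, so a placeholder file keeps an otherwise-empty directory under version control. A minimal sketch (repository and file names here are illustrative; ".gitkeep" is a community convention, not a git feature):

```shell
# Sketch: keep an "empty" directory in git via a placeholder file.
mkdir -p keep-demo/empty-dir
cd keep-demo
git init -q .
touch empty-dir/.gitkeep          # placeholder so the directory is tracked
git add empty-dir/.gitkeep
git -c user.name=demo -c user.email=demo@example.com commit -qm "keep empty-dir"
git ls-files                      # -> empty-dir/.gitkeep
```

A .gitignore in the directory works the same way; .gitkeep just signals intent more clearly.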


On Tue, Dec 15, 2015 at 12:21 PM, Dawid Weiss  wrote:

>
> Oh, just for completeness -- moving to git is not just about the version
> management, it's also:
>
> 1) all the scripts that currently do validations, etc.
> 2) what to do with svn:* properties
> 3) what to do with empty folders (not available in git).
>
> I don't volunteer to solve these :)
>
> Dawid
>
>
> On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss 
> wrote:
>
>>
>> Ok, give me some time and I'll see what I can achieve. Now that I
>> actually wrote an SVN dump parser (validator and serializer) things are
>> under much better control...
>>
>> I'll try to achieve the following:
>>
>> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
>> and perhaps other binaries),
>> 2) *preserve* history of all core sources. So svn log IndexWriter has to
>> go all the way back to when Doug was young and pretty. Ooops, he's
>> still pretty of course.
>> 3) provide a way to link git history with svn revisions. I would,
>> ideally, include an "imported from svn:rev XXX" in the commit log message.
>> 4) annotate release tags and branches. I don't care much about interim
>> branches -- they are not important to me (please speak up if you think
>> otherwise).
>>
>> Dawid
>>
>> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>>
>>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>>> a move to git. I don't care if we disagree about JARs, I trust he will
>>> do a good job and that is more important.
>>>
>>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>>> wrote:
>>> >
>>> > It's not true that nobody is working on this. I have been working on
>>> the SVN
>>> > dump in the meantime. You would not believe how incredibly complex the
>>> > process of processing that (remote) dump is. Let me highlight a few key
>>> > issues:
>>> >
>>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>>> git.
>>> > The history is a mess. Trunk, branches, tags -- all change paths at
>>> various
>>> > points in history. Entire projects are copied from *outside* the
>>> official
>>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>>> > example).
>>> >
>>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>>> commits.
>>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>>> > commits. I think the git-svn sync crashes due to the sheer number of
>>> (empty)
>>> > commits in between actual changes.
>>> >
>>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>>> > patch, for example, but there are others (the second largest is
>>> 190megs, the
>>> > third is 136 megs).
>>> >
>>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>>> mirrored
>>> > locally (including empty interim commits to cater for svn:mergeinfos)
>>> is 4G.
>>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>>> Mahout)
>>> > then I bet the entire history can fit in 1G total. Of course stripping
>>> JARs
>>> > is also doable.
>>> >
>>> > 5) There is lots of junk at the main SVN path so you can't just
>>> version the
>>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>>> of the
>>> > resulting folder is enormous -- I terminated the checkout after I
>>> reached
>>> > over 20 gigs. Well, technically you *could* do it, it'd preserve
>>> perfect
>>> > history, but I wouldn't want to git co a past version that checks out
>>> all
>>> > the tags, branches, etc. This has to be mapped in a sensible way.
>>> >
>>> > What I think is that all the above makes (straightforward) conversion
>>> to git
>>> > problematic. Especially moving paths are a problem -- how to mark tags/
>>> > branches, where the main line of development is, etc. This conversion
>>> would
>>> > have to be guided and hand-tuned to make sense. This effort would only
>>> pay
>>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>>> > script is fine for keeping short-term history.
>>> >
>>> > Dawid
>>> >
>>> > P.S. Either the SVN repo at Apache is broken or SVN itself is broken,
>>> which
>>> > makes processing SVN history even more fun. This dump indicates Tika
>>> being
>>> > moved from the incubator to Lucene:
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> https://svn.apache.org/repos/asf/ >
>>> > out
>>> >
>>> > But when you dump just Lucene's subpath, the output is broken (last
>>> > changeset in the file is an invalid changeset, it carries no target):
>>> >
>>> > svnrdump dump -r 712381 --incremental
>>> > https://svn.apache.org/repos/asf/lucene > out
>>> >
>>> >
>>> >
>>> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley 
>>> wrote:
>>> >>
>>> >> If we move to git, stripping out jars seems to be an independent
>>> decision?
>>> >> Can you even strip out jars and preserve history (i.

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
Oh, just for completeness -- moving to git is not just about the version
management, it's also:

1) all the scripts that currently do validations, etc.
2) what to do with svn:* properties
3) what to do with empty folders (not available in git).

I don't volunteer to solve these :)
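For what it's worth, some svn:* properties have rough git-side analogues, while others (notably svn:mergeinfo) have none. A hypothetical sketch, with illustrative file contents:

```shell
# Sketch of rough git analogues for common svn:* properties.
# (svn:mergeinfo has no git counterpart; svn:executable corresponds
# to the file mode git already tracks.)
mkdir -p props-demo && cd props-demo
git init -q .
printf '*.java text eol=lf\n' > .gitattributes   # ~ svn:eol-style native
printf 'build/\n*.jar\n' > .gitignore            # ~ svn:ignore
git add .gitattributes .gitignore
git -c user.name=demo -c user.email=demo@example.com commit -qm "port svn properties"
```

That still leaves deciding which properties in the history are worth porting at all.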

Dawid


On Tue, Dec 15, 2015 at 7:09 PM, Dawid Weiss  wrote:

>
> Ok, give me some time and I'll see what I can achieve. Now that I actually
> wrote an SVN dump parser (validator and serializer) things are under much
> better control...
>
> I'll try to achieve the following:
>
> 1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
> and perhaps other binaries),
> 2) *preserve* history of all core sources. So svn log IndexWriter has to
> go all the way back to when Doug was young and pretty. Ooops, he's
> still pretty of course.
> 3) provide a way to link git history with svn revisions. I would, ideally,
> include an "imported from svn:rev XXX" in the commit log message.
> 4) annotate release tags and branches. I don't care much about interim
> branches -- they are not important to me (please speak up if you think
> otherwise).
>
> Dawid
>
> On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:
>
>> If Dawid is volunteering to sort out this mess, +1 to let him make it
>> a move to git. I don't care if we disagree about JARs, I trust he will
>> do a good job and that is more important.
>>
>> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
>> wrote:
>> >
>> > It's not true that nobody is working on this. I have been working on
>> the SVN
>> > dump in the meantime. You would not believe how incredibly complex the
>> > process of processing that (remote) dump is. Let me highlight a few key
>> > issues:
>> >
>> > 1) There is no "one" Lucene SVN repository that can be transferred to
>> git.
>> > The history is a mess. Trunk, branches, tags -- all change paths at
>> various
>> > points in history. Entire projects are copied from *outside* the
>> official
>> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
>> > example).
>> >
>> > 2) The history of commits to Lucene's subpath of the SVN is ~50k
>> commits.
>> > ASF's commit history in which those 50k commits live is 1.8 *million*
>> > commits. I think the git-svn sync crashes due to the sheer number of
>> (empty)
>> > commits in between actual changes.
>> >
>> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
>> > patch, for example, but there are others (the second largest is 190megs,
>> the
>> > third is 136 megs).
>> >
>> > 4) The size of JARs is really not an issue. The entire SVN repo I
>> mirrored
>> > locally (including empty interim commits to cater for svn:mergeinfos)
>> is 4G.
>> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
>> Mahout)
>> > then I bet the entire history can fit in 1G total. Of course stripping
>> JARs
>> > is also doable.
>> >
>> > 5) There is lots of junk at the main SVN path so you can't just version
>> the
>> > top-level folder. If you wanted to checkout /asf/lucene then the size
>> of the
>> > resulting folder is enormous -- I terminated the checkout after I
>> reached
>> > over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
>> > history, but I wouldn't want to git co a past version that checks out
>> all
>> > the tags, branches, etc. This has to be mapped in a sensible way.
>> >
>> > What I think is that all the above makes (straightforward) conversion
>> to git
>> > problematic. Especially moving paths are a problem -- how to mark tags/
>> > branches, where the main line of development is, etc. This conversion
>> would
>> > have to be guided and hand-tuned to make sense. This effort would only
>> pay
>> > for itself if we move to git, otherwise I don't see the benefit. Paul's
>> > script is fine for keeping short-term history.
>> >
>> > Dawid
>> >
>> > P.S. Either the SVN repo at Apache is broken or SVN itself is broken, which
>> > makes processing SVN history even more fun. This dump indicates Tika
>> being
>> > moved from the incubator to Lucene:
>> >
>> > svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/
>> >
>> > out
>> >
>> > But when you dump just Lucene's subpath, the output is broken (last
>> > changeset in the file is an invalid changeset, it carries no target):
>> >
>> > svnrdump dump -r 712381 --incremental
>> > https://svn.apache.org/repos/asf/lucene > out
>> >
>> >
>> >
>> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley 
>> wrote:
>> >>
>> >> If we move to git, stripping out jars seems to be an independent
>> decision?
>> >> Can you even strip out jars and preserve history (i.e. not change
>> >> hashes and invalidate everyone's forks/clones)?
>> >> I did run across this:
>> >>
>> >>
>> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>> >>
>> >> -Yonik
>> >>
>> >> -
>> >> To unsubscr

[jira] [Commented] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058463#comment-15058463
 ] 

Steve Rowe commented on SOLR-7730:
--

bq. attaching SOLR-7730-changes.patch: move it from 5.3 to 5.4 Optimizations. 
Steve Rowe, should I commit it to trunk and 5x?

+1, LGTM.

In addition to trunk and 5x, I think you should also commit it to the 
lucene_solr_5_4 branch, in case there is a 5.4.1 release.

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174
> before returning composite doc values, SCWR merges segment field infos, which 
> is expensive, but after the field info is merged it checks *only* the 
> docvalue type in it. This dv type check can be done much more easily on a 
> per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
Ok, give me some time and I'll see what I can achieve. Now that I actually
wrote an SVN dump parser (validator and serializer) things are under much
better control...

I'll try to achieve the following:

1) selectively drop unnecessary stuff from history (cms/, javadocs/, JARs
and perhaps other binaries),
2) *preserve* history of all core sources. So svn log IndexWriter has to go
all the way back to when Doug was young and pretty. Ooops, he's still
pretty of course.
3) provide a way to link git history with svn revisions. I would, ideally,
include an "imported from svn:rev XXX" in the commit log message.
4) annotate release tags and branches. I don't care much about interim
branches -- they are not important to me (please speak up if you think
otherwise).
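If each imported commit does carry such a marker, mapping an svn revision back to its git commit becomes a log grep. A toy sketch (repository and commit message are illustrative; the revision number is the one mentioned later in this thread):

```shell
# Sketch: with "imported from svn:rev NNN" in each commit message,
# finding the git commit for an svn revision is a one-liner.
mkdir -p rev-demo && cd rev-demo
git init -q .
echo content > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm \
  "Import Tika from incubator (imported from svn:rev 712381)"
# Look up the git commit hash for svn revision 712381:
git log --grep='imported from svn:rev 712381' --format=%H
```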

Dawid

On Tue, Dec 15, 2015 at 7:03 PM, Robert Muir  wrote:

> If Dawid is volunteering to sort out this mess, +1 to let him make it
> a move to git. I don't care if we disagree about JARs, I trust he will
> do a good job and that is more important.
>
> On Tue, Dec 15, 2015 at 12:44 PM, Dawid Weiss 
> wrote:
> >
> > It's not true that nobody is working on this. I have been working on the
> SVN
> > dump in the meantime. You would not believe how incredibly complex the
> > process of processing that (remote) dump is. Let me highlight a few key
> > issues:
> >
> > 1) There is no "one" Lucene SVN repository that can be transferred to
> git.
> > The history is a mess. Trunk, branches, tags -- all change paths at
> various
> > points in history. Entire projects are copied from *outside* the official
> > Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
> > example).
> >
> > 2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
> > ASF's commit history in which those 50k commits live is 1.8 *million*
> > commits. I think the git-svn sync crashes due to the sheer number of
> (empty)
> > commits in between actual changes.
> >
> > 3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
> > patch, for example, but there are others (the second largest is 190megs,
> the
> > third is 136 megs).
> >
> > 4) The size of JARs is really not an issue. The entire SVN repo I
> mirrored
> > locally (including empty interim commits to cater for svn:mergeinfos) is
> 4G.
> > If you strip the stuff like javadocs and side projects (Nutch, Tika,
> Mahout)
> > then I bet the entire history can fit in 1G total. Of course stripping
> JARs
> > is also doable.
> >
> > 5) There is lots of junk at the main SVN path so you can't just version
> the
> > top-level folder. If you wanted to checkout /asf/lucene then the size of
> the
> > resulting folder is enormous -- I terminated the checkout after I reached
> > over 20 gigs. Well, technically you *could* do it, it'd preserve perfect
> > history, but I wouldn't want to git co a past version that checks out all
> > the tags, branches, etc. This has to be mapped in a sensible way.
> >
> > What I think is that all the above makes (straightforward) conversion to
> git
> > problematic. Especially moving paths are a problem -- how to mark tags/
> > branches, where the main line of development is, etc. This conversion
> would
> > have to be guided and hand-tuned to make sense. This effort would only
> pay
> > for itself if we move to git, otherwise I don't see the benefit. Paul's
> > script is fine for keeping short-term history.
> >
> > Dawid
> >
> > P.S. Either the SVN repo at Apache is broken or SVN itself is broken, which
> > makes processing SVN history even more fun. This dump indicates Tika
> being
> > moved from the incubator to Lucene:
> >
> > svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/
> >
> > out
> >
> > But when you dump just Lucene's subpath, the output is broken (last
> > changeset in the file is an invalid changeset, it carries no target):
> >
> > svnrdump dump -r 712381 --incremental
> > https://svn.apache.org/repos/asf/lucene > out
> >
> >
> >
> > On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:
> >>
> >> If we move to git, stripping out jars seems to be an independent
> decision?
> >> Can you even strip out jars and preserve history (i.e. not change
> >> hashes and invalidate everyone's forks/clones)?
> >> I did run across this:
> >>
> >>
> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
> >>
> >> -Yonik
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-7730:
---
Attachment: SOLR-7730-changes.patch

attaching [^SOLR-7730-changes.patch]: move it from 5.3 to 5.4 Optimizations. 
[~steve_rowe] Should I commit it to trunk and 5x? 

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, 
> SOLR-7730-changes.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174
> before returning composite doc values, SCWR merges segment field infos, which 
> is expensive, but after the field info is merged it checks *only* the 
> docvalue type in it. This dv type check can be done much more easily on a 
> per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.






[jira] [Reopened] (SOLR-7730) speed-up faceting on doc values fields

2015-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reopened SOLR-7730:


> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment 
> index we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174
> before returning composite doc values, SCWR merges segment field infos, which 
> is expensive, but after the field info is merged it checks *only* the 
> docvalue type in it. This dv type check can be done much more easily on a 
> per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.






Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Dawid Weiss
It's not true that nobody is working on this. I have been working on the
SVN dump in the meantime. You would not believe how incredibly complex the
process of processing that (remote) dump is. Let me highlight a few key
issues:

1) There is no "one" Lucene SVN repository that can be transferred to git.
The history is a mess. Trunk, branches, tags -- all change paths at various
points in history. Entire projects are copied from *outside* the official
Lucene ASF path (when Solr, Nutch or Tika moved from the incubator, for
example).

2) The history of commits to Lucene's subpath of the SVN is ~50k commits.
ASF's commit history in which those 50k commits live is 1.8 *million*
commits. I think the git-svn sync crashes due to the sheer number of
(empty) commits in between actual changes.

3) There are a few commits that are gigantic. I mentioned Grant's 1.2G
patch, for example, but there are others (the second largest is 190megs, the
third is 136 megs).

4) The size of JARs is really not an issue. The entire SVN repo I mirrored
locally (including empty interim commits to cater for svn:mergeinfos) is
4G. If you strip the stuff like javadocs and side projects (Nutch, Tika,
Mahout) then I bet the entire history can fit in 1G total. Of course
stripping JARs is also doable.

5) There is lots of junk at the main SVN path so you can't just version the
top-level folder. If you wanted to checkout /asf/lucene then the size of
the resulting folder is enormous -- I terminated the checkout after I
reached over 20 gigs. Well, technically you *could* do it, it'd preserve
perfect history, but I wouldn't want to git co a past version that checks
out all the tags, branches, etc. This has to be mapped in a sensible way.

What I think is that all the above makes (straightforward) conversion to
git problematic. Especially moving paths are a problem -- how to mark tags/
branches, where the main line of development is, etc. This conversion would
have to be guided and hand-tuned to make sense. This effort would only pay
for itself if we move to git, otherwise I don't see the benefit. Paul's
script is fine for keeping short-term history.

Dawid

P.S. Either the SVN repo at Apache is broken or SVN itself is broken, which
makes processing SVN history even more fun. This dump indicates Tika being
moved from the incubator to Lucene:

svnrdump dump -r 712381 --incremental https://svn.apache.org/repos/asf/ >
out

But when you dump just Lucene's subpath, the output is broken (last
changeset in the file is an invalid changeset, it carries no target):

svnrdump dump -r 712381 --incremental
https://svn.apache.org/repos/asf/lucene > out



On Tue, Dec 15, 2015 at 6:04 PM, Yonik Seeley  wrote:

> If we move to git, stripping out jars seems to be an independent decision?
> Can you even strip out jars and preserve history (i.e. not change
> hashes and invalidate everyone's forks/clones)?
> I did run across this:
>
> http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history
>
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8372:
---
Attachment: SOLR-8372.patch

Here's a patch that allows bufferUpdates() to be called more than once, and 
removes the call to dropBufferedUpdates() from RecoveryStrategy.

Previously, if bufferUpdates() was called with state != ACTIVE, we simply 
returned w/o changing the state.  At least this case is now logged.

This has an additional side effect of having buffered versions in our log that 
were never applied to the index.  This seems OK though... better not to lose 
updates in general.

> Canceled recovery can lead to data loss
> ---
>
> Key: SOLR-8372
> URL: https://issues.apache.org/jira/browse/SOLR-8372
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8372.patch
>
>
> A recovery via index replication tells the update log to start buffering 
> updates.  If that recovery is canceled for whatever reason by the replica, 
> the RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering 
> and places the UpdateLog back in active mode.  If updates come from the 
> leader after this point (and before ReplicationStrategy retries recovery), 
> the update will be processed as normal and added to the transaction log. If 
> the server is bounced, those last updates to the transaction log look normal 
> (no FLAG_GAP) and can be used to determine who is more up to date. 






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 14913 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14913/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test], id],reqFieldNames=...> but was:<...s=(globs=[],fields=[[test, score, 
id],okFieldNames=[null, test, score], id],reqFieldNames=...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, 
id],okFieldNames=[null, score, test], id],reqFieldNames=...> but 
was:<...s=(globs=[],fields=[[test, score, id],okFieldNames=[null, test, score], 
id],reqFieldNames=...>
at 
__randomizedtesting.SeedInfo.seed([80FDE798363497B6:51DC5099092EFA1E]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.j

[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 259 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/259/
Java: multiarch/jdk1.7.0 -d32 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.ReturnFieldsTest.testToString

Error Message:
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, test],okFieldNames=[null, id, score, test]],reqFieldNames=[id,...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, test, id]],reqFieldNames=[id,...> but was:<...s=(globs=[],fields=[[id, score, test]],reqFieldNames=[id,...>
at __randomizedtesting.SeedInfo.seed([EC8AEC777DC963ED:3DAB5B7642D30E45]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementR

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Yonik Seeley
If we move to git, stripping out jars seems to be an independent decision?
Can you even strip out jars and preserve history (i.e. not change
hashes and invalidate everyone's forks/clones)?
I did run across this:
http://stackoverflow.com/questions/17470780/is-it-possible-to-slim-a-git-repository-without-rewriting-history

-Yonik

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Jack Krupansky
And if nobody steps up and "solves" the current technical issue will that
simply accelerate the (desired) shift to using git as the main repo for
future Lucene/Solr development? Would there be any downside to that outcome?

Is there any formal Apache policy for new projects as to whether they can
use git exclusively? Any examples of Apache projects that moved from svn to
git?

+1 for moving to git (with full non-jar history) if after all of this time
and hand-wringing "all the King's horses and all the King's men couldn't
put git-svn back together again". I'd rather see Lucene/Solr committers
focused on new feature development rather than doing Infra's job, and if
Infra can't do it easily, why not shift to a solution that has much less
downside and baggage and has a brighter future?

-- Jack Krupansky

On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller  wrote:

> Anyone willing to lead this discussion to some kind of better resolution?
> Did that whole back and forth help with any ideas on the best path forward?
> I know it's a complicated issue, git / svn, the light side, the dark side,
> but doesn't GitHub also depend on this mirroring? It's going to be super
> annoying when I can no longer pull from a relatively up to date git remote.
>
> Who has boiled down the correct path?
>
> - Mark
>
> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>
>> FYI.
>>
>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>> - the above, tar.bz2: 1.2G
>>
>> Sadly, I didn't succeed at recreating a local SVN repo from those
>> incremental dumps. svnadmin load fails with a cryptic error related to
>> the fact that revision number of node-copy operations refer to
>> original SVN numbers and they're apparently renumbered on import.
>> svnadmin isn't smart enough to somehow keep a reference of those
>> original numbers and svndumpfilter can't work with incremental dump
>> files... A seemingly trivial task of splitting a repo on a clean
>> boundary seems incredibly hard with SVN...
>>
>> If anybody wishes to play with the dump files, here they are:
>> http://goo.gl/m6q3J8
>>
>> Dawid
>>
>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>> > You can't avoid having the history in SVN. The ASF has one large repo,
>> and
>> > won't be deleting that repo, so the history will survive in perpetuity,
>> > regardless of what we do now.
>> >
>> > Upayavira
>> >
>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>> >
>> > It seems you'd want to preserve that history in a frozen/archived
>> Apache Svn
>> > repo for Lucene. Then make the new git repo slimmer before switching.
>> Folks
>> > that want very old versions or doing research can at least go through
>> the
>> > original SVN repo.
>> >
>> > On Tuesday, December 8, 2015, Dawid Weiss 
>> wrote:
>> >
>> > One more thing, perhaps of importance, the raw Lucene repo contains
>> > all the history of projects that then turned top-level (Nutch,
>> > Mahout). These could also be dropped (or ignored) when converting to
>> > git. If we agree JARs are not relevant, why should projects not
>> > directly related to Lucene/ Solr be?
>> >
>> > Dawid
>> >
>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>> wrote:
>> >>> Don’t know how much we have of historic jars in our history.
>> >>
>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>> >> that does the following:
>> >>
>> >> 1) git log all revisions touching
>> https://svn.apache.org/repos/asf/lucene
>> >> 2) grep revision numbers
>> >> 3) use svnrdump to get every single commit (revision) above, in
>> >> incremental mode.
>> >>
>> >> This will allow me to:
>> >>
>> >> 1) recreate only Lucene/ Solr SVN, locally.
>> >> 2) measure the size of SVN repo.
>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>> >> checkout, then-sync with git).
>> >>
>> >> From what I see up until now size should not be an issue at all. Even
>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>> >> (and I'm about 75% done). There is one interesting super-large commit,
>> >> this one:
>> >>
>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>> >>
>> 
>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>> >> line
>> >>
>> >> LUCENE-2748: bring in old Lucene docs
>> >>
>> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
>> >> it actually was.
>> >>
>> >> Will keep you posted.
>> >>
>> >> D.
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>> >
>> >
>> > --
>> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
>> LLC |
>> > 240.476.9983
>> > Author: Relevant Search
>> > This e-mail and all contents, including attachments, is considered to be
>> > Company Confidenti
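[Editor's note: Dawid's three-step per-revision dump quoted above can be sketched roughly as follows. The `svn log -q` output is canned here so the snippet is self-contained; against the real repository you would pipe `svn log -q https://svn.apache.org/repos/asf/lucene` instead. The `svnrdump` step, which needs network access, is left commented out.]

```shell
# Steps 1-2: list revisions touching the path, then grep out the revision numbers.
# The heredoc-style variable below stands in for real `svn log -q` output.
log_output='------------------------------------------------------------------------
r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012)
------------------------------------------------------------------------
r1240001 | someone | 2012-02-01 10:00:00 +0100 (Wed, 01 Feb 2012)
------------------------------------------------------------------------'
revs=$(printf '%s\n' "$log_output" | grep -oE '^r[0-9]+' | tr -d r)
echo "$revs"
# Step 3 (network, hypothetical REPO/dumps paths): fetch each revision as an
# incremental dump.
# for r in $revs; do
#   svnrdump dump --incremental -r "$r" "$REPO" > "dumps/r$r.dump"
# done
```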

Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Scott Blum
Let's just move to git. It's almost 2016. I suspect many contributors are
probably primarily working off the github mirror anyway.  Is there any
great argument for delaying?
On Dec 15, 2015 11:51 AM, "Mark Miller"  wrote:

> I don't think you will get a volunteer until someone sums up the
> discussion with a proposal that someone is not going to veto or something.
> We can't expect everyone to read the same tea leaves and come to the same
> conclusion.
>
> Perhaps a stripped down mirror is the consensus. I'd rather we had some
> agreement on what we were going to do though, rather than an agreement to
> investigate. If we think stripping down is technically feasible, and no
> one is going to violently disagree still, then let's decide to do that.
>
> - Mark
>
>
>
> On Tue, Dec 15, 2015 at 11:39 AM Doug Turnbull <
> dturnb...@opensourceconnections.com> wrote:
>
>> I thought the general consensus at minimum was to investigate a git
>> mirror that stripped some artifacts out (jars etc) to lighten up the work
>> of the process. If at some point the project switched to git, such a mirror
>> might be a suitable git repo for the project with archived older versions
>> in SVN.
>>
>> I think probably what is lacking is a volunteer to figure it all out.
>>
>>
>> -Doug
>>
>> On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller 
>> wrote:
>>
>>> Anyone willing to lead this discussion to some kind of better
>>> resolution? Did that whole back and forth help with any ideas on the best
>>> path forward? I know it's a complicated issue, git / svn, the light side,
>>> the dark side, but doesn't GitHub also depend on this mirroring? It's going
>>> to be super annoying when I can no longer pull from a relatively up to date
>>> git remote.
>>>
>>> Who has boiled down the correct path?
>>>
>>> - Mark
>>>
>>> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss 
>>> wrote:
>>>
 FYI.

 - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
 - the above, tar.bz2: 1.2G

 Sadly, I didn't succeed at recreating a local SVN repo from those
 incremental dumps. svnadmin load fails with a cryptic error related to
 the fact that revision number of node-copy operations refer to
 original SVN numbers and they're apparently renumbered on import.
 svnadmin isn't smart enough to somehow keep a reference of those
 original numbers and svndumpfilter can't work with incremental dump
 files... A seemingly trivial task of splitting a repo on a clean
 boundary seems incredibly hard with SVN...

 If anybody wishes to play with the dump files, here they are:
 http://goo.gl/m6q3J8

 Dawid

 On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
 > You can't avoid having the history in SVN. The ASF has one large
 repo, and
 > won't be deleting that repo, so the history will survive in
 perpetuity,
 > regardless of what we do now.
 >
 > Upayavira
 >
 > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
 >
 > It seems you'd want to preserve that history in a frozen/archived
 Apache Svn
 > repo for Lucene. Then make the new git repo slimmer before switching.
 Folks
 > that want very old versions or doing research can at least go through
 the
 > original SVN repo.
 >
 > On Tuesday, December 8, 2015, Dawid Weiss 
 wrote:
 >
 > One more thing, perhaps of importance, the raw Lucene repo contains
 > all the history of projects that then turned top-level (Nutch,
 > Mahout). These could also be dropped (or ignored) when converting to
 > git. If we agree JARs are not relevant, why should projects not
 > directly related to Lucene/ Solr be?
 >
 > Dawid
 >
 > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
 wrote:
 >>> Don’t know how much we have of historic jars in our history.
 >>
 >> I actually do know. Or will know. In about ~10 hours. I wrote a
 script
 >> that does the following:
 >>
 >> 1) git log all revisions touching
 https://svn.apache.org/repos/asf/lucene
 >> 2) grep revision numbers
 >> 3) use svnrdump to get every single commit (revision) above, in
 >> incremental mode.
 >>
 >> This will allow me to:
 >>
 >> 1) recreate only Lucene/ Solr SVN, locally.
 >> 2) measure the size of SVN repo.
 >> 3) measure the size of any conversion to git (even if it's one-by-one
 >> checkout, then-sync with git).
 >>
 >> From what I see up until now size should not be an issue at all. Even
 >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
 >> (and I'm about 75% done). There is one interesting super-large
 commit,
 >> this one:
 >>
 >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
 >>
 
 >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) |

[jira] [Reopened] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reopened SOLR-8388:
---

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
I don't think you will get a volunteer until someone sums up the discussion
with a proposal that someone is not going to veto or something. We can't
expect everyone to read the same tea leaves and come to the same
conclusion.

Perhaps a stripped down mirror is the consensus. I'd rather we had some
agreement on what we were going to do though, rather than an agreement to
investigate. If we think stripping down is technically feasible, and no
one is going to violently disagree still, then let's decide to do that.

- Mark



On Tue, Dec 15, 2015 at 11:39 AM Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:

> I thought the general consensus at minimum was to investigate a git mirror
> that stripped some artifacts out (jars etc) to lighten up the work of the
> process. If at some point the project switched to git, such a mirror might
> be a suitable git repo for the project with archived older versions in SVN.
>
> I think probably what is lacking is a volunteer to figure it all out.
>
>
> -Doug
>
> On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller 
> wrote:
>
>> Anyone willing to lead this discussion to some kind of better resolution?
>> Did that whole back and forth help with any ideas on the best path forward?
>> I know it's a complicated issue, git / svn, the light side, the dark side,
>> but doesn't GitHub also depend on this mirroring? It's going to be super
>> annoying when I can no longer pull from a relatively up to date git remote.
>>
>> Who has boiled down the correct path?
>>
>> - Mark
>>
>> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>>
>>> FYI.
>>>
>>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>>> - the above, tar.bz2: 1.2G
>>>
>>> Sadly, I didn't succeed at recreating a local SVN repo from those
>>> incremental dumps. svnadmin load fails with a cryptic error related to
>>> the fact that revision number of node-copy operations refer to
>>> original SVN numbers and they're apparently renumbered on import.
>>> svnadmin isn't smart enough to somehow keep a reference of those
>>> original numbers and svndumpfilter can't work with incremental dump
>>> files... A seemingly trivial task of splitting a repo on a clean
>>> boundary seems incredibly hard with SVN...
>>>
>>> If anybody wishes to play with the dump files, here they are:
>>> http://goo.gl/m6q3J8
>>>
>>> Dawid
>>>
>>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>>> > You can't avoid having the history in SVN. The ASF has one large repo,
>>> and
>>> > won't be deleting that repo, so the history will survive in perpetuity,
>>> > regardless of what we do now.
>>> >
>>> > Upayavira
>>> >
>>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>>> >
>>> > It seems you'd want to preserve that history in a frozen/archived
>>> Apache Svn
>>> > repo for Lucene. Then make the new git repo slimmer before switching.
>>> Folks
>>> > that want very old versions or doing research can at least go through
>>> the
>>> > original SVN repo.
>>> >
>>> > On Tuesday, December 8, 2015, Dawid Weiss 
>>> wrote:
>>> >
>>> > One more thing, perhaps of importance, the raw Lucene repo contains
>>> > all the history of projects that then turned top-level (Nutch,
>>> > Mahout). These could also be dropped (or ignored) when converting to
>>> > git. If we agree JARs are not relevant, why should projects not
>>> > directly related to Lucene/ Solr be?
>>> >
>>> > Dawid
>>> >
>>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>>> wrote:
>>> >>> Don’t know how much we have of historic jars in our history.
>>> >>
>>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>>> >> that does the following:
>>> >>
>>> >> 1) git log all revisions touching
>>> https://svn.apache.org/repos/asf/lucene
>>> >> 2) grep revision numbers
>>> >> 3) use svnrdump to get every single commit (revision) above, in
>>> >> incremental mode.
>>> >>
>>> >> This will allow me to:
>>> >>
>>> >> 1) recreate only Lucene/ Solr SVN, locally.
>>> >> 2) measure the size of SVN repo.
>>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>>> >> checkout, then-sync with git).
>>> >>
>>> >> From what I see up until now size should not be an issue at all. Even
>>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>>> >> (and I'm about 75% done). There is one interesting super-large commit,
>>> >> this one:
>>> >>
>>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>>> >>
>>> 
>>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>>> >> line
>>> >>
>>> >> LUCENE-2748: bring in old Lucene docs
>>> >>
>>> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
>>> >> it actually was.
>>> >>
>>> >> Will keep you posted.
>>> >>
>>> >> D.
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For a

[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058305#comment-15058305
 ] 

Christine Poerschke commented on SOLR-8388:
---

Thanks Steve. Looking into it.

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Doug Turnbull
I thought the general consensus at minimum was to investigate a git mirror
that stripped some artifacts out (jars etc) to lighten up the work of the
process. If at some point the project switched to git, such a mirror might
be a suitable git repo for the project with archived older versions in SVN.

I think probably what is lacking is a volunteer to figure it all out.

-Doug

On Tue, Dec 15, 2015 at 11:32 AM, Mark Miller  wrote:

> Anyone willing to lead this discussion to some kind of better resolution?
> Did that whole back and forth help with any ideas on the best path forward?
> I know it's a complicated issue, git / svn, the light side, the dark side,
> but doesn't GitHub also depend on this mirroring? It's going to be super
> annoying when I can no longer pull from a relatively up to date git remote.
>
> Who has boiled down the correct path?
>
> - Mark
>
> On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:
>
>> FYI.
>>
>> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
>> - the above, tar.bz2: 1.2G
>>
>> Sadly, I didn't succeed at recreating a local SVN repo from those
>> incremental dumps. svnadmin load fails with a cryptic error related to
>> the fact that revision number of node-copy operations refer to
>> original SVN numbers and they're apparently renumbered on import.
>> svnadmin isn't smart enough to somehow keep a reference of those
>> original numbers and svndumpfilter can't work with incremental dump
>> files... A seemingly trivial task of splitting a repo on a clean
>> boundary seems incredibly hard with SVN...
>>
>> If anybody wishes to play with the dump files, here they are:
>> http://goo.gl/m6q3J8
>>
>> Dawid
>>
>> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
>> > You can't avoid having the history in SVN. The ASF has one large repo,
>> and
>> > won't be deleting that repo, so the history will survive in perpetuity,
>> > regardless of what we do now.
>> >
>> > Upayavira
>> >
>> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
>> >
>> > It seems you'd want to preserve that history in a frozen/archived
>> Apache Svn
>> > repo for Lucene. Then make the new git repo slimmer before switching.
>> Folks
>> > that want very old versions or doing research can at least go through
>> the
>> > original SVN repo.
>> >
>> > On Tuesday, December 8, 2015, Dawid Weiss 
>> wrote:
>> >
>> > One more thing, perhaps of importance, the raw Lucene repo contains
>> > all the history of projects that then turned top-level (Nutch,
>> > Mahout). These could also be dropped (or ignored) when converting to
>> > git. If we agree JARs are not relevant, why should projects not
>> > directly related to Lucene/ Solr be?
>> >
>> > Dawid
>> >
>> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
>> wrote:
>> >>> Don’t know how much we have of historic jars in our history.
>> >>
>> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
>> >> that does the following:
>> >>
>> >> 1) git log all revisions touching
>> https://svn.apache.org/repos/asf/lucene
>> >> 2) grep revision numbers
>> >> 3) use svnrdump to get every single commit (revision) above, in
>> >> incremental mode.
>> >>
>> >> This will allow me to:
>> >>
>> >> 1) recreate only Lucene/ Solr SVN, locally.
>> >> 2) measure the size of SVN repo.
>> >> 3) measure the size of any conversion to git (even if it's one-by-one
>> >> checkout, then-sync with git).
>> >>
>> >> From what I see up until now size should not be an issue at all. Even
>> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
>> >> (and I'm about 75% done). There is one interesting super-large commit,
>> >> this one:
>> >>
>> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
>> >>
>> 
>> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
>> >> line
>> >>
>> >> LUCENE-2748: bring in old Lucene docs
>> >>
>> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
>> >> it actually was.
>> >>
>> >> Will keep you posted.
>> >>
>> >> D.
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>> >
>> >
>> > --
>> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
>> LLC |
>> > 240.476.9983
>> > Author: Relevant Search
>> > This e-mail and all contents, including attachments, is considered to be
>> > Company Confidential unless explicitly stated otherwise, regardless of
>> > whether attachments are marked as such.
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> - Mark
> about.me/markrmiller
>



-- 
Doug Turnbull | Search Relevance Consultant | OpenSource Connections


Re: Lucene/Solr git mirror will soon turn off

2015-12-15 Thread Mark Miller
Anyone willing to lead this discussion to some kind of better resolution?
Did that whole back and forth help with any ideas on the best path forward?
I know it's a complicated issue, git / svn, the light side, the dark side,
but doesn't GitHub also depend on this mirroring? It's going to be super
annoying when I can no longer pull from a relatively up to date git remote.

Who has boiled down the correct path?

- Mark

On Wed, Dec 9, 2015 at 6:07 AM Dawid Weiss  wrote:

> FYI.
>
> - All of Lucene's SVN, incremental deltas, uncompressed: 5.0G
> - the above, tar.bz2: 1.2G
>
> Sadly, I didn't succeed at recreating a local SVN repo from those
> incremental dumps. svnadmin load fails with a cryptic error related to
> the fact that revision number of node-copy operations refer to
> original SVN numbers and they're apparently renumbered on import.
> svnadmin isn't smart enough to somehow keep a reference of those
> original numbers and svndumpfilter can't work with incremental dump
> files... A seemingly trivial task of splitting a repo on a clean
> boundary seems incredibly hard with SVN...
>
> If anybody wishes to play with the dump files, here they are:
> http://goo.gl/m6q3J8
>
> Dawid
>
> On Tue, Dec 8, 2015 at 10:49 PM, Upayavira  wrote:
> > You can't avoid having the history in SVN. The ASF has one large repo,
> and
> > won't be deleting that repo, so the history will survive in perpetuity,
> > regardless of what we do now.
> >
> > Upayavira
> >
> > On Tue, Dec 8, 2015, at 09:24 PM, Doug Turnbull wrote:
> >
> > It seems you'd want to preserve that history in a frozen/archived Apache
> Svn
> > repo for Lucene. Then make the new git repo slimmer before switching.
> Folks
> > that want very old versions or doing research can at least go through the
> > original SVN repo.
> >
> > On Tuesday, December 8, 2015, Dawid Weiss  wrote:
> >
> > One more thing, perhaps of importance, the raw Lucene repo contains
> > all the history of projects that then turned top-level (Nutch,
> > Mahout). These could also be dropped (or ignored) when converting to
> > git. If we agree JARs are not relevant, why should projects not
> > directly related to Lucene/ Solr be?
> >
> > Dawid
> >
> > On Tue, Dec 8, 2015 at 10:05 PM, Dawid Weiss 
> wrote:
> >>> Don’t know how much we have of historic jars in our history.
> >>
> >> I actually do know. Or will know. In about ~10 hours. I wrote a script
> >> that does the following:
> >>
> >> 1) git log all revisions touching
> https://svn.apache.org/repos/asf/lucene
> >> 2) grep revision numbers
> >> 3) use svnrdump to get every single commit (revision) above, in
> >> incremental mode.
> >>
> >> This will allow me to:
> >>
> >> 1) recreate only Lucene/ Solr SVN, locally.
> >> 2) measure the size of SVN repo.
> >> 3) measure the size of any conversion to git (even if it's one-by-one
> >> checkout, then-sync with git).
> >>
> >> From what I see up until now size should not be an issue at all. Even
> >> with all binary blobs so far the SVN incremental dumps measure ~3.7G
> >> (and I'm about 75% done). There is one interesting super-large commit,
> >> this one:
> >>
> >> svn log -r1240618 https://svn.apache.org/repos/asf/lucene
> >> 
> >> r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1
> >> line
> >>
> >> LUCENE-2748: bring in old Lucene docs
> >>
> >> This commit diff weighs... wait for it... 1.3G! I didn't check what
> >> it actually was.
> >>
> >> Will keep you posted.
> >>
> >> D.
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
> >
> >
> > --
> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
> LLC |
> > 240.476.9983
> > Author:Relevant Search
> > This e-mail and all contents, including attachments, is considered to be
> > Company Confidential unless explicitly stated otherwise, regardless of
> > whether attachments are marked as such.
> >
> >
>
>
> --
- Mark
about.me/markrmiller


[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058262#comment-15058262
 ] 

Steve Rowe commented on SOLR-8388:
--

My Jenkins found a reproducible ReturnFieldsTest.testToString failure (Linux, 
Oracle Java7, branch_5x):

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ReturnFieldsTest 
-Dtests.method=testToString -Dtests.seed=4E6AE8A4D715B23B -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=sk -Dtests.timezone=Europe/Brussels -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.04s | ReturnFieldsTest.testToString <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: 
expected:<...s=(globs=[],fields=[[score, test, id],okFieldNames=[null, score, 
test], id],reqFieldNames=...> but was:<...s=(globs=[],fields=[[test, score, 
id],okFieldNames=[null, test, score], id],reqFieldNames=...>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4E6AE8A4D715B23B:9F4B5FA5E80FDF93]:0)
   [junit4]>at 
org.apache.solr.search.ReturnFieldsTest.testToString(ReturnFieldsTest.java:109)
{noformat}

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15210 - Failure!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15210/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:43234/collMinRf_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Timeout 
occured while waiting response from server at: 
http://127.0.0.1:43234/collMinRf_1x3_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([4883AA101CB37E2A:C0D795CAB24F13D2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:635)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:982)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:609)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:194)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.S

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 261 - Still Failing!

2015-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/261/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
ERROR: SolrIndexSearcher opens=27 closes=26

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=27 closes=26
at __randomizedtesting.SeedInfo.seed([45F43F257D825341]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:453)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:225)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery: 
   1) Thread[id=14668, name=searcherExecutor-6412-thread-1, state=WAITING, 
group=TGRP-TestCoreDiscovery]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestCoreDiscovery: 
   1) Thread[id=14668, name=searcherExecutor-6412-thread-1, state=WAITING, 
group=TGRP-TestCoreDiscovery]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([45F43F257D825341]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
There are still zombie threads that couldn't be terminated:1)

[jira] [Updated] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2015-12-15 Thread Jens Wille (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Wille updated SOLR-8418:
-
Attachment: SOLR-8418.patch

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
> Attachments: SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2015-12-15 Thread Jens Wille (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058234#comment-15058234
 ] 

Jens Wille commented on SOLR-8418:
--

I've attached a patch that fixes the issue for all three classes and adds a 
test for each of them.

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
> Attachments: SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Created] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2015-12-15 Thread Jens Wille (JIRA)
Jens Wille created SOLR-8418:


 Summary: BoostQuery cannot be cast to TermQuery
 Key: SOLR-8418
 URL: https://issues.apache.org/jira/browse/SOLR-8418
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 5.4
Reporter: Jens Wille


As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
r1701621 to use the new API. In SOLR-7912, I adapted that code for 
{{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
parameter just failed for me after updating to 5.4 with the following error 
message:

{code}
java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
cast to org.apache.lucene.search.TermQuery
at 
org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
at org.apache.solr.search.QParser.getQuery(QParser.java:141)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
{code}






[jira] [Comment Edited] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2015-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058198#comment-15058198
 ] 

Stéphane Campinas edited comment on LUCENE-6932 at 12/15/15 3:23 PM:
-

A possible solution for this bug is in the attached file issue6932.patch.
The problem is that the "bufferPosition" variable is overwritten in the "seek" 
method, even though it had been set to BUFFER_SIZE so that an EOFException 
would be thrown.
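A minimal sketch of the failure mode just described (hypothetical; the class and field names are illustrative, not the actual RAMInputStream code): the seek target must be validated against the stream length before any buffer state is updated, otherwise the position that was set to signal EOF gets silently overwritten:

```python
class SketchInput:
    """Toy input stream illustrating a seek-past-EOF check (assumed names)."""
    BUFFER_SIZE = 1024

    def __init__(self, length):
        self.length = length           # total bytes in the stream
        self.buffer_position = 0       # position within the current buffer

    def seek(self, pos):
        # Check BEFORE touching buffer_position, so an out-of-range seek
        # raises instead of clobbering the EOF marker.
        if pos < 0 or pos > self.length:
            raise EOFError(f"seek past EOF: pos={pos} length={self.length}")
        self.buffer_position = pos % self.BUFFER_SIZE
```

Seeking to the stream length itself stays legal (a subsequent read then hits EOF); only a target strictly past it raises.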


was (Author: stephane.campi...@gmail.com):
A possible solution

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Attachments: issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Updated] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2015-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stéphane Campinas updated LUCENE-6932:
--
Attachment: issue6932.patch

A possible solution

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Attachments: issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Resolved] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8388.
---
Resolution: Fixed

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[jira] [Commented] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058195#comment-15058195
 ] 

ASF subversion and git services commented on SOLR-8388:
---

Commit 1720180 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720180 ]

SOLR-8388: more TestSolrQueryResponse.java tests; add SolrReturnFields.toString 
method, ReturnFieldsTest.testToString test; (merge in revision 1720160 from 
trunk)

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8388-part1of2.patch, SOLR-8388-part2of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






Re: Expected EOFException is not thrown

2015-12-15 Thread Stéphane Campinas
https://issues.apache.org/jira/browse/LUCENE-6932

On 15 December 2015 at 15:03, Robert Muir  wrote:

> at a glance, this looks like a bug in RAMDirectory to me. Can you open an
> issue?
>
> On Tue, Dec 15, 2015 at 9:52 AM, Stéphane Campinas
>  wrote:
> > Hi,
> >
> > In the JUnit test case from the attached file, I call
> "IndexInput.seek()" on a position past
> > EOF. However, there is no EOFException that is thrown.
> >
> > To reproduce the error, please use the seed test:
> -Dtests.seed=8273A81C129D35E2
> >
> > Could you confirm whether this is indeed a bug, or a misuse of the
> > API on my part?
> >
> > If you do confirm it as a bug, I will open an issue on JIRA with a
> > patch.
> >
> > Thanks,
> >
> > --
> > Stéphane Campinas
> >
> >
>
>


-- 
Campinas Stéphane


[jira] [Updated] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2015-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stéphane Campinas updated LUCENE-6932:
--
Attachment: testcase.txt

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Attachments: testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Created] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2015-12-15 Thread JIRA
Stéphane Campinas created LUCENE-6932:
-

 Summary: Seek past EOF with RAMDirectory should throw EOFException
 Key: LUCENE-6932
 URL: https://issues.apache.org/jira/browse/LUCENE-6932
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: Trunk
Reporter: Stéphane Campinas


In the JUnit test case from the attached file, I call "IndexInput.seek()" on a 
position past
EOF. However, there is no EOFException that is thrown.

To reproduce the error, please use the seed test: -Dtests.seed=8273A81C129D35E2






Re: Expected EOFException is not thrown

2015-12-15 Thread Robert Muir
at a glance, this looks like a bug in RAMDirectory to me. Can you open an issue?

On Tue, Dec 15, 2015 at 9:52 AM, Stéphane Campinas
 wrote:
> Hi,
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
>
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2
>
> Could you confirm whether this is indeed a bug, or a misuse of the
> API on my part?
>
> If you do confirm it as a bug, I will open an issue on JIRA with a
> patch.
>
> Thanks,
>
> --
> Stéphane Campinas
>
>


