[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 218 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/218/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.rule.RulesTest.doIntegrationTest

Error Message:
Error from server at http://127.0.0.1:41621: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-00

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41621: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-00
at 
__randomizedtesting.SeedInfo.seed([C1616811C592:24522F90DA129790]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-9194) Enhance the bin/solr script to perform file operations to/from Zookeeper

2016-06-23 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9194:
-
Summary: Enhance the bin/solr script to perform file operations to/from 
Zookeeper  (was: Enhance the bin/solr script to put and get arbitrary files 
to/from Zookeeper)

> Enhance the bin/solr script to perform file operations to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.5 #20: POMs out of sync

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.5/20/

No tests ran.

Build Log:
[...truncated 29115 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/build.xml:766: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/build.xml:299: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/lucene/build.xml:420: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/lucene/common-build.xml:2273:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/lucene/common-build.xml:1701:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.5/lucene/common-build.xml:612:
 Error deploying artifact 'org.apache.lucene:lucene-misc:jar': Error deploying 
artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-misc/5.5.2-SNAPSHOT/lucene-misc-5.5.2-20160624.042730-8-sources.jar.
 Return code is: 502

Total time: 30 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347663#comment-15347663
 ] 

ASF subversion and git services commented on SOLR-8546:
---

Commit caec6b40fd4645cc9184085c1c8742e94726ee72 in lucene-solr's branch 
refs/heads/branch_6x from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=caec6b4 ]

SOLR-8546: TestLazyCores is failing a lot on the Jenkins cluster.


> TestLazyCores is failing a lot on the Jenkins cluster.
> --
>
> Key: SOLR-8546
> URL: https://issues.apache.org/jira/browse/SOLR-8546
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erick Erickson
> Attachments: SOLR-8546.patch
>
>
> Looks like two issues:
> * A thread leak due to searcherExecutor
> * An ObjectTracker fail because a SolrCore is left unclosed.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 668 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/668/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 10843 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/init-core-data-001
   [junit4]   2> 587459 INFO  
(SUITE-TestReplicationHandler-seed#[449BE8EBB4908E9A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4]   2> 587461 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.SolrTestCaseJ4 ###Starting doTestRepeater
   [junit4]   2> 587462 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/collection1
   [junit4]   2> 587466 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 587467 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@f03d8be{/solr,null,AVAILABLE}
   [junit4]   2> 587467 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@3c3faed1{HTTP/1.1,[http/1.1]}{127.0.0.1:60974}
   [junit4]   2> 587468 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.e.j.s.Server Started @590914ms
   [junit4]   2> 587468 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/collection1/data,
 hostContext=/solr, hostPort=60974}
   [junit4]   2> 587468 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
sun.misc.Launcher$AppClassLoader@6d06d69c
   [junit4]   2> 587468 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001'
   [junit4]   2> 587468 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 587469 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
system property or JNDI)
   [junit4]   2> 587469 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/solr.xml
   [junit4]   2> 587473 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.CorePropertiesLocator Config-defined core root directory: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/.
   [junit4]   2> 587473 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.CoreContainer New CoreContainer 2088626431
   [junit4]   2> 587473 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001]
   [junit4]   2> 587473 WARN  
(TEST-TestReplicationHandler.doTestRepeater-seed#[449BE8EBB4908E9A]) [] 
o.a.s.c.CoreContainer Couldn't add files from 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/lib
 to classpath: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_449BE8EBB4908E9A-001/solr-instance-001/lib
   [junit4]   2> 587473 INFO  

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 92 - Failure

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/92/

No tests ran.

Build Log:
[...truncated 40519 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.2.0-src.tgz...
   [smoker] 29.8 MB in 0.03 sec (1115.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.2.0.tgz...
   [smoker] 64.3 MB in 0.06 sec (1103.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.2.0.zip...
   [smoker] 74.9 MB in 0.07 sec (1084.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6024 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6024 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 224 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (193.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.2.0-src.tgz...
   [smoker] 39.1 MB in 0.61 sec (64.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.2.0.tgz...
   [smoker] 137.0 MB in 1.17 sec (117.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.2.0.zip...
   [smoker] 145.6 MB in 0.85 sec (170.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]   [-]  
   

[jira] [Updated] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.

2016-06-23 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-8546:
-
Attachment: SOLR-8546.patch

Oh my this is embarrassing, how long this has lingered. I finally got it to 
fail locally and...

The good news is it's a test problem, not the code.
The good news is that the test fix is trivial.

The bad news is it's so stupid. There are two calls to random.nextInt() that 
look like this:

int blah = random.nextInt(1);
// some stuff
int blort = random.nextInt(blah);

Whenever blah == 0 the second call throws an exception, since the bound passed 
to nextInt() must be positive.

Checking this in, but I'll keep this JIRA open for a while to see if Jenkins is 
happy as well as beasting it a _lot_ locally.
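The failure mode described above is easy to reproduce in isolation. A minimal sketch (the variable names blah/blort come from the comment; the bounds and the clamping fix are illustrative assumptions, not the actual TestLazyCores patch): Random.nextInt(bound) throws IllegalArgumentException whenever the bound is zero or negative.

```java
import java.util.Random;

public class NextIntBoundDemo {
    public static void main(String[] args) {
        Random random = new Random();
        int blah = random.nextInt(1);         // bound 1: always returns 0
        try {
            int blort = random.nextInt(blah); // blah == 0, so this throws
            System.out.println(blort);
        } catch (IllegalArgumentException e) {
            // Java 8 reports "bound must be positive"
            System.out.println("caught: " + e.getMessage());
        }
        // One trivial fix: clamp the bound so it is at least 1.
        int safe = random.nextInt(Math.max(1, blah));
        System.out.println("safe = " + safe);
    }
}
```

Under randomized testing this only fails on seeds where the first bound produces 0, which is why it can linger for a long time before a Jenkins run trips over it.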

> TestLazyCores is failing a lot on the Jenkins cluster.
> --
>
> Key: SOLR-8546
> URL: https://issues.apache.org/jira/browse/SOLR-8546
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erick Erickson
> Attachments: SOLR-8546.patch
>
>
> Looks like two issues:
> * A thread leak due to searcherExecutor
> * An ObjectTracker fail because a SolrCore is left unclosed.






[jira] [Commented] (SOLR-8546) TestLazyCores is failing a lot on the Jenkins cluster.

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347645#comment-15347645
 ] 

ASF subversion and git services commented on SOLR-8546:
---

Commit bc1237a646066706a027ee42b975cf3aea82a37f in lucene-solr's branch 
refs/heads/master from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc1237a ]

SOLR-8546: TestLazyCores is failing a lot on the Jenkins cluster.


> TestLazyCores is failing a lot on the Jenkins cluster.
> --
>
> Key: SOLR-8546
> URL: https://issues.apache.org/jira/browse/SOLR-8546
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erick Erickson
>
> Looks like two issues:
> * A thread leak due to searcherExecutor
> * An ObjectTracker fail because a SolrCore is left unclosed.






[jira] [Updated] (SOLR-9245) docBoost is still compounded on copyField

2016-06-23 Thread Daiki Ikawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daiki Ikawa updated SOLR-9245:
--
Affects Version/s: master (7.0)
   5.5.2
   5.4.1

> docBoost is still compounded on copyField
> -
>
> Key: SOLR-9245
> URL: https://issues.apache.org/jira/browse/SOLR-9245
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1, 5.5.2, master (7.0)
>Reporter: Daiki Ikawa
>
> In some cases, the issue [SOLR-3981] is still unresolved.
> schema.xml 
> {noformat}
>stored="false" multiValued="true" />
>   
> {noformat}
> and MyLocalApplicationSampleWrong.java
> {noformat}
> doc.addField("source_text", "foo");
> doc.addField("hoge_text", "bar");
> doc.setDocumentBoost(10);
> {noformat}
> then I got a fieldNorm value greater than 1E9 (docBoost is still 
> compounded), 
> because the "compoundBoost" is applied twice when the copyField values are 
> generated before the destination field.






[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-23 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347550#comment-15347550
 ] 

Erick Erickson commented on SOLR-9194:
--

Jan:

Thanks for testing and the feedback.

Your "X's"
bq: When mis-typing, such as omitting the -z, we print the error msg followed 
by the full usage. Suggest we instead print the helpful error message followed 
by "Type bin/solr zk -help for usage help"
 
Done. The long help messages do, indeed, get in the way of figuring out what 
was wrong.

bq: This log msg from CloudSolrClient is annoying:

Made it into a DEBUG rather than INFO.

bq: Typo: Name of the...

fixed

bq: The command bin/solr zk rm -r / succeeds, rendering Solr useless

Well don't do that ;). I put in a check and the script now barfs in that 
situation.

bq: Why do we write "Solr MUST be started on.

because when I was copying things around during the original 
upconfig/downconfig I used CloudSolrClient like other tools in SolrCLI did and 
there is a check to see if Zookeeper has been initialized. In other words 
because I didn't look at it carefully enough ;) I just proofed out using 
SolrZkClient and at least it works on the 'cp' command. So I should be able to 
convert the rest. That's annoyed me for quite a while but I never tried getting 
around it. Now I have.

bq: Could we wish for a solr zk ls command?
Yep, we sure could. Testing the cp command just made me wish for one, I think 
I'll put it in while I'm at it.

Thanks again! Probably get another patch up this weekend with all this, 
including the ls command.


> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






[jira] [Commented] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347547#comment-15347547
 ] 

Kevin Risden commented on SOLR-9167:


I'm thinking this should be reopened to address the handling of non-ZK 
addresses when trying to connect. I don't think there are any tests that go 
against a non-ZK address. This should at least try to fail gracefully instead 
of the error message about an IOException from ZK.

> Unable to connect to solr via solrj jdbc driver 
> 
>
> Key: SOLR-9167
> URL: https://issues.apache.org/jira/browse/SOLR-9167
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 6.0
> Environment: java.version=1.8.0_77
> java.vendor=Oracle Corporation
> os.name=Mac OS X
> os.arch=x86_64
> os.version=10.11.5
>Reporter: Christian Schwarzinger
>Priority: Minor
>
> Getting the following error, when trying to connect to solr via jdbc driver.
> {panel:title=client 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> (ClientCnxn.java:1102) - Session 0x0 for server 
> fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:8983, unexpected error, closing 
> socket connection and attempting reconnect
> java.io.IOException: Packet len1213486160 is out of range!
>   at 
> org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {panel}
> This is, imho, caused by the following server error:
> {panel:title=server 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> Illegal character 0x0 in state=START for buffer 
> HeapByteBuffer@5cc6fe87[p=1,l=49,c=8192,r=48]={\x00<<<\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>>>charset=UTF-8\r\nCo...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
> {panel}
> Using http interface for sql via curl works however:
> {code}
> bin/solr start -cloud
> bin/solr create -c test
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/test/update/json/docs' --data-binary '
> {
>   "id": "1",
>   "title": "Doc 1"
> }'
> curl 'http://localhost:8983/solr/test/update?commit=true'
> curl --data-urlencode 'stmt=SELECT count(*) FROM test' 
> http://localhost:8983/solr/test/sql?aggregationMode=facet
> {code}
> This is the code, that fails:
> {code}
> Connection con = 
> DriverManager.getConnection("jdbc:solr://localhost:8983?collection=test&aggregationMode=map_reduce&numWorkers=2");
> {code}
> taken from: 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface
> Same error also occurs in 6.1.0-68 developer snapshot.
> Background: I'm trying to write a solr sql connector for Jedox BI Suite, 
> which should allow for better integration of solr into BI processes. Any 
> advice / help appreciated.
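The underlying problem in this report is that the jdbc:solr:// connection string must point at a ZooKeeper address, not a plain Solr node; handing it localhost:8983 (Solr's HTTP port) produces the "Packet len ... is out of range" ZooKeeper error quoted above. A hedged sketch of the intended usage (port 9983 assumes the embedded ZooKeeper started by "bin/solr start -cloud", which defaults to the Solr port plus 1000; the collection name and query parameters here are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSketch {
    public static void main(String[] args) throws Exception {
        // jdbc:solr:// takes a ZooKeeper host:port, then the target
        // collection and Streaming Expressions options as URL parameters.
        String url = "jdbc:solr://localhost:9983"
                + "?collection=test&aggregationMode=map_reduce&numWorkers=2";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM test")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```

Running this requires a live SolrCloud cluster and the SolrJ driver on the classpath; the point is only the shape of the URL, which is what the client in the report got wrong.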






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 100 - Still Failing

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/100/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:50255/khmqk/l

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:50255/khmqk/l
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:601)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:399)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:515)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-06-23 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347447#comment-15347447
 ] 

Hrishikesh Gadre commented on SOLR-9055:


[~markrmil...@gmail.com] [~varunthacker] Based on our discussion in SOLR-7374, 
I created SOLR-9242 to track changes required to support collection level 
backup/restore for other file systems. Once those changes are committed, I will 
submit another patch here. It would include the following:

- Add Solr/Lucene version to check the compatibility between the backup version 
and the version of Solr on which it is being restored.
- Add a backup implementation version to check the compatibility between the 
"restore" implementation and backup format.
- Introduce a Strategy interface to define how the Solr index data is backed up 
(e.g. using file copy approach).
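The version-compatibility check and Strategy interface described above could be sketched as follows. This is a minimal illustration under assumed names (BackupCompat, BackupStrategy, compareVersions, canRestore are all hypothetical, not the actual SOLR-9055 API):

```java
// Hedged sketch: a hypothetical version-compatibility check plus a Strategy
// interface for how index data is backed up. All names here are illustrative.
import java.util.Arrays;

public class BackupCompat {
    /** Strategy for how the Solr index data is backed up (e.g. plain file copy). */
    interface BackupStrategy {
        void backup(String collection, String destination);
    }

    /** Compare dotted version strings, e.g. "6.1.0" vs "6.2.0". */
    static int compareVersions(String a, String b) {
        int[] va = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] vb = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(va.length, vb.length); i++) {
            int x = i < va.length ? va[i] : 0;
            int y = i < vb.length ? vb[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    /** A backup may only be restored onto the same or a newer Solr version. */
    static boolean canRestore(String backupSolrVersion, String currentSolrVersion) {
        return compareVersions(backupSolrVersion, currentSolrVersion) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(canRestore("6.1.0", "6.2.0")); // true
        System.out.println(canRestore("7.0.0", "6.2.0")); // false
    }
}
```

Rejecting a restore of a newer-format backup onto an older Solr is the point of recording both the Solr/Lucene version and the backup implementation version in the backup metadata.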

> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Attachments: SOLR-9055.patch, SOLR-9055.patch, SOLR-9055.patch
>
>
> SOLR-5750 implemented backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically, the following improvements should be made:
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-06-23 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9242:
---
Attachment: SOLR-9242.patch

[~varunthacker] Please find the patch attached. It includes the following:

- Updated the collection level backup/restore implementation to use the 
BackupRepository interface.
- Removed the cluster property to define a default backup location. The default 
will now be associated with the backup repository configuration.
- Unified the backup/restore API parameter constants in CoreAdminParams class 
(and removed duplicate declarations in other classes).
- Added unit test to verify the HDFS integration for collection level 
backup/restore.
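A pluggable repository lookup in the spirit of this change might look like the sketch below. RepositoryRegistry and its methods are illustrative stand-ins, not Solr's actual BackupRepository API; only the idea (a "repository" request parameter resolved against registered implementations, each carrying its own default location) comes from the patch description:

```java
// Hedged sketch of resolving a backup repository by name. The real Solr
// interfaces differ; this only illustrates the lookup and per-repository
// default location replacing the old cluster-wide property.
import java.util.HashMap;
import java.util.Map;

public class RepositoryRegistry {
    /** Simplified stand-in for Solr's BackupRepository interface. */
    interface BackupRepository {
        String defaultLocation(); // default now lives with the repository config
    }

    private static final Map<String, BackupRepository> REPOS = new HashMap<>();

    static void register(String name, BackupRepository repo) {
        REPOS.put(name, repo);
    }

    /** Resolve the repository named by the request's "repository" parameter. */
    static BackupRepository resolve(String repositoryParam) {
        BackupRepository repo = REPOS.get(repositoryParam);
        if (repo == null) {
            throw new IllegalArgumentException("Unknown repository: " + repositoryParam);
        }
        return repo;
    }

    public static void main(String[] args) {
        register("local", () -> "/var/solr/backups");
        register("hdfs", () -> "hdfs://namenode:8020/solr/backups");
        System.out.println(resolve("hdfs").defaultLocation());
    }
}
```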


> Collection level backup/restore should provide a param for specifying the 
> repository implementation it should use
> -
>
> Key: SOLR-9242
> URL: https://issues.apache.org/jira/browse/SOLR-9242
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9242.patch
>
>
> SOLR-7374 provides BackupRepository interface to enable storing Solr index 
> data to a configured file-system (e.g. HDFS, local file-system etc.). This 
> JIRA is to track the work required to extend this functionality at the 
> collection level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7354) MoreLikeThis incorrectly does toString on Field object

2016-06-23 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll reassigned LUCENE-7354:
---

Assignee: Grant Ingersoll

> MoreLikeThis incorrectly does toString on Field object
> --
>
> Key: LUCENE-7354
> URL: https://issues.apache.org/jira/browse/LUCENE-7354
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.0.1, 5.5.1, master (7.0)
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
>
> In MoreLikeThis.java, circa line 763, when calling addTermFrequencies on a 
> Field object, we are incorrectly calling toString on the Field object, which 
> puts the Field attributes (indexed, stored, et al.) into the String that is 
> returned.
> I'll put up a patch/fix shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-23 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347349#comment-15347349
 ] 

Gary Lee edited comment on SOLR-9244 at 6/23/16 10:39 PM:
--

SOLR-8657 appears to detail another path in which the same issue occurs - lots 
of those errors polluting the logs. In our case it seems that HttpSolrCall.call 
isn't properly clearing the SolrRequestInfo. See the comment below marking 
where I would expect a call to clear the request info:
{noformat}
  public Action call() throws IOException {
MDCLoggingContext.reset();
MDCLoggingContext.setNode(cores);

if (cores == null) {
  sendError(503, "Server is shutting down or failed to initialize");
  return RETURN;
}

if (solrDispatchFilter.abortErrorMessage != null) {
  sendError(500, solrDispatchFilter.abortErrorMessage);
  return RETURN;
}

try {
  init();
...
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, 
solrRsp));
execute(solrRsp);
HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
Iterator<Map.Entry<String, String>> headers = solrRsp.httpHeaders();
while (headers.hasNext()) {
  Map.Entry<String, String> entry = headers.next();
  resp.addHeader(entry.getKey(), entry.getValue());
}
QueryResponseWriter responseWriter = 
core.getQueryResponseWriter(solrReq);
if (invalidStates != null) 
solrReq.getContext().put(CloudSolrClient.STATE_VERSION, invalidStates);
writeResponse(solrRsp, responseWriter, reqMethod);
  }
  return RETURN;
default: return action;
  }
} catch (Throwable ex) {
  sendError(ex);
  // walk the entire cause chain to search for an Error
  Throwable t = ex;
  while (t != null) {
if (t instanceof Error) {
  if (t != ex) {
log.error("An Error was wrapped in another exception - please 
report complete stacktrace on SOLR-6161", ex);
  }
  throw (Error) t;
}
t = t.getCause();
  }
  return RETURN;
} finally {
// I WOULD HAVE EXPECTED SolrRequestInfo.clearRequestInfo(); call here
  MDCLoggingContext.clear();
}

  }
{noformat}

So yes, this appears to be the same issue as SOLR-8657, but it details another 
code path that needs to be addressed.
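The fix the comment expects (pairing setRequestInfo with clearRequestInfo in the finally block) can be sketched with a minimal thread-local stand-in; RequestInfoDemo below is not Solr's SolrRequestInfo, just an illustration of the leak-and-warn pattern:

```java
// Hedged sketch: clear per-request thread-local state in finally, so the
// "Previous SolrRequestInfo was not closed" sanity check never fires.
public class RequestInfoDemo {
    static final ThreadLocal<String> INFO = new ThreadLocal<>();
    static int previousNotClosed = 0;

    static void setRequestInfo(String info) {
        // The sanity check: complain if the previous request never cleared.
        if (INFO.get() != null) previousNotClosed++;
        INFO.set(info);
    }

    static void clearRequestInfo() {
        INFO.remove();
    }

    static void handleRequest(String req) {
        setRequestInfo(req);
        try {
            // ... execute the request, write the response ...
        } finally {
            clearRequestInfo(); // the call SOLR-9244 says is missing in HttpSolrCall.call
        }
    }

    public static void main(String[] args) {
        handleRequest("q=*:*");
        handleRequest("q=id:1");
        System.out.println("warnings: " + previousNotClosed); // 0 - nothing leaked
    }
}
```

Without the clearRequestInfo call in finally, the second request on the same (pooled) thread would trip the warning, which matches the flood of log messages described above.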



> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: 

[jira] [Commented] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-23 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347349#comment-15347349
 ] 

Gary Lee commented on SOLR-9244:


SOLR-8657 appears to detail another path in which the same issue occurs - lots 
of those errors polluting the logs. In our case it seems that HttpSolrCall.call 
isn't properly clearing the SolrRequestInfo:
{noformat}
  public Action call() throws IOException {
MDCLoggingContext.reset();
MDCLoggingContext.setNode(cores);

if (cores == null) {
  sendError(503, "Server is shutting down or failed to initialize");
  return RETURN;
}

if (solrDispatchFilter.abortErrorMessage != null) {
  sendError(500, solrDispatchFilter.abortErrorMessage);
  return RETURN;
}

try {
  init();
...
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, 
solrRsp));
execute(solrRsp);
HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
Iterator<Map.Entry<String, String>> headers = solrRsp.httpHeaders();
while (headers.hasNext()) {
  Map.Entry<String, String> entry = headers.next();
  resp.addHeader(entry.getKey(), entry.getValue());
}
QueryResponseWriter responseWriter = 
core.getQueryResponseWriter(solrReq);
if (invalidStates != null) 
solrReq.getContext().put(CloudSolrClient.STATE_VERSION, invalidStates);
writeResponse(solrRsp, responseWriter, reqMethod);
  }
  return RETURN;
default: return action;
  }
} catch (Throwable ex) {
  sendError(ex);
  // walk the entire cause chain to search for an Error
  Throwable t = ex;
  while (t != null) {
if (t instanceof Error) {
  if (t != ex) {
log.error("An Error was wrapped in another exception - please 
report complete stacktrace on SOLR-6161", ex);
  }
  throw (Error) t;
}
t = t.getCause();
  }
  return RETURN;
} finally {
// I WOULD HAVE EXPECTED SolrRequestInfo.clearRequestInfo(); call here
  MDCLoggingContext.clear();
}

  }
{noformat}

So yes, this appears to be the same issue as SOLR-8657, but it details another 
code path that needs to be addressed.

> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: Gary Lee
>Priority: Minor
> Fix For: 5.3.1
>
>
> After upgrading to Solr 5.3.1, we started seeing a lot of "Previous 
> SolrRequestInfo was not closed" ERROR level messages in the logs. Upon 
> further inspection, it appears this is a sanity check and not an error that 
> needs attention. It appears that the SolrRequestInfo isn't freed in one 
> particular path (no corresponding call to SolrRequestInfo.clearRequestInfo in 
> HttpSolrCall.call), which often leads to a lot of these messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-23 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-9076.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

Committed to 7.0 and 6.2.  Thanks Mark!

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7354) MoreLikeThis incorrectly does toString on Field object

2016-06-23 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created LUCENE-7354:
---

 Summary: MoreLikeThis incorrectly does toString on Field object
 Key: LUCENE-7354
 URL: https://issues.apache.org/jira/browse/LUCENE-7354
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.5.1, 6.0.1, master (7.0)
Reporter: Grant Ingersoll
Priority: Minor


In MoreLikeThis.java, circa line 763, when calling addTermFrequencies on a 
Field object, we are incorrectly calling toString on the Field object, which 
puts the Field attributes (indexed, stored, et al.) into the String that is 
returned.

I'll put up a patch/fix shortly.
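The bug can be illustrated with a minimal stand-in Field class (the real org.apache.lucene.document.Field behaves analogously): toString() leaks the field's index options into the text, while stringValue() returns only the content that term-frequency extraction actually wants.

```java
// Hedged illustration of the LUCENE-7354 bug pattern with a stand-in class,
// not the actual Lucene Field implementation.
public class FieldToStringBug {
    static class Field {
        final String name, value;
        Field(String name, String value) { this.name = name; this.value = value; }

        /** The text content of the field - what MoreLikeThis should use. */
        public String stringValue() { return value; }

        /** Mimics Lucene's Field.toString(): options + name + value. */
        @Override public String toString() {
            return "stored,indexed,tokenized<" + name + ":" + value + ">";
        }
    }

    public static void main(String[] args) {
        Field f = new Field("body", "hello world");
        System.out.println("buggy:   " + f);               // attributes leak into the text
        System.out.println("correct: " + f.stringValue()); // just the field content
    }
}
```

Passing f (i.e. its toString()) into term counting would index tokens like "stored", "indexed", and the field name, which is exactly the symptom described above.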



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347282#comment-15347282
 ] 

ASF subversion and git services commented on SOLR-9076:
---

Commit b76f64fdc0559a7b94feb2b97c78c9c151f8f477 in lucene-solr's branch 
refs/heads/branch_6x from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b76f64f ]

SOLR-9076: Update to Hadoop 2.7.2


> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 268 - Failure!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/268/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:
CollectionStateWatcher wasn't cleared after completion

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher wasn't cleared after completion
at 
__randomizedtesting.SeedInfo.seed([59685B84B5A64F41:45394F4F2ABD07F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 13202 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-9219) Make hdfs blockcache read buffer size configurable.

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347233#comment-15347233
 ] 

ASF subversion and git services commented on SOLR-9219:
---

Commit dae777899aeba7203329d32556f688acd3c11a8f in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dae7778 ]

SOLR-9219: Make hdfs blockcache read buffer sizes configurable and improve 
cache concurrency.


> Make hdfs blockcache read buffer size configurable.
> ---
>
> Key: SOLR-9219
> URL: https://issues.apache.org/jira/browse/SOLR-9219
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9219.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347229#comment-15347229
 ] 

Tomás Fernández Löbbe commented on SOLR-9244:
-

Probably the same issue as reported in SOLR-8657. If so, we can close this one 
as duplicate and update the affected versions in SOLR-8657.

> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: Gary Lee
>Priority: Minor
> Fix For: 5.3.1
>
>
> After upgrading to Solr 5.3.1, we started seeing a lot of "Previous 
> SolrRequestInfo was not closed" ERROR level messages in the logs. Upon 
> further inspection, it appears this is a sanity check and not an error that 
> needs attention. It appears that the SolrRequestInfo isn't freed in one 
> particular path (no corresponding call to SolrRequestInfo.clearRequestInfo in 
> HttpSolrCall.call), which often leads to a lot of these messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9219) Make hdfs blockcache read buffer size configurable.

2016-06-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347212#comment-15347212
 ] 

Mark Miller commented on SOLR-9219:
---

Hang on, I have the backport ready, just have not committed yet.

> Make hdfs blockcache read buffer size configurable.
> ---
>
> Key: SOLR-9219
> URL: https://issues.apache.org/jira/browse/SOLR-9219
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9219.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347153#comment-15347153
 ] 

Andriy Rysin commented on LUCENE-7287:
--

Ok, then I'll prepare the changes as part of this ticket.

I've looked deeper into the morfologik dictionaries we have in LanguageTool. 
The Polish one has token+lemma normalized (with POS tags concatenated for each 
unique token+lemma); other dictionaries, including Ukrainian, have separate 
records, so token+lemma is not unique. I've sent an email to the morfologik 
guys, and once I get an explanation I'll update the dictionary appropriately so 
we don't have duplicates.
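The normalization described above (collapsing duplicate token+lemma records by concatenating their POS tags, as the Polish dictionary already does) can be sketched as below; the record layout and the "+" tag separator are assumptions for illustration:

```java
// Hedged sketch: merge dictionary rows of (token, lemma, posTag) so each
// token+lemma pair appears once, with POS tags concatenated.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LemmaDedup {
    /** Merge rows so each token+lemma key maps to one concatenated tag string. */
    static Map<String, String> normalize(List<String[]> rows) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (String[] row : rows) {
            String key = row[0] + "\t" + row[1];              // token + lemma
            merged.merge(key, row[2], (a, b) -> a + "+" + b); // concatenate POS tags
        }
        return merged;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
            new String[]{"коти", "кіт", "noun:pl:nom"},
            new String[]{"коти", "кіт", "noun:pl:acc"});
        System.out.println(normalize(rows)); // one record with merged tags
    }
}
```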

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347139#comment-15347139
 ] 

ASF subversion and git services commented on SOLR-9076:
---

Commit f273cb1b3ae722ee58b289653ad8a3bc5066838f in lucene-solr's branch 
refs/heads/master from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f273cb1 ]

SOLR-9076: Update to Hadoop 2.7.2


> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-23 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-9076:
-
Attachment: SOLR-9076.patch

Here's a patch that passed the tests and precommit. The only change from the 
previous patch is that it removes the org.htrace versions (which errored out in 
precommit because they aren't used anymore) and removes the org.htrace 
license/notice/sha1.

I will commit this shortly.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5930 - Failure!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5930/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([B53DB6B02861416F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11927 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrVersionReplicationTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.CdcrVersionReplicationTest_B53DB6B02861416F-001\init-core-data-001
   [junit4]   2> 1909436 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[B53DB6B02861416F]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1909436 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[B53DB6B02861416F]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1909439 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[B53DB6B02861416F]) [ 
   ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1909439 INFO  (Thread-5604) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1909439 INFO  (Thread-5604) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1909539 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[B53DB6B02861416F]) [ 
   ] o.a.s.c.ZkTestServer start zk server on port:62346
   [junit4]   2> 1909539 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[B53DB6B02861416F]) [ 
   ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1909540 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[B53DB6B02861416F]) [ 
   ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1909544 INFO  (zkCallback-2841-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@11d32d5c 
name:ZooKeeperConnection Watcher:127.0.0.1:62346 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 

Re: VOTE: Apache Solr Ref Guide for 6.1

2016-06-23 Thread Tommaso Teofili
+1

Tommaso

On Thu, Jun 23, 2016 at 5:35 PM Anshum Gupta <
ans...@anshumgupta.net> wrote:

> +1
>
> On Tue, Jun 21, 2016 at 11:19 AM, Cassandra Targett 
> wrote:
>
>> Please VOTE to release the Apache Solr Ref Guide for 6.1.
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.1-RC0/
>>
>> $ more /apache-solr-ref-guide-6.1.pdf.sha1
>> 5929b03039e99644bc4ef23b37088b343e2ff0c8  apache-solr-ref-guide-6.1.pdf
>>
>> Here's my +1.
>>
>> Thanks,
>> Cassandra
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Anshum Gupta
>


[jira] [Updated] (SOLR-9246) Errors for Streaming Expressions using JDBC (Oracle) stream source

2016-06-23 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9246:
--
Attachment: SOLR-9246.patch

When an unknown java class name is encountered, an exception will now be thrown 
with the message 
{code}
Unable to determine the valueSelector for column '' (col #) of 
java class '' and type ''
{code}

For example
{code}
Unable to determine the valueSelector for column 'UNSP' (col #2) of java class 
'[B' and type 'BINARY'
{code}

The odd-looking java class name appears because there is no dedicated java 
class for a BINARY type ('[B' is the JVM's internal name for byte[]).

Due to error handling within the JDBCStream, this exception will be caught and 
wrapped as the cause of an IOException. The full exception trace will look like 
this
{code}
java.io.IOException: Failed to generate value selectors for sqlQuery 'select 
ID,UNSP from UNSUPPORTED_COLUMNS' against JDBC connection 'jdbc:hsqldb:mem:.'
  at   
Caused by: java.sql.SQLException: Unable to determine the valueSelector for 
column 'UNSP' (col #2) of java class '[B' and type 'BINARY'
  at 
{code}
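The failure mode and fix described above follow a common pattern: map each JDBC column type to a value selector, and fail with an informative message instead of leaving a null that later causes an NPE. A minimal sketch (this is illustrative, not the actual JDBCStream code; the `selectorFor` helper and the supported-type list are assumptions):

```java
import java.sql.SQLException;
import java.sql.Types;
import java.util.function.Function;

// Hypothetical sketch, not the actual JDBCStream code: choose a value
// selector based on the JDBC column type, and throw an informative
// SQLException for unsupported types rather than returning null.
public class ValueSelectorSketch {

    // Returns a reader for supported types; throws for unsupported ones.
    static Function<Object, Object> selectorFor(int jdbcType, String columnName,
                                                int columnIndex, String className,
                                                String typeName) throws SQLException {
        switch (jdbcType) {
            case Types.VARCHAR:
            case Types.NUMERIC:
            case Types.BIGINT:
                return v -> v; // pass-through, for illustration only
            default:
                // Same message format as the example in the comment above.
                throw new SQLException(String.format(
                    "Unable to determine the valueSelector for column '%s' (col #%d) of "
                    + "java class '%s' and type '%s'",
                    columnName, columnIndex, className, typeName));
        }
    }
}
```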

> Errors for Streaming Expressions using JDBC (Oracle) stream source
> --
>
> Key: SOLR-9246
> URL: https://issues.apache.org/jira/browse/SOLR-9246
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0.1
> Environment: Windows 7
>Reporter: Hui Liu
> Attachments: Re Errors for Streaming Expressions using JDBC (Oracle) 
> stream source.txt, SOLR-9246.patch
>
>
> I have Solr 6.0.0 installed on my PC (Windows 7). I was experimenting with 
> ‘Streaming Expressions’ using Oracle JDBC as the stream source, but got 
> 'null pointer' errors. Below are the details on how to reproduce this error:
> 1. create a collection 'document6' which only contains long and string data 
> types, 
> schema.xml for Solr collection 'document6': (newly created empty collections 
> with 2 shards) 
> ===
> 
>   
>  
>  
>   docValues="true" />
>   precisionStep="0" positionIncrementGap="0"/>
>  
> 
>
> 
>   
>omitNorms="true"/>
>
>
>   multiValued="false"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>
>   document_id
>   document_id
> 
> 2. create a new Oracle (version 11.2.0.3) table 'document6' that only contains 
> columns whose jdbc type is long and string, 
> create table document6 
> (document_id number(12) not null,
>  sender_msg_dest varchar2(256),
>  recip_msg_dest  varchar2(256),
>  document_type   varchar2(20),
>  document_keyvarchar2(100));
> loaded 9 records;
> Oracle table 'document6': (newly created Oracle table with 9 records) 
> =
> QA_DOCREP@qlgdb1 > desc document6
>  Name  Null?Type
>  -  
> 
>  DOCUMENT_ID   NOT NULL NUMBER(12)
>  SENDER_MSG_DESTVARCHAR2(256)
>  RECIP_MSG_DEST VARCHAR2(256)
>  DOCUMENT_TYPE  VARCHAR2(20)
>  DOCUMENT_KEY   VARCHAR2(100)
> 3. tried this jdbc streaming expression in my browser, getting the error 
> stack (see below)
> http://localhost:8988/solr/document6/stream?expr=jdbc(connection="jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql="SELECT
>  document_id,sender_msg_dest,recip_msg_dest,document_type,document_key FROM 
> document6",sort="document_id asc",driver="oracle.jdbc.driver.OracleDriver")
> errors in solr.log
> ==
> 2016-06-23 14:07:02.833 INFO  (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.S.Request 
> [document6_shard2_replica1]  webapp=/solr path=/stream 
> params={expr=jdbc(connection%3D"jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql%3D"SELECT+document_id,sender_msg_dest,recip_msg_dest,document_type,document_key+FROM+document6",sort%3D"document_id+asc",driver%3D"oracle.jdbc.driver.OracleDriver")}
>  status=0 QTime=1
> 2016-06-23 14:07:05.282 ERROR (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.read(JDBCStream.java:305)
>   at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.read(ExceptionStream.java:64)
>   at 
> org.apache.solr.handler.StreamHandler$TimerStream.read(StreamHandler.java:374)

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1050 - Still Failing

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1050/

11 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=224483, name=collection4, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=224483, name=collection4, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33329: collection already exists: 
awholynewstresscollection_collection4_2
at __randomizedtesting.SeedInfo.seed([4B4D9AC6DBEDE1B0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:988)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [RawDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [RawDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([4B4D9AC6DBEDE1B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Commented] (SOLR-9246) Errors for Streaming Expressions using JDBC (Oracle) stream source

2016-06-23 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346979#comment-15346979
 ] 

Dennis Gove commented on SOLR-9246:
---

I'm gonna go ahead and post a patch with the exception.

> Errors for Streaming Expressions using JDBC (Oracle) stream source
> --
>
> Key: SOLR-9246
> URL: https://issues.apache.org/jira/browse/SOLR-9246
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0.1
> Environment: Windows 7
>Reporter: Hui Liu
> Attachments: Re Errors for Streaming Expressions using JDBC (Oracle) 
> stream source.txt
>
>
> I have Solr 6.0.0 installed on my PC (Windows 7). I was experimenting with 
> ‘Streaming Expressions’ using Oracle JDBC as the stream source, but got 
> 'null pointer' errors. Below are the details on how to reproduce this error:
> 1. create a collection 'document6' which only contains long and string data 
> types, 
> schema.xml for Solr collection 'document6': (newly created empty collections 
> with 2 shards) 
> ===
> 
>   
>  
>  
>   docValues="true" />
>   precisionStep="0" positionIncrementGap="0"/>
>  
> 
>
> 
>   
>omitNorms="true"/>
>
>
>   multiValued="false"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>
>   document_id
>   document_id
> 
> 2. create a new Oracle (version 11.2.0.3) table 'document6' that only contains 
> columns whose jdbc type is long and string, 
> create table document6 
> (document_id number(12) not null,
>  sender_msg_dest varchar2(256),
>  recip_msg_dest  varchar2(256),
>  document_type   varchar2(20),
>  document_keyvarchar2(100));
> loaded 9 records;
> Oracle table 'document6': (newly created Oracle table with 9 records) 
> =
> QA_DOCREP@qlgdb1 > desc document6
>  Name  Null?Type
>  -  
> 
>  DOCUMENT_ID   NOT NULL NUMBER(12)
>  SENDER_MSG_DESTVARCHAR2(256)
>  RECIP_MSG_DEST VARCHAR2(256)
>  DOCUMENT_TYPE  VARCHAR2(20)
>  DOCUMENT_KEY   VARCHAR2(100)
> 3. tried this jdbc streaming expression in my browser, getting the error 
> stack (see below)
> http://localhost:8988/solr/document6/stream?expr=jdbc(connection="jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql="SELECT
>  document_id,sender_msg_dest,recip_msg_dest,document_type,document_key FROM 
> document6",sort="document_id asc",driver="oracle.jdbc.driver.OracleDriver")
> errors in solr.log
> ==
> 2016-06-23 14:07:02.833 INFO  (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.S.Request 
> [document6_shard2_replica1]  webapp=/solr path=/stream 
> params={expr=jdbc(connection%3D"jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql%3D"SELECT+document_id,sender_msg_dest,recip_msg_dest,document_type,document_key+FROM+document6",sort%3D"document_id+asc",driver%3D"oracle.jdbc.driver.OracleDriver")}
>  status=0 QTime=1
> 2016-06-23 14:07:05.282 ERROR (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.read(JDBCStream.java:305)
>   at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.read(ExceptionStream.java:64)
>   at 
> org.apache.solr.handler.StreamHandler$TimerStream.read(StreamHandler.java:374)
>   at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:305)
>   at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
>   at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
>   at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
>  

[jira] [Commented] (SOLR-9247) SolrCore Initialization Failures - Fail on create a new core

2016-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346968#comment-15346968
 ] 

Fabrício Pereira commented on SOLR-9247:


I installed (upgraded) Solr with the command:
sudo /opt/programs/solr/bin/install_solr_service.sh solr-6.1.0.tgz -f -i 
/opt/programs -u fabricio

Afterwards I changed the owner of the Solr installation paths:
chown -hR fabricio:fabricio /opt/programs/solr
chown -hR fabricio:fabricio /opt/programs/solr-6.1.0

> SolrCore Initialization Failures - Fail on create a new core
> 
>
> Key: SOLR-9247
> URL: https://issues.apache.org/jira/browse/SOLR-9247
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 6.1.1
> Environment: Solr 6.1.0, Ubuntu 14.04, Java 1.8.0_91
>Reporter: Fabrício Pereira
>  Labels: core, create
>
> I tried to create a new core with the info below.
> name: tweets
> instanceDir: tweets
> dataDir: data
> config: solrconfig.xml
> schema: schema.xml
> Then I get the error on top view:
> Error CREATEing SolrCore 'tweets': Unable to create core [tweets] Caused by: 
> Can't find resource 'solrconfig.xml' in classpath or 
> '/opt/programs/solr/server/solr/tweets'
> SolrCore Initialization Failures
> tweets: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load conf for core tweets: Error loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> Here is my log tail:
> 2016-06-23 18:13:18.796 INFO  (qtp225493257-20) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={indexInfo=false=json&_=1466705604765} status=0 QTime=0
> 2016-06-23 18:13:18.797 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores params={wt=json&_=1466705604765} 
> status=0 QTime=0
> 2016-06-23 18:13:18.800 INFO  (qtp225493257-18) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/info/system params={wt=json&_=1466705604767} 
> status=0 QTime=2
> 2016-06-23 18:13:20.257 INFO  (qtp225493257-34) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={indexInfo=false=json&_=1466705606228} status=0 QTime=0
> 2016-06-23 18:13:20.258 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores params={wt=json&_=1466705606228} 
> status=0 QTime=0
> 2016-06-23 18:13:20.261 INFO  (qtp225493257-21) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/info/system params={wt=json&_=1466705606229} 
> status=0 QTime=3
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.h.a.CoreAdminOperation core create command 
> schema=schema.xml=data=tweets=CREATE=solrconfig.xml=tweets=json&_=1466705606228
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] o.a.s.c.CoreDescriptor 
> Created CoreDescriptor: {name=tweets, config=solrconfig.xml, 
> loadOnStartup=true, schema=schema.xml, 
> configSetProperties=configsetprops.json, transient=false, dataDir=data}
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
> '/opt/programs/solr/server/solr/tweets'
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader using system property solr.solr.home: 
> /opt/programs/solr/server/solr
> 2016-06-23 18:13:30.469 ERROR (qtp225493257-30) [   ] o.a.s.c.CoreContainer 
> Error creating core [tweets]: Could not load conf for core tweets: Error 
> loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> org.apache.solr.common.SolrException: Could not load conf for core tweets: 
> Error loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:749)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation$1.call(CoreAdminOperation.java:119)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:367)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:663)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
>  

[jira] [Comment Edited] (SOLR-9247) SolrCore Initialization Failures - Fail on create a new core

2016-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346968#comment-15346968
 ] 

Fabrício Pereira edited comment on SOLR-9247 at 6/23/16 6:49 PM:
-

I installed (upgraded) Solr with the command:
sudo /opt/programs/solr/bin/install_solr_service.sh solr-6.1.0.tgz -f -i 
/opt/programs -u fabricio

Afterwards I changed the owner of the Solr installation paths:
chown -hR fabricio:fabricio /opt/programs/solr
chown -hR fabricio:fabricio /opt/programs/solr-6.1.0


was (Author: fabriciorsf):
I install/upgrade) the Solr with the command:
sudo /opt/programs/solr/bin/install_solr_service.sh solr-6.1.0.tgz -f -i 
/opt/programs -u fabricio

And after I had change the user from instalation path solr:
chown -hR fabricio:fabricio /opt/programs/solr
chown -hR fabricio:fabricio /opt/programs/solr-6.1.0

> SolrCore Initialization Failures - Fail on create a new core
> 
>
> Key: SOLR-9247
> URL: https://issues.apache.org/jira/browse/SOLR-9247
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 6.1.1
> Environment: Solr 6.1.0, Ubuntu 14.04, Java 1.8.0_91
>Reporter: Fabrício Pereira
>  Labels: core, create
>
> I tried to create a new core with the info below.
> name: tweets
> instanceDir: tweets
> dataDir: data
> config: solrconfig.xml
> schema: schema.xml
> Then I get the error on top view:
> Error CREATEing SolrCore 'tweets': Unable to create core [tweets] Caused by: 
> Can't find resource 'solrconfig.xml' in classpath or 
> '/opt/programs/solr/server/solr/tweets'
> SolrCore Initialization Failures
> tweets: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Could not load conf for core tweets: Error loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> Here is my log tail:
> 2016-06-23 18:13:18.796 INFO  (qtp225493257-20) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={indexInfo=false=json&_=1466705604765} status=0 QTime=0
> 2016-06-23 18:13:18.797 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores params={wt=json&_=1466705604765} 
> status=0 QTime=0
> 2016-06-23 18:13:18.800 INFO  (qtp225493257-18) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/info/system params={wt=json&_=1466705604767} 
> status=0 QTime=2
> 2016-06-23 18:13:20.257 INFO  (qtp225493257-34) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores 
> params={indexInfo=false=json&_=1466705606228} status=0 QTime=0
> 2016-06-23 18:13:20.258 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/cores params={wt=json&_=1466705606228} 
> status=0 QTime=0
> 2016-06-23 18:13:20.261 INFO  (qtp225493257-21) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/info/system params={wt=json&_=1466705606229} 
> status=0 QTime=3
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.h.a.CoreAdminOperation core create command 
> schema=schema.xml=data=tweets=CREATE=solrconfig.xml=tweets=json&_=1466705606228
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] o.a.s.c.CoreDescriptor 
> Created CoreDescriptor: {name=tweets, config=solrconfig.xml, 
> loadOnStartup=true, schema=schema.xml, 
> configSetProperties=configsetprops.json, transient=false, dataDir=data}
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
> '/opt/programs/solr/server/solr/tweets'
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
> 2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
> o.a.s.c.SolrResourceLoader using system property solr.solr.home: 
> /opt/programs/solr/server/solr
> 2016-06-23 18:13:30.469 ERROR (qtp225493257-30) [   ] o.a.s.c.CoreContainer 
> Error creating core [tweets]: Could not load conf for core tweets: Error 
> loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> org.apache.solr.common.SolrException: Could not load conf for core tweets: 
> Error loading solr config from 
> /opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
> at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:749)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation$1.call(CoreAdminOperation.java:119)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:367)
> at 
> 

[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346949#comment-15346949
 ] 

Ahmet Arslan commented on LUCENE-7287:
--

This is a new feature that has never been released, so a new ticket may not be needed.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between Ukrainian word forms and their lemmas. Some tests and docs come 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9207) PeerSync recovery failes if number of updates requested is high

2016-06-23 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346944#comment-15346944
 ] 

Pushkar Raste commented on SOLR-9207:
-

Here is a high-level description.

PeerSync currently computes the versions the recovering node is missing and 
then sends all the version numbers to a replica to get the corresponding 
updates. When a node under recovery is missing too many updates, the payload 
of {{getUpdates}} grows above 2MB and jetty rejects the request. The problem 
can be solved with one of the following techniques:

# Increase the jetty payload limit. We would still be sending a lot of data 
over the network, which might not be needed.
# Stream versions to the replica while asking for updates. 
# Request versions in chunks of about 90K versions at a time.
# gzip the versions, and unzip them on the other side.
# Ask for versions using version ranges instead of sending individual versions.

Approaches 1-3 require sending a lot of data over the wire. 
Approach #3 also requires making multiple calls. Additionally, #3 might not be 
feasible considering how the current code works by submitting requests to 
{{shardHandler}} and calling {{handleResponse}}.
#4 may work, but looks a little inelegant. 

Hence I settled on approach #5 (suggested by Ramkumar). Here is how it works: 
* Let's say the replica has versions [1, 2, 3, 4, 5, 6] and the leader has 
versions [1, 2, 3, 4, 5, 6, 10, -11, 12, 13, 15, 18]
* While recovering using the {{PeerSync}} strategy, the replica computes that 
the range it is missing is {{10...18}}
* The replica now requests versions by specifying the range {{10...18}} 
instead of sending all the individual versions (namely 10, -11, 12, 13, 15, 18)
* I have made the use of version ranges for PeerSync configurable by 
introducing the following configuration section:
{code}
  
${solr.peerSync.useRangeVersions:false}
  
{code}
* Furthermore, this is backwards compatible: a recovering node will use version 
ranges only if the node it asks for updates can process version ranges
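The range computation in the example above can be sketched as follows (a hypothetical illustration, not the actual PeerSync patch; the real patch's encoding and multi-range handling may differ):

```java
// Hypothetical sketch of the range idea above (not the actual PeerSync
// patch): instead of sending every missing version number, the recovering
// node sends the span it is missing. Deletes are recorded as negative
// versions, so absolute values are compared.
public class VersionRangesSketch {

    // Collapse the missing versions into a single "lo...hi" range string.
    // (Only the single-range case from the example above is shown here.)
    static String missingRange(long[] missingVersions) {
        long lo = Long.MAX_VALUE, hi = Long.MIN_VALUE;
        for (long v : missingVersions) {
            long abs = Math.abs(v);
            lo = Math.min(lo, abs);
            hi = Math.max(hi, abs);
        }
        return lo + "..." + hi;
    }

    public static void main(String[] args) {
        // Versions the leader has but the replica does not, per the example:
        System.out.println(missingRange(new long[]{10, -11, 12, 13, 15, 18})); // 10...18
    }
}
```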

> PeerSync recovery failes if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9207.patch
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This happens if we update solrconfig to retain more {{tlogs}} to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error:
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> 
> 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB t name="code">400
> 
> {code}
> We arrived at ~99K with the following math:
> * max_version_number = Long.MAX_VALUE = 9223372036854775807  
> * bytes per version number =  20 (on the wire as POST request sends version 
> number as string)
> * additional bytes for separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of about 90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
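The ~99K estimate quoted above can be checked with a toy calculation (mirroring the issue's arithmetic; this is not Solr code):

```java
// Toy calculation mirroring the ~99K estimate above (not Solr code):
// a max-sized version number is ~20 bytes as a decimal string, plus one
// byte for the ',' separator, against jetty's 2048 KB default upload limit.
public class PayloadMathSketch {
    public static void main(String[] args) {
        int digitsInMaxLong = Long.toString(Long.MAX_VALUE).length(); // 19 digits
        int bytesPerVersion = 20;                  // estimate used in the issue
        int separatorBytes = 1;                    // the ',' between versions
        long uploadLimitBytes = 2L * 1024 * 1024;  // 2048 KB
        long maxVersionsPerRequest = uploadLimitBytes / (bytesPerVersion + separatorBytes);
        System.out.println(maxVersionsPerRequest); // 99864
    }
}
```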






[jira] [Commented] (SOLR-9219) Make hdfs blockcache read buffer size configurable.

2016-06-23 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346918#comment-15346918
 ] 

Varun Thacker commented on SOLR-9219:
-

Hi Mark,

Is this only meant for master? SOLR-7374 ran into a merge issue in branch_6x. 

If you plan on backporting this issue to branch_6x then I can hold off on 
backporting SOLR-7374.
Otherwise I can just use the other constructor provided by HdfsDirectory and 
backport that.



> Make hdfs blockcache read buffer size configurable.
> ---
>
> Key: SOLR-9219
> URL: https://issues.apache.org/jira/browse/SOLR-9219
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9219.patch
>
>







[jira] [Created] (SOLR-9247) SolrCore Initialization Failures - Fail on create a new core

2016-06-23 Thread JIRA
Fabrício Pereira created SOLR-9247:
--

 Summary: SolrCore Initialization Failures - Fail on create a new 
core
 Key: SOLR-9247
 URL: https://issues.apache.org/jira/browse/SOLR-9247
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 6.1.1
 Environment: Solr 6.1.0, Ubuntu 14.04, Java 1.8.0_91
Reporter: Fabrício Pereira


I tried to create a new core with the info below.
name: tweets
instanceDir: tweets
dataDir: data
config: solrconfig.xml
schema: schema.xml

Then I get the following error at the top of the view:
Error CREATEing SolrCore 'tweets': Unable to create core [tweets] Caused by: 
Can't find resource 'solrconfig.xml' in classpath or 
'/opt/programs/solr/server/solr/tweets'

SolrCore Initialization Failures
tweets: 
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Could not load conf for core tweets: Error loading solr config from 
/opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
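Standalone Solr resolves solrconfig.xml from the classpath or from <instanceDir>/conf, so a CREATE like the one above fails when that directory is empty. A minimal pre-flight check might look like this (the helper name is made up for illustration; paths follow the report):

```python
# Sanity-check the layout CoreAdmin CREATE expects: standalone Solr looks
# for solrconfig.xml (and, with a classic schema, schema.xml) under
# <solr_home>/<instanceDir>/conf before it will create the core.
from pathlib import Path

def missing_core_files(solr_home, instance_dir,
                       required=("solrconfig.xml", "schema.xml")):
    """Return the required conf files that are absent, if any."""
    conf = Path(solr_home) / instance_dir / "conf"
    return [name for name in required if not (conf / name).is_file()]

# e.g. missing_core_files("/opt/programs/solr/server/solr", "tweets")
# lists what still needs to be copied in (say, from a configset)
# before issuing the CREATE command.
```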

Here is the tail of my log:
2016-06-23 18:13:18.796 INFO  (qtp225493257-20) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={indexInfo=false&wt=json&_=1466705604765} status=0 QTime=0
2016-06-23 18:13:18.797 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores params={wt=json&_=1466705604765} status=0 
QTime=0
2016-06-23 18:13:18.800 INFO  (qtp225493257-18) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/info/system params={wt=json&_=1466705604767} 
status=0 QTime=2
2016-06-23 18:13:20.257 INFO  (qtp225493257-34) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={indexInfo=false&wt=json&_=1466705606228} status=0 QTime=0
2016-06-23 18:13:20.258 INFO  (qtp225493257-17) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores params={wt=json&_=1466705606228} status=0 
QTime=0
2016-06-23 18:13:20.261 INFO  (qtp225493257-21) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/info/system params={wt=json&_=1466705606229} 
status=0 QTime=3
2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
o.a.s.h.a.CoreAdminOperation core create command 
schema=schema.xml&dataDir=data&name=tweets&action=CREATE&config=solrconfig.xml&instanceDir=tweets&wt=json&_=1466705606228
2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] o.a.s.c.CoreDescriptor 
Created CoreDescriptor: {name=tweets, config=solrconfig.xml, 
loadOnStartup=true, schema=schema.xml, configSetProperties=configsetprops.json, 
transient=false, dataDir=data}
2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'/opt/programs/solr/server/solr/tweets'
2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
2016-06-23 18:13:30.469 INFO  (qtp225493257-30) [   ] 
o.a.s.c.SolrResourceLoader using system property solr.solr.home: 
/opt/programs/solr/server/solr
2016-06-23 18:13:30.469 ERROR (qtp225493257-30) [   ] o.a.s.c.CoreContainer 
Error creating core [tweets]: Could not load conf for core tweets: Error 
loading solr config from 
/opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
org.apache.solr.common.SolrException: Could not load conf for core tweets: 
Error loading solr config from 
/opt/programs/solr/server/solr/tweets/conf/solrconfig.xml
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:749)
at 
org.apache.solr.handler.admin.CoreAdminOperation$1.call(CoreAdminOperation.java:119)
at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:367)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:663)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 

[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346907#comment-15346907
 ] 

ASF subversion and git services commented on SOLR-7374:
---

Commit 07be2c42ba24fea7c4e84836aa4c3f8d059f71d6 in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07be2c4 ]

SOLR-7374: Core level backup/restore now supports specifying a directory 
implementation


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Assigned] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-23 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-7374:
---

Assignee: Varun Thacker  (was: Mark Miller)

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 217 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/217/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([AFF332B71BBD8DC0:F18E9049BFA0C43D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState(TestLeaderInitiatedRecoveryThread.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346878#comment-15346878
 ] 

Andriy Rysin commented on LUCENE-7287:
--

Hmm, that does not look right. Yes, we can either use 
RemoveDuplicatesTokenFilterFactory (we'll have to add that to the 
UkrainianMorfologikAnalyzer too) or I need to rebuild the dictionary to remove 
the duplicates (probably the preferred way).
The problem is that the dictionary is currently a POS dictionary, so there may 
be duplicate lemma records as long as the POS tags are different.
I am thinking of filing a new jira issue for that and will provide a pull 
request; does that make sense?

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346875#comment-15346875
 ] 

Ahmet Arslan commented on LUCENE-7287:
--

Hi, 
Multiple tokens are OK, but multiple identical tokens look weird, no?
Have you checked the screenshot that includes 
RemoveDuplicatesTokenFilterFactory (RDTF)?

bq. Shall I create mappings_uk.txt so we can use it in solr?

Let's ask Michael. 
Either a separate file, or we can just recommend using the mapping char filter 
with the recommended mappings.
Maybe we can place the uk_mappings.txt file under 
https://github.com/apache/lucene-solr/tree/master/solr/server/solr/configsets/sample_techproducts_configs/conf/lang

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346862#comment-15346862
 ] 

Andriy Rysin commented on LUCENE-7287:
--

Thanks Ahmet!
Shall I create mappings_uk.txt so we can use it in solr?
As for the multiple tokens, MorfologikFilter produces lemmas, so (as I 
understand it) it may have multiple tokens in the output for a single token in 
the input.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346857#comment-15346857
 ] 

Ahmet Arslan commented on LUCENE-7287:
--

Please see the screenshots in the attachments section at the beginning of the 
page and let me know what you think.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Updated] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-7287:
-
Attachment: Screen Shot 2016-06-23 at 8.41.28 PM.png

Here is a screenshot of the analysis admin page, with 
RemoveDuplicatesTokenFilter added.
{code:xml}
<!-- field type definition stripped by the mail archive; the analysis chain
     ends with solr.RemoveDuplicatesTokenFilterFactory -->
{code}

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Updated] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-7287:
-
Attachment: Screen Shot 2016-06-23 at 8.23.01 PM.png

{code:xml}
<!-- field type definition stripped by the mail archive -->
{code}

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-23 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346820#comment-15346820
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~varunthacker] 

bq. Do you plan on tackling SOLR-9242 as well?

Yup. I already have a patch ready for submission. Just waiting for this patch 
to get committed to trunk.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346816#comment-15346816
 ] 

Ahmet Arslan commented on LUCENE-7287:
--

Hi,

I was able to run the analyzer successfully without the mapping char filter, 
because the character mappings are hardcoded.
I am attaching an analysis screenshot. However, it looks like we need a 
remove-duplicates token filter at the end: the Morfologik filter injects 
multiple tokens at the same position.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-23 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346795#comment-15346795
 ] 

Varun Thacker commented on SOLR-7374:
-

I've got precommit to pass. Running the test suite one more time and then 
committing it.

I pondered marking BackupRepository as experimental in case we need to iron it 
out, but decided against it. 

Do you plan on tackling SOLR-9242 as well?

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-23 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346787#comment-15346787
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~markrmil...@gmail.com] Thanks for the insight. [~varunthacker] I am ok with 
the current test configuration. Let me know if anything is needed from my side.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Resolved] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7352.
--
   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

> TestSimpleExplanationsWithFillerDocs failures
> -
>
> Key: LUCENE-7352
> URL: https://issues.apache.org/jira/browse/LUCENE-7352
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7352.patch
>
>
> Policeman Jenkins found reproducible {{testDMQ8()}} and {{testDMQ9()}} 
> failures on master 
> [http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17037/]:
> {noformat}
> Checking out Revision ece9d85cbea962fd7d327010f1ba184cefdfa8ed 
> (refs/remotes/origin/master)
> [...]
> [junit4] Suite: org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleExplanationsWithFillerDocs -Dtests.method=testDMQ8 
> -Dtests.seed=882B619046E7216B -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=fo -Dtests.timezone=Asia/Ashgabat -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J2 | TestSimpleExplanationsWithFillerDocs.testDMQ8 
> <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> (+((field:yy (field:w5)^100.0) | (field:xx)^10.0)~0.5 -extra:extra) 
> NEVER:MATCH: score(doc=713)=-1.9956312E-6 != explanationScore=-3.9912625E-6 
> Explanation: -3.9912625E-6 = sum of:
>[junit4]>   -3.9912625E-6 = weight(field:w5 in 713) 
> [RandomSimilarity], result of:
>[junit4]> -3.9912625E-6 = score(IBSimilarity, doc=713, freq=1.0), 
> computed from:
>[junit4]>   100.0 = boost
>[junit4]>   0.0 = NormalizationH2, computed from: 
>[junit4]> 1.0 = tf
>[junit4]> 5.502638 = avgFieldLength
>[junit4]> 5.6493154E19 = len
>[junit4]>   0.2533109 = LambdaTTF, computed from: 
>[junit4]> 2256.0 = totalTermFreq
>[junit4]> 8909.0 = numberOfDocuments
>[junit4]>   -3.9912624E-8 = DistributionSPL
>[junit4]>  expected:<-1.9956312E-6> but was:<-3.9912625E-6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([882B619046E7216B:D4118BC551735775]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:50)
>[junit4]>  at junit.framework.Assert.failNotEquals(Assert.java:287)
>[junit4]>  at junit.framework.Assert.assertEquals(Assert.java:120)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:501)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:196)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:183)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>[junit4]>  at 
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>[junit4]>  at 
> org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669)
>[junit4]>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
>[junit4]>  at 
> org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
>[junit4]>  at 
> 

[jira] [Commented] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346768#comment-15346768
 ] 

ASF subversion and git services commented on LUCENE-7352:
-

Commit 1e4d51f4085664ef073ecac18dd572b0a9a02757 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e4d51f ]

LUCENE-7352: Fix CheckHits for DisjunctionMax queries that generate negative 
scores.
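For context, the kind of comparison that trips here can be sketched as follows. This is a hypothetical illustration of a relative-tolerance score check, not the actual CheckHits code; the class and method names are made up:

```java
public class ScoreCheck {
    // Compare a collected score against its Explanation value using a
    // relative tolerance. Math.abs keeps the bound symmetric, so the check
    // behaves the same for negative scores such as -1.9956312E-6.
    static boolean explanationMatches(float score, float explanationScore) {
        final float epsilon = 1e-4f;
        float scale = Math.max(Math.abs(score), Math.abs(explanationScore));
        return Math.abs(score - explanationScore) <= epsilon * scale;
    }

    public static void main(String[] args) {
        // The two values from the failure above differ by a factor of two,
        // which no reasonable tolerance should paper over:
        System.out.println(explanationMatches(-1.9956312E-6f, -3.9912625E-6f)); // false
        // A tiny rounding difference passes:
        System.out.println(explanationMatches(-1.9956312E-6f, -1.9956311E-6f)); // true
    }
}
```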



[jira] [Commented] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346767#comment-15346767
 ] 

ASF subversion and git services commented on LUCENE-7352:
-

Commit c5defadd70a9f91bc31012b7c31c39f16d883849 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5defad ]

LUCENE-7352: Fix CheckHits for DisjunctionMax queries that generate negative 
scores.



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 667 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/667/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not find collection:.system

Stack Trace:
java.lang.AssertionError: Could not find collection:.system
at 
__randomizedtesting.SeedInfo.seed([EAA29D29CB38C1CE:32EFB07E3CE5646E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:154)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:134)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-9246) Errors for Streaming Expressions using JDBC (Oracle) stream source

2016-06-23 Thread Hui Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Liu updated SOLR-9246:
--
Attachment: Re Errors for Streaming Expressions using JDBC (Oracle) stream 
source.txt

Attach the original email threads.

> Errors for Streaming Expressions using JDBC (Oracle) stream source
> --
>
> Key: SOLR-9246
> URL: https://issues.apache.org/jira/browse/SOLR-9246
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0.1
> Environment: Windows 7
>Reporter: Hui Liu
> Attachments: Re Errors for Streaming Expressions using JDBC (Oracle) 
> stream source.txt
>
>
> I have Solr 6.0.0 installed on my PC (Windows 7). I was experimenting with 
> ‘Streaming Expressions’ using Oracle JDBC as the 
> stream source, but got 'null pointer' errors. Below are the details on how to 
> reproduce this error:
> 1. create a collection 'document6' which only contains long and string data 
> types, 
> schema.xml for Solr collection 'document6': (newly created empty collections 
> with 2 shards) 
> ===
> 
>   
>  
>  
>   docValues="true" />
>   precisionStep="0" positionIncrementGap="0"/>
>  
> 
>
> 
>   
>omitNorms="true"/>
>
>
>   multiValued="false"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>   docValues="true"/>
>
>   document_id
>   document_id
> 
> 2. create a new Oracle (version 11.2.0.3) table 'document6' that only contains 
> columns whose JDBC types are long and string, 
> create table document6 
> (document_id number(12) not null,
>  sender_msg_dest varchar2(256),
>  recip_msg_dest  varchar2(256),
>  document_type   varchar2(20),
>  document_key    varchar2(100));
> loaded 9 records;
> Oracle table 'document6': (newly created Oracle table with 9 records) 
> =
> QA_DOCREP@qlgdb1 > desc document6
>  Name             Null?    Type
>  ---------------- -------- --------------
>  DOCUMENT_ID      NOT NULL NUMBER(12)
>  SENDER_MSG_DEST           VARCHAR2(256)
>  RECIP_MSG_DEST            VARCHAR2(256)
>  DOCUMENT_TYPE             VARCHAR2(20)
>  DOCUMENT_KEY              VARCHAR2(100)
> 3. tried this jdbc streaming expression in my browser, getting the error 
> stack (see below)
> http://localhost:8988/solr/document6/stream?expr=jdbc(connection="jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql="SELECT
>  document_id,sender_msg_dest,recip_msg_dest,document_type,document_key FROM 
> document6",sort="document_id asc",driver="oracle.jdbc.driver.OracleDriver")
> errors in solr.log
> ==
> 2016-06-23 14:07:02.833 INFO  (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.S.Request 
> [document6_shard2_replica1]  webapp=/solr path=/stream 
> params={expr=jdbc(connection%3D"jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql%3D"SELECT+document_id,sender_msg_dest,recip_msg_dest,document_type,document_key+FROM+document6",sort%3D"document_id+asc",driver%3D"oracle.jdbc.driver.OracleDriver")}
>  status=0 QTime=1
> 2016-06-23 14:07:05.282 ERROR (qtp1389647288-139) [c:document6 s:shard2 
> r:core_node1 x:document6_shard2_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.read(JDBCStream.java:305)
>   at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.read(ExceptionStream.java:64)
>   at 
> org.apache.solr.handler.StreamHandler$TimerStream.read(StreamHandler.java:374)
>   at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:305)
>   at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
>   at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
>   at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
>   at 

[jira] [Created] (SOLR-9246) Errors for Streaming Expressions using JDBC (Oracle) stream source

2016-06-23 Thread Hui Liu (JIRA)
Hui Liu created SOLR-9246:
-

 Summary: Errors for Streaming Expressions using JDBC (Oracle) 
stream source
 Key: SOLR-9246
 URL: https://issues.apache.org/jira/browse/SOLR-9246
 Project: Solr
  Issue Type: Bug
Affects Versions: 6.0.1
 Environment: Windows 7
Reporter: Hui Liu


I have Solr 6.0.0 installed on my PC (Windows 7). I was experimenting with 
‘Streaming Expressions’ using Oracle JDBC as the 
stream source, but got 'null pointer' errors. Below are the details on how to 
reproduce this error:

1. create a collection 'document6' which only contains long and string data 
types, 

schema.xml for Solr collection 'document6': (newly created empty collections 
with 2 shards) 
===

  
 
 
 
 
 

   

  
  
   
   
 
 
 
 
 
 
   
  document_id
  document_id


2. create a new Oracle (version 11.2.0.3) table 'document6' that only contains 
columns whose JDBC types are long and string, 

create table document6 
(document_id number(12) not null,
 sender_msg_dest varchar2(256),
 recip_msg_dest  varchar2(256),
 document_type   varchar2(20),
 document_key    varchar2(100));

loaded 9 records;

Oracle table 'document6': (newly created Oracle table with 9 records) 
=
QA_DOCREP@qlgdb1 > desc document6
 Name             Null?    Type
 ---------------- -------- --------------
 DOCUMENT_ID      NOT NULL NUMBER(12)
 SENDER_MSG_DEST           VARCHAR2(256)
 RECIP_MSG_DEST            VARCHAR2(256)
 DOCUMENT_TYPE             VARCHAR2(20)
 DOCUMENT_KEY              VARCHAR2(100)

3. tried this jdbc streaming expression in my browser, getting the error stack 
(see below)

http://localhost:8988/solr/document6/stream?expr=jdbc(connection="jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql="SELECT
 document_id,sender_msg_dest,recip_msg_dest,document_type,document_key FROM 
document6",sort="document_id asc",driver="oracle.jdbc.driver.OracleDriver")

errors in solr.log
==
2016-06-23 14:07:02.833 INFO  (qtp1389647288-139) [c:document6 s:shard2 
r:core_node1 x:document6_shard2_replica1] o.a.s.c.S.Request 
[document6_shard2_replica1]  webapp=/solr path=/stream 
params={expr=jdbc(connection%3D"jdbc:oracle:thin:qa_docrep/abc...@lit-racq01-scan.qa.gxsonline.net:1521/qlgdb",sql%3D"SELECT+document_id,sender_msg_dest,recip_msg_dest,document_type,document_key+FROM+document6",sort%3D"document_id+asc",driver%3D"oracle.jdbc.driver.OracleDriver")}
 status=0 QTime=1
2016-06-23 14:07:05.282 ERROR (qtp1389647288-139) [c:document6 s:shard2 
r:core_node1 x:document6_shard2_replica1] o.a.s.c.s.i.s.ExceptionStream 
java.lang.NullPointerException
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.read(JDBCStream.java:305)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.read(ExceptionStream.java:64)
at 
org.apache.solr.handler.StreamHandler$TimerStream.read(StreamHandler.java:374)
at 
org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:305)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
at 
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
at 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
at 
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
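A plausible shape for this NPE, given that it surfaces on the first read of an Oracle NUMBER column, is a per-column value reader looked up by JDBC type, where an unmapped type yields a null reader that read() then dereferences. The sketch below is purely hypothetical (the map, names, and messages are not from the JDBCStream source); it only illustrates why failing fast with a descriptive error beats a bare NullPointerException:

```java
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

public class ValueReaderLookup {
    interface ColumnReader { Object read(); }

    static final Map<Integer, ColumnReader> READERS = new HashMap<>();
    static {
        READERS.put(Types.VARCHAR, () -> "stub string value");
        // Deliberately no entry for Types.NUMERIC (what Oracle NUMBER(12)
        // columns typically report) -- the hypothesized gap behind the NPE.
    }

    static Object readColumn(int jdbcType) {
        ColumnReader r = READERS.get(jdbcType);
        if (r == null) {
            // Without this guard, r.read() would throw the bare
            // NullPointerException seen in the stack trace above.
            throw new IllegalStateException(
                "No reader registered for JDBC type " + jdbcType);
        }
        return r.read();
    }

    public static void main(String[] args) {
        System.out.println(readColumn(Types.VARCHAR));
        try {
            readColumn(Types.NUMERIC);
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```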
  

[jira] [Commented] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-06-23 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346632#comment-15346632
 ] 

Shalin Shekhar Mangar commented on SOLR-9207:
-

Thanks Pushkar. PeerSync doesn't stream so this is not surprising. Which 
solution have you implemented in the patch? A rough description would go a long 
way. Also, there are some unrelated changes in TSTLookup which don't belong 
here. A workaround would be to increase the default upload limit in Jetty.
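If raising the upload limit is the chosen workaround, the 2048 KB figure in the error quoted below corresponds to Solr's form-data upload limit in solrconfig.xml; the values here are illustrative, not a recommendation:

```xml
<!-- solrconfig.xml sketch: raising the limit behind the
     "exceeds upload limit of 2048 KB" error. -->
<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="2048000"
                  formdataUploadLimitInKB="10240"/>
</requestDispatcher>
```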

> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9207.patch
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This happens if we update solrconfig to retain more {{tlogs}} to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error:
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> 
> 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB t name="code">400
> 
> {code}
> We arrived at ~99K with the following math:
> * max_version_number = Long.MAX_VALUE = 9223372036854775807  
> * bytes per version number =  20 (on the wire as POST request sends version 
> number as string)
> * additional bytes for separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I can think of 2 ways to fix it:
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
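The ~99K estimate above can be reproduced with a quick calculation (a sketch of the report's arithmetic, not PeerSync code):

```java
public class PeerSyncLimitEstimate {
    public static void main(String[] args) {
        long uploadLimitBytes = 2L * 1024 * 1024;             // 2048 KB form-data limit
        int digits = String.valueOf(Long.MAX_VALUE).length(); // 19 digits in 9223372036854775807
        int bytesPerVersion = digits + 1 + 1;                 // + sign + ',' separator = 21
        long maxVersions = uploadLimitBytes / bytesPerVersion;
        System.out.println(maxVersions);                      // 99864, i.e. the ~99K ceiling
    }
}
```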



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (LUCENE-7353) ScandinavianFoldingFilterFactory and ScandinavianNormalizationFilterFactory should implement MultiTermAwareComponent

2016-06-23 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7353:


 Summary: ScandinavianFoldingFilterFactory and 
ScandinavianNormalizationFilterFactory should implement MultiTermAwareComponent
 Key: LUCENE-7353
 URL: https://issues.apache.org/jira/browse/LUCENE-7353
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


These token filters are safe to apply for multi-term queries.






[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346622#comment-15346622
 ] 

Jan Høydahl commented on SOLR-9194:
---

Got the patch applied. Here are some comments:

(x) When mis-typing, such as omitting the {{-z}}, we print the error msg 
followed by the full usage. Suggest we instead print the helpful error message 
followed by {{Type bin/solr zk -help for usage help}}
(/) There's a debug print that should probably go away? {{run_tool cp -src 
build.xml -dst zk:/ -zkHost localhost:9983 -recurse false}} (bin/solr line 1001)
(/) Tested cp both ways and zk->zk, the recursive error msg, and with and 
without trailing slash (/)
(/) Tested rm on both file and folder
(/) Tested with ZK_HOST set in solr.in.sh 
(/) Tested mv of znode
(x) This log msg from {{CloudSolrClient}} is annoying: {{INFO  - 2016-06-23 
16:22:05.124; org.apache.solr.client.solrj.impl.CloudSolrClient; Final 
constructed zkHost string: localhost:9983}}, and it is followed by a blank 
line.
(/) Tested upconfig
(x) Typo: _Name of the configset in Zookeeper that will be the *destinatino* 
of_...
(x) The command {{bin/solr zk rm -r /}} succeeds, rendering Solr useless :-) 
Should we simply throw an error instead?

Why do we write "_Solr MUST be started once to initialize Zookeeper before 
using these commands_"? Can't this script put e.g. {{security.json}} in the 
chroot even if Solr is not yet started?

Could we wish for a {{solr zk ls}} command? But that should be a follow-on 
ticket.
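A guard of the kind suggested for {{bin/solr zk rm -r /}} could look like this (hypothetical sketch; the method name and message are not from the SOLR-9194 patch):

```java
public class ZkRmGuard {
    // Refuse recursive removal of znodes whose loss would break the cluster.
    static void checkRemovePath(String znodePath) {
        String p = znodePath;
        while (p.length() > 1 && p.endsWith("/")) {
            p = p.substring(0, p.length() - 1); // normalize trailing slashes
        }
        if ("/".equals(p) || "/zookeeper".equals(p)) {
            throw new IllegalArgumentException(
                "Refusing to recursively remove '" + znodePath + "'");
        }
    }

    public static void main(String[] args) {
        checkRemovePath("/configs/myconf"); // allowed
        try {
            checkRemovePath("/");           // blocked
        } catch (IllegalArgumentException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```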

> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9194.patch, SOLR-9194.patch
>
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.






Re: VOTE: Apache Solr Ref Guide for 6.1

2016-06-23 Thread Anshum Gupta
+1

On Tue, Jun 21, 2016 at 11:19 AM, Cassandra Targett 
wrote:

> Please VOTE to release the Apache Solr Ref Guide for 6.1.
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.1-RC0/
>
> $ more /apache-solr-ref-guide-6.1.pdf.sha1
> 5929b03039e99644bc4ef23b37088b343e2ff0c8  apache-solr-ref-guide-6.1.pdf
>
> Here's my +1.
>
> Thanks,
> Cassandra
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 21 - Still Failing

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/21/

No tests ran.

Build Log:
[...truncated 39773 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (17.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.2-src.tgz...
   [smoker] 28.7 MB in 0.02 sec (1166.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.tgz...
   [smoker] 63.4 MB in 0.06 sec (1151.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.zip...
   [smoker] 73.9 MB in 0.06 sec (1172.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker]   Backcompat testing not required for release 6.0.1 because 
it's not less than 5.5.2
   [smoker]   Backcompat testing not required for release 6.0.0 because 
it's not less than 5.5.2
   [smoker]   Backcompat testing not required for release 6.1.0 because 
it's not less than 5.5.2
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (223.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.2-src.tgz...
   [smoker] 37.6 MB in 1.83 sec (20.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.2.tgz...
   [smoker] 130.5 MB in 7.55 sec (17.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.2.zip...
   [smoker] 138.4 MB in 4.54 sec (30.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.2.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.2/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.2/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   

[jira] [Updated] (SOLR-9193) Add scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Summary: Add scoreNodes Streaming Expression  (was: Add the scoreNodes 
Streaming Expression)

> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *nodeFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}
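The tf/idf interplay described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Solr's actual implementation: the nodeScore/nodeFreq field names come from the description, while the log-based idf formula and the data shapes are assumptions.

```python
import math

def score_nodes(nodes, num_docs):
    """Score gathered nodes by tf-idf: tf is how often the node was seen
    during the traversal, idf is derived from the node's docFreq across
    the whole collection, boosting rarer nodes."""
    for node in nodes:
        idf = math.log(num_docs / max(node["docFreq"], 1))
        node["nodeFreq"] = node["docFreq"]    # collection-wide docFreq
        node["nodeScore"] = node["tf"] * idf
    # the equivalent of top(sort="nodeScore desc", scoreNodes(...))
    return sorted(nodes, key=lambda n: n["nodeScore"], reverse=True)

nodes = [
    {"id": "productA", "tf": 10, "docFreq": 5000},  # common node
    {"id": "productB", "tf": 8, "docFreq": 20},     # rare node
]
ranked = score_nodes(nodes, num_docs=100_000)
```

Even with a lower tf, the rarer productB outranks productA, which is exactly the skew correction the description is after.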






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. 

The computed score will be added to each node in the *nodeScore* field. The 
docFreq of the node across the entire collection will be added to each node in 
the *nodeFreq* field. Other streaming expressions can then perform a ranking 
based on the nodeScore or compute their own score using the nodeFreq.

proposed syntax:
{code}
top(n="10",
  sort="nodeScore desc",
  scoreNodes(gatherNodes(...))) 
{code}

  was:
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes (SOLR-8925) expression and use a tf-idf scoring algorithm 
to score the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. The score will be added to each node in the "nscore" field. The 
underlying gatherNodes expression will perform the aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  scoreNodes(gatherNodes(...))) 
{code}

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *nodeFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes (SOLR-8925) expression and use a tf-idf scoring algorithm 
to score the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. The score will be added to each node in the "nscore" field. The 
underlying gatherNodes expression will perform the aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  scoreNodes(gatherNodes(...))) 
{code}

  was:
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. 

The computed score will be added to each node in the *nodeScore* field. The 
docFreq of the node across the entire collection will be added to each node in 
the *nodeFreq* field. Other streaming expressions can then perform a ranking 
based on the nodeScore or compute their own score using the nodeFreq.

proposed syntax:
{code}
top(n="5",
  sort="nodeScore desc",
  scoreNodes(gatherNodes(...))) 
{code}

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes (SOLR-8925) expression and use a tf-idf scoring 
> algorithm to score the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. The score will be added to each node in the "nscore" 
> field. The underlying gatherNodes expression will perform the aggregation 
> providing the tf.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nscore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Fix Version/s: 6.2

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.
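A client-side sketch of what such a request could look like (hypothetical: the /terms handler path, the wt param, and the comma-separated terms.list encoding are assumptions here; only the terms.list parameter itself comes from this ticket):

```python
from urllib.parse import urlencode

def terms_list_url(collection, field, terms):
    """Build a TermsComponent request asking for the docFreq of an
    explicit list of terms via the proposed terms.list parameter."""
    params = {
        "terms": "true",
        "terms.fl": field,              # field to pull terms from
        "terms.list": ",".join(terms),  # the new parameter from SOLR-9243
        "wt": "json",
    }
    return "/solr/%s/terms?%s" % (collection, urlencode(params))

url = terms_list_url("collection1", "text", ["lucene", "solr"])
```

The response would then carry one docFreq entry per requested term, which is what SOLR-9193's scoreNodes needs for its idf computation.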






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Fix Version/s: 6.2

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *nodeFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. 

The computed score will be added to each node in the *nodeScore* field. The 
docFreq of the node across the entire collection will be added to each node in 
the *nodeFreq* field. Other streaming expressions can then perform a ranking 
based on the nodeScore or compute their own score using the nodeFreq.

proposed syntax:
{code}
top(n="5",
  sort="nodeScore desc",
  scoreNodes(gatherNodes(...))) 
{code}

  was:
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. 

The computed score will be added to each node in the *nodeScore* field. The 
docFreq of the node across the entire collection will be added to each node in 
the *nodeFreq* field. Other streaming expressions can then perform a ranking 
based on the nodeScore or compute their own score using the nodeFreq.

proposed syntax:
{code}
top(n="5",
  sort="nodeScore desc",
  scoreNodes(gatherNodes(...))) 
{code}

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *nodeFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. 

The computed score will be added to each node in the *nodeScore* field. The 
docFreq of the node across the entire collection will be added to each node in 
the *nodeFreq* field. Other streaming expressions can then perform a ranking 
based on the nodeScore or compute their own score using the nodeFreq.

proposed syntax:
{code}
top(n="5",
  sort="nodeScore desc",
  scoreNodes(gatherNodes(...))) 
{code}

  was:
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. The score will be added to each node in the "nscore" field. The 
underlying gatherNodes expression will perform the aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  scoreNodes(gatherNodes(...))) 
{code}

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *nodeFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the scoreNodes Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Summary: Add the scoreNodes Streaming Expression  (was: Add the nodeRank 
Streaming Expression)

> Add the scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. The score will be added to each node in the "nscore" 
> field. The underlying gatherNodes expression will perform the aggregation 
> providing the tf.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nscore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the nodeRank Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The scoreNodes Streaming Expression is another *GraphExpression*. It will 
decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The scoreNodes expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign the score 
to each node. The score will be added to each node in the "nscore" field. The 
underlying gatherNodes expression will perform the aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  scoreNodes(gatherNodes(...))) 
{code}

  was:
The nodeScore Streaming Expression is another GraphExpression. It will decorate 
a gatherNodes expression and use a tf-idf scoring algorithm to score the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The nodeScore expression will gather the idfs from the shards for each node 
emitted by the underlying gatherNodes expression. It will then assign a score 
to each node. The score will be added to each node in the 
"nscore" field. The underlying gatherNodes expression will perform the 
aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  nodeScore(gatherNodes(...))) 
{code}

> Add the nodeRank Streaming Expression
> -
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The scoreNodes expression will gather the idfs from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. The score will be added to each node in the "nscore" 
> field. The underlying gatherNodes expression will perform the aggregation 
> providing the tf.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nscore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}






[jira] [Updated] (SOLR-9193) Add the nodeRank Streaming Expression

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Description: 
The nodeScore Streaming Expression is another GraphExpression. It will decorate 
a gatherNodes expression and use a tf-idf scoring algorithm to score the nodes.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The nodeScore expression will gather the idf's from the shards for each node 
emitted by the underlying gatherNodes expression. It will then 
assign a score to each node. The score will be added to each node in the 
"nscore" field. The underlying gatherNodes expression will perform the 
aggregation providing the tf.

proposed syntax:
{code}
top(n="5",
  sort="nscore desc",
  nodeScore(gatherNodes(...))) 
{code}








  was:
The nodeRank Streaming Expression is another GraphExpression. It will decorate 
a gatherNodes expression and use a tf-idf ranking algorithm to rank the nodes to 
support recommendations.

The gatherNodes expression only gathers nodes and aggregations. This is similar 
in nature to tf in search ranking, where the number of times a node appears in 
the traversal represents the tf. But this skews recommendations towards nodes 
that appear frequently in the index.

Using the idf for each node we can score each node as a function of tf and idf. 
This will provide a boost to nodes that appear less frequently in the index. 

The nodeRank expression will gather the idf's from the shards for each node 
emitted by the underlying gatherNodes expression. It will then perform the 
ranking. The underlying gatherNodes expression will perform the aggregation 
providing the tf.


> Add the nodeRank Streaming Expression
> -
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The nodeScore Streaming Expression is another GraphExpression. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf and 
> idf. This will provide a boost to nodes that appear less frequently in the 
> index. 
> The nodeScore expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then 
> assign a score to each node. The score will be added to each node in the 
> "nscore" field. The underlying gatherNodes expression will perform the 
> aggregation providing the tf.
> proposed syntax:
> {code}
> top(n="5",
>   sort="nscore desc",
>   nodeScore(gatherNodes(...))) 
> {code}






[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 338 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/338/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Jun 23 05:55:19 
AKDT 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu Jun 23 05:55:19 AKDT 2016
at 
__randomizedtesting.SeedInfo.seed([361CB47A3C3B2120:EDB7B4BC39134893]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1501)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:853)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11434 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   

[jira] [Updated] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-06-23 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9207:

Attachment: SOLR-9207.patch

> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9207.patch
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error:
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> <response>
>   <lst name="error">
>     <str name="msg">application/x-www-form-urlencoded content 
>     length (4761994 bytes) exceeds upload limit of 2048 KB</str>
>     <int name="code">400</int>
>   </lst>
> </response>
> {code}
> We arrived at ~99K with the following math:
> * max_version_number = Long.MAX_VALUE = 9223372036854775807  
> * bytes per version number =  20 (on the wire as POST request sends version 
> number as string)
> * additional bytes for separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
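The arithmetic above, and the chunking fix proposed as option 1, can be checked with a short sketch. The 21-byte figure (20 bytes per version number plus a comma separator) and the 2048 KB limit are taken directly from the description; the chunking helper is my own illustration, not the patch.

```python
# Back-of-the-envelope check of the ~99K PeerSync limit described above.
# Figures come straight from the issue: 20 bytes per version number on the
# wire plus 1 byte for the ',' separator, against a 2048 KB upload limit.
UPLOAD_LIMIT_BYTES = 2048 * 1024
BYTES_PER_VERSION = 20 + 1  # version as decimal string + separator

max_versions = UPLOAD_LIMIT_BYTES // BYTES_PER_VERSION
print(max_versions)  # 99864

# Option 1 from the description: request updates in chunks below the limit.
def chunk_versions(versions, chunk_size=90_000):
    """Yield the version list in chunks small enough for one POST."""
    for i in range(0, len(versions), chunk_size):
        yield versions[i:i + chunk_size]

chunks = list(chunk_versions(list(range(250_000))))
print([len(c) for c in chunks])  # [90000, 90000, 70000]
```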






[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Description: 
This ticket will add a terms.list parameter to the TermsComponent to retrieve 
Terms and docFreq for a specific list of Terms.

This is needed to support SOLR-9193 which needs to fetch the docFreq for a list 
of Terms.

This should also be useful as a general tool for fetching docFreq given a list 
of Terms.

  was:
Currently the TermsComponent is used to retrieve Terms given a set of 
parameters.

This ticket will add the ability to retrieve Terms and docFreq for a specific 
list of Terms.

This is needed to support SOLR-9193 which needs to fetch the docFreq for a list 
of Terms.

This should also be useful as a general tool for fetching docFreq given a list 
of Terms.


> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-23 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346413#comment-15346413
 ] 

Andriy Rysin commented on LUCENE-7287:
--

Sure, I can add a comment, but I need to test the solution first, and since I 
am not familiar with Solr it may take me a few days. Unless [~iorixxx] has 
already verified this solution, in which case we can just post it.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[JENKINS] Lucene-Solr-Tests-6.x - Build # 284 - Failure

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/284/

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Jun 23 14:37:02 
CEST 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu Jun 23 14:37:02 CEST 2016
at 
__randomizedtesting.SeedInfo.seed([F2A73B26F91730E7:290C3BE0FC3F5954]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11003 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Component/s: SearchComponents - other

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> Currently the TermsComponent is used to retrieve Terms given a set of 
> parameters.
> This ticket will add the ability to retrieve Terms and docFreq for a specific 
> list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.






[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Labels:   (was: Ter)

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> Currently the TermsComponent is used to retrieve Terms given a set of 
> parameters.
> This ticket will add the ability to retrieve Terms and docFreq for a specific 
> list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.






[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Labels: Ter  (was: )

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> Currently the TermsComponent is used to retrieve Terms given a set of 
> parameters.
> This ticket will add the ability to retrieve Terms and docFreq for a specific 
> list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.






[jira] [Updated] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-06-23 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9243:
-
Summary: Add terms.list parameter to the TermsComponent to fetch the 
docFreq for a list of terms  (was: Add ability for the TermsComponent to fetch 
the docFreq for a list of terms)

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> Currently the TermsComponent is used to retrieve Terms given a set of 
> parameters.
> This ticket will add the ability to retrieve Terms and docFreq for a specific 
> list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.






[jira] [Commented] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346307#comment-15346307
 ] 

Joel Bernstein commented on SOLR-9167:
--

Another round of the SQL/JDBC work will be coming along over the next couple of 
releases. We can try to find ways to make this simpler and possibly support a 
non-cloud mode.

> Unable to connect to solr via solrj jdbc driver 
> 
>
> Key: SOLR-9167
> URL: https://issues.apache.org/jira/browse/SOLR-9167
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 6.0
> Environment: java.version=1.8.0_77
> java.vendor=Oracle Corporation
> os.name=Mac OS X
> os.arch=x86_64
> os.version=10.11.5
>Reporter: Christian Schwarzinger
>Priority: Minor
>
> Getting the following error when trying to connect to Solr via the JDBC driver.
> {panel:title=client 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> (ClientCnxn.java:1102) - Session 0x0 for server 
> fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:8983, unexpected error, closing 
> socket connection and attempting reconnect
> java.io.IOException: Packet len1213486160 is out of range!
>   at 
> org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {panel}
> This is, IMHO, caused by the following server error:
> {panel:title=server 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> Illegal character 0x0 in state=START for buffer 
> HeapByteBuffer@5cc6fe87[p=1,l=49,c=8192,r=48]={\x00<<<\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>>>charset=UTF-8\r\nCo...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
> {panel}
> Using the HTTP interface for SQL via curl works, however:
> {code}
> bin/solr start -cloud
> bin/solr create -c test
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/test/update/json/docs' --data-binary '
> {
>   "id": "1",
>   "title": "Doc 1"
> }'
> curl 'http://localhost:8983/solr/test/update?commit=true'
> curl --data-urlencode 'stmt=SELECT count(*) FROM test' 
> http://localhost:8983/solr/test/sql?aggregationMode=facet
> {code}
> This is the code that fails:
> {code}
> Connection con = 
> DriverManager.getConnection("jdbc:solr://localhost:8983?collection=test&aggregationMode=map_reduce&numWorkers=2");
> {code}
> taken from: 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface
> Same error also occurs in 6.1.0-68 developer snapshot.
> Background: I'm trying to write a solr sql connector for Jedox BI Suite, 
> which should allow for better integration of solr into BI processes. Any 
> advice / help appreciated.
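A side note of my own on the client error above: "Packet len1213486160" is not a random number. Interpreted as a 4-byte big-endian length prefix, those bytes spell ASCII "HTTP", which suggests the ZooKeeper client was pointed at an HTTP port (here Solr's 8983) and tried to read an HTTP response as a ZooKeeper packet. A quick check:

```python
# Decode the "Packet len" from the ZooKeeper client error: the 4-byte
# big-endian length prefix turns out to be the ASCII bytes "HTTP" --
# the telltale sign of a ZK client reading an HTTP response.
packet_len = 1213486160
first_bytes = packet_len.to_bytes(4, "big")
print(first_bytes)  # b'HTTP'
```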






[jira] [Commented] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346304#comment-15346304
 ] 

Joel Bernstein commented on SOLR-9167:
--

It's in the documentation here 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-JDBCDriver.

But agreed that it could be called out with more emphasis.


> Unable to connect to solr via solrj jdbc driver 
> 
>
> Key: SOLR-9167
> URL: https://issues.apache.org/jira/browse/SOLR-9167
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 6.0
> Environment: java.version=1.8.0_77
> java.vendor=Oracle Corporation
> os.name=Mac OS X
> os.arch=x86_64
> os.version=10.11.5
>Reporter: Christian Schwarzinger
>Priority: Minor
>
> Getting the following error when trying to connect to Solr via the JDBC driver.
> {panel:title=client 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> (ClientCnxn.java:1102) - Session 0x0 for server 
> fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:8983, unexpected error, closing 
> socket connection and attempting reconnect
> java.io.IOException: Packet len1213486160 is out of range!
>   at 
> org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {panel}
> This is, IMHO, caused by the following server error:
> {panel:title=server 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> Illegal character 0x0 in state=START for buffer 
> HeapByteBuffer@5cc6fe87[p=1,l=49,c=8192,r=48]={\x00<<<\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>>>charset=UTF-8\r\nCo...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
> {panel}
> Using the HTTP interface for SQL via curl works, however:
> {code}
> bin/solr start -cloud
> bin/solr create -c test
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/test/update/json/docs' --data-binary '
> {
>   "id": "1",
>   "title": "Doc 1"
> }'
> curl 'http://localhost:8983/solr/test/update?commit=true'
> curl --data-urlencode 'stmt=SELECT count(*) FROM test' 
> http://localhost:8983/solr/test/sql?aggregationMode=facet
> {code}
> This is the code that fails:
> {code}
> Connection con = 
> DriverManager.getConnection("jdbc:solr://localhost:8983?collection=test&aggregationMode=map_reduce&numWorkers=2");
> {code}
> taken from: 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface
> Same error also occurs in 6.1.0-68 developer snapshot.
> Background: I'm trying to write a solr sql connector for Jedox BI Suite, 
> which should allow for better integration of solr into BI processes. Any 
> advice / help appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 523 - Failure

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/523/

No tests ran.

Build Log:
[...truncated 40558 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.8 MB in 0.03 sec (988.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.2 MB in 0.06 sec (1018.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 74.8 MB in 0.07 sec (1035.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6018 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6018 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 224 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (40.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 39.0 MB in 0.84 sec (46.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 135.3 MB in 2.40 sec (56.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 144.0 MB in 0.68 sec (211.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]  

[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 337 - Still Failing!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/337/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:39834/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:39834/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([B1EA783DA6AF81D3:39BE47E70853EC2B]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 40 - Still Failing

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/40/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:42919/h/k","node_name":"127.0.0.1:42919_h%2Fk","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:52098/h/k;,   
"node_name":"127.0.0.1:52098_h%2Fk",   "state":"down"}, 
"core_node2":{   "state":"down",   
"base_url":"http://127.0.0.1:33169/h/k;,   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:33169_h%2Fk"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:42919/h/k;,   
"node_name":"127.0.0.1:42919_h%2Fk",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:42919/h/k","node_name":"127.0.0.1:42919_h%2Fk","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:52098/h/k;,
  "node_name":"127.0.0.1:52098_h%2Fk",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:33169/h/k;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:33169_h%2Fk"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:42919/h/k;,
  "node_name":"127.0.0.1:42919_h%2Fk",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([D42E1C6755947D0D:5C7A23BDFB6810F5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 

[jira] [Commented] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346143#comment-15346143
 ] 

Michael McCandless commented on LUCENE-7352:


+1

Thank you for digging [~jpountz]!

> TestSimpleExplanationsWithFillerDocs failures
> -
>
> Key: LUCENE-7352
> URL: https://issues.apache.org/jira/browse/LUCENE-7352
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
> Attachments: LUCENE-7352.patch
>
>
> Policeman Jenkins found reproducible {{testDMQ8()}} and {{testDMQ9()}} 
> failures on master 
> [http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17037/]:
> {noformat}
> Checking out Revision ece9d85cbea962fd7d327010f1ba184cefdfa8ed 
> (refs/remotes/origin/master)
> [...]
> [junit4] Suite: org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleExplanationsWithFillerDocs -Dtests.method=testDMQ8 
> -Dtests.seed=882B619046E7216B -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=fo -Dtests.timezone=Asia/Ashgabat -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J2 | TestSimpleExplanationsWithFillerDocs.testDMQ8 
> <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> (+((field:yy (field:w5)^100.0) | (field:xx)^10.0)~0.5 -extra:extra) 
> NEVER:MATCH: score(doc=713)=-1.9956312E-6 != explanationScore=-3.9912625E-6 
> Explanation: -3.9912625E-6 = sum of:
>[junit4]>   -3.9912625E-6 = weight(field:w5 in 713) 
> [RandomSimilarity], result of:
>[junit4]> -3.9912625E-6 = score(IBSimilarity, doc=713, freq=1.0), 
> computed from:
>[junit4]>   100.0 = boost
>[junit4]>   0.0 = NormalizationH2, computed from: 
>[junit4]> 1.0 = tf
>[junit4]> 5.502638 = avgFieldLength
>[junit4]> 5.6493154E19 = len
>[junit4]>   0.2533109 = LambdaTTF, computed from: 
>[junit4]> 2256.0 = totalTermFreq
>[junit4]> 8909.0 = numberOfDocuments
>[junit4]>   -3.9912624E-8 = DistributionSPL
>[junit4]>  expected:<-1.9956312E-6> but was:<-3.9912625E-6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([882B619046E7216B:D4118BC551735775]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:50)
>[junit4]>  at junit.framework.Assert.failNotEquals(Assert.java:287)
>[junit4]>  at junit.framework.Assert.assertEquals(Assert.java:120)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:501)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:196)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:183)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>[junit4]>  at 
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>[junit4]>  at 
> org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669)
>[junit4]>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
>[junit4]>  at 
> org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
>[junit4]>  at 
> 

[jira] [Commented] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346039#comment-15346039
 ] 

Uwe Schindler commented on SOLR-9167:
-

I'd propose a syntax like {{jdbc:solr:http://localhost:8983/solr}} (of course 
with a path, as in SolrJ).

> Unable to connect to solr via solrj jdbc driver 
> 
>
> Key: SOLR-9167
> URL: https://issues.apache.org/jira/browse/SOLR-9167
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 6.0
> Environment: java.version=1.8.0_77
> java.vendor=Oracle Corporation
> os.name=Mac OS X
> os.arch=x86_64
> os.version=10.11.5
>Reporter: Christian Schwarzinger
>Priority: Minor
>
> Getting the following error when trying to connect to Solr via the JDBC driver.
> {panel:title=client 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> (ClientCnxn.java:1102) - Session 0x0 for server 
> fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:8983, unexpected error, closing 
> socket connection and attempting reconnect
> java.io.IOException: Packet len1213486160 is out of range!
>   at 
> org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {panel}
> This is, imho, caused by the following server error:
> {panel:title=server 
> error|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> Illegal character 0x0 in state=START for buffer 
> HeapByteBuffer@5cc6fe87[p=1,l=49,c=8192,r=48]={\x00<<<\x00\x00-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>>>charset=UTF-8\r\nCo...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
> {panel}
> Using the HTTP interface for SQL via curl works, however:
> {code}
> bin/solr start -cloud
> bin/solr create -c test
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/test/update/json/docs' --data-binary '
> {
>   "id": "1",
>   "title": "Doc 1"
> }'
> curl 'http://localhost:8983/solr/test/update?commit=true'
> curl --data-urlencode 'stmt=SELECT count(*) FROM test' 
> http://localhost:8983/solr/test/sql?aggregationMode=facet
> {code}
> This is the code that fails:
> {code}
> Connection con = 
> DriverManager.getConnection("jdbc:solr://localhost:8983?collection=test=map_reduce=2");
> {code}
> taken from: 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface
> Same error also occurs in 6.1.0-68 developer snapshot.
> Background: I'm trying to write a solr sql connector for Jedox BI Suite, 
> which should allow for better integration of solr into BI processes. Any 
> advice / help appreciated.
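An aside on the quoted ZooKeeper error: the odd length in "Packet 
len1213486160 is out of range!" is itself diagnostic. 1213486160 is 
0x48545450, which read as ASCII bytes is "HTTP" — the ZooKeeper client was 
handed Solr's HTTP listener and tried to parse the start of an HTTP response 
as a ZooKeeper packet length. A minimal, dependency-free sketch of that 
decoding (the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class PacketLenDecode {
    public static void main(String[] args) {
        int len = 1213486160; // the "out of range" packet length from the log
        // ZooKeeper reads the first 4 bytes of a packet as a big-endian int;
        // reinterpret those same 4 bytes as ASCII characters instead.
        byte[] b = {
            (byte) (len >>> 24), (byte) (len >>> 16),
            (byte) (len >>> 8),  (byte) len
        };
        System.out.println(new String(b, StandardCharsets.US_ASCII)); // prints "HTTP"
    }
}
```

The same trick generalizes: a "Packet len... out of range" value that decodes 
to printable ASCII usually means some non-ZooKeeper protocol answered on that 
port.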






[jira] [Commented] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346037#comment-15346037
 ] 

Uwe Schindler commented on SOLR-9167:
-

We should document explicitly somewhere that the JDBC driver has to point to 
the ZK address, not an HTTP listener address. I have had several customers who 
wanted to try the JDBC driver and failed.

Nevertheless, would it not also be an option to allow an HTTP address with the 
JDBC driver? I know the streaming API only works with SolrCloud, but that 
would let people try the JDBC driver without a SolrCloud setup. Is this 
possible at all?
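For reference, the working form of the connection string points the driver at 
ZooKeeper rather than at an HTTP listener. A sketch under stated assumptions: 
the host and port are illustrative (the embedded ZK started by {{bin/solr 
start -cloud}} listens on the Solr port + 1000, i.e. 9983 for the default 
8983), and {{looksLikeHttpAddress}} is a hypothetical helper for this example, 
not SolrJ API:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class SolrJdbcSketch {
    // Hypothetical sanity check: the part after "jdbc:solr://" must be a
    // ZooKeeper host:port (or ensemble string), not an http:// URL.
    static boolean looksLikeHttpAddress(String jdbcUrl) {
        String rest = jdbcUrl.substring("jdbc:solr://".length());
        return rest.startsWith("http://") || rest.startsWith("https://");
    }

    public static void main(String[] args) throws Exception {
        // ZK address (illustrative), not http://localhost:8983/solr
        String zkUrl = "jdbc:solr://localhost:9983?collection=test"
                + "&aggregationMode=map_reduce&numWorkers=2";
        if (looksLikeHttpAddress(zkUrl)) {
            throw new IllegalArgumentException("point the driver at ZK, not at an HTTP listener");
        }
        System.out.println(zkUrl);
        // Connecting requires a running SolrCloud node and solr-solrj on the
        // classpath; only attempted when a URL is passed on the command line.
        if (args.length > 0) {
            try (Connection con = DriverManager.getConnection(args[0])) {
                System.out.println("connected: " + !con.isClosed());
            }
        }
    }
}
```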







[jira] [Closed] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Christian Schwarzinger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Schwarzinger closed SOLR-9167.








[jira] [Resolved] (SOLR-9167) Unable to connect to solr via solrj jdbc driver

2016-06-23 Thread Christian Schwarzinger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Schwarzinger resolved SOLR-9167.
--
Resolution: Not A Problem

Pointing to ZK solved the problem.
Thank you!







[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 18 - Still Failing

2016-06-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/18/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=64703, name=collection1, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=64703, name=collection1, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55756: collection already exists: 
awholynewstresscollection_collection1_3
at __randomizedtesting.SeedInfo.seed([948547760D48CBB9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1575)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1596)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:984)




Build Log:
[...truncated 12353 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest_948547760D48CBB9-001/init-core-data-001
   [junit4]   2> 2401258 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 2401258 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 2401304 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 2401322 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 2401325 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 2401359 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_49299_hdfs.9tmtrd/webapp
   [junit4]   2> 2401508 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log NO JSP Support for /, did not find 
org.apache.jasper.servlet.JspServlet
   [junit4]   2> 2401727 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:49299
   [junit4]   2> 2402018 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 2402020 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 2402034 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_46157_datanode.hbtxhk/webapp
   [junit4]   2> 2402156 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[948547760D48CBB9]-worker) [
] o.m.log NO JSP Support for /, did not find 
org.apache.jasper.servlet.JspServlet
   [junit4]   2> 2402471 INFO  

[jira] [Updated] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7352:
-
Attachment: LUCENE-7352.patch

This is a test bug. CheckHits assumes that if there is a single sub 
explanation, then its value is necessarily the same as the parent explanation. 
This fails with dismax when there is a single sub that produces a negative 
score since in that case it uses 0 as a max score and multiplies the score with 
the tie breaker factor.
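The arithmetic behind that mismatch, using the numbers from the failure quoted below, can be sketched as follows. This is a simplified model of the disjunction-max combination formula (max + tieBreaker * (sum - max)) with the running max starting at 0, which is what leaves a lone negative-scoring sub multiplied by the tie-breaker while the explanation reports the sub's value directly; it is an illustration of the described behavior, not the actual Lucene scorer code.

```java
// Numeric sketch of the scorer-vs-explanation mismatch described above,
// using the values from the quoted Jenkins failure. With a single sub that
// scores negatively, the max stays at 0 and the result is sub * tieBreaker,
// while the explanation reports the sub's score unscaled.
public class DisMaxNegativeScore {
    static float dismaxScore(float[] subScores, float tieBreaker) {
        float max = 0f; // starts at 0, so a lone negative sub never becomes the max
        float sum = 0f;
        for (float s : subScores) {
            sum += s;
            max = Math.max(max, s);
        }
        return max + (sum - max) * tieBreaker;
    }

    public static void main(String[] args) {
        float sub = -3.9912625E-6f; // the single sub-scorer value from the failure
        float tie = 0.5f;           // the ~0.5 tie breaker from the query (...)~0.5
        float scorerValue = dismaxScore(new float[] { sub }, tie);
        System.out.println(scorerValue + " vs explanation " + sub);
        // scorerValue is half the explanation value, matching
        // score(doc=713)=-1.9956312E-6 != explanationScore=-3.9912625E-6
    }
}
```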

> TestSimpleExplanationsWithFillerDocs failures
> -
>
> Key: LUCENE-7352
> URL: https://issues.apache.org/jira/browse/LUCENE-7352
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
> Attachments: LUCENE-7352.patch
>
>
> Policeman Jenkins found reproducible {{testDMQ8()}} and {{testDMQ9()}} 
> failures on master 
> [http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17037/]:
> {noformat}
> Checking out Revision ece9d85cbea962fd7d327010f1ba184cefdfa8ed 
> (refs/remotes/origin/master)
> [...]
> [junit4] Suite: org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleExplanationsWithFillerDocs -Dtests.method=testDMQ8 
> -Dtests.seed=882B619046E7216B -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=fo -Dtests.timezone=Asia/Ashgabat -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J2 | TestSimpleExplanationsWithFillerDocs.testDMQ8 
> <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> (+((field:yy (field:w5)^100.0) | (field:xx)^10.0)~0.5 -extra:extra) 
> NEVER:MATCH: score(doc=713)=-1.9956312E-6 != explanationScore=-3.9912625E-6 
> Explanation: -3.9912625E-6 = sum of:
>[junit4]>   -3.9912625E-6 = weight(field:w5 in 713) 
> [RandomSimilarity], result of:
>[junit4]> -3.9912625E-6 = score(IBSimilarity, doc=713, freq=1.0), 
> computed from:
>[junit4]>   100.0 = boost
>[junit4]>   0.0 = NormalizationH2, computed from: 
>[junit4]> 1.0 = tf
>[junit4]> 5.502638 = avgFieldLength
>[junit4]> 5.6493154E19 = len
>[junit4]>   0.2533109 = LambdaTTF, computed from: 
>[junit4]> 2256.0 = totalTermFreq
>[junit4]> 8909.0 = numberOfDocuments
>[junit4]>   -3.9912624E-8 = DistributionSPL
>[junit4]>  expected:<-1.9956312E-6> but was:<-3.9912625E-6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([882B619046E7216B:D4118BC551735775]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:50)
>[junit4]>  at junit.framework.Assert.failNotEquals(Assert.java:287)
>[junit4]>  at junit.framework.Assert.assertEquals(Assert.java:120)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:501)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:196)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:183)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>[junit4]>  at 
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>[junit4]>  at 
> org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669)
>[junit4]>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
> 

Re: VOTE: Apache Solr Ref Guide for 6.1

2016-06-23 Thread Varun Thacker
+1

On Wed, Jun 22, 2016 at 11:45 PM, Kevin Risden 
wrote:

> +1 the resized images for the SQL clients look great.
>
> Kevin Risden
>
> On Wed, Jun 22, 2016 at 1:38 AM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
>
>> +1
>>
>> On Tue, Jun 21, 2016 at 11:49 PM, Cassandra Targett 
>> wrote:
>>
>>> Please VOTE to release the Apache Solr Ref Guide for 6.1.
>>>
>>> The artifacts can be downloaded from:
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.1-RC0/
>>>
>>> $ more /apache-solr-ref-guide-6.1.pdf.sha1
>>> 5929b03039e99644bc4ef23b37088b343e2ff0c8  apache-solr-ref-guide-6.1.pdf
>>>
>>> Here's my +1.
>>>
>>> Thanks,
>>> Cassandra
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>
>


-- 


Regards,
Varun Thacker


[jira] [Commented] (SOLR-6492) Solr field type that supports multiple, dynamic analyzers

2016-06-23 Thread Danny Teichthal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345928#comment-15345928
 ] 

Danny Teichthal commented on SOLR-6492:
---

[~solrtrey] - great, looking forward to it.
Are there any other planned changes besides the new license?

> Solr field type that supports multiple, dynamic analyzers
> -
>
> Key: SOLR-6492
> URL: https://issues.apache.org/jira/browse/SOLR-6492
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Trey Grainger
> Fix For: 5.0
>
>
> A common request - particularly for multilingual search - is to be able to 
> support one or more dynamically-selected analyzers for a field. For example, 
> someone may have a "content" field and pass in a document in German (using an 
> Analyzer with Tokenizer/Filters for German), a separate document in English 
> (using an English Analyzer), and possibly even a field with mixed-language 
> content in German and English. This latter case could pass the content 
> separately through both an analyzer defined for German and another Analyzer 
> defined for English, stacking or concatenating the token streams based upon 
> the use-case.
> There are some distinct advantages in terms of index size and query 
> performance which can be obtained by stacking terms from multiple analyzers 
> in the same field instead of duplicating content in separate fields and 
> searching across multiple fields. 
> Other non-multilingual use cases may include things like switching to a 
> different analyzer for the same field to remove a feature (i.e. turning 
> on/off query-time synonyms against the same field on a per-query basis).
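The stacking idea from the description can be sketched with plain token lists. The "analyzers" below are hypothetical string functions standing in for per-language analysis chains, not real Lucene Analyzers; a real implementation would merge TokenStreams and manage position increments so the stacked terms stay aligned for phrase queries.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Toy sketch of "stack terms from multiple analyzers in the same field".
// Each mock analyzer lowercases, splits on whitespace, and prefixes a
// language marker in place of real language-specific filters.
public class StackedAnalyzers {
    static Function<String, List<String>> analyzer(String lang) {
        return text -> {
            List<String> tokens = new ArrayList<>();
            for (String t : text.toLowerCase().split("\\s+")) {
                tokens.add(lang + ":" + t);
            }
            return tokens;
        };
    }

    // Pass the same field content through every selected analyzer and
    // concatenate the resulting token streams into one indexed field.
    static List<String> stack(String content,
                              List<Function<String, List<String>>> analyzers) {
        List<String> field = new ArrayList<>();
        for (Function<String, List<String>> a : analyzers) {
            field.addAll(a.apply(content));
        }
        return field;
    }

    public static void main(String[] args) {
        List<String> field = stack("Guten Morgen",
                Arrays.asList(analyzer("de"), analyzer("en")));
        System.out.println(field);
        // One field carries terms from both chains, so a query analyzed with
        // either language can match without duplicating content into
        // separate fields.
    }
}
```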






[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 336 - Failure!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/336/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:37770/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:37770/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([F249720D0461C723:7A1D4DD7AA9DAADB]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7352) TestSimpleExplanationsWithFillerDocs failures

2016-06-23 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345887#comment-15345887
 ] 

Adrien Grand commented on LUCENE-7352:
--

I am looking into it. It does not seem to be related to BS1 this time since the 
test still fails when I disable BS1.

> TestSimpleExplanationsWithFillerDocs failures
> -
>
> Key: LUCENE-7352
> URL: https://issues.apache.org/jira/browse/LUCENE-7352
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
>
> Policeman Jenkins found reproducible {{testDMQ8()}} and {{testDMQ9()}} 
> failures on master 
> [http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17037/]:
> {noformat}
> Checking out Revision ece9d85cbea962fd7d327010f1ba184cefdfa8ed 
> (refs/remotes/origin/master)
> [...]
> [junit4] Suite: org.apache.lucene.search.TestSimpleExplanationsWithFillerDocs
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleExplanationsWithFillerDocs -Dtests.method=testDMQ8 
> -Dtests.seed=882B619046E7216B -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=fo -Dtests.timezone=Asia/Ashgabat -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J2 | TestSimpleExplanationsWithFillerDocs.testDMQ8 
> <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> (+((field:yy (field:w5)^100.0) | (field:xx)^10.0)~0.5 -extra:extra) 
> NEVER:MATCH: score(doc=713)=-1.9956312E-6 != explanationScore=-3.9912625E-6 
> Explanation: -3.9912625E-6 = sum of:
>[junit4]>   -3.9912625E-6 = weight(field:w5 in 713) 
> [RandomSimilarity], result of:
>[junit4]> -3.9912625E-6 = score(IBSimilarity, doc=713, freq=1.0), 
> computed from:
>[junit4]>   100.0 = boost
>[junit4]>   0.0 = NormalizationH2, computed from: 
>[junit4]> 1.0 = tf
>[junit4]> 5.502638 = avgFieldLength
>[junit4]> 5.6493154E19 = len
>[junit4]>   0.2533109 = LambdaTTF, computed from: 
>[junit4]> 2256.0 = totalTermFreq
>[junit4]> 8909.0 = numberOfDocuments
>[junit4]>   -3.9912624E-8 = DistributionSPL
>[junit4]>  expected:<-1.9956312E-6> but was:<-3.9912625E-6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([882B619046E7216B:D4118BC551735775]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:50)
>[junit4]>  at junit.framework.Assert.failNotEquals(Assert.java:287)
>[junit4]>  at junit.framework.Assert.assertEquals(Assert.java:120)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:338)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.verifyExplanation(CheckHits.java:358)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits$ExplanationAsserter.collect(CheckHits.java:501)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreRange(Weight.java:196)
>[junit4]>  at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:183)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
>[junit4]>  at 
> org.apache.lucene.search.ReqExclBulkScorer.score(ReqExclBulkScorer.java:48)
>[junit4]>  at 
> org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>[junit4]>  at 
> org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669)
>[junit4]>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.lucene.search.CheckHits.checkExplanations(CheckHits.java:310)
>[junit4]>  at 
> org.apache.lucene.search.QueryUtils.checkExplanations(QueryUtils.java:104)
>[junit4]>  at 
> 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 216 - Failure!

2016-06-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/216/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionStateFormat2Test.test

Error Message:
Could not find collection:.system

Stack Trace:
java.lang.AssertionError: Could not find collection:.system
at 
__randomizedtesting.SeedInfo.seed([467048718B3B8185:CE2477AB25C7EC7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:154)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:134)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.testConfNameAndCollectionNameSame(CollectionStateFormat2Test.java:53)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.test(CollectionStateFormat2Test.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at