[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15491 - Failure!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15491/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 30000 ms! 
ClusterState: {
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:44615",
            "node_name":"127.0.0.1:44615_",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:57383",
            "node_name":"127.0.0.1:57383_",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:54432",
            "node_name":"127.0.0.1:54432_",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "control_collection":{
    "replicationFactor":"1",
    "shards":{"shard1":{
        "range":"80000000-7fffffff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:37655",
            "node_name":"127.0.0.1:37655_",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "c8n_1x2":{
    "replicationFactor":"2",
    "shards":{"shard1":{
        "range":"80000000-7fffffff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"c8n_1x2_shard1_replica1",
            "base_url":"http://127.0.0.1:57383",
            "node_name":"127.0.0.1:57383_",
            "state":"active",
            "leader":"true"},
          "core_node2":{
            "core":"c8n_1x2_shard1_replica2",
            "base_url":"http://127.0.0.1:37655",
            "node_name":"127.0.0.1:37655_",
            "state":"recovering"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"},
  "collMinRf_1x3":{
    "replicationFactor":"3",
    "shards":{"shard1":{
        "range":"80000000-7fffffff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collMinRf_1x3_shard1_replica1",
            "base_url":"http://127.0.0.1:44615",
            "node_name":"127.0.0.1:44615_",
            "state":"active"},
          "core_node2":{
            "core":"collMinRf_1x3_shard1_replica2",
            "base_url":"http://127.0.0.1:37655",
            "node_name":"127.0.0.1:37655_",
            "state":"active"},
          "core_node3":{
            "core":"collMinRf_1x3_shard1_replica3",
            "base_url":"http://127.0.0.1:57383",
            "node_name":"127.0.0.1:57383_",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 come up within 30000 ms! ClusterState: {
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:44615",
            "node_name":"127.0.0.1:44615_",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:57383",
            "node_name":"127.0.0.1:57383_",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:54432",
            "node_name":"127.0.0.1:54432_",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "control_collection":{
    "replicationFactor":"1",
    "shards":{"shard1":{
        "range":"80000000-7fffffff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:37655",
            "node_name":"127.0.0.1:37655_",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCre

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 322 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/322/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:45030/n_/hp","node_name":"127.0.0.1:45030_n_%2Fhp","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:51924/n_/hp",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:51924_n_%2Fhp"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:64073/n_/hp",
          "node_name":"127.0.0.1:64073_n_%2Fhp",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:45030/n_/hp",
          "node_name":"127.0.0.1:45030_n_%2Fhp",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:45030/n_/hp","node_name":"127.0.0.1:45030_n_%2Fhp","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:51924/n_/hp",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:51924_n_%2Fhp"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:64073/n_/hp",
          "node_name":"127.0.0.1:64073_n_%2Fhp",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:45030/n_/hp",
          "node_name":"127.0.0.1:45030_n_%2Fhp",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([6F91588C79C30CE:8EAD2A5269605D36]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedt

[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 8 - Still Failing

2016-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/8/

4 tests failed.
FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingStoredFieldsFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingStoredFieldsFormat

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)


FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)




Build Log:
[...truncated 2756 lines...]
   [junit4] Suite: 
org.apache.lucene.codecs.asserting.TestAssertingStoredFieldsFormat
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestAssertingStoredFieldsFormat -Dtests.method=testRamBytesUsed 
-Dtests.seed=C27A75535BA21309 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=sr_RS -Dtests.timezone=America/Indiana/Knox -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   7200s J2 | TestAssertingStoredFieldsFormat.testRamBytesUsed 
<<<
   [junit4]> Throwable #1: java.lang.Exception: Test abandoned because 
suite timeout was reached.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): {}, 
docValues:{}, sim=DefaultSimilarity, locale=sr_RS, timezone=America/Indiana/Knox
   [junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 
1.7.0_80 (64-bit)/cpus=4,threads=2,free=223203152,total=408420352
   [junit4]   2> NOTE: All tests run in this JVM: [Nested, Nested, Nested, 
Nested, Nested, Nested, Nested, Nested, Nested, Nested, Nested, Nested, Nested, 
Nested, Nested, Nested, Nested, Nested, Nested, Nested, Nested, 
TestLookaheadTokenFilter, Nested2, Nested, TestShuffleFS, 
TestAssertingNormsFormat, TestGroupFiltering, Nested1, 
TestAssertingStoredFieldsFormat]
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestAssertingStoredFieldsFormat -Dtests.seed=C27A75535BA21309 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=sr_RS -Dtests.timezone=America/Indiana/Knox -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s J2 | TestAssertingStoredFieldsFormat (suite) <<<
   [junit4]> Throwable #1: java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([C27A75535BA21309]:0)
   [junit4] Completed [37/38] on J2 in 7223.39s, 6 tests, 2 errors <<< FAILURES!

[...truncated 17 lines...]
   [junit4] Suite: 
org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat
   [junit4]   2> ?.?. 08, 2016 11:32:54 ?? 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.codecs.asserting.TestAssertingTermVectorsFormat
   [junit4]   2>1) Thread[id=9, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:47)
   [junit4]   2>2) Thread[id=236, 
name=TEST-TestAssertingTermVectorsFormat.testRamBytesUsed-seed#[C27A75535BA21309],
 state=RUNNABLE, group=TGRP-TestAssertingTermVectorsFormat]
   [junit4]   2> at 
org.apache.lucene.store.MockIndexInputWrapper.readByte(MockIndexInputWrapper.java:132)
   [junit4]   2> at 
org.apache.lucene.util.packed.BlockPackedReaderIterator.skip(BlockPackedReaderIterator.java:126)
   [junit4]   2> at 
org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.readPositions(CompressingTermVectorsReader.java:637)
   [junit4]   2> at 
org.apache.lucene.c

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2950 - Failure!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2950/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Mutual exclusion failed. Found more than one task running for the same 
collection

Stack Trace:
java.lang.AssertionError: Mutual exclusion failed. Found more than one task 
running for the same collection
at 
__randomizedtesting.SeedInfo.seed([3B170988B382A39:8BE54F4225C447C1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:136)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapt

[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 15193 - Failure!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15193/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=10975, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=10974, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=10973, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=10972, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   5) Thread[id=10976, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=10975, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.Sc

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 3004 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3004/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([B74FFF96A9771418:3F1BC04C078B79E0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:837)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(

[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090325#comment-15090325
 ] 

Mark Miller commented on LUCENE-6938:
-

bq. I think this is an improvement, not a requirement?

I think I slightly misunderstood this the first time. You meant that making it 
more efficient for Windows was not a requirement?

In that case I agree, though I figured if it was easy, we should just do it. It 
does not look so easy though. So I suggest a switch to turn it off in 
build.properties. But right, I don't think it's a requirement that we make it 
more efficient, just that we keep an id in the jars.

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 906 - Still Failing

2016-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/906/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2203, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2203, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([D783DC589FA23CF7:5FD7E382315E510F]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54310: Could not find collection : 
awholynewstresscollection_collection0_7
at __randomizedtesting.SeedInfo.seed([D783DC589FA23CF7]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:574)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:888)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data,
 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data/index.20160109035248280,
 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data/index.20160109035248176]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data,
 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data/index.20160109035248280,
 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_D783DC589FA23CF7-001/solr-instance-015/./collection1/data/index.20160109035248176]
 expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([D783DC589FA23CF7:20F03200594A9311]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:815)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
a

[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090316#comment-15090316
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/9/16 1:27 AM:
--

I think it makes sense to have two implementations:

*MatchStream*: Uses an in-memory index to match Tuples.
*HavingStream*: Uses a ComparisonOperation to match Tuples.

One of the things we can think over is a specific stream for doing *parallel 
alerting*. The MatchStream is a step in that direction.


was (Author: joel.bernstein):
I think it makes sense to have two implementations:

*MatchStream*: Uses an in-memory index to match Tuples.
*HavingStream*: Uses a ComparisonOperation to match Tuples.

One of the things we can think over is a specific stream for doing *parallel 
alerting*. The MatchStream is step in that direction.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all where the total spent by each distinct customer 
> is >= 500. The total spent is calculated via the sum(cost) metric in the 
> reduce stream.
> The intent is to support as the filters in the having(...) clause the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex and as each tuple is read out of the underlying 
> stream creating an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.
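
For concreteness, a minimal sketch of the MemoryIndex approach (option 1 
above). The TupleMatcher name and the string-only field handling are 
illustrative assumptions, not a committed implementation; a numeric metric 
like sum(cost) would need typed indexing for range queries to behave 
numerically.

{code}
import java.util.Map;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.Query;

// One single-document MemoryIndex, reused for each tuple read from the
// underlying stream; a tuple passes the having() filter iff it matches.
public class TupleMatcher {
  private final Analyzer analyzer = new StandardAnalyzer();
  private final MemoryIndex index = new MemoryIndex();
  private final Query query; // parsed once from the q="..." parameter

  public TupleMatcher(Query query) {
    this.query = query;
  }

  public boolean matches(Map<String, Object> tupleFields) {
    index.reset(); // clear the previous tuple's document
    for (Map.Entry<String, Object> e : tupleFields.entrySet()) {
      // Illustrative: every field is indexed as plain text here.
      index.addField(e.getKey(), String.valueOf(e.getValue()), analyzer);
    }
    return index.search(query) > 0.0f; // score > 0 means the tuple matched
  }
}
{code}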



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090316#comment-15090316
 ] 

Joel Bernstein commented on SOLR-8530:
--

I think it makes sense to have two implementations:

*MatchStream*: Uses an in-memory index to match Tuples.
*HavingStream*: Uses a ComparisonOperation to match Tuples.

One of the things we can think over is a specific stream for doing *parallel 
alerting*. The MatchStream is a step in that direction.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all where the total spent by each distinct customer 
> is >= 500. The total spent is calculated via the sum(cost) metric in the 
> reduce stream.
> The intent is to support as the filters in the having(...) clause the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex and as each tuple is read out of the underlying 
> stream creating an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Mark Miller
On Fri, Jan 8, 2016 at 2:52 PM Michael McCandless 
wrote:

> I agree it would be nice to have cutover to git by then: are we ready
> to open an INFRA issue to do the hard cutover?  Or do we still have
> things to do on our end?  (Thank you Dawid and Mark and Paul and Uwe
> and everyone else for pushing hard on this front!).
>

We are fairly close - just one last thing to come to consensus on. Remains
to be seen how fast INFRA reacts for us though.

There will also probably be a bit to do as we work through the first
release, in terms of release scripts, docs, etc. I think most of it should 
be fairly lightweight changes though.

- Mark
-- 
- Mark
about.me/markrmiller


[jira] [Created] (SOLR-8531) ZK leader path changed in 5.4

2016-01-08 Thread Jeff Wartes (JIRA)
Jeff Wartes created SOLR-8531:
-

 Summary: ZK leader path changed in 5.4
 Key: SOLR-8531
 URL: https://issues.apache.org/jira/browse/SOLR-8531
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.4
Reporter: Jeff Wartes



While doing a rolling upgrade from 5.3 to 5.4 of a solrcloud cluster, I 
observed that upgraded nodes would not register their shards as active unless 
they were elected the leader for the shard.
There were no errors, the shards were fully up and responsive, but would not  
publish any change from the "down" state.

This appears to be because the recovery process never happens: the ZK node 
containing the current leader can't be found, because the ZK path has 
changed.

Specifically, the leader data node changed from:
/leaders/<shardId>
to
/leaders/<shardId>/leader

It looks to me like this happened during SOLR-7844, perhaps accidentally. 

At the least, the "Migrating to Solr 5.4" section of the README should get 
updated with this info, since it means a rolling upgrade of a collection with 
multiple replicas will suffer serious degradation in the number of active 
replicas as nodes are upgraded. It's entirely possible this will reduce some 
shards to a single active replica.
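
For illustration, a small standalone probe of both layouts (hypothetical 
collection and shard names, plain ZooKeeper client; this is not code from 
Solr itself):

{code}
import org.apache.zookeeper.ZooKeeper;

public class LeaderPathCheck {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});
    // Pre-5.4 layout: leader props live on the shard node itself.
    String pre54 = "/collections/collection1/leaders/shard1";
    // 5.4 layout: leader props moved to a child node named "leader".
    String post54 = "/collections/collection1/leaders/shard1/leader";
    System.out.println("5.3-style node: " + (zk.exists(pre54, false) != null));
    System.out.println("5.4-style node: " + (zk.exists(post54, false) != null));
    zk.close();
  }
}
{code}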






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090296#comment-15090296
 ] 

Mark Miller commented on LUCENE-6938:
-

Doesn't look easy to share any state between multiple inits.

I don't even know if doing it at the top level appears any better than per jar. 
It's still a ton of calls per run.

We can simply allow it to be disabled via build.properties if it's an issue for 
some Windows devs.

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090259#comment-15090259
 ] 

Mark Miller commented on LUCENE-6938:
-

bq. there must be some way to just get the checkout sha *once*

The key word is once. Yes, of course we can get the sha the same way as we can 
get an svn version :) Uwe's concern is how many times we execute a program to 
do it. Our ant scripts init 8 billion times per target.

I'll look into trying to exec a minimal number of times.
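
A rough sketch of the "once" idea (a single external exec whose result is 
cached and reused; the class is illustrative, not the Ant wiring from the 
actual patch):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class GitSha {
  private static String cached;

  // Run "git rev-parse HEAD" at most once and reuse the result, instead of
  // paying one process exec every time an ant target re-inits.
  public static synchronized String get() throws Exception {
    if (cached == null) {
      Process p = new ProcessBuilder("git", "rev-parse", "HEAD").start();
      try (BufferedReader r =
          new BufferedReader(new InputStreamReader(p.getInputStream()))) {
        cached = r.readLine();
      }
      p.waitFor();
    }
    return cached;
  }
}
{code}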

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Breaking Java back-compat in Solr

2016-01-08 Thread Jack Krupansky
With the talk of 6.0 coming out real soon and not waiting for new work,
will this 6.0/5.x issue become moot and morph into an issue for 7.0/6.x?

Settling the criteria for Solr plugin API back-compat still seems urgent,
but if the SOLR-8475 work can quickly get committed to trunk for 6.0 maybe
that takes some of the pressure off. Still, I'd prefer that the back-compat
criteria be settled ASAP.


-- Jack Krupansky

On Wed, Jan 6, 2016 at 10:43 AM, Yonik Seeley  wrote:

> On Wed, Jan 6, 2016 at 1:03 AM, Anshum Gupta 
> wrote:
> > As I understand, seems like there's reasonable consensus that we will:
> >
> > 1. provide strong back-compat for SolrJ and REST APIs
> > 2. Strive to maintain but not guarantee *strong* back-compat for Java
> APIs.
>
> I think this actually represents what our current policy already is.
> The sticking point is perhaps "Strive to maintain" is changing
> definition to become much more lenient, to the point of being
> meaningless.
>
> Let's look at the issue that spawned this thread:
> https://issues.apache.org/jira/browse/SOLR-8475  (Some refactoring to
> SolrIndexSearcher)
>
> The issue is whether QueryCommand and QueryResult should be moved out of
> SolrIndexSearcher in 5.x (essentially a rename), or whether that rename
> should only happen in 6.0.  If one's desire for a class rename (of classes
> that are likely to be used by plugins) overrides #2, I'd argue that
> means we essentially have no #2 at all.  Or perhaps I'm not grasping
> why it's really that important to rename those classes.
>
> Regarding annotations:
> Multiple people have suggested annotating classes that should remain
> back compat.  If we were to do this, wouldn't we want those
> annotations to cover the classes in question
> (SolrIndexSearcher,QueryCommand,QueryResult)?  If not, what would they
> cover and still be useful?
>
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090228#comment-15090228
 ] 

Dennis Gove edited comment on SOLR-8530 at 1/8/16 11:57 PM:


A ComparisonOperation is another good option. 

My thinking for using an index is three-fold. First, a desire to not ask users 
to learn yet another way to do comparisons. If they already know the Solr 
syntax they can use that directly in this stream. Second, to support even 
non-simple comparisons without having to implement them ourselves; for 
example, a date range filter. This assumes that at some point we'll support 
metrics over dates, but I think that's a reasonable assumption. And third, 
given the JDBCStream, this provides a way for someone to do text-based queries 
over a subset of documents out of a join of Solr and non-Solr supplied 
documents. Obviously one could do a textual search over the Solr-supplied 
stream directly, but that may not be possible over the JDBC-supplied stream.

That said, I'm not averse to a ComparisonOperation. I just feel that full 
index support gives us a lot of power going forward.


was (Author: dpgove):
This is another good option. 

My thinking for using an index is three-fold. First, a desire to not ask users 
to learn yet another way to do comparisons. If they already know the Solr 
syntax they can use that directly in this stream. Second, to support even 
non-simple comparisons without having to implement them ourselves; for 
example, a date range filter. This assumes that at some point we'll support 
metrics over dates, but I think that's a reasonable assumption. And third, 
given the JDBCStream, this provides a way for someone to do text-based queries 
over a subset of documents out of a join of Solr and non-Solr supplied 
documents. Obviously one could do a textual search over the Solr-supplied 
stream directly, but that may not be possible over the JDBC-supplied stream.

That said, I'm not averse to a ComparisonOperation. I just feel that full 
index support gives us a lot of power going forward.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all where the total spent by each distinct customer 
> is >= 500. The total spent is calculated via the sum(cost) metric in the 
> reduce stream.
> The intent is to support as the filters in the having(...) clause the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex and as each tuple is read out of the underlying 
> stream creating an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090228#comment-15090228
 ] 

Dennis Gove commented on SOLR-8530:
---

This is another good option. 

My thinking for using an index is three-fold. First, a desire to not ask users 
to learn yet another way to do comparisons. If they already know the Solr 
syntax they can use that directly in this stream. Second, to support even 
non-simple comparisons without having to implement them ourselves; for 
example, a date range filter. This assumes that at some point we'll support 
metrics over dates, but I think that's a reasonable assumption. And third, 
given the JDBCStream, this provides a way for someone to do text-based queries 
over a subset of documents out of a join of Solr and non-Solr supplied 
documents. Obviously one could do a textual search over the Solr-supplied 
stream directly, but that may not be possible over the JDBC-supplied stream.

That said, I'm not averse to a ComparisonOperation. I just feel that full 
index support gives us a lot of power going forward.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all where the total spent by each distinct customer 
> is >= 500. The total spent is calculated via the sum(cost) metric in the 
> reduce stream.
> The intent is to support as the filters in the having(...) clause the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex and as each tuple is read out of the underlying 
> stream creating an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8500) Allow the number of threads ConcurrentUpdateSolrClient StreamingSolrClients configurable by a system property

2016-01-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090169#comment-15090169
 ] 

Mark Miller commented on SOLR-8500:
---

I think using more than 1 thread may actually introduce more reordering 
problems right now.

> Allow the number of threads ConcurrentUpdateSolrClient StreamingSolrClients 
> configurable by a system property
> -
>
> Key: SOLR-8500
> URL: https://issues.apache.org/jira/browse/SOLR-8500
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-8500.patch
>
>
> Despite the warning in that code, in extremely high throughput situations 
> where there are guaranteed to be no updates to existing documents, it can be 
> useful to have more than one runner.
> I envision this as an "expert" kind of thing, used only in situations where 
> the a-priori knowledge is that there are no updates to existing documents.
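
As a rough sketch of the proposal (the property name below is illustrative, 
not necessarily the one used in the attached patch):

{code}
public class RunnerConfig {
  // "Expert" knob: more runners can raise indexing throughput, but they can
  // reorder updates, so this is only safe when documents are never re-updated.
  public static int runnerCount() {
    return Integer.getInteger("solr.concurrentUpdateRunners", 1);
  }
}
{code}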



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8500) Allow the number of threads ConcurrentUpdateSolrClient StreamingSolrClients configurable by a system property

2016-01-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-8500:
-
Summary: Allow the number of threads ConcurrentUpdateSolrClient 
StreamingSolrClients configurable by a system property  (was: Allow the number 
of threds ConcurrentUpdateSolrClient StreamingSolrClients configurable by a 
system property)

> Allow the number of threads ConcurrentUpdateSolrClient StreamingSolrClients 
> configurable by a system property
> -
>
> Key: SOLR-8500
> URL: https://issues.apache.org/jira/browse/SOLR-8500
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-8500.patch
>
>
> Despite the warning in that code, in extremely high throughput situations 
> where there are guaranteed to be no updates to existing documents, it can be 
> useful to have more than one runner.
> I envision this as an "expert" kind of thing, used only in situations where 
> the a-priori knowledge is that there are no updates to existing documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090089#comment-15090089
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:38 PM:
---

Then we could also throw away the HavingStream that comes with the SQLHandler 
which relies on Presto classes. 


was (Author: joel.bernstein):
Then I could also throw away the HavingStream that comes with the SQLHandler 
which relies on Presto classes. 

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:37 PM:
---

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperation interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

And with boolean operators

{code}
having(rollup(), or(gt("x", 100), lt("x", 500)))
{code}
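For illustration, one hypothetical shape for such an interface (the names and 
signatures are guesses, not from a patch; Tuple is the existing SolrJ streaming 
class):

{code}
// A predicate evaluated against each tuple read from the inner stream.
public interface ComparisonOperation {
  boolean test(Tuple tuple);
}

// gt("x", 100) could then map to something like:
public class GreaterThanOperation implements ComparisonOperation {
  private final String field;
  private final double value;

  public GreaterThanOperation(String field, double value) {
    this.field = field;
    this.value = value;
  }

  public boolean test(Tuple tuple) {
    return tuple.getDouble(field) > value;
  }
}
{code}

Composite operations like or(...) would simply wrap two ComparisonOperations and 
combine their test() results, which avoids building any index at all.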


was (Author: joel.bernstein):
Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

And with boolean operators

{code}
having(rollup(), or(gt("x", 100), lt("x", 500)))
{code}

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090089#comment-15090089
 ] 

Joel Bernstein commented on SOLR-8530:
--

Then I could also throw away the HavingStream that comes with the SQLHandler 
which relies on Presto classes. 

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:32 PM:
---

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

And with boolean operators

{code}
having(rollup(), or(gt("x", 100), lt("x", 500)))
{code}


was (Author: joel.bernstein):
Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

And with boolean operators

{code}
having(reduce(), or(gt("x", 100), lt("x", 500)))
{code}

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:31 PM:
---

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

And with boolean operators

{code}
having(reduce(), or(gt("x", 100), lt("x", 500)))
{code}


was (Author: joel.bernstein):
Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:29 PM:
---

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

{code}

having(reduce(), gt("x", 100))

{code}


was (Author: joel.bernstein):
Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/8/16 10:28 PM:
---

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implement the basic comparison 
logic. 


was (Author: joel.bernstein):
Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implements the basic comparison 
logic. 

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090073#comment-15090073
 ] 

Joel Bernstein commented on SOLR-8530:
--

Is there a specific reason to use an index for the comparison logic? We could 
also add a ComparisonOperator interface and implements the basic comparison 
logic. 

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways.
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create a MemoryIndex instance and apply the query to it. If the 
> result of that is > 0, the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6948) ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill

2016-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-6948.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

[~mjlawley] - thanks for the JIRA ticket and proposed fix. [~mikemccand] - 
thanks for the patch review.

> ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill
> 
>
> Key: LUCENE-6948
> URL: https://issues.apache.org/jira/browse/LUCENE-6948
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.4
>Reporter: Michael Lawley
>Assignee: Christine Poerschke
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6948.patch
>
>
> With a very large index (in our case > 10G), we are seeing exceptions like:
> java.lang.ArrayIndexOutOfBoundsException: -62400
>   at org.apache.lucene.util.PagedBytes$Reader.fill(PagedBytes.java:116)
>   at 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get(FieldCacheImpl.java:1342)
>   at 
> org.apache.lucene.search.join.TermsCollector$SV.collect(TermsCollector.java:106)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
> The code in question is trying to allocate an array with a negative size.  We 
> believe the source of the error is in 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get where the 
> following code occurs:
>   final int pointer = (int) docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }
> The cast to int will break if the (long) result of docToOffset.get is too 
> large, and is unnecessary in the first place since bytes.fill takes a long as 
> its second parameter.
> Proposed fix:
>   final long pointer = docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }
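A quick illustration of why the cast breaks (editorial note, not part of the 
report; the exact offset is hypothetical but reproduces the -62400 from the 
stack trace):

{code}
// (int) keeps only the low 32 bits of the long, so any offset with the
// 2^31 bit set comes out negative.
long offset = 4294904896L;     // plausible in a PagedBytes past 4 GB
int truncated = (int) offset;  // == -62400, as seen in the exception
{code}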



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6948) ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090042#comment-15090042
 ] 

ASF subversion and git services commented on LUCENE-6948:
-

Commit 1723810 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723810 ]

LUCENE-6948: Fix ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill by 
removing an unnecessary long-to-int cast. Also, unrelated, 2 
ArrayList<>(initialCapacity) tweaks in getChildResources methods. (merge in 
revision 1723787 from trunk)

> ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill
> 
>
> Key: LUCENE-6948
> URL: https://issues.apache.org/jira/browse/LUCENE-6948
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.4
>Reporter: Michael Lawley
>Assignee: Christine Poerschke
> Attachments: LUCENE-6948.patch
>
>
> With a very large index (in our case > 10G), we are seeing exceptions like:
> java.lang.ArrayIndexOutOfBoundsException: -62400
>   at org.apache.lucene.util.PagedBytes$Reader.fill(PagedBytes.java:116)
>   at 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get(FieldCacheImpl.java:1342)
>   at 
> org.apache.lucene.search.join.TermsCollector$SV.collect(TermsCollector.java:106)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
> The code in question is trying to allocate an array with a negative size.  We 
> believe the source of the error is in 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get where the 
> following code occurs:
>   final int pointer = (int) docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }
> The cast to int will break if the (long) result of docToOffset.get is too 
> large, and is unnecessary in the first place since bytes.fill takes a long as 
> its second parameter.
> Proposed fix:
>   final long pointer = docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8511) Implement DatabaseMetaDataImpl.getURL()

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8511:
---
Attachment: SOLR-8511.patch

Removed getCollection and used getCatalog instead.

> Implement DatabaseMetaDataImpl.getURL()
> ---
>
> Key: SOLR-8511
> URL: https://issues.apache.org/jira/browse/SOLR-8511
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8511.patch, SOLR-8511.patch
>
>
> /**
>  * Retrieves the URL for this DBMS.
>  *
>  * @return the URL for this DBMS or null if it cannot be
>  *  generated
>  * @exception SQLException if a database access error occurs
>  */
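A minimal sketch of what an implementation could look like (hypothetical: it 
assumes DatabaseMetaDataImpl holds a reference to its connection, and getUrl() 
is an illustrative accessor name, not a confirmed method in the patch):

{code}
@Override
public String getURL() throws SQLException {
  // Return the JDBC URL the connection was opened with, e.g.
  // jdbc:solr://zkhost:9983?collection=collection1
  return this.connection.getUrl();
}
{code}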



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 321 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/321/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:48472 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:48472 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([BE26000435498C18:36723FDE9BB5E1E0]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:181)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:110)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:97)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:286)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1475)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:942)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:48472 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:209)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:173)
... 37 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestCloudSchemaless

Error Mes

[jira] [Created] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2016-01-08 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-8530:
-

 Summary: Add HavingStream to Streaming API and StreamingExpressions
 Key: SOLR-8530
 URL: https://issues.apache.org/jira/browse/SOLR-8530
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: Trunk
Reporter: Dennis Gove
Priority: Minor


The goal here is to support something similar to SQL's HAVING clause where one 
can filter documents based on data that is not available in the index. For 
example, filter the output of a reduce() based on the calculated metrics.

{code}
having(
  reduce(
search(.),
sum(cost),
on=customerId
  ),
  q="sum(cost):[500 TO *]"
)
{code}

This example would return all tuples where the total spent by each distinct 
customer is >= 500. The total spent is calculated via the sum(cost) metric in the 
reduce stream.

The intent is for the filters in the having(...) clause to support the full query 
syntax of a search(...) clause. I see this being possible in one of two ways.

1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying stream, 
create a MemoryIndex instance and apply the query to it. If the result of that 
is > 0, the tuple should be returned from the HavingStream.

2. Create an in-memory Solr index via something like RAMDirectory, read all 
tuples into that in-memory index using the UpdateStream, and then stream out of 
it all the tuples matching the query.

There are benefits to each approach, but I think the easiest and most direct one 
is the MemoryIndex approach. With MemoryIndex it isn't necessary to read all 
incoming tuples before returning a single tuple. With a MemoryIndex there is a 
need to parse the Solr query parameters and create a valid Lucene query, but I 
suspect that can be done using existing QParser implementations.
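A rough sketch of approach 2 (editorial illustration, not a patch: every field is 
treated as text, and the whole stream is buffered before anything is returned, 
which is the trade-off noted above):

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

// Index every tuple into an in-memory directory, then run the having()
// query once over the whole set.
TopDocs filterTuples(List<Map<String, String>> tuples, Query query) throws IOException {
  RAMDirectory dir = new RAMDirectory();
  try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
    for (Map<String, String> tuple : tuples) {
      Document doc = new Document();
      for (Map.Entry<String, String> e : tuple.entrySet()) {
        doc.add(new TextField(e.getKey(), e.getValue(), Field.Store.YES));
      }
      writer.addDocument(doc);
    }
  }
  IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
  return searcher.search(query, Math.max(1, tuples.size()));
}
{code}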



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 10 - Still Failing

2016-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/10/

No tests ran.

Build Log:
[...truncated 53068 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (13.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.2-src.tgz...
   [smoker] 28.5 MB in 0.03 sec (814.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.tgz...
   [smoker] 65.7 MB in 0.08 sec (804.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.zip...
   [smoker] 75.9 MB in 0.09 sec (806.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.4.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15190 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15190/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [NRTCachingDirectory, 
NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [NRTCachingDirectory, NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([2C5F2838D1794722]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9754 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_2C5F2838D1794722-001/init-core-data-001
   [junit4]   2> 7657 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.a.s.SolrTestCaseJ4 ###Starting doTestStressReplication
   [junit4]   2> 7658 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_2C5F2838D1794722-001/solr-instance-001/collection1
   [junit4]   2> 7683 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.e.j.u.log Logging initialized @9292ms
   [junit4]   2> 7789 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 7839 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@4a0f2976{/solr,null,AVAILABLE}
   [junit4]   2> 7851 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.e.j.s.ServerConnector Started 
ServerConnector@f8baa15{HTTP/1.1}{127.0.0.1:46538}
   [junit4]   2> 7851 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.e.j.s.Server Started @9461ms
   [junit4]   2> 7851 INFO  
(TEST-TestReplicationHandler.doTestStressReplication-seed#[2C5F2838D1794722]) [ 
   ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostP

[jira] [Comment Edited] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-08 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089599#comment-15089599
 ] 

Kevin Risden edited comment on SOLR-8502 at 1/8/16 9:21 PM:


Here are some of the JIRAs that are ready for review.
* SOLR-8503
* SOLR-8507
* SOLR-8509
* SOLR-8511
* SOLR-8513
* SOLR-8514
* SOLR-8515
* SOLR-8516


was (Author: risdenk):
Here are some of the JIRAs that are ready for review.
* SOLR-8503
* SOLR-8507
* SOLR-8511
* SOLR-8513
* SOLR-8514
* SOLR-8515
* SOLR-8516

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
> Fix For: 6.0
>
>
> Currently, when trying to connect to Solr from a SQL client via the JDBC 
> driver, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related subtasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8509) Determine test strategy and add tests for JDBC driver metadata.

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8509:
---
Flags: Patch

> Determine test strategy and add tests for JDBC driver metadata.
> ---
>
> Key: SOLR-8509
> URL: https://issues.apache.org/jira/browse/SOLR-8509
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8509.patch
>
>
> Currently there is no testing of the JDBC metadata. We need to determine the 
> best way to do this and add tests. It probably makes sense to add the new 
> metadata tests to JdbcTest in many cases, since they need a cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8509) Determine test strategy and add tests for JDBC driver metadata.

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8509:
---
Attachment: SOLR-8509.patch

Idea for testing JDBC driver metadata. Currently this has some tests for each 
of the items to be implemented under SOLR-8502. The tests currently fail since 
many of the methods are not implemented yet.

A thought here is to comment out the assertions for now and uncomment them as 
part of each subtask under SOLR-8502.
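For concreteness, the kind of assertion such a test might contain (illustrative 
only: the connection string and asserted methods are examples, and the real 
tests would run against a test cluster rather than a fixed host):

{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class JdbcMetadataSketchTest {
  @Test
  public void testDriverMetadata() throws Exception {
    try (Connection con = DriverManager.getConnection(
        "jdbc:solr://localhost:9983?collection=collection1")) {
      DatabaseMetaData md = con.getMetaData();
      assertNotNull(md.getURL());                  // SOLR-8511
      assertNotNull(md.getDatabaseProductName());  // another SOLR-8502 subtask
    }
  }
}
{code}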

> Determine test strategy and add tests for JDBC driver metadata.
> ---
>
> Key: SOLR-8509
> URL: https://issues.apache.org/jira/browse/SOLR-8509
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8509.patch
>
>
> Currently there is no testing of the JDBC metadata. We need to determine the 
> best way to do this and add tests. It probably makes sense to add the new 
> metadata tests to JdbcTest in many cases, since they need a cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8502:
-
Fix Version/s: 6.0

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
> Fix For: 6.0
>
>
> Currently, when trying to connect to Solr from a SQL client via the JDBC 
> driver, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related subtasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8285) Ensure that /export handles documents that have no value for the field gracefully.

2016-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8285:


Assignee: Joel Bernstein

> Ensure that /export handles documents that have no value for the field 
> gracefully.
> --
>
> Key: SOLR-8285
> URL: https://issues.apache.org/jira/browse/SOLR-8285
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8285) Ensure that /export handles documents that have no value for the field gracefully.

2016-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8285:
-
Fix Version/s: 6.0

> Ensure that /export handles documents that have no value for the field 
> gracefully.
> --
>
> Key: SOLR-8285
> URL: https://issues.apache.org/jira/browse/SOLR-8285
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
> Fix For: 6.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8462) Improve error reporting for /stream handler

2016-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8462:
-
Fix Version/s: (was: Trunk)
   6.0

> Improve error reporting for /stream handler
> ---
>
> Key: SOLR-8462
> URL: https://issues.apache.org/jira/browse/SOLR-8462
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Assignee: Joel Bernstein
>Priority: Trivial
> Fix For: 6.0
>
> Attachments: SOLR-8462.patch, SOLR-8462.patch
>
>
> Currently, the /stream request handler reports errors by adding an 
> "EXCEPTION" name/value pair on a tuple in the TupleStream where the error 
> arose.  The "value" in this name/value pair is the message attached to the 
> exception.
> This works well in most instances; however, it could be better in a few ways:
> 1.) Not all exceptions have messages.  For instance, 
> {{NullPointerExceptions}} and other runtime exceptions fall into this 
> category.  This causes the /stream handler to return the relatively unhelpful 
> value: {"EXCEPTION":null,"EOF":true}.  The /stream handler should make sure 
> the exception has a message, and if not, it should report some other 
> information about the error (the exception class name?).
> 2.) There are some common error cases that can arise from misuse of the API. 
> For instance, if the 'expr' parameter is missing.  Detecting and handling 
> these cases specifically would allow users to get back clearer, more useful 
> error messages.
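A sketch of the fallback described in point 1 (editorial illustration; the 
helper name is made up, and only the EXCEPTION/EOF convention comes from the 
handler itself):

{code}
// Prefer the exception's message; fall back to its class name so the
// EXCEPTION value is never null.
String errorValue(Throwable t) {
  String msg = t.getMessage();
  if (msg == null || msg.trim().isEmpty()) {
    msg = t.getClass().getName();  // e.g. java.lang.NullPointerException
  }
  return msg;
}
{code}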



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8462) Improve error reporting for /stream handler

2016-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8462:


Assignee: Joel Bernstein

> Improve error reporting for /stream handler
> ---
>
> Key: SOLR-8462
> URL: https://issues.apache.org/jira/browse/SOLR-8462
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Assignee: Joel Bernstein
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: SOLR-8462.patch, SOLR-8462.patch
>
>
> Currently, the /stream request handler reports errors by adding an 
> "EXCEPTION" name/value pair on a tuple in the TupleStream where the error 
> arose.  The "value" in this name/value pair is the message attached to the 
> exception.
> This works well in most instances; however, it could be better in a few ways:
> 1.) Not all exceptions have messages.  For instance, 
> {{NullPointerExceptions}} and other runtime exceptions fall into this 
> category.  This causes the /stream handler to return the relatively unhelpful 
> value: {"EXCEPTION":null,"EOF":true}.  The /stream handler should make sure 
> the exception has a message, and if not, it should report some other 
> information about the error (the exception class name?).
> 2.) There are some common error cases that can arise from misuse of the API. 
> For instance, if the 'expr' parameter is missing.  Detecting and handling 
> these cases specifically would allow users to get back clearer, more useful 
> error messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6622) Issue with Multivalued fields when using UIMA

2016-01-08 Thread Tomasz Oliwa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089815#comment-15089815
 ] 

Tomasz Oliwa edited comment on SOLR-6622 at 1/8/16 8:29 PM:


This patch also fixes the issue I reported in 
https://issues.apache.org/jira/browse/SOLR-8528 , I just tested it. Would be 
nice if the patch could be committed to Solr.


was (Author: toldev):
I just reported https://issues.apache.org/jira/browse/SOLR-8528 , a bug with 
UIMA and multivalued fields that might have the same or similar underlying 
problem.

> Issue with Multivalued fields when using UIMA
> -
>
> Key: SOLR-6622
> URL: https://issues.apache.org/jira/browse/SOLR-6622
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - UIMA
>Affects Versions: Trunk
>Reporter: Maryam Khordad
> Attachments: SOLR-6622.patch
>
>
> When using any of the UIMA addons on a multivalued field, only the first value 
> of the field gets processed and the UIMA update ignores the remaining values.
> This bug is caused by the "getTextsToAnalyze" method in the 
> "UIMAUpdateRequestProcessor" class. SolrInputDocument.getFieldValue must be 
> changed to SolrInputDocument.getFieldValues, and the result must be handled as 
> an array, not a single variable.
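A sketch of the change the description asks for (illustrative: textsToAnalyze 
stands in for whatever list getTextsToAnalyze actually builds):

{code}
import java.util.Collection;

// Before (only the first value is analyzed):
//   String text = (String) solrInputDocument.getFieldValue(fieldName);

// After: analyze every value of the multivalued field.
Collection<Object> values = solrInputDocument.getFieldValues(fieldName);
if (values != null) {
  for (Object value : values) {
    textsToAnalyze.add(String.valueOf(value));
  }
}
{code}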



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8528) UIMA processor with multivalued fields and atomic updates bug

2016-01-08 Thread Tomasz Oliwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomasz Oliwa resolved SOLR-8528.

   Resolution: Fixed
Fix Version/s: 5.3.1

This issue is resolved by the patch in 
https://issues.apache.org/jira/browse/SOLR-6622

It would be good if the aforementioned patch could be committed to Solr.

> UIMA processor with multivalued fields and atomic updates bug
> -
>
> Key: SOLR-8528
> URL: https://issues.apache.org/jira/browse/SOLR-8528
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - UIMA, Schema and Analysis, update
>Affects Versions: 5.3.1
> Environment: Linux (Fedora 21)
>Reporter: Tomasz Oliwa
>  Labels: solr, uima
> Fix For: 5.3.1
>
>
> There is a showstopping bug when using the UIMA processor together with 
> atomic updates in Solr.
> I am using the UIMA processor to populate multivalued fields upon indexing. 
> When I later use atomic updates to update a document, all UIMA populated 
> multivalued fields have only one value, the others are gone!
> To reproduce:
> 1. Use the org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory 
> to populate a multivalued field during the indexing of a document. 
> 2. Use Solr atomic updates (http://yonik.com/solr/atomic-updates/) to set a 
> different field of the document to a new value and commit
> 3. Any multivalued fields created by the UIMAUpdateRequestProcessorFactory 
> now only have one value. 
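
For step 2, a hedged SolrJ fragment showing the kind of atomic update involved 
(document id, field names, and values are placeholders):

  import java.util.Collections;
  import org.apache.solr.common.SolrInputDocument;

  // Placeholder atomic update: only "other_field" should change, but per
  // this report the UIMA-populated multivalued fields lose values after it.
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "doc1");
  doc.addField("other_field", Collections.singletonMap("set", "new value"));
  // then: client.add("collection1", doc); client.commit("collection1");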



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Joel Bernstein
I agree completely with this approach. There's always another release right
around the corner. There are some nice features waiting in trunk.

+1 moving forward fairly soon.
+1 to 6.0 being the git release if possible.

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jan 8, 2016 at 2:51 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> I don't think we should hold the major release for lots of new large
> features that are not even started yet; they can just as easily be
> done in a 6.x release.
>
> Remember the counterbalance here is major new features that are done
> (as far as we know!), yet not readily available to users.  And moving
> away from Java 7, and lightening our back-compat burden.  And moving
>
> At some point soonish (a week or two) I'd like to cut the 6.x branch.
>
> I agree it would be nice to have cutover to git by then: are we ready
> to open an INFRA issue to do the hard cutover?  Or do we still have
> things to do on our end?  (Thank you Dawid and Mark and Paul and Uwe
> and everyone else for pushing hard on this front!).
>
> I don't think we need an umbrella Jira issue to track this ... let's
> just mark the issues as 6.0 Fix Version (I just added 6.0 to Lucene
> and Solr Jira).
>
> I'll open an issue and work on removing StorableField from trunk, and
> another for DimensionalTermQuery.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Jan 8, 2016 at 11:45 AM, Jack Krupansky
>  wrote:
> > +1 for a 5.x deprecation release so 6.0 can remove old stuff.
> >
> > +1 for a git-based release
> >
> > +1 for at least 3 months for people to finish and stabilize work in
> progress
> > - April to July seems like the right window to target
> >
> > -- Jack Krupansky
> >
> > On Fri, Jan 8, 2016 at 10:09 AM, Anshum Gupta 
> > wrote:
> >>
> >> +1 to that ! Do you have a planned timeline for this?
> >>
> >> I would want some time to clean up code and also have a deprecation
> >> release (5.5 or 5.6) out so we don't have to carry all the cruft
> through the
> >> 6x series.
> >>
> >> On Fri, Jan 8, 2016 at 4:37 AM, Michael McCandless
> >>  wrote:
> >>>
> >>> I think we should get the ball rolling for our next major release
> >>> (6.0.0)?
> >>>
> >>> E.g., dimensional values is a big new feature for 6.x, and I think
> >>> it's nearly ready except maybe fixing up the API so it's easier for
> >>> the 1D case.
> >>>
> >>> I think we should maybe remove StorableField before releasing?  I.e.,
> >>> go back to what we have in 5.x.  This change also caused challenges in
> >>> the 5.0 release, and we just kicked the can down the road, but I think
> >>> now we should just kick the can off the road...
> >>>
> >>> Mike McCandless
> >>>
> >>> http://blog.mikemccandless.com
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >>
> >>
> >>
> >> --
> >> Anshum Gupta
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2016-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-5209:
--
Fix Version/s: (was: 5.0)
   6.0

> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: Trunk, 6.0
>
> Attachments: SOLR-5209.patch, SOLR-5209.patch
>
>
> The problem we saw was that unloading the only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone, there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6948) ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089850#comment-15089850
 ] 

ASF subversion and git services commented on LUCENE-6948:
-

Commit 1723787 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1723787 ]

LUCENE-6948: Fix ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill by 
removing an unnecessary long-to-int cast. Also, unrelated, 2 
ArrayList<>(initialCapacity) tweaks in getChildResources methods.

> ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill
> 
>
> Key: LUCENE-6948
> URL: https://issues.apache.org/jira/browse/LUCENE-6948
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.4
>Reporter: Michael Lawley
>Assignee: Christine Poerschke
> Attachments: LUCENE-6948.patch
>
>
> With a very large index (in our case > 10G), we are seeing exceptions like:
> java.lang.ArrayIndexOutOfBoundsException: -62400
>   at org.apache.lucene.util.PagedBytes$Reader.fill(PagedBytes.java:116)
>   at 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get(FieldCacheImpl.java:1342)
>   at 
> org.apache.lucene.search.join.TermsCollector$SV.collect(TermsCollector.java:106)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
> The code in question is trying to allocate an array with a negative size.  We 
> believe the source of the error is in 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get where the 
> following code occurs:
>   final int pointer = (int) docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }
> The cast to int will break if the (long) result of docToOffset.get is too 
> large, and is unnecessary in the first place since bytes.fill takes a long as 
> its second parameter.
> Proposed fix:
>   final long pointer = docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2016-01-08 Thread David de Kleer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089837#comment-15089837
 ] 

David de Kleer commented on SOLR-7739:
--

Any updates on your update? ;-)

David

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Attachments: SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8529) Improve JdbcTest to not use plain assert statements

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8529:
---
Flags: Patch

> Improve JdbcTest to not use plain assert statements
> ---
>
> Key: SOLR-8529
> URL: https://issues.apache.org/jira/browse/SOLR-8529
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8529.patch
>
>
> Plain assert statements work, but they make debugging hard. Instead, 
> assertEquals, etc. should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8529) Improve JdbcTest to not use plain assert statements

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8529:
---
Attachment: SOLR-8529.patch

Changes assert statements to assertTrue, assertFalse, and assertEquals as 
appropriate.
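
For illustration, the difference in failure output (the column name and 
expected value below are made up):

  import static org.junit.Assert.assertEquals;

  import java.sql.ResultSet;
  import java.sql.SQLException;

  static void checkRow(ResultSet rs) throws SQLException {
    assert "hello".equals(rs.getString("a_s"));  // fails with a bare AssertionError
    assertEquals("hello", rs.getString("a_s"));  // fails with expected:<hello> but was:<...>
  }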

> Improve JdbcTest to not use plain assert statements
> ---
>
> Key: SOLR-8529
> URL: https://issues.apache.org/jira/browse/SOLR-8529
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8529.patch
>
>
> Plain assert statements work, but they make debugging hard. Instead, 
> assertEquals, etc. should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Michael McCandless
I don't think we should hold the major release for lots of new large
features that are not even started yet; they can just as easily be
done in a 6.x release.

Remember the counterbalance here is major new features that are done
(as far as we know!), yet not readily available to users.  And moving
away from Java 7, and lightening our back-compat burden.

At some point soonish (a week or two) I'd like to cut the 6.x branch.

I agree it would be nice to have cutover to git by then: are we ready
to open an INFRA issue to do the hard cutover?  Or do we still have
things to do on our end?  (Thank you Dawid and Mark and Paul and Uwe
and everyone else for pushing hard on this front!).

I don't think we need an umbrella Jira issue to track this ... let's
just mark the issues as 6.0 Fix Version (I just added 6.0 to Lucene
and Solr Jira).

I'll open an issue and work on removing StorableField from trunk, and
another for DimensionalTermQuery.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Jan 8, 2016 at 11:45 AM, Jack Krupansky
 wrote:
> +1 for a 5.x deprecation release so 6.0 can remove old stuff.
>
> +1 for a git-based release
>
> +1 for at least 3 months for people to finish and stabilize work in progress
> - April to July seems like the right window to target
>
> -- Jack Krupansky
>
> On Fri, Jan 8, 2016 at 10:09 AM, Anshum Gupta 
> wrote:
>>
>> +1 to that ! Do you have a planned timeline for this?
>>
>> I would want some time to clean up code and also have a deprecation
>> release (5.5 or 5.6) out so we don't have to carry all the cruft through the
>> 6x series.
>>
>> On Fri, Jan 8, 2016 at 4:37 AM, Michael McCandless
>>  wrote:
>>>
>>> I think we should get the ball rolling for our next major release
>>> (6.0.0)?
>>>
>>> E.g., dimensional values is a big new feature for 6.x, and I think
>>> it's nearly ready except maybe fixing up the API so it's easier for
>>> the 1D case.
>>>
>>> I think we should maybe remove StorableField before releasing?  I.e.,
>>> go back to what we have in 5.x.  This change also caused challenges in
>>> the 5.0 release, and we just kicked the can down the road, but I think
>>> now we should just kick the can off the road...
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>>
>>
>> --
>> Anshum Gupta
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6622) Issue with Multivalued fields when using UIMA

2016-01-08 Thread Tomasz Oliwa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089815#comment-15089815
 ] 

Tomasz Oliwa commented on SOLR-6622:


I just reported https://issues.apache.org/jira/browse/SOLR-8528 , a bug with 
UIMA and multivalued fields that might have the same or similar underlying 
problem.

> Issue with Multivalued fields when using UIMA
> -
>
> Key: SOLR-6622
> URL: https://issues.apache.org/jira/browse/SOLR-6622
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - UIMA
>Affects Versions: Trunk
>Reporter: Maryam Khordad
> Attachments: SOLR-6622.patch
>
>
> When using any of the UIMA addons on a multivalued field, only the first 
> value of the field gets processed and the UIMA update ignores the remaining 
> values. 
> This bug is caused by the "getTextsToAnalyze" method in the 
> "UIMAUpdateRequestProcessor" class. SolrInputDocument.getFieldValue must be 
> changed to SolrInputDocument.getFieldValues, and the result must be treated 
> as a collection of values, not a single value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8529) Improve JdbcTest to not use plain assert statements

2016-01-08 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089810#comment-15089810
 ] 

Kevin Risden commented on SOLR-8529:


Builds upon improvements from SOLR-8527.

> Improve JdbcTest to not use plain assert statements
> ---
>
> Key: SOLR-8529
> URL: https://issues.apache.org/jira/browse/SOLR-8529
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>
> Plain assert statements work, but they make debugging hard. Instead, 
> assertEquals, etc. should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8529) Improve JdbcTest to not use plain assert statements

2016-01-08 Thread Kevin Risden (JIRA)
Kevin Risden created SOLR-8529:
--

 Summary: Improve JdbcTest to not use plain assert statements
 Key: SOLR-8529
 URL: https://issues.apache.org/jira/browse/SOLR-8529
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: Trunk
Reporter: Kevin Risden


Plain assert statements work, but they make debugging hard. Instead, 
assertEquals, etc. should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8528) UIMA processor with multivalued fields and atomic updates bug

2016-01-08 Thread Tomasz Oliwa (JIRA)
Tomasz Oliwa created SOLR-8528:
--

 Summary: UIMA processor with multivalued fields and atomic updates 
bug
 Key: SOLR-8528
 URL: https://issues.apache.org/jira/browse/SOLR-8528
 Project: Solr
  Issue Type: Bug
  Components: contrib - UIMA, Schema and Analysis, update
Affects Versions: 5.3.1
 Environment: Linux (Fedora 21)
Reporter: Tomasz Oliwa


There is a showstopping bug when using the UIMA processor together with atomic 
updates in Solr.

I am using the UIMA processor to populate multivalued fields upon indexing. 
When I later use atomic updates to update a document, all UIMA populated 
multivalued fields have only one value, the others are gone!

To reproduce:

1. Use the org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory to 
populate a multivalued field during the indexing of a document. 
2. Use Solr atomic updates (http://yonik.com/solr/atomic-updates/) to set a 
different field of the document to a new value and commit
3. Any multivalued fields created by the UIMAUpdateRequestProcessorFactory now 
only have one value. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8527) Improve JdbcTest to cleanup properly on failures

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8527:
---
Flags: Patch

> Improve JdbcTest to cleanup properly on failures
> 
>
> Key: SOLR-8527
> URL: https://issues.apache.org/jira/browse/SOLR-8527
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8527.patch
>
>
> Currently if a test case fails in JdbcTest then resources are not closed 
> properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8527) Improve JdbcTest to cleanup properly on failures

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8527:
---
Attachment: SOLR-8527.patch

This patch uses try-with-resources on the JDBC connections, statements, and 
result sets. The diff looks a lot better in IntelliJ when ignoring the 
whitespace changes in front of the assert statements.
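
A minimal sketch of the pattern (the connection URL and query are placeholders, 
not taken from the patch):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  static void queryAndCheck() throws Exception {
    try (Connection con = DriverManager.getConnection("jdbc:solr://localhost:9983?collection=test");
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("select id from test")) {
      while (rs.next()) {
        // assertions against rs go here; all three resources are closed
        // even when an assertion fails
      }
    }
  }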

> Improve JdbcTest to cleanup properly on failures
> 
>
> Key: SOLR-8527
> URL: https://issues.apache.org/jira/browse/SOLR-8527
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8527.patch
>
>
> Currently if a test case fails in JdbcTest then resources are not closed 
> properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_66) - Build # 15189 - Failure!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15189/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MigrateRouteKeyTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, SolrCore]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, SolrCore]
at __randomizedtesting.SeedInfo.seed([FC43CF849DD0E08]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.DistribJoinFromCollectionTest.test

Error Message:
Error from server at http://127.0.0.1:40289/ct_o/ew/to_2x2_shard1_replica2: 
SolrCloud join: from_1x2 has a local replica (from_1x2_shard1_replica1) on 
127.0.0.1:40289_ct_o%2Few, but it is down

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:40289/ct_o/ew/to_2x2_shard1_replica2: SolrCloud 
join: from_1x2 has a local replica (from_1x2_shard1_replica1) on 
127.0.0.1:40289_ct_o%2Few, but it is down
at 
__randomizedtesting.SeedInfo.seed([FC43CF849DD0E08:87900322E72163F0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.testJoins(DistribJoinFromCollectionTest.java:132)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.test(DistribJoinFromCollectionTest.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  

[jira] [Created] (SOLR-8527) Improve JdbcTest to cleanup properly on failures

2016-01-08 Thread Kevin Risden (JIRA)
Kevin Risden created SOLR-8527:
--

 Summary: Improve JdbcTest to cleanup properly on failures
 Key: SOLR-8527
 URL: https://issues.apache.org/jira/browse/SOLR-8527
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: Trunk
Reporter: Kevin Risden


Currently if a test case fails in JdbcTest then resources are not closed 
properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 3003 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3003/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([4B4BBA2C5F675922:D1BFC7CEC1FDC51E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
... 40 more




Build Log:
[...truncated 9572 lines...]
   [junit4] Suite: org.apache.solr.updat

[jira] [Updated] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-01-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6932:
---
Attachment: LUCENE-6932.patch

Thanks [~stephane campinas], I merged your two patches together into one, and 
changed the approach a bit to avoid adding a new {{enforceEOF}} member to 
{{RAMInputStream}} ... does it look OK?
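
The expected behavior, as a hedged sketch (illustrative only, not the attached 
patch):

  import java.io.EOFException;
  import org.apache.lucene.store.IndexInput;

  // Seeking past the end of a RAMDirectory-backed IndexInput should raise
  // EOFException rather than succeed silently.
  static void expectEofOnSeekPastEnd(IndexInput in) throws Exception {
    try {
      in.seek(in.length() + 1);
      throw new AssertionError("expected EOFException");
    } catch (EOFException expected) {
      // the behavior this issue asks for
    }
  }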

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Attachments: LUCENE-6932.patch, issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6966) Contribution: Codec for index-level encryption

2016-01-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089614#comment-15089614
 ] 

Robert Muir commented on LUCENE-6966:
-

https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2009/july/if-youre-typing-the-letters-a-e-s-into-your-code-youre-doing-it-wrong/

I am not sure where some of these ideas like "postings lists don't need to be 
encrypted" came from, but most of the design presented on this issue is 
completely insecure. Please, if you want to do this stuff in lucene, it needs 
to be a standardized scheme (like XTS or ESSIV) with all the known tradeoffs 
already computed. You can be 100% sure that if "crypto is invented here" that 
I'm gonna make comments on the issue, because it is the right thing to do.

The many justifications for doing it in a complicated way at the codec level 
seem to revolve around limitations in SolrCloud rather than good design. 
Because you really can put different indexes in different directories and let 
the operating system do it for "multitenancy". Because Lucene has stuff like 
ParallelReader and different fields can be in different indexes if you really 
need that, etc, etc. There are alternatives everywhere which would allow you to 
still "let the OS do it", be secure, and have a working filesystem cache (be 
fast).



> Contribution: Codec for index-level encryption
> --
>
> Key: LUCENE-6966
> URL: https://issues.apache.org/jira/browse/LUCENE-6966
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/other
>Reporter: Renaud Delbru
>  Labels: codec, contrib
>
> We would like to contribute a codec that enables the encryption of sensitive 
> data in the index that has been developed as part of an engagement with a 
> customer. We think that this could be of interest for the community.
> Below is a description of the project.
> h1. Introduction
> In comparison with approaches where all data is encrypted (e.g., file system 
> encryption, index output / directory encryption), encryption at a codec level 
> enables more fine-grained control on which block of data is encrypted. This 
> is more efficient since less data has to be encrypted. This also gives more 
> flexibility such as the ability to select which field to encrypt.
> Some of the requirements for this project were:
> * The performance impact of the encryption should be reasonable.
> * The user can choose which field to encrypt.
> * Key management: During the life cycle of the index, the user can provide a 
> new version of his encryption key. Multiple key versions should co-exist in 
> one index.
> h1. What is supported?
> - Block tree terms index and dictionary
> - Compressed stored fields format
> - Compressed term vectors format
> - Doc values format (prototype based on an encrypted index output) - this 
> will be submitted as a separated patch
> - Index upgrader: command to upgrade all the index segments with the latest 
> key version available.
> h1. How is it implemented?
> h2. Key Management
> One index segment is encrypted with a single key version. An index can have 
> multiple segments, each one encrypted using a different key version. The key 
> version for a segment is stored in the segment info.
> The provided codec is abstract, and a subclass is responsible for providing 
> an implementation of the cipher factory. The cipher factory is responsible 
> for creating a cipher instance based on a given key version.
> h2. Encryption Model
> The encryption model is based on AES/CBC with padding. Initialisation vector 
> (IV) is reused for performance reason, but only on a per format and per 
> segment basis.
> While IV reuse is usually considered a bad practice, the CBC mode is somewhat 
> resilient to IV reuse. The only "leak" of information that this could lead to 
> is being able to know that two encrypted blocks of data start with the same 
> prefix. However, it is unlikely that two data blocks in an index segment will 
> start with the same data:
> - Stored Fields Format: Each encrypted data block is a compressed block 
> (~4kb) of one or more documents. It is unlikely that two compressed blocks 
> start with the same data prefix.
> - Term Vectors: Each encrypted data block is a compressed block (~4kb) of 
> terms and payloads from one or more documents. It is unlikely that two 
> compressed blocks start with the same data prefix.
> - Term Dictionary Index: The term dictionary index is encoded and encrypted 
> in one single data block.
> - Term Dictionary Data: Each data block of the term dictionary encodes a set 
> of suffixes. It is unlikely to have two dictionary data blocks sharing the 
> same prefix within the same segment.
> - DocValues: A DocValues file will be composed of multiple encrypted data 
> blocks.
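
A hedged sketch of the cipher factory shape described under Key Management 
above (all names are assumed, not the contributed code):

  import javax.crypto.Cipher;

  // Assumed shape: one cipher per key version, AES/CBC with padding as
  // described in the write-up.
  interface CipherFactory {
    Cipher createCipher(int keyVersion) throws Exception;
  }

  class AesCbcCipherFactory implements CipherFactory {
    @Override
    public Cipher createCipher(int keyVersion) throws Exception {
      // look up the key material for keyVersion, initialize key/IV, then:
      return Cipher.getInstance("AES/CBC/PKCS5Padding");
    }
  }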

[jira] [Commented] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-08 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089599#comment-15089599
 ] 

Kevin Risden commented on SOLR-8502:


Here are some of the JIRAs are that are ready for review.
* SOLR-8503
* SOLR-8507
* SOLR-8511
* SOLR-8513
* SOLR-8514
* SOLR-8515
* SOLR-8516

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
>
> Currently, when trying to connect to Solr with the JDBC driver from a SQL 
> client, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related sub-tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6944) BooleanWeight.bulkScorer should not build any sub bulk scorer if there are required/prohibited clauses

2016-01-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6944.
--
Resolution: Fixed

Yep, that's fixed!

> BooleanWeight.bulkScorer should not build any sub bulk scorer if there are 
> required/prohibited clauses
> --
>
> Key: LUCENE-6944
> URL: https://issues.apache.org/jira/browse/LUCENE-6944
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6944.patch
>
>
> BooleanWeight.bulkScorer creates a sub bulk scorer for all clauses until it 
> meets a clause that is not optional (the only kind of clause it can deal 
> with). However, the Weight.bulkScorer method is sometimes costly, so 
> BooleanWeight.bulkScorer should first inspect all clauses to see if any of 
> them is not optional, to avoid creating costly bulk scorers only to trash 
> them later.
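
Roughly the idea, as a sketch (not the committed patch):

  import org.apache.lucene.search.BooleanClause;
  import org.apache.lucene.search.BooleanQuery;

  // Inspect-first: give up before building any costly per-clause bulk
  // scorers if a non-optional clause is present.
  static boolean onlyOptionalClauses(BooleanQuery query) {
    for (BooleanClause c : query) {
      if (c.getOccur() != BooleanClause.Occur.SHOULD) {
        return false; // fall back to the default scoring path
      }
    }
    return true; // safe to build the optional sub bulk scorers
  }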



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8505) core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals

2016-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8505.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals
> --
>
> Key: SOLR-8505
> URL: https://issues.apache.org/jira/browse/SOLR-8505
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8505.patch
>
>
> * Add {{core/DirectoryFactory.LOCK_TYPE_HDFS}}, other 
> {{core/DirectoryFactory.LOCK_TYPE_*}} values already exist.
> * Extend {{DirectoryFactoryTest.testLockTypesUnchanged}} to account for 
> LOCK_TYPE_HDFS.
> * Change {{SolrIndexConfigTest.testToMap}} to also consider 
> "hdfs"/LOCK_TYPE_HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8504) (IndexSchema|SolrIndexConfig)Test: private static finals for solrconfig.xml and schema.xml String literals

2016-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8504.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> (IndexSchema|SolrIndexConfig)Test: private static finals for solrconfig.xml 
> and schema.xml String literals
> --
>
> Key: SOLR-8504
> URL: https://issues.apache.org/jira/browse/SOLR-8504
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8504.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6948) ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill

2016-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned LUCENE-6948:
---

Assignee: Christine Poerschke

> ArrayIndexOutOfBoundsException in PagedBytes$Reader.fill
> 
>
> Key: LUCENE-6948
> URL: https://issues.apache.org/jira/browse/LUCENE-6948
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.4
>Reporter: Michael Lawley
>Assignee: Christine Poerschke
> Attachments: LUCENE-6948.patch
>
>
> With a very large index (in our case > 10G), we are seeing exceptions like:
> java.lang.ArrayIndexOutOfBoundsException: -62400
>   at org.apache.lucene.util.PagedBytes$Reader.fill(PagedBytes.java:116)
>   at 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get(FieldCacheImpl.java:1342)
>   at 
> org.apache.lucene.search.join.TermsCollector$SV.collect(TermsCollector.java:106)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
> The code in question is trying to allocate an array with a negative size.  We 
> believe the source of the error is in 
> org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get where the 
> following code occurs:
>   final int pointer = (int) docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }
> The cast to int will break if the (long) result of docToOffset.get is too 
> large, and is unnecessary in the first place since bytes.fill takes a long as 
> its second parameter.
> Proposed fix:
>   final long pointer = docToOffset.get(docID);
>   if (pointer == 0) {
> term.length = 0;
>   } else {
> bytes.fill(term, pointer);
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8505) core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089579#comment-15089579
 ] 

ASF subversion and git services commented on SOLR-8505:
---

Commit 1723768 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723768 ]

SOLR-8505: core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of 
String literals (merge in revision 1723751 from trunk)

> core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals
> --
>
> Key: SOLR-8505
> URL: https://issues.apache.org/jira/browse/SOLR-8505
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8505.patch
>
>
> * Add {{core/DirectoryFactory.LOCK_TYPE_HDFS}}, other 
> {{core/DirectoryFactory.LOCK_TYPE_*}} values already exist.
> * Extend {{DirectoryFactoryTest.testLockTypesUnchanged}} to account for 
> LOCK_TYPE_HDFS.
> * Change {{SolrIndexConfigTest.testToMap}} to also consider 
> "hdfs"/LOCK_TYPE_HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089556#comment-15089556
 ] 

Joel Bernstein commented on SOLR-8502:
--

It looks like the JIRA filter is private. You can also just list the JIRAs 
that are ready for review.

This is a high priority for Solr 6. So I'll definitely work with you to get the 
code reviewed and ready to be committed.  


> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
>
> Currently, when trying to connect to Solr with the JDBC driver from a SQL 
> client, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related sub-tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8502) Improve Solr JDBC Driver to support SQL Clients like DBVisualizer

2016-01-08 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089503#comment-15089503
 ] 

Kevin Risden commented on SOLR-8502:


The following filter can be used to look at the tickets that have patches, but 
are not committed/closed yet.

https://issues.apache.org/jira/issues/?filter=12334493&jql=project%20%3D%20SOLR%20AND%20parent%20%3D%20SOLR-8502%20AND%20Flags%20%3D%20patch%20AND%20status%20not%20in%20(Fixed%2C%20Closed%2C%20Done%2C%20Invalid)%20and%20attachments%20is%20not%20EMPTY%20order%20by%20created%20ASC

[~joel.bernstein] - Can you take a look at these when you get a chance?

> Improve Solr JDBC Driver to support SQL Clients like DBVisualizer
> -
>
> Key: SOLR-8502
> URL: https://issues.apache.org/jira/browse/SOLR-8502
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>  Labels: jdbc
>
> Currently, when trying to connect to Solr with the JDBC driver from a SQL 
> client, the driver must implement more methods and metadata to allow 
> connections. This JIRA is designed to act as an umbrella for the JDBC changes.
> An initial pass from a few months ago is here: 
> https://github.com/risdenk/lucene-solr/tree/expand-jdbc. This needs to be 
> broken up into patches for the related sub-tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8453) Jetty update from 9.2 to 9.3 causes the server to reset formerly legitimate client connections.

2016-01-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089498#comment-15089498
 ] 

Yonik Seeley commented on SOLR-8453:


bq.  I guess the HttpClient could potentially do better by making the status 
received in the response available, but then it is in a race because the close 
may occur prior to the response being read/parsed/processed.

Not sure I understand this part.  At the OS/socket level, the server can send 
the response and immediately close the socket, and the client (if written 
properly) can always read the response.

> Jetty update from 9.2 to 9.3 causes the server to reset formerly legitimate 
> client connections.
> ---
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453_test.patch, SOLR-8453_test.patch, jetty9.2.pcapng, 
> jetty9.3.pcapng
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8516) Implement ResultSetImpl.getStatement

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8516:
---
Attachment: SOLR-8516.patch

Added initial implementation patch. Passes the StatementImpl object into the 
ResultSet to enable getStatement().
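
A hedged sketch of that wiring (simplified, not the attached patch):

  import java.sql.Statement;

  // Assumed shape: the ResultSet keeps a reference to the Statement that
  // created it, so getStatement() can simply return it.
  class ResultSetImpl /* implements java.sql.ResultSet */ {
    private final Statement statement;

    ResultSetImpl(Statement statement) {
      this.statement = statement;
    }

    public Statement getStatement() {
      return statement;
    }
  }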

> Implement ResultSetImpl.getStatement
> 
>
> Key: SOLR-8516
> URL: https://issues.apache.org/jira/browse/SOLR-8516
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8516.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8516) Implement ResultSetImpl.getStatement

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8516:
---
Flags: Patch

> Implement ResultSetImpl.getStatement
> 
>
> Key: SOLR-8516
> URL: https://issues.apache.org/jira/browse/SOLR-8516
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8516.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6854) Provide extraction of more metrics from confusion matrix

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089485#comment-15089485
 ] 

ASF subversion and git services commented on LUCENE-6854:
-

Commit 1723759 from [~teofili] in branch 'dev/trunk'
[ https://svn.apache.org/r1723759 ]

LUCENE-6854 - adjusted precision calculation, minor fix in SNBC test

> Provide extraction of more metrics from confusion matrix
> 
>
> Key: LUCENE-6854
> URL: https://issues.apache.org/jira/browse/LUCENE-6854
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: Trunk
>
>
> {{ConfusionMatrix}} only provides a general accuracy measure while it'd be 
> good to be able to extract more metrics from it, for specific classes, like 
> precision, recall, f-measure, etc.
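
For reference, the usual per-class definitions (illustrative helpers, not the 
module's API; divisions assume non-zero denominators):

  // tp = true positives, fp = false positives, fn = false negatives
  static double precision(long tp, long fp) { return tp / (double) (tp + fp); }
  static double recall(long tp, long fn)    { return tp / (double) (tp + fn); }
  static double f1(double p, double r)      { return 2 * p * r / (p + r); }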



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089484#comment-15089484
 ] 

Joel Bernstein commented on SOLR-8479:
--

This is a great ticket!

One thing we can think about doing in the future is handling the defined sort 
differently. Possibly parsing it from the SQL statement.

One of the cool things about this is that it allows you to distribute a SQL 
database as well. For example, you could send the same query to multiple SQL 
servers and then stream it all back together.

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Jack Krupansky
+1 for a 5.x deprecation release so 6.0 can remove old stuff.

+1 for a git-based release

+1 for at least 3 months for people to finish and stabilize work in
progress - April to July seems like the right window to target

-- Jack Krupansky

On Fri, Jan 8, 2016 at 10:09 AM, Anshum Gupta 
wrote:

> +1 to that ! Do you have a planned timeline for this?
>
> I would want some time to clean up code and also have a deprecation
> release (5.5 or 5.6) out so we don't have to carry all the cruft through
> the 6x series.
>
> On Fri, Jan 8, 2016 at 4:37 AM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> I think we should get the ball rolling for our next major release (6.0.0)?
>>
>> E.g., dimensional values is a big new feature for 6.x, and I think
>> it's nearly ready except maybe fixing up the API so it's easier for
>> the 1D case.
>>
>> I think we should maybe remove StorableField before releasing?  I.e.,
>> go back to what we have in 5.x.  This change also caused challenges in
>> the 5.0 release, and we just kicked the can down the road, but I think
>> now we should just kick the can off the road...
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Anshum Gupta
>


[jira] [Updated] (SOLR-8515) Implement StatementImpl.getConnection

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8515:
---
Attachment: SOLR-8515.patch

Fixed an issue in JdbcTest with retrieving properties.

> Implement StatementImpl.getConnection
> -
>
> Key: SOLR-8515
> URL: https://issues.apache.org/jira/browse/SOLR-8515
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8515.patch, SOLR-8515.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15484 - Still Failing!

2016-01-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15484/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:37914";, 
"node_name":"127.0.0.1:37914_", "state":"active", 
"leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:42728";,  
   "node_name":"127.0.0.1:42728_", "state":"active", 
"leader":"true"},   "core_node3":{ "core":"collection1",
 "base_url":"http://127.0.0.1:33470";, 
"node_name":"127.0.0.1:33470_", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "autoCreated":"true"},   "control_collection":{  
   "replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{ "core":"collection1", 
"base_url":"http://127.0.0.1:54641";, 
"node_name":"127.0.0.1:54641_", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:37914";, 
"node_name":"127.0.0.1:37914_", "state":"recovering"},   
"core_node2":{ "core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:54641";, 
"node_name":"127.0.0.1:54641_", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"},   "collMinRf_1x3":{ 
"replicationFactor":"3", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "core":"collMinRf_1x3_shard1_replica3",
 "base_url":"http://127.0.0.1:54641";, 
"node_name":"127.0.0.1:54641_", "state":"active"},   
"core_node2":{ "core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:33470";, 
"node_name":"127.0.0.1:33470_", "state":"active"},   
"core_node3":{ "core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:42728";, 
"node_name":"127.0.0.1:42728_", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:37914";,
"node_name":"127.0.0.1:37914_",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:42728";,
"node_name":"127.0.0.1:42728_",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:33470";,
"node_name":"127.0.0.1:33470_",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:54641";,
"node_name":"127.0.0.1:54641_",
"state":"active",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCrea

Re: Update commit message

2016-01-08 Thread Dennis Gove
Thanks Erick. It appears that the following worked for me:

$ svn propedit -r 1723749 --revprop svn:log
[[ make edit in vi and save/close ]]
Set new value for property 'svn:log' on revision 1723749

On Fri, Jan 8, 2016 at 11:18 AM, Erick Erickson 
wrote:

> Personally since the comment is in the JIRA I can live with it ;)
>
> WARNING: I haven't tried this myself, but I did find:
>
> svn propedit svn:log --revprop -r NNN
>
> see: http://subversion.apache.org/faq.html#change-log-msg
>
> From a quick scan there might be permissions or some such
> necessary so it may give you some kind of "access denied".
>
> I'd try it and if it didn't work after 10 minutes give up.
> The information is in the message so it doesn't seem worth
> too much effort IMO.
>
> Best,
> Erick
>
> On Fri, Jan 8, 2016 at 8:06 AM, Dennis Gove  wrote:
> > Is it possible to update an svn commit message? In commit 1723749 for
> > https://issues.apache.org/jira/browse/SOLR-8479 I accidentally
> double-posted
> > my commit message in the vi editor (though the first line is missing the
> > first character) and didn't notice before committing.
> >
> > Any chance I can edit the commit message now without screwing anything
> up?
> >
> > Thanks - Dennis
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-8526) Reuse Lucene.FieldType instances

2016-01-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8526:
---
Description: 
When Lucene created FieldType (not to be confused with Solr's FieldType), Solr 
was ported by simply creating a new lucene.FieldType instance for every field 
indexed (see solr.FieldType.createField()).

To avoid creating one every time, Solr's SchemaField (which is already analogous 
to lucene.FieldType) can simply implement that interface.

  was:
When Lucene created FieldType (not to be confused with Solr's FieldType), Solr 
was ported by simply creating a new lucene.FieldType instance for every field 
indexed.

To avoid creating one every time, Solr's SchemaField (which is already analogous 
to lucene.FieldType) can simply implement that interface.
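
As a rough illustration of the reuse this enables (a sketch of the general
pattern only; the issue itself proposes having SchemaField implement the
Lucene interface directly rather than sharing a frozen FieldType):

import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

// Sketch: share one frozen FieldType for every created field instead of
// allocating a fresh lucene.FieldType per indexed value.
public class ReusedFieldTypeSketch {
  private static final FieldType TYPE = new FieldType();
  static {
    TYPE.setStored(true);
    TYPE.setTokenized(true);
    TYPE.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS);
    TYPE.freeze();  // immutable from here on, so sharing is safe
  }

  public static Field createField(String name, String value) {
    return new Field(name, value, TYPE);  // no new FieldType per call
  }
}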


> Reuse Lucene.FieldType instances
> 
>
> Key: SOLR-8526
> URL: https://issues.apache.org/jira/browse/SOLR-8526
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
>
> When Lucene created FieldType (not to be confused with Solr's FieldType), 
> Solr was ported by simply creating a new lucene.FieldType instance for every 
> field indexed (see solr.FieldType.createField()).
> To avoid creating one every time, Solr's SchemaField (which is already 
> analogous to lucene.FieldType) can simply implement that interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Erick Erickson
OK, does this sound like an umbrella JIRA (maybe one for Lucene and
one for Solr) for "Things to add for 6.0" to anybody else as a way of
organizing?

On Fri, Jan 8, 2016 at 8:21 AM, Ishan Chattopadhyaya
 wrote:
> Couple of items that I am working on that I would like to see in 6.0:
> SOLR-5944: Updatable DocValues in Solr
> SOLR-8396: Using Dimensional values in Solr
>
> The first one needs some more tests, maybe some refactoring and reviews.
> The second one requires some dev work; it is at a very early stage. I think
> (please correct me if I'm wrong) we should have dimensional fields in for
> Solr 6.0 since the regular numeric fields are now deprecated and dimensional
> fields are the way forward.
>
> Regards,
> Ishan
>
>
> On Fri, Jan 8, 2016 at 9:41 PM, Erick Erickson 
> wrote:
>>
>> What do people think about waiting to cut the branch until someone has
>> something that shouldn't go into 6.0? Committing will be easier that
>> way.
>>
>> No biggie, maybe Mike's purpose is served by the notice "get your
>> stuff in trunk that you want to go in 6.0 Real Soon Now" ;)
>>
>> As always, since I'm not volunteering to be the RM, I'll be happy with
>> whatever people decide
>>
>> On Fri, Jan 8, 2016 at 7:51 AM, Shawn Heisey  wrote:
>> > On 1/8/2016 8:13 AM, Shawn Heisey wrote:
>> >> I've only been paying attention to commits for one new major release,
>> >> so
>> >> I can offer some info on 5.0, but not any of the previous major
>> >> releases.
>> >>
>> >> Robert created branch_5x on 2014/09/18.  Version 5.0.0 was released on
>> >> 2015/02/20.  That's five months from new branch to new major release.
>> >
>> > Turns out I *do* have information in my email history for 4.x.  Robert
>> > created branch_4x on 2012/05/29.  The 4.0.0 release was announced on
>> > 2012/10/12 -- four and a half months later.
>> >
>> > Thanks,
>> > Shawn
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8526) Reuse Lucene.FieldType instances

2016-01-08 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8526:
--

 Summary: Reuse Lucene.FieldType instances
 Key: SOLR-8526
 URL: https://issues.apache.org/jira/browse/SOLR-8526
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley


When Lucene created FieldType (not to be confused with Solr's FieldType), Solr 
was ported by simply creating a new lucene.FieldType instance for every field 
indexed.

To avoid creating one every time, Solr's SchemaField (which is already analogous 
to lucene.FieldType) can simply implement that interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8515) Implement StatementImpl.getConnection

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8515:
---
Attachment: SOLR-8515.patch

Initial patch that passes the ConnectionImpl object through to the statement 
so that getConnection() can return it.
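
Roughly, the shape of the change (class names from the ticket; the bodies are
invented for illustration):

// Schematic sketch: the statement keeps a reference to the ConnectionImpl
// that created it and hands it back from getConnection().
class ConnectionImpl { /* stub; the real class implements java.sql.Connection */ }

class StatementImpl {
  private final ConnectionImpl connection;

  StatementImpl(ConnectionImpl connection) {
    this.connection = connection;  // supplied by ConnectionImpl.createStatement()
  }

  public ConnectionImpl getConnection() {
    return connection;  // the real JDBC method returns java.sql.Connection
  }
}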

> Implement StatementImpl.getConnection
> -
>
> Key: SOLR-8515
> URL: https://issues.apache.org/jira/browse/SOLR-8515
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8515.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8515) Implement StatementImpl.getConnection

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8515:
---
Flags: Patch

> Implement StatementImpl.getConnection
> -
>
> Key: SOLR-8515
> URL: https://issues.apache.org/jira/browse/SOLR-8515
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8515.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Ishan Chattopadhyaya
Couple of items that I am working on that I would like to see in 6.0:
SOLR-5944: Updatable DocValues in Solr
SOLR-8396: Using Dimensional values in Solr

The first one needs some more tests, maybe some refactoring and reviews.
The second one requires some dev work; it is at a very early stage. I think
(please correct me if I'm wrong) we should have dimensional fields in for
Solr 6.0 since the regular numeric fields are now deprecated and
dimensional fields are the way forward.

Regards,
Ishan


On Fri, Jan 8, 2016 at 9:41 PM, Erick Erickson 
wrote:

> What do people think about waiting to cut the branch until someone has
> something that shouldn't go into 6.0? Committing will be easier that
> way.
>
> No biggie, maybe Mike's purpose is served by the notice "get your
> stuff in trunk that you want to go in 6.0 Real Soon Now" ;)
>
> As always, since I'm not volunteering to be the RM, I'll be happy with
> whatever people decide
>
> On Fri, Jan 8, 2016 at 7:51 AM, Shawn Heisey  wrote:
> > On 1/8/2016 8:13 AM, Shawn Heisey wrote:
> >> I've only been paying attention to commits for one new major release, so
> >> I can offer some info on 5.0, but not any of the previous major
> releases.
> >>
> >> Robert created branch_5x on 2014/09/18.  Version 5.0.0 was released on
> >> 2015/02/20.  That's five months from new branch to new major release.
> >
> > Turns out I *do* have information in my email history for 4.x.  Robert
> > created branch_4x on 2012/05/29.  The 4.0.0 release was announced on
> > 2012/10/12 -- four and a half months later.
> >
> > Thanks,
> > Shawn
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-8515) Implement StatementImpl.getConnection

2016-01-08 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089441#comment-15089441
 ] 

Kevin Risden commented on SOLR-8515:


Requires ConnectionImpl.getCatalog() from SOLR-8503

> Implement StatementImpl.getConnection
> -
>
> Key: SOLR-8515
> URL: https://issues.apache.org/jira/browse/SOLR-8515
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Update commit message

2016-01-08 Thread Erick Erickson
Personally since the comment is in the JIRA I can live with it ;)

WARNING: I haven't tried this myself, but I did find:

svn propedit svn:log --revprop -r NNN

see: http://subversion.apache.org/faq.html#change-log-msg

From a quick scan there might be permissions or some such
necessary so it may give you some kind of "access denied".

I'd try it and if it didn't work after 10 minutes give up.
The information is in the message so it doesn't seem worth
too much effort IMO.

Best,
Erick

On Fri, Jan 8, 2016 at 8:06 AM, Dennis Gove  wrote:
> Is it possible to update an svn commit message? In commit 1723749 for
> https://issues.apache.org/jira/browse/SOLR-8479 I accidentally double-posted
> my commit message in the vi editor (though the first line is missing the
> first character) and didn't notice before committing.
>
> Any chance I can edit the commit message now without screwing anything up?
>
> Thanks - Dennis

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Erick Erickson
What do people think about waiting to cut the branch until someone has
something that shouldn't go into 6.0? Committing will be easier that
way.

No biggie, maybe Mike's purpose is served by the notice "get your
stuff in trunk that you want to go in 6.0 Real Soon Now" ;)

As always, since I'm not volunteering to be the RM, I'll be happy with
whatever people decide

On Fri, Jan 8, 2016 at 7:51 AM, Shawn Heisey  wrote:
> On 1/8/2016 8:13 AM, Shawn Heisey wrote:
>> I've only been paying attention to commits for one new major release, so
>> I can offer some info on 5.0, but not any of the previous major releases.
>>
>> Robert created branch_5x on 2014/09/18.  Version 5.0.0 was released on
>> 2015/02/20.  That's five months from new branch to new major release.
>
> Turns out I *do* have information in my email history for 4.x.  Robert
> created branch_4x on 2012/05/29.  The 4.0.0 release was announced on
> 2012/10/12 -- four and a half months later.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8514) Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), and StatementImpl.getUpdateCount()

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8514:
---
Flags: Patch

> Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), 
> and StatementImpl.getUpdateCount()
> -
>
> Key: SOLR-8514
> URL: https://issues.apache.org/jira/browse/SOLR-8514
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8514.patch
>
>
> Currently only StatementImpl.executeQuery is implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Update commit message

2016-01-08 Thread Dennis Gove
Is it possible to update an svn commit message? In commit 1723749 for
https://issues.apache.org/jira/browse/SOLR-8479 I accidentally
double-posted my commit message in the vi editor (though the first line is
missing the first character) and didn't notice before committing.

Any chance I can edit the commit message now without screwing anything up?

Thanks - Dennis


[jira] [Updated] (SOLR-8514) Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), and StatementImpl.getUpdateCount()

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8514:
---
Attachment: SOLR-8514.patch

Added an initial implementation. This reuses executeQuery() and just stores 
the last SQL statement that came in, since Solr doesn't currently have a way 
to execute a query and then fetch the results later.
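
One plausible reading of that approach, sketched as fragments inside
StatementImpl (an assumption, not the patch itself):

// execute() just remembers the SQL; getResultSet() replays it through the
// already-implemented executeQuery() path; getUpdateCount() returns -1,
// which in JDBC means "the result is a ResultSet, not an update count".
private String currentSQL;

public boolean execute(String sql) throws java.sql.SQLException {
  this.currentSQL = sql;  // store the last SQL statement to come in
  return true;            // a SELECT always produces a ResultSet
}

public java.sql.ResultSet getResultSet() throws java.sql.SQLException {
  return executeQuery(currentSQL);
}

public int getUpdateCount() throws java.sql.SQLException {
  return -1;
}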

> Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), 
> and StatementImpl.getUpdateCount()
> -
>
> Key: SOLR-8514
> URL: https://issues.apache.org/jira/browse/SOLR-8514
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8514.patch
>
>
> Currently only StatementImpl.executeQuery is implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8514) Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), and StatementImpl.getUpdateCount()

2016-01-08 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8514:
---
Summary: Implement StatementImpl.execute(String sql), 
StatementImpl.getResultSet(), and StatementImpl.getUpdateCount()  (was: 
Implement StatementImpl.execute(String sql) and StatementImpl.getResultSet())

> Implement StatementImpl.execute(String sql), StatementImpl.getResultSet(), 
> and StatementImpl.getUpdateCount()
> -
>
> Key: SOLR-8514
> URL: https://issues.apache.org/jira/browse/SOLR-8514
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>
> Currently only StatementImpl.executeQuery is implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-08 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089412#comment-15089412
 ] 

Dennis Gove commented on SOLR-8479:
---

I think a test like that is a great idea. I'll add it at some point in the 
future (perhaps as part of the test cleanup that was mentioned in the 
UpdateStream ticket).

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-08 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089404#comment-15089404
 ] 

Jason Gerlowski edited comment on SOLR-8479 at 1/8/16 4:01 PM:
---

This looks awesome.

Only comment would be that we might regret not having a test chaining 
JDBCStream and UpdateStream together.

As Joel mentioned, one of the interesting possibilities here is quick 
data-import using those two streams.  Just thought it might be nice to have a 
test to catch any future regressions there.

Maybe it's not worth it though, or adding tests should be pushed to a different 
JIRA (since it looks like you're already working on committing this, and I'm 
commenting at the 11th hour here).

Oops, looks like I'm too late here.  Nevermind then : )


was (Author: gerlowskija):
This looks awesome.

Only comment would be that we might regret not having a test chaining 
JDBCStream and UpdateStream together.

As Joel mentioned, one of the interesting possibilities here is quick 
data-import using those two streams.  Just thought it might be nice to have a 
test to catch any future regressions there.

Maybe it's not worth it though, or adding tests should be pushed to a different 
JIRA (since it looks like you're already working on committing this, and I'm 
commenting at the 11th hour here).

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-08 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089404#comment-15089404
 ] 

Jason Gerlowski commented on SOLR-8479:
---

This looks awesome.

Only comment would be that we might regret not having a test chaining 
JDBCStream and UpdateStream together.

As Joel mentioned, one of the interesting possibilities here is quick 
data-import using those two streams.  Just thought it might be nice to have a 
test to catch any future regressions there.

Maybe it's not worth it though, or adding tests should be pushed to a different 
JIRA (since it looks like you're already working on committing this, and I'm 
commenting at the 11th hour here).

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8505) core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089401#comment-15089401
 ] 

ASF subversion and git services commented on SOLR-8505:
---

Commit 1723751 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1723751 ]

SOLR-8505: core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of 
String literals

> core/DirectoryFactory.LOCK_TYPE_HDFS - add & use it instead of String literals
> --
>
> Key: SOLR-8505
> URL: https://issues.apache.org/jira/browse/SOLR-8505
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8505.patch
>
>
> * Add {{core/DirectoryFactory.LOCK_TYPE_HDFS}}, other 
> {{core/DirectoryFactory.LOCK_TYPE_*}} values already exist.
> * Extend {{DirectoryFactoryTest.testLockTypesUnchanged}} to account for 
> LOCK_TYPE_HDFS.
> * Change {{SolrIndexConfigTest.testToMap}} to also consider 
> "hdfs"/LOCK_TYPE_HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 release

2016-01-08 Thread Shawn Heisey
On 1/8/2016 8:13 AM, Shawn Heisey wrote:
> I've only been paying attention to commits for one new major release, so
> I can offer some info on 5.0, but not any of the previous major releases.
> 
> Robert created branch_5x on 2014/09/18.  Version 5.0.0 was released on
> 2015/02/20.  That's five months from new branch to new major release.

Turns out I *do* have information in my email history for 4.x.  Robert
created branch_4x on 2012/05/29.  The 4.0.0 release was announced on
2012/10/12 -- four and a half months later.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089397#comment-15089397
 ] 

ASF subversion and git services commented on SOLR-8479:
---

Commit 1723749 from dpg...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1723749 ]

OLR-8479: Add JDBCStream to Streaming API and Streaming Expressions for 
integration with external data sources
SOLR-8479: Add JDBCStream to Streaming API and Streaming Expressions for 
integration with external data sources

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089392#comment-15089392
 ] 

ASF subversion and git services commented on LUCENE-6922:
-

Commit 1723748 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723748 ]

LUCENE-6922: more improvements in the svn to git mirror workaround tool

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-6922.patch, svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try to improve the workaround script to make it more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2016-01-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089393#comment-15089393
 ] 

Michael McCandless commented on LUCENE-6922:


Thanks [~paul.elsc...@xs4all.nl], I committed your last patch.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-6922.patch, svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try to improve the workaround script to make it more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


