[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380491#comment-15380491
 ] 

Shalin Shekhar Mangar commented on SOLR-7280:
-

Ah, right, I missed that. I'm reviewing the rest of the patch.

> Load cores in sorted order and tweak coreLoadThread counts to improve cluster 
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7280.patch, SOLR-7280.patch
>
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order 
> and tweaking some of the coreLoadThread counts, he was able to improve the 
> stability of a cluster with thousands of collections. We should explore some 
> of these changes and fold them into Solr.
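As a rough illustration of the proposal (hypothetical names only; this is not Solr's actual CoreContainer code), the idea is simply to fix the load order up front, then hand cores to a pool bounded by a coreLoadThreads setting:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the idea above: fix a deterministic (sorted) load
// order first, then feed it to a pool bounded by a coreLoadThreads setting.
// loadAll/sortForLoad are illustrative names, not Solr's CoreContainer API.
class SortedCoreLoader {

    /** Deterministic load order: plain lexicographic sort of core names. */
    static List<String> sortForLoad(List<String> coreNames) {
        List<String> sorted = new ArrayList<>(coreNames);
        Collections.sort(sorted);
        return sorted;
    }

    static void loadAll(List<String> coreNames, int coreLoadThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(coreLoadThreads);
        for (String name : sortForLoad(coreNames)) {
            // placeholder for the real per-core loading work
            pool.submit(() -> System.out.println("loading " + name));
        }
        pool.shutdown();
    }

    public static void main(String[] args) {
        loadAll(List.of("collB_shard1_replica1", "collA_shard1_replica1"), 2);
    }
}
```

Sorting makes the load order deterministic across restarts, so shards and their replicas tend to come up in a predictable sequence instead of a random one.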



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts

2016-07-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380477#comment-15380477
 ] 

Noble Paul commented on SOLR-7280:
--

Shalin, that is the expected behavior. If there is only one replica for a shard 
and that replica is on this node, then nobody else is waiting for that replica 
to come up. That means nobody else will time out waiting because of that replica.
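The rule being described can be reduced to a small predicate (purely illustrative, not code from the patch): other nodes only wait on a restarting node's replica when the shard also has replicas beyond that node.

```java
// Illustrative predicate only, not code from the patch: other nodes wait on a
// restarting node's replica only if the shard has replicas beyond that node.
class ReplicaWaitRule {
    static boolean othersShouldWait(int totalReplicas, int replicasOnThisNode) {
        // A 1-replica shard hosted entirely on the restarting node blocks nobody else.
        return totalReplicas > replicasOnThisNode;
    }
}
```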

> Load cores in sorted order and tweak coreLoadThread counts to improve cluster 
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7280.patch, SOLR-7280.patch
>
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order 
> and tweaking some of the coreLoadThread counts, he was able to improve the 
> stability of a cluster with thousands of collections. We should explore some 
> of these changes and fold them into Solr.






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 121 - Still Failing

2016-07-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/121/

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:55779","node_name":"127.0.0.1:55779_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/20)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:54909",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:54909_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:55779",
          "node_name":"127.0.0.1:55779_",
          "state":"active",
          "leader":"true"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:38009",
          "node_name":"127.0.0.1:38009_",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:55779","node_name":"127.0.0.1:55779_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/20)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:54909",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:54909_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:55779",
          "node_name":"127.0.0.1:55779_",
          "state":"active",
          "leader":"true"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:38009",
          "node_name":"127.0.0.1:38009_",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([53C8CD85C44C8CD3:DB9CF25F6AB0E12B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17264 - Still Failing!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17264/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([9D6A3B7056799830:6A19D528909137D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11389 lines...]
   [junit4] Suite: 

[jira] [Updated] (SOLR-9285) ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on uncommitted doc

2016-07-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9285:
---
Attachment: SOLR-9285.patch


Updated patch:
* fully fleshed out {{TestRandomFlRTGCloud}}
* added the previously proposed {{needsSolrIndexSearcher}} method to 
{{DocTransformer}}
** default impl always returns {{false}}
** {{ValueSourceAugmenter}} overrides to always return {{true}}
** {{DocTransformers}} overrides to return {{true}} if any of the wrapped/child 
transformers return {{true}}
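The contract described in the bullets above could be sketched like this (illustrative classes only; the real Solr types carry much more state):

```java
import java.util.List;

// Minimal sketch of the contract in the bullets above; the real Solr
// classes have many more members, this models only needsSolrIndexSearcher().
abstract class DocTransformer {
    /** Default: a transformer does not require a real index searcher. */
    public boolean needsSolrIndexSearcher() {
        return false;
    }
}

class ValueSourceAugmenter extends DocTransformer {
    @Override
    public boolean needsSolrIndexSearcher() {
        return true; // value sources must be evaluated against a searcher
    }
}

/** Composite transformer: needs a searcher if any wrapped child does. */
class DocTransformers extends DocTransformer {
    private final List<DocTransformer> children;

    DocTransformers(List<DocTransformer> children) {
        this.children = children;
    }

    @Override
    public boolean needsSolrIndexSearcher() {
        return children.stream().anyMatch(DocTransformer::needsSolrIndexSearcher);
    }
}
```

The RTG code path can then ask the top-level transformer once whether the request needs a real index searcher, rather than each transformer failing later against an uncommitted document.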

There are almost certainly some other {{DocTransformer}} subclasses that should 
return true from this method, but I'd like to move forward with committing as 
is and target fixing any other classes (and adding test coverage for them) in 
other issues.  (As things stand in this patch they are no worse off than 
before.)

Unless there are objections, I'll try to commit/backport this on Monday 
(possibly after filing some new issues so I can update some TODO comments in 
tests with concrete JIRA IDs).


> ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on 
> uncommitted doc
> -
>
> Key: SOLR-9285
> URL: https://issues.apache.org/jira/browse/SOLR-9285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9285.patch, SOLR-9285.patch, SOLR-9285.patch, 
> SOLR-9285.patch
>
>
> Found in SOLR-9180 testing.
> Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
> ValueSourceAugmenter (ie: simple field aliasing, or functions in fl) causes 
> an ArrayIndexOutOfBoundsException






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+126) - Build # 17263 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17263/
Java: 64bit/jdk-9-ea+126 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "response":{
    "znodeVersion":2,
    "params":{
      "x":{
        "a":"A val",
        "b":"B val",
        "":{"v":0}},
      "y":{
        "c":"CY val modified",
        "b":"BY val",
        "i":20,
        "d":[
          "val 1",
          "val 2"],
        "e":"EY val",
        "":{"v":1}}}},  from server:  https://127.0.0.1:40482/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
        "":{"v":1}}}},  from server:  https://127.0.0.1:40482/collection1
at 
__randomizedtesting.SeedInfo.seed([2CDD10298C956987:A4892FF32269047F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:215)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+126) - Build # 1167 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1167/
Java: 32bit/jdk-9-ea+126 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:40327/ma/y/forceleader_test_collection_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live 
SolrServers available to handle this 
request:[http://127.0.0.1:40327/ma/y/forceleader_test_collection_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([45DADD5596E09F3A:A34DE995AF62665B]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:131)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9208) ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU consumption

2016-07-15 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380153#comment-15380153
 ] 

Mikhail Khludnev commented on SOLR-9208:


[~fabrizio.fort...@gmail.com], I'm afraid it's not an issue, but a topic to 
discuss on the mailing list. Attaching a test case and a brief code sample 
would make a lot of sense.

> ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU 
> consumption
> -
>
> Key: SOLR-9208
> URL: https://issues.apache.org/jira/browse/SOLR-9208
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, Server
>Affects Versions: 6.0
>Reporter: Fabrizio Fortino
>Assignee: Mikhail Khludnev
>
> In our use case we swap two cores and close the old one. We started seeing 
> the below error from time to time (it's completely random, we are unable to 
> reproduce it). Moreover we have noticed that when this Exception is thrown 
> the CPU consumption goes pretty high (80-100%).
> Error Message:
> java.util.ConcurrentModificationException: 
> java.util.ConcurrentModificationException
> StackTrace:
> java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
> java.util.ArrayList$Itr.next (ArrayList.java:851)
> org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
> org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:242)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:184)
> …ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
> …g.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:226)
> …g.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1160)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1092)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> …e.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:213)
> ….eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:119)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:134)
> org.eclipse.jetty.server.Server.handle (Server.java:518)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)
> …pse.jetty.io.AbstractConnection$ReadCallback.succeeded 
> (AbstractConnection.java:273)
> org.eclipse.jetty.io.FillInterest.fillable (FillInterest.java:95)
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run 
> (SelectChannelEndPoint.java:93)
> …il.thread.strategy.ExecuteProduceConsume.produceAndRun 
> (ExecuteProduceConsume.java:246)
> …e.jetty.util.thread.strategy.ExecuteProduceConsume.run 
> (ExecuteProduceConsume.java:156)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:654)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:572)
> java.lang.Thread.run (Thread.java:745)
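The failure mode in the report above (iteration in SolrCore.close() hitting ArrayList$Itr.checkForComodification) can be reproduced standalone; the snippet below is illustrative only and contains no Solr code. A fail-fast ArrayList throws when mutated during iteration, while a snapshot-based CopyOnWriteArrayList does not:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Standalone illustration of the failure mode above; no Solr code involved.
class CloseHookDemo {

    /** Runs each hook while mutating the list mid-iteration; reports whether
     *  the list's iterator failed fast with a ConcurrentModificationException. */
    static boolean throwsCme(List<Runnable> hooks) {
        try {
            for (Runnable hook : hooks) {
                hook.run();
                hooks.add(() -> {}); // mutation while iterating
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        List<Runnable> plain = new ArrayList<>();
        plain.add(() -> {});
        System.out.println(throwsCme(plain)); // true: ArrayList iterators fail fast

        List<Runnable> cow = new CopyOnWriteArrayList<>();
        cow.add(() -> {});
        System.out.println(throwsCme(cow));   // false: iteration runs over a snapshot
    }
}
```

If another thread registers hooks or listeners on the core while close() iterates over them, this is exactly the fail-fast path seen in the report.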






[JENKINS] Lucene-Solr-Tests-6.x - Build # 334 - Failure

2016-07-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/334/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.overseer.ZkStateWriterTest: 
   1) Thread[id=7161, name=watches-1143-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.overseer.ZkStateWriterTest: 
   1) Thread[id=7161, name=watches-1143-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([473C18A14CFA9A0C]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=7161, name=watches-1143-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=7161, name=watches-1143-thread-1, state=TIMED_WAITING, 
group=TGRP-ZkStateWriterTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([473C18A14CFA9A0C]:0)




Build Log:
[...truncated 11589 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateWriterTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J0/temp/solr.cloud.overseer.ZkStateWriterTest_473C18A14CFA9A0C-001/init-core-data-001
   [junit4]   2> 1168885 INFO  
(SUITE-ZkStateWriterTest-seed#[473C18A14CFA9A0C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1168886 INFO  
(TEST-ZkStateWriterTest.testZkStateWriterBatching-seed#[473C18A14CFA9A0C]) [
] o.a.s.SolrTestCaseJ4 ###Starting testZkStateWriterBatching
   [junit4]   2> 1168886 INFO 

[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380105#comment-15380105
 ] 

Shalin Shekhar Mangar commented on SOLR-7280:
-

Hmm, actually the logic we had discussed earlier also fails on this corner 
case. Let me think more on this.

> Load cores in sorted order and tweak coreLoadThread counts to improve cluster 
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7280.patch, SOLR-7280.patch
>
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order 
> and tweaking some of the coreLoadThread counts, he was able to improve the 
> stability of a cluster with thousands of collections. We should explore some 
> of these changes and fold them into Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-9256.

Resolution: Fixed

Thanks, [~tinexw]! 

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9256.patch
>
>
> h1. solr-data-config.xml
> {code:xml}
> <dataConfig>
>   <dataSource url="jdbc:postgresql://host:5432/database" user="username"
>               password="password" readOnly="true" autoCommit="false" />
>   <document>
>     <entity name="outer" query="...">
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               cacheImpl="SortedMapBackedCache">
>       </entity>
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               join="zipper">
>       </entity>
>     </entity>
>   </document>
> </dataConfig>
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen. (German: "This ResultSet is closed.")
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Commented] (SOLR-9209) DIH JdbcDataSource - improve extensibility part 2

2016-07-15 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380087#comment-15380087
 ] 

Mikhail Khludnev commented on SOLR-9209:


*TODO* add the test attached to SOLR-9256

> DIH JdbcDataSource - improve extensibility part 2
> -
>
> Key: SOLR-9209
> URL: https://issues.apache.org/jira/browse/SOLR-9209
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Kristine Jetzke
>Assignee: Mikhail Khludnev
> Attachments: SOLR-9209.patch
>
>
> This is a follow up to SOLR-8616. Due to changes in SOLR-8612 it's now no 
> longer possible without additional modifications to use a different 
> {{ResultSetIterator}} class. The attached patch solves this.






[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380066#comment-15380066
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 8:37 PM:


[~mkhludnev] I attached a unit test for the master branch (the 6.0 branch would 
need a slightly different one). I think there is nothing else to do since it 
works correctly in all open branches.


was (Author: tinexw):
[~mkhludnev] I attached a unit test. I think there is nothing else to do since 
it works correctly in all open branches.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9256.patch
>
>
> h1. solr-data-config.xml
> {code:xml}
> <dataConfig>
>   <dataSource url="jdbc:postgresql://host:5432/database" user="username"
>               password="password" readOnly="true" autoCommit="false" />
>   <document>
>     <entity name="outer" query="...">
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               cacheImpl="SortedMapBackedCache">
>       </entity>
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               join="zipper">
>       </entity>
>     </entity>
>   </document>
> </dataConfig>
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> 

[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380066#comment-15380066
 ] 

Kristine Jetzke commented on SOLR-9256:
---

[~mkhludnev] I attached a unit test. I think there is nothing else to do since 
it works correctly in all open branches.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9256.patch
>
>
> h1. solr-data-config.xml
> {code:xml}
> <dataConfig>
>   <dataSource url="jdbc:postgresql://host:5432/database" user="username"
>               password="password" readOnly="true" autoCommit="false" />
>   <document>
>     <entity name="outer" query="...">
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               cacheImpl="SortedMapBackedCache">
>       </entity>
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               join="zipper">
>       </entity>
>     </entity>
>   </document>
> </dataConfig>
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> 

[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380062#comment-15380062
 ] 

Shalin Shekhar Mangar commented on SOLR-7280:
-

What was the motivation behind changing the sorting logic? Can you please 
explain which use cases the new logic covers better?

bq. The shards with least no:of replicas in down nodes and there is at least 
one live node waiting for replicas of this shard. If these nodes are in down 
nodes, it is no use bringing up a replica because, until those down nodes come 
up, that shard cannot be up

This is flawed: under these rules, a shard which has exactly one replica in 
total, and that replica hosted on the current node, will always be chosen last.
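The corner case above can be sketched with a small, hypothetical comparator. This is not the code from the attached patch; the class and field names (`Shard`, `replicasOnDownNodes`, `liveNodesWaiting`) are stand-ins for illustration only. It shows how "prefer shards with at least one live node waiting, then fewer replicas on down nodes" necessarily pushes a shard whose only replica lives on the restarting node to the end of the load order:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class CoreLoadOrderSketch {
    // Hypothetical model of a shard as seen from the node being restarted.
    static class Shard {
        final String name;
        final int replicasOnDownNodes; // replicas hosted on nodes still down
        final int liveNodesWaiting;    // live nodes waiting for this shard

        Shard(String name, int replicasOnDownNodes, int liveNodesWaiting) {
            this.name = name;
            this.replicasOnDownNodes = replicasOnDownNodes;
            this.liveNodesWaiting = liveNodesWaiting;
        }
    }

    // The discussed rule: load shards that some live node is waiting for
    // first, and among those, prefer shards with fewer replicas on down nodes.
    static List<String> loadOrder(List<Shard> shards) {
        List<Shard> sorted = new ArrayList<>(shards);
        sorted.sort(Comparator
                .comparing((Shard s) -> s.liveNodesWaiting == 0) // waiting shards first
                .thenComparingInt(s -> s.replicasOnDownNodes));
        List<String> names = new ArrayList<>();
        for (Shard s : sorted) names.add(s.name);
        return names;
    }

    public static void main(String[] args) {
        // "lonely" has exactly one replica in total, on this very node:
        // no other node waits for it, so it always sorts last, even though
        // this node is the only one that can ever bring that shard up.
        System.out.println(loadOrder(Arrays.asList(
                new Shard("lonely", 0, 0),
                new Shard("shardA", 2, 1),
                new Shard("shardB", 0, 3))));
    }
}
```

With these inputs the shard with waiting nodes and no down replicas loads first, and "lonely" loads last, which is exactly the objection raised in the comment.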

> Load cores in sorted order and tweak coreLoadThread counts to improve cluster 
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7280.patch, SOLR-7280.patch
>
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order 
> and tweaking some of the coreLoadThread counts, he was able to improve the 
> stability of a cluster with thousands of collections. We should explore some 
> of these changes and fold them into Solr.






[jira] [Updated] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kristine Jetzke updated SOLR-9256:
--
Attachment: SOLR-9256.patch

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9256.patch
>
>
> h1. solr-data-config.xml
> {code:xml}
> <dataConfig>
>   <dataSource url="jdbc:postgresql://host:5432/database" user="username"
>               password="password" readOnly="true" autoCommit="false" />
>   <document>
>     <entity name="outer" query="...">
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               cacheImpl="SortedMapBackedCache">
>       </entity>
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               join="zipper">
>       </entity>
>     </entity>
>   </document>
> </dataConfig>
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379929#comment-15379929
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 8:15 PM:


This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (because the hasnext method always uses the initially created result set. That 
result set is closed once no more results are returned, so the exception is 
thrown the next time the hasnext method is called. In the versions before and 
after, the result set field is accessed instead, which is set to null when no 
more results are available; thus no methods are called on the closed result set 
afterwards and no exception is thrown.)
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could create a copy of 
the current version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} in 
your own source code and reference it in your data config files.


was (Author: tinexw):
This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (because the hasnext method always uses the initially created result set. This 
is closed if no more results are returned. Thus, the exception is thrown the 
next time the hasnext method is called. In the previous version and the newer 
version, the result set field is accessed instead which is set to null if no 
more results are available. Thus, no methods are called afterwards and no 
exception is thrown. )
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could create a copy the 
current version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} in 
your own source code and reference it  in your data config files.
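The failure mode described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Solr's actual {{JdbcDataSource}} code: {{FakeResultSet}} stands in for a JDBC ResultSet that the driver closes once exhausted, and the two {{hasNext*}} variants mirror "keep using the initially captured result set" versus "re-read a field that is nulled on exhaustion":

```java
class ClosedResultSetSketch {
    // Stand-in for a JDBC ResultSet that is closed by the driver once it
    // returns false; any further call on it then throws.
    static class FakeResultSet {
        private int remaining;
        private boolean closed;

        FakeResultSet(int rows) { this.remaining = rows; }

        boolean next() {
            if (closed) throw new IllegalStateException("This ResultSet is closed.");
            if (remaining == 0) { closed = true; return false; }
            remaining--;
            return true;
        }
    }

    private FakeResultSet resultSet = new FakeResultSet(1);
    private final FakeResultSet captured = resultSet; // captured once, never updated

    // Broken pattern: always consults the initially captured result set, so
    // the call *after* exhaustion hits the closed result set and throws.
    boolean hasNextBroken() {
        return captured.next();
    }

    // Fixed pattern: consults the field, which is set to null on exhaustion,
    // so the closed result set is never touched again.
    boolean hasNextFixed() {
        if (resultSet == null) return false;
        if (resultSet.next()) return true;
        resultSet = null;
        return false;
    }
}
```

With one row, the broken variant returns true, then false (which closes the result set), and throws on the third call; the fixed variant just keeps returning false.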

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
>
> h1. solr-data-config.xml
> {code:xml}
> <dataConfig>
>   <dataSource url="jdbc:postgresql://host:5432/database" user="username"
>               password="password" readOnly="true" autoCommit="false" />
>   <document>
>     <entity name="outer" query="...">
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               cacheImpl="SortedMapBackedCache">
>       </entity>
>       <entity name="..." query="..."
>               cacheKey="a_id" cacheLookup="outer.id"
>               join="zipper">
>       </entity>
>     </entity>
>   </document>
> </dataConfig>
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1072 - Still Failing

2016-07-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1072/

14 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
java.net.BindException: Address already in use

Stack Trace:
java.lang.RuntimeException: java.net.BindException: Address already in use
at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:212)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:82)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
at 

[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379929#comment-15379929
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 7:56 PM:


This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (because the hasnext method always uses the initially created result set. This 
is closed if no more results are returned. Thus, the exception is thrown the 
next time the hasnext method is called. In the previous version and the newer 
version, the result set field is accessed instead which is set to null if no 
more results are available. Thus, no methods are called afterwards and no 
exception is thrown. )
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could copy the current 
version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} into your own 
source code and reference it in your data config files.
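The failure mode described above can be sketched without JDBC at all. Below is a toy simulation (not the actual JdbcDataSource code; class and method names are illustrative) of the broken versus fixed hasnext logic:

```java
public class ResultSetIteratorSketch {
    /** Toy stand-in for a JDBC ResultSet that errors once closed,
     *  like PostgreSQL's "Dieses ResultSet ist geschlossen." */
    static class FakeResultSet {
        private int remaining;
        private boolean closed;
        FakeResultSet(int rows) { remaining = rows; }
        boolean next() {
            if (closed) throw new IllegalStateException("ResultSet is closed");
            if (remaining == 0) { closed = true; return false; }
            remaining--;
            return true;
        }
    }

    // Broken pattern (commit 13c9912): always consult the initially created
    // result set, which throws on the first call after exhaustion closed it.
    static boolean brokenHasNext(FakeResultSet initial) {
        return initial.next();
    }

    // Fixed pattern (commit 22e5d31): hold the result set in a field and
    // null it out on exhaustion, so later calls never touch the closed set.
    private FakeResultSet resultSet;
    public ResultSetIteratorSketch(FakeResultSet rs) { resultSet = rs; }
    public boolean hasNext() {
        if (resultSet == null) return false;
        if (resultSet.next()) return true;
        resultSet = null;
        return false;
    }

    public static void main(String[] args) {
        ResultSetIteratorSketch it =
            new ResultSetIteratorSketch(new FakeResultSet(1));
        System.out.println(it.hasNext()); // true  (one row)
        System.out.println(it.hasNext()); // false (exhausted, field nulled)
        System.out.println(it.hasNext()); // false (closed set never touched)
    }
}
```

With the broken pattern, the third call would instead hit the closed result set and throw.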


was (Author: tinexw):
This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (because the hasnext method always uses the initially created result set. This 
is closed if no more results are returned. Thus, the exception the next time 
the hasnext method is called. In the previous version and the newer version, 
the result set field is accessed instead which is set to null if no more 
results are available. Thus, no methods are called afterwards and no exception 
is thrown. )
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could create a copy the 
current version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} in 
your own source code and reference it  in your data config files.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> 

[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379929#comment-15379929
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 7:44 PM:


This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (because the hasnext method always uses the initially created result set, which 
is closed once no more results are returned; thus the exception is thrown the 
next time the hasnext method is called. In the previous version and the newer 
version, the result set field is accessed instead, which is set to null when no 
more results are available, so no methods are called afterwards and no 
exception is thrown.)
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could copy the current 
version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} into your own 
source code and reference it in your data config files.


was (Author: tinexw):
This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (don't know yet why)
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could create a copy the 
current version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} in 
your own source code and reference it  in your data config files.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at 

[jira] [Commented] (SOLR-9310) PeerSync fails on a node restart due to IndexFingerPrint mismatch

2016-07-15 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379989#comment-15379989
 ] 

Pushkar Raste commented on SOLR-9310:
-

Adding [~k317h] to the loop

> PeerSync fails on a node restart due to IndexFingerPrint mismatch
> -
>
> Key: SOLR-9310
> URL: https://issues.apache.org/jira/browse/SOLR-9310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
> Attachments: PeerSyncReplicationTest.patch
>
>
> I found that Peer Sync fails if a node restarts and documents were indexed 
> while node was down. IndexFingerPrint check fails after recovering node 
> applies updates. 
> This happens only when the node restarts, and not if the node just misses 
> updates for reasons other than being down.
> Please check attached patch for the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9310) PeerSync fails on a node restart due to IndexFingerPrint mismatch

2016-07-15 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379987#comment-15379987
 ] 

Pushkar Raste commented on SOLR-9310:
-

[~ysee...@gmail.com] and [~markrmil...@gmail.com], in 
[SOLR-8690|https://issues.apache.org/jira/browse/SOLR-8690] you mentioned that 
the fingerprint check could have a performance cost. Could the cost you 
mentioned be due to PeerSync failing on node restart, causing Solr to fall 
back to full replication?

Is PeerSync failing on node restart the expected behavior?



> PeerSync fails on a node restart due to IndexFingerPrint mismatch
> -
>
> Key: SOLR-9310
> URL: https://issues.apache.org/jira/browse/SOLR-9310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
> Attachments: PeerSyncReplicationTest.patch
>
>
> I found that Peer Sync fails if a node restarts and documents were indexed 
> while node was down. IndexFingerPrint check fails after recovering node 
> applies updates. 
> This happens only when the node restarts, and not if the node just misses 
> updates for reasons other than being down.
> Please check attached patch for the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9256:
---
Fix Version/s: 6.1
   master (7.0)
   6.2
   6.1.1
   6.0.2

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
> Fix For: 6.0.2, 6.1, 6.1.1, 6.2, master (7.0)
>
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 

[jira] [Resolved] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9290.
-
Resolution: Fixed

It looks like all the test fixes we made here had already been made by 
SOLR-4509 on master. I only had to add a super.shutdown() call in 
MockCoreContainer for the sake of correctness.

> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2, 5.4.1, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 6.2, 5.5.3
>
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1 but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list (about 
> what we see):
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 
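When diagnosing a leak like this, it helps to count CLOSE_WAIT connections per remote peer. A minimal sketch, assuming the common `netstat -ant` column layout (protocol, queues, local address, foreign address, state); sample output is piped in here so the snippet is self-contained, and on Linux `ss -tan state close-wait` is an alternative source:

```shell
count_close_wait() {
  awk '$6 == "CLOSE_WAIT" { split($5, a, ":"); n[a[1]]++ }
       END { for (h in n) print h, n[h] }'
}

# In practice: netstat -ant | count_close_wait
printf '%s\n' \
  'tcp 0 0 10.0.0.1:8983 10.0.0.2:45210 CLOSE_WAIT' \
  'tcp 0 0 10.0.0.1:8983 10.0.0.2:45211 CLOSE_WAIT' \
  'tcp 0 0 10.0.0.1:8983 10.0.0.3:39004 ESTABLISHED' |
count_close_wait   # -> 10.0.0.2 2
```

Watching this count climb while indexing (and stay flat once indexing stops) matches the behavior reported here.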



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379964#comment-15379964
 ] 

ASF subversion and git services commented on SOLR-9290:
---

Commit 833c8ee152fc28b7ec767d0e8f8ecd346229d443 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=833c8ee ]

SOLR-9290: MockCoreContainer should call super.shutdown()


> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2, 5.4.1, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 6.2, 5.5.3
>
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1 but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list (about 
> what we see):
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene with Semantic Vectors (LSA, LSI, LDA)?

2016-07-15 Thread Mila88
Hello, 

I have a project which indexes and scores documents using Lucene. However, 
I'd like to do that using semantic indexing (LSI, LSA or Semantic Vectors). 

I've read old posts where some people said that Semantic Vectors plays well 
with Lucene. However, I noticed that its classes can only be used from the 
command line (through the main method) rather than via an API. 

So, I'd like to know if anyone can suggest another approach that would let me 
use semantic indexing with Lucene. 

Thanks, 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Lucene-with-Semantic-Vectors-LSA-LSI-LDA-tp4287395.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7381) Add new RangeField

2016-07-15 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7381:
---
Attachment: LUCENE-7381.patch

Thanks [~mikemccand]!

bq. Can we make a new enum used only by this new query instead?

Sure thing. Done. I also added random dimension testing to 
{{TestRangeFieldQueries}}.

This should be ready.

> Add new RangeField
> --
>
> Key: LUCENE-7381
> URL: https://issues.apache.org/jira/browse/LUCENE-7381
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
> Attachments: LUCENE-7381.patch, LUCENE-7381.patch
>
>
> I've been tinkering with a new Point-based {{RangeField}} for indexing 
> numeric ranges that could be useful for a number of applications.
> For example, a single dimension represents a span along a single axis, such as 
> indexing a calendar entry's start and end times; a 2d range could represent 
> bounding boxes for geometric applications (e.g., supporting Point-based geo 
> shapes); 3d ranges, bounding cubes for 3d geometric applications (collision 
> detection, 3d geospatial); and 4d ranges for space-time applications. I'm sure 
> there's applicability for 5d+ ranges, but a first incarnation should likely be 
> limited for performance.
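The query-side relations such a field must answer reduce to per-dimension interval comparisons. A minimal sketch of the intersects/contains predicates (an illustration of the idea only, not the patch's API, which is still under review):

```java
public class RangeIntersection {

    /** True if two n-dimensional ranges overlap: they are disjoint
     *  iff they are disjoint along at least one axis. */
    public static boolean intersects(double[] minA, double[] maxA,
                                     double[] minB, double[] maxB) {
        for (int d = 0; d < minA.length; d++) {
            if (maxA[d] < minB[d] || minA[d] > maxB[d]) return false;
        }
        return true;
    }

    /** True if range A fully contains range B, axis by axis. */
    public static boolean contains(double[] minA, double[] maxA,
                                   double[] minB, double[] maxB) {
        for (int d = 0; d < minA.length; d++) {
            if (minB[d] < minA[d] || maxB[d] > maxA[d]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // 1d: two calendar entries, 9:00-11:00 vs 10:00-12:00 -> overlap
        System.out.println(intersects(new double[]{9}, new double[]{11},
                                      new double[]{10}, new double[]{12})); // true
        // 2d: disjoint bounding boxes
        System.out.println(intersects(new double[]{0, 0}, new double[]{1, 1},
                                      new double[]{2, 2}, new double[]{3, 3})); // false
    }
}
```

The per-axis early exit is what makes higher dimensions progressively more expensive, which is one reason to cap the dimension count at first.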



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-9290:
-

Hoss pointed out to me privately that the test fixes for ZkController, 
TestCoreContainer and OverseerTest should be applied to master as well.

> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2, 5.4.1, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 6.2, 5.5.3
>
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1 but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list (about 
> what we see):
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379929#comment-15379929
 ] 

Kristine Jetzke commented on SOLR-9256:
---

This commit broke it: 
https://github.com/apache/lucene-solr/commit/13c9912b3c4698595db8d07fcbc09fe062ee5404
 (don't know yet why)
This commit fixed it:  
https://github.com/apache/lucene-solr/commit/22e5d31cdc9e94aec8043fd451ae1918b5062528

[~benjamin.richter] If you can't upgrade to 6.1.0, you could copy the current 
version of {{org.apache.solr.handler.dataimport.JdbcDataSource}} into your own 
source code and reference it in your data config files.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> 

[jira] [Updated] (SOLR-2199) DIH JdbcDataSource - Support multiple resultsets

2016-07-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-2199:
---
Fix Version/s: 6.1
   master (7.0)

> DIH JdbcDataSource - Support multiple resultsets
> 
>
> Key: SOLR-2199
> URL: https://issues.apache.org/jira/browse/SOLR-2199
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
>Reporter: Mark Waddle
>Assignee: Mikhail Khludnev
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-2199.patch, SOLR-2199.patch
>
>
> Database servers can return multiple result sets from a single statement. 
> This can be beneficial for indexing because it reduces the number of 
> connections and statements being executed against a database, therefore 
> reducing overhead. The JDBC Statement object supports reading multiple 
> ResultSets. Support should be added to the JdbcDataSource to take advantage 
> of this.
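The loop for draining multiple result sets is standard java.sql.Statement usage (execute/getResultSet/getMoreResults/getUpdateCount). The sketch below shows that loop; the driver-free fakes built with dynamic proxies are purely illustrative scaffolding so it runs without a database, and are not part of any proposed patch:

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MultiResultSets {

    /** Canonical JDBC loop draining every result set one execute() produced;
     *  returns how many result sets were seen. */
    public static int drain(Statement stmt, String sql) {
        try {
            int seen = 0;
            boolean isResultSet = stmt.execute(sql);
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = stmt.getResultSet()) {
                        while (rs.next()) { /* hand each row to the indexer */ }
                    }
                    seen++;
                } else if (stmt.getUpdateCount() == -1) {
                    break; // neither a result set nor an update count: done
                }
                isResultSet = stmt.getMoreResults();
            }
            return seen;
        } catch (SQLException e) {
            throw new RuntimeException(e); // unchecked for this sketch only
        }
    }

    /** Driver-free fake: a proxy pretending to be a Statement that
     *  produced one result set per entry of rowsPerSet. */
    public static Statement fakeStatement(int... rowsPerSet) {
        int[] current = {0};
        return (Statement) Proxy.newProxyInstance(
                Statement.class.getClassLoader(), new Class<?>[]{Statement.class},
                (p, m, a) -> {
                    switch (m.getName()) {
                        case "execute":        return rowsPerSet.length > 0;
                        case "getResultSet":   return fakeResultSet(rowsPerSet[current[0]]);
                        case "getMoreResults": current[0]++; return current[0] < rowsPerSet.length;
                        case "getUpdateCount": return -1;
                        default:               return null; // close() etc.
                    }
                });
    }

    private static ResultSet fakeResultSet(int rows) {
        int[] left = {rows};
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(), new Class<?>[]{ResultSet.class},
                (p, m, a) -> "next".equals(m.getName()) ? left[0]-- > 0 : null);
    }

    public static void main(String[] args) {
        System.out.println(drain(fakeStatement(3, 2), "CALL produce_two_sets()")); // prints 2
    }
}
```

In JdbcDataSource the body of the inner while loop is where each row would be converted and handed to the import pipeline.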



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2199) DIH JdbcDataSource - Support multiple resultsets

2016-07-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev closed SOLR-2199.
--

> DIH JdbcDataSource - Support multiple resultsets
> 
>
> Key: SOLR-2199
> URL: https://issues.apache.org/jira/browse/SOLR-2199
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
>Reporter: Mark Waddle
>Assignee: Mikhail Khludnev
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-2199.patch, SOLR-2199.patch
>
>
> Database servers can return multiple result sets from a single statement. 
> This can be beneficial for indexing because it reduces the number of 
> connections and statements being executed against a database, therefore 
> reducing overhead. The JDBC Statement object supports reading multiple 
> ResultSets. Support should be added to the JdbcDataSource to take advantage 
> of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379887#comment-15379887
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 6:59 PM:


I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. 
It fails in 6.0.1 if one of the inner queries returns no result. 

[~mkhludnev] Which branches correspond to those versions? The result set 
handling in  {{JdbcDataSource.java}} does not differ in {{branch_6_0}} and 
{{branch_6_1}}. UPDATE: Nevermind, I found the tags. They do differ in the 
result handling.


was (Author: tinexw):
I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. 
It fails in 6.0.1 if one of the inner queries returns no result. 

[~mkhludnev] Which branches correspond to those versions? The result set 
handling in  {{JdbcDataSource.java}} does not differ in {{branch_6_0}} and 
{{branch_6_1}}.

{code}
diff --git a/branch_6_0:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java b/branch_6_1:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
index 2dfaae7..e1eabeb 100644
--- a/branch_6_0:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
+++ b/branch_6_1:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
@@ -71,7 +71,7 @@ public class JdbcDataSource extends
 
   @Override
   public void init(Context context, Properties initProps) {
-    initProps = decryptPwd(initProps);
+    initProps = decryptPwd(context, initProps);
     Object o = initProps.get(CONVERT_TYPE);
     if (o != null)
       convertType = Boolean.parseBoolean(o.toString());
@@ -112,8 +112,8 @@ public class JdbcDataSource extends
     }
   }
 
-  private Properties decryptPwd(Properties initProps) {
-    String encryptionKey = initProps.getProperty("encryptKeyFile");
+  private Properties decryptPwd(Context context, Properties initProps) {
+    String encryptionKey = context.replaceTokens(initProps.getProperty("encryptKeyFile"));
     if (initProps.getProperty("password") != null && encryptionKey != null) {
       // this means the password is encrypted and use the file to decode it
       try {
{code}

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> 

[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379887#comment-15379887
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 6:50 PM:


I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. 
It fails in 6.0.1 if one of the inner queries returns no result. 

[~mkhludnev] Which branches correspond to those versions? The result set 
handling in  {{JdbcDataSource.java}} does not differ in {{branch_6_0}} and 
{{branch_6_1}}.

{code}
diff --git a/branch_6_0:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java b/branch_6_1:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
index 2dfaae7..e1eabeb 100644
--- a/branch_6_0:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
+++ b/branch_6_1:solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
@@ -71,7 +71,7 @@ public class JdbcDataSource extends
 
   @Override
   public void init(Context context, Properties initProps) {
-    initProps = decryptPwd(initProps);
+    initProps = decryptPwd(context, initProps);
     Object o = initProps.get(CONVERT_TYPE);
     if (o != null)
       convertType = Boolean.parseBoolean(o.toString());
@@ -112,8 +112,8 @@ public class JdbcDataSource extends
     }
   }
 
-  private Properties decryptPwd(Properties initProps) {
-    String encryptionKey = initProps.getProperty("encryptKeyFile");
+  private Properties decryptPwd(Context context, Properties initProps) {
+    String encryptionKey = context.replaceTokens(initProps.getProperty("encryptKeyFile"));
     if (initProps.getProperty("password") != null && encryptionKey != null) {
       // this means the password is encrypted and use the file to decode it
       try {
{code}


was (Author: tinexw):
I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. 
It fails in 6.0.1 if one of the inner queries returns no result. 

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> 

[jira] [Updated] (SOLR-9310) PeerSync fails on a node restart due to IndexFingerPrint mismatch

2016-07-15 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9310:

Attachment: PeerSyncReplicationTest.patch

> PeerSync fails on a node restart due to IndexFingerPrint mismatch
> -
>
> Key: SOLR-9310
> URL: https://issues.apache.org/jira/browse/SOLR-9310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
> Attachments: PeerSyncReplicationTest.patch
>
>
> I found that Peer Sync fails if a node restarts and documents were indexed 
> while node was down. IndexFingerPrint check fails after recovering node 
> applies updates. 
> This happens only when the node restarts, and not if the node just misses 
> updates for a reason other than being down.
> Please check attached patch for the test.






[jira] [Created] (SOLR-9310) PeerSync fails on a node restart due to IndexFingerPrint mismatch

2016-07-15 Thread Pushkar Raste (JIRA)
Pushkar Raste created SOLR-9310:
---

 Summary: PeerSync fails on a node restart due to IndexFingerPrint 
mismatch
 Key: SOLR-9310
 URL: https://issues.apache.org/jira/browse/SOLR-9310
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Pushkar Raste


I found that Peer Sync fails if a node restarts and documents were indexed 
while node was down. IndexFingerPrint check fails after recovering node applies 
updates. 

This happens only when the node restarts, and not if the node just misses 
updates for a reason other than being down.

Please check attached patch for the test.
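The failure mode above is easiest to see with a toy fingerprint check. The sketch below is loosely modeled on the idea behind Solr's IndexFingerprint (an order-independent hash over per-document version numbers up to a max version), not Solr's real algorithm; the mixing constant and method names are illustrative assumptions:

```java
import java.util.List;

public class FingerprintSketch {

    /**
     * Order-independent fingerprint over per-document version numbers.
     * Illustrative sketch only: sums a mixed hash of each version at or
     * below maxVersion, so two replicas match iff they saw the same
     * multiset of versions in that range.
     */
    static long fingerprint(List<Long> versions, long maxVersion) {
        long hash = 0;
        for (long v : versions) {
            if (v <= maxVersion) {
                // mix each version with a large odd constant before summing
                hash += Long.hashCode(v) * 0x9E3779B97F4A7C15L;
            }
        }
        return hash;
    }

    /** Replicas are considered in sync only when their fingerprints agree. */
    static boolean inSync(List<Long> leader, List<Long> replica, long maxVersion) {
        return fingerprint(leader, maxVersion) == fingerprint(replica, maxVersion);
    }
}
```

In the bug scenario, the restarted node applies missed updates via PeerSync but its computed fingerprint still disagrees with the leader's, so the check fails and a full recovery is triggered instead.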






[jira] [Updated] (SOLR-9296) Examine SortingResponseWriter with an eye towards removing extra object creation

2016-07-15 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9296:
-
Attachment: SOLR-9296.patch

Here's what the patch looks like. If you want to play with it be aware of a 
couple of things:

There's a bunch of instrumentation in here that'll be removed. As it stands, 
these two vars are set up for measurement:
  static boolean reuseBuffers = true;
  static boolean justMeasure = true;

reuseBuffers is for comparing the new paths that try to minimize object 
creation; set it to false if you want to see the older behavior.

justMeasure uses the (nocommit) NullWriter to avoid writing to the client for 
perf measurements, except you will get one summary tuple back at the very end.

Posting here for any comments people want to make. In particular I have these 
ReusableWriters as yet more local classes, unsure whether they'd be useful on 
their own and should be moved to some utility class (suggestions?).

More when I have time.
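The buffer-reuse idea discussed here can be sketched independently of the patch. The class below is a minimal illustration of the technique (one reusable StringBuilder per writer instead of a fresh String per field value), with hypothetical names, not the actual ReusableWriter classes from the patch:

```java
public class ReusableFieldWriter {

    // Reused scratch buffer: fixed-width types like long have a bounded
    // textual length (at most 20 chars), so one buffer per writer suffices.
    // Sketch only; the real SOLR-9296 work lives in SortingResponseWriter.
    private final StringBuilder buf = new StringBuilder(24);

    /** Append a long's digits to out without allocating a String per value. */
    void writeLong(long value, StringBuilder out) {
        buf.setLength(0);  // reuse the buffer instead of Long.toString(value)
        buf.append(value); // StringBuilder formats the long in place
        out.append(buf);
    }

    /** Write a comma-separated field of longs through one reused buffer. */
    static String demo(long[] values) {
        ReusableFieldWriter w = new ReusableFieldWriter();
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) out.append(',');
            w.writeLong(values[i], out);
        }
        return out.toString();
    }
}
```

The payoff is purely GC pressure: the output is byte-for-byte identical to the toString path, but the per-value String garbage disappears.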

> Examine SortingResponseWriter with an eye towards removing extra object 
> creation
> 
>
> Key: SOLR-9296
> URL: https://issues.apache.org/jira/browse/SOLR-9296
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2, master (7.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-9296.patch
>
>
> Assigning to myself just to keep from losing track of it. Anyone who wants to 
> take it, please feel free!
> While looking at SOLR-9166 I noticed that SortingResponseWriter does a 
> toString for each field it writes out. At a _very_ preliminary examination it 
> seems like we create a lot of String objects that need to be GC'd. Could we 
> reduce this by using some kind of CharsRef/ByteBuffer/Whatever?
> I've only looked at this briefly, not quite sure what the gotchas are but 
> throwing it out for discussion.
> Some initial thoughts:
> 1> for the fixed types (numerics, dates, booleans) there's a strict upper 
> limit on the size of each value so we can allocate something up-front.
> 2> for string fields, we already get a chars ref so just pass that through?
> 3> must make sure that whatever does the actual writing transfers all the 
> bytes before returning.
> I'm sure I won't get to this for a week or perhaps more, so grab it if you 
> have the bandwidth.






[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379887#comment-15379887
 ] 

Kristine Jetzke commented on SOLR-9256:
---

I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. It 
fails if one of the inner queries returns no result.
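The join="zipper" being exercised here is a merge join over two key-sorted streams, and the empty-inner-query case is exactly the edge it has to survive. The sketch below shows the pattern with plain iterators over {key, value} pairs (hypothetical types, not DIH's Zipper class): it must simply emit nothing for a parent with no children rather than touch a closed ResultSet.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ZipperJoinSketch {

    /**
     * Merge ("zipper") join of two key-sorted streams. Each element is an
     * int[]{key, value}. Emits "parentKey:childValue" for every match and
     * tolerates an empty child stream -- the case described above.
     */
    static List<String> zip(Iterator<int[]> parents, Iterator<int[]> children) {
        List<String> out = new ArrayList<>();
        int[] child = children.hasNext() ? children.next() : null;
        while (parents.hasNext()) {
            int[] p = parents.next();
            // advance past children whose key sorts before this parent
            while (child != null && child[0] < p[0]) {
                child = children.hasNext() ? children.next() : null;
            }
            // collect every child matching this parent's key
            while (child != null && child[0] == p[0]) {
                out.add(p[0] + ":" + child[1]);
                child = children.hasNext() ? children.next() : null;
            }
        }
        return out;
    }
}
```

Both inputs must be sorted by the join key, which is why DIH's zipper join requires ORDER BY on both queries.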

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Comment Edited] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Kristine Jetzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379887#comment-15379887
 ] 

Kristine Jetzke edited comment on SOLR-9256 at 7/15/16 6:44 PM:


I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. 
It fails in 6.0.1 if one of the inner queries returns no result. 


was (Author: tinexw):
I was able to reproduce it in 6.0.1 as well. It also works for me in 6.1.0. It 
fails if one of the inner queries returns no result.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  

[jira] [Commented] (SOLR-7036) Faster method for group.facet

2016-07-15 Thread Jamie Swain (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379870#comment-15379870
 ] 

Jamie Swain commented on SOLR-7036:
---

[~mdvir1] I tried using the 7 files you sent last night applied to 
f9c94706416c80dcdc4514256c2e4cbf975c386b.  I was able to build and run solr, 
and then add around 500k docs to it.  I tried a normal query, grouped query, 
facet query, group + facet query, and all worked fine without 
"group.facet.method=uif".  If I try "group.facet.method=uif", then I never get 
a response to my request; it appears the request just hangs.  

I'm going to dig into this more later today, and I'll probably try running this 
with the debugger to try to see what is happening.

This is what my query looks like:
{code}
"responseHeader": {
"zkConnected": true,
"status": 0,
"QTime": 422,
"params": {
"q": "*:*",
"facet.field": "colorFamily",
"json.nl": "flat",
"omitHeader": "false",
"group.facet": "true",
"rows": "30",
"facet": "true",
"wt": "json",
"group.field": "styleIdColor",
"group": "true"
}
},
{code}

In my schema, the colorFamily field used for faceting is like this:
{code}

{code}

The solr logs don't show me much for this, unfortunately.  


> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.
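Conceptually, grouped faceting counts each facet term once per group: the reported count for a term is the number of distinct groups containing it, not the number of documents. The sketch below shows that semantics with hypothetical integer group and term ids; the patch above computes the same numbers much faster via UninvertedField rather than this naive map-of-sets approach.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GroupFacetSketch {

    /**
     * group.facet semantics in miniature: docs[i] = {groupId, termId}.
     * Returns termId -> number of distinct groups containing that term,
     * so repeated occurrences within one group count only once.
     */
    static Map<Integer, Integer> countPerGroup(int[][] docs) {
        Map<Integer, Set<Integer>> groupsPerTerm = new HashMap<>();
        for (int[] d : docs) {
            // record which groups each term appears in (set dedups per group)
            groupsPerTerm.computeIfAbsent(d[1], t -> new HashSet<>()).add(d[0]);
        }
        Map<Integer, Integer> counts = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : groupsPerTerm.entrySet()) {
            counts.put(e.getKey(), e.getValue().size());
        }
        return counts;
    }
}
```

The slow part in practice is the per-document term lookup, which is where an uninverted field structure pays off over iterating postings per term.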






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+126) - Build # 1165 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1165/
Java: 64bit/jdk-9-ea+126 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete

Error Message:
Got a hard commit we weren't expecting

Stack Trace:
java.lang.AssertionError: Got a hard commit we weren't expecting
at 
__randomizedtesting.SeedInfo.seed([8DDD3059ED9C3623:4A9188C4F634FB93]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:285)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 10934 lines...]
   [junit4] Suite: 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 17260 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17260/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'f' for path 'params/fixed' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ 
"add":"second", "a":"A val", "fixed":"changeit", "b":"B val", 
"wt":"json"},   "context":{ "webapp":"/g_bkc/g", "path":"/dump1", 
"httpMethod":"GET"}},  from server:  http://127.0.0.1:46535/g_bkc/g/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'f' for path 
'params/fixed' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"add":"second",
"a":"A val",
"fixed":"changeit",
"b":"B val",
"wt":"json"},
  "context":{
"webapp":"/g_bkc/g",
"path":"/dump1",
"httpMethod":"GET"}},  from server:  
http://127.0.0.1:46535/g_bkc/g/collection1
at 
__randomizedtesting.SeedInfo.seed([21457743C20AB3A1:A91148996CF6DE59]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:241)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2016-07-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379555#comment-15379555
 ] 

Mark Miller commented on SOLR-7065:
---

That wasn't a finished patch, just work toward getting this working, so don't 
insist on sticking to any of the implementation.

But, if I remember right (and I may not): -1 means success, synced with all 
replicas, and > 0 is how many replicas were synced with when it wasn't all of 
them. In that case, 0 would mean it did not sync with all of them and, in 
fact, synced with none of them.
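The convention described above can be sketched with a small helper. Note this 
is purely illustrative: the method and class names below are made up and are 
not Solr's actual sync API.

```java
// Hypothetical helper illustrating the return-value convention described
// above: -1 = synced with all replicas (success), N > 0 = synced with
// only N replicas, 0 = synced with none. Not Solr's actual API.
public class SyncResult {
    static String describe(int result, int totalReplicas) {
        if (result == -1) {
            return "success: synced with all " + totalReplicas + " replicas";
        }
        if (result == 0) {
            return "synced with none of the " + totalReplicas + " replicas";
        }
        return "synced with only " + result + " of " + totalReplicas + " replicas";
    }

    public static void main(String[] args) {
        System.out.println(describe(-1, 3));
        System.out.println(describe(0, 3));
        System.out.println(describe(2, 3));
    }
}
```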

> Let a replica become the leader regardless of it's last published state if 
> all replicas participate in the election process.
> 
>
> Key: SOLR-7065
> URL: https://issues.apache.org/jira/browse/SOLR-7065
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-7065.patch, SOLR-7065.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9221) Remove Solr contribs: map-reduce, morphlines-core and morphlines-cell

2016-07-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379535#comment-15379535
 ] 

Mark Miller commented on SOLR-9221:
---

If you look at all the JIRAs I've worked on, I don't think you can say it's 
not actively maintained. Not every issue gets addressed, or addressed on your 
timeline, but that is perfectly fine, and it's BS to say it's not maintained.

I have worked a bit on a couple of the latest issues.

We have discussed what we want to do with the contrib, and it seems most 
likely we will either pull the Tika integration contrib (morphlines-cell) or 
pull both morphlines contribs and add a more generic plugin point.

> Remove Solr contribs: map-reduce, morphlines-core and morphlines-cell
> -
>
> Key: SOLR-9221
> URL: https://issues.apache.org/jira/browse/SOLR-9221
> Project: Solr
>  Issue Type: Task
>  Components: contrib - MapReduce, contrib - morphlines-cell, contrib 
> - morphlines-core
>Reporter: Steve Rowe
>Priority: Minor
> Attachments: SOLR-9221.patch
>
>
> The Solr contribs map-reduce, morphlines-cell and morphlines-core contain 
> tests that are not being fixed: SOLR-6489 and SOLR-9220.
> (Some subset of?) these components live in the Kite SDK: http://kitesdk.org - 
> why are they also hosted in Solr?






[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379531#comment-15379531
 ] 

Uwe Schindler commented on LUCENE-7382:
---

Sorry, it's not really slow; it just uses more memory and produces more 
objects. We have the "generic" one for all use cases of AttributeFactory 
where we don't deal with tokens, e.g. FuzzyQuery's term enums and other use 
cases. And there are many!

The Token-specific one is just more memory- and speed-efficient for 
TokenStreams, which is why it is defined there. It just optimizes the case of 
the standard token attributes like term, offsets, positions, ... Otherwise it 
inherits/delegates to the default, so we still need the default one.

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
> Attachments: LUCENE-7382.patch
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (SOLR-9155) Improve ZkController::getLeader exception handling

2016-07-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379532#comment-15379532
 ] 

Mike Drob commented on SOLR-9155:
-

Can somebody take a look at this? I saw this happen on a live system and think 
it would be good to improve the logging around it for my ops team.

> Improve ZkController::getLeader exception handling
> --
>
> Key: SOLR-9155
> URL: https://issues.apache.org/jira/browse/SOLR-9155
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 6.0
>Reporter: Mike Drob
> Attachments: SOLR-9155.patch
>
>
> {{ZkController::getLeader}} does not handle InterruptedException, and instead 
> rethrows it as a SolrException. There's a couple other improvements we could 
> make around the exception handling as well:
> * Not using exceptions for flow control
> * Avoid log-and-rethrow
> * The exception message could provide remedy steps
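The interrupt-handling point in the list above can be sketched minimally. 
This is illustrative only; ZkController's real code waits on ZooKeeper state 
and involves SolrException, neither of which appears here.

```java
// Illustrative-only sketch: handle InterruptedException by restoring the
// thread's interrupt flag instead of burying it inside a generic runtime
// exception, so callers up the stack can still observe the interrupt.
// Not Solr's actual code.
public class InterruptSketch {
    static String waitForLeader() {
        try {
            Thread.sleep(5); // stand-in for waiting on ZooKeeper state
            return "leader-found";
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return "interrupted";
        }
    }

    public static void main(String[] args) {
        System.out.println(waitForLeader());
    }
}
```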






[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2016-07-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379529#comment-15379529
 ] 

Mike Drob commented on SOLR-7065:
-

bq. We are skipping recovery, so we want to return -1 (success).
That's... not what I expected. Can you explain what the possible return values 
mean? I got the impression that the three options are -1, 0, and > 0?

> Let a replica become the leader regardless of it's last published state if 
> all replicas participate in the election process.
> 
>
> Key: SOLR-7065
> URL: https://issues.apache.org/jira/browse/SOLR-7065
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-7065.patch, SOLR-7065.patch
>
>







[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379517#comment-15379517
 ] 

David Smiley commented on LUCENE-7382:
--

Why do we have both; why the "slow" one?

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
> Attachments: LUCENE-7382.patch
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (LUCENE-7380) Add Polygon.fromGeoJSON

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379514#comment-15379514
 ] 

ASF subversion and git services commented on LUCENE-7380:
-

Commit 573aaf75f52f446c8b7ab915eefd420013c544a1 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=573aaf7 ]

LUCENE-7380: add Polygon.fromGeoJSON


> Add Polygon.fromGeoJSON
> ---
>
> Key: LUCENE-7380
> URL: https://issues.apache.org/jira/browse/LUCENE-7380
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7380.patch, LUCENE-7380.patch
>
>
> Working with {{Polygon}} is a bit tricky today because you typically
> must use an external dependency to convert e.g. a GeoJSON string into
> Lucene's Polygon class ... I think this is a weakness in our API, and
> it clearly confuses users: http://markmail.org/thread/mpge4wqo7cfqm4i5
> So I created a simplistic GeoJSON parser to extract a single Polygon
> or MultiPolygon from a GeoJSON string, without any dependencies.  The
> parser only handles the various ways that a single Polygon or
> MultiPolygon can appear in a GeoJSON string, and throws an exception
> otherwise.
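The dependency-free idea in the description can be sketched with a toy 
coordinate extractor. This is NOT Lucene's implementation — the real parser 
is Polygon.fromGeoJSON, which also handles MultiPolygon and error cases; the 
regex approach below is only an illustration of parsing without a JSON 
library.

```java
// Toy sketch of extracting a polygon ring from a GeoJSON string without
// any external JSON dependency. Illustrative only; not Lucene's parser.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GeoJsonSketch {
    // Pull every "[lon, lat]" pair out of the string, in order.
    static List<double[]> outerRing(String geoJson) {
        Matcher m = Pattern
            .compile("\\[\\s*(-?[\\d.]+)\\s*,\\s*(-?[\\d.]+)\\s*\\]")
            .matcher(geoJson);
        List<double[]> ring = new ArrayList<>();
        while (m.find()) {
            ring.add(new double[] { Double.parseDouble(m.group(1)),
                                    Double.parseDouble(m.group(2)) });
        }
        return ring;
    }

    public static void main(String[] args) {
        String json = "{\"type\":\"Polygon\",\"coordinates\":" +
                      "[[[0,0],[10,0],[10,10],[0,10],[0,0]]]}";
        System.out.println(outerRing(json).size()); // closed ring: 5 vertices
    }
}
```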






[jira] [Resolved] (LUCENE-7380) Add Polygon.fromGeoJSON

2016-07-15 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7380.

Resolution: Fixed

> Add Polygon.fromGeoJSON
> ---
>
> Key: LUCENE-7380
> URL: https://issues.apache.org/jira/browse/LUCENE-7380
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7380.patch, LUCENE-7380.patch
>
>
> Working with {{Polygon}} is a bit tricky today because you typically
> must use an external dependency to convert e.g. a GeoJSON string into
> Lucene's Polygon class ... I think this is a weakness in our API, and
> it clearly confuses users: http://markmail.org/thread/mpge4wqo7cfqm4i5
> So I created a simplistic GeoJSON parser to extract a single Polygon
> or MultiPolygon from a GeoJSON string, without any dependencies.  The
> parser only handles the various ways that a single Polygon or
> MultiPolygon can appear in a GeoJSON string, and throws an exception
> otherwise.






[jira] [Commented] (LUCENE-7380) Add Polygon.fromGeoJSON

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379512#comment-15379512
 ] 

ASF subversion and git services commented on LUCENE-7380:
-

Commit 343f374b530fa71dc6102d74725b536f5f1367f3 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=343f374 ]

LUCENE-7380: add Polygon.fromGeoJSON


> Add Polygon.fromGeoJSON
> ---
>
> Key: LUCENE-7380
> URL: https://issues.apache.org/jira/browse/LUCENE-7380
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7380.patch, LUCENE-7380.patch
>
>
> Working with {{Polygon}} is a bit tricky today because you typically
> must use an external dependency to convert e.g. a GeoJSON string into
> Lucene's Polygon class ... I think this is a weakness in our API, and
> it clearly confuses users: http://markmail.org/thread/mpge4wqo7cfqm4i5
> So I created a simplistic GeoJSON parser to extract a single Polygon
> or MultiPolygon from a GeoJSON string, without any dependencies.  The
> parser only handles the various ways that a single Polygon or
> MultiPolygon can appear in a GeoJSON string, and throws an exception
> otherwise.






[jira] [Commented] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379484#comment-15379484
 ] 

ASF subversion and git services commented on LUCENE-7383:
-

Commit 2e0b2f5e37cb65103248467c02388d4e3f86dc91 in lucene-solr's branch 
refs/heads/master from [~martijn.v.groningen]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e0b2f5 ]

LUCENE-7383: fix test, only use BoostQuery once


> FieldQueryTest.testFlattenToParentBlockJoinQuery failure
> 
>
> Key: LUCENE-7383
> URL: https://issues.apache.org/jira/browse/LUCENE-7383
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Martijn van Groningen
>
> Reproduces for me in master:
> {noformat}
>[junit4] Started J0 PID(26725@localhost).
>[junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
> -Dtests.method=testFlattenToParentBlockJoinQuery 
> -Dtests.seed=FBAF10B3AA838B8D -Dtests.slow=true -Dtests.locale=pt 
> -Dtests.timezone=Asia/Chita -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
> docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
>[junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 
> 1.8.0_92 (64-bit)/cpus=8,threads=1,free=430920456,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
>[junit4]   - 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
> {noformat}






[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379495#comment-15379495
 ] 

Uwe Schindler commented on LUCENE-7382:
---

I think the Maven artifacts are not yet up to date. This was committed not 
long ago.

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
> Attachments: LUCENE-7382.patch
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379492#comment-15379492
 ] 

Uwe Schindler commented on LUCENE-7382:
---

This problem affected all Tokenizers, which would now suddenly use the 
"slower" default factory.

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
> Attachments: LUCENE-7382.patch
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Resolved] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen resolved LUCENE-7383.
---
Resolution: Fixed

Thanks for raising the issue, [~mikemccand]!

> FieldQueryTest.testFlattenToParentBlockJoinQuery failure
> 
>
> Key: LUCENE-7383
> URL: https://issues.apache.org/jira/browse/LUCENE-7383
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Martijn van Groningen
>
> Reproduces for me in master:
> {noformat}
>[junit4] Started J0 PID(26725@localhost).
>[junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
> -Dtests.method=testFlattenToParentBlockJoinQuery 
> -Dtests.seed=FBAF10B3AA838B8D -Dtests.slow=true -Dtests.locale=pt 
> -Dtests.timezone=Asia/Chita -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
> docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
>[junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 
> 1.8.0_92 (64-bit)/cpus=8,threads=1,free=430920456,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
>[junit4]   - 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
> {noformat}






[jira] [Commented] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379483#comment-15379483
 ] 

ASF subversion and git services commented on LUCENE-7383:
-

Commit 7b5365678684359d5fb0b76696767b030209ae09 in lucene-solr's branch 
refs/heads/branch_6x from [~martijn.v.groningen]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7b53656 ]

LUCENE-7383: fix test, only use BoostQuery once


> FieldQueryTest.testFlattenToParentBlockJoinQuery failure
> 
>
> Key: LUCENE-7383
> URL: https://issues.apache.org/jira/browse/LUCENE-7383
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Martijn van Groningen
>
> Reproduces for me in master:
> {noformat}
>[junit4] Started J0 PID(26725@localhost).
>[junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
> -Dtests.method=testFlattenToParentBlockJoinQuery 
> -Dtests.seed=FBAF10B3AA838B8D -Dtests.slow=true -Dtests.locale=pt 
> -Dtests.timezone=Asia/Chita -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
> docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
>[junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 
> 1.8.0_92 (64-bit)/cpus=8,threads=1,free=430920456,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
>[junit4]   - 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
> {noformat}






[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379481#comment-15379481
 ] 

Uwe Schindler commented on LUCENE-7355:
---

I posted a patch to fix on LUCENE-7382.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[jira] [Updated] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7382:
--
Attachment: LUCENE-7382.patch

Simple patch.

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
> Attachments: LUCENE-7382.patch
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379453#comment-15379453
 ] 

Terry Smith commented on LUCENE-7382:
-

Thanks, I didn't realize this would hit 6.2. I have nightly builds that 
follow the 6.2.0-SNAPSHOT and 7.0.0-SNAPSHOT artifacts on the ASF snapshot 
Maven repo, and this hadn't hit my 6.2 branch yet.


> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Comment Edited] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379415#comment-15379415
 ] 

Uwe Schindler edited comment on LUCENE-7355 at 7/15/16 2:08 PM:


This broke the usage of the default attribute factory, see LUCENE-7382. I will fix 
this in a later commit. The default should be the same as the one used by 
Tokenizers. The AttributeFactory defined as the default here is just "slow" 
and brings problems (e.g., LUCENE-7382), because it is not the one Lucene uses 
as the default elsewhere. Sorry for not seeing the problem earlier!


was (Author: thetaphi):
This broke the usage of Default attribute factory, see LUCENE-7382. I will fix 
this in a later commit. The default should be the same as the default as given 
by Tokenizers. The AttributeFactory as defined as default here is just "slow" 
and brings problems, because it is not the one used by Lucene as default.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[jira] [Reopened] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-07-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-7355:
---

This broke the usage of the default attribute factory, see LUCENE-7382. I will fix 
this in a later commit. The default should be the same as the one used by 
Tokenizers. The AttributeFactory defined as the default here is just "slow" 
and brings problems, because it is not the one Lucene uses as the default elsewhere.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, LUCENE-7355.patch, 
> LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[jira] [Updated] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7382:
--
 Assignee: Uwe Schindler
Affects Version/s: 6.2
Fix Version/s: 6.2

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
>Assignee: Uwe Schindler
> Fix For: 6.2
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379408#comment-15379408
 ] 

Uwe Schindler commented on LUCENE-7382:
---

Hi Terry,
thanks for opening the issue. The default used by LUCENE-7355 is just wrong; I 
did not review the change closely. As 6.2 has not yet been released, we can change 
this easily. I will post a patch later.

> Wrong default attribute factory in use
> --
>
> Key: LUCENE-7382
> URL: https://issues.apache.org/jira/browse/LUCENE-7382
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (7.0), 6.2
>Reporter: Terry Smith
> Fix For: 6.2
>
>
> Originally reported to the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e
> LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
> uses a different AttributeFactory. 
> https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122
> The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which 
> uses PackedTokenAttributeImpl while the new default is now 
> AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
> PackedTokenAttributeImpl.
> [~thetaphi] Asked me to open an issue for this.






[jira] [Assigned] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen reassigned LUCENE-7383:
-

Assignee: Martijn van Groningen

> FieldQueryTest.testFlattenToParentBlockJoinQuery failure
> 
>
> Key: LUCENE-7383
> URL: https://issues.apache.org/jira/browse/LUCENE-7383
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Martijn van Groningen
>
> Reproduces for me in master:
> {noformat}
>[junit4] Started J0 PID(26725@localhost).
>[junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
> -Dtests.method=testFlattenToParentBlockJoinQuery 
> -Dtests.seed=FBAF10B3AA838B8D -Dtests.slow=true -Dtests.locale=pt 
> -Dtests.timezone=Asia/Chita -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
> docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
>[junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 
> 1.8.0_92 (64-bit)/cpus=8,threads=1,free=430920456,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
>[junit4]   - 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
> {noformat}






[jira] [Created] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7383:
--

 Summary: FieldQueryTest.testFlattenToParentBlockJoinQuery failure
 Key: LUCENE-7383
 URL: https://issues.apache.org/jira/browse/LUCENE-7383
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless


Reproduces for me in master:

{noformat}
   [junit4] Started J0 PID(26725@localhost).
   [junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
-Dtests.method=testFlattenToParentBlockJoinQuery -Dtests.seed=FBAF10B3AA838B8D 
-Dtests.slow=true -Dtests.locale=pt -Dtests.timezone=Asia/Chita 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
   [junit4]>at 
org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
   [junit4]>at 
org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
   [junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 1.8.0_92 
(64-bit)/cpus=8,threads=1,free=430920456,total=504889344
   [junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
   [junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
   [junit4] 
   [junit4] 
   [junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
   [junit4]   - 
org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
{noformat}






[jira] [Commented] (LUCENE-7383) FieldQueryTest.testFlattenToParentBlockJoinQuery failure

2016-07-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379387#comment-15379387
 ] 

Michael McCandless commented on LUCENE-7383:


Likely caused by LUCENE-7376?

> FieldQueryTest.testFlattenToParentBlockJoinQuery failure
> 
>
> Key: LUCENE-7383
> URL: https://issues.apache.org/jira/browse/LUCENE-7383
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> Reproduces for me in master:
> {noformat}
>[junit4] Started J0 PID(26725@localhost).
>[junit4] Suite: org.apache.lucene.search.vectorhighlight.FieldQueryTest
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=FieldQueryTest 
> -Dtests.method=testFlattenToParentBlockJoinQuery 
> -Dtests.seed=FBAF10B3AA838B8D -Dtests.slow=true -Dtests.locale=pt 
> -Dtests.timezone=Asia/Chita -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.10s | FieldQueryTest.testFlattenToParentBlockJoinQuery 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FBAF10B3AA838B8D:6C7C115D5027C6BB]:0)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.AbstractTestCase.assertCollectionQueries(AbstractTestCase.java:162)
>[junit4]>  at 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery(FieldQueryTest.java:966)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
> docValues:{}, maxPointsInLeafNode=1120, maxMBSortInHeap=7.244053319393249, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=pt, timezone=Asia/Chita
>[junit4]   2> NOTE: Linux 4.2.0-38-generic amd64/Oracle Corporation 
> 1.8.0_92 (64-bit)/cpus=8,threads=1,free=430920456,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [FieldQueryTest]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: FBAF10B3AA838B8D]:
>[junit4]   - 
> org.apache.lucene.search.vectorhighlight.FieldQueryTest.testFlattenToParentBlockJoinQuery
> {noformat}






[jira] [Created] (LUCENE-7382) Wrong default attribute factory in use

2016-07-15 Thread Terry Smith (JIRA)
Terry Smith created LUCENE-7382:
---

 Summary: Wrong default attribute factory in use
 Key: LUCENE-7382
 URL: https://issues.apache.org/jira/browse/LUCENE-7382
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: master (7.0)
Reporter: Terry Smith


Originally reported to the mailing list: 
http://mail-archives.apache.org/mod_mbox/lucene-java-user/201607.mbox/%3cCAJ0VynnMAH7N7byPevTV9Htxo-Nk-B7mwUwRgP4X8gN=v4p...@mail.gmail.com%3e

LUCENE-7355 made a change to CustomAnalyzer.createComponents() such that it 
uses a different AttributeFactory. 
https://github.com/apache/lucene-solr/commit/e92a38af90d12e51390b4307ccbe0c24ac7b6b4e#diff-b39a076156e10aa7a4ba86af0357a0feL122


The previous default was TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY which uses 
PackedTokenAttributeImpl while the new default is now 
AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY which does not use 
PackedTokenAttributeImpl.

[~thetaphi] Asked me to open an issue for this.






[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2016-07-15 Thread Scott Stults (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379318#comment-15379318
 ] 

Scott Stults commented on SOLR-7495:


The patched test {{testSimpleGroupedFacet}} verifies that this is still an 
issue with 6.1. It fails with:

{quote}
Caused by: java.lang.IllegalStateException: unexpected docvalues type NUMERIC 
for field 'duration_i1' (expected=SORTED). Use UninvertingReader or index with 
docvalues.
{quote}

However, when SimpleFacets.java is also patched, the test passes. So the fix 
still appears to be necessary in 6.1.
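
For anyone hitting the same IllegalStateException, the workaround usually 
suggested (independent of this patch) is to enable docValues on the field being 
faceted or grouped, so Solr does not have to uninvert it at query time. A 
hypothetical schema.xml excerpt, using the {{duration_i1}} field name from the 
failure above; the field type and attributes are illustrative only:

{code:xml}
<!-- Illustrative only: enabling docValues lets faceting/grouping read a
     per-document column store instead of relying on UninvertingReader,
     which avoids the "unexpected docvalues type NUMERIC" error. -->
<field name="duration_i1" type="int" indexed="true" stored="true"
       docValues="true"/>
{code}

Note that docValues are written at index time, so changing this attribute on an 
existing field requires a full reindex.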

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 17258 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17258/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([6C1F4BCED6EE5A59:E44B7414781237A1]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:111)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Benjamin Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379257#comment-15379257
 ] 

Benjamin Richter commented on SOLR-9256:


Tried Solr 6.0.1 with multiple dataSources (jdbc1, jdbc2, ..., jdbc22); same 
error.
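
The data-config in the report above was mangled by the archive, so for context, 
a generic reconstruction of the zipper-join pattern it describes may help. All 
entity, table, and column names here are hypothetical; only the dataSource 
attributes and the join/cache attributes are taken from the report:

{code:xml}
<!-- Hypothetical DIH config sketching the zipper-join setup from this
     report; entity/table/column names are illustrative. -->
<dataConfig>
  <dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
              url="jdbc:postgresql://host:5432/database" user="username"
              password="password" readOnly="true" autoCommit="false"/>
  <document>
    <entity name="outer" query="SELECT id, title FROM outer_table ORDER BY id">
      <!-- Both sides of a zipper join must be sorted on the join key. -->
      <entity name="inner" join="zipper"
              query="SELECT a_id, value FROM inner_table ORDER BY a_id"
              cacheKey="a_id" cacheLookup="outer.id"/>
    </entity>
  </document>
</dataConfig>
{code}

With autoCommit="false", the PostgreSQL JDBC driver can switch to cursor-based 
fetching, which appears to be why the "ResultSet is closed" failure surfaces 
when several result sets are held open concurrently.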

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen. ("This ResultSet is closed.")
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Commented] (LUCENE-7380) Add Polygon.fromGeoJSON

2016-07-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379253#comment-15379253
 ] 

Robert Muir commented on LUCENE-7380:
-

+1

> Add Polygon.fromGeoJSON
> ---
>
> Key: LUCENE-7380
> URL: https://issues.apache.org/jira/browse/LUCENE-7380
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7380.patch, LUCENE-7380.patch
>
>
> Working with {{Polygon}} is a bit tricky today because you typically
> must use an external dependency to convert e.g. a GeoJSON string into
> Lucene's Polygon class ... I think this is a weakness in our API, and
> it clearly confuses users: http://markmail.org/thread/mpge4wqo7cfqm4i5
> So I created a simplistic GeoJSON parser to extract a single Polygon
> or MultiPolygon from a GeoJSON string, without any dependencies.  The
> parser only handles the various ways that a single Polygon or
> MultiPolygon can appear in a GeoJSON string, and throws an exception
> otherwise.
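
The extraction logic itself is conceptually small. As a rough illustration of 
the idea (not Lucene's actual Java API), a minimal version in Python using only 
the stdlib json module could look like this; the sample GeoJSON string is made 
up:

```python
import json

# Hypothetical GeoJSON document containing a single Polygon.
geojson = """
{"type": "Polygon",
 "coordinates": [[[-122.0, 37.0], [-121.0, 37.0],
                  [-121.0, 38.0], [-122.0, 38.0],
                  [-122.0, 37.0]]]}
"""

def polygon_rings(doc):
    """Return the list of polygons (each a list of rings) for a Polygon
    or MultiPolygon GeoJSON string; raise on any other geometry type."""
    obj = json.loads(doc)
    if obj.get("type") == "Polygon":
        return [obj["coordinates"]]
    if obj.get("type") == "MultiPolygon":
        return obj["coordinates"]
    raise ValueError("not a (Multi)Polygon: %s" % obj.get("type"))

rings = polygon_rings(geojson)
print(len(rings))        # number of polygons
print(len(rings[0][0]))  # vertices in the outer ring (closed, so first == last)
```

A real parser additionally has to handle Feature/FeatureCollection wrappers and 
reject malformed rings, which is the bulk of the patch here.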






[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Benjamin Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379250#comment-15379250
 ] 

Benjamin Richter commented on SOLR-9256:


I tried to reproduce the problem with Solr 6.1 again, but it seems this error 
only occurs on 6.0 and 6.0.1. Sorry for the confusion.

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen. ("This ResultSet is closed.")
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 
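For readers unfamiliar with the {{join="zipper"}} option used in the config above: it merges two result sets that are both sorted on the join key, the way a sort-merge join does, instead of caching one side in memory. A minimal stand-alone sketch of that merge logic (plain Java, no Solr or JDBC involved; the class and method names are illustrative and are not DIH APIs):

```java
import java.util.*;

// Sketch of a "zipper" (sort-merge) join: both inputs must already be
// sorted by the join key, mirroring DIH's requirement for join="zipper".
public class ZipperJoinSketch {
    // Joins parent rows to child rows on a shared integer key.
    // Returns, for each parent key, the list of matching child values.
    public static Map<Integer, List<String>> zip(
            List<Map.Entry<Integer, String>> parents,
            List<Map.Entry<Integer, String>> children) {
        Map<Integer, List<String>> out = new LinkedHashMap<>();
        int c = 0;
        for (Map.Entry<Integer, String> p : parents) {
            List<String> matches = new ArrayList<>();
            // Skip children whose key is smaller than the current parent key.
            while (c < children.size() && children.get(c).getKey() < p.getKey()) c++;
            // Collect children whose key equals the current parent key.
            while (c < children.size() && children.get(c).getKey().equals(p.getKey())) {
                matches.add(children.get(c).getValue());
                c++;
            }
            out.put(p.getKey(), matches);
        }
        return out;
    }
}
```

The relevance to the exception above: a zipper join keeps two ResultSets open and advances them in lockstep, so it is sensitive to one side being closed underneath it, which is exactly what the PSQLException reports.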

[jira] [Updated] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Benjamin Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Richter updated SOLR-9256:
---
Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with postgreSQL 
9.4 Server on Java Version 1.8.0_91 runtime  (was: Solr 6.0, 6.0.1, 6.1 Single 
Instance or SolrCloud with postgreSQL 9.4 Server on Java Version 1.8.0_91 
runtime)

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>

[jira] [Updated] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-07-15 Thread Benjamin Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Richter updated SOLR-9256:
---
Affects Version/s: (was: 6.1)

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1
> Environment: Solr 6.0, 6.0.1, 6.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>

[jira] [Resolved] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9290.
-
   Resolution: Fixed
 Assignee: Shalin Shekhar Mangar
Fix Version/s: 5.5.3
   6.2

Thanks everyone for the help!

> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2, 5.4.1, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 6.2, 5.5.3
>
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1 but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list (about 
> what we see):
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 
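For context on the symptom itself (not Solr-specific): a TCP socket sits in CLOSE_WAIT when the peer has sent its FIN but the local side never calls close(). A minimal, self-contained illustration with plain JDK sockets, assuming nothing about Solr's HttpClient internals:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrates the CLOSE_WAIT condition: the server closes its end (sends FIN),
// the client sees end-of-stream but keeps its own socket open. Until the
// client calls close(), the kernel reports the client socket as CLOSE_WAIT.
public class CloseWaitSketch {
    // Returns true if the client observed the server's FIN (read() == -1)
    // while its own socket was still open -- the CLOSE_WAIT condition.
    public static boolean demonstrate() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();
            accepted.close();                        // server sends FIN
            boolean sawFin = client.getInputStream().read() == -1;
            boolean stillOpen = !client.isClosed();  // the leak: nobody closed it
            client.close();                          // clean up; a real leak skips this
            return sawFin && stillOpen;
        }
    }
}
```

The fix for this issue is the inverse of the leak sketched above: making sure the HTTP layer actually closes (or reuses) connections whose remote end has hung up, rather than leaving them half-closed.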






[jira] [Updated] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9290:

Affects Version/s: 5.3.2
   5.4.1
   6.0
   6.0.1
   6.1

> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.2, 5.4.1, 5.5.1, 5.5.2, 6.0, 6.0.1, 6.1
>Reporter: Anshum Gupta
>Priority: Critical
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>






[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379204#comment-15379204
 ] 

ASF subversion and git services commented on SOLR-9290:
---

Commit e16fb5aa3073021993595acc061cc62bd575adc2 in lucene-solr's branch 
refs/heads/branch_5_5 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e16fb5a ]

SOLR-9290: TCP-connections in CLOSE_WAIT spike during heavy indexing and do not 
decrease
(cherry picked from commit bb7742e)

(cherry picked from commit d00c44de2eab6d01fb1df39a17b17fb769a0f541)


> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1, 5.5.2
>Reporter: Anshum Gupta
>Priority: Critical
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>






[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379122#comment-15379122
 ] 

ASF subversion and git services commented on SOLR-9290:
---

Commit 00ad5efac95f38cb1df9ef33672f17a7167a656f in lucene-solr's branch 
refs/heads/branch_5x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=00ad5ef ]

SOLR-9290: Adding 5.5.3 section and this issue to CHANGES.txt


> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1, 5.5.2
>Reporter: Anshum Gupta
>Priority: Critical
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>






[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15379102#comment-15379102
 ] 

ASF subversion and git services commented on SOLR-9290:
---

Commit d00c44de2eab6d01fb1df39a17b17fb769a0f541 in lucene-solr's branch 
refs/heads/branch_5x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d00c44d ]

SOLR-9290: TCP-connections in CLOSE_WAIT spike during heavy indexing and do not 
decrease
(cherry picked from commit bb7742e)


> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1, 5.5.2
>Reporter: Anshum Gupta
>Priority: Critical
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>






[jira] [Updated] (SOLR-9307) DIH detect all corrupt files: even if i mentionned at tika_data_config onError="skip" solr stop indexing at first corrupt file found

2016-07-15 Thread kostali (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kostali updated SOLR-9307:
--
 Flags: Important
Remaining Estimate: 168h
 Original Estimate: 168h

> DIH detect all corrupt files: even if i mentionned at tika_data_config 
> onError="skip" solr  stop indexing at first corrupt file found
> -
>
> Key: SOLR-9307
> URL: https://issues.apache.org/jira/browse/SOLR-9307
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 5.4.1
> Environment: windows
>Reporter: kostali
>  Labels: tika
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I am trying to index many MS Word and PDF files using solr-5.4.1.
> In the Solr log I only get the description of the error, not the file that 
> caused it. How can I get a list of the corrupt files that Tika cannot index? 
> And when Solr tries to index a corrupt file and fails, how can I force it to 
> continue with the next file? In the DIH handler I set onError="skip" (and also 
> tried onError="continue") in tika_data_config.xml, but neither works: indexing 
> stops when Tika hits the first corrupt file.
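The behavior being asked for, logging the offending file and continuing, can be expressed generically as a per-document try/catch around the extraction step. A hedged sketch of that pattern in plain Java (this is not DIH's actual onError implementation; {{extract}} stands in for the Tika call):

```java
import java.util.*;
import java.util.function.Function;

// Generic skip-on-error loop: process each file independently, record
// failures together with the file name, and continue with the next file.
public class SkipOnErrorSketch {
    public static List<String> indexAll(List<String> files,
                                        Function<String, String> extract,
                                        List<String> failures) {
        List<String> indexed = new ArrayList<>();
        for (String file : files) {
            try {
                indexed.add(extract.apply(file));   // may throw on corrupt input
            } catch (RuntimeException e) {
                failures.add(file + ": " + e.getMessage()); // name the bad file
                // continue with the next file instead of aborting the import
            }
        }
        return indexed;
    }
}
```

This is the contract the reporter expected from onError="skip": the failure list identifies which files were corrupt, and the loop never aborts on the first one.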






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+126) - Build # 1161 - Failure!

2016-07-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1161/
Java: 64bit/jdk-9-ea+126 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:36960/fk/xl/c8n_1x3_lf_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:36960/fk/xl/c8n_1x3_lf_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([C8647AD796822505:4030450D387E48FD]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:753)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:592)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:578)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:174)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378924#comment-15378924
 ] 

ASF subversion and git services commented on SOLR-9290:
---

Commit bb7742ebc7f33f5c9f41cc3ad28b30c20a19a380 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bb7742e ]

SOLR-9290: TCP-connections in CLOSE_WAIT spike during heavy indexing and do not 
decrease


> TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease
> -
>
> Key: SOLR-9290
> URL: https://issues.apache.org/jira/browse/SOLR-9290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1, 5.5.2
>Reporter: Anshum Gupta
>Priority: Critical
> Attachments: SOLR-9290-debug.patch, SOLR-9290-debug.patch, 
> SOLR-9290.patch, SOLR-9290.patch, SOLR-9290.patch, index.sh, setup-solr.sh, 
> setup-solr.sh
>
>
> Heavy indexing on Solr with SSL leads to a lot of connections in CLOSE_WAIT 
> state. 
> At my workplace, we have seen this issue only with 5.5.1 and could not 
> reproduce it with 5.4.1 but from my conversation with Shalin, he knows of 
> users with 5.3.1 running into this issue too. 
> Here's an excerpt from the email [~shaie] sent to the mailing list  (about 
> what we see:
> {quote}
> 1) It consistently reproduces on 5.5.1, but *does not* reproduce on 5.4.1
> 2) It does not reproduce when SSL is disabled
> 3) Restarting the Solr process (sometimes both need to be restarted), the
> count drops to 0, but if indexing continues, they climb up again
> When it does happen, Solr seems stuck. The leader cannot talk to the
> replica, or vice versa, the replica is usually put in DOWN state and
> there's no way to fix it besides restarting the JVM.
> {quote}
> Here's the mail thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201607.mbox/%3c46cc66220a8143dc903fa34e79205...@vp-exc01.dips.local%3E
> Creating this issue so we could track this and have more people comment on 
> what they see. 
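The spike described above can be tracked by counting sockets stuck in CLOSE_WAIT on the Solr port. A minimal sketch, assuming `ss -tan`-style output (iproute2) and Solr's default port 8983; the sample lines in the here-doc are illustrative, not from the actual report:

```shell
# Count sockets in CLOSE-WAIT for a given local port by parsing
# `ss -tan`-style lines (State Recv-Q Send-Q Local:Port Peer:Port).
# Port 8983 and the sample data below are assumptions for illustration.
count_close_wait() {
  # $1 = local port to match; reads socket lines on stdin
  awk -v port=":$1" '$1 == "CLOSE-WAIT" && $4 ~ port"$" { n++ } END { print n+0 }'
}

# Demo against captured sample output; against a live node you would pipe:
#   ss -tan | count_close_wait 8983
count_close_wait 8983 <<'EOF'
CLOSE-WAIT 1 0 10.0.0.5:8983 10.0.0.7:51234
ESTAB      0 0 10.0.0.5:8983 10.0.0.7:51235
CLOSE-WAIT 1 0 10.0.0.5:8983 10.0.0.7:51236
EOF
```

Running this periodically while indexing makes the climb (and the drop to 0 after a restart) easy to observe.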



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9290) TCP-connections in CLOSE_WAIT spike during heavy indexing and do not decrease

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9290:

Summary: TCP-connections in CLOSE_WAIT spike during heavy indexing and do 
not decrease  (was: TCP-connections in CLOSE_WAIT spikes during heavy indexing 
when SSL is enabled)






[jira] [Updated] (SOLR-9290) TCP-connections in CLOSE_WAIT spikes during heavy indexing when SSL is enabled

2016-07-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9290:

Attachment: SOLR-9290.patch

Patch which fixes the ZkControllerTest failure. Thanks to [~varunthacker] for 
spotting the fix.

I'll run precommit + tests again and then commit this patch to 6x and backport 
to 5x.



