[jira] [Created] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-03-27 Thread Kishor gandham (JIRA)
Kishor gandham created SOLR-12155:
-------------------------------------

 Summary: Solr 7.2.1 deadlock in 
UnInvertedField.getUnInvertedField() 
 Key: SOLR-12155
 URL: https://issues.apache.org/jira/browse/SOLR-12155
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2.1
Reporter: Kishor gandham
 Attachments: stack.txt

I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
Solr becomes unresponsive, and we are then forced to kill the JVM and restart 
Solr.

We run a lot of facet queries, and our index has approximately 15 million 
documents. We have recently started using json.facet queries, and some of the 
facet fields use docValues.
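For reference, a json.facet terms request of the kind described might look like the following (collection, field, and facet names here are hypothetical; {{"method":"uif"}} explicitly requests the UnInvertedField code path, which is presumably where the attached stack trace shows threads blocked):

```json
{
  "query": "*:*",
  "facet": {
    "top_tags": {
      "type": "terms",
      "field": "tags_ss",
      "limit": 10,
      "method": "uif"
    }
  }
}
```

Fields faceted this way without docValues are un-inverted lazily on first use, so many concurrent facet requests after a commit can all funnel into UnInvertedField.getUnInvertedField() at once.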



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 4 - Still Unstable

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/4/

7 tests failed.
FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#9): CloudJettyRunner 
[url=http://127.0.0.1:58765/collection1_shard1_replica_n63]

Stack Trace:
java.lang.AssertionError: Unable to restart (#9): CloudJettyRunner 
[url=http://127.0.0.1:58765/collection1_shard1_replica_n63]
at 
__randomizedtesting.SeedInfo.seed([EEA866DCFE8E8CEC:66FC59065072E114]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:103)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 24 - Unstable

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/24/

4 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for new leader null Live Nodes: [127.0.0.1:39482_solr, 
127.0.0.1:41683_solr, 127.0.0.1:42780_solr] Last available state: 
DocCollection(collection1//collections/collection1/state.json/15)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node62":{   "core":"collection1_shard1_replica_n61",   
"base_url":"https://127.0.0.1:35299/solr",   
"node_name":"127.0.0.1:35299_solr",   "state":"down",   
"type":"NRT"}, "core_node64":{   
"core":"collection1_shard1_replica_n63",   
"base_url":"https://127.0.0.1:42780/solr",   
"node_name":"127.0.0.1:42780_solr",   "state":"down",   
"type":"NRT"}, "core_node66":{   
"core":"collection1_shard1_replica_n65",   
"base_url":"https://127.0.0.1:41683/solr",   
"node_name":"127.0.0.1:41683_solr",   "state":"active",   
"type":"NRT"}}}},   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new leader
null
Live Nodes: [127.0.0.1:39482_solr, 127.0.0.1:41683_solr, 127.0.0.1:42780_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/15)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"collection1_shard1_replica_n61",
  "base_url":"https://127.0.0.1:35299/solr",
  "node_name":"127.0.0.1:35299_solr",
  "state":"down",
  "type":"NRT"},
"core_node64":{
  "core":"collection1_shard1_replica_n63",
  "base_url":"https://127.0.0.1:42780/solr",
  "node_name":"127.0.0.1:42780_solr",
  "state":"down",
  "type":"NRT"},
"core_node66":{
  "core":"collection1_shard1_replica_n65",
  "base_url":"https://127.0.0.1:41683/solr",
  "node_name":"127.0.0.1:41683_solr",
  "state":"active",
  "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([D2AD563A50B0EFC:A536C9D9674B3AD6]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
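The "Last available state" dump above explains why the election could stall: only a replica that is both "active" and hosted on a node in the live-nodes set is a viable leader candidate. A minimal sketch of that eligibility check, as plain Java over the values printed in the dump (an illustration, not Solr's actual election code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LeaderEligibility {

    // A replica is a leader candidate only if it is "active" and its
    // host node appears in the cluster's live-nodes set.
    public static List<String> eligibleLeaders(Map<String, Map<String, String>> replicas,
                                               Set<String> liveNodes) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> e : replicas.entrySet()) {
            Map<String, String> r = e.getValue();
            if ("active".equals(r.get("state")) && liveNodes.contains(r.get("node_name"))) {
                out.add(e.getKey());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Values copied from the failure's cluster-state dump.
        Map<String, Map<String, String>> replicas = new LinkedHashMap<>();
        replicas.put("core_node62", Map.of("state", "down",   "node_name", "127.0.0.1:35299_solr"));
        replicas.put("core_node64", Map.of("state", "down",   "node_name", "127.0.0.1:42780_solr"));
        replicas.put("core_node66", Map.of("state", "active", "node_name", "127.0.0.1:41683_solr"));
        Set<String> liveNodes = Set.of("127.0.0.1:39482_solr",
                                       "127.0.0.1:41683_solr",
                                       "127.0.0.1:42780_solr");
        System.out.println(eligibleLeaders(replicas, liveNodes)); // prints [core_node66]
    }
}
```

Applied to the dump, core_node62 and core_node64 are down, so core_node66 is the only candidate; the assertion suggests the timeout fired before that surviving replica finished registering as leader.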

[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416832#comment-16416832
 ] 

Karl Wright commented on LUCENE-8227:
-------------------------------------

This is the failure case I'm going to convert to a unit test first:

{code}
   [junit4]   1> doc=2333 is contained by shape but is outside the returned 
XYZBounds
   [junit4]   1>   unquantized=[lat=-3.1780051348770987E-74, 
lon=-3.032608859187692([X=-0.9951793580358298, Y=-0.1088898762907205, 
Z=-3.181560858610375E-74])]
   [junit4]   1>   quantized=[X=-0.9951793580415914, 
Y=-0.10888987641797832, Z=-2.3309121299774915E-10]
{code}

{code}
   [junit4]   1>   shape=GeoComplexPolygon:
{planetmodel=PlanetModel.WGS84, 
 number of shapes=1, 
 address=40c76856, 
 testPoint=[X=0.38044889065958476, Y=-0.47772089071622287, 
Z=0.7906122375677148], 
 testPointInSet=true, 
 shapes={ {
  [lat=-0.63542308910253, lon=0.9853722928232957([X=0.4446759777403525, 
Y=0.6707549854468698, Z=-0.593478073768])], 
  [lat=0.0, lon=0.0([X=1.0011188539924791, Y=0.0, Z=0.0])], 
[lat=0.45435018176633574, lon=3.141592653589793([X=-0.8989684544372841, 
Y=1.1009188402610632E-16, Z=0.4390846549572752])], 
  [lat=-0.375870856827283, lon=2.9129132647718414([X=-0.9065744420970767, 
Y=0.21100590938346708, Z=-0.36732668582405886])], 
  [lat=-1.2205765069413237, lon=3.141592653589793([X=-0.3424714964202101, 
Y=4.194066218902145E-17, Z=-0.9375649457139603])]}}
{code}

{code}
   [junit4]   1>   bounds=XYZBounds: [xmin=-0.9936143692718389 
xmax=1.0011188549924792 ymin=-1.0011188549924792 ymax=0.6707549864468698 
zmin=-0.9977622930221051 zmax=0.9977622930221051]
{code}
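Replaying the printed numbers as a plain interval-containment check (an illustration over the logged values, not the Lucene XYZBounds API) pins down which coordinate is at fault:

```java
public class BoundsCheck {
    // Quantized point for doc=2333 and the XYZBounds from the failure above.
    static final double X = -0.9951793580415914;
    static final double Y = -0.10888987641797832;
    static final double Z = -2.3309121299774915E-10;

    static final double XMIN = -0.9936143692718389, XMAX = 1.0011188549924792;
    static final double YMIN = -1.0011188549924792, YMAX = 0.6707549864468698;
    static final double ZMIN = -0.9977622930221051, ZMAX = 0.9977622930221051;

    // Closed-interval membership test.
    static boolean inside(double v, double lo, double hi) {
        return lo <= v && v <= hi;
    }

    public static void main(String[] args) {
        System.out.println("X in [xmin,xmax]? " + inside(X, XMIN, XMAX)); // false: X < xmin
        System.out.println("Y in [ymin,ymax]? " + inside(Y, YMIN, YMAX)); // true
        System.out.println("Z in [zmin,zmax]? " + inside(Z, ZMIN, ZMAX)); // true
    }
}
```

The X coordinate of both the quantized and the unquantized point (about -0.99518) lies below xmin (about -0.99361), while Y and Z are comfortably inside; the returned XYZBounds are too tight in X for a polygon with an edge near lon=pi on WGS84.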



> TestGeo3DPoint.testGeo3DRelations() reproducing failures
> 
>
> Key: LUCENE-8227
> URL: https://issues.apache.org/jira/browse/LUCENE-8227
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test, modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>Priority: Major
>
> Three failures: two NPEs and one assert "assess edge that ends in a crossing 
> can't both up and down":
> 1.a. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1512/]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=C1F88333EC85EAE0 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ga -Dtests.timezone=America/Ojinaga -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   10.4s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C1F88333EC85EAE0:7187FEA763C8447C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:569)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseMembershipShape.isWithin(GeoBaseMembershipShape.java:36)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseShape.getBounds(GeoBaseShape.java:35)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.getBounds(GeoComplexPolygon.java:440)
>[junit4]>  at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> 1.b. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/184/]:
> {noformat}
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=F2A368AB96A2FD75 -Dtests.multiplier=2 -Dtests.locale=fr-ML 
> -Dtests.timezone=America/Godthab -Dtests.asserts=true 
> 

[jira] [Resolved] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-12035.
------------------------------------------
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> ExtendedDismaxQParser fails to include charfilters in nostopanalyzer
> 
>
> Key: SOLR-12035
> URL: https://issues.apache.org/jira/browse/SOLR-12035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Tim Allison
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In some circumstances, the ExtendedDismaxQParser tries to remove stop filters 
> from the TokenizerChain.  When building the new analyzer without the stop 
> filters, the charfilters from the original TokenizerChain are not copied over.
> The fix is trivial.
> {noformat}
> -  TokenizerChain newa = new TokenizerChain(tcq.getTokenizerFactory(), 
> newtf);
> + TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), 
> tcq.getTokenizerFactory(), newtf);
> {noformat}
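The one-line fix can be illustrated with a self-contained model of the chain (simplified stand-in classes, not the real Solr TokenizerChain): the two-argument constructor silently starts from an empty char-filter list, so an analyzer rebuilt through it loses the char filters.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Solr's TokenizerChain: an analyzer chain is
// (char filters, tokenizer, token filters). Not the real Solr classes.
public class ChainDemo {

    public static final class Chain {
        public final List<String> charFilters;
        public final String tokenizer;
        public final List<String> tokenFilters;

        // Two-argument constructor: starts with NO char filters.
        public Chain(String tokenizer, List<String> tokenFilters) {
            this(List.of(), tokenizer, tokenFilters);
        }

        public Chain(List<String> charFilters, String tokenizer, List<String> tokenFilters) {
            this.charFilters = charFilters;
            this.tokenizer = tokenizer;
            this.tokenFilters = tokenFilters;
        }
    }

    // Rebuild the chain without stop filters, as edismax does for its
    // no-stopwords analyzer. keepCharFilters=false models the bug,
    // keepCharFilters=true models the one-line fix.
    public static Chain withoutStopFilter(Chain original, boolean keepCharFilters) {
        List<String> kept = new ArrayList<>();
        for (String f : original.tokenFilters) {
            if (!"stop".equals(f)) kept.add(f);
        }
        return keepCharFilters
                ? new Chain(original.charFilters, original.tokenizer, kept) // fix: copy char filters
                : new Chain(original.tokenizer, kept);                      // bug: char filters dropped
    }

    public static void main(String[] args) {
        Chain orig = new Chain(List.of("htmlStrip"), "standard", List.of("lowercase", "stop"));
        System.out.println(withoutStopFilter(orig, false).charFilters); // []  (htmlStrip lost)
        System.out.println(withoutStopFilter(orig, true).charFilters);  // [htmlStrip]
    }
}
```

With a char filter such as HTMLStripCharFilter in play, the buggy path silently changes what the no-stopwords analyzer sees, which is why the three-argument constructor must be used.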






[jira] [Commented] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416812#comment-16416812
 ] 

Tomás Fernández Löbbe commented on SOLR-12035:
----------------------------------------------

Thanks Tim!

> ExtendedDismaxQParser fails to include charfilters in nostopanalyzer
> 
>
> Key: SOLR-12035
> URL: https://issues.apache.org/jira/browse/SOLR-12035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Tim Allison
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In some circumstances, the ExtendedDismaxQParser tries to remove stop filters 
> from the TokenizerChain.  When building the new analyzer without the stop 
> filters, the charfilters from the original TokenizerChain are not copied over.
> The fix is trivial.
> {noformat}
> -  TokenizerChain newa = new TokenizerChain(tcq.getTokenizerFactory(), 
> newtf);
> + TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), 
> tcq.getTokenizerFactory(), newtf);
> {noformat}






[GitHub] lucene-solr pull request #329: SOLR-12035

2018-03-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/329


---




[jira] [Commented] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416809#comment-16416809
 ] 

ASF subversion and git services commented on SOLR-12035:
--------------------------------------------------------

Commit 3e29c7dbd507032315aa698702daef1c7a370f75 in lucene-solr's branch 
refs/heads/master from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e29c7d ]

SOLR-12035: edimax should include charfilters in nostopanalyzer

This closes #329


> ExtendedDismaxQParser fails to include charfilters in nostopanalyzer
> 
>
> Key: SOLR-12035
> URL: https://issues.apache.org/jira/browse/SOLR-12035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Tim Allison
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some circumstances, the ExtendedDismaxQParser tries to remove stop filters 
> from the TokenizerChain.  When building the new analyzer without the stop 
> filters, the charfilters from the original TokenizerChain are not copied over.
> The fix is trivial.
> {noformat}
> -  TokenizerChain newa = new TokenizerChain(tcq.getTokenizerFactory(), 
> newtf);
> + TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), 
> tcq.getTokenizerFactory(), newtf);
> {noformat}






[jira] [Commented] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416811#comment-16416811
 ] 

ASF subversion and git services commented on SOLR-12035:
--------------------------------------------------------

Commit 8b8187d1ee81e04d6bf7f7484e95224fe36683f4 in lucene-solr's branch 
refs/heads/branch_7x from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b8187d ]

SOLR-12035: edimax should include charfilters in nostopanalyzer

This closes #329


> ExtendedDismaxQParser fails to include charfilters in nostopanalyzer
> 
>
> Key: SOLR-12035
> URL: https://issues.apache.org/jira/browse/SOLR-12035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Tim Allison
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some circumstances, the ExtendedDismaxQParser tries to remove stop filters 
> from the TokenizerChain.  When building the new analyzer without the stop 
> filters, the charfilters from the original TokenizerChain are not copied over.
> The fix is trivial.
> {noformat}
> -  TokenizerChain newa = new TokenizerChain(tcq.getTokenizerFactory(), 
> newtf);
> + TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), 
> tcq.getTokenizerFactory(), newtf);
> {noformat}






[jira] [Comment Edited] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416784#comment-16416784
 ] 

Cao Manh Dat edited comment on SOLR-12066 at 3/28/18 4:36 AM:
--------------------------------------------------------------

Attached a patch for this ticket:
* remove the core's data
* add a test
* make the exception log less verbose (new format below)

{quote}
26283 ERROR 
(coreContainerWorkExecutor-42-thread-1-processing-n:127.0.0.1:52836_solr) 
[n:127.0.0.1:52836_solr] o.a.s.c.CoreContainer Error waiting for SolrCore 
to be loaded on startup
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node3 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1739) 
~[java/:?]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1637) 
~[java/:?]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
 ~[java/:?]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:644) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.2.jar:3.2.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_151]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
{quote}


was (Author: caomanhdat):
Attached a patch for this ticket:
* remove the core's data
* add a test
* make the exception log less verbose (new format below)

{quote}
26192 ERROR 
(coreContainerWorkExecutor-42-thread-1-processing-n:127.0.0.1:52489_solr) 
[n:127.0.0.1:52489_solr] o.a.s.c.CoreContainer Error waiting for SolrCore 
to be created
java.util.concurrent.ExecutionException: 
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node4 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[?:1.8.0_151]
at 
org.apache.solr.core.CoreContainer.lambda$load$14(CoreContainer.java:673) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.2.jar:3.2.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_151]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.apache.solr.cloud.ZkController$NotInClusterStateException: 
coreNodeName core_node4 does not exist in shard shard1, ignore the exception if 
the replica was deleted
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1739) 
~[java/:?]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1637) 
~[java/:?]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
 ~[java/:?]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:644) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.2.jar:3.2.2]
... 5 more
{quote}

> Autoscaling move replica can cause core initialization failure on the 
> original JVM
> --
>
> Key: SOLR-12066
> URL: https://issues.apache.org/jira/browse/SOLR-12066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12066.patch
>
>
> Initially when SOLR-12047 was created it looked like waiting for a state in 
> ZK for only 3 seconds was the culprit for cores not loading up
>  
> But it turns out to be something else. Here are the steps to reproduce this 
> 

[jira] [Comment Edited] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416784#comment-16416784
 ] 

Cao Manh Dat edited comment on SOLR-12066 at 3/28/18 4:24 AM:
--------------------------------------------------------------

Attached a patch for this ticket:
* remove the core's data
* add a test
* make the exception log less verbose (new format below)

{quote}
26192 ERROR 
(coreContainerWorkExecutor-42-thread-1-processing-n:127.0.0.1:52489_solr) 
[n:127.0.0.1:52489_solr] o.a.s.c.CoreContainer Error waiting for SolrCore 
to be created
java.util.concurrent.ExecutionException: 
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node4 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[?:1.8.0_151]
at 
org.apache.solr.core.CoreContainer.lambda$load$14(CoreContainer.java:673) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.2.jar:3.2.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_151]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.apache.solr.cloud.ZkController$NotInClusterStateException: 
coreNodeName core_node4 does not exist in shard shard1, ignore the exception if 
the replica was deleted
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1739) 
~[java/:?]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1637) 
~[java/:?]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
 ~[java/:?]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:644) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.2.jar:3.2.2]
... 5 more
{quote}


was (Author: caomanhdat):
Attached a patch for this ticket:
* remove the core's data
* add a test
* make the exception log less verbose
{quote}

26192 ERROR 
(coreContainerWorkExecutor-42-thread-1-processing-n:127.0.0.1:52489_solr) 
[n:127.0.0.1:52489_solr] o.a.s.c.CoreContainer Error waiting for SolrCore 
to be created
java.util.concurrent.ExecutionException: 
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node4 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[?:1.8.0_151]
at 
org.apache.solr.core.CoreContainer.lambda$load$14(CoreContainer.java:673) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.2.jar:3.2.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_151]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.apache.solr.cloud.ZkController$NotInClusterStateException: 
coreNodeName core_node4 does not exist in shard shard1, ignore the exception if 
the replica was deleted
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1739) 
~[java/:?]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1637) 
~[java/:?]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
 ~[java/:?]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:644) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.2.jar:3.2.2]
... 5 more
{quote}

> Autoscaling move replica can cause core initialization failure on the 
> original JVM
> 

[jira] [Commented] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416784#comment-16416784
 ] 

Cao Manh Dat commented on SOLR-12066:
-------------------------------------

Attached a patch for this ticket:
* remove the core's data
* add a test
* make the exception log less verbose
{quote}

26192 ERROR 
(coreContainerWorkExecutor-42-thread-1-processing-n:127.0.0.1:52489_solr) 
[n:127.0.0.1:52489_solr] o.a.s.c.CoreContainer Error waiting for SolrCore 
to be created
java.util.concurrent.ExecutionException: 
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node4 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[?:1.8.0_151]
at 
org.apache.solr.core.CoreContainer.lambda$load$14(CoreContainer.java:673) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.2.jar:3.2.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_151]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.apache.solr.cloud.ZkController$NotInClusterStateException: 
coreNodeName core_node4 does not exist in shard shard1, ignore the exception if 
the replica was deleted
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1739) 
~[java/:?]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1637) 
~[java/:?]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
 ~[java/:?]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:644) 
~[java/:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.2.jar:3.2.2]
... 5 more
{quote}

> Autoscaling move replica can cause core initialization failure on the 
> original JVM
> --
>
> Key: SOLR-12066
> URL: https://issues.apache.org/jira/browse/SOLR-12066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12066.patch
>
>
> Initially, when SOLR-12047 was created, it looked like waiting only 3 seconds 
> for a state in ZK was the culprit for cores not loading up.
>  
> But it turns out to be something else. Here are the steps to reproduce the 
> problem:
>  
>  - create a 3 node cluster
>  - create a 1 shard x 2 replica collection to use node1 and node2 ( 
> [http://localhost:8983/solr/admin/collections?action=CREATE&name=test_node_lost&numShards=1&replicationFactor=2&autoAddReplicas=true]
>  )
>  - stop node 2 : ./bin/solr stop -p 7574
>  - Solr will create a new replica on node3 after 30 seconds because of the 
> ".auto_add_replicas" trigger
>  - At this point state.json has info about replicas being on node1 and node3
>  - Start node2. Bam!
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1053)
> ...
> Caused by: org.apache.solr.common.SolrException: 
> at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1619)
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1030)
> ...
> Caused by: org.apache.solr.common.SolrException: coreNodeName core_node4 does 
> not exist in shard shard1: 
> DocCollection(test_node_lost//collections/test_node_lost/state.json/12)={
> ...{code}
>  
> The practical effect of this is not big, since the move-replica operation has 
> already put the replica on another JVM. But to the user it's super confusing 
> what's happening. They can never get rid of this error unless they manually 
> clean up the data directory on node2 and restart.
>  
> Please note: I chose autoAddReplicas=true to reproduce this, but a user could 
> be using a node-lost trigger and run into the same issue.

[jira] [Updated] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12066:

Attachment: SOLR-12066.patch




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12066:

Attachment: (was: SOLR-12066)







[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1604 - Unstable!

2018-03-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1604/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaOnIndexing

Error Message:
Time out waiting for LIR state get removed

Stack Trace:
java.util.concurrent.TimeoutException: Time out waiting for LIR state get 
removed
at 
__randomizedtesting.SeedInfo.seed([F59E73D0A1114671:8CE55F5F028C8C52]:0)
at org.apache.solr.util.TimeOut.waitFor(TimeOut.java:66)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaOnIndexing(DeleteReplicaTest.java:331)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaOnIndexing

Error Message:
Time out waiting for LIR state get removed

Stack Trace:
java.util.concurrent.TimeoutException: Time out waiting for LIR state get 
removed
at 
__randomizedtesting.SeedInfo.seed([F59E73D0A1114671:8CE55F5F028C8C52]:0)
  

[jira] [Updated] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-27 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12066:

Attachment: SOLR-12066







[jira] [Commented] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416779#comment-16416779
 ] 

Shawn Heisey commented on SOLR-12154:
-

Does that also cover log4j 1.2?  We do include a dependency on the log4j1 
compatibility jar, which implements the older API within the newer API.

Today has been a very dense day.  (multicast joke!)  So I'm too fried to figure 
out what you're talking about with the Level class there. :)


> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
> Attachments: SOLR-12154.patch
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIs today, and code that does something 
> with log4j has to suppress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly.
> After SOLR-7887, the following classes explicitly import the 
> org.apache.logging.log4j.** package, so let's validate its usage:
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel
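
The proposed ban can be expressed as a forbidden-apis signatures entry along these lines. This is a sketch of the mechanism, not the actual file committed to the build: the file name, the exact message text, and whether a wildcard entry is used are assumptions.

```text
# Hypothetical forbidden-apis signatures entry (file name and message are
# assumptions, not the committed configuration). Classes that legitimately
# need Log4j2, such as Log4j2Watcher, would carry @SuppressForbidden.
org.apache.logging.log4j.** @ Use the slf4j API (org.slf4j.Logger / LoggerFactory) instead of Log4j2 directly
```

With such an entry in place, the forbidden-apis check fails the build for any of the listed classes unless they explicitly suppress the signature.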






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 525 - Unstable!

2018-03-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/525/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([EDE37BEEE9945F63:65B744344768329B]:0)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 1791 lines...]
   [junit4] JVM J1: stdout was not empty, see: 

[jira] [Assigned] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-12035:


Assignee: Tomás Fernández Löbbe

> ExtendedDismaxQParser fails to include charfilters in nostopanalyzer
> 
>
> Key: SOLR-12035
> URL: https://issues.apache.org/jira/browse/SOLR-12035
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Tim Allison
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some circumstances, the ExtendedDismaxQParser tries to remove stop filters 
> from the TokenizerChain.  When building the new analyzer without the stop 
> filters, the charfilters from the original TokenizerChain are not copied over.
> The fix is trivial.
> {noformat}
> -  TokenizerChain newa = new TokenizerChain(tcq.getTokenizerFactory(), 
> newtf);
> + TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), 
> tcq.getTokenizerFactory(), newtf);
> {noformat}
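> 
> The effect of dropping the char filters can be seen with a small model. This is an illustrative sketch, not Solr's actual TokenizerChain API: the class and method names are hypothetical, but it shows why an analysis chain rebuilt without its char filters tokenizes different input text than the original chain did.
> {code:java}

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative model (NOT Solr's TokenizerChain): an analysis chain is
// charFilters -> tokenizer -> tokenFilters. Rebuilding the chain without the
// stop filter must carry the char filters over, otherwise the rebuilt
// analyzer sees different input text than the original analyzer did.
public class ChainRebuildDemo {

    static String applyCharFilters(List<Function<String, String>> charFilters, String text) {
        for (Function<String, String> f : charFilters) {
            text = f.apply(text);
        }
        return text;
    }

    public static void main(String[] args) {
        // char filter: strip HTML-ish tags before tokenizing
        // (stand-in for something like HTMLStripCharFilterFactory)
        List<Function<String, String>> charFilters = new ArrayList<>();
        charFilters.add(t -> t.replaceAll("<[^>]*>", ""));

        String raw = "<b>the</b> quick fox";

        // buggy rebuild: char filters dropped, tags leak into the token stream
        String buggy = applyCharFilters(new ArrayList<>(), raw);
        // fixed rebuild: char filters copied over from the original chain
        String fixed = applyCharFilters(charFilters, raw);

        System.out.println("buggy input to tokenizer: " + buggy);
        System.out.println("fixed input to tokenizer: " + fixed);
    }
}
```

> {code}
> In the buggy case the tokenizer receives the raw markup, so queries match differently than with the original analyzer.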






[jira] [Commented] (SOLR-12035) ExtendedDismaxQParser fails to include charfilters in nostopanalyzer

2018-03-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416759#comment-16416759
 ] 

Tomás Fernández Löbbe commented on SOLR-12035:
--

PR looks good to me







[jira] [Commented] (SOLR-12153) Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416745#comment-16416745
 ] 

ASF subversion and git services commented on SOLR-12153:


Commit f8af2747836afb1d821ccff37a6e9e1e8eab0989 in lucene-solr's branch 
refs/heads/master from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8af274 ]

SOLR-12153: Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync
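
A common way to remove such a sleep is sketched below, under the assumption that the test can observe the watcher callback directly. This is illustrative, not the committed patch: have the callback count down a latch and await it with a generous timeout instead of sleeping a fixed interval.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not the committed SOLR-12153 patch): instead of
// Thread.sleep()-ing a fixed interval and hoping an async watcher has fired,
// have the callback count down a latch and await it with a generous timeout.
// The test resumes as soon as the event arrives and only fails after the
// timeout, removing the timing dependency that causes sporadic failures.
public class WatcherLatchDemo {

    /** Blocks until the latch reaches zero or the timeout elapses. */
    static boolean awaitEvents(CountDownLatch latch, long timeoutMs) {
        try {
            return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        CountDownLatch watchFired = new CountDownLatch(1);
        // stand-in for an asynchronous ZooKeeper watcher callback
        new Thread(watchFired::countDown).start();
        // no fixed sleep: returns as soon as the callback has run
        System.out.println("fired=" + awaitEvents(watchFired, 10_000));
    }
}
```

A generous timeout (here 10 seconds) only matters in the failure case; in the common case the test proceeds immediately, so the suite gets both faster and less flaky.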


> Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync
> --
>
> Key: SOLR-12153
> URL: https://issues.apache.org/jira/browse/SOLR-12153
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Trivial
> Attachments: SOLR-12153.patch
>
>
> The time dependency is probably causing the sporadic failures like:
> {noformat}
> FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync
> Error Message:
> Stack Trace:
> java.lang.AssertionError
>         at 
> __randomizedtesting.SeedInfo.seed([D1CF6CAB31D9C539:B979BF09A43DC4A7]:0)
>         at org.junit.Assert.fail(Assert.java:92)
>         at org.junit.Assert.assertTrue(Assert.java:43)
>         at org.junit.Assert.assertTrue(Assert.java:54)
>         at 
> org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync(ZkSolrClientTest.java:257)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> 

[jira] [Commented] (SOLR-12153) Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416746#comment-16416746
 ] 

ASF subversion and git services commented on SOLR-12153:


Commit bf222c35da4688024e26987fbab6965b9207 in lucene-solr's branch 
refs/heads/branch_7x from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf222c3 ]

SOLR-12153: Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync


> Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync
> --
>
> Key: SOLR-12153
> URL: https://issues.apache.org/jira/browse/SOLR-12153
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Trivial
> Attachments: SOLR-12153.patch
>
>
> The time dependency is probably causing the sporadic failures like:
> {noformat}
> FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync
> Error Message:
> Stack Trace:
> java.lang.AssertionError
>         at 
> __randomizedtesting.SeedInfo.seed([D1CF6CAB31D9C539:B979BF09A43DC4A7]:0)
>         at org.junit.Assert.fail(Assert.java:92)
>         at org.junit.Assert.assertTrue(Assert.java:43)
>         at org.junit.Assert.assertTrue(Assert.java:54)
>         at 
> org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync(ZkSolrClientTest.java:257)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> 
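For context on the fix direction: replacing a fixed Thread.sleep with a condition-based wait is the usual cure for this kind of sporadic failure. A minimal, hypothetical sketch (not the actual patch; names are illustrative) using a CountDownLatch with a generous timeout:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchWaitSketch {
    // Wait for an async event without a fixed Thread.sleep: returns true as
    // soon as the callback fires, or false only if the timeout elapses first.
    static boolean waitForEvent(long timeoutSeconds) throws InterruptedException {
        CountDownLatch watchFired = new CountDownLatch(1);
        // Simulate an async watcher callback firing on another thread.
        new Thread(watchFired::countDown).start();
        return watchFired.await(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitForEvent(10)); // prints true once the callback has run
    }
}
```

The test then asserts on the returned flag instead of hoping the watch fired within an arbitrary sleep window.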

Re: Review Request 65942: SOLR-11982: Add support for shards.sort parameter

2018-03-27 Thread Tomás Fernández Löbbe

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65942/
---

(Updated March 28, 2018, 3:19 a.m.)


Review request for lucene.


Changes
---

Uploading most recent patch from SOLR-11982 on behalf of Ere Maijala


Repository: lucene-solr


Description
---

Creating this review request on Ere Maijala's patch. See SOLR-11982 for 
previous discussion.
It would be nice to have the possibility to easily sort the shards in the 
preferred order, e.g. by replica type. The attached patch adds support for a 
shards.sort parameter that allows one to sort e.g. PULL and TLOG replicas first 
with ``shards.sort=replicaType:PULL|TLOG`` (which would mean that NRT replicas 
wouldn't be hit with queries unless they're the only ones available) and/or to 
sort by replica location (like preferLocalShards=true but more versatile).
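The core ordering idea can be sketched in plain Java — this is an illustrative comparator over replica-type names, not the patch's actual implementation (method and class names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ReplicaSortSketch {
    // Order replica types by a preference list such as ["PULL", "TLOG"]:
    // preferred types come first (in the given order), all others go last.
    static List<String> sortByTypePreference(List<String> replicaTypes,
                                             List<String> preferred) {
        List<String> sorted = new ArrayList<>(replicaTypes);
        sorted.sort(Comparator.comparingInt((String t) -> {
            int i = preferred.indexOf(t);
            return i == -1 ? Integer.MAX_VALUE : i; // unmentioned types sort last
        }));
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sortByTypePreference(
            Arrays.asList("NRT", "PULL", "TLOG"),
            Arrays.asList("PULL", "TLOG"))); // [PULL, TLOG, NRT]
    }
}
```

With ``replicaType:PULL|TLOG`` this ordering means NRT replicas are only consulted when no PULL or TLOG replica is available.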


Diffs (updated)
-

  solr/CHANGES.txt 4af91ea76c 
  
solr/core/src/java/org/apache/solr/handler/component/HttpShardHandlerFactory.java
 6bfd36af94 
  solr/core/src/java/org/apache/solr/util/TimeOut.java ce996f4326 
  
solr/core/src/test/org/apache/solr/handler/component/TestHttpShardHandlerFactory.java
 3ffa015a26 
  solr/solr-ref-guide/src/distributed-requests.adoc 096f632bbd 
  solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc 81c6f8 
  solr/solrj/src/java/org/apache/solr/common/params/ShardParams.java cbc33f41f4 
  
solr/solrj/src/test/org/apache/solr/client/solrj/impl/CloudSolrClientTest.java 
e54f9ad7c6 


Diff: https://reviews.apache.org/r/65942/diff/3/

Changes: https://reviews.apache.org/r/65942/diff/2-3/


Testing
---


Thanks,

Tomás Fernández Löbbe



[JENKINS] Lucene-Solr-NightlyTests-7.3 - Build # 11 - Still Failing

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.3/11/

9 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexWriterThreadsToSegments.testSegmentCountOnFlushRandom

Error Message:
Captured an uncaught exception in thread: Thread[id=8926, name=Thread-7641, 
state=RUNNABLE, group=TGRP-TestIndexWriterThreadsToSegments]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8926, name=Thread-7641, state=RUNNABLE, 
group=TGRP-TestIndexWriterThreadsToSegments]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at __randomizedtesting.SeedInfo.seed([2155517A325F9CA3]:0)
at java.util.HashMap.resize(HashMap.java:704)
at java.util.HashMap.putVal(HashMap.java:663)
at java.util.HashMap.put(HashMap.java:612)
at java.util.HashSet.add(HashSet.java:220)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.noDups(IndexWriter.java:862)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:851)
at 
org.apache.lucene.index.IndexWriter.numDeletedDocs(IndexWriter.java:877)
at org.apache.lucene.index.IndexWriter.segString(IndexWriter.java:4668)
at org.apache.lucene.index.IndexWriter.segString(IndexWriter.java:4658)
at org.apache.lucene.index.IndexWriter.segString(IndexWriter.java:4645)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:517)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2247)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:511)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:293)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:268)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:258)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
at 
org.apache.lucene.index.TestIndexWriterThreadsToSegments$CheckSegmentCount.run(TestIndexWriterThreadsToSegments.java:131)
at java.util.concurrent.CyclicBarrier.dowait(CyclicBarrier.java:220)
at java.util.concurrent.CyclicBarrier.await(CyclicBarrier.java:362)
at 
org.apache.lucene.index.TestIndexWriterThreadsToSegments$2.run(TestIndexWriterThreadsToSegments.java:216)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterThreadsToSegments

Error Message:
The test or suite printed 18924 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 18924 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([2155517A325F9CA3]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:40466/mupg/f/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40466/mupg/f/collection1
at 
__randomizedtesting.SeedInfo.seed([88AD74A55052EB03:F94B7FFEAE86FB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 21711 - Still Unstable!

2018-03-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21711/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations

Error Message:
invalid bounds for shape=GeoComplexPolygon: {planetmodel=PlanetModel.WGS84, 
number of shapes=1, address=201456b, testPoint=[lat=-0.2601577399030455, 
lon=2.905718760892798([X=-0.9404332714537585, Y=0.22603114757077347, 
Z=-0.2574633923189196])], testPointInSet=true, shapes={ 
{[lat=-1.5707963267948966, lon=0.0([X=6.109531986173988E-17, Y=0.0, 
Z=-0.997762292022105])], [lat=1.0987261172592477, 
lon=-3.141592653589793([X=-0.45402784795728585, Y=-5.560237507246512E-17, 
Z=0.8892515351493394])], [lat=1.1363070513215905, 
lon=1.7276848701358942([X=-0.06566297862906387, Y=0.415093084183059, 
Z=0.9055926328150202])], [lat=-0.7491091812809194, 
lon=-3.141592653589793([X=-0.7319721689191644, Y=-8.964073737318005E-17, 
Z=-0.6806857366050147])], [lat=-0.36253140107409454, 
lon=2.5983479047737883([X=-0.8009515117320926, Y=0.4836536674271932, 
Z=-0.3548886491780724])]}}

Stack Trace:
java.lang.AssertionError: invalid bounds for shape=GeoComplexPolygon: 
{planetmodel=PlanetModel.WGS84, number of shapes=1, address=201456b, 
testPoint=[lat=-0.2601577399030455, 
lon=2.905718760892798([X=-0.9404332714537585, Y=0.22603114757077347, 
Z=-0.2574633923189196])], testPointInSet=true, shapes={ 
{[lat=-1.5707963267948966, lon=0.0([X=6.109531986173988E-17, Y=0.0, 
Z=-0.997762292022105])], [lat=1.0987261172592477, 
lon=-3.141592653589793([X=-0.45402784795728585, Y=-5.560237507246512E-17, 
Z=0.8892515351493394])], [lat=1.1363070513215905, 
lon=1.7276848701358942([X=-0.06566297862906387, Y=0.415093084183059, 
Z=0.9055926328150202])], [lat=-0.7491091812809194, 
lon=-3.141592653589793([X=-0.7319721689191644, Y=-8.964073737318005E-17, 
Z=-0.6806857366050147])], [lat=-0.36253140107409454, 
lon=2.5983479047737883([X=-0.8009515117320926, Y=0.4836536674271932, 
Z=-0.3548886491780724])]}}
at 
__randomizedtesting.SeedInfo.seed([E96D6E80B57E8EAE:591213143A332032]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:260)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: Lucene/Solr 7.3

2018-03-27 Thread Varun Thacker
Tried running Solr's techproducts example from the 7.3 branch with Java
8, 9 and 10, and was able to run it successfully.

On Tue, Mar 27, 2018 at 1:56 PM, Uwe Schindler  wrote:

> Hi,
>
>
>
> It’s pushed to 7.3. It would be good if you and some other committers
> could spend some time to quickly check starting up solr’s techproducts example
> with Java 8, Java 9 and also Java 10 on your favourite operating system.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
> 
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Alan Woodward 
> *Sent:* Tuesday, March 27, 2018 10:31 PM
> *To:* dev@lucene.apache.org
> *Subject:* Re: Lucene/Solr 7.3
>
>
>
> > I think that's what's happening?
>
>
>
> Correct - just waiting for SOLR-12141 and I’ll starting building the next
> RC.
>
>
>
> On 27 Mar 2018, at 20:48, Cassandra Targett  wrote:
>
>
>
> I have the 7.3 Ref Guide built locally and the PDF has been pushed to the
> solr-ref-guide-rc SVN repo, but I'm holding off on starting the vote thread
> until RC2 of 7.3 is available (I think that's what's happening? please
> correct me if I've misread the threads) so one vote doesn't finish too far
> ahead of the other.
>
>
>
> Cassandra
>
>
>
> On Mon, Mar 26, 2018 at 9:29 AM, Alan Woodward 
> wrote:
>
> The release candidate is out, everybody please vote!
>
>
>
> I’ve drafted some release notes, available here:
>
> https://wiki.apache.org/solr/ReleaseNote73
>
> https://wiki.apache.org/lucene-java/ReleaseNote73
>
>
>
> They’re fairly bare-bones at the moment, if anybody would like to expand
> on them please feel free.
>
>
>
>
>
> On 21 Mar 2018, at 15:53, Alan Woodward  wrote:
>
>
>
> FYI I’ve started building a release candidate.
>
>
>
> I’ve updated the build script on 7.3 to allow building with ant 1.10, if
> this doesn’t produce any problems then I’ll forward-port to 7x and master.
>
>
>
> On 21 Mar 2018, at 02:37, Đạt Cao Mạnh  wrote:
>
>
>
> Hi Alan,
>
>
>
> I committed the fix as well as resolve the issue.
>
>
>
> Thanks!
>
>
>
> On Tue, Mar 20, 2018 at 9:27 PM Alan Woodward 
> wrote:
>
> OK, thanks. Let me know when it’s in.
>
>
>
>
>
> On 20 Mar 2018, at 14:07, Đạt Cao Mạnh  wrote:
>
>
>
> Hi  Alan, guys,
>
>
>
> I found a blocker issue SOLR-12129, I've already uploaded a patch and
> beasting the tests, if the result is good I will commit and notify your
> guys!
>
>
>
> Thanks!
>
>
>
> On Tue, Mar 20, 2018 at 2:37 AM Alan Woodward 
> wrote:
>
> Go ahead!
>
>
>
>
>
> On 19 Mar 2018, at 18:33, Andrzej Białecki wrote:
>
>
>
> Alan,
>
>
>
> I would like to commit the change in SOLR-11407 (
> 78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the
> logic that waits for replica recovery and provides more details about any
> failures.
>
>
>
> On 17 Mar 2018, at 13:01, Alan Woodward  wrote:
>
>
>
> I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can
> help debugging that if need be.
>
>
>
> +1 to backport your fixes
>
>
>
> On 17 Mar 2018, at 01:42, Varun Thacker  wrote:
>
>
>
> I was going through the blockers for 7.3 and only SOLR-12070 came up. Is
> the fix complete for this Andrzej?
>
>
>
> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083
> yesterday and SOLR-12063 today to master/branch_7x. Both are important
> fixes for CDCR so if you are okay I can backport it to the release branch
>
>
>
> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh 
> wrote:
>
> Hi guys, Alan
>
>
>
> I committed the fix for SOLR-12110 to branch_7_3
>
>
>
> Thanks!
>
>
>
> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh 
> wrote:
>
> Hi Alan,
>
>
>
> Sure the issue is marked as Blocker for 7.3.
>
>
>
> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward 
> wrote:
>
> Thanks Đạt, could you mark the issue as a Blocker and let me know when
> it’s been resolved?
>
>
>
> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh  wrote:
>
>
>
> Hi guys, Alan,
>
>
>
> I found a blocker issue SOLR-12110, when investigating test failure. I've
> already uploaded a patch and beasting the tests, if the result is good I
> will commit soon.
>
>
>
> Thanks!
>
>
>
> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward 
> wrote:
>
> Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, can
> you give me a hand setting up the 7.3 Jenkins jobs?
>
>
>
> Thanks, Alan
>
>
>
>
>
> On 12 Mar 2018, at 09:32, Alan Woodward  wrote:
>
>
>
> I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes
> and doc patches and then create a release candidate.
>
>
>
> 

[jira] [Commented] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416549#comment-16416549
 ] 

Varun Thacker commented on SOLR-12154:
--

Very quick patch. There is a nocommit in there, and it also doesn't address the 
usage of Level ( org.apache.logging.log4j.Level ) in SolrTestCaseJ4. We could 
use strings instead of the object. I remember it was like that before 
SOLR-7887, but at some point I wanted to simplify the type casts that were 
happening, so I changed it to org.apache.logging.log4j.Level. We could revert 
the change or think of a better approach.

I won't be looking into this for the next 2 days most likely
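For illustration, the "strings instead of the Level object" idea could keep level names as plain strings in test code and validate them up front, converting to the concrete logging framework's Level only inside the one suppressed boundary class. This is a hypothetical sketch, not the actual SolrTestCaseJ4 code:

```java
import java.util.Locale;

public class LevelNameSketch {
    // Normalize and validate a log-level name kept as a plain string,
    // so typos fail fast without importing org.apache.logging.log4j.Level.
    static String normalize(String levelName) {
        String upper = levelName.toUpperCase(Locale.ROOT);
        switch (upper) {
            case "TRACE": case "DEBUG": case "INFO":
            case "WARN": case "ERROR": case "OFF":
                return upper;
            default:
                throw new IllegalArgumentException("Unknown level: " + levelName);
        }
    }

    public static void main(String[] args) {
        System.out.println(normalize("info")); // INFO
    }
}
```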

> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
> Attachments: SOLR-12154.patch
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIS today, and code that does something 
> with log4j has to supress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly 
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package so let's validate it's usage
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel
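The rule the issue proposes could be expressed as a forbidden-apis signatures entry along these lines — a hypothetical sketch, since the exact signatures file and message text are still to be decided:

```
# Hypothetical forbidden-apis signatures entry:
@defaultMessage Use slf4j APIs instead of Log4j2 directly
org.apache.logging.log4j.**
```

Classes that legitimately need direct log4j2 access (e.g. Log4j2Watcher) would then carry a SuppressForbidden annotation, as some already do.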



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12154:
-
Attachment: SOLR-12154.patch

> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
> Attachments: SOLR-12154.patch
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIS today, and code that does something 
> with log4j has to supress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly 
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package so let's validate it's usage
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel






[jira] [Commented] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416531#comment-16416531
 ] 

Varun Thacker commented on SOLR-12154:
--

Some of these classes already suppress the forbidden-APIs check since they 
interact with log4j2 directly, but we should still audit those as well.

For example, SolrLogLayout seems to be importing some log4j2 classes that it 
could perhaps avoid.

 

> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIS today, and code that does something 
> with log4j has to supress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly 
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package so let's validate it's usage
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel






[jira] [Updated] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12154:
-
Description: 
We need to add org.apache.logging.log4j.** to forbidden APIs

From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
discussion ( [https://reviews.apache.org/r/65888/] ) 
{quote} We *don't* do log4j calls in the code in general, we have that 
explicitly forbidden in forbidden APIS today, and code that does something with 
log4j has to supress that. Developers must instead use slf4j APIs. I don't 
believe that's changing now with log4j2, or does it?
{quote}
We need to address this before 7.4 to make sure we don't break anything by 
using Log4j2 directly 

After SOLR-7887 the following classes explicitly import the 
org.apache.logging.log4j.** package so let's validate it's usage

- Log4j2Watcher

- SolrLogLayout

- StartupLoggingUtils

- RequestLoggingTest

- LoggingHandlerTest

- SolrTestCaseJ4

- TestLogLevelAnnotations

- LogLevel

  was:
We need to add org.apache.logging.log4j.** to forbidden APIs

From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
discussion ( [https://reviews.apache.org/r/65888/] ) 
{quote} We *don't* do log4j calls in the code in general, we have that 
explicitly forbidden in forbidden APIS today, and code that does something with 
log4j has to supress that. Developers must instead use slf4j APIs. I don't 
believe that's changing now with log4j2, or does it?
{quote}
We need to address this before 7.4 to make sure we don't break anything by 
using Log4j2 directly 

After SOLR-7887 the following classes explicitly import the 
org.apache.logging.log4j.** package so let's validate it's usage

- Log4j2Watcher

- SolrLogLayout ( already has a SuppressForbidden annotation )

- StartupLoggingUtils

- RequestLoggingTest

- LoggingHandlerTest

- SolrTestCaseJ4

- TestLogLevelAnnotations

- LogLevel 


> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIS today, and code that does something 
> with log4j has to supress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly 
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package so let's validate it's usage
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel






[jira] [Updated] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12154:
-
Description: 
We need to add org.apache.logging.log4j.** to forbidden APIs

From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
discussion ( [https://reviews.apache.org/r/65888/] ) 
{quote} We *don't* do log4j calls in the code in general, we have that 
explicitly forbidden in forbidden APIS today, and code that does something with 
log4j has to supress that. Developers must instead use slf4j APIs. I don't 
believe that's changing now with log4j2, or does it?
{quote}
We need to address this before 7.4 to make sure we don't break anything by 
using Log4j2 directly 

After SOLR-7887 the following classes explicitly import the 
org.apache.logging.log4j.** package so let's validate it's usage

- Log4j2Watcher

- SolrLogLayout ( already has a SuppressForbidden annotation )

- StartupLoggingUtils

- RequestLoggingTest

- LoggingHandlerTest

- SolrTestCaseJ4

- TestLogLevelAnnotations

- LogLevel 

  was:
We need to add org.apache.logging.log4j.** to forbidden APIs

From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
discussion ( [https://reviews.apache.org/r/65888/] ) 
{quote} We *don't* do log4j calls in the code in general, we have that 
explicitly forbidden in forbidden APIS today, and code that does something with 
log4j has to supress that. Developers must instead use slf4j APIs. I don't 
believe that's changing now with log4j2, or does it?
{quote}
We need to address this before 7.4 to make sure we don't break anything by 
using Log4j2 directly 

After SOLR-7887 the following classes explicitly import the 
org.apache.logging.log4j.** package so let's validate it's usage

- Log4j2Watcher

- SolrLogLayout

- StartupLoggingUtils

- RequestLoggingTest

- LoggingHandlerTest

- SolrTestCaseJ4

- TestLogLevelAnnotations

- LogLevel 


> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIs today, and code that does something 
> with log4j has to suppress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly.
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package, so let's validate its usage
> - Log4j2Watcher
> - SolrLogLayout ( already has a SuppressForbidden annotation )
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12154:
-
Issue Type: Sub-task  (was: Task)
  Security: (was: Public)
Parent: SOLR-7887

> Disallow Log4j2 explicit usage via forbidden APIs
> -
>
> Key: SOLR-12154
> URL: https://issues.apache.org/jira/browse/SOLR-12154
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Blocker
> Fix For: 7.4
>
>
> We need to add org.apache.logging.log4j.** to forbidden APIs
> From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
> discussion ( [https://reviews.apache.org/r/65888/] ) 
> {quote} We *don't* do log4j calls in the code in general, we have that 
> explicitly forbidden in forbidden APIs today, and code that does something 
> with log4j has to suppress that. Developers must instead use slf4j APIs. I 
> don't believe that's changing now with log4j2, or does it?
> {quote}
> We need to address this before 7.4 to make sure we don't break anything by 
> using Log4j2 directly.
> After SOLR-7887 the following classes explicitly import the 
> org.apache.logging.log4j.** package, so let's validate its usage
> - Log4j2Watcher
> - SolrLogLayout
> - StartupLoggingUtils
> - RequestLoggingTest
> - LoggingHandlerTest
> - SolrTestCaseJ4
> - TestLogLevelAnnotations
> - LogLevel 






[jira] [Created] (SOLR-12154) Disallow Log4j2 explicit usage via forbidden APIs

2018-03-27 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12154:


 Summary: Disallow Log4j2 explicit usage via forbidden APIs
 Key: SOLR-12154
 URL: https://issues.apache.org/jira/browse/SOLR-12154
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: 7.4


We need to add org.apache.logging.log4j.** to forbidden APIs

From [Tomás|https://reviews.apache.org/users/tflobbe/] on the reviewboard 
discussion ( [https://reviews.apache.org/r/65888/] ) 
{quote} We *don't* do log4j calls in the code in general, we have that 
explicitly forbidden in forbidden APIs today, and code that does something with 
log4j has to suppress that. Developers must instead use slf4j APIs. I don't 
believe that's changing now with log4j2, or does it?
{quote}
We need to address this before 7.4 to make sure we don't break anything by 
using Log4j2 directly.

After SOLR-7887 the following classes explicitly import the 
org.apache.logging.log4j.** package, so let's validate its usage

- Log4j2Watcher

- SolrLogLayout

- StartupLoggingUtils

- RequestLoggingTest

- LoggingHandlerTest

- SolrTestCaseJ4

- TestLogLevelAnnotations

- LogLevel 
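For reference, the package pattern mentioned in the description would go into a 
forbidden-apis signatures file along these lines (the file name and message 
below are illustrative, not the project's actual signatures file; forbidden-apis 
matches the ** wildcard against fully qualified class names):

```
# forbidden-signatures.txt (illustrative)
@defaultMessage Use slf4j APIs (org.slf4j.Logger / LoggerFactory) instead of log4j2 directly
org.apache.logging.log4j.**
```

Classes that legitimately need log4j2 (e.g. the log watcher) would then carry a 
SuppressForbidden annotation, as SolrLogLayout already does.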






[jira] [Commented] (SOLR-12144) Remove SOLR_LOG_PRESTART_ROTATION and leverage log4j2

2018-03-27 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416520#comment-16416520
 ] 

Lucene/Solr QA commented on SOLR-12144:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:black}{color} | {color:black} {color} | {color:black}  0m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12144 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916362/SOLR-12144.patch |
| Optional Tests |  validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / b151b2c |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| modules | C: solr U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/21/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove SOLR_LOG_PRESTART_ROTATION and leverage log4j2 
> --
>
> Key: SOLR-12144
> URL: https://issues.apache.org/jira/browse/SOLR-12144
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Varun Thacker
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12144.patch, SOLR-12144.patch
>
>
> With log4j2, rotating the file on restart is as simple as adding a policy: 
> OnStartupTriggeringPolicy.
> So we can remove the Solr logic that does the same thing and exposes it via 
> SOLR_LOG_PRESTART_ROTATION.
>  
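The policy named in the description is a one-element addition to the appender's 
<Policies> in log4j2.xml. A minimal illustrative fragment (file names, pattern, 
and rollover strategy here are placeholders, not Solr's shipped configuration):

```xml
<RollingFile name="MainLogFile"
             fileName="logs/solr.log"
             filePattern="logs/solr.log.%i">
  <PatternLayout pattern="%d %-5p (%t) [%c{1.}] %m%n"/>
  <Policies>
    <!-- Rotate the existing log file every time the JVM starts -->
    <OnStartupTriggeringPolicy/>
  </Policies>
  <DefaultRolloverStrategy max="9"/>
</RollingFile>
```

With this in place the pre-start rotation done by the bin/solr scripts becomes 
redundant.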






[JENKINS-MAVEN] Lucene-Solr-Maven-master #2218: POMs out of sync

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2218/

No tests ran.

Build Log:
[...truncated 31570 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 204 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:679: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 16 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #165: POMs out of sync

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/165/

No tests ran.

Build Log:
[...truncated 31628 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 204 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:679: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 16 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Updated] (SOLR-12153) Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync

2018-03-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12153:
-
Attachment: SOLR-12153.patch

> Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync
> --
>
> Key: SOLR-12153
> URL: https://issues.apache.org/jira/browse/SOLR-12153
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Trivial
> Attachments: SOLR-12153.patch
>
>
> The time dependency is probably causing sporadic failures like the following:
> {noformat}
> FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync
> Error Message:
> Stack Trace:
> java.lang.AssertionError
>         at 
> __randomizedtesting.SeedInfo.seed([D1CF6CAB31D9C539:B979BF09A43DC4A7]:0)
>         at org.junit.Assert.fail(Assert.java:92)
>         at org.junit.Assert.assertTrue(Assert.java:43)
>         at org.junit.Assert.assertTrue(Assert.java:54)
>         at 
> org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync(ZkSolrClientTest.java:257)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> 

[jira] [Created] (SOLR-12153) Remove Thread.sleep from ZkSolrClientTest.testMultipleWatchesAsync

2018-03-27 Thread JIRA
Tomás Fernández Löbbe created SOLR-12153:


 Summary: Remove Thread.sleep from 
ZkSolrClientTest.testMultipleWatchesAsync
 Key: SOLR-12153
 URL: https://issues.apache.org/jira/browse/SOLR-12153
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe


The time dependency is probably causing sporadic failures like the following:
{noformat}
FAILED:  org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync

Error Message:


Stack Trace:
java.lang.AssertionError
        at 
__randomizedtesting.SeedInfo.seed([D1CF6CAB31D9C539:B979BF09A43DC4A7]:0)
        at org.junit.Assert.fail(Assert.java:92)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertTrue(Assert.java:54)
        at 
org.apache.solr.cloud.ZkSolrClientTest.testMultipleWatchesAsync(ZkSolrClientTest.java:257)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
     

[jira] [Assigned] (SOLR-12142) EmbeddedSolrServer should use req.getContentWriter

2018-03-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-12142:
-

Assignee: Noble Paul

> EmbeddedSolrServer should use req.getContentWriter 
> ---
>
> Key: SOLR-12142
> URL: https://issues.apache.org/jira/browse/SOLR-12142
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
>
> In SOLR-11380, SolrRequest.getContentWriter was introduced as a replacement 
> for getContentStreams.  However, EmbeddedSolrServer still calls 
> getContentStreams, so clients who need to send POST data to it cannot yet 
> switch from the deprecated API to the new one.  The SolrTextTagger is an 
> example of a project where one would want to do this.
> It seems EmbeddedSolrServer ought to check for getContentWriter and if 
> present then convert it into a ContentStream somehow.  For the time being, 
> ESS needs to call both since both APIs exist.
> CC [~noble.paul]
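The conversion suggested above (buffer the push-style writer's output once, 
then serve it as a pull-style stream) can be sketched with simplified stand-in 
interfaces. The real Solr types are RequestWriter.ContentWriter and 
ContentStream; everything below is illustrative, not EmbeddedSolrServer's 
actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class ContentWriterAdapter {

    // Simplified stand-in for SolrJ's push-style ContentWriter.
    interface ContentWriter {
        void write(OutputStream os) throws IOException;
        String getContentType();
    }

    // Simplified stand-in for Solr's pull-style ContentStream.
    interface ContentStream {
        InputStream getStream() throws IOException;
        String getContentType();
    }

    // Buffer the writer's output once, then serve it repeatedly as a stream.
    static ContentStream adapt(final ContentWriter writer) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writer.write(buf);
        final byte[] bytes = buf.toByteArray();
        return new ContentStream() {
            public InputStream getStream() { return new ByteArrayInputStream(bytes); }
            public String getContentType() { return writer.getContentType(); }
        };
    }

    // Round-trip a payload through the adapter and read it back.
    public static String roundTrip(final String payload) {
        try {
            ContentWriter w = new ContentWriter() {
                public void write(OutputStream os) throws IOException {
                    os.write(payload.getBytes(StandardCharsets.UTF_8));
                }
                public String getContentType() { return "application/json"; }
            };
            InputStream in = adapt(w).getStream();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        if (!roundTrip("{\"id\":1}").equals("{\"id\":1}")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The cost of the extra copy only matters for very large POST bodies; for typical 
embedded-server usage buffering once is fine.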






[jira] [Comment Edited] (SOLR-12094) JsonRecordReader ignores root record fields after the split point

2018-03-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416475#comment-16416475
 ] 

Noble Paul edited comment on SOLR-12094 at 3/27/18 11:48 PM:
-

Before going into the patch, I can see that it is not designed to work like 
that. The reason is that {{JsonRecordReader}} is a streaming parser. To include 
{{'after'}} in the document, it must hold all the data in {{'exams'}} in 
memory. Think of a million docs in {{'exams'}}, all kept in memory before the 
value of {{'after'}} can be read. So it is going to seriously affect the 
performance of the parser for the normal use case. 


was (Author: noble.paul):
Before going into the patch, I can see that it is not designed to work like 
that. The reason is that {{JsonRecordReader}} is a streaming parser. To include 
{{'after'}} in the document, it must hold all the data in {{'exams'}} in 
memory. So it is going to seriously affect the performance of the parser for 
the normal use case. 
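The ordering argument above can be made concrete with a small simulation: by 
the time a streaming parser reaches "after", every record under the split path 
has already had to be emitted (or buffered). The field names follow the example 
document from the issue; the token walk is a deliberate simplification of real 
JSON parsing, not JsonRecordReader's code:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingSplitDemo {

    // Order in which a streaming parser encounters the parts of the example
    // document when splitting on /exams. Each split record must be emitted
    // the moment it closes, or else buffered indefinitely.
    public static List<String> emissionOrder() {
        String[] stream = {"first", "last", "grade", "exams[0]", "exams[1]", "after"};
        List<String> events = new ArrayList<>();
        for (String part : stream) {
            if (part.startsWith("exams[")) {
                events.add("emit record for " + part);
            } else {
                events.add("read field " + part);
            }
        }
        return events;
    }

    public static void main(String[] args) {
        List<String> ev = emissionOrder();
        // Both records are emitted before "after" is ever read, so the
        // emitted documents cannot include it without holding every record
        // in memory first.
        if (ev.indexOf("emit record for exams[1]") >= ev.indexOf("read field after")) {
            throw new AssertionError();
        }
        System.out.println(String.join("\n", ev));
    }
}
```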

> JsonRecordReader ignores root record fields after the split point
> -
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, SOLR-12094.patch, 
> json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with a split other than the top-level one, 
> ignores all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.






[jira] [Commented] (SOLR-12094) JsonRecordReader ignores root record fields after the split point

2018-03-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416475#comment-16416475
 ] 

Noble Paul commented on SOLR-12094:
---

Before going into the patch, I can see that it is not designed to work like 
that. The reason is that {{JsonRecordReader}} is a streaming parser. To include 
{{'after'}} in the document, it must hold all the data in {{'exams'}} in 
memory. So it is going to seriously affect the performance of the parser for 
the normal use case. 

> JsonRecordReader ignores root record fields after the split point
> -
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, SOLR-12094.patch, 
> json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with a split other than the top-level one, 
> ignores all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.






[jira] [Commented] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416468#comment-16416468
 ] 

Andrzej Bialecki  commented on SOLR-11882:
--

Updated patch. This adds a SolrCore instance identifier (tag) to all gauges in 
a registry, which are then matched and removed when the SolrCore is closed.

The size of the patch is partly caused by the change in 
{{SolrMetricProducer.initializeMetrics(...)}} and the need to pass the 
SolrCore instance tag around.
All unit tests pass, and the scenario described above also passes, i.e. it 
produces only 2 strongly referenced SolrCore objects.
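The tagging approach described above can be sketched as a toy registry: each 
gauge is registered together with its owning core's instance tag, and closing 
the core removes every matching entry so the registry no longer holds strong 
references into the closed core. Gauge names and tags below are illustrative, 
not Solr's actual metric names or SolrMetricManager's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TaggedGaugeRegistry {

    // gauge name -> owning SolrCore instance tag (stand-in for the real registry)
    private final Map<String, String> gaugeTags = new ConcurrentHashMap<>();

    public void register(String gaugeName, String coreTag) {
        gaugeTags.put(gaugeName, coreTag);
    }

    // On SolrCore.close(): drop every gauge registered under this core's tag,
    // releasing the references that would otherwise pin the closed core.
    public int removeForCore(String coreTag) {
        int before = gaugeTags.size();
        gaugeTags.values().removeIf(coreTag::equals);
        return before - gaugeTags.size();
    }

    public int size() {
        return gaugeTags.size();
    }

    public static void main(String[] args) {
        TaggedGaugeRegistry reg = new TaggedGaugeRegistry();
        reg.register("CORE.fs.totalSpace", "core1@5a2b");      // illustrative names
        reg.register("SEARCHER.searcher.numDocs", "core1@5a2b");
        reg.register("CORE.fs.totalSpace", "core2@77fe");      // survives core1 close
        if (reg.removeForCore("core1@5a2b") != 1) throw new AssertionError();
        if (reg.size() != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that in the sketch the second registration of "CORE.fs.totalSpace" 
overwrites the first (one flat name map), which is exactly why the real patch 
needs a per-core tag rather than names alone.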

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, create-cores.zip, 
> solr-dump-full_Leak_Suspects.zip, solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of 
> thousands), but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion as the reporters for the core are 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to 
> fully unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.






[jira] [Updated] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11882:
-
Attachment: SOLR-11882.patch

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, create-cores.zip, 
> solr-dump-full_Leak_Suspects.zip, solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of 
> thousands), but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion as the reporters for the core are 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to 
> fully unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.






[JENKINS] Lucene-Solr-repro - Build # 348 - Unstable

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/348/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/21/consoleText

[repro] Revision: e595541ef3f9642632ac85d03c62616b5f70f1e4

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=13B21A44A257644D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sv-SE -Dtests.timezone=America/Halifax -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
b151b2ccfed8bcc9de0f6d046fa0fb2a15360285
[repro] git fetch
[repro] git checkout e595541ef3f9642632ac85d03c62616b5f70f1e4

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3296 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=13B21A44A257644D -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sv-SE -Dtests.timezone=America/Halifax 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 273 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro] git checkout b151b2ccfed8bcc9de0f6d046fa0fb2a15360285

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416442#comment-16416442
 ] 

Karl Wright commented on LUCENE-8227:
-

Committed code that should fix the NPEs.  The assertions are also fixed.  
However, we still see failures in cases 2 and 3 due to incorrect edge-crossing 
logic that needs to be worked out, so this ticket is not yet complete.


> TestGeo3DPoint.testGeo3DRelations() reproducing failures
> 
>
> Key: LUCENE-8227
> URL: https://issues.apache.org/jira/browse/LUCENE-8227
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test, modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>Priority: Major
>
> Three failures: two NPEs and one assert "assess edge that ends in a crossing 
> can't both up and down":
> 1.a. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1512/]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=C1F88333EC85EAE0 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ga -Dtests.timezone=America/Ojinaga -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   10.4s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C1F88333EC85EAE0:7187FEA763C8447C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:569)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseMembershipShape.isWithin(GeoBaseMembershipShape.java:36)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseShape.getBounds(GeoBaseShape.java:35)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.getBounds(GeoComplexPolygon.java:440)
>[junit4]>  at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> 1.b. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/184/]:
> {noformat}
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=F2A368AB96A2FD75 -Dtests.multiplier=2 -Dtests.locale=fr-ML 
> -Dtests.timezone=America/Godthab -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[smoker][junit4] ERROR   0.99s J0 | TestGeo3DPoint.testGeo3DRelations 
> <<<
>[smoker][junit4]> Throwable #1: java.lang.NullPointerException
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F2A368AB96A2FD75:42DC153F19EF53E9]:0)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[smoker][junit4]>  at 
> 

[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416435#comment-16416435
 ] 

ASF subversion and git services commented on LUCENE-8227:
-

Commit 9b05d5676d01a09635b403cd7057c7207735e3e2 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b05d56 ]

LUCENE-8227: Handle identical planes properly in GeoComplexPolygon.



[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416432#comment-16416432
 ] 

ASF subversion and git services commented on LUCENE-8227:
-

Commit 6dcb6ae64155921ffb2841732844c9b8776968bd in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6dcb6ae ]

LUCENE-8227: Handle identical planes properly in GeoComplexPolygon.



[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416434#comment-16416434
 ] 

ASF subversion and git services commented on LUCENE-8227:
-

Commit e656091690b7efc869da5eb2702791152070b388 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e656091 ]

LUCENE-8227: Handle identical planes properly in GeoComplexPolygon.



[jira] [Commented] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416355#comment-16416355
 ] 

Andrzej Bialecki  commented on SOLR-11882:
--

bq. Could we introspect the impl and do the right thing with new impls that 
take the new param?
That would be exceedingly messy - this method is called from many components and 
in different contexts (e.g. most, but not all, mbeans are initialized in 
SolrCore, handlers are also initialized in CoreContainer, some components 
initialize their own sub-components, etc.).
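For illustration, the introspection idea floated in this thread could in principle be done with reflection. The sketch below is a hedged illustration only: {{MetricProducer}}, {{Legacy}}, {{Modern}} and {{overridesNewSignature}} are hypothetical stand-ins, not the real {{SolrMetricProducer}} API. It detects whether an implementation class itself overrides a new three-arg overload or merely inherits a back-compat default method.

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for the interface under discussion; the real
// SolrMetricProducer signatures differ.
interface MetricProducer {
    void initializeMetrics(String registry, String scope);          // legacy form
    // Hypothetical new overload carrying the extra parameter; defaults to the
    // legacy form so old implementations keep compiling.
    default void initializeMetrics(String registry, String scope, Object context) {
        initializeMetrics(registry, scope);
    }
}

public class IntrospectDemo {
    // A legacy impl that only provides the two-arg method.
    static class Legacy implements MetricProducer {
        public void initializeMetrics(String registry, String scope) { }
    }

    // A new-style impl that overrides the three-arg method.
    static class Modern implements MetricProducer {
        public void initializeMetrics(String registry, String scope) { }
        public void initializeMetrics(String registry, String scope, Object context) { }
    }

    // True only if the impl class (not the interface default) declares the
    // new overload; getMethod() returns the inherited default otherwise.
    static boolean overridesNewSignature(Class<? extends MetricProducer> impl) {
        try {
            Method m = impl.getMethod("initializeMetrics",
                                      String.class, String.class, Object.class);
            return !m.isDefault();
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(overridesNewSignature(Legacy.class)); // false
        System.out.println(overridesNewSignature(Modern.class)); // true
    }
}
```

The per-impl check works in isolation, but as noted above it would have to be repeated at every call site that initializes metrics, which is why the clean API change was deferred to 8.0.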

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, SOLR-11882.patch, create-cores.zip, 
> solr-dump-full_Leak_Suspects.zip, solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundred thousand), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.
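The desired outcome quoted above can be sketched minimally. This is a hedged illustration, assuming a simplified manager: {{MetricManagerSketch}} and its method names are illustrative, not the actual {{SolrMetricManager}} API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: per-core registries keyed by core name, with the entry
// removed on close so a closed transient core leaves no reference behind.
public class MetricManagerSketch {
    private final Map<String, Map<String, Long>> registries = new ConcurrentHashMap<>();

    // Lazily creates the per-core registry, mirroring how a registry entry
    // appears once a core is fully loaded.
    public Map<String, Long> registry(String coreName) {
        return registries.computeIfAbsent(coreName, k -> new ConcurrentHashMap<>());
    }

    // The proposed fix: drop the registry reference when a transient core is
    // closed, in the same fashion its reporters are closed and removed.
    public void onCoreClose(String coreName) {
        registries.remove(coreName);
    }

    public int registryCount() {
        return registries.size();
    }

    public static void main(String[] args) {
        MetricManagerSketch mgr = new MetricManagerSketch();
        mgr.registry("core1").put("QUERY./select.requests", 1L);
        mgr.registry("core2");
        System.out.println(mgr.registryCount()); // 2
        mgr.onCoreClose("core1"); // transient core evicted from the cache
        System.out.println(mgr.registryCount()); // 1
    }
}
```

Without the {{onCoreClose}} hook, the map grows by one entry per core ever loaded, which matches the heap growth reported after loading 1000-2000 transient cores.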



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416335#comment-16416335
 ] 

Mark Miller commented on SOLR-11882:


bq. definitely not for 7.3.

Could we introspect the impl and do the right thing with new impls that take 
the new param?







[jira] [Comment Edited] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416320#comment-16416320
 ] 

Andrzej Bialecki  edited comment on SOLR-11882 at 3/27/18 10:09 PM:


[~romseygeek] The current patch is broken (Solr silently loses some metrics 
from active cores .. oops). I'm preparing a new patch that is conceptually 
simpler and appears to be working well.

However, this new fix requires changing the API of {{SolrMetricProducer}} (new 
parameter in {{initializeMetrics(...)}} method) so I think it's suitable only 
for 8.0 - definitely not for 7.3. I think that at this point the only thing we 
can do for 7.3 is to add this issue to the "known bugs" section.


was (Author: ab):
[~romseygeek] The current patch is broken (Solr silently loses some metrics 
from active cores .. oops). I'm preparing a new patch that is conceptually 
simpler and appears to be working well.

However, this new fix requires changing the API of {{SolrMetricProducer}} (new 
parameter in {{initializeMetrics(...)}} method) so I think it's suitable only 
for 8.0 - definitely not for 7.3.







[jira] [Commented] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416320#comment-16416320
 ] 

Andrzej Bialecki  commented on SOLR-11882:
--

[~romseygeek] The current patch is broken (Solr silently loses some metrics 
from active cores .. oops). I'm preparing a new patch that is conceptually 
simpler and appears to be working well.

However, this new fix requires changing the API of {{SolrMetricProducer}} (new 
parameter in {{initializeMetrics(...)}} method) so I think it's suitable only 
for 8.0 - definitely not for 7.3.







[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-03-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416317#comment-16416317
 ] 

Jan Høydahl commented on LUCENE-5143:
-

[~thetaphi], [~hossman] Have you had a look at this? I'm ready to commit this 
if we agree it's a Good Thing™

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch
>
>
> At some point in the past, we started creating a snapshots of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date and stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well






[jira] [Assigned] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-11882:


Assignee: Andrzej Bialecki   (was: Erick Erickson)

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, SOLR-11882.patch, create-cores.zip, 
> solr-dump-full_Leak_Suspects.zip, solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of thousands), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.






[jira] [Updated] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-27 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11882:
-
Fix Version/s: master (8.0)

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, SOLR-11882.patch, create-cores.zip, 
> solr-dump-full_Leak_Suspects.zip, solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of thousands), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.
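The desired outcome described above can be modeled in isolation. The sketch below is a hypothetical stand-in (class and method names are invented for illustration; this is not Solr's actual SolrMetricManager API): the long-lived map must drop its entry when a transient core closes, otherwise the closed core stays strongly reachable.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of the leak: a JVM-wide map of per-core metric registries.
// If closing a core does not remove its entry, the core leaks.
public class MetricRegistryCache {
    private final ConcurrentMap<String, Object> registries = new ConcurrentHashMap<>();

    public void register(String coreName, Object registry) {
        registries.put(coreName, registry);
    }

    // Desired behavior: closing a transient core also removes its registry,
    // in the same fashion the core's reporters are closed and removed.
    public void onCoreClose(String coreName) {
        registries.remove(coreName);
    }

    public int retainedRegistries() {
        return registries.size();
    }

    public static void main(String[] args) {
        MetricRegistryCache cache = new MetricRegistryCache();
        cache.register("transient-core-1", new Object());
        cache.onCoreClose("transient-core-1"); // without this call, the reference leaks
        System.out.println(cache.retainedRegistries()); // 0
    }
}
```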






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21710 - Unstable!

2018-03-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21710/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testCreateCollectionAddShardWithReplicaTypeUsingPolicy

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([F4A7D64BE8079D4D:64F5C3D8E5CB5CB1]:0)
at 
org.apache.solr.cloud.autoscaling.sim.SimClusterStateProvider.lambda$getCollectionStates$52(SimClusterStateProvider.java:1236)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1380)
at 
org.apache.solr.cloud.autoscaling.sim.SimClusterStateProvider.lambda$getCollectionStates$53(SimClusterStateProvider.java:1228)
at 
java.base/java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1617)
at 
org.apache.solr.cloud.autoscaling.sim.SimClusterStateProvider.getCollectionStates(SimClusterStateProvider.java:1227)
at 
org.apache.solr.cloud.autoscaling.sim.SimClusterStateProvider.getClusterState(SimClusterStateProvider.java:1209)
at 
org.apache.solr.cloud.autoscaling.sim.SimSolrCloudTestCase.getCollectionState(SimSolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testCreateCollectionAddShardWithReplicaTypeUsingPolicy(TestPolicyCloud.java:286)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Comment Edited] (SOLR-11947) Math Expressions User Guide

2018-03-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416279#comment-16416279
 ] 

Joel Bernstein edited comment on SOLR-11947 at 3/27/18 9:35 PM:


I think the best approach is to link functions mentioned in the user guide to 
the reference guide.

I'd also like to set up a TOC on each of the user guide sub-pages so people can 
see at a glance the main topics in the section.

Then replace the current statistical-programming.adoc with the 
math-expressions.adoc. 

So there would be this structure:

Streaming Expressions

   Stream Sources

   Stream Decorators

   Math Expressions Reference

   Math Expressions User Guide

      Scalar Math

      Vector Math 

      ...

   Graph Expressions

 

The reference guide needs more work as well, but I think people will spend most 
of their time in the user guide and just check the reference guide for syntax.


was (Author: joel.bernstein):
I think the best approach is to link functions mentioned in the user guide to 
the reference guide.

I'd also like to set up a TOC on each of the user guide sub-pages so people can 
see at a glance the main topics in the section.

Then replace the current statistical-programming.adoc with the 
math-expressions.adoc. 

So there would be this structure:

Streaming Expressions

   Stream Sources

   Stream Decorators

   Stream Evaluators

   Math Expressions

      Scalar Math

      Vector Math 

      ...

   Graph Expressions

 

The reference guide needs more work as well, but I think people will spend most 
of their time in the user guide and just check the reference guide for syntax.

> Math Expressions User Guide
> ---
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch, SOLR-11947.patch
>
>







[jira] [Comment Edited] (SOLR-11947) Math Expressions User Guide

2018-03-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416279#comment-16416279
 ] 

Joel Bernstein edited comment on SOLR-11947 at 3/27/18 9:34 PM:


I think the best approach is to link functions mentioned in the user guide to 
the reference guide.

I'd also like to set up a TOC on each of the user guide sub-pages so people can 
see at a glance the main topics in the section.

Then replace the current statistical-programming.adoc with the 
math-expressions.adoc. 

So there would be this structure:

Streaming Expressions

   Stream Sources

   Stream Decorators

   Stream Evaluators

   Math Expressions

      Scalar Math

      Vector Math 

      ...

   Graph Expressions

 

The reference guide needs more work as well, but I think people will spend most 
of their time in the user guide and just check the reference guide for syntax.


was (Author: joel.bernstein):
I think the best approach is to link functions mentioned in the user guide to 
the reference guide.

I'd also like to set up a TOC on each of the user guide sub-pages so people can 
see at a glance the main topics in the section.

Then replace the current statistical-programming.adoc with the 
math-expressions.adoc. 

So there would be this structure:

Streaming Expressions

   Stream Sources

   Stream Decorators

   Math Expressions

      Scalar Math

      Vector Math 

      ...

   Graph Expressions

 

The reference guide needs more work as well, but I think people will spend most 
of their time in the user guide and just check the reference guide for syntax.

> Math Expressions User Guide
> ---
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch, SOLR-11947.patch
>
>







[jira] [Commented] (SOLR-11947) Math Expressions User Guide

2018-03-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416279#comment-16416279
 ] 

Joel Bernstein commented on SOLR-11947:
---

I think the best approach is to link functions mentioned in the user guide to 
the reference guide.

I'd also like to set up a TOC on each of the user guide sub-pages so people can 
see at a glance the main topics in the section.

Then replace the current statistical-programming.adoc with the 
math-expressions.adoc. 

So there would be this structure:

Streaming Expressions

   Stream Sources

   Stream Decorators

   Math Expressions

      Scalar Math

      Vector Math 

      ...

   Graph Expressions

 

The reference guide needs more work as well, but I think people will spend most 
of their time in the user guide and just check the reference guide for syntax.

> Math Expressions User Guide
> ---
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch, SOLR-11947.patch
>
>







[jira] [Resolved] (LUCENE-8223) CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines

2018-03-27 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-8223.
-
Resolution: Fixed

> CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines
> 
>
> Key: LUCENE-8223
> URL: https://issues.apache.org/jira/browse/LUCENE-8223
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Alan Woodward
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: trunk, 7.4
>
>
> The 7.3 Jenkins smoke tester has failed a couple of times due to 
> CachingNaiveBayesClassifierTest.testPerformance() (see 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/9/] for example).
> I don't think performance tests like this are very useful as part of the 
> standard test suite, because they depend too much on what else is happening 
> on the machine they're being run on.






[jira] [Updated] (LUCENE-8223) CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines

2018-03-27 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8223:

Fix Version/s: 7.4
   trunk

> CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines
> 
>
> Key: LUCENE-8223
> URL: https://issues.apache.org/jira/browse/LUCENE-8223
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Alan Woodward
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: trunk, 7.4
>
>
> The 7.3 Jenkins smoke tester has failed a couple of times due to 
> CachingNaiveBayesClassifierTest.testPerformance() (see 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/9/] for example).
> I don't think performance tests like this are very useful as part of the 
> standard test suite, because they depend too much on what else is happening 
> on the machine they're being run on.






[jira] [Updated] (LUCENE-8223) CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines

2018-03-27 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8223:

Component/s: modules/classification

> CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines
> 
>
> Key: LUCENE-8223
> URL: https://issues.apache.org/jira/browse/LUCENE-8223
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Alan Woodward
>Assignee: Tommaso Teofili
>Priority: Major
>
> The 7.3 Jenkins smoke tester has failed a couple of times due to 
> CachingNaiveBayesClassifierTest.testPerformance() (see 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/9/] for example).
> I don't think performance tests like this are very useful as part of the 
> standard test suite, because they depend too much on what else is happening 
> on the machine they're being run on.






RE: [VOTE] Release Lucene/Solr 7.3.0 RC1

2018-03-27 Thread Uwe Schindler
It’s pushed to 7.3.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Uwe Schindler  
Sent: Tuesday, March 27, 2018 12:21 PM
To: dev@lucene.apache.org
Subject: RE: [VOTE] Release Lucene/Solr 7.3.0 RC1

 

Hi,

 

I fixed the Windows and Linux shell scripts to correctly parse the major 
version and use numerical comparisons instead of alphabetical string compare.

 

In fact, both the Windows and Linux/Mac scripts did not work at all, caused by more 
or less the same issue ("10" < "9" when compared alphabetically; the Linux script 
did not even use the major version part, it compared "10" < "1.8"). On top of the 
version-parsing issue, the script used a GC option that had been deprecated and a 
no-op since Java 9 and was finally removed in Java 10: -XX:+UseParNewGC (it is 
obsolete, as the CMS collector enables it automatically, so there is no reason to 
pass it in any post-Java-7 VM). So I removed this option.

 

The issue is here, I’ll commit shortly: 
https://issues.apache.org/jira/browse/SOLR-12141

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de  

 

From: Mark Miller
Sent: Monday, March 26, 2018 9:20 PM
To: dev@lucene.apache.org  
Subject: Re: [VOTE] Release Lucene/Solr 7.3.0 RC1

 

Smoke test success: SUCCESS! [1:04:19.586886]

 

But +1 to fix running on Java 10.

 

- Mark

 

On Mon, Mar 26, 2018 at 1:37 PM Uwe Schindler wrote:

Hi,

 

I did not run the smoke tester, but I checked some more fancy stuff to be safe: I 
tried the JAR file (lucene-core.jar) with multiple Java versions, and the MR-JAR 
feature worked with a quick "hack" (it creates a broken BytesRef and calls a 
method on it; the resulting exception's stack trace should show the right methods):

 

*

JAVA_HOME = C:\Program Files\Java\jdk1.8.0_144

java version "1.8.0_144"

Java(TM) SE Runtime Environment (build 1.8.0_144-b01)

Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

*

 

Microsoft Windows [Version 10.0.16299.334]

(c) 2017 Microsoft Corporation. Alle Rechte vorbehalten.

 

C:\Users\Uwe Schindler\Desktop\test>type Test.java

import org.apache.lucene.util.BytesRef;

public abstract class Test {
  public static void main(String... args) {
    BytesRef b1 = new BytesRef(new byte[0], 0, 10);
    BytesRef b2 = new BytesRef(20);
    b1.compareTo(b2);
  }
}

 

C:\Users\Uwe Schindler\Desktop\test>java -cp lucene-core-7.3.0.jar;. Test

Exception in thread "main" java.lang.IndexOutOfBoundsException: Range [0, 10) 
out-of-bounds for length 0

at 
org.apache.lucene.util.FutureArrays.checkFromToIndex(FutureArrays.java:45)

at 
org.apache.lucene.util.FutureArrays.compareUnsigned(FutureArrays.java:72)

at org.apache.lucene.util.BytesRef.compareTo(BytesRef.java:163)

at Test.main(Test.java:7)

 

And now Java 9 / 10:

 

*

JAVA_HOME = C:\Program Files\Java\jdk-9.0.1

java version "9.0.1"

Java(TM) SE Runtime Environment (build 9.0.1+11)

Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)

*

 

C:\Users\Uwe Schindler\Desktop\test>java -cp lucene-core-7.3.0.jar;. Test

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Array 
index out of range: 10

at java.base/java.util.Arrays.rangeCheck(Arrays.java:122)

at java.base/java.util.Arrays.compareUnsigned(Arrays.java:6101)

at org.apache.lucene.util.BytesRef.compareTo(BytesRef.java:163)

at Test.main(Test.java:7)

 

*

JAVA_HOME = C:\Program Files\Java\jdk-10

openjdk version "10" 2018-03-20

OpenJDK Runtime Environment 18.3 (build 10+46)

OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode)

*

 

C:\Users\Uwe Schindler\Desktop\test>java -cp lucene-core-7.3.0.jar;. Test

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Array 
index out of range: 10

at java.base/java.util.Arrays.rangeCheck(Arrays.java:122)

at java.base/java.util.Arrays.compareUnsigned(Arrays.java:6101)

at org.apache.lucene.util.BytesRef.compareTo(BytesRef.java:163)

at Test.main(Test.java:7)

 

So all looks fine from this perspective.

 

I then tested to start the techproducts example with Java 8, Java 9, Java 10:

 

Java 8 and 9 started up (yeah), but Java 10 failed (at least on windows) 
because of the braindead version parsing:

 

C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e 

RE: Lucene/Solr 7.3

2018-03-27 Thread Uwe Schindler
Hi,

 

It’s pushed to 7.3. It would be good if you and some other committers could 
spend some time to quickly check starting up Solr’s techproducts example with Java 
8, Java 9, and also Java 10 on your favourite operating system.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Alan Woodward  
Sent: Tuesday, March 27, 2018 10:31 PM
To: dev@lucene.apache.org
Subject: Re: Lucene/Solr 7.3

 

> I think that's what's happening?

 

Correct - just waiting for SOLR-12141 and I’ll starting building the next RC.





On 27 Mar 2018, at 20:48, Cassandra Targett wrote:

 

I have the 7.3 Ref Guide built locally and the PDF has been pushed to the 
solr-ref-guide-rc SVN repo, but I'm holding off on starting the vote thread 
until RC2 of 7.3 is available (I think that's what's happening? please correct 
me if I've misread the threads) so one vote doesn't finish too far ahead of the 
other.

 

Cassandra

 

On Mon, Mar 26, 2018 at 9:29 AM, Alan Woodward wrote:

The release candidate is out, everybody please vote!

 

I’ve drafted some release notes, available here:

https://wiki.apache.org/solr/ReleaseNote73

https://wiki.apache.org/lucene-java/ReleaseNote73

 

They’re fairly bare-bones at the moment, if anybody would like to expand on 
them please feel free.

 





On 21 Mar 2018, at 15:53, Alan Woodward wrote:

 

FYI I’ve started building a release candidate.

 

I’ve updated the build script on 7.3 to allow building with ant 1.10, if this 
doesn’t produce any problems then I’ll forward-port to 7x and master.





On 21 Mar 2018, at 02:37, Đạt Cao Mạnh wrote:

 

Hi Alan, 

 

I committed the fix as well as resolve the issue.

 

Thanks!

 

On Tue, Mar 20, 2018 at 9:27 PM Alan Woodward wrote:

OK, thanks. Let me know when it’s in.

 





On 20 Mar 2018, at 14:07, Đạt Cao Mạnh wrote:

 

Hi  Alan, guys,

 

I found a blocker issue SOLR-12129, I've already uploaded a patch and beasting 
the tests, if the result is good I will commit and notify your guys!

 

Thanks!

 

On Tue, Mar 20, 2018 at 2:37 AM Alan Woodward wrote:

Go ahead!

 





On 19 Mar 2018, at 18:33, Andrzej Białecki wrote:

 

Alan,

 

I would like to commit the change in SOLR-11407 
(78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the logic 
that waits for replica recovery and provides more details about any failures.

 

On 17 Mar 2018, at 13:01, Alan Woodward wrote:

 

I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can help 
debugging that if need be.

 

+1 to backport your fixes





On 17 Mar 2018, at 01:42, Varun Thacker wrote:

 

I was going through the blockers for 7.3 and only SOLR-12070 came up. Is the 
fix complete for this Andrzej?

 

@Alan : When do you plan on cutting an RC ? I committed SOLR-12083 yesterday 
and SOLR-12063 today to master/branch_7x. Both are important fixes for CDCR so 
if you are okay I can backport it to the release branch

 

On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh wrote:

Hi guys, Alan

 

I committed the fix for SOLR-12110 to branch_7_3

 

Thanks!

 

On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh wrote:

Hi Alan,

 

Sure the issue is marked as Blocker for 7.3.

 

On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward wrote:

Thanks Đạt, could you mark the issue as a Blocker and let me know when it’s 
been resolved?





On 16 Mar 2018, at 02:05, Đạt Cao Mạnh wrote:

 

Hi guys, Alan,

 

I found a blocker issue SOLR-12110, when investigating test failure. I've 
already uploaded a patch and beasting the tests, if the result is good I will 
commit soon.

 

Thanks!

 

On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward wrote:

Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, can you 
give me a hand setting up the 7.3 Jenkins jobs?

 

Thanks, Alan

 





On 12 Mar 2018, at 09:32, Alan Woodward wrote:

 

I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes and doc 
patches and then create a release 

[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416231#comment-16416231
 ] 

David Smiley commented on SOLR-12136:
-

Here is new language for hl.fl, hl.q, hl.qparser:

{noformat}

`hl.fl`::
Specifies a list of fields to highlight, either comma- or space-delimited.
A wildcard of `\*` (asterisk) can be used to match field globs, such as 
`text_*` or even `\*` to highlight on all fields where highlighting is 
possible. 
When using `*`, consider adding `hl.requireFieldMatch=true`.
+
Note that the field(s) listed here ought to have compatible text-analysis 
(defined in the schema) with field(s) referenced in the query to be 
highlighted.  
It may be necessary to modify `hl.q` and `hl.qparser` and/or modify the text 
analysis.
The following example uses the 
<> syntax and 
<> to highlight 
fields in `hl.fl`:
`hl.fl=field1 field2&hl.q={!edismax qf=$hl.fl v=$q}&hl.qparser=lucene&hl.requireFieldMatch=true` 
(along with other applicable parameters, of course).
+
The default is the value of the `df` parameter which in turn has no default.

`hl.q`::
A query to use for highlighting.
This parameter allows you to highlight different terms or fields than those 
being used to search for documents.
When setting this, you might also need to set `hl.qparser`.
+
The default is the value of the `q` parameter (already parsed).

`hl.qparser`::
The query parser to use for the `hl.q` query.  It only applies when `hl.q` is 
set.
+
The default is the value of the `defType` parameter which in turn defaults to 
`lucene`.
{noformat}

> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy as it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And lets say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12141) Solr does not start on Windows and Linux/Mac with Java 10 or later

2018-03-27 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-12141.
--
Resolution: Fixed

I fixed the issue and pushed to 7.3, 7.x and master branches.

> Solr does not start on Windows and Linux/Mac with Java 10 or later
> --
>
> Key: SOLR-12141
> URL: https://issues.apache.org/jira/browse/SOLR-12141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.0, 7.1, 7.2, 7.3
> Environment: Windows 10 with Java 10+
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch, 
> SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch
>
>
> If you try to start Solr on Windows with Java 10, it fails with the following 
> message:
> {noformat}
> C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e techproducts
> ERROR: Java 1.8 or later is required to run Solr. Current Java version is: 10
> {noformat}
> Java 8 and Java 9 work. I did not try Linux, but the version parsing on 
> Windows is so braindead (I tried to fix it for Java 9 already). Windows CMD 
> shell does not know any numerical comparisons, so it fails as "10" is 
> alphabetically smaller than "9".
> I hope this is better on Linux.
> Why do we have the version check at all? Wouldn't it be better to simply wait 
> for a useful message by the Java VM on startup because of wrong class file 
> format? This is simply too easy to break, especially as the output of "java 
> -version" is not standardized (and changes with Java 10 to also have a date 
> code,...). It also may contain "openjdk" instead of "java".
> So please please, let's get rid of the version check!
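The core of the fix is that any robust check must pull the first numeric token out of the quoted version string and compare numerically rather than lexically (where "10" < "9"). The actual `bin/solr` fix is done in shell/CMD; the following is only a hedged Java sketch of the same parsing idea, with the "1.x" legacy scheme mapped to its major version:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JavaVersionCheck {
    // Extracts the major version from a `java -version` line such as
    //   java version "1.8.0_171"
    //   openjdk version "10" 2018-03-20
    // Pre-9 releases use the "1.x" scheme, so "1.8" maps to major 8.
    static int majorVersion(String versionLine) {
        Matcher m = Pattern.compile("\"(\\d+)(?:\\.(\\d+))?[^\"]*\"")
                           .matcher(versionLine);
        if (!m.find()) {
            throw new IllegalArgumentException("unparseable: " + versionLine);
        }
        int first = Integer.parseInt(m.group(1));
        if (first == 1 && m.group(2) != null) {
            return Integer.parseInt(m.group(2)); // "1.8.0_171" -> 8
        }
        return first; // "9.0.4" -> 9, "10" -> 10
    }

    public static void main(String[] args) {
        // numeric comparison: 10 >= 8, unlike the lexical "10" < "9"
        System.out.println(majorVersion("openjdk version \"10\" 2018-03-20"));
    }
}
```

This also sidesteps the vendor-string problem, since it keys only on the quoted version token and ignores "java" vs. "openjdk".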



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12141) Solr does not start on Windows and Linux/Mac with Java 10 or later

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416223#comment-16416223
 ] 

ASF subversion and git services commented on SOLR-12141:


Commit 98a6b3d642928b1ac9076c6c5a369472581f7633 in lucene-solr's branch 
refs/heads/branch_7_3 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=98a6b3d ]

SOLR-12141: Fix "bin/solr" shell scripts (Windows/Linux/Mac) to correctly 
detect major Java version and use numerical version comparison to enforce 
minimum requirements. Also remove obsolete "UseParNewGC" option. This allows to 
start Solr with Java 10 or later.


> Solr does not start on Windows and Linux/Mac with Java 10 or later
> --
>
> Key: SOLR-12141
> URL: https://issues.apache.org/jira/browse/SOLR-12141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.0, 7.1, 7.2, 7.3
> Environment: Windows 10 with Java 10+
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch, 
> SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch
>
>
> If you try to start Solr on Windows with Java 10, it fails with the following 
> message:
> {noformat}
> C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e techproducts
> ERROR: Java 1.8 or later is required to run Solr. Current Java version is: 10
> {noformat}
> Java 8 and Java 9 works. I did not try Linux, but the version parsing on 
> Windows is so braindead (i tried to fix it for Java 9 already). Windows CMD 
> shell does not know any numerical comparisons, so it fails as "10" is 
> alphabetically smaller "9".
> I hope this is better on Linux.
> Why do we have the version check at all? Wouldn't it be better to simply wait 
> for a useful message by the Java VM on startup because of wrong class file 
> format? This is too simply to break, especially as the output of "java 
> -version" is not standardized (and changes with Java 10 to also have a date 
> code,...). It also may contain "openjdk" instead of "java".
> So please please, let's get rid of the version check!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12141) Solr does not start on Windows and Linux/Mac with Java 10 or later

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416221#comment-16416221
 ] 

ASF subversion and git services commented on SOLR-12141:


Commit f8b8ac71904c96a1fd43acc1a129e45ff83597b9 in lucene-solr's branch 
refs/heads/branch_7x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8b8ac7 ]

SOLR-12141: Fix "bin/solr" shell scripts (Windows/Linux/Mac) to correctly 
detect major Java version and use numerical version comparison to enforce 
minimum requirements. Also remove obsolete "UseParNewGC" option. This allows to 
start Solr with Java 10 or later.


> Solr does not start on Windows and Linux/Mac with Java 10 or later
> --
>
> Key: SOLR-12141
> URL: https://issues.apache.org/jira/browse/SOLR-12141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.0, 7.1, 7.2, 7.3
> Environment: Windows 10 with Java 10+
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch, 
> SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch
>
>
> If you try to start Solr on Windows with Java 10, it fails with the following 
> message:
> {noformat}
> C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e techproducts
> ERROR: Java 1.8 or later is required to run Solr. Current Java version is: 10
> {noformat}
> Java 8 and Java 9 works. I did not try Linux, but the version parsing on 
> Windows is so braindead (i tried to fix it for Java 9 already). Windows CMD 
> shell does not know any numerical comparisons, so it fails as "10" is 
> alphabetically smaller "9".
> I hope this is better on Linux.
> Why do we have the version check at all? Wouldn't it be better to simply wait 
> for a useful message by the Java VM on startup because of wrong class file 
> format? This is too simply to break, especially as the output of "java 
> -version" is not standardized (and changes with Java 10 to also have a date 
> code,...). It also may contain "openjdk" instead of "java".
> So please please, let's get rid of the version check!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12141) Solr does not start on Windows and Linux/Mac with Java 10 or later

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416214#comment-16416214
 ] 

ASF subversion and git services commented on SOLR-12141:


Commit ade2cf2e742fc4f2c312064df9e1ac78159bb23a in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ade2cf2 ]

SOLR-12141: Fix "bin/solr" shell scripts (Windows/Linux/Mac) to correctly 
detect major Java version and use numerical version comparison to enforce 
minimum requirements. Also remove obsolete "UseParNewGC" option. This allows to 
start Solr with Java 10 or later.


> Solr does not start on Windows and Linux/Mac with Java 10 or later
> --
>
> Key: SOLR-12141
> URL: https://issues.apache.org/jira/browse/SOLR-12141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.0, 7.1, 7.2, 7.3
> Environment: Windows 10 with Java 10+
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch, 
> SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch
>
>
> If you try to start Solr on Windows with Java 10, it fails with the following 
> message:
> {noformat}
> C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e techproducts
> ERROR: Java 1.8 or later is required to run Solr. Current Java version is: 10
> {noformat}
> Java 8 and Java 9 works. I did not try Linux, but the version parsing on 
> Windows is so braindead (i tried to fix it for Java 9 already). Windows CMD 
> shell does not know any numerical comparisons, so it fails as "10" is 
> alphabetically smaller "9".
> I hope this is better on Linux.
> Why do we have the version check at all? Wouldn't it be better to simply wait 
> for a useful message by the Java VM on startup because of wrong class file 
> format? This is too simply to break, especially as the output of "java 
> -version" is not standardized (and changes with Java 10 to also have a date 
> code,...). It also may contain "openjdk" instead of "java".
> So please please, let's get rid of the version check!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12141) Solr does not start on Windows and Linux/Mac with Java 10 or later

2018-03-27 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12141:
-
Attachment: SOLR-12141.patch

> Solr does not start on Windows and Linux/Mac with Java 10 or later
> --
>
> Key: SOLR-12141
> URL: https://issues.apache.org/jira/browse/SOLR-12141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.0, 7.1, 7.2, 7.3
> Environment: Windows 10 with Java 10+
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch, 
> SOLR-12141.patch, SOLR-12141.patch, SOLR-12141.patch
>
>
> If you try to start Solr on Windows with Java 10, it fails with the following 
> message:
> {noformat}
> C:\Users\Uwe Schindler\Desktop\solr-7.3.0\bin>solr start -e techproducts
> ERROR: Java 1.8 or later is required to run Solr. Current Java version is: 10
> {noformat}
> Java 8 and Java 9 works. I did not try Linux, but the version parsing on 
> Windows is so braindead (i tried to fix it for Java 9 already). Windows CMD 
> shell does not know any numerical comparisons, so it fails as "10" is 
> alphabetically smaller "9".
> I hope this is better on Linux.
> Why do we have the version check at all? Wouldn't it be better to simply wait 
> for a useful message by the Java VM on startup because of wrong class file 
> format? This is too simply to break, especially as the output of "java 
> -version" is not standardized (and changes with Java 10 to also have a date 
> code,...). It also may contain "openjdk" instead of "java".
> So please please, let's get rid of the version check!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8223) CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines

2018-03-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416210#comment-16416210
 ] 

ASF subversion and git services commented on LUCENE-8223:
-

Commit 25704a1ca2d3f6ef1b37073cfc79468ca9a6ff84 in lucene-solr's branch 
refs/heads/branch_7x from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=25704a1 ]

LUCENE-8223 - remove time dependent checks in performance test

(cherry picked from commit b3cf209)


> CachingNaiveBayesClassifierTest.testPerformance() fails on slow machines
> 
>
> Key: LUCENE-8223
> URL: https://issues.apache.org/jira/browse/LUCENE-8223
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Tommaso Teofili
>Priority: Major
>
> The 7.3 Jenkins smoke tester has failed a couple of times due to 
> CachingNaiveBayesClassifierTest.testPerformance() (see 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/9/] for example).
> I don't think performance tests like this are very useful as part of the 
> standard test suite, because they depend too much on what else is happening 
> on the machine they're being run on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: IndexReader.close() contract

2018-03-27 Thread Adrien Grand
I'm afraid that always calling doClose would be problematic. If the reader
is used in another thread then files might get closed under the hood while
a search is being performed. Even if the guard that we added to
MMapDirectory would now prevent segfaults, users would get potentially
confusing exceptions.

I see pros and cons regarding checking "if (closed)" in ensureOpen. In any
case I think we should keep a way to let ongoing searches finish
successfully before effectively closing the reader, which I suspect users
rely on.

On Tue, Mar 27, 2018 at 15:19, Dawid Weiss  wrote:

> I'm looking at IndexReader.close and keep wondering:
>
>   /**
>* Closes files associated with this index.
>* Also saves any new deletions to disk.
>* No other methods should be called after this has been called.
>* @throws IOException if there is a low-level IO error
>*/
>   @Override
>   public final synchronized void close() throws IOException {
> if (!closed) {
>   decRef();
>   closed = true;
> }
>   }
>
> If you have refCount > 1 this leads to an odd scenario in which the
> decRef decreases the counter, but does not call doClose. It also sets
> 'closed'
> to true, but the object remains functional (since ensureClosed checks
> refCount only).
>
> Wouldn't it be better for close() to actually check the refCount and
> (if closed == false):
>
> - always call doClose
> - mark the object as closed (closed = true)
> - if refCount != 1 => unpaired incRef/decRef somewhere, signal an
> exception?
>
> Then the javadoc would stay true, even if somebody played with reference
> counts.
>
> Dawid
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
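For illustration, a toy model of the semantics Dawid proposes — emphatically not Lucene's actual IndexReader, just a sketch: `close()` always releases resources and signals unpaired incRef/decRef, at the cost of the "let ongoing searches finish" behaviour Adrien points out users may rely on:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model only (NOT Lucene's IndexReader): close() always releases
// resources and flags unpaired incRef/decRef calls.
public class EagerCloseReader {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private volatile boolean closed = false;
    volatile boolean resourcesReleased = false;

    public void incRef() { refCount.incrementAndGet(); }

    public void decRef() {
        if (refCount.decrementAndGet() == 0) {
            doClose();
        }
    }

    private void doClose() { resourcesReleased = true; }

    public synchronized void close() {
        if (closed) return;
        closed = true;
        if (refCount.get() != 1) {
            // unpaired incRef/decRef somewhere: signal instead of leaking
            throw new IllegalStateException(
                "refCount=" + refCount.get() + " at close()");
        }
        decRef(); // always reaches doClose()
    }
}
```

The trade-off in the thread is visible here: an outstanding reference turns `close()` into an error rather than a deferred release, so files can never be closed "under the hood" later — but concurrent holders of the reference lose the grace period.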


Re: Lucene/Solr 7.3

2018-03-27 Thread Alan Woodward
> I think that's what's happening?

Correct - just waiting for SOLR-12141 and I’ll starting building the next RC.

> On 27 Mar 2018, at 20:48, Cassandra Targett  wrote:
> 
> I have the 7.3 Ref Guide built locally and the PDF has been pushed to the 
> solr-ref-guide-rc SVN repo, but I'm holding off on starting the vote thread 
> until RC2 of 7.3 is available (I think that's what's happening? please 
> correct me if I've misread the threads) so one vote doesn't finish too far 
> ahead of the other.
> 
> Cassandra
> 
> On Mon, Mar 26, 2018 at 9:29 AM, Alan Woodward  > wrote:
> The release candidate is out, everybody please vote!
> 
> I’ve drafted some release notes, available here:
> https://wiki.apache.org/solr/ReleaseNote73 
> 
> https://wiki.apache.org/lucene-java/ReleaseNote73 
> 
> 
> They’re fairly bare-bones at the moment, if anybody would like to expand on 
> them please feel free.
> 
> 
>> On 21 Mar 2018, at 15:53, Alan Woodward > > wrote:
>> 
>> FYI I’ve started building a release candidate.
>> 
>> I’ve updated the build script on 7.3 to allow building with ant 1.10, if 
>> this doesn’t produce any problems then I’ll forward-port to 7x and master.
>> 
>>> On 21 Mar 2018, at 02:37, Đạt Cao Mạnh >> > wrote:
>>> 
>>> Hi Alan, 
>>> 
>>> I committed the fix as well as resolve the issue.
>>> 
>>> Thanks!
>>> 
>>> On Tue, Mar 20, 2018 at 9:27 PM Alan Woodward >> > wrote:
>>> OK, thanks. Let me know when it’s in.
>>> 
>>> 
 On 20 Mar 2018, at 14:07, Đạt Cao Mạnh > wrote:
 
 Hi  Alan, guys,
 
 I found a blocker issue SOLR-12129, I've already uploaded a patch and 
 beasting the tests, if the result is good I will commit and notify your 
 guys!
 
 Thanks!
 
 On Tue, Mar 20, 2018 at 2:37 AM Alan Woodward > wrote:
 Go ahead!
 
 
> On 19 Mar 2018, at 18:33, Andrzej Białecki 
>  > wrote:
> 
> Alan,
> 
> I would like to commit the change in SOLR-11407 
> (78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the 
> logic that waits for replica recovery and provides more details about any 
> failures.
> 
>> On 17 Mar 2018, at 13:01, Alan Woodward > > wrote:
>> 
>> I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can 
>> help debugging that if need be.
>> 
>> +1 to backport your fixes
>> 
>>> On 17 Mar 2018, at 01:42, Varun Thacker >> > wrote:
>>> 
>>> I was going through the blockers for 7.3 and only SOLR-12070 came up. 
>>> Is the fix complete for this Andrzej?
>>> 
>>> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083 
>>> yesterday and SOLR-12063 today to master/branch_7x. Both are important 
>>> fixes for CDCR so if you are okay I can backport it to the release 
>>> branch
>>> 
>>> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh >> > wrote:
>>> Hi guys, Alan
>>> 
>>> I committed the fix for SOLR-12110 to branch_7_3
>>> 
>>> Thanks!
>>> 
>>> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh >> > wrote:
>>> Hi Alan,
>>> 
>>> Sure the issue is marked as Blocker for 7.3.
>>> 
>>> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward >> > wrote:
>>> Thanks Đạt, could you mark the issue as a Blocker and let me know when 
>>> it’s been resolved?
>>> 
 On 16 Mar 2018, at 02:05, Đạt Cao Mạnh > wrote:
 
 Hi guys, Alan,
 
 I found a blocker issue SOLR-12110, when investigating test failure. 
 I've already uploaded a patch and beasting the tests, if the result is 
 good I will commit soon.
 
 Thanks!
  
 On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward > wrote:
 Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, 
 can you give me a hand setting up the 7.3 Jenkins jobs?
 
 Thanks, Alan
 
 
> On 12 Mar 2018, at 09:32, Alan Woodward  

[jira] [Created] (SOLR-12152) Split up TriggerIntegrationTest into multiple tests to isolate and increase reliability

2018-03-27 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12152:


 Summary: Split up TriggerIntegrationTest into multiple tests to 
isolate and increase reliability
 Key: SOLR-12152
 URL: https://issues.apache.org/jira/browse/SOLR-12152
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 7.4, master (8.0)


TriggerIntegrationTest is big enough already. It is time to split it up into 
multiple test classes. This will keep one test method from affecting the others 
and help tone down the logs in case we need to troubleshoot further.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10734) Multithreaded test/support for AtomicURP broken

2018-03-27 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416180#comment-16416180
 ] 

Shalin Shekhar Mangar commented on SOLR-10734:
--

Attached: SOLR-10734-fix2.patch

There were two separate bugs here, one rare and the other more common:
# Common: By the time a version conflict is reported, the SolrInputDocument 
inside the AddUpdateCommand is already modified to be a full document i.e. it 
has no atomic update command anymore. So when we try to update the same 
AddUpdateCommand with the new version, it ends up overwriting the older 
document in the index. The fix was to keep a reference to the atomic updates 
and re-apply them on a version conflict.
# Rare: The processor sets the version on the document only if a version is 
returned by VersionInfo.lookupVersion. Since the default version is 0 i.e. no 
constraints, two different updates can race and get a null version thereby 
overwriting each other. The fix is to set version to -1 if 
VersionInfo.lookupVersion returns null.

I beasted this test 100 times and it passes consistently, whereas earlier it 
used to fail 1/5 times with the right seed.

> Multithreaded test/support for AtomicURP broken
> ---
>
> Key: SOLR-10734
> URL: https://issues.apache.org/jira/browse/SOLR-10734
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-10734-fix2.patch, SOLR-10734.patch, 
> SOLR-10734.patch, SOLR-10734.patch, SOLR-10734.patch, Screen Shot 2017-05-31 
> at 4.50.23 PM.png, log-snippet, testMaster_2500, testResults7_10, 
> testResultsMaster_10
>
>
> The multithreaded test doesn't actually start the threads, but only invokes 
> run() directly; hence the join afterwards doesn't do anything.
> {code}
> diff --git 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
>  
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> index f3f833d..10b7770 100644
> --- 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> +++ 
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> @@ -238,7 +238,7 @@ public class AtomicUpdateProcessorFactoryTest extends 
> SolrTestCaseJ4 {
>}
>  }
>};
> -  t.run();
> +  t.run(); // red flag, shouldn't this be t.start?
>threads.add(t);
>finalCount += index; //int_i
>  }
> {code}
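The distinction the "red flag" comment points at is easy to demonstrate: `Thread.run()` is a plain method call that executes the runnable inline on the calling thread, while `Thread.start()` actually spawns the thread — which is why the subsequent `join()` was a no-op. A minimal standalone sketch:

```java
public class RunVsStart {
    public static void main(String[] args) throws InterruptedException {
        final Thread main = Thread.currentThread();
        final Thread[] ranOn = new Thread[1];
        Thread t = new Thread(() -> ranOn[0] = Thread.currentThread());

        t.run();                          // plain method call: runs inline
        boolean ranInline = (ranOn[0] == main);

        t.start();                        // actually spawns the thread
        t.join();                         // now join() has something to wait for
        boolean ranOnNewThread = (ranOn[0] == t);

        System.out.println(ranInline + " " + ranOnNewThread); // true true
    }
}
```

With `run()` the "worker" code finishes before `join()` is ever reached, so the test was effectively single-threaded — exactly the bug the diff fixes.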



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11947) Math Expressions User Guide

2018-03-27 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416176#comment-16416176
 ] 

Cassandra Targett commented on SOLR-11947:
--

I see all these commits went into master, but not branch_7x. What's the plan 
going forward after these commits?

A suggestion for this new "Math Expressions" section is that each of those 
sub-pages should either integrate or link to all of the expressions that fit in 
that category. This will aid the reader who may be interested in "Time Series" 
to not only learn about them in general but find the details about the 
supported expressions for that category.

It's not clear from this or offline messages we've exchanged how this new 
section is meant to work with the existing reference, or how any of the ideas 
discussed in SOLR-11766 fit in with all this either. If you have thought 
through that already, I'd be interested to hear your ideas/plans.

> Math Expressions User Guide
> ---
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch, SOLR-11947.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10734) Multithreaded test/support for AtomicURP broken

2018-03-27 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-10734:
-
Attachment: SOLR-10734-fix2.patch

> Multithreaded test/support for AtomicURP broken
> ---
>
> Key: SOLR-10734
> URL: https://issues.apache.org/jira/browse/SOLR-10734
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-10734-fix2.patch, SOLR-10734.patch, 
> SOLR-10734.patch, SOLR-10734.patch, SOLR-10734.patch, Screen Shot 2017-05-31 
> at 4.50.23 PM.png, log-snippet, testMaster_2500, testResults7_10, 
> testResultsMaster_10
>
>
> The multithreaded test doesn't actually start the threads, but only invokes 
> run() directly; hence the join afterwards doesn't do anything.
> {code}
> diff --git 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
>  
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> index f3f833d..10b7770 100644
> --- 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> +++ 
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> @@ -238,7 +238,7 @@ public class AtomicUpdateProcessorFactoryTest extends 
> SolrTestCaseJ4 {
>}
>  }
>};
> -  t.run();
> +  t.run(); // red flag, shouldn't this be t.start?
>threads.add(t);
>finalCount += index; //int_i
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.3

2018-03-27 Thread Cassandra Targett
I have the 7.3 Ref Guide built locally and the PDF has been pushed to the
solr-ref-guide-rc SVN repo, but I'm holding off on starting the vote thread
until RC2 of 7.3 is available (I think that's what's happening? please
correct me if I've misread the threads) so one vote doesn't finish too far
ahead of the other.

Cassandra

On Mon, Mar 26, 2018 at 9:29 AM, Alan Woodward  wrote:

> The release candidate is out, everybody please vote!
>
> I’ve drafted some release notes, available here:
> https://wiki.apache.org/solr/ReleaseNote73
> https://wiki.apache.org/lucene-java/ReleaseNote73
>
> They’re fairly bare-bones at the moment, if anybody would like to expand
> on them please feel free.
>
>
> On 21 Mar 2018, at 15:53, Alan Woodward  wrote:
>
> FYI I’ve started building a release candidate.
>
> I’ve updated the build script on 7.3 to allow building with ant 1.10, if
> this doesn’t produce any problems then I’ll forward-port to 7x and master.
>
> On 21 Mar 2018, at 02:37, Đạt Cao Mạnh  wrote:
>
> Hi Alan,
>
> I committed the fix as well as resolve the issue.
>
> Thanks!
>
> On Tue, Mar 20, 2018 at 9:27 PM Alan Woodward 
> wrote:
>
>> OK, thanks. Let me know when it’s in.
>>
>>
>> On 20 Mar 2018, at 14:07, Đạt Cao Mạnh  wrote:
>>
>> Hi  Alan, guys,
>>
>> I found a blocker issue SOLR-12129, I've already uploaded a patch and
>> beasting the tests, if the result is good I will commit and notify your
>> guys!
>>
>> Thanks!
>>
>> On Tue, Mar 20, 2018 at 2:37 AM Alan Woodward 
>> wrote:
>>
>>> Go ahead!
>>>
>>>
>>> On 19 Mar 2018, at 18:33, Andrzej Białecki >> com> wrote:
>>>
>>> Alan,
>>>
>>> I would like to commit the change in SOLR-11407 (
>>> 78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the
>>> logic that waits for replica recovery and provides more details about any
>>> failures.
>>>
>>> On 17 Mar 2018, at 13:01, Alan Woodward  wrote:
>>>
>>> I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can
>>> help debugging that if need be.
>>>
>>> +1 to backport your fixes
>>>
>>> On 17 Mar 2018, at 01:42, Varun Thacker  wrote:
>>>
>>> I was going through the blockers for 7.3 and only SOLR-12070 came up. Is
>>> the fix complete for this Andrzej?
>>>
>>> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083
>>> yesterday and SOLR-12063 today to master/branch_7x. Both are important
>>> fixes for CDCR so if you are okay I can backport it to the release branch
>>>
>>> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh 
>>> wrote:
>>>
 Hi guys, Alan

 I committed the fix for SOLR-12110 to branch_7_3

 Thanks!

 On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh 
 wrote:

> Hi Alan,
>
> Sure the issue is marked as Blocker for 7.3.
>
> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward 
> wrote:
>
>> Thanks Đạt, could you mark the issue as a Blocker and let me know
>> when it’s been resolved?
>>
>> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh 
>> wrote:
>>
>> Hi guys, Alan,
>>
>> I found a blocker issue SOLR-12110, when investigating test failure.
>> I've already uploaded a patch and beasting the tests, if the result is 
>> good
>> I will commit soon.
>>
>> Thanks!
>>
>> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward 
>> wrote:
>>
>>> Just realised that I don’t have an ASF Jenkins account - Uwe or
>>> Steve, can you give me a hand setting up the 7.3 Jenkins jobs?
>>>
>>> Thanks, Alan
>>>
>>>
>>> On 12 Mar 2018, at 09:32, Alan Woodward 
>>> wrote:
>>>
>>> I’ve created the 7.3 release branch.  I’ll leave 24 hours for
>>> bug-fixes and doc patches and then create a release candidate.
>>>
>>> We’re now in feature-freeze for 7.3, so please bear in mind the
>>> following:
>>>
>>>- No new features may be committed to the branch.
>>>- Documentation patches, build patches and serious bug fixes may
>>>be committed to the branch. However, you should submit *all* patches
>>>you want to commit to Jira first to give others the chance to review 
>>> and
>>>possibly vote against the patch. Keep in mind that it is our main 
>>> intention
>>>to keep the branch as stable as possible.
>>>- All patches that are intended for the branch should first be
>>>committed to the unstable branch, merged into the stable branch, and 
>>> then
>>>into the current release branch.
>>>- Normal unstable and stable branch development may continue as
>>>usual. However, if you plan to commit a big 

[jira] [Commented] (SOLR-12094) JsonRecordReader ignores root record fields after the split point

2018-03-27 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416136#comment-16416136
 ] 

Dawid Weiss commented on SOLR-12094:


I looked at the code of that streaming parser; it is quite complex. It seems 
like all this node copying and record trickery could be avoided, but that 
would be a significantly more complex patch. [~noble.paul] - you seem to be 
much more involved in the parser development; would you like to take a look 
before I commit this?

> JsonRecordReader ignores root record fields after the split point
> -
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, SOLR-12094.patch, 
> json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with other than top-level split, ignores 
> all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.
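For readers unfamiliar with the split semantics at issue, here is a minimal sketch of the *intended* behaviour for split=/exams using the JSON from the report: each element of "exams" becomes one record, and every root-level scalar, including "after" (which appears past the split point), should be copied into each record. This is plain Java over maps for illustration only, not the actual JsonRecordReader implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SplitSemanticsSketch {

    // Intended semantics: one record per child of the split field, with all
    // other root-level fields (before AND after the split point) copied in.
    static List<Map<String, Object>> split(Map<String, Object> root, String splitField) {
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> children = (List<Map<String, Object>>) root.get(splitField);
        List<Map<String, Object>> records = new ArrayList<>();
        for (Map<String, Object> child : children) {
            Map<String, Object> rec = new LinkedHashMap<>();
            for (Map.Entry<String, Object> e : root.entrySet()) {
                if (!e.getKey().equals(splitField)) {
                    rec.put(e.getKey(), e.getValue()); // root fields, wherever they appear
                }
            }
            rec.putAll(child);
            records.add(rec);
        }
        return records;
    }

    public static void main(String[] args) {
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("first", "John");
        root.put("last", "Doe");
        root.put("grade", 8);
        root.put("exams", List.of(
                Map.of("subject", "Maths", "test", "term1", "marks", 90),
                Map.of("subject", "Biology", "test", "term1", "marks", 86)));
        root.put("after", "456"); // root field located after the split array

        for (Map<String, Object> rec : split(root, "exams")) {
            System.out.println(rec); // each record should contain after=456
        }
    }
}
```

The bug report is precisely that the real reader drops "after" because it has already emitted the records by the time it sees it.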



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12094) JsonRecordReader ignores root record fields after the split point

2018-03-27 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-12094:
---
Attachment: SOLR-12094.patch

> JsonRecordReader ignores root record fields after the split point
> -
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, SOLR-12094.patch, 
> json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with other than top-level split, ignores 
> all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.






[jira] [Commented] (SOLR-10734) Multithreaded test/support for AtomicURP broken

2018-03-27 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416130#comment-16416130
 ] 

Shalin Shekhar Mangar commented on SOLR-10734:
--

The latest test failure indicates that there is a genuine bug here. After 
adding some debug logging, I see that the test fails because the value of 
{{int_i}} in the document is different from what the test expects if and only 
if there is a version conflict reported in the logs.

> Multithreaded test/support for AtomicURP broken
> ---
>
> Key: SOLR-10734
> URL: https://issues.apache.org/jira/browse/SOLR-10734
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-10734.patch, SOLR-10734.patch, SOLR-10734.patch, 
> SOLR-10734.patch, Screen Shot 2017-05-31 at 4.50.23 PM.png, log-snippet, 
> testMaster_2500, testResults7_10, testResultsMaster_10
>
>
> The multithreaded test doesn't actually start the threads, but only invokes 
> the run directly. The join afterwards doesn't do anything, hence.
> {code}
> diff --git 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
>  
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> index f3f833d..10b7770 100644
> --- 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> +++ 
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> @@ -238,7 +238,7 @@ public class AtomicUpdateProcessorFactoryTest extends 
> SolrTestCaseJ4 {
>}
>  }
>};
> -  t.run();
> +  t.run(); // red flag, shouldn't this be t.start?
>threads.add(t);
>finalCount += index; //int_i
>  }
> {code}
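The distinction flagged in the diff is easy to demonstrate in isolation. The sketch below (plain Java, names are illustrative) shows that Thread.run() is an ordinary method call executed on the *calling* thread, so a test that does t.run() followed by t.join() never exercises any concurrency; only Thread.start() forks a new thread.

```java
import java.util.concurrent.atomic.AtomicReference;

public class RunVsStart {

    // Returns the name of the thread that actually executed the Runnable.
    static String executingThreadName(boolean useStart) throws InterruptedException {
        AtomicReference<String> name = new AtomicReference<>();
        Thread t = new Thread(() -> name.set(Thread.currentThread().getName()));
        if (useStart) {
            t.start();  // runs the Runnable on a freshly forked thread
        } else {
            t.run();    // runs the Runnable synchronously, right here
        }
        t.join();       // returns immediately if the thread was never started
        return name.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("run():   executed on " + executingThreadName(false)); // "main"
        System.out.println("start(): executed on " + executingThreadName(true));  // e.g. "Thread-1"
    }
}
```

This is why the original test passed trivially: every "concurrent" update ran sequentially on the test thread, and the join() calls were no-ops.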






[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 184 - Still unstable

2018-03-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/184/

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchWithMasterUrl

Error Message:
expected: but was:<.numFound:505!=501>

Stack Trace:
java.lang.AssertionError: expected: but was:<.numFound:505!=501>
at 
__randomizedtesting.SeedInfo.seed([D615EA42C65A9D66:CDAE1B3DA7E5E5D0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchWithMasterUrl(TestReplicationHandler.java:800)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 2078 lines...]
   [junit4] JVM J0: stdout was not empty, see: 

[jira] [Updated] (SOLR-12094) JsonRecordReader ignores root record fields after the split point

2018-03-27 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-12094:
---
Summary: JsonRecordReader ignores root record fields after the split point  
(was: JsonRecordReader ignores root fields after split)

> JsonRecordReader ignores root record fields after the split point
> -
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with other than top-level split, ignores 
> all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1513 - Unstable

2018-03-27 Thread Apache Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 76, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416092#comment-16416092
 ] 

Karl Wright commented on LUCENE-8227:
-

The NPEs are caused when travel planes are identical to edges.  This is 
relatively easy to address, but needs to be carefully coded.

The assertion failures are a bit tougher because they indicate we don't 
understand the geometry properly in some situations.  I'll have to look into 
these in more depth.


> TestGeo3DPoint.testGeo3DRelations() reproducing failures
> 
>
> Key: LUCENE-8227
> URL: https://issues.apache.org/jira/browse/LUCENE-8227
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test, modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>Priority: Major
>
> Three failures: two NPEs and one assert "assess edge that ends in a crossing 
> can't both up and down":
> 1.a. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1512/]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=C1F88333EC85EAE0 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ga -Dtests.timezone=America/Ojinaga -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   10.4s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C1F88333EC85EAE0:7187FEA763C8447C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:569)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseMembershipShape.isWithin(GeoBaseMembershipShape.java:36)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseShape.getBounds(GeoBaseShape.java:35)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.getBounds(GeoComplexPolygon.java:440)
>[junit4]>  at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> 1.b. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/184/]:
> {noformat}
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=F2A368AB96A2FD75 -Dtests.multiplier=2 -Dtests.locale=fr-ML 
> -Dtests.timezone=America/Godthab -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[smoker][junit4] ERROR   0.99s J0 | TestGeo3DPoint.testGeo3DRelations 
> <<<
>[smoker][junit4]> Throwable #1: java.lang.NullPointerException
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F2A368AB96A2FD75:42DC153F19EF53E9]:0)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[smoker][junit4]>  at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7243 - Still Unstable!

2018-03-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7243/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=379165

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=379165
at 
__randomizedtesting.SeedInfo.seed([E30502B4D836B954:DB6971914CE61B12]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:31)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
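For context, this test compares an epoch reading derived from System.nanoTime() against System.currentTimeMillis(). A minimal sketch of that style of check follows; the class and method names and the tolerance are assumptions for illustration, not Solr's actual TimeSource code. Because the two clocks tick independently, a check like this is sensitive to scheduling delays on loaded or virtualized machines.

```java
public class EpochDriftSketch {

    // Derive "now" in epoch milliseconds from the monotonic clock and
    // report how far it drifts from the wall clock.
    static long driftMs() {
        final long epochOriginMs = System.currentTimeMillis();
        final long nanoOrigin = System.nanoTime();
        long epochFromNanoMs = epochOriginMs + (System.nanoTime() - nanoOrigin) / 1_000_000L;
        return Math.abs(epochFromNanoMs - System.currentTimeMillis());
    }

    public static void main(String[] args) {
        // Small drift is normal; an assertion fails only when the drift
        // exceeds the test's tolerance.
        System.out.println("drift = " + driftMs() + " ms");
    }
}
```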




Build Log:
[...truncated 1866 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\temp\junit4-J0-20180327_171057_1696203757878692549481.sysout
   

[jira] [Issue Comment Deleted] (SOLR-12151) abstract MultiSolrCloudTestCase class

2018-03-27 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12151:
---
Comment: was deleted

(was: Patch attached, reviews, comments, questions etc. welcome as usual. 
Hoping to commit this towards the end of next week.)

> abstract MultiSolrCloudTestCase class
> -
>
> Key: SOLR-12151
> URL: https://issues.apache.org/jira/browse/SOLR-12151
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12151.patch
>
>
> An abstract base class for tests that require more than one SolrCloud.
> Builds upon the existing 
> [SolrCloudTestCase|https://github.com/apache/lucene-solr/blob/master/solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java]
>  class.






[jira] [Updated] (SOLR-12151) abstract MultiSolrCloudTestCase class

2018-03-27 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12151:
---
Attachment: SOLR-12151.patch

> abstract MultiSolrCloudTestCase class
> -
>
> Key: SOLR-12151
> URL: https://issues.apache.org/jira/browse/SOLR-12151
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12151.patch
>
>
> An abstract base class for tests that require more than one SolrCloud.
> Builds upon the existing 
> [SolrCloudTestCase|https://github.com/apache/lucene-solr/blob/master/solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java]
>  class.






[jira] [Commented] (SOLR-12151) abstract MultiSolrCloudTestCase class

2018-03-27 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416083#comment-16416083
 ] 

Christine Poerschke commented on SOLR-12151:


Patch attached, reviews, comments, questions etc. welcome as usual. Hoping to 
commit this towards the end of next week.

> abstract MultiSolrCloudTestCase class
> -
>
> Key: SOLR-12151
> URL: https://issues.apache.org/jira/browse/SOLR-12151
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12151.patch
>
>
> An abstract base class for tests that require more than one SolrCloud.
> Builds upon the existing 
> [SolrCloudTestCase|https://github.com/apache/lucene-solr/blob/master/solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java]
>  class.






[jira] [Created] (SOLR-12151) abstract MultiSolrCloudTestCase class

2018-03-27 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12151:
--

 Summary: abstract MultiSolrCloudTestCase class
 Key: SOLR-12151
 URL: https://issues.apache.org/jira/browse/SOLR-12151
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke


An abstract base class for tests that require more than one SolrCloud.

Builds upon the existing 
[SolrCloudTestCase|https://github.com/apache/lucene-solr/blob/master/solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java]
 class.






[jira] [Commented] (SOLR-9185) Solr's edismax and "Lucene"/standard query parsers should optionally not split on whitespace before sending terms to analysis

2018-03-27 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416051#comment-16416051
 ] 

Dean Gurvitz commented on SOLR-9185:


I missed that comment. Anyways, I just think that we should be more careful 
with such changes in minor versions, and at least explicitly mention them in 
the changes.txt file for those who wish to upgrade their version.

> Solr's edismax and "Lucene"/standard query parsers should optionally not 
> split on whitespace before sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
>  Issue Type: New Feature
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 6.5, 7.0
>
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch, 
> SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> n-gram analysis
> shingles
> synonyms (especially multi-word for whitespace-separated languages)
> languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but in many cases they can't. Instead, preferably the 
> queryparser would parse around only real 'operators'.
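A toy sketch of the problem described above (plain Java, not the Lucene/Solr analysis chain; the synonym map and "analyzer" are illustrative) shows why splitting on whitespace before analysis defeats multi-word synonyms: the mapping "new york" -> "nyc" can only fire when the analyzer sees both words together.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class WhitespaceSplitDemo {

    private static final Map<String, String> SYNONYMS = Map.of("new york", "nyc");

    // Stand-in "analyzer": lowercase, apply multi-word synonyms, tokenize.
    static List<String> analyze(String text) {
        String lower = text.toLowerCase(Locale.ROOT);
        String replaced = SYNONYMS.getOrDefault(lower, lower);
        return Arrays.asList(replaced.split("\\s+"));
    }

    public static void main(String[] args) {
        // Split-on-whitespace first: the analyzer sees "New" and "York"
        // separately, so the multi-word synonym never matches.
        List<String> perWord = new ArrayList<>();
        for (String word : "New York".split("\\s+")) {
            perWord.addAll(analyze(word));
        }
        System.out.println(perWord);              // [new, york]

        // Whole clause reaches the analyzer together: synonym fires.
        System.out.println(analyze("New York"));  // [nyc]
    }
}
```

SOLR-9185's sow (split-on-whitespace) parameter makes the second behaviour available to the edismax and standard query parsers.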






[jira] [Updated] (SOLR-12094) JsonRecordReader ignores root fields after split

2018-03-27 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-12094:
---
Description: 
JsonRecordReader, when configured with other than top-level split, ignores all 
top-level JSON nodes after the split ends, for example:

{code}
{
  "first": "John",
  "last": "Doe",
  "grade": 8,
  "exams": [
{
"subject": "Maths",
"test": "term1",
"marks": 90
},
{
"subject": "Biology",
"test": "term1",
"marks": 86
}
  ],
  "after": "456"
}
{code}

Node "after" won't be visible in SolrInputDocument constructed from 
/update/json/docs.

  was:
JsonRecordReader, when configured with other than top-level split, ignores all 
top-level JSON nodes after split ends, for example:

{code}
{
  "first": "John",
  "last": "Doe",
  "grade": 8,
  "exams": [
{
"subject": "Maths",
"test": "term1",
"marks": 90
},
{
"subject": "Biology",
"test": "term1",
"marks": 86
}
  ],
  "after": "456"
}
{code}

Node "after" won't be visible in SolrInputDocument constructed from 
/update/json/docs.

I don't have a fix, only a (breaking) patch for the relevant test


> JsonRecordReader ignores root fields after split
> 
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with other than top-level split, ignores 
> all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.






[jira] [Commented] (SOLR-9185) Solr's edismax and "Lucene"/standard query parsers should optionally not split on whitespace before sending terms to analysis

2018-03-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416035#comment-16416035
 ] 

Steve Rowe commented on SOLR-9185:
--

bq. I would like to note that this issue actually included an API change in a 
minor Solr version without prior deprecation warnings, changing 
SolrQueryParserBase.init's signature. I think this should've been avoided, as 
I don't see how it relates to the rest of the issue; at the least it should 
be mentioned explicitly, and preferably moved to a major version update.

You're right, it was an unrelated cleanup of an unused parameter.  As I noted 
in a comment above:

bq. In addition to the grammar changes, I've removed the Version matchVersion 
param from SolrQueryParserBase.init() - it was being ignored, and the 
equivalent param was removed from the Lucene classic QueryParser in LUCENE-5859.



> Solr's edismax and "Lucene"/standard query parsers should optionally not 
> split on whitespace before sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
>  Issue Type: New Feature
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 6.5, 7.0
>
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch, 
> SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> n-gram analysis
> shingles
> synonyms (especially multi-word for whitespace-separated languages)
> languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> query time, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9185) Solr's edismax and "Lucene"/standard query parsers should optionally not split on whitespace before sending terms to analysis

2018-03-27 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416018#comment-16416018
 ] 

Dean Gurvitz commented on SOLR-9185:


I would like to note that this issue actually included an API change in a minor 
Solr version without prior deprecation warnings, changing 
SolrQueryParserBase.init's signature. I think this should've been avoided, as I 
don't see how it relates to the rest of the issue, or at least be mentioned 
explicitly if already included, and preferably moved to a major version update.

> Solr's edismax and "Lucene"/standard query parsers should optionally not 
> split on whitespace before sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
>  Issue Type: New Feature
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 6.5, 7.0
>
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch, 
> SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> n-gram analysis
> shingles
> synonyms (especially multi-word for whitespace-separated languages)
> languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> query time, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2018-03-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416015#comment-16416015
 ] 

Mark Miller commented on SOLR-10783:


Also, are all of the SOLR_JETTY_CONFIG refs in solr.cmd treated the same way? 
It looks to me like quotes are only removed around a subset of occurrences? I 
may be missing something - on my last pull that file was also conflicted, but 
not around the spots I'm concerned with.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.0
>Reporter: Mano Kovacs
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, 
> SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: When Solr is used in a Hadoop environment, support for HCP gives 
> better integration and a unified method to pass sensitive credentials to Solr.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6425) Move extractTerms to Weight

2018-03-27 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416011#comment-16416011
 ] 

Dean Gurvitz commented on LUCENE-6425:
--

I get that. However, Lucene is also designed for external use, and at my 
company we actually needed an index-neutral extraction. This isn't a huge deal 
or anything, but I just think these issues can sometimes be addressed with a 
little more care.

> Move extractTerms to Weight
> ---
>
> Key: LUCENE-6425
> URL: https://issues.apache.org/jira/browse/LUCENE-6425
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6425.patch, LUCENE-6425.patch
>
>
> Today we have extractTerms on Query, but it is supposed to only be called 
> after the query has been specialized to a given IndexReader using 
> Query.rewrite(IndexReader) to allow some complex queries to replace terms 
> "matchers" with actual terms (eg. WildcardQuery).
> However, we already have an abstraction for indexreader-specialized queries: 
> Weight. So I think it would make more sense to have extractTerms on Weight. 
> This would also remove the trap of calling extractTerms on a query which is 
> not rewritten yet.
> Since Weights know about whether scores are needed or not, I also hope this 
> would help improve the extractTerms semantics. We currently have 2 use-cases 
> for extractTerms: distributed IDF and highlighting. While the former only 
> cares about terms which are used for scoring, it could make sense to 
> highlight terms that were used for matching, even if they did not contribute 
> to the score (eg. if wrapped in a ConstantScoreQuery or a BooleanQuery FILTER 
> clause). So highlighters could do searcher.createNormalizedWeight(query, 
> false).extractTerms(termSet) to get all terms that were used for matching the 
> query while distributed IDF would instead do 
> searcher.createNormalizedWeight(query, true).extractTerms(termSet) to get 
> scoring terms only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-27 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned LUCENE-8227:
---

Assignee: Karl Wright

> TestGeo3DPoint.testGeo3DRelations() reproducing failures
> 
>
> Key: LUCENE-8227
> URL: https://issues.apache.org/jira/browse/LUCENE-8227
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test, modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>Priority: Major
>
> Three failures: two NPEs and one assert "assess edge that ends in a crossing 
> can't both up and down":
> 1.a. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1512/]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=C1F88333EC85EAE0 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ga -Dtests.timezone=America/Ojinaga -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   10.4s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C1F88333EC85EAE0:7187FEA763C8447C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:569)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseMembershipShape.isWithin(GeoBaseMembershipShape.java:36)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseShape.getBounds(GeoBaseShape.java:35)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.getBounds(GeoComplexPolygon.java:440)
>[junit4]>  at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> 1.b. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/184/]:
> {noformat}
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=F2A368AB96A2FD75 -Dtests.multiplier=2 -Dtests.locale=fr-ML 
> -Dtests.timezone=America/Godthab -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[smoker][junit4] ERROR   0.99s J0 | TestGeo3DPoint.testGeo3DRelations 
> <<<
>[smoker][junit4]> Throwable #1: java.lang.NullPointerException
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F2A368AB96A2FD75:42DC153F19EF53E9]:0)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[smoker][junit4]>  at 
> 

[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415999#comment-16415999
 ] 

Erick Erickson commented on SOLR-12136:
---

Change away ;)

But if you decide to change it, please pretend you don't understand anything 
about the guts of query parsers, analysis chains and the like and their 
interdependencies. 

Remember the user's list question about
q=field1:something&hl.q=otherword
and the surprise that "otherword" was analyzed against the "df" field.

But wait, it wouldn't have been if we were using edismax and happened to include 
field1 in the "qf" list, but maybe not if we overrode the hl.qparser parameter, 
though we could always override that with {!}..

I couldn't figure out a way to convey all that, so I chose a simpler prescriptive 
approach. If people dive deeper they can tweak all the other params.

> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:*
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung".
> Kündigung was indexed as Kundigung.
> So far so good. Now if I try to highlight on f1:
> These work
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> *David Smiley's analysis:*
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy at it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And lets say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both solveable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415996#comment-16415996
 ] 

David Smiley commented on SOLR-11913:
-

{quote}I tried doing so but it gave me some error stating _Rat problems were 
found!_. I tried searching the internet but couldn't find anything useful, and 
did some playing around with build.xml, but it was all in vain.
{quote}
That refers to quality or style checks that Lucene/Solr expressly insists upon 
(Apache RAT verifies license headers, among other things).  The log output 
_should_ tell you exactly what the problem is.
{quote}When you said "callers of getParameterNamesIterator()", did you mean 
callers in the SolrParams class only, or all callers of that iterator function?
{quote}
All.  Note your IDE should have a "find usages" or similarly named feature.

bq. Though ModifiableSolrParams is just an example, I checked this class and it 
is using getParameterNamesIterator() for its add(SolrParams params) function, 
which has never been called, so no point in changing that, I guess.

Please do change it.  When you say "has never been called", maybe I don't 
understand what you mean, but IntelliJ reports 30 usages across the codebase.

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch, SOLR-11913.patch, SOLR-11913_v2.patch
>
>
> SolrJ ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.  
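A minimal sketch of the proposed shape, with hypothetical names (not the actual 
SolrParams/ModifiableSolrParams classes): a default iterator() built on the 
name iterator plus getParams(), and the modifiable implementation overriding it 
to delegate to its LinkedHashMap's entry set:

```java
import java.util.*;

// Hypothetical, simplified sketch of the SOLR-11913 idea.
interface Params extends Iterable<Map.Entry<String, String[]>> {
    Iterator<String> getParameterNamesIterator();
    String[] getParams(String name);

    // Default impl: build entries lazily from the name iterator,
    // resolving values through getParams() on access.
    @Override
    default Iterator<Map.Entry<String, String[]>> iterator() {
        Iterator<String> names = getParameterNamesIterator();
        return new Iterator<Map.Entry<String, String[]>>() {
            public boolean hasNext() { return names.hasNext(); }
            public Map.Entry<String, String[]> next() {
                String name = names.next();
                return new AbstractMap.SimpleImmutableEntry<>(name, getParams(name));
            }
        };
    }
}

class MapParams implements Params {
    private final LinkedHashMap<String, String[]> map = new LinkedHashMap<>();
    void add(String name, String... vals) { map.put(name, vals); }
    public Iterator<String> getParameterNamesIterator() { return map.keySet().iterator(); }
    public String[] getParams(String name) { return map.get(name); }
    // The modifiable impl bypasses the default and exposes its entry set directly.
    @Override
    public Iterator<Map.Entry<String, String[]>> iterator() { return map.entrySet().iterator(); }
}

public class ParamsDemo {
    public static void main(String[] args) {
        MapParams p = new MapParams();
        p.add("q", "*:*");
        p.add("fl", "id", "name");
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String[]> e : p) { // Java 5 for-each, per the issue
            sb.append(e.getKey()).append('=').append(String.join(",", e.getValue())).append(';');
        }
        System.out.println(sb); // q=*:*;fl=id,name;
    }
}
```

With this in place, callers get both the for-each style shown above and Java 8 
streams via the Iterable (e.g. StreamSupport.stream(p.spliterator(), false)).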



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-03-27 Thread Rupa Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415992#comment-16415992
 ] 

Rupa Shankar commented on SOLR-11277:
-

Weird, sorry about that. For good measure, I regenerated the patch and 
re-attached, but `git am SOLR-11277.patch` on the latest master works for me. 

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, max_size_auto_commit.patch
>
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 
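For context, hard-commit triggers are configured under <autoCommit> in 
solrconfig.xml; this proposal adds a size-based trigger next to the existing 
ones. A hypothetical fragment (the maxSize element and its value syntax are the 
proposal here, not a released API):

```xml
<!-- solrconfig.xml: hard-commit triggers. maxDocs/maxTime already exist;
     maxSize (transaction-log size on disk) is what this issue proposes. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>    <!-- commit at least once a minute -->
    <maxSize>512m</maxSize>     <!-- ...or when the tlog exceeds 512 MB -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

Either trigger firing would cause a hard commit and tlog rollover, bounding 
tlog growth even under bursty, variable-size indexing.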



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


