[jira] [Commented] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335473#comment-16335473
 ] 

Shalin Shekhar Mangar commented on SOLR-11880:
--

I don't think that is possible. If we want the caller's stack trace then we 
either call fillInStackTrace at the point of the call or we use a cached copy 
from the right place. If this is indeed a bottleneck (and I am not sure that it 
is), then we can either eliminate the submitter stack trace with a flag or do 
something ugly/clever like having the caller of the execute method pass down a 
static exception object.
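
For illustration, a minimal sketch of the "flag" idea (the field names and the 
system property below are hypothetical, not Solr's actual API): only pay for the 
stack-trace capture when a flag asks for it, otherwise reuse one pre-built, 
shared exception.

{code:java}
// Hypothetical sketch inside the executor: capture the submitter stack trace
// per call only when enabled; otherwise reuse a single cached exception so
// fillInStackTrace() is not invoked on every execute() call.
private static final boolean CAPTURE_SUBMITTER_TRACE =
    Boolean.getBoolean("solr.mdcExecutor.captureSubmitterStackTrace"); // assumed property name
private static final Exception SHARED_SUBMITTER_TRACE =
    new Exception("Submitter stack trace not recorded (capture disabled)");

@Override
public void execute(final Runnable command) {
  final Exception submitterStackTrace = CAPTURE_SUBMITTER_TRACE
      ? new Exception("Submitter stack trace")  // full trace, costly on every call
      : SHARED_SUBMITTER_TRACE;                 // cached copy, no per-call fillInStackTrace()
  // ... wrap `command` and attach submitterStackTrace on error, as the
  // existing execute() implementation does ...
  super.execute(command);
}
{code}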

> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method:
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack trace");
> {code}
> This means that every call via the thread pool creates this exception, even 
> though it is only used when an error occurs. 






[jira] [Commented] (LUCENE-8132) HyphenationDecompoundTokenFilter does not set position/offset attributes correctly

2018-01-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335455#comment-16335455
 ] 

Adrien Grand commented on LUCENE-8132:
--

No, the hyphenation decompounder would have to be the first token filter in the 
analysis chain.
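
As a rough illustration of such a chain (a sketch only, assuming a hyphenation 
tree and dictionary have already been loaded; class names are from the 
analysis-common module), the decompounder can be placed directly after the 
tokenizer, with further filters applied afterwards:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter;
import org.apache.lucene.analysis.compound.hyphenation.HyphenationTree;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// Sketch: the hyphenation decompounder is the first token filter, right after
// the tokenizer; lowercasing (or WDF, etc.) comes later in the chain.
public class DecompoundFirstAnalyzer extends Analyzer {
  private final HyphenationTree hyphenator; // assumed to be loaded elsewhere
  private final CharArraySet dictionary;    // assumed to be loaded elsewhere

  public DecompoundFirstAnalyzer(HyphenationTree hyphenator, CharArraySet dictionary) {
    this.hyphenator = hyphenator;
    this.dictionary = dictionary;
  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer source = new StandardTokenizer();
    TokenStream stream = new HyphenationCompoundWordTokenFilter(source, hyphenator, dictionary);
    stream = new LowerCaseFilter(stream);
    return new TokenStreamComponents(source, stream);
  }
}
{code}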

> HyphenationDecompoundTokenFilter does not set position/offset attributes 
> correctly
> --
>
> Key: LUCENE-8132
> URL: https://issues.apache.org/jira/browse/LUCENE-8132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6.1, 7.2.1
>Reporter: Holger Bruch
>Priority: Major
>
> HyphenationDecompoundTokenFilter and DictionaryDecompoundTokenFilter set 
> positionIncrement to 0 for all subwords, reuse the start/end offset of the 
> original token, and ignore positionLength completely.
> As a consequence, the QueryBuilder generates a SynonymQuery comprising all 
> subwords, which should rather be treated as individual terms.
>  
>  






[jira] [Commented] (LUCENE-8132) HyphenationDecompoundTokenFilter does not set position/offset attributes correctly

2018-01-22 Thread Holger Bruch (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335427#comment-16335427
 ] 

Holger Bruch commented on LUCENE-8132:
--

I'm not as deep into Lucene as you are. What would be the pros and cons of 
ensuring the input is an instance of Tokenizer?
Would it still be possible to apply token filters like WDF or a lowercase 
filter before the HyphenationDecompounder?

> HyphenationDecompoundTokenFilter does not set position/offset attributes 
> correctly
> --
>
> Key: LUCENE-8132
> URL: https://issues.apache.org/jira/browse/LUCENE-8132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6.1, 7.2.1
>Reporter: Holger Bruch
>Priority: Major
>
> HyphenationDecompoundTokenFilter and DictionaryDecompoundTokenFilter set 
> positionIncrement to 0 for all subwords, reuse the start/end offset of the 
> original token, and ignore positionLength completely.
> As a consequence, the QueryBuilder generates a SynonymQuery comprising all 
> subwords, which should rather be treated as individual terms.
>  
>  






[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335416#comment-16335416
 ] 

Ignacio Vera commented on LUCENE-8133:
--

I disagree with the assessment. I think the factory is doing the right job: it 
is trying to create a convex polygon from the provided points, as expected. 
The issue is that when creating one of the edges, the final plane suffers from 
numerical imprecision, resulting in some of the points lying off the plane.

I attached the test with extra code that creates the bogus plane.
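
For context, such a polygon is typically built through the spatial3d factory 
along the following lines (a sketch with placeholder coordinates, not the exact 
points from the attached test):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.spatial3d.geom.GeoPoint;
import org.apache.lucene.spatial3d.geom.GeoPolygon;
import org.apache.lucene.spatial3d.geom.GeoPolygonFactory;
import org.apache.lucene.spatial3d.geom.PlanetModel;

// Sketch: latitude/longitude in degrees are converted to radians; the attached
// test uses its own (much closer together) coordinates.
public class SmallPolygonSketch {
  public static GeoPolygon buildPolygon() {
    List<GeoPoint> points = new ArrayList<>();
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(40.0001), Math.toRadians(-70.0001)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(40.0001), Math.toRadians(-70.0002)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(40.0002), Math.toRadians(-70.0002)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(40.0002), Math.toRadians(-70.0001)));
    return GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points);
  }
}
{code}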

 

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, LUCENE-8133.patch, 
> image-2018-01-22-12-52-21-656.png, image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  






[jira] [Updated] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8133:
-
Attachment: LUCENE-8133.patch

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, LUCENE-8133.patch, 
> image-2018-01-22-12-52-21-656.png, image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+37) - Build # 1226 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1226/
Java: 64bit/jdk-10-ea+37 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test

Error Message:
Error from server at https://127.0.0.1:34337/solr: no such collection or alias

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at https://127.0.0.1:34337/solr: no such collection or alias
at 
__randomizedtesting.SeedInfo.seed([36F490D22F31B59F:BEA0AF0881CDD867]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test(TimeRoutedAliasUpdateProcessorTest.java:107)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-11885) Solrj client deleteByIds handle route request miss wrap basic auth credentials

2018-01-22 Thread Aibao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aibao Luo updated SOLR-11885:
-
Description: 
public Map<String, LBHttpSolrClient.Req> getRoutes(DocRouter router,
    DocCollection col, Map<String, List<String>> urlMap,
    ModifiableSolrParams params, String idField) {

  // ...

  if (request != null) {
    UpdateRequest urequest = (UpdateRequest) request.getRequest();
    urequest.deleteById(deleteId, version);
  } else {
    UpdateRequest urequest = new UpdateRequest();
    urequest.setParams(params);
    urequest.deleteById(deleteId, version);
    urequest.setCommitWithin(getCommitWithin());
    request = new LBHttpSolrClient.Req(urequest, urls);
    routes.put(leaderUrl, request);
  }

  // ...
}

When deleting by ids, the inner wrapped request sent to the routed slice should 
contain the basic auth credentials from the source request, just as adding 
documents does.

  was:
public Map<String, LBHttpSolrClient.Req> getRoutes(DocRouter router,
    DocCollection col, Map<String, List<String>> urlMap,
    ModifiableSolrParams params, String idField) {

  if ((documents == null || documents.size() == 0)
      && (deleteById == null || deleteById.size() == 0)) {
    return null;
  }

  Map<String, LBHttpSolrClient.Req> routes = new HashMap<>();
  if (documents != null) {
    Set<Entry<SolrInputDocument, Map<String, Object>>> entries = documents.entrySet();
    for (Entry<SolrInputDocument, Map<String, Object>> entry : entries) {
      SolrInputDocument doc = entry.getKey();
      Object id = doc.getFieldValue(idField);
      if (id == null) {
        return null;
      }
      Slice slice = router.getTargetSlice(id.toString(), doc, null, null, col);
      if (slice == null) {
        return null;
      }
      List<String> urls = urlMap.get(slice.getName());
      if (urls == null) {
        return null;
      }
      String leaderUrl = urls.get(0);
      LBHttpSolrClient.Req request = routes.get(leaderUrl);
      if (request == null) {
        UpdateRequest updateRequest = new UpdateRequest();
        updateRequest.setMethod(getMethod());
        updateRequest.setCommitWithin(getCommitWithin());
        updateRequest.setParams(params);
        updateRequest.setPath(getPath());
        updateRequest.setBasicAuthCredentials(getBasicAuthUser(), getBasicAuthPassword());
        request = new LBHttpSolrClient.Req(updateRequest, urls);
        routes.put(leaderUrl, request);
      }
      UpdateRequest urequest = (UpdateRequest) request.getRequest();
      Map<String, Object> value = entry.getValue();
      Boolean ow = null;
      if (value != null) {
        ow = (Boolean) value.get(OVERWRITE);
      }
      if (ow != null) {
        urequest.add(doc, ow);
      } else {
        urequest.add(doc);
      }
    }
  }

  // Route the deleteById's
  if (deleteById != null) {
    Iterator<Map.Entry<String, Map<String, Object>>> entries = deleteById.entrySet().iterator();
    while (entries.hasNext()) {
      Map.Entry<String, Map<String, Object>> entry = entries.next();
      String deleteId = entry.getKey();
      Map<String, Object> map = entry.getValue();
      Long version = null;
      if (map != null) {
        version = (Long) map.get(VER);
      }
      Slice slice = router.getTargetSlice(deleteId, null, null, null, col);
      if (slice == null) {
        return null;
      }
      List<String> urls = urlMap.get(slice.getName());
      if (urls == null) {
        return null;
      }
      String leaderUrl = urls.get(0);
      LBHttpSolrClient.Req request = routes.get(leaderUrl);
      if (request != null) {
        UpdateRequest urequest = (UpdateRequest) request.getRequest();
        urequest.deleteById(deleteId, version);
      } else {
        UpdateRequest urequest = new UpdateRequest();
        urequest.setParams(params);
        urequest.deleteById(deleteId, version);
        urequest.setCommitWithin(getCommitWithin());
        request = new LBHttpSolrClient.Req(urequest, urls);
        routes.put(leaderUrl, request);
      }
    }
  }

  return routes;
}

When deleting by ids, the inner wrapped request sent to the routed slice should 
contain the basic auth credentials from the source request, just as adding 
documents does.


> Solrj client deleteByIds handle route request miss wrap basic auth credentials
> --
>
> Key: SOLR-11885
> URL: https://issues.apache.org/jira/browse/SOLR-11885
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 5.5.5, 6.6.2, 7.2.1
>Reporter: Aibao Luo
>Priority: Major
>
> public Map<String, LBHttpSolrClient.Req> getRoutes(DocRouter router, 
> DocCollection col, Map<String, List<String>> urlMap, ModifiableSolrParams 
> params, String idField) {
>  
>  if (request != null) {
>   UpdateRequest urequest = (UpdateRequest) request.getRequest();
>   urequest.deleteById(deleteId, version);
> } else {
>   UpdateRequest urequest = new UpdateRequest();
>   urequest.setParams(params);
>   urequest.deleteById(deleteId, version);
>   urequest.setCommitWithin(getCommitWithin());
>   request = new LBHttpSolrClient.Req(urequest, urls);
>   routes.put(leaderUrl, request);
> }
> 
> }
>  
> while delete by 

[jira] [Created] (SOLR-11885) Solrj client deleteByIds handle route request miss wrap basic auth credentials

2018-01-22 Thread Aibao Luo (JIRA)
Aibao Luo created SOLR-11885:


 Summary: Solrj client deleteByIds handle route request miss wrap 
basic auth credentials
 Key: SOLR-11885
 URL: https://issues.apache.org/jira/browse/SOLR-11885
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 6.6.2, 5.5.5, 7.2.1
Reporter: Aibao Luo


public Map<String, LBHttpSolrClient.Req> getRoutes(DocRouter router,
    DocCollection col, Map<String, List<String>> urlMap,
    ModifiableSolrParams params, String idField) {

  if ((documents == null || documents.size() == 0)
      && (deleteById == null || deleteById.size() == 0)) {
    return null;
  }

  Map<String, LBHttpSolrClient.Req> routes = new HashMap<>();
  if (documents != null) {
    Set<Entry<SolrInputDocument, Map<String, Object>>> entries = documents.entrySet();
    for (Entry<SolrInputDocument, Map<String, Object>> entry : entries) {
      SolrInputDocument doc = entry.getKey();
      Object id = doc.getFieldValue(idField);
      if (id == null) {
        return null;
      }
      Slice slice = router.getTargetSlice(id.toString(), doc, null, null, col);
      if (slice == null) {
        return null;
      }
      List<String> urls = urlMap.get(slice.getName());
      if (urls == null) {
        return null;
      }
      String leaderUrl = urls.get(0);
      LBHttpSolrClient.Req request = routes.get(leaderUrl);
      if (request == null) {
        UpdateRequest updateRequest = new UpdateRequest();
        updateRequest.setMethod(getMethod());
        updateRequest.setCommitWithin(getCommitWithin());
        updateRequest.setParams(params);
        updateRequest.setPath(getPath());
        updateRequest.setBasicAuthCredentials(getBasicAuthUser(), getBasicAuthPassword());
        request = new LBHttpSolrClient.Req(updateRequest, urls);
        routes.put(leaderUrl, request);
      }
      UpdateRequest urequest = (UpdateRequest) request.getRequest();
      Map<String, Object> value = entry.getValue();
      Boolean ow = null;
      if (value != null) {
        ow = (Boolean) value.get(OVERWRITE);
      }
      if (ow != null) {
        urequest.add(doc, ow);
      } else {
        urequest.add(doc);
      }
    }
  }

  // Route the deleteById's
  if (deleteById != null) {
    Iterator<Map.Entry<String, Map<String, Object>>> entries = deleteById.entrySet().iterator();
    while (entries.hasNext()) {
      Map.Entry<String, Map<String, Object>> entry = entries.next();
      String deleteId = entry.getKey();
      Map<String, Object> map = entry.getValue();
      Long version = null;
      if (map != null) {
        version = (Long) map.get(VER);
      }
      Slice slice = router.getTargetSlice(deleteId, null, null, null, col);
      if (slice == null) {
        return null;
      }
      List<String> urls = urlMap.get(slice.getName());
      if (urls == null) {
        return null;
      }
      String leaderUrl = urls.get(0);
      LBHttpSolrClient.Req request = routes.get(leaderUrl);
      if (request != null) {
        UpdateRequest urequest = (UpdateRequest) request.getRequest();
        urequest.deleteById(deleteId, version);
      } else {
        UpdateRequest urequest = new UpdateRequest();
        urequest.setParams(params);
        urequest.deleteById(deleteId, version);
        urequest.setCommitWithin(getCommitWithin());
        request = new LBHttpSolrClient.Req(urequest, urls);
        routes.put(leaderUrl, request);
      }
    }
  }

  return routes;
}

When deleting by ids, the inner wrapped request sent to the routed slice should 
contain the basic auth credentials from the source request, just as adding 
documents does.
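
A sketch of the fix being described (mirroring what the add-documents branch 
above already does; not an actual patch) would be to copy the credentials onto 
the per-route request in the deleteById branch as well:

{code:java}
} else {
  UpdateRequest urequest = new UpdateRequest();
  urequest.setParams(params);
  // proposed addition: propagate the basic auth credentials of the source
  // request, just as the add-documents branch does
  urequest.setBasicAuthCredentials(getBasicAuthUser(), getBasicAuthPassword());
  urequest.deleteById(deleteId, version);
  urequest.setCommitWithin(getCommitWithin());
  request = new LBHttpSolrClient.Req(urequest, urls);
  routes.put(leaderUrl, request);
}
{code}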






[jira] [Resolved] (SOLR-11051) Use disk free metric in default cluster preferences

2018-01-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11051.
---
Resolution: Fixed

> Use disk free metric in default cluster preferences
> ---
>
> Key: SOLR-11051
> URL: https://issues.apache.org/jira/browse/SOLR-11051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11051.patch
>
>
> Currently, the default cluster preferences are simply based on core count:
> {code}
> {minimize : cores, precision:1}
> {code}
> We should start using the disk free metric in the default cluster preferences 
> to place replicas on the nodes with the most free disk space.
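
For illustration, the default preference list could then look something like 
this (a sketch only; the exact syntax, ordering, and precision values are for 
the patch to settle, and {{freedisk}} is the assumed metric name):

{code}
[
  {minimize : cores, precision : 1},
  {maximize : freedisk}
]
{code}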






[jira] [Commented] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335396#comment-16335396
 ] 

Noble Paul commented on SOLR-11880:
---

We should create a custom Exception which doesn't call fillInStackTrace() in 
its constructor. We can do the fillInStackTrace() lazily.
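
For example (a minimal sketch with a made-up class name, not an actual patch), 
such an exception could skip the stack-trace capture at construction time:

{code:java}
// Sketch: an exception type whose construction does not pay for fillInStackTrace().
public class SubmitterStackTraceException extends Exception {

  public SubmitterStackTraceException(String message) {
    super(message);
  }

  @Override
  public synchronized Throwable fillInStackTrace() {
    // Throwable's constructor calls this method; returning `this` skips the
    // expensive stack-trace capture. A caller that really needs the submitter
    // trace can still create a regular Exception at the submission point.
    return this;
  }
}
{code}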


> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method:
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack trace");
> {code}
> This means that every call via the thread pool creates this exception, even 
> though it is only used when an error occurs. 






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 407 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/407/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=14477, name=jetty-launcher-2528-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=14487, name=jetty-launcher-2528-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=14477, name=jetty-launcher-2528-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11051) Use disk free metric in default cluster preferences

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335392#comment-16335392
 ] 

ASF subversion and git services commented on SOLR-11051:


Commit 7da8145f7fc0b7ae6d1842397e4291d7a9e7ca93 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7da8145 ]

SOLR-11051: Use disk free metric in default cluster preferences


> Use disk free metric in default cluster preferences
> ---
>
> Key: SOLR-11051
> URL: https://issues.apache.org/jira/browse/SOLR-11051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11051.patch
>
>
> Currently, the default cluster preferences are simply based on core count:
> {code}
> {minimize : cores, precision:1}
> {code}
> We should start using the disk free metric in the default cluster preferences 
> to place replicas on the nodes with the most free disk space.






[jira] [Commented] (SOLR-11051) Use disk free metric in default cluster preferences

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335384#comment-16335384
 ] 

ASF subversion and git services commented on SOLR-11051:


Commit 2f4f8932c6d8d5dcaba445125699f930bca2b67d in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f4f893 ]

SOLR-11051: Use disk free metric in default cluster preferences


> Use disk free metric in default cluster preferences
> ---
>
> Key: SOLR-11051
> URL: https://issues.apache.org/jira/browse/SOLR-11051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11051.patch
>
>
> Currently, the default cluster preferences are simply based on core count:
> {code}
> {minimize : cores, precision:1}
> {code}
> We should start using the disk free metric in the default cluster preferences 
> to place replicas on the nodes with the most free disk space.






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 126 - Still Failing

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/126/

No tests ran.

Build Log:
[...truncated 28288 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.14 sec (1.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.7 MB in 0.06 sec (490.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 73.1 MB in 0.11 sec (664.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 83.6 MB in 0.14 sec (616.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6284 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6284 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] 
   [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.8" 
PATH="/home/jenkins/tools/java/latest1.8/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.8/bin/java"; ant clean test 
-Dtests.slow=false" failed:
   [smoker] Buildfile: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.3.0/build.xml
   [smoker] 
   [smoker] clean:
   [smoker][delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.3.0/build
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.3.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] resolve-groovy:
   [smoker] [ivy:cachepath] :: resolving dependencies :: 
org.codehaus.groovy#groovy-all-caller;working
   [smoker] [ivy:cachepath] confs: [default]
   [smoker] [ivy:cachepath] found org.codehaus.groovy#groovy-all;2.4.12 in 
public
   [smoker] [ivy:cachepath] :: resolution report :: resolve 947ms :: artifacts 
dl 25ms
   [smoker] 
-
   [smoker] |  |modules||   
artifacts   |
   [smoker] |   conf   | number| search|dwnlded|evicted|| 
number|dwnlded|
   [smoker] 
-
   [smoker] |  default |   1   |   0   |   0   |   0   ||   1   |   
0   |
   [smoker] 
-
   [smoker] 
   [smoker] -init-totals:
   [smoker] 
   [smoker] test-core:
   [smoker] 
   [smoker] -clover.disable:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.3.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] -clover.classpath:
   [smoker] 
   

[jira] [Commented] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream unclose able

2018-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335373#comment-16335373
 ] 

ASF GitHub Bot commented on SOLR-11692:
---

Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/281
  
This PR was already merged (SOLR-11692), but the PR wasn't mentioned in the 
commit message. Could you close it, @millerjeff0?


> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream unclose able
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch, SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger the closeShield code in SolrDispatchFilter; however, 
> there are code paths where we pass the objects through to the DefaultHandler, 
> which can then no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at 
> org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> Related JIRA: SOLR-8933
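
The "close shield" idea being discussed can be illustrated with a generic 
wrapper like the following (illustrative only, not Solr's actual 
SolrDispatchFilter code): application code may call close(), but the 
container-owned stream stays open; in test mode Solr instead turns such a close 
attempt into the assertion failure shown in the stack trace above.

{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Generic sketch of a close-shielding wrapper (hypothetical class name).
public class ShieldedOutputStream extends FilterOutputStream {

  public ShieldedOutputStream(OutputStream out) {
    super(out);
  }

  @Override
  public void close() throws IOException {
    // Do not close the underlying container-owned stream; just flush it.
    flush();
  }
}
{code}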




[GitHub] lucene-solr issue #281: Cleaning up class casts and making sure sheilded res...

2018-01-22 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/281
  
This PR was already merged (SOLR-11692), but the PR wasn't mentioned in the 
commit message. Could you close it, @millerjeff0?


---




[jira] [Commented] (SOLR-11804) Test RankQuery in distributed mode

2018-01-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335372#comment-16335372
 ] 

Tomás Fernández Löbbe commented on SOLR-11804:
--

Thanks [~diegoceccarelli]. The patch looks good, although {{q=all}} only 
returns 2 docs; maybe the test can be improved with a better query (one that 
returns at least the re-rank number of docs?). In line with that, even though 
in many cases (especially in this test) we only validate distributed vs. 
non-distributed, I'd prefer if you checked something in the response, at least 
to make sure docs are coming back, maybe like:
{code:java}
    QueryResponse response = query("q",facetQuery, "rows",100, "facet","true",
  "facet.range",tlong,
  "facet.range.start",200,
  "facet.range.gap",100,
  "facet.range.end",900,
  "facet.range.other","all");
    assertEquals(tlong, response.getFacetRanges().get(0).getName());
    assertEquals(new Integer(6), response.getFacetRanges().get(0).getBefore());
    assertEquals(new Integer(5), response.getFacetRanges().get(0).getBetween());
    assertEquals(new Integer(2), 
response.getFacetRanges().get(0).getAfter());{code}

> Test RankQuery in distributed mode
> --
>
> Key: SOLR-11804
> URL: https://issues.apache.org/jira/browse/SOLR-11804
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Diego Ceccarelli
>Priority: Minor
>
> Currently {{RankQuery}} is not tested in distributed mode. I added a few 
> tests in `TestDistributedSearch` to check that it works properly.






[jira] [Commented] (SOLR-11702) Redesign current LIR implementation

2018-01-22 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335360#comment-16335360
 ] 

Cao Manh Dat commented on SOLR-11702:
-

bq. LIRRollingUpdatesTest.testNewReplicaOldLeader – why is the proxy closed for 
both leader and replica? Isn't closing for replica sufficient to force LIR?
Yeah, you're right, closing the leader's proxy is not necessary. That call is 
only there for safety; I just wanted to simulate a real network partition 
between the leader and the replica.

bq. LIRRollingUpdatesTest calls TestInjection.reset() in tearDown but fault 
injection isn't used anywhere in the test so it can be removed.
+1

bq. Javadocs for ZkShardTerms.ensureTermIsHigher says "Ensure that leader's 
term is lower than some replica's terms" but shouldn't the leader have a higher 
term? This is also mentioned in the design document "The idea of term is only 
replicas (in the same shard) with highest term are considered healthy". The 
impl is doing the opposite i.e. it is increasing the replica's term to 
leaderTerm+1.
+1, the javadoc is mistyped.

bq. Can you add javadocs to the various methods in the ZkShardTerms.Terms class?
Sure

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11702.patch, SOLR-11702.patch
>
>
> I recently looked into some problems related to racing between LIR and 
> recovery. I would like to propose a totally new approach to solve the 
> SOLR-5495 problem, because patching the current implementation with a 
> band-aid will lead us to other problems (we cannot prove the correctness of 
> the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Commented] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335357#comment-16335357
 ] 

Shalin Shekhar Mangar commented on SOLR-11747:
--

Oops, fixed. Thanks Varun!

> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers from execution between the start of the actions and the 
> end of the cool down period. The triggers will be resumed after the cool down 
> period expires.






[jira] [Commented] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335355#comment-16335355
 ] 

ASF subversion and git services commented on SOLR-11747:


Commit d61aec090454eeea0d498850c1dabbda5fbfc5e0 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d61aec0 ]

SOLR-11747: Fixed note in upgrade section

(cherry picked from commit e16da35)


> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers from execution between the start of the actions and the 
> end of the cool down period. The triggers will be resumed after the cool down 
> period expires.






[jira] [Commented] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335354#comment-16335354
 ] 

ASF subversion and git services commented on SOLR-11747:


Commit e16da35707e5b48c063c08f2ef7bee13b4cb0cd0 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e16da35 ]

SOLR-11747: Fixed note in upgrade section


> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers from execution between the start of the actions and the 
> end of the cool down period. The triggers will be resumed after the cool down 
> period expires.






[jira] [Commented] (SOLR-11702) Redesign current LIR implementation

2018-01-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335353#comment-16335353
 ] 

Shalin Shekhar Mangar commented on SOLR-11702:
--

Ok, thanks for clarifying Dat. A few more questions/comments:
 # LIRRollingUpdatesTest.testNewReplicaOldLeader -- why is the proxy closed for 
both leader and replica? Isn't closing for replica sufficient to force LIR?
 # LIRRollingUpdatesTest calls TestInjection.reset() in tearDown but fault 
injection isn't used anywhere in the test so it can be removed.
 # Javadocs for ZkShardTerms.ensureTermIsHigher says "Ensure that leader's term 
is lower than some replica's terms" but shouldn't the leader have a higher 
term? This is also mentioned in the design document "The idea of _term_ is only 
replicas (in the same shard) with highest term are considered healthy". The 
impl is doing the opposite i.e. it is increasing the replica's term to 
leaderTerm+1.
 # Can you add javadocs to the various methods in the ZkShardTerms.Terms class?

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11702.patch, SOLR-11702.patch
>
>
> I recently looked into some problems related to racing between LIR and 
> recovery. I would like to propose a totally new approach to solve the 
> SOLR-5495 problem, because patching the current implementation with a 
> band-aid will lead us to other problems (we cannot prove the correctness of 
> the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Commented] (SOLR-11882) SolrMetric registries cause huge memory consumption with lots of transient cores

2018-01-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335349#comment-16335349
 ] 

Erick Erickson commented on SOLR-11882:
---

Ignore that patch, something's pretty screwy with my test.

 

> SolrMetric registries cause huge memory consumption with lots of transient 
> cores
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-11882.patch, solr-dump-full_Leak_Suspects.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundreds of 
> thousands), but working on only a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when it is closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.
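
A rough sketch of the desired outcome described above (the method names below 
are assumptions for illustration, not a confirmed Solr API):

{code:java}
// When a transient core is evicted/closed by the transient cache, also drop
// its metrics registry, mirroring how the core's reporters are closed.
void onTransientCoreClose(SolrCore core) {
  String registryName = core.getCoreMetricManager().getRegistryName(); // assumed accessor
  metricManager.closeReporters(registryName); // assumed: reporters are already handled today
  metricManager.removeRegistry(registryName); // assumed/desired: drop the registry reference too
}
{code}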






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1225 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1225/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd: 1) 
Thread[id=114, name=qtp245207537-114, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=108, 
name=qtp245207537-108, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd: 
   1) Thread[id=114, name=qtp245207537-114, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=108, name=qtp245207537-108, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([EB24F3408972FCDB]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=114, name=qtp245207537-114, state=TIMED_WAITING, 
group=TGRP-TestSolrEntityProcessorEndToEnd] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1641 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1641/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:63990/solr within 1 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:63990/solr within 1 ms
at 
__randomizedtesting.SeedInfo.seed([8B0770F4C39B4E88:3534F2E6D672370]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:119)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:109)
at 
org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:276)
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:155)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:398)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.AliasIntegrationTest.searchSeveralWays(AliasIntegrationTest.java:461)
at 
org.apache.solr.cloud.AliasIntegrationTest.searchSeveralWays(AliasIntegrationTest.java:487)
at 
org.apache.solr.cloud.AliasIntegrationTest.searchSeveralWays(AliasIntegrationTest.java:447)
at 
org.apache.solr.cloud.AliasIntegrationTest.test(AliasIntegrationTest.java:379)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-11882) SolrMetric registries cause huge memory consumption with lots of transient cores

2018-01-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11882:
--
Attachment: SOLR-11882.patch

> SolrMetric registries cause huge memory consumption with lots of transient 
> cores
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-11882.patch, solr-dump-full_Leak_Suspects.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly a hundred thousand), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.
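
To make the requested cleanup concrete, here is a minimal, self-contained sketch 
of the pattern described above. It deliberately does not use Solr's real 
SolrMetricManager API: the holder class, its methods, and the registry naming 
below are hypothetical stand-ins that only illustrate how removing the map entry 
when a transient core closes prevents the retained-reference leak.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a shared, per-core metrics registry holder.
final class MetricRegistryHolder {

  // Shared map keyed by registry name (e.g. "solr.core.<coreName>").
  private final Map<String, Object> registries = new ConcurrentHashMap<>();

  Object registry(String coreName) {
    // Lazily creates the per-core registry; this is where the reference is retained.
    return registries.computeIfAbsent("solr.core." + coreName, k -> new Object());
  }

  // The cleanup being asked for: drop the reference when a transient core is
  // closed, so closed-but-not-unloaded cores do not accumulate on the heap.
  void onCoreClose(String coreName) {
    registries.remove("solr.core." + coreName);
  }

  int size() {
    return registries.size();
  }
}

// Tiny demonstration of the leak versus the fix.
class RegistryLeakDemo {
  public static void main(String[] args) {
    MetricRegistryHolder holder = new MetricRegistryHolder();
    for (int i = 0; i < 1000; i++) {
      holder.registry("core" + i);    // core loaded into the transient cache
      holder.onCoreClose("core" + i); // core evicted (closed) from the cache
    }
    // With the remove() in place this prints 0; without it, 1000 entries remain.
    System.out.println("registries retained: " + holder.size());
  }
}
{code}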



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11862) Add fuzzyKmeans Stream Evaluator

2018-01-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11862:
--
Attachment: SOLR-11862.patch

> Add fuzzyKmeans Stream Evaluator
> -
>
> Key: SOLR-11862
> URL: https://issues.apache.org/jira/browse/SOLR-11862
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11862.patch
>
>
> This ticket adds the fuzzy kmeans clustering algorithm to the Streaming 
> Expression machine learning library.
>  
> Implementation provided by Apache Commons Math



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2270 - Still Unstable

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2270/

2 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([6D1EADCA5A32E1AF:E20937E07A1EA8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:274)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeMarkersRegistration

Error Message:
Path 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 327 - Failure

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/327/

11 tests failed.
FAILED:  
org.apache.lucene.classification.CachingNaiveBayesClassifierTest.testPerformance

Error Message:
evaluation took more than 1m: 63s

Stack Trace:
java.lang.AssertionError: evaluation took more than 1m: 63s
at 
__randomizedtesting.SeedInfo.seed([EE8881F21EE76365:296973D075535BCA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.classification.CachingNaiveBayesClassifierTest.testPerformance(CachingNaiveBayesClassifierTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest

Error Message:
Could not load collection from ZK: sourceCollection

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
sourceCollection
at 
__randomizedtesting.SeedInfo.seed([6D23C70BE67C2B40:7133DD71C7BC9BA1]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1108)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:647)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1206)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21323 - Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21323/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.DistributedQueryElevationComponentTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.component.DistributedQueryElevationComponentTest: 
1) Thread[id=30405, name=qtp25014969-30405, state=TIMED_WAITING, 
group=TGRP-DistributedQueryElevationComponentTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at 
org.apache.solr.handler.component.DistributedQueryElevationComponentTest: 
   1) Thread[id=30405, name=qtp25014969-30405, state=TIMED_WAITING, 
group=TGRP-DistributedQueryElevationComponentTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A1BFBAEAD7DCEF8B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.DistributedQueryElevationComponentTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=30405, name=qtp25014969-30405, state=TIMED_WAITING, 
group=TGRP-DistributedQueryElevationComponentTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=30405, name=qtp25014969-30405, state=TIMED_WAITING, 
group=TGRP-DistributedQueryElevationComponentTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A1BFBAEAD7DCEF8B]:0)




Build Log:
[...truncated 12909 lines...]
   [junit4] Suite: 
org.apache.solr.handler.component.DistributedQueryElevationComponentTest
   [junit4]   2> 1640763 INFO  
(SUITE-DistributedQueryElevationComponentTest-seed#[A1BFBAEAD7DCEF8B]-worker) [ 
   ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335266#comment-16335266
 ] 

Erick Erickson commented on SOLR-11884:
---

Thanks Michael, at least they're linked now

> find/fix inefficiencies in our use of logging
> -
>
> Key: SOLR-11884
> URL: https://issues.apache.org/jira/browse/SOLR-11884
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We've been looking at Solr using Flight Recorder and ran across some 
> interesting things I'd like to discuss. Let's discuss general logging 
> approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
> agreement.
> 1> Every log message generates a new Throwable, presumably to get things like 
> line number, file, class name and the like. On a 2 minute run blasting 
> updates this meant 150,000 (yes, 150K) instances of "new Throwable()".
>  
> See the section "Asynchronous Logging with Caller Location Information" at:
> [https://logging.apache.org/log4j/2.x/performance.html]
> I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
> but it apparently should in log4j 2.
>  
> The cost of course would be that lots of our log messages would lack some of 
> the information. Exceptions would still contain all the file/class/line 
> information of course.
>  
> Proposal:
> Change the layout pattern to, by default, _NOT_  include information that 
> requires a Throwable to be created. Also include a pattern that could be 
> un-commented to get this information back for troubleshooting.
>  
> 
>  
> We generate strings when we don't need them. Any construct like
> log.info("whatever " + method_that_builds_a_string + " : " + some_variable);
> generates the string (some of which are quite expensive to build) and then 
> throws it away if the log level is at, say, WARN. The above link also shows 
> that the parameterized form doesn't suffer from this problem, so anything 
> like the above should be re-written as:
> log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);
>  
> The alternative is to wrap the call in an explicit level check, like the block 
> below, but let's make use of the built-in capabilities instead.
> if (log.level >= INFO) {
>    log.info("whatever " + method_that_builds_a_string + " : " + 
> some_variable);
> }
> etc.
> This would be a pretty huge thing to fix all-at-once so I suppose we'll have 
> to approach it incrementally. It's also something that, once we get them all 
> out of the code, should be added to precommit failures. In the meantime, if 
> anyone who has the precommit chops could create a target that checks for 
> this, it'd be a great help in tracking all of them down; it could then be 
> incorporated into the regular precommit checks if/when they're all removed.
> Proposal:
> Use JFR or whatever to identify the egregious violations of this kind of 
> thing (I have a couple I've found) and change them to parameterized form (and 
> prove it works). Then see what we can do to move forward with removing them 
> all through the code base.
>  
>  
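
For reference, a minimal SLF4J sketch of the three patterns discussed above: the 
concatenating call, the parameterized call, and the explicit level guard. The 
logger setup is plain SLF4J; buildExpensiveSummary() is a hypothetical stand-in 
for any argument that is costly to compute.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingPatterns {
  private static final Logger log = LoggerFactory.getLogger(LoggingPatterns.class);

  // Hypothetical expensive value used only for the log message.
  private static String buildExpensiveSummary() {
    return String.join(",", "a", "b", "c");
  }

  void example(int someVariable) {
    // Discouraged: the full string (and buildExpensiveSummary()) is computed
    // even when INFO is disabled.
    log.info("whatever " + buildExpensiveSummary() + " : " + someVariable);

    // Preferred: no concatenation; message formatting happens only if INFO is
    // enabled. The argument expression is still evaluated, though...
    log.info("whatever {} : {}", buildExpensiveSummary(), someVariable);

    // ...so for genuinely expensive arguments, guard the call explicitly.
    if (log.isInfoEnabled()) {
      log.info("whatever {} : {}", buildExpensiveSummary(), someVariable);
    }
  }
}
{code}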



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #298: Add web app logging interface auto-refresh toggle.

2018-01-22 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/298
  
@tommymh Would you mind creating a [Jira 
issue](https://issues.apache.org/jira/projects/SOLR) for visibility? You can 
also rename this PR to include the Jira code; that way they'll be linked.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 415 - Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/415/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSF: 5

Stack Trace:
java.lang.AssertionError: Unexpected number of elements in the group for 
intGSF: 5
at 
__randomizedtesting.SeedInfo.seed([B845666B9CFA27A2:23FE0833D1A215FC]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([B845666B9CFA27A2:301159B132064A5A]:0)
   

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1224 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1224/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=9005, name=jetty-launcher-1530-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=9010, name=jetty-launcher-1530-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=9005, name=jetty-launcher-1530-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335203#comment-16335203
 ] 

Varun Thacker commented on SOLR-11747:
--

Hi Shalin,

Looks like the sentence in the upgrade section is incomplete. 

> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers from execution between the start of actions and the end of the 
> cool down period. The triggers will be resumed after the cool down period 
> expires.
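
A minimal sketch of the proposed sequencing, not the actual autoscaling code: 
every trigger is paused before the actions start and resumed only after the 
cool down period expires. The Trigger interface and the class below are 
hypothetical, for illustration only.

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical trigger abstraction, for illustration only.
interface Trigger {
  void pause();
  void resume();
}

class CoolDownSketch {

  private final List<Trigger> triggers;
  private final long coolDownMillis;

  CoolDownSketch(List<Trigger> triggers, long coolDownMillis) {
    this.triggers = triggers;
    this.coolDownMillis = coolDownMillis;
  }

  // Proposed behaviour: no trigger may observe intermediate cluster state
  // between the start of the actions and the end of the cool down period.
  void runActionsWithCoolDown(Runnable actions) throws InterruptedException {
    triggers.forEach(Trigger::pause);                 // pause before actions start
    try {
      actions.run();                                  // execute the event's actions
      TimeUnit.MILLISECONDS.sleep(coolDownMillis);    // wait out the cool down
    } finally {
      triggers.forEach(Trigger::resume);              // resume only after cool down
    }
  }
}
{code}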



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11624) collection creation should not also overwrite/delete any configset but it can!

2018-01-22 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-11624.
-
   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

> collection creation should not also overwrite/delete any configset but it can!
> --
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, 
> SOLR-11624.4.patch, SOLR-11624.patch, SOLR-11624.patch, SOLR-11624.patch, 
> SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4402 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4402/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=10465, 
name=updateExecutor-2540-thread-3, state=RUNNABLE, 
group=TGRP-ChaosMonkeySafeLeaderTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=10465, name=updateExecutor-2540-thread-3, 
state=RUNNABLE, group=TGRP-ChaosMonkeySafeLeaderTest]
at 
__randomizedtesting.SeedInfo.seed([1EAD81425A0923DE:96F9BE98F4F54E26]:0)
Caused by: org.apache.solr.common.SolrException: Replica: 
http://127.0.0.1:64547/collection1_shard1_replica_n25/ should have been marked 
under leader initiated recovery in ZkController but wasn't.
at __randomizedtesting.SeedInfo.seed([1EAD81425A0923DE]:0)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryThread.run(LeaderInitiatedRecoveryThread.java:88)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12394 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeySafeLeaderTest_1EAD81425A0923DE-001/init-core-data-001
   [junit4]   2> 938941 WARN  
(SUITE-ChaosMonkeySafeLeaderTest-seed#[1EAD81425A0923DE]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4
   [junit4]   2> 938941 INFO  
(SUITE-ChaosMonkeySafeLeaderTest-seed#[1EAD81425A0923DE]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 938942 INFO  
(SUITE-ChaosMonkeySafeLeaderTest-seed#[1EAD81425A0923DE]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0) w/ MAC_OS_X supressed clientAuth
   [junit4]   2> 938942 INFO  
(SUITE-ChaosMonkeySafeLeaderTest-seed#[1EAD81425A0923DE]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 938942 INFO  
(SUITE-ChaosMonkeySafeLeaderTest-seed#[1EAD81425A0923DE]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 938943 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 938943 INFO  (Thread-2075) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 938943 INFO  (Thread-2075) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 938946 ERROR (Thread-2075) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 939046 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.ZkTestServer start zk server on port:64523
   [junit4]   2> 939056 INFO  (zkConnectionManagerCallback-2517-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 939062 INFO  (zkConnectionManagerCallback-2519-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 939071 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 939074 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 939077 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 939079 INFO  
(TEST-ChaosMonkeySafeLeaderTest.test-seed#[1EAD81425A0923DE]) [] 
o.a.s.c.AbstractZkTestCase put 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 929 - Still Failing

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/929/

No tests ran.

Build Log:
[...truncated 28244 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (37.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.03 sec (1195.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.0 MB in 0.06 sec (1214.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.4 MB in 0.07 sec (1212.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (188.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.5 MB in 0.05 sec (1139.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.4 MB in 0.13 sec (1163.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.4 MB in 0.13 sec (1186.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[jira] [Commented] (SOLR-6630) Deprecate the "implicit" router and rename to "manual"

2018-01-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335093#comment-16335093
 ] 

Ishan Chattopadhyaya commented on SOLR-6630:


bq. ping
Had dropped the ball on this; I'll pick it up next (after about 2-3 issues I'm 
currently working on).

> Deprecate the "implicit" router and rename to "manual"
> --
>
> Key: SOLR-6630
> URL: https://issues.apache.org/jira/browse/SOLR-6630
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 7.0
>
> Attachments: SOLR-6630.patch, SOLR-6630.patch
>
>
> I had this exchange with an IRC user named "kindkid" this morning:
> {noformat}
> 08:30 < kindkid> I'm using sharding with the implicit router, but I'm seeing
>  all my documents end up on just one of my 24 shards. What
>  might be causing this? (4.10.0)
> 08:35 <@elyograg> kindkid: you used the implicit router.  that means that
>   documents will be indexed on the shard you sent them
> to, not
>   routed elsewhere.
> 08:37 < kindkid> oh. wow. not sure where I got the idea, but I was under the
>  impression that implicit router would use a hash of the
>  uniqueKey modulo number of shards to pick a shard.
> 08:38 <@elyograg> I think you probably wanted the compositeId router.
> 08:39 <@elyograg> implicit is not a very good name.  It's technically
> correct,
>   but the meaning of the word is not well known.
> 08:39 <@elyograg> "manual" would be a better name.
> {noformat}
> The word "implicit" has a very specific meaning, and I think it's
> absolutely correct terminology for what it does, but I don't think that
> it's very clear to a typical person.  This is not the first time I've
> encountered the confusion.
> Could we deprecate the implicit name and use something much more
> descriptive and easily understood, like "manual" instead?  Let's go
> ahead and accept implicit in 5.x releases, but issue a warning in the
> log.  Maybe we can have a startup system property or a config option
> that will force the name to be updated in zookeeper and get rid of the
> warning.  If we do this, my bias is to have an upgrade to 6.x force the
> name change in zookeeper.
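
To make the distinction that confused the IRC user concrete, here is a small 
sketch contrasting hash-based routing (what compositeId does, conceptually) with 
implicit/"manual" routing (the document goes wherever the sender targets it, 
e.g. via a _route_ value). The hash below is only a stand-in (Solr's compositeId 
router actually uses MurmurHash over per-shard hash ranges), and the method 
names are illustrative, not Solr APIs.

{code:java}
import java.util.Map;

class RouterSketch {

  // compositeId-style routing (illustrative): the shard is derived from a hash
  // of the uniqueKey, so documents spread across all shards.
  static int hashRoute(String uniqueKey, int numShards) {
    return Math.floorMod(uniqueKey.hashCode(), numShards);
  }

  // implicit/"manual" routing: nothing is derived from the uniqueKey; the
  // document lands on the shard the sender names (here via a _route_ field).
  static String manualRoute(Map<String, String> docFields, String defaultShard) {
    return docFields.getOrDefault("_route_", defaultShard);
  }

  public static void main(String[] args) {
    System.out.println("hash route for doc42: shard" + hashRoute("doc42", 24));
    System.out.println("manual route: " + manualRoute(Map.of("_route_", "shard7"), "shard1"));
  }
}
{code}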



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6630) Deprecate the "implicit" router and rename to "manual"

2018-01-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335093#comment-16335093
 ] 

Ishan Chattopadhyaya edited comment on SOLR-6630 at 1/22/18 10:53 PM:
--

bq. ping
Had dropped the ball on this; I'll pick it up after about 2-3 issues I'm 
currently working on.


was (Author: ichattopadhyaya):
bq. ping
Had dropped the ball on this; I'll pick it up next (after about 2-3 issues I'm 
currently working on).

> Deprecate the "implicit" router and rename to "manual"
> --
>
> Key: SOLR-6630
> URL: https://issues.apache.org/jira/browse/SOLR-6630
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 7.0
>
> Attachments: SOLR-6630.patch, SOLR-6630.patch
>
>
> I had this exchange with an IRC user named "kindkid" this morning:
> {noformat}
> 08:30 < kindkid> I'm using sharding with the implicit router, but I'm seeing
>  all my documents end up on just one of my 24 shards. What
>  might be causing this? (4.10.0)
> 08:35 <@elyograg> kindkid: you used the implicit router.  that means that
>   documents will be indexed on the shard you sent them
> to, not
>   routed elsewhere.
> 08:37 < kindkid> oh. wow. not sure where I got the idea, but I was under the
>  impression that implicit router would use a hash of the
>  uniqueKey modulo number of shards to pick a shard.
> 08:38 <@elyograg> I think you probably wanted the compositeId router.
> 08:39 <@elyograg> implicit is not a very good name.  It's technically
> correct,
>   but the meaning of the word is not well known.
> 08:39 <@elyograg> "manual" would be a better name.
> {noformat}
> The word "implicit" has a very specific meaning, and I think it's
> absolutely correct terminology for what it does, but I don't think that
> it's very clear to a typical person.  This is not the first time I've
> encountered the confusion.
> Could we deprecate the implicit name and use something much more
> descriptive and easily understood, like "manual" instead?  Let's go
> ahead and accept implicit in 5.x releases, but issue a warning in the
> log.  Maybe we can have a startup system property or a config option
> that will force the name to be updated in zookeeper and get rid of the
> warning.  If we do this, my bias is to have an upgrade to 6.x force the
> name change in zookeeper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335070#comment-16335070
 ] 

Karl Wright commented on LUCENE-8133:
-

Ok, what is happening is that the polygon tiling code "tries" a few things to 
see if they work out.  It's expecting the GeoConvexPolygon construction to be 
successful when it does this.  The problem is that sometimes the constructor 
throws an IllegalArgumentException -- not something the code was expecting.

The solution seems to be to just catch the exception and signal that the tiling 
can't be created:

{code}
  try {
// No holes, for test
final GeoPolygon testPolygon = new GeoConvexPolygon(planetModel, 
points, null, internalEdges, returnIsInternal);
if (testPolygon.isWithin(testPoint)) {
  return null;
}
  } catch (IllegalArgumentException e) {
return null;
  }
{code}

[~ivera], can you extend your test case to verify that the shape produced works 
properly?  If that passes I will commit my fix.


> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 417 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/417/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestNIOFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001\testVLong-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001\testVLong-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001\testVLong-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001\testVLong-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_5D689F79FA02E7F6-001

at __randomizedtesting.SeedInfo.seed([5D689F79FA02E7F6]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_5D689F79FA02E7F6-001\tempDir-005:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_5D689F79FA02E7F6-001\tempDir-005
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_5D689F79FA02E7F6-001\tempDir-005:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_5D689F79FA02E7F6-001\tempDir-005

at __randomizedtesting.SeedInfo.seed([5D689F79FA02E7F6]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-22 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334977#comment-16334977
 ] 

Michael Braun commented on SOLR-11884:
--

[~erickerickson] I previously logged  SOLR-10415 to track some of the issues I 
found in solr core. 

> find/fix inefficiencies in our use of logging
> -
>
> Key: SOLR-11884
> URL: https://issues.apache.org/jira/browse/SOLR-11884
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We've been looking at Solr using Flight Recorder and ran across some 
> interesting things I'd like to discuss. Let's discuss general logging 
> approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
> agreement.
> 1> Every log message generates a new Throwable, presumably to get things like 
> line number, file, class name and the like. On a 2 minute run blasting 
> updates this meant 150,000 (yes, 150K) instances of "new Throwable()".
>  
> See the section "Asynchronous Logging with Caller Location Information" at:
> [https://logging.apache.org/log4j/2.x/performance.html]
> I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
> but apparently certainly should in log4j 2.
>  
> The cost of course would be that lots of our log messages would lack some of 
> the information. Exceptions would still contain all the file/class/line 
> information of course.
>  
> Proposal:
> Change the layout pattern to, by default, _NOT_  include information that 
> requires a Throwable to be created. Also include a pattern that could be 
> un-commented to get this information back for troubleshooting.
>  
> 
>  
> We generate strings when we don't need them. Any construct like
> log.info("whatever " + method_that_builds_a_string + " : " + some_variable);
> generates the string (some of which are quite expensive) and then throws it 
> away if the log level is at, say, WARN. The above link also shows that 
> parameterizing this doesn't suffer this problem, so anything like the above 
> should be re-written as:
> log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);
>  
> The alternative is to do something like the following, but let's make use of the 
> built-in capabilities instead.
> if (log.level >= INFO) {
>    log.info("whatever " + method_that_builds_a_string + " : " + 
> some_variable);
> }
> etc.
> This would be a pretty huge thing to fix all-at-once so I suppose we'll have 
> to approach it incrementally. It's also something that, if we get them all 
> out of the code should be added to precommit failures. In the meantime, if 
> anyone who has the precommit chops could create a target that checked for 
> this it'd be a great help in tracking all of them down, then could be 
> incorporated in the regular precommit checks if/when they're all removed.
> Proposal:
> Use JFR or whatever to identify the egregious violations of this kind of 
> thing (I have a couple I've found) and change them to parameterized form (and 
> prove it works). Then see what we can do to move forward with removing them 
> all through the code base.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1223 - Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1223/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([E0877BC0E61808C1:6BA0A811A71EA345]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:431)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-6630) Deprecate the "implicit" router and rename to "manual"

2018-01-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334890#comment-16334890
 ] 

Jan Høydahl commented on SOLR-6630:
---

ping

> Deprecate the "implicit" router and rename to "manual"
> --
>
> Key: SOLR-6630
> URL: https://issues.apache.org/jira/browse/SOLR-6630
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 7.0
>
> Attachments: SOLR-6630.patch, SOLR-6630.patch
>
>
> I had this exchange with an IRC user named "kindkid" this morning:
> {noformat}
> 08:30 < kindkid> I'm using sharding with the implicit router, but I'm seeing
>  all my documents end up on just one of my 24 shards. What
>  might be causing this? (4.10.0)
> 08:35 <@elyograg> kindkid: you used the implicit router.  that means that
>   documents will be indexed on the shard you sent them
> to, not
>   routed elsewhere.
> 08:37 < kindkid> oh. wow. not sure where I got the idea, but I was under the
>  impression that implicit router would use a hash of the
>  uniqueKey modulo number of shards to pick a shard.
> 08:38 <@elyograg> I think you probably wanted the compositeId router.
> 08:39 <@elyograg> implicit is not a very good name.  It's technically
> correct,
>   but the meaning of the word is not well known.
> 08:39 <@elyograg> "manual" would be a better name.
> {noformat}
> The word "implicit" has a very specific meaning, and I think it's
> absolutely correct terminology for what it does, but I don't think that
> it's very clear to a typical person.  This is not the first time I've
> encountered the confusion.
> Could we deprecate the implicit name and use something much more
> descriptive and easily understood, like "manual" instead?  Let's go
> ahead and accept implicit in 5.x releases, but issue a warning in the
> log.  Maybe we can have a startup system property or a config option
> that will force the name to be updated in zookeeper and get rid of the
> warning.  If we do this, my bias is to have an upgrade to 6.x force the
> name change in zookeeper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334859#comment-16334859
 ] 

Erick Erickson commented on SOLR-11884:
---

Forgot to say two things:

 

1> Assigning to myself to track, don't know when I'll get to it, especially the 
larger issues of the rest of the code base.

2> I haven't verified these behaviors when using log4j 1.x yet. Whatever 
doesn't work for log4j 1 we can add after moving to log4j 2 (see SOLR-7887) 
which should be Real Soon Now.

> find/fix inefficiencies in our use of logging
> -
>
> Key: SOLR-11884
> URL: https://issues.apache.org/jira/browse/SOLR-11884
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We've been looking at Solr using Flight Recorder and ran across some 
> interesting things I'd like to discuss. Let's discuss general logging 
> approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
> agreement.
> 1> Every log message generates a new Throwable, presumably to get things like 
> line number, file, class name and the like. On a 2 minute run blasting 
> updates this meant 150,000 (yes, 150K) instances of "new Throwable()".
>  
> See the section "Asynchronous Logging with Caller Location Information" at:
> [https://logging.apache.org/log4j/2.x/performance.html]
> I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
> but apparently certainly should in log4j 2.
>  
> The cost of course would be that lots of our log messages would lack some of 
> the information. Exceptions would still contain all the file/class/line 
> information of course.
>  
> Proposal:
> Change the layout pattern to, by default, _NOT_  include information that 
> requires a Throwable to be created. Also include a pattern that could be 
> un-commented to get this information back for troubleshooting.
>  
> 
>  
> We generate strings when we don't need them. Any construct like
> log.info("whatever " + method_that_builds_a_string + " : " + some_variable);
> generates the string (some of which are quite expensive) and then throws it 
> away if the log level is at, say, WARN. The above link also shows that 
> parameterizing this doesn't suffer this problem, so anything like the above 
> should be re-written as:
> log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);
>  
> The alternative is to do something like the following, but let's make use of the 
> built-in capabilities instead.
> if (log.level >= INFO) {
>    log.info("whatever " + method_that_builds_a_string + " : " + 
> some_variable);
> }
> etc.
> This would be a pretty huge thing to fix all-at-once so I suppose we'll have 
> to approach it incrementally. It's also something that, if we get them all 
> out of the code should be added to precommit failures. In the meantime, if 
> anyone who has the precommit chops could create a target that checked for 
> this it'd be a great help in tracking all of them down, then could be 
> incorporated in the regular precommit checks if/when they're all removed.
> Proposal:
> Use JFR or whatever to identify the egregious violations of this kind of 
> thing (I have a couple I've found) and change them to parameterized form (and 
> prove it works). Then see what we can do to move forward with removing them 
> all through the code base.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-22 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-11884:
-

 Summary: find/fix inefficiencies in our use of logging
 Key: SOLR-11884
 URL: https://issues.apache.org/jira/browse/SOLR-11884
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


We've been looking at Solr using Flight Recorder and ran across some 
interesting things I'd like to discuss. Let's discuss general logging 
approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
agreement.

1> Every log message generates a new Throwable, presumably to get things like 
line number, file, class name and the like. On a 2 minute run blasting updates 
this meant 150,000 (yes, 150K) instances of "new Throwable()".

 

See the section "Asynchronous Logging with Caller Location Information" at:

[https://logging.apache.org/log4j/2.x/performance.html]

I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
but it apparently should in log4j 2.

 

The cost of course would be that lots of our log messages would lack some of 
the information. Exceptions would still contain all the file/class/line 
information of course.

 

Proposal:

Change the layout pattern to, by default, _NOT_  include information that 
requires a Throwable to be created. Also include a pattern that could be 
un-commented to get this information back for troubleshooting.
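
For illustration only, the two layout patterns could look roughly like this in a 
log4j2 properties configuration (hypothetical appender name and patterns, not 
Solr's actual config):

{noformat}
# Default: no caller location information, so no Throwable is needed per message
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{ISO8601} %-5p (%t) %c %m%n

# Uncomment for troubleshooting: %C/%M/%F/%L force a caller-location lookup,
# which is what creates a Throwable on every log call
#appender.file.layout.pattern = %d{ISO8601} %-5p (%t) %C.%M(%F:%L) %m%n
{noformat}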

 



 

We generate strings when we don't need them. Any construct like

log.info("whatever " + method_that_builds_a_string + " : " + some_variable);

generates the string (some of which are quite expensive) and then throws it 
away if the log level is at, say, WARN. The above link also shows that 
parameterizing this doesn't suffer this problem, so anything like the above 
should be re-written as:

log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);

 

The alternative is to do something like the following, but let's make use of the 
built-in capabilities instead.

if (log.level >= INFO) {
   log.info("whatever " + method_that_builds_a_string + " : " + some_variable);

}

etc.
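
For illustration, a minimal self-contained SLF4J-style sketch of the forms being 
compared (the class and the expensive method are placeholders, not Solr code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStyles {
  private static final Logger log = LoggerFactory.getLogger(LoggingStyles.class);

  void example(String someVariable) {
    // Eager concatenation: the full message String is built even when INFO is disabled.
    log.info("whatever " + expensiveToBuild() + " : " + someVariable);

    // Parameterized form: the message is only formatted when INFO is enabled.
    // Note that expensiveToBuild() is still invoked to produce the argument, so a
    // truly costly argument may still want the explicit guard below.
    log.info("whatever {} : {}", expensiveToBuild(), someVariable);

    // Explicit level guard, the "alternative" mentioned above.
    if (log.isInfoEnabled()) {
      log.info("whatever " + expensiveToBuild() + " : " + someVariable);
    }
  }

  private String expensiveToBuild() {
    return "something expensive to compute";
  }
}
{code}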

This would be a pretty huge thing to fix all at once, so I suppose we'll have to 
approach it incrementally. It's also something that, once we get them all out of 
the code, should be added to precommit failures. In the meantime, if anyone who 
has the precommit chops could create a target that checks for this, it'd be a 
great help in tracking them all down, and it could then be incorporated into the 
regular precommit checks if/when they're all removed.

Proposal:

Use JFR or whatever to identify the egregious violations of this kind of thing 
(I have a couple I've found) and change them to parameterized form (and prove 
it works). Then see what we can do to move forward with removing them all 
through the code base.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-11884:
-

Assignee: Erick Erickson

> find/fix inefficiencies in our use of logging
> -
>
> Key: SOLR-11884
> URL: https://issues.apache.org/jira/browse/SOLR-11884
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We've been looking at Solr using Flight Recorder and ran across some 
> interesting things I'd like to discuss. Let's discuss general logging 
> approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
> agreement.
> 1> Every log message generates a new Throwable, presumably to get things like 
> line number, file, class name and the like. On a 2 minute run blasting 
> updates this meant 150,000 (yes, 150K) instances of "new Throwable()".
>  
> See the section "Asynchronous Logging with Caller Location Information" at:
> [https://logging.apache.org/log4j/2.x/performance.html]
> I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
> but apparently certainly should in log4j 2.
>  
> The cost of course would be that lots of our log messages would lack some of 
> the information. Exceptions would still contain all the file/class/line 
> information of course.
>  
> Proposal:
> Change the layout pattern to, by default, _NOT_  include information that 
> requires a Throwable to be created. Also include a pattern that could be 
> un-commented to get this information back for troubleshooting.
>  
> 
>  
> We generate strings when we don't need them. Any construct like
> log.info("whatever " + method_that_builds_a_string + " : " + some_variable);
> generates the string (some of which are quite expensive) and then throws it 
> away if the log level is at, say, WARN. The above link also shows that 
> parameterizing this doesn't suffer this problem, so anything like the above 
> should be re-written as:
> log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);
>  
> The alternative is to do something like the following, but let's make use of the 
> built-in capabilities instead.
> if (log.level >= INFO) {
>    log.info("whatever " + method_that_builds_a_string + " : " + 
> some_variable);
> }
> etc.
> This would be a pretty huge thing to fix all-at-once so I suppose we'll have 
> to approach it incrementally. It's also something that, if we get them all 
> out of the code should be added to precommit failures. In the meantime, if 
> anyone who has the precommit chops could create a target that checked for 
> this it'd be a great help in tracking all of them down, then could be 
> incorporated in the regular precommit checks if/when they're all removed.
> Proposal:
> Use JFR or whatever to identify the egregious violations of this kind of 
> thing (I have a couple I've found) and change them to parameterized form (and 
> prove it works). Then see what we can do to move forward with removing them 
> all through the code base.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334818#comment-16334818
 ] 

Karl Wright commented on LUCENE-8133:
-

It should be possible to generate a single concave polygon to cover this case.  
What it's apparently doing, though, is trying to tile it with multiple convex 
polygons, which is why it gets into trouble.

The way to debug it is to log (using System.out.println) the decisions it is 
making during the tiling process, and why those decisions are being made.  Then 
we find where the error is and figure out why it made the bad decision.


> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7130 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7130/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.RecoveryAfterSoftCommitTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog\tlog.001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog\tlog.001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data\tlog
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23\data
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores\collection1_shard1_replica_t23
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.RecoveryAfterSoftCommitTest_438539E3191812F6-001\shard-2-001\cores:
 java.nio.file.DirectoryNotEmptyException: 

[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-01-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334807#comment-16334807
 ] 

Shawn Heisey commented on SOLR-7887:


bq. This will fail precommit, which requires that all rev values are exactly 
$\{//\}.

This makes sense, and now I get why it's set up in a way that seemed insane to 
me, and I'm glad that there is a workaround.


> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334782#comment-16334782
 ] 

Ignacio Vera commented on LUCENE-8133:
--

I understand that it is clockwise when looking from the inside of the planet; 
anyhow, we have a common understanding.

I had a closer look at the problem and I think I understand what is going on. 
When you look at the shortest edge and check the points used to generate it, one 
of them is actually not within.

 

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-01-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334774#comment-16334774
 ] 

Steve Rowe edited comment on SOLR-7887 at 1/22/18 7:28 PM:
---

{quote}I think that all of the log4j components should use a single property 
for their version, instead of having "/log4j/log4j-api", "/log4j/log4j-core", 
etc.
{quote}
From the latest patch:
{noformat}
--- solr/solr-ref-guide/ivy.xml (date 1516411619000)
[...]
-
+
+
{noformat}
This will fail precommit, which requires that all {{rev}} values are exactly 
{{${//}}}. The canonical way of dealing with shared versions is to 
declare a {{.version}} property in {{ivy-versions.properties}} and then 
use it, rather than literal versions, as the value for the properties that 
should share it. E.g.:
{noformat}
com.carrotsearch.randomizedtesting.version = 2.5.3
/com.carrotsearch.randomizedtesting/junit4-ant = 
${com.carrotsearch.randomizedtesting.version}
/com.carrotsearch.randomizedtesting/randomizedtesting-runner = 
${com.carrotsearch.randomizedtesting.version}
{noformat}


was (Author: steve_rowe):
bq. I think that all of the log4j components should use a single property for 
their version, instead of having "/log4j/log4j-api", "/log4j/log4j-core", etc.

From the latest patch:

{noformat}
--- solr/solr-ref-guide/ivy.xml (date 1516411619000)
[...]
-
+
+
{noformat}

This will fail precommit, which requires that all {{rev}} values are exactly to 
{{//}}.  The canonical way of dealing with shared versions is to 
declare a {{.version}} property in {{ivy-versions.properties}} and then 
use it, rather than literal versions, as the value for the properties that 
should share it.  E.g.:

{noformat}
com.carrotsearch.randomizedtesting.version = 2.5.3
/com.carrotsearch.randomizedtesting/junit4-ant = 
${com.carrotsearch.randomizedtesting.version}
/com.carrotsearch.randomizedtesting/randomizedtesting-runner = 
${com.carrotsearch.randomizedtesting.version}
{noformat} 

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-01-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334774#comment-16334774
 ] 

Steve Rowe commented on SOLR-7887:
--

bq. I think that all of the log4j components should use a single property for 
their version, instead of having "/log4j/log4j-api", "/log4j/log4j-core", etc.

From the latest patch:

{noformat}
--- solr/solr-ref-guide/ivy.xml (date 1516411619000)
[...]
-
+
+
{noformat}

This will fail precommit, which requires that all {{rev}} values are exactly to 
{{//}}.  The canonical way of dealing with shared versions is to 
declare a {{.version}} property in {{ivy-versions.properties}} and then 
use it, rather than literal versions, as the value for the properties that 
should share it.  E.g.:

{noformat}
com.carrotsearch.randomizedtesting.version = 2.5.3
/com.carrotsearch.randomizedtesting/junit4-ant = 
${com.carrotsearch.randomizedtesting.version}
/com.carrotsearch.randomizedtesting/randomizedtesting-runner = 
${com.carrotsearch.randomizedtesting.version}
{noformat} 
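
Applied to the log4j2 dependencies discussed here, that pattern would look 
roughly like the following (the version number and the exact org/module keys are 
placeholders, not the values from the patch):

{noformat}
org.apache.logging.log4j.version = 2.11.0
/org.apache.logging.log4j/log4j-api = ${org.apache.logging.log4j.version}
/org.apache.logging.log4j/log4j-core = ${org.apache.logging.log4j.version}
/org.apache.logging.log4j/log4j-slf4j-impl = ${org.apache.logging.log4j.version}
{noformat}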

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 127 - Still unstable

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/127/

8 tests failed.
FAILED:  
org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat.testSparseSortedFixedLengthVsStoredFields

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([92DF66FA9BB46D37]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([92DF66FA9BB46D37]:0)


FAILED:  org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test

Error Message:
shard1 is not consistent.  Got 51 from 
https://127.0.0.1:40994/collection1_shard1_replica_n21 (previous client) and 
got 1124 from https://127.0.0.1:37294/collection1_shard1_replica_n23

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 51 from 
https://127.0.0.1:40994/collection1_shard1_replica_n21 (previous client) and 
got 1124 from https://127.0.0.1:37294/collection1_shard1_replica_n23
at 
__randomizedtesting.SeedInfo.seed([45427567F6CE52C3:CD164ABD58323F3B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1330)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1309)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2269 - Still unstable

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2269/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.OpenCloseCoreStressTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.core.OpenCloseCoreStressTest: 1) Thread[id=112, 
name=qtp329524253-112, state=TIMED_WAITING, group=TGRP-OpenCloseCoreStressTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.OpenCloseCoreStressTest: 
   1) Thread[id=112, name=qtp329524253-112, state=TIMED_WAITING, 
group=TGRP-OpenCloseCoreStressTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([6553BF52F8187A4B]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.OpenCloseCoreStressTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=112, name=qtp329524253-112, state=TIMED_WAITING, 
group=TGRP-OpenCloseCoreStressTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=112, name=qtp329524253-112, state=TIMED_WAITING, 
group=TGRP-OpenCloseCoreStressTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([6553BF52F8187A4B]:0)




Build Log:
[...truncated 11775 lines...]
   [junit4] Suite: org.apache.solr.core.OpenCloseCoreStressTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/temp/solr.core.OpenCloseCoreStressTest_6553BF52F8187A4B-001/init-core-data-001
   [junit4]   2> 155473 INFO  
(SUITE-OpenCloseCoreStressTest-seed#[6553BF52F8187A4B]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 155475 INFO  
(SUITE-OpenCloseCoreStressTest-seed#[6553BF52F8187A4B]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 155475 INFO  

[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334643#comment-16334643
 ] 

Karl Wright commented on LUCENE-8133:
-

[~ivera], the first example you gave ( (0 0), (0 1),(1 1), (1 0) ) seems 
clockwise to me.  Increasing latitude is north, and increasing longitude is 
east.  So it's working as described in the javadoc.

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11882) SolrMetric registries cause huge memory consumption with lots of transient cores

2018-01-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-11882:
-

Assignee: Erick Erickson

> SolrMetric registries cause huge memory consumption with lots of transient 
> cores
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Erick Erickson
>Priority: Major
> Attachments: solr-dump-full_Leak_Suspects.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundred thousand), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when it is closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.
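
For reference, a minimal core.properties for one of these transient cores, 
matching the setup described above, would look roughly like this (the core name 
is a placeholder):

{noformat}
name=core000123
transient=true
loadOnStartup=false
{noformat}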



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334607#comment-16334607
 ] 

Shalin Shekhar Mangar commented on SOLR-11880:
--

If we remove that exception then we're running blind. There would be no way to 
get the stack trace of the submitter when an actual exception happens inside 
the executor thread.
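
For reference, a minimal sketch of the pattern being discussed (not Solr's actual implementation): the submitter's stack trace is captured eagerly in {{execute}} and only attached to a failure that happens later on the pool thread. Eliminating the eager capture entirely would leave the catch block with nothing to attach, which is the "running blind" concern.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubmitterTraceExecutor {
  private final ExecutorService delegate = Executors.newFixedThreadPool(4);

  public void execute(Runnable command) {
    // Captured on the submitting thread; this is the per-call allocation being debated.
    final Exception submitterStackTrace = new Exception("Submitter stack trace");
    delegate.execute(() -> {
      try {
        command.run();
      } catch (RuntimeException e) {
        // Without the eager capture above, the submitter's frames are unavailable here.
        e.addSuppressed(submitterStackTrace);
        throw e;
      }
    });
  }
}
{code}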

> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack 
> trace");{code}
> This means that every call via the thread pool will create this exception, 
> and it will only be used when an error is seen. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11883) NPE on missing nested query in QueryValueSource

2018-01-22 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334598#comment-16334598
 ] 

Munendra S N commented on SOLR-11883:
-

Handle both cases, for example:

{noformat}
query($qq) and query({!edismax v=$qq})
{noformat}
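
A rough sketch of the kind of guard that would cover both spellings (the variable names here are hypothetical, not the actual patch): if the dereferenced nested query resolves to null, fail with a SyntaxError instead of letting the null reach QueryValueSource.

{code:java}
// Inside the function parser, after resolving the nested query for query($qq)
// or query({!edismax v=$qq}); nestedQuery may be null when $qq is missing.
if (nestedQuery == null) {
  throw new SyntaxError("Missing param " + paramName
      + " while parsing function '" + originalText + "'");
}
return new QueryValueSource(nestedQuery, defaultValue);
{code}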


> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-11883.patch
>
>
> When the nested query or query de-referencing is used but the query isn't 
> specified, Solr throws an NPE.
> For the following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at 

[jira] [Updated] (SOLR-11883) NPE on missing nested query in QueryValueSource

2018-01-22 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-11883:

Attachment: SOLR-11883.patch

> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-11883.patch
>
>
> When the nested query or query de-referencing is used but the query isn't 
> specified, Solr throws an NPE.
> For the following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
>   at 
> 

[jira] [Commented] (SOLR-11883) NPE on missing nested query in QueryValueSource

2018-01-22 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334592#comment-16334592
 ] 

Munendra S N commented on SOLR-11883:
-

For the same request, the response with a 400 status code:
{code:json}
{
"responseHeader": {
"status": 400,
"QTime": 1,
"params": {
"q": "*",
"defType": "edismax",
"boost": "query($qq)"
}
},
"error": {
"metadata": [
"error-class",
"org.apache.solr.common.SolrException",
"root-error-class",
"org.apache.solr.search.SyntaxError"
],
"msg": "org.apache.solr.search.SyntaxError: Missing param qq while 
parsing function 'query($qq)'",
"code": 400
}
}
{code}


> NPE on missing nested query in QueryValueSource
> ---
>
> Key: SOLR-11883
> URL: https://issues.apache.org/jira/browse/SOLR-11883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
>
> When the nested query or query de-referencing is used but the query isn't 
> specified, Solr throws an NPE.
> For the following request, 
> {code:java}
> http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
> {code}
> Solr returned 500 with stack trace
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
>   at java.util.Arrays.hashCode(Arrays.java:4146)
>   at java.util.Objects.hash(Objects.java:128)
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
>   at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 406 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/406/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([AE9BB41BCE058087:26CF8BC160F9ED7F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-11883) NPE on missing nested query in QueryValueSource

2018-01-22 Thread Munendra S N (JIRA)
Munendra S N created SOLR-11883:
---

 Summary: NPE on missing nested query in QueryValueSource
 Key: SOLR-11883
 URL: https://issues.apache.org/jira/browse/SOLR-11883
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Munendra S N


When the nested query or query de-referencing is used but the query isn't 
specified, Solr throws an NPE.

For the following request, 
{code:java}
http://localhost:8983/solr/blockjoin70001-1492010056/select?q=*&boost=query($qq)&defType=edismax
{code}

Solr returned 500 with stack trace

{code:java}
java.lang.NullPointerException
at 
org.apache.lucene.queries.function.valuesource.QueryValueSource.hashCode(QueryValueSource.java:63)
at java.util.Arrays.hashCode(Arrays.java:4146)
at java.util.Objects.hash(Objects.java:128)
at 
org.apache.lucene.queries.function.ValueSource$WrappedDoubleValuesSource.hashCode(ValueSource.java:275)
at java.util.Arrays.hashCode(Arrays.java:4146)
at java.util.Objects.hash(Objects.java:128)
at 
org.apache.lucene.queries.function.FunctionScoreQuery$MultiplicativeBoostValuesSource.hashCode(FunctionScoreQuery.java:269)
at java.util.Arrays.hashCode(Arrays.java:4146)
at java.util.Objects.hash(Objects.java:128)
at 
org.apache.lucene.queries.function.FunctionScoreQuery.hashCode(FunctionScoreQuery.java:130)
at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1326)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:380)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:530)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at 

GitHub PR comments move to JIRA "Work Log" tab

2018-01-22 Thread David Smiley
I've been a bit frustrated with the excess noise that the current GitHub PR
code review comments generate on JIRA (and the ensuing extra emails).  I
emailed us...@infra.apache.org and I got some helpful input that we have
some options.  In particular, the integration can be customized to instead
populate the JIRA "Work Log" tab for these PR comments. Here's an example:
https://issues.apache.org/jira/browse/ACCUMULO-4770 (then click "Work Log")

Here are the docs on this:
https://reference.apache.org/pmc/github
More details on the 4 options:
https://git-wip-us.apache.org/repos/infra?p=asfgit-admin.git;a=blob;f=NOTES;hb=HEAD#l293
Thanks to Christopher Tubbs who responded to my inquiries.

This weekend I plan to request that ASF Infra modify our configuration to
be "worklog".  Christopher and I feel this really ought to be the default
setup.

~ David


-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1222 - Failure!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1222/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSF: 4

Stack Trace:
java.lang.AssertionError: Unexpected number of elements in the group for 
intGSF: 4
at 
__randomizedtesting.SeedInfo.seed([F2644420F7092010:69DF2A78BA51124E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([F2644420F7092010:9F98E0DD4D41DF17]:0)
at 

[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334514#comment-16334514
 ] 

Ignacio Vera commented on LUCENE-8133:
--

That is odd. I have been using the factory to build simple square polygons, and 
the behavior has been more like WKT, where points are defined in a 
counter-clockwise direction as seen from the "top":

(lat,lon) -> (0 0), (0 1), (1 1), (1 0) gives you a convex polygon. Points are 
CCW from the "top".

(lat,lon) -> (0 0), (1 0), (1 1), (0 1) gives you a concave polygon. Points are 
CW from the "top".

Is that the opposite of what is expected, or am I mixed up with directions?

[~dsmiley]: Noted, Jira tickets should not be exclusive.

 

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8134) Disallow changing index options on the fly

2018-01-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334497#comment-16334497
 ] 

Adrien Grand commented on LUCENE-8134:
--

Here is a patch that takes the same approach that we already use for points and 
doc values. The test checks that index options cannot be changed by 
addDocument, addIndexes(CodecReader) or addIndexes(Directory).
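
Roughly, the check is of this shape (a hypothetical illustration of the idea, not the patch itself, and it glosses over details such as how fields that are not yet indexed are treated): once a field has index options recorded, an incoming document or merged segment declaring different options is rejected.

{code:java}
import org.apache.lucene.index.IndexOptions;

final class IndexOptionsCheck {
  // Hypothetical helper: reject a field whose index options differ from what
  // the index already recorded for that field name.
  static void verifySameIndexOptions(String field, IndexOptions current, IndexOptions incoming) {
    if (current != incoming) {
      throw new IllegalArgumentException("cannot change field \"" + field
          + "\" from index options=" + current + " to inconsistent index options=" + incoming);
    }
  }
}
{code}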

> Disallow changing index options on the fly
> --
>
> Key: LUCENE-8134
> URL: https://issues.apache.org/jira/browse/LUCENE-8134
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8134.patch
>
>
> Follow-up of LUCENE-8031: changing index options is problematic because the 
> way a field is indexed can influence the way the field length should be 
> computed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11882) SolrMetric registries cause huge memory consumption with lots of transient cores

2018-01-22 Thread Eros Taborelli (JIRA)
Eros Taborelli created SOLR-11882:
-

 Summary: SolrMetric registries cause huge memory consumption with 
lots of transient cores
 Key: SOLR-11882
 URL: https://issues.apache.org/jira/browse/SOLR-11882
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 7.1
Reporter: Eros Taborelli
 Attachments: solr-dump-full_Leak_Suspects.zip

*Description:*

Our setup involves using a lot of small cores (possibly hundreds of thousands), but 
working only on a few of them at any given time.

We already followed all recommendations in this guide: 
[https://wiki.apache.org/solr/LotsOfCores]

We noticed that after creating/loading around 1000-2000 empty cores, with no 
documents inside, the heap consumption went through the roof despite having set 
transientCacheSize to only 64 (heap size set to 12G).

All cores are correctly set to loadOnStartup=false and transient=true, and we 
have verified via logs that the cores in excess are actually being closed.

However, a reference remains in the 
org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
until a core is fully unloaded.

Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
ConcurrentHashMap until a core is actually fully loaded.

I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size = 
512m) and made a report (attached) using eclipse MAT.

*Desired outcome:*

When a transient core is closed, the references in the SolrMetricManager should 
be removed, in the same fashion the reporters for the core are also closed and 
removed.

Alternatively, an unloadOnClose=true|false flag could be implemented to fully 
unload a transient core when it is closed due to the cache size.
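
A rough sketch of the desired cleanup, assuming a per-core close hook (the object and method names here are illustrative, not an actual patch):

{code:java}
// In the transient-core close path, after the per-core reporters are closed:
String registryName = core.getCoreMetricManager().getRegistryName();
// Drop the per-core registry so closed transient cores stop pinning heap.
metricManager.removeRegistry(registryName);
{code}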

*Note:*

The documentation mentions everywhere that unused cores will be unloaded, 
but this is misleading, as the cores are never fully unloaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8134) Disallow changing index options on the fly

2018-01-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8134:
-
Attachment: LUCENE-8134.patch

> Disallow changing index options on the fly
> --
>
> Key: LUCENE-8134
> URL: https://issues.apache.org/jira/browse/LUCENE-8134
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8134.patch
>
>
> Follow-up of LUCENE-8031: changing index options is problematic because the 
> way a field is indexed can influence the way the field length should be 
> computed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8134) Disallow changing index options on the fly

2018-01-22 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8134:


 Summary: Disallow changing index options on the fly
 Key: LUCENE-8134
 URL: https://issues.apache.org/jira/browse/LUCENE-8134
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
 Fix For: master (8.0)


Follow-up of LUCENE-8031: changing index options is problematic because the way 
a field is indexed can influence the way the field length should be computed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+37) - Build # 21319 - Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21319/
Java: 64bit/jdk-10-ea+37 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:42975/niw","node_name":"127.0.0.1:42975_niw","state":"active","type":"NRT","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/16)={   
"pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "state":"down",   
"base_url":"http://127.0.0.1:43095/niw;,   
"core":"c8n_1x3_lf_shard1_replica_n1",   
"node_name":"127.0.0.1:43095_niw",   "type":"NRT"}, 
"core_node5":{   "core":"c8n_1x3_lf_shard1_replica_n2",   
"base_url":"http://127.0.0.1:42975/niw;,   
"node_name":"127.0.0.1:42975_niw",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"c8n_1x3_lf_shard1_replica_n3",   
"base_url":"http://127.0.0.1:33167/niw;,   
"node_name":"127.0.0.1:33167_niw",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:42975/niw","node_name":"127.0.0.1:42975_niw","state":"active","type":"NRT","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/16)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "state":"down",
  "base_url":"http://127.0.0.1:43095/niw;,
  "core":"c8n_1x3_lf_shard1_replica_n1",
  "node_name":"127.0.0.1:43095_niw",
  "type":"NRT"},
"core_node5":{
  "core":"c8n_1x3_lf_shard1_replica_n2",
  "base_url":"http://127.0.0.1:42975/niw;,
  "node_name":"127.0.0.1:42975_niw",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"c8n_1x3_lf_shard1_replica_n3",
  "base_url":"http://127.0.0.1:33167/niw;,
  "node_name":"127.0.0.1:33167_niw",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([7FB65A1B5889650A:F7E265C1F67508F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:169)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1456 - Still unstable

2018-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1456/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventFromRestoredState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([CA9ABFF875BC0E0D:CAAC0B385CB1AA67]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventFromRestoredState(TestTriggerIntegration.java:673)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test

Error Message:
Expected numSlices=5 numReplicas=1 but found 
DocCollection(solrj_collection3//collections/solrj_collection3/state.json/29)={ 
  "pullReplicas":"0",   "replicationFactor":"1",   "shards":{ "shard1":{   

[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs

2018-01-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-8389:
---
Attachment: SOLR-8389.patch

> Convert CDCR peer cluster and other configurations into collection properties 
> modifiable via APIs
> -
>
> Key: SOLR-8389
> URL: https://issues.apache.org/jira/browse/SOLR-8389
> Project: Solr
>  Issue Type: Improvement
>  Components: CDCR, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, 
> SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, 
> SOLR-8389.patch, Screen Shot 2017-12-21 at 5.44.36 PM.png
>
>
> CDCR configuration is kept inside solrconfig.xml which makes it difficult to 
> add or change peer cluster configuration.
> I propose to move all CDCR config to collection level properties in cluster 
> state so that they can be modified using the existing modify collection API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1640 - Still Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1640/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=29236, name=jetty-launcher-5616-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=29236, name=jetty-launcher-5616-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
at __randomizedtesting.SeedInfo.seed([1836ADAFF73FADF7]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData

Error Message:
Node watch should have fired!

Stack Trace:
java.lang.AssertionError: Node watch should have fired!
at 
__randomizedtesting.SeedInfo.seed([1836ADAFF73FADF7:3EA642FDA3B16BFD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData(TestDistribStateManager.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 

[jira] [Commented] (SOLR-11624) collection creation should not also overwrite/delete any configset but it can!

2018-01-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334378#comment-16334378
 ] 

Ishan Chattopadhyaya commented on SOLR-11624:
-

Thanks [~abhidemon] for the patch (test+documentation) and thanks to Erick, 
David & Shawn for reviewing the approaches / patches.

> collection creation should not also overwrite/delete any configset but it can!
> --
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, 
> SOLR-11624.4.patch, SOLR-11624.patch, SOLR-11624.patch, SOLR-11624.patch, 
> SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11480) Remove lingering admin-extra.html files and references

2018-01-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-11480:
--

Assignee: Christine Poerschke

> Remove lingering admin-extra.html files and references
> --
>
> Key: SOLR-11480
> URL: https://issues.apache.org/jira/browse/SOLR-11480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, examples
>Affects Versions: 6.6
>Reporter: Eric Pugh
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11480.patch, SOLR-11480.patch
>
>
> There are still some admin-extra related files, most confusingly, in the 
> techproducts configset!   I started up solr with -e techproducts, and saw the 
> files listed, but couldn't use them.   
> While I'm sad to see it go, I think it's better to have it completely removed 
> than still lingering about.
> Related to https://issues.apache.org/jira/browse/SOLR-8140.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11480) Remove lingering admin-extra.html files and references

2018-01-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11480:
---
Attachment: SOLR-11480.patch

> Remove lingering admin-extra.html files and references
> --
>
> Key: SOLR-11480
> URL: https://issues.apache.org/jira/browse/SOLR-11480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, examples
>Affects Versions: 6.6
>Reporter: Eric Pugh
>Priority: Minor
> Attachments: SOLR-11480.patch, SOLR-11480.patch
>
>
> There are still some admin-extra related files, most confusingly, in the 
> techproducts configset!   I started up solr with -e techproducts, and saw the 
> files listed, but couldn't use them.   
> While I'm sad to see it go, I think it's better to have it completely removed 
> than still lingering about.
> Related to https://issues.apache.org/jira/browse/SOLR-8140.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11480) Remove lingering admin-extra.html files and references

2018-01-22 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334369#comment-16334369
 ] 

Christine Poerschke commented on SOLR-11480:


Thanks [~epugh] for noticing and following up on this!

Today I also had a "what is this admin-extra stuff?" confusion moment.

The attached patch is your October patch applied to the current master branch (just one 
small conflict), plus removal of the remaining references from the 
core-specific-tools.adoc and core-overview.js files.

> Remove lingering admin-extra.html files and references
> --
>
> Key: SOLR-11480
> URL: https://issues.apache.org/jira/browse/SOLR-11480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, examples
>Affects Versions: 6.6
>Reporter: Eric Pugh
>Priority: Minor
> Attachments: SOLR-11480.patch, SOLR-11480.patch
>
>
> There are still some admin-extra related files, most confusingly, in the 
> techproducts configset!   I started up solr with -e techproducts, and saw the 
> files listed, but couldn't use them.   
> While sad to see it go, I think it's better to have it be completely removed, 
> versus still lingering about.
> Related to https://issues.apache.org/jira/browse/SOLR-8140.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334344#comment-16334344
 ] 

Karl Wright commented on LUCENE-8133:
-

One thing to note is the javadoc for createGeoPolygon():

{code}
  /** Create a GeoPolygon using the specified points and holes, using order to determine
   * siding of the polygon.  Much like ESRI, this method uses clockwise to indicate the space
   * on the same side of the shape as being inside, and counter-clockwise to indicate the
   * space on the opposite side as being inside.
   * @param pointList is a list of the GeoPoints to build an arbitrary polygon out of.  If points go
   *  clockwise from a given pole, then that pole should be within the polygon.  If points go
   *  counter-clockwise, then that pole should be outside the polygon.
   * @return a GeoPolygon corresponding to what was specified.
   */
{code}

In other words, the behavior is quite different depending on the order the 
points come in.  The code considers the order to determine the "pole" -- that 
is, the point that's actually within the polygon.  Here are the points in the 
test case:

{code}
+points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-23.434456), Geo3DUtil.fromDegrees(14.459204)));
+points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-23.43394), Geo3DUtil.fromDegrees(14.459206)));
+points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-23.434196), Geo3DUtil.fromDegrees(14.458647)));
+points.add(new GeoPoint(PlanetModel.SPHERE, Geo3DUtil.fromDegrees(-23.434452), Geo3DUtil.fromDegrees(14.458646)));
{code}

My guess is that the "pole" chosen for this ordering is in the part of the 
sphere that is "everything but" the polygon that was intended.  That *should* 
work nevertheless.
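
To make the ordering point concrete, here is a hedged sketch (my own, not from the patch) that builds the polygon from the four test points in both orders; per the javadoc quoted above, reversing the list should flip which "pole" is treated as inside. It assumes GeoPolygonFactory.makeGeoPolygon(PlanetModel, List<GeoPoint>) as the entry point and uses Math.toRadians in place of Geo3DUtil.fromDegrees.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.lucene.spatial3d.geom.GeoPoint;
import org.apache.lucene.spatial3d.geom.GeoPolygon;
import org.apache.lucene.spatial3d.geom.GeoPolygonFactory;
import org.apache.lucene.spatial3d.geom.PlanetModel;

public class PolygonOrderSketch {
  public static void main(String[] args) {
    List<GeoPoint> points = new ArrayList<>();
    // Same four corners as the test case; latitude/longitude given in radians.
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(-23.434456), Math.toRadians(14.459204)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(-23.43394),  Math.toRadians(14.459206)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(-23.434196), Math.toRadians(14.458647)));
    points.add(new GeoPoint(PlanetModel.SPHERE, Math.toRadians(-23.434452), Math.toRadians(14.458646)));

    // This order: the small quadrilateral itself should be "inside".
    GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points);

    // Reversed order: the complement ("everything but" the quadrilateral)
    // should be treated as the inside of the shape.
    List<GeoPoint> reversed = new ArrayList<>(points);
    Collections.reverse(reversed);
    GeoPolygon complement = GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, reversed);

    System.out.println(polygon + " / " + complement);
  }
}
{code}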

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334341#comment-16334341
 ] 

David Smiley commented on LUCENE-8133:
--

Since we're on the subject of JIRA etiquette, please don't address JIRA issues 
to specific people – as in "hello ".  Everyone should feel invited to 
participate, even if, practically speaking, certain parts of our code have 
historically only been improved by one person.  That said, CC away to anyone 
you think would take an interest in the issue.

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11624) collection creation should not also overwrite/delete any configset but it can!

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334308#comment-16334308
 ] 

ASF subversion and git services commented on SOLR-11624:


Commit 6ef838663bd0a006109e427ae2995192f38f2453 in lucene-solr's branch 
refs/heads/branch_7x from [~ishan1604]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ef8386 ]

SOLR-11624: Autocreated configsets will not use .AUTOCREATED suffix


> collection creation should not also overwrite/delete any configset but it can!
> --
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, 
> SOLR-11624.4.patch, SOLR-11624.patch, SOLR-11624.patch, SOLR-11624.patch, 
> SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE=wiki&.
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11624) collection creation should not also overwrite/delete any configset but it can!

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334302#comment-16334302
 ] 

ASF subversion and git services commented on SOLR-11624:


Commit 183835ed2485915006746e456d7124cb5d5d4abb in lucene-solr's branch 
refs/heads/master from [~ishan1604]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=183835e ]

SOLR-11624: Autocreated configsets will not use .AUTOCREATED suffix


> collection creation should not also overwrite/delete any configset but it can!
> --
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, 
> SOLR-11624.4.patch, SOLR-11624.patch, SOLR-11624.patch, SOLR-11624.patch, 
> SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE=wiki&.
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334286#comment-16334286
 ] 

Karl Wright commented on LUCENE-8133:
-

I will be able to look at this by this afternoon.

 

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #298: Add web app logging interface auto-refresh toggle.

2018-01-22 Thread BramVer
Github user BramVer commented on the issue:

https://github.com/apache/lucene-solr/pull/298
  
This would be a very welcome addition to the page; it's strange that it isn't 
there in the first place.
I don't know how people deal with logging right now, because it is not 
possible for me to read anything when the entries all close after 10 seconds :)


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8132) HyphenationDecompoundTokenFilter does not set position/offset attributes correctly

2018-01-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334273#comment-16334273
 ] 

Robert Muir commented on LUCENE-8132:
-

That's what HyphenationDecompoundTokenFilter already does. I think maybe the 
name is confusing; at least have a look at the class javadocs :)

In this case I'm sorry, but I think you are stretching (and you aren't 
correct). We should fix these filters and enforce a tokenizer as input, seriously.

> HyphenationDecompoundTokenFilter does not set position/offset attributes 
> correctly
> --
>
> Key: LUCENE-8132
> URL: https://issues.apache.org/jira/browse/LUCENE-8132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6.1, 7.2.1
>Reporter: Holger Bruch
>Priority: Major
>
> HyphenationDecompoundTokenFilter and DictionaryDecompoundTokenFilter set 
> positionIncrement to 0 for all subwords, reuse the start/end offset of the 
> original token, and ignore positionLength completely.
> As a consequence, the QueryBuilder generates a SynonymQuery comprising all 
> subwords, which should rather be treated as individual terms.
>  
>  
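
To make the reported behaviour easy to observe, here is a hedged sketch (not from the issue) that prints the position increment and offsets of each emitted token. It uses DictionaryCompoundWordTokenFilter, the dictionary-based sibling mentioned in the description, because it needs no hyphenation grammar file; the dictionary entries and input text are made-up examples, and the expected output is an assumption based on the description above.

{code:java}
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public class DecompoundAttributeCheck {
  public static void main(String[] args) throws Exception {
    // Made-up dictionary and input; any compound covered by the dictionary will do.
    CharArraySet dict = new CharArraySet(Arrays.asList("rind", "fleisch"), true);
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("rindfleisch"));
    TokenStream ts = new DictionaryCompoundWordTokenFilter(tokenizer, dict);

    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
    OffsetAttribute offsets = ts.addAttribute(OffsetAttribute.class);

    ts.reset();
    while (ts.incrementToken()) {
      // Per this issue, the subwords are expected to report posInc=0 and to
      // reuse the offsets of the original compound token.
      System.out.println(term + " posInc=" + posInc.getPositionIncrement()
          + " offsets=" + offsets.startOffset() + "-" + offsets.endOffset());
    }
    ts.end();
    ts.close();
  }
}
{code}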



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11051) Use disk free metric in default cluster preferences

2018-01-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11051:
--
Attachment: SOLR-11051.patch

> Use disk free metric in default cluster preferences
> ---
>
> Key: SOLR-11051
> URL: https://issues.apache.org/jira/browse/SOLR-11051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11051.patch
>
>
> Currently, the default cluster preferences are simply based on core count:
> {code}
> {minimize : cores, precision:1}
> {code}
> We should start using the disk free metric in the default cluster preferences so 
> that replicas are placed on nodes with the most free disk space.
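
As a hedged sketch of what such a default might look like (the freedisk precision value is an illustrative assumption, not a proposal from this issue), the preferences could combine both metrics, for example set via a set-cluster-preferences command on the autoscaling API:

{code}
[
  {"minimize": "cores", "precision": 1},
  {"maximize": "freedisk", "precision": 10}
]
{code}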



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11871) MOVEREPLICA suggester should not suggest the leader to be moved if there are other replicas

2018-01-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11871.
---
   Resolution: Fixed
Fix Version/s: 7.3

> MOVEREPLICA suggester should not suggest the leader to be moved if there are 
> other replicas
> ---
>
> Key: SOLR-11871
> URL: https://issues.apache.org/jira/browse/SOLR-11871
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.3
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11871) MOVEREPLICA suggester should not suggest the leader to be moved if there are other replicas

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334243#comment-16334243
 ] 

ASF subversion and git services commented on SOLR-11871:


Commit 40503e94964700deb0c3fa338260ffd2bcc41f09 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=40503e9 ]

SOLR-11871: MoveReplicaSuggester should not suggest leader if other replicas 
are available


> MOVEREPLICA suggester should not suggest the leader to be moved if there are 
> other replicas
> ---
>
> Key: SOLR-11871
> URL: https://issues.apache.org/jira/browse/SOLR-11871
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11871) MOVEREPLICA suggester should not suggest the leader to be moved if there are other replicas

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334239#comment-16334239
 ] 

ASF subversion and git services commented on SOLR-11871:


Commit 4aeabe7ff25c78867cb10993a60aada1faadd0af in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4aeabe7 ]

SOLR-11871: MoveReplicaSuggester should not suggest leader if other replicas 
are available


> MOVEREPLICA suggester should not suggest the leader to be moved if there are 
> other replicas
> ---
>
> Key: SOLR-11871
> URL: https://issues.apache.org/jira/browse/SOLR-11871
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11871) MOVEREPLICA suggester should not suggest the leader to be moved if there are other replicas

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334238#comment-16334238
 ] 

ASF subversion and git services commented on SOLR-11871:


Commit d0a5dbe8d592120891205d7136f6368c473d0022 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d0a5dbe ]

SOLR-11871: MoveReplicaSuggester should not suggest leader if other replicas 
are available


> MOVEREPLICA suggester should not suggest the leader to be moved if there are 
> other replicas
> ---
>
> Key: SOLR-11871
> URL: https://issues.apache.org/jira/browse/SOLR-11871
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11871) MOVEREPLICA suggester should not suggest the leader to be moved if there are other replicas

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334237#comment-16334237
 ] 

ASF subversion and git services commented on SOLR-11871:


Commit 876ecd87fb44b828785681238915eb0e1965ad58 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=876ecd8 ]

SOLR-11871: MoveReplicaSuggester should not suggest leader if other replicas 
are available


> MOVEREPLICA suggester should not suggest the leader to be moved if there are 
> other replicas
> ---
>
> Key: SOLR-11871
> URL: https://issues.apache.org/jira/browse/SOLR-11871
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334233#comment-16334233
 ] 

ASF subversion and git services commented on SOLR-11747:


Commit 9fe1a8b12d301b26b8a106fdf270993e37bfd033 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9fe1a8b ]

SOLR-11747: Pause triggers until actions finish executing and the cool down 
period expires

(cherry picked from commit 5425353)


> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers between the start of action execution and the end of the 
> cool down period. The triggers will be resumed after the cool down period 
> expires.
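
Not the actual Solr implementation, only a minimal concurrency sketch of the behaviour described above: every trigger is paused before the event's actions run and resumed only after the actions complete and the cool down period has elapsed. All names here (TriggerCoordinatorSketch, coolDownMillis, and so on) are illustrative assumptions.

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

// Minimal sketch, not the real scheduled-triggers machinery.
public class TriggerCoordinatorSketch {

  interface Trigger {
    void pause();
    void resume();
  }

  private final List<Trigger> triggers;
  private final long coolDownMillis;

  TriggerCoordinatorSketch(List<Trigger> triggers, long coolDownMillis) {
    this.triggers = triggers;
    this.coolDownMillis = coolDownMillis;
  }

  /** Runs the actions for one event while all triggers are paused. */
  void processEvent(Runnable actions) throws InterruptedException {
    // 1. Pause every trigger so none of them observes intermediate cluster state.
    triggers.forEach(Trigger::pause);
    try {
      // 2. Execute the actions associated with the event.
      actions.run();
      // 3. Wait out the cool down period before anything may fire again.
      TimeUnit.MILLISECONDS.sleep(coolDownMillis);
    } finally {
      // 4. Only now are the triggers resumed.
      triggers.forEach(Trigger::resume);
    }
  }
}
{code}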



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11747) Pause triggers until actions finish executing and the cool down period expires

2018-01-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11747.
--
Resolution: Fixed

> Pause triggers until actions finish executing and the cool down period expires
> --
>
> Key: SOLR-11747
> URL: https://issues.apache.org/jira/browse/SOLR-11747
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11747.patch, SOLR-11747.patch
>
>
> Currently, the cool down period is a fixed period during which events 
> generated from triggers aren't accepted. The cool down starts after actions 
> performed by a previous event finish. But it doesn't protect against triggers 
> acting on intermediate cluster state during the time the actions are 
> executing. Events can still be created and persisted to ZK and will be 
> executed serially once the cool down finishes.
> To protect against this, I intend to modify the behaviour of the system to 
> pause all triggers between the start of action execution and the end of the 
> cool down period. The triggers will be resumed after the cool down period 
> expires.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4401 - Unstable!

2018-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4401/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index ٠ out-of-bounds for length ٠

Stack Trace:
java.lang.IndexOutOfBoundsException: Index ٠ out-of-bounds for length ٠
at 
__randomizedtesting.SeedInfo.seed([CED80979EDBD576C:DA90522CCEBAEA72]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:439)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Updated] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8133:
-
Attachment: image-2018-01-22-12-55-17-096.png

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334179#comment-16334179
 ] 

Ignacio Vera commented on LUCENE-8133:
--

Note I am trying to construct the following polygon:

!image-2018-01-22-12-55-17-096.png!

 

 

 

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png, 
> image-2018-01-22-12-55-17-096.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8133) Small polygon fails with misleading error: Convex polygon has a side that is more than 180 degrees

2018-01-22 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8133:
-
Summary: Small polygon fails with misleading error:  Convex polygon has a 
side that is more than 180 degrees  (was: Polygon fails with misleading error.)

> Small polygon fails with misleading error:  Convex polygon has a side that is 
> more than 180 degrees
> ---
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8133) Polygon fails with misleading error.

2018-01-22 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8133:
-
Attachment: image-2018-01-22-12-52-21-656.png

> Polygon fails with misleading error.
> 
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch, image-2018-01-22-12-52-21-656.png
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8133) Polygon fails with misleading error.

2018-01-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334177#comment-16334177
 ] 

Adrien Grand commented on LUCENE-8133:
--

[~ivera] Could you give an example of the error that you are getting in the 
issue description? It's quite hard to figure out what the issue is about 
without context.

> Polygon fails with misleading error.
> 
>
> Key: LUCENE-8133
> URL: https://issues.apache.org/jira/browse/LUCENE-8133
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8133.patch
>
>
> Hi [~karl wright],
> I am trying to create a polygon that is valid but I am getting an error which 
> is probably incorrect. I think it is a problem with precision.
> I will attach a test.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


