[jira] [Commented] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-02-01 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758862#comment-16758862
 ] 

Atri Sharma commented on LUCENE-8675:
-

{quote}If some segments are getting large enough that intra-segment parallelism 
becomes appealing, then maybe an easier and more efficient way to increase 
parallelism is to instead reduce the maximum segment size so that inter-segment 
parallelism has more potential for parallelizing query execution.
{quote}
Would that not lead to a much higher number of segments than required? That 
could lead to issues such as a lot of open file handles and too many threads 
required for scanning (although we would assign multiple small segments to a 
single thread).

Thanks for the point about range queries; that is an important thought. I will 
follow up with a separate patch on top of this one which will do the first 
phase of BKD iteration and share the generated bitset across N parallel 
threads, where N equals the number of remaining clauses and each thread 
intersects one clause with the bitset.
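
For illustration only, here is a rough sketch of what that second phase could 
look like. Everything in it (the class name, the executor, returning one 
bitset per clause) is an editorial assumption, not the actual patch:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

final class SharedBitsetIntersection {
  /** One task per remaining clause; each ANDs its clause with the read-only BKD bitset. */
  static List<FixedBitSet> intersectAll(FixedBitSet bkdBits,
      List<DocIdSetIterator> clauses, ExecutorService pool) throws Exception {
    List<Future<FixedBitSet>> futures = new ArrayList<>(clauses.size());
    for (DocIdSetIterator clause : clauses) {
      futures.add(pool.submit(() -> {
        FixedBitSet result = new FixedBitSet(bkdBits.length());
        for (int doc = clause.nextDoc();
            doc != DocIdSetIterator.NO_MORE_DOCS;
            doc = clause.nextDoc()) {
          if (bkdBits.get(doc)) { // concurrent reads of the shared bitset are safe
            result.set(doc);
          }
        }
        return result;
      }));
    }
    List<FixedBitSet> results = new ArrayList<>(futures.size());
    for (Future<FixedBitSet> f : futures) {
      results.add(f.get()); // wait for every clause thread
    }
    return results;
  }
}
{code}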

> Divide Segment Search Amongst Multiple Threads
> --
>
> Key: LUCENE-8675
> URL: https://issues.apache.org/jira/browse/LUCENE-8675
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Atri Sharma
>Priority: Major
>
> Segment search is a single threaded operation today, which can be a 
> bottleneck for large analytical workloads that index a lot of data and run 
> complex queries touching multiple segments (imagine a composite query with a 
> range query and filters on top). This ticket is for discussing the idea of 
> splitting a single segment's search amongst multiple threads, based on 
> mutually exclusive document ID ranges.
> This will be a two phase effort, the first phase targeting queries returning 
> all matching documents (collectors not terminating early). The second phase 
> patch will introduce staged execution and will build on top of this patch.
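
The mutually exclusive document ID ranges mentioned in the quoted description 
could be computed along these lines; a toy editorial sketch, not code from the 
patch:
{code:java}
// Partition a segment's doc ID space [0, maxDoc) into numThreads contiguous,
// mutually exclusive [start, end) ranges, one per worker thread.
static int[][] docIdRanges(int maxDoc, int numThreads) {
  int[][] ranges = new int[numThreads][2];
  int chunk = (maxDoc + numThreads - 1) / numThreads; // ceiling division
  for (int i = 0; i < numThreads; i++) {
    int start = Math.min(i * chunk, maxDoc);
    ranges[i][0] = start;
    ranges[i][1] = Math.min(start + chunk, maxDoc); // exclusive upper bound
  }
  return ranges;
}
{code}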






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 1026 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/1026/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestPullReplicaErrorHandling.testPullReplicaDisconnectsFromZooKeeper

Error Message:
Expecting node to be disconnected Timeout waiting to see state for 
collection=pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper
 
:DocCollection(pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper//collections/pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper/state.json/6)={
   "pullReplicas":"1",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_n1",
   "base_url":"http://127.0.0.1:60673/solr;,   
"node_name":"127.0.0.1:60673_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node4":{   
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_p2",
   "base_url":"http://127.0.0.1:60799/solr;,   
"node_name":"127.0.0.1:60799_solr",   "state":"active",   
"type":"PULL",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0"} Live 
Nodes: [127.0.0.1:45091_solr, 127.0.0.1:57136_solr, 127.0.0.1:60673_solr, 
127.0.0.1:60799_solr] Last available state: 
DocCollection(pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper//collections/pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper/state.json/6)={
   "pullReplicas":"1",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_n1",
   "base_url":"http://127.0.0.1:60673/solr;,   
"node_name":"127.0.0.1:60673_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node4":{   
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_p2",
   "base_url":"http://127.0.0.1:60799/solr;,   
"node_name":"127.0.0.1:60799_solr",   "state":"active",   
"type":"PULL",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expecting node to be disconnected
Timeout waiting to see state for 
collection=pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper
 
:DocCollection(pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper//collections/pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper/state.json/6)={
  "pullReplicas":"1",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_n1",
  "base_url":"http://127.0.0.1:60673/solr;,
  "node_name":"127.0.0.1:60673_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node4":{
  
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_p2",
  "base_url":"http://127.0.0.1:60799/solr;,
  "node_name":"127.0.0.1:60799_solr",
  "state":"active",
  "type":"PULL",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:45091_solr, 127.0.0.1:57136_solr, 127.0.0.1:60673_solr, 
127.0.0.1:60799_solr]
Last available state: 
DocCollection(pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper//collections/pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper/state.json/6)={
  "pullReplicas":"1",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  
"core":"pull_replica_error_handling_test_pull_replica_disconnects_from_zoo_keeper_shard1_replica_n1",
  "base_url":"http://127.0.0.1:60673/solr;,
  "node_name":"127.0.0.1:60673_solr",
  "state":"active",
  "type":"NRT",
  

[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-02-01 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758855#comment-16758855
 ] 

Gus Heck commented on SOLR-13131:
-

Attached a series of images showing how indexing for such a use case is 
simplified. With CRAs one could use a single generic send-to-Solr step, 
whereas without them you're writing custom batching code or configuring a 
myriad of generic senders. 

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: indexingWithCRA.png, indexingwithoutCRA.png, 
> indexintWithoutCRA2.png
>
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device type segregation, but I 
> could also see this leading to the ability to direct updates to tenant 
> specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time routed data 
> also segregated by device.
> Further design details to be added in comments.
>  






[jira] [Updated] (SOLR-13131) Category Routed Aliases

2019-02-01 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-13131:

Attachment: indexingwithoutCRA.png
indexingWithCRA.png
indexintWithoutCRA2.png

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: indexingWithCRA.png, indexingwithoutCRA.png, 
> indexintWithoutCRA2.png
>
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device type segregation, but I 
> could also see this leading to the ability to direct updates to tenant 
> specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time routed data 
> also segregated by device.
> Further design details to be added in comments.
>  






[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-02-01 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758853#comment-16758853
 ] 

Gus Heck commented on SOLR-13131:
-

This feature starts from the position that you have a use case where you want 
to accept a heterogeneous stream of data and segregate it into various 
collections. If you don't have a reason to separate the data into distinct 
collections, or the data flows generating documents are separate and not easily 
merged, there would be little or no call for using a CRA.

The key benefit is that it's data driven, and doesn't require human 
intervention or downtime for configuration/devops/programming/etc. to begin 
accepting a new type. This could be important if one is feeding a continuous 
stream of IoT sensor data (for example) and new sensor 
types/brands/locations/etc. may come online and be added without notice.

Automated collection creation from outside Solr based on data values in the 
documents doesn't have a smooth, easy solution that I can see. One obviously 
can't run a check for the existence of a collection for every document via the 
Collections API; that would be insanely slow. Parsing exception messages to 
know when you need to create a new collection also seems very ugly. A workable 
solution likely involves tracking Solr's list of collections separately, but 
that will have obvious concurrency pitfalls. One could possibly build indexing 
infrastructure that monitors ZooKeeper directly, similar to what Solr does, but 
that's complex and requires skill with ZooKeeper. Also, I'm not sure I like 
that idea, since it turns ZooKeeper's organization and details into a public 
API.

By way of contrast, Solr is already well positioned to know its own state, 
handle concurrency, and react to document values.

Another benefit is sheer convenience and a reduction of client side (indexing) 
complexity when segregating based on a field value. One doesn't have to build 
and maintain infrastructure to map categories to collections, which would be 
required when building URLs to send the data to specific collections or 
setting collections on each client... and if you're handling a mixed stream, 
you have to batch each type independently, because the batches will be headed 
for different URLs or handled by separate SolrJ clients (sketched after this 
comment)... 

I can also imagine CRAs greatly easing construction of systems with a 
collection-per-tenant pattern. The indexing infrastructure would always stamp 
the tenant's data with their customer_id, and so long as that happens you can 
be sure that Solr will route to separate collections on customer_id. The front 
end can build its queries knowing the customer id and setting the appropriate 
collection. Leaks between customers become impossible, and there is absolutely 
no need to change infrastructure to add a customer (other than adding nodes 
for capacity every N customers, of course). There also would be no need to 
write code that has to run admin level commands; admin command access could 
possibly be removed from the application entirely. Running reports across 
tenants (querying via the alias in a back end application) would "just work", 
again with no special programming. Moving big or noisy tenants to preferred 
hardware would not require software/config changes either, just admin commands 
or auto-scaling labels, and wouldn't disrupt any of the foregoing.

Much like TRAs, there are ways to do any or all of this with custom code or 
alternate infrastructure; the goal is to make it easier and more hands off.
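
To make the batching burden concrete, here is a hypothetical sketch of the 
client-side routing code a CRA would eliminate; the collection naming scheme 
and the sensor_type field are invented for illustration:
{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CategoryBatcher {
  /** Maps each doc's category to a collection and batches per collection. */
  public static void indexByCategory(CloudSolrClient client,
      List<SolrInputDocument> docs) throws Exception {
    Map<String, List<SolrInputDocument>> batches = new HashMap<>();
    for (SolrInputDocument doc : docs) {
      String category = (String) doc.getFieldValue("sensor_type"); // hypothetical field
      batches.computeIfAbsent("iot_" + category, c -> new ArrayList<>()).add(doc);
    }
    for (Map.Entry<String, List<SolrInputDocument>> e : batches.entrySet()) {
      client.add(e.getKey(), e.getValue()); // one request per target collection
    }
  }
}
{code}
With a CRA, the loop above would collapse to a single add against the alias, 
with Solr doing the routing.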

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device type segregation, but I 
> could also see this leading to the ability to direct updates to tenant 
> specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time routed data 
> also segregated by device.
> Further design details to be added in comments.
>  




[JENKINS] Lucene-Solr-Tests-7.x - Build # 1229 - Unstable

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1229/

3 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:42480/ym_/qc

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:42480/ym_/qc
at 
__randomizedtesting.SeedInfo.seed([12BB9A11C4D0E54E:9AEFA5CB6A2C88B6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:338)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1073)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1047)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-02-01 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758826#comment-16758826
 ] 

Mark Miller commented on SOLR-13189:


{quote}i guess i was just hoping for a less complicated
{quote}
I gave the least complicated way:
{quote}More practically, the changed behavior mostly affects us injecting 
fails. That type of test should be isolated and have correct checking. For the 
rest of the tests, we probably don't expect fails, and so failing if we have 
them seems fine; something likely needs to be fixed or you are checking wrong.
{quote}
We should only inject fails on tests specifically designed for that, not 
generally across tests. That should have worked with the http recovery call, 
but it doesn't anymore.

Also, while that patch is a hack, it's also in the direction we need to move 
anyway. We need to change all the old style SolrCloud tests to work the way I 
changed that check-consistency method (it just needs to be done in a non-hacky 
way). Then we can move all those tests to the more modern SolrCloud test base 
class.

The main thing stopping that has been our use of those Jetty instance maps - we 
need to drop that stuff.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[GitHub] tflobbe opened a new pull request #557: Removed some unused variables from DistributedUpdateProcessor

2019-02-01 Thread GitBox
tflobbe opened a new pull request #557: Removed some unused variables from 
DistributedUpdateProcessor
URL: https://github.com/apache/lucene-solr/pull/557
 
 
   





Parallel Scoring

2019-02-01 Thread J. Delgado
Hi folks,

Assuming documents can be scored independently, what is the level of
document scoring parallelism (thread- or process-wise) that people have
experimented with on a single multi-core machine containing a single shard?
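
For reference, a minimal sketch of the inter-segment parallelism Lucene
already exposes: constructing IndexSearcher with an executor fans segment
searches out to a thread pool, so per-query scoring parallelism is bounded by
the segment count (the index path below is made up):

import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ParallelSearch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    try (DirectoryReader reader =
        DirectoryReader.open(FSDirectory.open(Paths.get("the-index")))) {
      IndexSearcher searcher = new IndexSearcher(reader, pool); // one task per segment
      TopDocs top = searcher.search(new MatchAllDocsQuery(), 10);
      System.out.println("hits: " + top.totalHits);
    } finally {
      pool.shutdown();
    }
  }
}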


[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 29 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/29/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1062)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:882)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:367)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:975)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:882)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:778)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:975)  at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 446 - Still Unstable

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/446/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1054)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:508)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:959)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:359)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:738)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:770)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-10.0.1) - Build # 119 - Failure!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/119/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 15437 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20190201_223411_04314676816697479619015.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f0e45520409, pid=18646, tid=18685
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (10.0.1+10) (build 
10.0.1+10)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (10.0.1+10, mixed mode, tiered, 
serial gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xc48409]  PhaseIdealLoop::split_up(Node*, Node*, 
Node*) [clone .part.40]+0x619
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/J1/hs_err_pid18646.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/J1/replay_pid18646.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 215 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-10.0.1/bin/java -XX:-UseCompressedOops 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=406ED4C5E780AAC2 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-8.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/J1
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dfile.encoding=UTF-8 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Updated] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-01 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan updated LUCENE-8662:

Summary: Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum  (was: Change 
TermsEnum.seekExact(BytesRef) to abstract)

> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using Lucene code.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
>   new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}






[jira] [Created] (SOLR-13212) when TestInjection.nonGracefullClose causes a TestShutdownFailError, test is guaranteed to fail due to leaked objects (causes failures in (Hdfs)RestartWhileUpdatingTest)

2019-02-01 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13212:
---

 Summary: when TestInjection.nonGracefullClose causes a 
TestShutdownFailError, test is guaranteed to fail due to leaked objects 
(causes failures in (Hdfs)RestartWhileUpdatingTest) 
 Key: SOLR-13212
 URL: https://issues.apache.org/jira/browse/SOLR-13212
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


 While investigating suite level test failures in {{RestartWhileUpdatingTest}} 
(and its subclass {{HdfsRestartWhileUpdatingTest}}) due to leaked objects, I 
realized that this happens anytime {{TestInjection.injectNonGracefullClose}} 
causes a {{TestShutdownFailError}} to be thrown.

The test will still be able to restart the node, and the test (method) will 
succeed, but the suite will fail due to the leaked objects.

NOTE: These are currently the only tests using 
{{TestInjection.nonGracefullClose}}.  








[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract

2019-02-01 Thread jefferyyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758746#comment-16758746
 ] 

jefferyyuan commented on LUCENE-8662:
-

[~simonw] [~dsmiley] addressed your comments in the PR and thanks : )

> Change TermsEnum.seekExact(BytesRef) to abstract
> 
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using Lucene code.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
>   new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}






[jira] [Commented] (SOLR-13190) Fuzzy search treated as server error instead of client error when terms are too complex

2019-02-01 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758713#comment-16758713
 ] 

Mike Drob commented on SOLR-13190:
--

[~mikemccand] - WDYT? You were the original person to add this exception in 
LUCENE-6046; not sure if you knew that it also affects fuzzy terms when 
planning for direct regex construction.

> Fuzzy search treated as server error instead of client error when terms are 
> too complex
> ---
>
> Key: SOLR-13190
> URL: https://issues.apache.org/jira/browse/SOLR-13190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: master (9.0)
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We've seen a fuzzy search end up breaking the automaton and getting reported 
> as a server error. This usage should be improved by
> 1) reporting it as a client error, because it's similar to something like a 
> too-many-boolean-clauses query in how an operator should deal with it
> 2) reporting what field is causing the error, since that currently must be 
> deduced from adjacent query logs and can be difficult if there are multiple 
> terms in the search
> This trigger was added to defend against adversarial regexes but somehow hits 
> fuzzy terms as well. I don't understand enough about the automaton mechanisms 
> to really know how to approach a fix there, but improving the operability is 
> a good first step (a sketch of the first improvement follows the stack trace 
> below).
> relevant stack trace:
> {noformat}
> org.apache.lucene.util.automaton.TooComplexToDeterminizeException: 
> Determinizing automaton with 13632 states and 21348 transitions would result 
> in more than 10000 states.
>   at 
> org.apache.lucene.util.automaton.Operations.determinize(Operations.java:746)
>   at 
> org.apache.lucene.util.automaton.RunAutomaton.<init>(RunAutomaton.java:69)
>   at 
> org.apache.lucene.util.automaton.ByteRunAutomaton.<init>(ByteRunAutomaton.java:32)
>   at 
> org.apache.lucene.util.automaton.CompiledAutomaton.<init>(CompiledAutomaton.java:247)
>   at 
> org.apache.lucene.util.automaton.CompiledAutomaton.<init>(CompiledAutomaton.java:133)
>   at 
> org.apache.lucene.search.FuzzyTermsEnum.<init>(FuzzyTermsEnum.java:143)
>   at org.apache.lucene.search.FuzzyQuery.getTermsEnum(FuzzyQuery.java:154)
>   at 
> org.apache.lucene.search.MultiTermQuery$RewriteMethod.getTermsEnum(MultiTermQuery.java:78)
>   at 
> org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
>   at 
> org.apache.lucene.search.TopTermsRewrite.rewrite(TopTermsRewrite.java:67)
>   at 
> org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:310)
>   at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:442)
>   at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:374)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
> {noformat}
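
A sketch of improvement (1) from the description: catch the automaton blow-up 
and rethrow it as a 400 naming the field. Where this wrapping would actually 
live in Solr is exactly what the ticket is about; the helper below is 
illustrative only:
{code:java}
import java.io.IOException;

import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.automaton.TooComplexToDeterminizeException;
import org.apache.solr.common.SolrException;

final class ClientErrorWrapping {
  /** Runs the search, converting automaton blow-ups into a client error (400). */
  static void searchOrRejectAsClientError(IndexSearcher searcher, Query query,
      Collector collector, String field) throws IOException {
    try {
      searcher.search(query, collector);
    } catch (TooComplexToDeterminizeException e) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Query on field '" + field + "' is too complex to determinize: "
              + e.getMessage(), e);
    }
  }
}
{code}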






[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-02-01 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758720#comment-16758720
 ] 

Hoss Man commented on SOLR-13189:
-

bq. markmiller: Here is a hack to that test.

yeah, fair enough -- sorry, I wasn't trying to be dismissive of your help ... 
I guess I was just hoping for a less complicated (from the perspective of test 
writers) solution that we could showcase as the gold standard of how to 
(generically) "wait for recovery" after (potentially) injecting failures ... 
but I'm not in a rush to re-add TestInjection back into 
TestStressCloudBlindAtomicUpdates -- it's a "nice to have" but not something I 
care about enough to get over my general feeling of ickiness at needing to 
call {{Thread.sleep}} in a loop that much : )
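
For what it's worth, that sleep-in-a-loop wait might look like the sketch 
below; a rough, editorial version (collection name and poll interval 
invented), not the gold-standard helper this issue is asking for:
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;

public class WaitForRecovery {
  /** Polls cluster state until every replica is ACTIVE and live, or times out. */
  static void waitForAllActive(CloudSolrClient client, String collection,
      long timeoutMs) throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      ClusterState state = client.getZkStateReader().getClusterState();
      boolean allActive = state.getCollection(collection).getReplicas().stream()
          .allMatch(r -> r.getState() == Replica.State.ACTIVE
              && state.liveNodesContain(r.getNodeName()));
      if (allActive) {
        return;
      }
      Thread.sleep(250); // the ickiness in question
    }
    throw new AssertionError("Timed out waiting for recovery of " + collection);
  }
}
{code}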


> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459880077
 
 
   Build https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23610/ 
includes this master commit and was BUILD SUCCESSFUL - Total time: 75 minutes 
44 seconds





[jira] [Commented] (SOLR-12291) OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on each node

2019-02-01 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758685#comment-16758685
 ] 

Mikhail Khludnev commented on SOLR-12291:
-

I extracted those twins {{asyncId, requestMap}} into a separate decorator, 
{{ShardRequestControler}}. I want to migrate to another base test class and 
cover all APIs with async checks. I want to keep the same straightforward 
regex assert. WDYT? I think a week or two is enough to push it in. 

> OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on 
> each node
> --
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.
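
An editorial illustration of the collision described above (not the handler's 
real code): keying the async request map by node name loses the second replica 
on a node, while keying by core name does not:
{code:java}
import java.util.HashMap;
import java.util.Map;

final class AsyncTrackingSketch {
  public static void main(String[] args) {
    Map<String, String> byNode = new HashMap<>();
    byNode.put("node1:8983_solr", "asyncId-replica1");
    byNode.put("node1:8983_solr", "asyncId-replica2"); // overwrites replica1's entry

    Map<String, String> byCore = new HashMap<>();
    byCore.put("coll_shard1_replica_n1", "asyncId-replica1");
    byCore.put("coll_shard1_replica_n2", "asyncId-replica2"); // both tracked
    System.out.println(byNode.size() + " vs " + byCore.size()); // prints "1 vs 2"
  }
}
{code}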






[jira] [Updated] (SOLR-12291) OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on each node

2019-02-01 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12291:

Attachment: SOLR-12291.patch

> OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on 
> each node
> --
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on each node.
> When multiple replicas of a slice are on the same node, we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name".
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2019-02-01 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758679#comment-16758679
 ] 

David Smiley commented on SOLR-12999:
-

The fix-version was 8.0, which was incorrect.  Just now I created an "8.x" 
version in JIRA's admin for both Lucene & Solr in accordance with our release 
process [https://wiki.apache.org/lucene-java/ReleaseTodo#Add_New_JIRA_Versions].
[~jimczi], it seems like you forgot this?

I am still concerned about there being no code review for a component as gnarly 
as IndexFetcher, but this isn't a blocker.

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.x
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce the disk capacity requirements of Solr, and 
> it would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since an open searcher would 
> prevent the removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.
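> A minimal sketch of the delete-first computation (assuming plain file-name 
> lists; this is not IndexFetcher's actual code):
> {code:java}
> import java.io.IOException;
> import java.nio.file.DirectoryStream;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.HashSet;
> import java.util.Set;
> 
> public class DeleteFirst {
>   // Delete local index files that the fetched remote file list will not reuse.
>   static void deleteUnneededFirst(Path indexDir, Collection<String> remoteNames)
>       throws IOException {
>     Set<String> remote = new HashSet<>(remoteNames);
>     try (DirectoryStream<Path> local = Files.newDirectoryStream(indexDir)) {
>       for (Path file : local) {
>         if (!remote.contains(file.getFileName().toString())) {
>           Files.delete(file); // frees the space before any new files are downloaded
>         }
>       }
>     }
>   }
> }
> {code}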



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12999) Index replication could delete segments first

2019-02-01 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12999:

Fix Version/s: 8.x

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.x
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce the disk capacity requirements of Solr, and 
> it would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since an open searcher would 
> prevent the removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12999) Index replication could delete segments first

2019-02-01 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12999:

Fix Version/s: (was: 8.0)

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce the disk capacity requirements of Solr, and 
> it would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since an open searcher would 
> prevent the removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2019-02-01 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758664#comment-16758664
 ] 

Noble Paul commented on SOLR-12999:
---

It's not in 8.0; it's in the branch_8x branch.

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce the disk capacity requirements of Solr, and 
> it would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since an open searcher would 
> prevent the removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2019-02-01 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758658#comment-16758658
 ] 

David Smiley commented on SOLR-12999:
-

Woah; why did this land in 8.0 and without a code review?  That's a no-no 
during a release.

> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce the disk capacity requirements of Solr, and 
> it would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since an open searcher would 
> prevent the removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-02-01 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758655#comment-16758655
 ] 

David Smiley commented on SOLR-13131:
-

Can you help me understand a use-case where a client system can't easily manage 
this itself (and thus it is best done here in SolrCloud internally)?  You 
loosely mentioned a use-case, but maybe I'm unimaginative; it doesn't seem like 
a big deal for a client to detect that it needs to create a collection first.  
With time-routed data there is interesting stuff that SolrCloud can do that's a 
pain for a client, but I'm not appreciating what that is for a simple 
"category" case.

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data-driven 
> creation of collections based on the values of a field, and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device-type segregation, but I 
> could also see this leading to the ability to direct updates to 
> tenant-specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time-routed data 
> also segregated by device.
> Further design details to be added in comments.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5211) updating parent as childless makes old children orphans

2019-02-01 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-5211.

Resolution: Fixed

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch, 
> SOLR-5211.patch, SOLR-5211.patch, SOLR-5211_docs.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If I have a parent with children in the index, I can send an update omitting 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field makes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?
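> A minimal illustration of the orphaning with SolrJ (collection name and field 
> values are assumptions for the example):
> {code:java}
> import org.apache.solr.client.solrj.SolrClient;
> import org.apache.solr.client.solrj.impl.HttpSolrClient;
> import org.apache.solr.common.SolrInputDocument;
> 
> public class OrphanDemo {
>   public static void main(String[] args) throws Exception {
>     SolrClient client =
>         new HttpSolrClient.Builder("http://localhost:8983/solr").build();
> 
>     SolrInputDocument parent = new SolrInputDocument();
>     parent.addField("id", "p1");
>     SolrInputDocument child = new SolrInputDocument();
>     child.addField("id", "c1");
>     parent.addChildDocument(child);
>     client.add("films", parent);    // block indexed: p1 together with child c1
> 
>     SolrInputDocument childless = new SolrInputDocument();
>     childless.addField("id", "p1");
>     client.add("films", childless); // re-adds p1 alone: c1 is left as an orphan
>     client.commit("films");
>     client.close();
>   }
> }
> {code}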



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758647#comment-16758647
 ] 

ASF subversion and git services commented on SOLR-5211:
---

Commit 46151b29be621af49a90f271f60d3c3549aecedb in lucene-solr's branch 
refs/heads/branch_8_0 from David Wayne Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=46151b2 ]

SOLR-5211: Document that delete-by-id (and updates) don't affect child/nested 
docs

(cherry picked from commit 372d68f7f68a5a9238fdfbddeae6488432795603)


> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch, 
> SOLR-5211.patch, SOLR-5211.patch, SOLR-5211_docs.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If I have a parent with children in the index, I can send an update omitting 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field makes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758645#comment-16758645
 ] 

ASF subversion and git services commented on SOLR-5211:
---

Commit 10930fd83a0fb8de9d308737f88cacd50951cc13 in lucene-solr's branch 
refs/heads/branch_8x from David Wayne Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=10930fd ]

SOLR-5211: Document that delete-by-id (and updates) don't affect child/nested 
docs

(cherry picked from commit 372d68f7f68a5a9238fdfbddeae6488432795603)


> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch, 
> SOLR-5211.patch, SOLR-5211.patch, SOLR-5211_docs.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If I have a parent with children in the index, I can send an update omitting 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field makes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758641#comment-16758641
 ] 

ASF subversion and git services commented on SOLR-5211:
---

Commit 372d68f7f68a5a9238fdfbddeae6488432795603 in lucene-solr's branch 
refs/heads/master from David Wayne Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=372d68f ]

SOLR-5211: Document that delete-by-id (and updates) don't affect child/nested 
docs


> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch, 
> SOLR-5211.patch, SOLR-5211.patch, SOLR-5211_docs.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If I have a parent with children in the index, I can send an update omitting 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field makes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758634#comment-16758634
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit 4a3ddc94d8880f7c1f76dedbe7d892b0542bb6ee in lucene-solr's branch 
refs/heads/master from Mark Robert Miller
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4a3ddc9 ]

SOLR-9515: Update to Hadoop 3 (Mark Miller, Kevin Risden)

Signed-off-by: Kevin Risden 


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk merged pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk merged pull request #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459850208
 
 
   Planning to commit to master (again) and make sure the world doesn't fall 
apart (again). I'll keep an eye on Jenkins builds. If successful, I'll look at 
8.x/8.0 this weekend.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758631#comment-16758631
 ] 

Kevin Risden commented on SOLR-9515:


Squashed commits back down to a single commit to make it easier to commit.
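
(For reference, a typical way to do this with an interactive rebase; the base 
branch name here is an assumption:)
{noformat}
git rebase -i master   # mark every commit after the first as 'squash'
{noformat}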

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Attachment: SOLR-9515.patch

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459849996
 
 
   Squashed commits back down to a single commit to make it easier to commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-02-01 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758625#comment-16758625
 ] 

Mark Miller commented on SOLR-13189:


{quote} * was bad in real life because if the replica was having problems, it 
might not recognize/respond to LIR appropriately{quote}
It was fine from that perspective when Tim added LIR - the original 
communication through ZK. The problem was that it was tied to each update 
before, so if you had lots of fails, you would make tons of http calls and tons 
of requests to recover (we throttle recoveries now to prevent this type of 
thing). So that either needed to be removed, or made more efficient by not 
linking every http call to a document fail. I think it's been removed or else 
it's broken.

bq. this is good in real life because it's less dependent on healthy 
network/HTTP requests

We already had ZK-based LIR on top of the HTTP request attempt. I think the 
rewritten, improved LIR removed (rather than made more efficient) or broke the 
request attempt.

bq. this is bad in tests because there is an inherent and hard-to-predict 
delay before the replica even realizes it needs to go into recovery

It depends on the test. If you don't want flaky tests, all of them should obey 
the rules of the system when checking things, as much as possible. More 
practically, the changed behavior mostly affects tests where we inject fails. 
That type of test should be isolated and have correct checking. For the rest of 
the tests, we probably don't expect fails, and so failing if we have them seems 
fine; something likely needs to be fixed or you are checking incorrectly.

bq. I haven't dug into your patch that deep, but so far it seems really 
hackish?

markmiller: Here is a hack for that test.

This is just to fix your test.

bq.  it makes the test wait (or timeout) until it is consistent

If you want to write a test like that, those are the rules, so that is what it 
does. Recovery can be re-triggered, and things can happen that will make 
reaching a consistent state take longer than you might think it should. So 
either your test is not creating the environment you think it is, or it is, and 
this is how you properly test it.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3489 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3489/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([A42C5845B74A0F98:93B7AC5B8F86D23C]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:120)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:303)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:321)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13181) NullPointerException in org.apache.solr.request.macro.MacroExpander

2019-02-01 Thread Cesar Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758597#comment-16758597
 ] 

Cesar Rodriguez commented on SOLR-13181:


Thanks [~cpoerschke], and sorry for the patch naming; I've renamed the file now.

I will try to write a test, but note that I found this bug using an automatic 
tool and I'm not sure I follow the code very well!

> NullPointerException in org.apache.solr.request.macro.MacroExpander
> ---
>
> Key: SOLR-13181
> URL: https://issues.apache.org/jira/browse/SOLR-13181
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: SOLR-13181.patch, home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?a=${${b}}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.StringIndexOutOfBoundsException: String index out of range: -4
>   at java.lang.String.substring(String.java:1967)
>   at 
> org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:150)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:101)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:65)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:51)
>   at 
> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:159)
>   at 
> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)
> [...]
> {noformat}
> Parameter [macro 
> expansion|http://yonik.com/solr-query-parameter-substitution/] seems to take 
> place in {{org.apache.solr.request.macro.MacroExpander._expand(String val)}}. 
> From reading the code of the function it seems that macros are not expanded 
> inside curly brackets {{${...}}}, and so the {{${b}}} inside
> {noformat}
> ${${b}}
> {noformat}
> should not be expanded. But the function seems to fail to detect this 
> specific case and gracefully refuse to expand it.
> A possible fix could be updating the {{idx}} variable when the {{StrParser}} 
> detects that no valid identifier can be found inside the brackets. See 
> attached file 
> {{0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch}}.
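> A rough sketch of that idea (the variable and helper names here are 
> hypothetical illustrations of the approach, not the attached patch itself):
> {code:java}
> // Inside the scan loop of _expand(), after matching the "${" marker at idx:
> int contentStart = idx + macroStart.length();
> int rbrace = val.indexOf('}', contentStart);
> if (rbrace < 0 || !isValidIdentifier(val, contentStart, rbrace)) { // hypothetical helper
>   // No expandable macro here: emit the "${" verbatim and advance idx past it,
>   // so a later substring(start, idx - ...) can never see a negative index.
>   sb.append(val, idx, contentStart);
>   idx = contentStart;
>   continue;
> }
> {code}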
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13181) NullPointerException in org.apache.solr.request.macro.MacroExpander

2019-02-01 Thread Cesar Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cesar Rodriguez updated SOLR-13181:
---
Attachment: (was: 
0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch)

> NullPointerException in org.apache.solr.request.macro.MacroExpander
> ---
>
> Key: SOLR-13181
> URL: https://issues.apache.org/jira/browse/SOLR-13181
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: SOLR-13181.patch, home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?a=${${b}}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.StringIndexOutOfBoundsException: String index out of range: -4
>   at java.lang.String.substring(String.java:1967)
>   at 
> org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:150)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:101)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:65)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:51)
>   at 
> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:159)
>   at 
> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)
> [...]
> {noformat}
> Parameter [macro 
> expansion|http://yonik.com/solr-query-parameter-substitution/] seems to take 
> place in {{org.apache.solr.request.macro.MacroExpander._expand(String val)}}. 
> From reading the code of the function it seems that macros are not expanded 
> inside curly brackets {{${...}}}, and so the {{${b}}} inside
> {noformat}
> ${${b}}
> {noformat}
> should not be expanded. But the function seems to fail to detect this 
> specific case and gracefully refuse to expand it.
> A possible fix could be updating the {{idx}} variable when the {{StrParser}} 
> detects that no valid identifier can be found inside the brackets. See 
> attached file 
> {{0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch}}.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13181) NullPointerException in org.apache.solr.request.macro.MacroExpander

2019-02-01 Thread Cesar Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cesar Rodriguez updated SOLR-13181:
---
Attachment: SOLR-13181.patch

> NullPointerException in org.apache.solr.request.macro.MacroExpander
> ---
>
> Key: SOLR-13181
> URL: https://issues.apache.org/jira/browse/SOLR-13181
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: SOLR-13181.patch, home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?a=${${b}}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.StringIndexOutOfBoundsException: String index out of range: -4
>   at java.lang.String.substring(String.java:1967)
>   at 
> org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:150)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:101)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:65)
>   at 
> org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:51)
>   at 
> org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:159)
>   at 
> org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)
> [...]
> {noformat}
> Parameter [macro 
> expansion|http://yonik.com/solr-query-parameter-substitution/] seems to take 
> place in {{org.apache.solr.request.macro.MacroExpander._expand(String val)}}. 
> From reading the code of the function it seems that macros are not expanded 
> inside curly brackets {{${...}}}, and so the {{${b}}} inside
> {noformat}
> ${${b}}
> {noformat}
> should not be expanded. But the function seems to fail to detect this 
> specific case and gracefully refuse to expand it.
> A possible fix could be updating the {{idx}} variable when the {{StrParser}} 
> detects that no valid identifier can be found inside the brackets. See 
> attached file 
> {{0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch}}.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13180) ClassCastExceptions in o.a.s.s.facet.FacetModule for valid JSON inputs that are not objects

2019-02-01 Thread Cesar Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758584#comment-16758584
 ] 

Cesar Rodriguez commented on SOLR-13180:


Thanks [~janhoy]; following this and your email we stopped including the 
{{home.zip}} in subsequent tickets. Also, please note that the 'Environment' 
field of all of the bug reports we filed contains the necessary instructions to 
rebuild the films collection ;)

> ClassCastExceptions in o.a.s.s.facet.FacetModule for valid JSON inputs that 
> are not objects
> ---
>
> Key: SOLR-13180
> URL: https://issues.apache.org/jira/browse/SOLR-13180
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5, master (9.0)
> Environment: Running on Unix, using a git checkout close to master.
> h2. Steps to reproduce
>  * Build commit ea2c8ba of Solr as described in the section below.
>  * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
>  * Request the URL above.
> h2. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h2. Building the collection
> We followed Exercise 2 from the quick start tutorial 
> ([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - 
> for reference, I have attached a copy of the database.
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: SOLR-13180.patch, home.zip
>
>
> Requesting the following URL gives a 500 error due to a ClassCastException in 
> o.a.s.s.f.FacetModule: [http://localhost:8983/solr/films/select?json=0]
> The error response is caused by an uncaught ClassCastException, with the 
> stacktrace shown here:
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.Map
> at org.apache.solr.search.facet.FacetModule.prepare(FacetModule.java:78)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
>  
> The cause of this bug is similar to #13178: line 78 in FacetModule reads
> {{jsonFacet = (Map) json.get("facet")}}
> and assumes that the JSON value contained in "facet" is a JSON object, while 
> we only guarantee that it is a JSON value.
> Line 92 seems to contain another situation like this, but I do not have a 
> test case handy for this specific case.
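> A defensive variant (a sketch only, not the committed fix):
> {code:java}
> Object facetSpec = json.get("facet");
> if (facetSpec != null && !(facetSpec instanceof Map)) {
>   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>       "Expected a JSON object for 'facet' but got: "
>           + facetSpec.getClass().getSimpleName());
> }
> jsonFacet = (Map<String, Object>) facetSpec;
> {code}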
> This bug was found using [Diffblue Microservices 
> Testing|http://www.diffblue.com/labs]. Find more information on this [test 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-02-01 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758564#comment-16758564
 ] 

Hoss Man commented on SOLR-13189:
-

{quote}In older versions these tests might have worked because before the 
request returns to the client, the leader would have called to the replica and 
told it to go into recovery. I believe we no longer make these calls (for good 
reason, http calls tied to updates was no good). So a replica will only enter 
recovery when it realizes it should via ZooKeeper communication.
{quote}
Ok ... so to re-iterate and make sure I'm following everything:
 * OLD LIR:
 ** LIR was pushed to the replica via HTTP immediately after the replica 
returned a non-200 status
 ** was bad in real life because if the replica was having problems, it might 
not recognize/respond to LIR appropriately
 ** was good in tests because it meant that immediately after doing an index 
update, you could {{waitForRecoveriesToFinish}} and the replica would already 
be in recovery
 * CURRENT LIR:
 ** LIR status is managed via flags in ZK (this is the "terms" concept, 
correct?)
 ** replicas monitor ZK to see if/when they need to go into LIR
 ** this is good in real life because it's less dependent on healthy 
network/HTTP requests
 ** this is bad in tests because there is an inherent and hard-to-predict delay 
before the replica even realizes it needs to go into recovery
 *** ie: {{waitForRecoveriesToFinish}} now seems completely useless?

does that cover it?
{quote}The system will be eventually consistent, but there is no promise it 
will be consistent even when all replicas are active. You must be willing to 
wait a short time for consistency and this test does not.
{quote}
Right ... I understand that ... the question at the heart of this jira is what 
a test can/should do to know "the system should now be consistent enough for me 
to make the assertions I want to make" (and how do we make that as easy as 
possible for tests to do).

I haven't dug into your patch that deep, but so far it seems really hackish? 
... sleep-looping until all the replicas are live and the first 1000 docs from 
a {{*:*}} query to each match each other?

If nothing else this creates a (slow) chicken-and-egg diagnosis problem in 
tests – did {{waitForConsistency}} eventually time out because recovery is 
broken, or because the code I'm writing a test for (example: distributed 
atomic updates) is broken?

I'm not saying the {{checkConsistency}} logic is bad – if anything it seems 
like something that might be good to have in the teardown of every test – but 
I'm concerned that just trying to do a "wait for" on it doesn't really get to 
the heart of the problem of tests being able to know when the cluster 
*_should_* be consistent – it makes the test wait (or timeout) until it *_is_* 
consistent.

If recovery is driven by these flags in ZK, then why couldn't we re-write 
{{waitForRecoveriesToFinish}} to check those flags first (in addition to the 
{{Replica.State}}) to know if recovery is pending (or in progress)?
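
A rough sketch of what that could look like (both helper methods here are 
hypothetical; the real per-replica terms data lives under each shard's terms 
znode):

{code:java}
// Poll until no replica is flagged as behind on its shard term AND every
// replica reports ACTIVE, or give up at the timeout.
TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
while (!timeout.hasTimedOut()) {
  boolean pending = anyReplicaBehindOnTerms(zkClient, collection, shard) // hypothetical
      || anyReplicaNotActive(clusterState, collection);                  // hypothetical
  if (!pending) return;
  Thread.sleep(250);
}
fail("recovery still pending/in-progress after timeout");
{code}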

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13202) Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()

2019-02-01 Thread Cesar Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cesar Rodriguez updated SOLR-13202:
---
Description: 
Requesting any of the following URLs causes Solr to return an HTTP 500 error 
response:

{noformat}
http://localhost:8983/solr/films/select?fq={!join%20from=b%20to=a}
http://localhost:8983/solr/films/select?fq={!join%20to=a}
http://localhost:8983/solr/films/select?fq={!join}
{noformat}

The error response seems to be caused by the following uncaught exception:
 {noformat}
java.lang.NullPointerException
 at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578)
 at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:52)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
 at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
 at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
 at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
[...]
{noformat}

The problem seems to be related to the method {{hashCode}} in the class 
{{org.apache.solr.search.JoinQuery}}:

{code:java}
  @Override
  public int hashCode() {
int h = classHash();
h = h * 31 + fromField.hashCode();
h = h * 31 + toField.hashCode();
h = h * 31 + q.hashCode();
h = h * 31 + Objects.hashCode(fromIndex);
h = h * 31 + (int) fromCoreOpenTime;
return h;
  }
{code}

The URLs provided above selectively leave uninitialized the fields 
{{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields 
are accessed by this method.
 
We found this issue and ~70 more like this using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
information on this [fuzz testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].

  was:
Requesting any of the following URLs causes Solr to return an HTTP 500 error 
response:

{noformat}
http://localhost:8983/solr/films/select?fq=\{!join%20from=b%20to=a}
http://localhost:8983/solr/films/select?fq=\{!join%20to=a}
http://localhost:8983/solr/films/select?fq=\{!join}
{noformat}

The error response seems to be caused by the following uncaught exception:
 {noformat}
java.lang.NullPointerException
 at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578)
 at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:52)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
 at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
 at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
 at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
[...]
{noformat}

The problem seems to be related to the method {{hashCode}} in the class 
{{org.apache.solr.search.JoinQuery}}:

{code:java}
  @Override
  public int hashCode() {
int h = classHash();
h = h * 31 + fromField.hashCode();
h = h * 31 + toField.hashCode();
h = h * 31 + q.hashCode();
h = h * 31 + Objects.hashCode(fromIndex);
h = h * 31 + (int) fromCoreOpenTime;
return h;
  }
{code}

The URLs provided above selectively leave uninitialized the fields 
{{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields 
are accessed by this method.
 
We found this issue and ~70 more like this using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
information on this [fuzz testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].


> Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()
> --
>
> Key: SOLR-13202
> URL: https://issues.apache.org/jira/browse/SOLR-13202
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11) - Build # 118 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/118/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery2.test

Error Message:
 Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/16)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"collection1_shard1_replica_n1",   
"base_url":"http://127.0.0.1:39347/solr;,   
"node_name":"127.0.0.1:39347_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n2",   
"base_url":"http://127.0.0.1:37677/solr;,   
"node_name":"127.0.0.1:37677_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"2",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"} Live Nodes: [127.0.0.1:37677_solr, 127.0.0.1:39347_solr] 
Last available state: 
DocCollection(collection1//collections/collection1/state.json/16)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"collection1_shard1_replica_n1",   
"base_url":"http://127.0.0.1:39347/solr;,   
"node_name":"127.0.0.1:39347_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n2",   
"base_url":"http://127.0.0.1:37677/solr;,   
"node_name":"127.0.0.1:37677_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"2",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: 
Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/16)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"http://127.0.0.1:39347/solr;,
  "node_name":"127.0.0.1:39347_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n2",
  "base_url":"http://127.0.0.1:37677/solr;,
  "node_name":"127.0.0.1:37677_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:37677_solr, 127.0.0.1:39347_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/16)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"http://127.0.0.1:39347/solr;,
  "node_name":"127.0.0.1:39347_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n2",
  "base_url":"http://127.0.0.1:37677/solr;,
  "node_name":"127.0.0.1:37677_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([FE787F786AA6B562:762C40A2C45AD89A]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
at 
org.apache.solr.cloud.TestCloudRecovery2.test(TestCloudRecovery2.java:106)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 

[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459801543
 
 
   Tests have been running across a few machines I have access to for the last 
few hours. Probably ~10 runs, about a 50/50 JDK8/JDK11 mix. So far no HDFS test 
failures with the latest commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-02-01 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758489#comment-16758489
 ] 

Adrien Grand commented on LUCENE-8675:
--

If some segments are getting large enough that intra-segment parallelism 
becomes appealing, then maybe an easier and more efficient way to increase 
parallelism is to instead reduce the maximum segment size so that inter-segment 
parallelism has more potential for parallelizing query execution.

> Divide Segment Search Amongst Multiple Threads
> --
>
> Key: LUCENE-8675
> URL: https://issues.apache.org/jira/browse/LUCENE-8675
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Atri Sharma
>Priority: Major
>
> Segment search is a single threaded operation today, which can be a 
> bottleneck for large analytical queries which index a lot of data and have 
> complex queries which touch multiple segments (imagine a composite query with 
> range query and filters on top). This ticket is for discussing the idea of 
> splitting a single segment into multiple threads based on mutually exclusive 
> document ID ranges.
> This will be a two phase effort, the first phase targeting queries returning 
> all matching documents (collectors not terminating early). The second phase 
> patch will introduce staged execution and will build on top of this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-02-01 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758421#comment-16758421
 ] 

Gus Heck commented on SOLR-13131:
-

While working on this it has occurred to me that case-insensitive categories 
might be desirable. Such a feature would also imply a need to define the locale 
for such comparisons. Not yet sure if that should be a sub-task or a follow-on 
enhancement.
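
As a tiny illustration of why the locale matters (the classic Turkish-i case):

{code:java}
import java.util.Locale;

public class CaseFoldDemo {
  public static void main(String[] args) {
    // Uppercase 'I' lowercases to a dotless 'ı' under a Turkish locale, so
    // two "equal ignoring case" category values could map to different
    // collections unless the folding locale is pinned down.
    System.out.println("TITLE".toLowerCase(Locale.ROOT));                    // title
    System.out.println("TITLE".toLowerCase(Locale.forLanguageTag("tr-TR"))); // tıtle
  }
}
{code}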

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is IoT device-type segregation, but I 
> could also see this leading to the ability to direct updates to tenant-
> specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias, which would allow organizing time-routed data also 
> segregated by device.
> Further design details to be added in comments.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 979 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/979/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart

Error Message:
null

Stack Trace:
java.lang.NumberFormatException: null
at 
__randomizedtesting.SeedInfo.seed([9D6BA632E479AE22:459C62D64FA26C7E]:0)
at java.base/java.lang.Integer.parseInt(Integer.java:614)
at java.base/java.lang.Integer.parseInt(Integer.java:770)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:700)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at http://127.0.0.1:57598/solr/stale_state_test_col: 
ClusterState says we are the leader 

[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-02-01 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758457#comment-16758457
 ] 

Mike Sokolov commented on LUCENE-8635:
--

Yes, [~akjain] that approach sounds good to me; we should hold off on the 
FST-reversal. It didn't help here; the random-access approach worked just as 
well.  Also, maybe opening a pull request will help, if only to distinguish it 
from all the patches that are cluttering this now (sorry!)

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-02-01 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758451#comment-16758451
 ] 

Michael McCandless commented on LUCENE-8675:


I think it'd be interesting to explore intra-segment parallelism, but I agree 
w/ [~jpountz] that there are challenges :)

If you pass an {{ExecutorService}} to {{IndexSearcher}} today you can already 
use multiple threads to answer one query, but the concurrency is tied to your 
segment geometry and annoyingly a supposedly "optimized" index gets no 
concurrency ;)

But if you do have many segments, this can give a nice reduction to query 
latencies when QPS is well below the searcher's red-line capacity (probably at 
the expense of some hopefully small loss of red-line throughput because of the 
added overhead of thread scheduling).  For certain use cases (large index, low 
typical query rate) this is a powerful approach.

It's true that one can also divide an index into more shards and run each shard 
concurrently, but then you are also multiplying the fixed per-query setup cost, 
which in some cases can be relatively significant.
{quote}Parallelizing based on ranges of doc IDs is problematic for some 
queries, for instance the cost of evaluating a range query over an entire 
segment or only about a specific range of doc IDs is exactly the same given 
that it uses data-structures that are organized by value rather than by doc ID.
{quote}
Yeah that's a real problem – these queries traverse the BKD tree per-segment 
while creating the scorer, which is/can be the costly part, and then produce a 
bit set which is very fast to iterate over.  This phase is not separately 
visible to the caller, unlike e.g. rewrite that MultiTermQueries use to 
translate into simpler queries, so it'd be tricky to build intra-segment 
concurrency on top ...
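
For reference, the existing inter-segment concurrency is just a constructor 
argument; a minimal sketch (index path and pool sizing are illustrative):

{code:java}
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ConcurrentSearchDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4); // illustrative sizing
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
         DirectoryReader reader = DirectoryReader.open(dir)) {
      // The executor parallelizes across leaves (segments), which is why a
      // force-merged single-segment index still searches single-threaded.
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      TopDocs top = searcher.search(new MatchAllDocsQuery(), 10);
      System.out.println("hits=" + top.totalHits);
    } finally {
      pool.shutdown();
    }
  }
}
{code}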

> Divide Segment Search Amongst Multiple Threads
> --
>
> Key: LUCENE-8675
> URL: https://issues.apache.org/jira/browse/LUCENE-8675
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Atri Sharma
>Priority: Major
>
> Segment search is a single threaded operation today, which can be a 
> bottleneck for large analytical queries which index a lot of data and have 
> complex queries which touch multiple segments (imagine a composite query with 
> range query and filters on top). This ticket is for discussing the idea of 
> splitting a single segment into multiple threads based on mutually exclusive 
> document ID ranges.
> This will be a two phase effort, the first phase targeting queries returning 
> all matching documents (collectors not terminating early). The second phase 
> patch will introduce staged execution and will build on top of this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-01 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758448#comment-16758448
 ] 

Nicholas Knize commented on LUCENE-8679:


+1 Thx [~ivera]  Ran some further testing and it LGTM.

> Test failure in LatLonShape
> ---
>
> Key: LUCENE-8679
> URL: https://issues.apache.org/jira/browse/LUCENE-8679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8679.patch, LUCENE-8679.patch
>
>
> Error and reproducible seed:
>  
> {code:java}
> [junit4] Suite: org.apache.lucene.document.TestLatLonShape
>    [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
> -Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
> -Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>    [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
>    [junit4]    >        at java.lang.Thread.run(Thread.java:748)
>    [junit4]   2> NOTE: leaving temporary files on disk at: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
> docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
>  locale=no-NO, timezone=America/North_Dakota/Center
>    [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
> 1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
>    [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
> TestLatLonLineShapeQueries, TestLatLonShape]
>    [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
> skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Attachment: SOLR-9515.patch

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13204) ArrayIndexOutOfBoundsException in org/apache/solr/search/grouping/endresulttransformer/MainEndResultTransformer.java[36]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13204:
-
Labels: diffblue newdev  (was: diffblue)

> ArrayIndexOutOfBoundsException in 
> org/apache/solr/search/grouping/endresulttransformer/MainEndResultTransformer.java[36]
> 
>
> Key: SOLR-13204
> URL: https://issues.apache.org/jira/browse/SOLR-13204
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection and reproducing the bug
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html].
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> curl -v "URL_BUG"
> {noformat}
> Please check the issue description below to find the "URL_BUG" that will 
> allow you to reproduce the issue reported.
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> solr/films/select?group=true=true=true
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-18) [   x:films] o.a.s.s.HttpSolrCall 
> null:java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.solr.search.grouping.endresulttransformer.MainEndResultTransformer.transform(MainEndResultTransformer.java:36)
>   at 
> org.apache.solr.handler.component.QueryComponent.groupedFinishStage(QueryComponent.java:638)
>   at 
> org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:601)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:432)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> The first element of an empty array of strings is accessed; the array is 
> stored in the member 
> 'org.apache.solr.search.grouping.GroupingSpecification.fields'. 
> There is an attempt to put some strings into the array at 
> org/apache/solr/handler/component/QueryComponent.java[283]; however, the 
> string "group.field" is not present in the params of the processed 
> org.apache.solr.request.SolrQueryRequest instance.
> See the 'Environment' section above for the steps to install Solr and build 
> the films collection.
> We found this issue and ~70 more like this using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on this [fuzz testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].
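
A defensive guard along these lines (sketch only; {{groupingSpec}} is a 
stand-in local and the {{getFields()}} accessor for the field named in the 
description is assumed) would turn the 500 into a clean request error:

{code:java}
// Sketch: fail fast when grouping is requested without any group.field,
// instead of letting fields[0] throw an ArrayIndexOutOfBoundsException.
String[] groupFields = groupingSpec.getFields(); // accessor name assumed
if (groupFields == null || groupFields.length == 0) {
  throw new org.apache.solr.common.SolrException(
      org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST,
      "group=true requires at least one group.field or group.func");
}
{code}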



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13182) NullPointerException due to an invariant violation in org/apache/lucene/search/BooleanClause.java[60]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13182:
-
Labels: diffblue newdev  (was: )

> NullPointerException due to an invariant violation in 
> org/apache/lucene/search/BooleanClause.java[60]
> -
>
> Key: SOLR-13182
> URL: https://issues.apache.org/jira/browse/SOLR-13182
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q={!child%20q={}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-14) [ x:films] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException: Query must not be null
>  at java.util.Objects.requireNonNull(Objects.java:228)
>  at org.apache.lucene.search.BooleanClause.(BooleanClause.java:60)
>  at org.apache.lucene.search.BooleanQuery$Builder.add(BooleanQuery.java:127)
>  at 
> org.apache.solr.search.join.BlockJoinChildQParser.noClausesQuery(BlockJoinChildQParser.java:50)
>  at org.apache.solr.search.join.FiltersQParser.parse(FiltersQParser.java:60)
>  at org.apache.solr.search.QParser.getQuery(QParser.java:173)
>  at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>  at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
> [...]
> {noformat}
> In org/apache/solr/search/join/BlockJoinChildQParser.java[47] the query 
> variable 'parents' is computed; it receives the value null from the call to
> 'parseParentFilter()'. The null value is then passed to the
> 'org.apache.lucene.search.BooleanQuery.Builder.add' method at line 50. That
> method calls the constructor, where 'Objects.requireNonNull' fails
> (the exception is thrown).
> The call to 'parseParentFilter()' evaluates to null, because:
>  #  In org/apache/solr/search/join/BlockJoinParentQParser.java[59] null is
>     set to the string 'filter' (because "which" is not in the 'localParams' 
>     map).
>  #  The parser 'parentParser' obtained in the next line has its member 'qstr' 
>     set to null, because the 'filter' passed to 'subQuery' is passed as the 
>     first argument to 'org.apache.solr.search.QParserPlugin.createParser'.
>  #  The subsequent call to 'org.apache.solr.search.QParser.getQuery' on the
>     'parentParser' at 
> 

[jira] [Updated] (SOLR-13197) NullPointerException in org/apache/solr/handler/component/StatsField.java[251]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13197:
-
Labels: diffblue newdev  (was: )

> NullPointerException in org/apache/solr/handler/component/StatsField.java[251]
> --
>
> Key: SOLR-13197
> URL: https://issues.apache.org/jira/browse/SOLR-13197
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?stats=true={!cardinalit}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-17) [   x:films] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.StatsField.(StatsField.java:251)
>   at 
> org.apache.solr.handler.component.StatsInfo.(StatsComponent.java:194)
>   at 
> org.apache.solr.handler.component.StatsComponent.prepare(StatsComponent.java:47)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> The method 'createParser' is called on the local variable 'qplug', which is 
> set to null on the previous line (i.e. 250). The value null is assigned to 
> 'qplug' because the string "cardinalit" cannot be found in the field 
> 'registry' of the class org.apache.solr.core.PluginBag (at line 
> org/apache/solr/core/PluginBag.java[167]).
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on this [fuzz testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].
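
A guard along these lines (sketch only; {{req}} and {{parserName}} are 
stand-in locals around the lookup at the cited line) would surface the 
misspelled parser name as a user error instead of an NPE:

{code:java}
// Sketch: the PluginBag registry lookup returns null for the unknown name
// "cardinalit", so report a 400 instead of dereferencing null on line 251.
org.apache.solr.search.QParserPlugin qplug =
    req.getCore().getQueryPlugin(parserName);
if (qplug == null) {
  throw new org.apache.solr.common.SolrException(
      org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST,
      "Unknown query parser '" + parserName + "' in stats.field");
}
{code}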



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-02-01 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758431#comment-16758431
 ] 

Michael McCandless commented on LUCENE-8635:


{quote}Better would be an attribute of {{FieldInfo}}, where we have 
{{put/getAttribute}}. Then {{FieldReader }}can inspect the {{FieldInfo}} and 
pass the appropriate {{On/OffHeapStore}} when creating its {{FST}}. What do you 
think?
{quote}
Hmm that's also an interesting approach to get per-field control.  One can set 
these attributes in a custom {{FieldType}} when indexing documents, or maybe in 
a custom codec at write time (just subclassing e.g. {{Lucene80Codec}}), or at 
read time using a real (named) custom codec.  So we would pick a specific 
string ({{FST_OFF_HEAP}} or something) and define that as a string constant 
which users could then use for setting the attribute?

So ... maybe we have a default behavior w/ Adrien's cool idea, but then also 
allow the attribute to give per-field control?  We should probably also by 
default (if the field attribute is not present) not do off-heap when the 
directory is not MMapDirectory?  We haven't tested the other directory impls 
but I suspect they'd be quite a bit slower with off-heap FST?
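
Rough sketch of the attribute plumbing ({{FST_OFF_HEAP}} is just the constant 
name proposed above, not an existing API contract):

{code:java}
import org.apache.lucene.index.FieldInfo;

// Sketch: FieldInfo already carries free-form string attributes that codecs
// can read back at index-open time; FieldReader could branch on this one.
final class FstOffHeap {
  static final String FST_OFF_HEAP = "FST_OFF_HEAP"; // proposed constant name

  static void mark(FieldInfo fi) {
    fi.putAttribute(FST_OFF_HEAP, "true");
  }

  static boolean isOffHeap(FieldInfo fi) {
    return "true".equals(fi.getAttribute(FST_OFF_HEAP));
  }
}
{code}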

 
{quote}Given that reversing the index during write to make it forward reading 
didn't help the performance (in addition to it not being backward compatible), 
is the consensus to add exception for PK and directories other than mmap for 
offheap FST in [^ra.patch]?
{quote}
Yeah +1 to keep the two changes separated.

 

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758429#comment-16758429
 ] 

Kevin Risden commented on SOLR-9515:


Updated patch with the BlockPoolSlice workaround. The first run of tests 
locally with JDK8 and JDK11 looks promising. I have a few machines running the 
tests to see if anything shakes out.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253097101
 
 

 ##
 File path: lucene/tools/src/groovy/check-source-patterns.groovy
 ##
 @@ -149,6 +149,7 @@ ant.fileScanner{
 exclude(name: 'lucene/benchmark/temp/**')
 exclude(name: '**/CheckLoggingConfiguration.java')
 exclude(name: 'lucene/tools/src/groovy/check-source-patterns.groovy') // 
ourselves :-)
+exclude(name: 'solr/core/src/test/org/apache/hadoop/**')
 
 Review comment:
   Also now catches the BlockPoolSlice code


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253096881
 
 

 ##
 File path: solr/core/build.xml
 ##
 @@ -25,6 +25,7 @@
 
   

[jira] [Updated] (SOLR-13188) NullPointerException in org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13188:
-
Labels: diffblue newdev  (was: )

> NullPointerException in 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
> --
>
> Key: SOLR-13188
> URL: https://issues.apache.org/jira/browse/SOLR-13188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q={!parent%20fq={!collapse%20field=id}}
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-21) [   x:films] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
>   at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
>   at 
> org.apache.lucene.search.join.QueryBitSetProducer.getBitSet(QueryBitSetProducer.java:73)
>   at 
> org.apache.solr.search.join.BlockJoinParentQParser$BitDocIdSetFilterWrapper.getDocIdSet(BlockJoinParentQParser.java:135)
>   at 
> org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.scorer(SolrConstantScoreQuery.java:99)
>   at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
>   at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> In org/apache/lucene/search/join/QueryBitSetProducer.java[73] the method
> 'org.apache.lucene.search.IndexSearcher.rewrite' is called with the null 
> value stored in the member 'query'. Inside the called method there is method 

[jira] [Updated] (SOLR-13194) NullPointerException in org/apache/solr/handler/component/ExpandComponent.java[240]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13194:
-
Labels: diffblue newdev  (was: )

> NullPointerException in 
> org/apache/solr/handler/component/ExpandComponent.java[240]
> ---
>
> Key: SOLR-13194
> URL: https://issues.apache.org/jira/browse/SOLR-13194
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?expand=true={!collapse%20field=id}=true=genre
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-38) [   x:films] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:240)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
> {noformat}
> The method 'size' is called on the variable 'docList', which is null. The 
> null value comes from the parameter 'rb' (an instance of the class 
> 'org.apache.solr.handler.component.ResponseBuilder'), where 
> 'rb.results.docList' is assigned to the mentioned local variable 'docList'.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on this [fuzz testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13179) NullPointerException in org/apache/lucene/queries/function/FunctionScoreQuery.java [109]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13179:
-
Labels: diffblue newdev  (was: )

> NullPointerException in 
> org/apache/lucene/queries/function/FunctionScoreQuery.java [109]
> 
>
> Key: SOLR-13179
> URL: https://issues.apache.org/jira/browse/SOLR-13179
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?facet.query=={!frange%20l=10%20u=100}boost({!v=+},3)
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-23) [   x:films] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
>   at 
> org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
>   at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
>   at 
> org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
>   at 
> org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
>   at 
> org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.(SolrConstantScoreQuery.java:94)
>   at 
> org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
>   at 
> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
>   at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
>   at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> 1. In org/apache/solr/search/ValueSourceParser.java[330] a query variable 'q' 
> is assigned 
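> As an illustration of the crash point only (simplified from the rewrite 
> pattern implied by the stack trace above, not the actual Lucene source):
> {code:java}
> // FunctionScoreQuery.rewrite delegates to the wrapped query first:
> Query rewritten = in.rewrite(reader); // NPE when 'in' is null, e.g. when the
>                                       // nested sub-query parses to nothing
> {code}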

[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253096125
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -239,4 +253,24 @@ public static String getURI(MiniDFSCluster dfsCluster) {
     }
   }
 
+  /**
+   * By default in JDK9+, the ForkJoinWorkerThreadFactory does not give SecurityManager permissions
+   * to threads that are created. This works around that with a custom thread factory.
+   * See SOLR-9515 and HDFS-14251
+   * Used in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice
+   */
+  public static class HDFSForkJoinThreadFactory implements ForkJoinPool.ForkJoinWorkerThreadFactory {
+    @Override
+    public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
+      ForkJoinWorkerThread worker = new SecurityManagerWorkerThread(pool);
+      worker.setName("solr-hdfs-threadpool-" + worker.getPoolIndex());
 
 Review comment:
   Name used in BadHdfsThreadFilter since this pool is not shut down correctly 
by BlockPoolSlice and that is not easily worked around. See SOLR-9515 and HDFS-14251
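   For reference, the JVM-wide common pool picks a factory up via the standard 
JDK system property; whether the tests should wire it exactly like this is an 
assumption on my part:
   ```java
   // Must be set before the common ForkJoinPool is first used in the JVM.
   System.setProperty("java.util.concurrent.ForkJoinPool.common.threadFactory",
       "org.apache.solr.cloud.hdfs.HdfsTestUtil$HDFSForkJoinThreadFactory");
   ```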


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13201) NullPointerException in ConcurrentHashMap caused by passing null to get mmethod in org/apache/solr/schema/IndexSchema.java[1201]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13201:
-
Labels: diffblue newdev  (was: )

> NullPointerException in ConcurrentHashMap caused by passing null to get 
> mmethod in org/apache/solr/schema/IndexSchema.java[1201]
> 
>
> Key: SOLR-13201
> URL: https://issues.apache.org/jira/browse/SOLR-13201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q=initial_release_date:[*%20TO%20NOW-18YEAR]=php=2
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-19) [   x:films] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.schema.IndexSchema.getFieldOrNull(IndexSchema.java:1201)
>   at org.apache.solr.schema.IndexSchema.getField(IndexSchema.java:1225)
>   at 
> org.apache.solr.search.facet.FacetField.createFacetProcessor(FacetField.java:118)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:397)
>   at 
> org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
>   at 
> org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
>   at 
> org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:401)
>   at 
> org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> The method 'get' is called on the member 
> 'org.apache.solr.schema.IndexSchema.dynamicFieldCache' (which is a 
> 'ConcurrentHashMap') with null as an argument; that leads to a crash inside 
> the 'get' method. The null value (passed to the 'get' method) comes from the 
> member 'field' of the 'org.apache.solr.search.facet.FacetField' instance at 
> org/apache/solr/search/facet/FacetField.java[118].
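> A null check before the lookup would turn the crash into an ordinary 
> "undefined field" path (a minimal sketch, not a committed fix):
> {code:java}
> public SchemaField getFieldOrNull(String fieldName) {
>   if (fieldName == null) {
>     return null; // ConcurrentHashMap.get(null) throws NullPointerException
>   }
>   SchemaField f = fields.get(fieldName);
>   if (f != null) return f;
>   return dynamicFieldCache.get(fieldName);
> }
> {code}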
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on 

[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253095650
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -62,73 +63,88 @@
   public static MiniDFSCluster setupClass(String dir) throws Exception {
     return setupClass(dir, true, true);
   }
-  
+
   public static MiniDFSCluster setupClass(String dir, boolean haTesting) throws Exception {
     return setupClass(dir, haTesting, true);
   }
-  
+
+  /**
+   * Checks that commons-lang3 FastDateFormat works with configured locale
+   */
+  @SuppressForbidden(reason="Call FastDateFormat.format same way Hadoop calls it")
+  private static void checkFastDateFormat() {
+    try {
+      FastDateFormat.getInstance().format(System.currentTimeMillis());
+    } catch (ArrayIndexOutOfBoundsException e) {
+      LuceneTestCase.assumeNoException("commons-lang3 FastDateFormat doesn't work with " +
+          Locale.getDefault().toLanguageTag(), e);
+    }
+  }
+
+  /**
+   * Hadoop fails to generate locale agnostic ids - Checks that generated string matches
+   */
+  private static void checkGeneratedIdMatches() {
+    // This is basically how Namenode generates fsimage ids and checks that the fsimage filename matches
+    LuceneTestCase.assumeTrue("Check that generated id matches regex",
+        Pattern.matches("(\\d+)", String.format(Locale.getDefault(), "%019d", 0)));
+  }
+
   public static MiniDFSCluster setupClass(String dir, boolean safeModeTesting, boolean haTesting) throws Exception {
     LuceneTestCase.assumeFalse("HDFS tests were disabled by -Dtests.disableHdfs",
-      Boolean.parseBoolean(System.getProperty("tests.disableHdfs", "false"))); 
+      Boolean.parseBoolean(System.getProperty("tests.disableHdfs", "false")));
+
+    checkFastDateFormat();
+    checkGeneratedIdMatches();
 
-    savedLocale = Locale.getDefault();
-    // TODO: we HACK around HADOOP-9643
-    Locale.setDefault(Locale.ENGLISH);
-
     if (!HA_TESTING_ENABLED) haTesting = false;
-
-
-    // keep netty from using secure random on startup: SOLR-10098
 
 Review comment:
   Not needed after netty upgrade.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13206) ArrayIndexOutOfBoundsException in org/apache/solr/request/SimpleFacets.java[705]

2019-02-01 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13206:
-
Labels: diffblue newdev  (was: diffblue)

> ArrayIndexOutOfBoundsException in 
> org/apache/solr/request/SimpleFacets.java[705]
> 
>
> Key: SOLR-13206
> URL: https://issues.apache.org/jira/browse/SOLR-13206
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> * Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection and reproducing the bug
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html].
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> curl -v "URL_BUG"
> {noformat}
> Please check the issue description below to find the "URL_BUG" that will 
> allow you to reproduce the issue reported.
>Reporter: Marek
>Priority: Minor
>  Labels: diffblue, newdev
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?group=true=genre=true=_version_=true
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> ERROR (qtp689401025-21) [   x:films] o.a.s.h.RequestHandlerBase 
> java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:705)
>   at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:495)
>   at 
> org.apache.solr.request.SimpleFacets.getTermCountsForPivots(SimpleFacets.java:414)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:221)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:169)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:279)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   [...]
> {noformat}
> The first element of an empty array of strings is accessed; the array is 
> stored in the member 
> 'org.apache.solr.search.grouping.GroupingSpecification.fields'. There is an 
> attempt to put some strings into the array at 
> org/apache/solr/handler/component/QueryComponent.java[283]; however, the 
> string "group.field" is not present in the params of the processed 
> org.apache.solr.request.SolrQueryRequest instance.
> The cause of the issue seems to be similar to one reported in SOLR-13204.
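> A guard at the point of access would turn this into a proper request error 
> (a sketch only; variable names are assumed, based on the stack trace above):
> {code:java}
> String[] groupFields = groupingSpec.getFields();
> if (groupFields.length == 0) {
>   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>       "Grouped faceting requires at least one group.field");
> }
> String firstField = groupFields[0]; // previously threw ArrayIndexOutOfBoundsException
> {code}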
> To set up an environment to reproduce this bug, follow the description in the 
> 'Environment' field.
> We automatically found this issue and ~70 more like this using [Diffblue 
> Microservices Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. 
> Find more information on this [fuzz testing 
> 

[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253095505
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -62,73 +63,88 @@
   public static MiniDFSCluster setupClass(String dir) throws Exception {
     return setupClass(dir, true, true);
   }
-  
+
   public static MiniDFSCluster setupClass(String dir, boolean haTesting) throws Exception {
     return setupClass(dir, haTesting, true);
   }
-  
+
+  /**
+   * Checks that commons-lang3 FastDateFormat works with configured locale
+   */
+  @SuppressForbidden(reason="Call FastDateFormat.format same way Hadoop calls it")
+  private static void checkFastDateFormat() {
+    try {
+      FastDateFormat.getInstance().format(System.currentTimeMillis());
+    } catch (ArrayIndexOutOfBoundsException e) {
+      LuceneTestCase.assumeNoException("commons-lang3 FastDateFormat doesn't work with " +
+          Locale.getDefault().toLanguageTag(), e);
+    }
+  }
+
+  /**
+   * Hadoop fails to generate locale agnostic ids - Checks that generated string matches
+   */
+  private static void checkGeneratedIdMatches() {
 
 Review comment:
   Hadoop generates filename ids with `String.format` without any Locale set 
and then uses regex to try to match it. This fails on locales 
`th-TH-u-nu-thai-x-lvariant-TH` and `hi-IN`. This check matches what Hadoop is 
doing to make sure there is a match.
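   A standalone demo of the mismatch (my own sketch, not test code from this 
PR):
   ```java
   import java.util.Locale;
   import java.util.regex.Pattern;

   public class LocaleIdDemo {
     public static void main(String[] args) {
       // Hadoop formats the id with the default locale, then matches "(\\d+)".
       Locale thai = Locale.forLanguageTag("th-TH-u-nu-thai");
       String id = String.format(thai, "%019d", 0);
       // Java's \d matches ASCII digits only by default, so Thai numerals fail:
       System.out.println(Pattern.matches("(\\d+)", id)); // prints false
     }
   }
   ```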


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253094751
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -62,73 +63,88 @@
   public static MiniDFSCluster setupClass(String dir) throws Exception {
     return setupClass(dir, true, true);
   }
-  
+
   public static MiniDFSCluster setupClass(String dir, boolean haTesting) throws Exception {
     return setupClass(dir, haTesting, true);
   }
-  
+
+  /**
+   * Checks that commons-lang3 FastDateFormat works with configured locale
+   */
+  @SuppressForbidden(reason="Call FastDateFormat.format same way Hadoop calls it")
+  private static void checkFastDateFormat() {
 
 Review comment:
   commons-lang3 has an issue with the locale 
`ja-JP-u-ca-japanese-x-lvariant-JP` but this will catch if there are other 
locales with issues.
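   A standalone repro of that failure mode (a sketch; which commons-lang3 
releases are affected is an assumption based on the linked report):
   ```java
   import java.util.Locale;
   import org.apache.commons.lang3.time.FastDateFormat;

   public class FastDateFormatDemo {
     public static void main(String[] args) {
       Locale.setDefault(Locale.forLanguageTag("ja-JP-u-ca-japanese"));
       // Throws ArrayIndexOutOfBoundsException on affected commons-lang3 releases.
       System.out.println(FastDateFormat.getInstance().format(System.currentTimeMillis()));
     }
   }
   ```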


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253094751
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -62,73 +63,88 @@
   public static MiniDFSCluster setupClass(String dir) throws Exception {
     return setupClass(dir, true, true);
   }
-  
+
   public static MiniDFSCluster setupClass(String dir, boolean haTesting) throws Exception {
     return setupClass(dir, haTesting, true);
   }
-  
+
+  /**
+   * Checks that commons-lang3 FastDateFormat works with configured locale
+   */
+  @SuppressForbidden(reason="Call FastDateFormat.format same way Hadoop calls it")
+  private static void checkFastDateFormat() {
 
 Review comment:
   commons-lang3 has an issue with the locale 
`ja-JP-u-ca-japanese-x-lvariant-JP` but this will catch if there are other 
locales with issues.
   
   
http://mail-archives.apache.org/mod_mbox/commons-user/201901.mbox/%3CCAJU9nmhqgzh7VcxyhJNfb4czC2SvJzZd4o6ARcuD4msof1U2Zw%40mail.gmail.com%3E


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r253094096
 
 

 ##
 File path: 
solr/core/src/test/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
 ##
 @@ -0,0 +1,1045 @@
+/*
 
 Review comment:
   Copied straight from Hadoop code base with minor modifications to fix 
SecurityManager and ForkJoinPool integration. Details in SOLR-9515 and 
HDFS-14251


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459745591
 
 
   Well, naively adding shutdown of the thread pool results in a lot of 
`RejectedExecutionException`s, since the pool is static while shutdown would 
happen per the lifetime of one BlockPoolSlice. The lifetime of the Hadoop 
`addReplicaThreadPool` needs some more thought. I don't know if it is 
completely possible to fix this only in BlockPoolSlice. Planning to leave it 
for now as a bad thread.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] uschindler commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
uschindler commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459733827
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-02-01 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459732343
 
 
   Rebased on master and updated the workaround for BlockPoolSlice. I think I 
also found the issue with threads not being cleaned up. Running through all 
tests now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1248 - Failure

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1248/

No tests ran.

Build Log:
[...truncated 23440 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2479 links (2021 relative) to 3245 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758279#comment-16758279
 ] 

Kevin Risden commented on SOLR-9515:


Created HDFS-14251 with the Hadoop project.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758277#comment-16758277
 ] 

Kevin Risden commented on SOLR-9515:


Thanks [~markrmil...@gmail.com] and [~thetaphi]. I'll open a ticket with 
Hadoop. I started to patch the Hadoop code to pass in a custom factory. I can 
improve that to pass in custom thread names as well.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8655) No possibility to access to the underlying "valueSource" of a FunctionScoreQuery

2019-02-01 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758256#comment-16758256
 ] 

Lucene/Solr QA commented on LUCENE-8655:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} LUCENE-8655 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957141/LUCENE-8655.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/160/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> No possibility to access to the underlying "valueSource" of a 
> FunctionScoreQuery 
> -
>
> Key: LUCENE-8655
> URL: https://issues.apache.org/jira/browse/LUCENE-8655
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.6
>Reporter: Gérald Quaire
>Priority: Major
>  Labels: patch
> Attachments: LUCENE-8655.patch
>
>
> After LUCENE-8099, the "BoostedQuery" is deprecated by the use of the 
> "FunctionScoreQuery". With the BoostedQuery, it was possible to access at its 
> underlying "valueSource". But it is not the case with the class 
> "FunctionScoreQuery". It has got only a getter for the wrapped query,  
> For development of specific parsers, it would be necessary to access the 
> valueSource of a "FunctionScoreQuery". I suggest to add a new getter into the 
> class "FunctionScoreQuery" like below:
> {code:java}
>  /**
>    * @return the wrapped Query
>    */
>   public Query getWrappedQuery() {
>     return in;
>   }
>  /**
>    * @return the source of scores
>    */
>   public DoubleValuesSource getValueSource() {
>     return source;
>   }
> {code}
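> A hypothetical caller in a custom parser would then look like this (a 
> sketch; variable names are illustrative):
> {code:java}
> if (query instanceof FunctionScoreQuery) {
>   FunctionScoreQuery fsq = (FunctionScoreQuery) query;
>   Query wrapped = fsq.getWrappedQuery();            // existing getter
>   DoubleValuesSource source = fsq.getValueSource(); // proposed getter
>   // inspect or rebuild the scoring function from 'wrapped' and 'source'
> }
> {code}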



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9421) Refactor OverseerMessageHandler and make smaller classes

2019-02-01 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758243#comment-16758243
 ] 

Mikhail Khludnev commented on SOLR-9421:


[~noble.paul], Am I right that in 
[ocmh.processResponses("MIGRATE failed to create replica") at 
L287|https://github.com/apache/lucene-solr/blame/master/solr/core/src/java/org/apache/solr/cloud/api/collections/MigrateCmd.java#L287]
 requestMap is empty, since there's no ocmh.sendShardRequest() after line 263?

> Refactor OverseerMessageHandler and make smaller classes
> 
>
> Key: SOLR-9421
> URL: https://issues.apache.org/jira/browse/SOLR-9421
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Trivial
>  Labels: refactor, refactoring
> Fix For: 6.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8655) No possibility to access to the underlying "valueSource" of a FunctionScoreQuery

2019-02-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758211#comment-16758211
 ] 

Gérald Quaire commented on LUCENE-8655:
---

Hello [~romseygeek],

What is the next step to get this patch included in the next version? Thank 
you in advance for your reply.

> No possibility to access to the underlying "valueSource" of a 
> FunctionScoreQuery 
> -
>
> Key: LUCENE-8655
> URL: https://issues.apache.org/jira/browse/LUCENE-8655
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.6
>Reporter: Gérald Quaire
>Priority: Major
>  Labels: patch
> Attachments: LUCENE-8655.patch
>
>
> After LUCENE-8099, the "BoostedQuery" is deprecated by the use of the 
> "FunctionScoreQuery". With the BoostedQuery, it was possible to access at its 
> underlying "valueSource". But it is not the case with the class 
> "FunctionScoreQuery". It has got only a getter for the wrapped query,  
> For development of specific parsers, it would be necessary to access the 
> valueSource of a "FunctionScoreQuery". I suggest to add a new getter into the 
> class "FunctionScoreQuery" like below:
> {code:java}
>  /**
>    * @return the wrapped Query
>    */
>   public Query getWrappedQuery() {
>     return in;
>   }
>  /**
>    * @return the source of scores
>    */
>   public DoubleValuesSource getValueSource() {
>     return source;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758205#comment-16758205
 ] 

ASF subversion and git services commented on LUCENE-8676:
-

Commit e05ed2ffb5a2df20163af9a7d8ea425b4218cade in lucene-solr's branch 
refs/heads/branch_7_7 from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e05ed2f ]

LUCENE-8676: The Korean tokenizer does not update the last position if the 
backtrace is caused by a big buffer (1024 chars).


> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8676.patch
>
>
> KoreanTokenizer#testRandomHugeString failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).
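> In code terms the failure mode is roughly this (a schematic sketch, not the 
> exact source; field and method names are my reading of the tokenizer, see 
> the attached patch for the real change):
> {code:java}
> // inside KoreanTokenizer.parse(): a backtrace is forced once the unconsumed
> // buffer reaches 1024 characters (MAX_BACKTRACE_GAP)
> if (pos - lastBackTracePos >= MAX_BACKTRACE_GAP) {
>   backtrace(positions.get(pos), 0);
>   lastBackTracePos = pos; // missing before the fix; without it add() later hits an assert
> }
> {code}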



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758204#comment-16758204
 ] 

ASF subversion and git services commented on LUCENE-8676:
-

Commit 5667170cf58732384f185b2983b1f5a21d26369e in lucene-solr's branch 
refs/heads/branch_7x from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5667170 ]

LUCENE-8676: The Korean tokenizer does not update the last position if the 
backtrace is caused by a big buffer (1024 chars).


> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8676.patch
>
>
> KoreanTokenizer#testRandomHugeString failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758202#comment-16758202
 ] 

ASF subversion and git services commented on LUCENE-8676:
-

Commit bae3e24e8bcdac9a07d2b0592cba72bed2e5365e in lucene-solr's branch 
refs/heads/branch_8_0 from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bae3e24 ]

LUCENE-8676: The Korean tokenizer does not update the last position if the 
backtrace is caused by a big buffer (1024 chars).


> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8676.patch
>
>
> KoreanTokenizer#testRandomHugeString failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-02-01 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758193#comment-16758193
 ] 

Markus Jelsma commented on SOLR-12743:
--

Hello [~mgibney],

# There are no blocked threads, nothing peculiar and nothing relating to 
autowarming or caches whatsoever. There are as many searcherExecutor threads as 
there are cores on the system, so it 'appears' it is not leaking threads but 
just object instances;
# the system is in general not under heavy load at all;
# this specific collection, the one having this problem, does not have 
autoCommit configured. This collection receives manual commits only, once every 
15-20 minutes or so;
# there are never overlapping commits on this system; maxWarmingSearchers was 
set to 1 already many years ago. The instance is leaked during the first commit 
after start-up;
# precisely, the instance count increments at each commit, and a forced GC 
doesn't clean it up. A second commit 15-20 minutes later increments it again, 
until the node dies horribly.

Thanks!
Markus

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 2761 - Unstable

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2761/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.7/1/consoleText

[repro] Revision: 767f1be7d545bc0bdcc37ab7613c3f0356c5498d

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestKoreanTokenizer 
-Dtests.method=testRandomHugeStrings -Dtests.seed=8C5E2BE10F581CB 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsUnloadDistributedZkTest 
-Dtests.method=test -Dtests.seed=B95938FF7F911E2B -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=no-NO -Dtests.timezone=America/New_York -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestExportWriter 
-Dtests.method=testStringWithCase -Dtests.seed=B95938FF7F911E2B 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=pt-PT -Dtests.timezone=America/Denver -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsChaosMonkeySafeLeaderTest 
-Dtests.seed=B95938FF7F911E2B -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=hr-HR -Dtests.timezone=Chile/EasterIsland -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestDocTermOrdsUninvertLimit 
-Dtests.method=testTriggerUnInvertLimit -Dtests.seed=B95938FF7F911E2B 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja -Dtests.timezone=America/Dawson -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e4f202c1e30f7c7209f978d7733922245c33ab71
[repro] git fetch
[repro] git checkout 767f1be7d545bc0bdcc37ab7613c3f0356c5498d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestExportWriter
[repro]   HdfsUnloadDistributedZkTest
[repro]   TestDocTermOrdsUninvertLimit
[repro]   HdfsChaosMonkeySafeLeaderTest
[repro]lucene/analysis/nori
[repro]   TestKoreanTokenizer
[repro] ant compile-test

[...truncated 3583 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestExportWriter|*.HdfsUnloadDistributedZkTest|*.TestDocTermOrdsUninvertLimit|*.HdfsChaosMonkeySafeLeaderTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.seed=B95938FF7F911E2B -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=pt-PT -Dtests.timezone=America/Denver -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 30496 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 144 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestKoreanTokenizer" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 216 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest
[repro]   1/5 failed: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
[repro]   5/5 failed: org.apache.lucene.analysis.ko.TestKoreanTokenizer
[repro]   5/5 failed: 

[jira] [Resolved] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8676.
--
   Resolution: Fixed
Fix Version/s: 7.7
   8.0

> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Fix For: 8.0, 7.7
>
> Attachments: LUCENE-8676.patch
>
>
> TestKoreanTokenizer#testRandomHugeStrings failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).
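To make the failure mode concrete: the tokenizer forces a backtrace whenever it has
buffered a long run of text with no natural break (the 1024-char gap above). Below is
a minimal sketch of that pattern; it is not the actual KoreanTokenizer code, and every
name in it is invented for illustration of what "not updating the position" means here.

{code:java}
// Illustrative sketch only; names and structure are assumptions, not the
// real org.apache.lucene.analysis.ko.KoreanTokenizer implementation.
class ForcedBacktraceSketch {
  static final int MAX_BACKTRACE_GAP = 1024; // longest run tokenized without a break

  private int lastBacktracePos; // start of the region still held in the buffer

  void maybeForceBacktrace(int pos) {
    if (pos - lastBacktracePos >= MAX_BACKTRACE_GAP) {
      backtrace(pos); // emit the best path for the buffered region
      // The essential update: the buffered region now starts at 'pos'.
      // Forgetting it leaves stale positions "live", so a later pass over
      // the buffer sees inconsistent state (the assertion in add() in the
      // stack trace above).
      lastBacktracePos = pos;
    }
  }

  private void backtrace(int endPos) {
    // In the real tokenizer this walks the lattice back from endPos and
    // emits tokens; elided here.
  }
}
{code}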



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758201#comment-16758201
 ] 

ASF subversion and git services commented on LUCENE-8676:
-

Commit e3ac4c9180a0eb6f1c7a3e49d1a8cda8669ae3fa in lucene-solr's branch 
refs/heads/branch_8x from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e3ac4c9 ]

LUCENE-8676: The Korean tokenizer does not update the last position if the 
backtrace is caused by a big buffer (1024 chars).


> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8676.patch
>
>
> TestKoreanTokenizer#testRandomHugeStrings failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758199#comment-16758199
 ] 

ASF subversion and git services commented on LUCENE-8676:
-

Commit e9c02a6f71de3615a5c90f51b66f3709cbbd5e47 in lucene-solr's branch 
refs/heads/master from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e9c02a6 ]

LUCENE-8676: The Korean tokenizer does not update the last position if the 
backtrace is caused by a big buffer (1024 chars).


> TestKoreanTokenizer#testRandomHugeStrings failure
> -
>
> Key: LUCENE-8676
> URL: https://issues.apache.org/jira/browse/LUCENE-8676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8676.patch
>
>
> TestKoreanTokenizer#testRandomHugeStrings failed in CI with the following 
> exception:
> {noformat}
>   [junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
>[junit4]>at 
> org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
>[junit4]>at 
> org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files
> {noformat}
> I am able to reproduce locally with:
> {noformat}
> ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
> -Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
>  -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> {noformat}
> After some investigation I found out that the position of the buffer is not 
> updated when the maximum backtrace size is reached (1024).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758185#comment-16758185
 ] 

ASF subversion and git services commented on LUCENE-8679:
-

Commit f51bbff913d7ad34bf037853de3b29ba5bc5cc5f in lucene-solr's branch 
refs/heads/branch_7x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f51bbff ]

LUCENE-8679: return WITHIN in EdgeTree#relateTriangle only when polygon and 
triangle share one edge
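
As a rough picture of what the commit message describes, here is a sketch of the guard;
this is an assumption about the shape of the fix rather than the actual EdgeTree code,
and the enum and helper names are invented.

{code:java}
// Illustrative sketch only; not the real org.apache.lucene.geo.EdgeTree code.
abstract class RelateTriangleSketch {
  enum TriangleRelation { WITHIN, CROSSES, DISJOINT }

  // Geometry predicates left abstract: the point of the sketch is the
  // ordering of the checks, not the edge math.
  abstract boolean anyEdgeCrosses(double[][] polygon, double[][] triangle);
  abstract boolean sharesOneEdge(double[][] polygon, double[][] triangle);

  TriangleRelation relateTriangle(double[][] polygon, double[][] triangle) {
    if (anyEdgeCrosses(polygon, triangle)) {
      return TriangleRelation.CROSSES;
    }
    // The fix described above: gate WITHIN behind the shared-edge check,
    // rather than reporting it for other kinds of contact.
    if (sharesOneEdge(polygon, triangle)) {
      return TriangleRelation.WITHIN;
    }
    return TriangleRelation.DISJOINT;
  }
}
{code}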


> Test failure in LatLonShape
> ---
>
> Key: LUCENE-8679
> URL: https://issues.apache.org/jira/browse/LUCENE-8679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8679.patch
>
>
> Error and reproducible seed:
>  
> {code:java}
> [junit4] Suite: org.apache.lucene.document.TestLatLonShape
>    [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
> -Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
> -Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>    [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
>    [junit4]    >        at java.lang.Thread.run(Thread.java:748)
>    [junit4]   2> NOTE: leaving temporary files on disk at: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
> docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
>  locale=no-NO, timezone=America/North_Dakota/Center
>    [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
> 1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
>    [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
> TestLatLonLineShapeQueries, TestLatLonShape]
>    [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
> skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758184#comment-16758184
 ] 

ASF subversion and git services commented on LUCENE-8679:
-

Commit daeeeb94cf6f9fd061cc705b1c4b2e28bcab6943 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=daeeeb9 ]

LUCENE-8679: return WITHIN in EdgeTree#relateTriangle only when polygon and 
triangle share one edge


> Test failure in LatLonShape
> ---
>
> Key: LUCENE-8679
> URL: https://issues.apache.org/jira/browse/LUCENE-8679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8679.patch
>
>
> Error and reproducible seed:
>  
> {code:java}
> [junit4] Suite: org.apache.lucene.document.TestLatLonShape
>    [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
> -Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
> -Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>    [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
>    [junit4]    >        at java.lang.Thread.run(Thread.java:748)
>    [junit4]   2> NOTE: leaving temporary files on disk at: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
> docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
>  locale=no-NO, timezone=America/North_Dakota/Center
>    [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
> 1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
>    [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
> TestLatLonLineShapeQueries, TestLatLonShape]
>    [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
> skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758180#comment-16758180
 ] 

ASF subversion and git services commented on LUCENE-8679:
-

Commit fdb635353983a8954c092436db650277aa33c95b in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fdb6353 ]

LUCENE-8679: return WITHIN in EdgeTree#relateTriangle only when polygon and 
triangle share one edge


> Test failure in LatLonShape
> ---
>
> Key: LUCENE-8679
> URL: https://issues.apache.org/jira/browse/LUCENE-8679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8679.patch
>
>
> Error and reproducible seed:
>  
> {code:java}
> [junit4] Suite: org.apache.lucene.document.TestLatLonShape
>    [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
> -Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
> -Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>    [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
>    [junit4]    >        at java.lang.Thread.run(Thread.java:748)
>    [junit4]   2> NOTE: leaving temporary files on disk at: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
> docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
>  locale=no-NO, timezone=America/North_Dakota/Center
>    [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
> 1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
>    [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
> TestLatLonLineShapeQueries, TestLatLonShape]
>    [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
> skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13211) Fix the position of color legend in Cloud UI.

2019-02-01 Thread Junya Usui (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junya Usui updated SOLR-13211:
--
Attachment: SOLR-13211.patch

> Fix the position of color legend in Cloud UI.
> -
>
> Key: SOLR-13211
> URL: https://issues.apache.org/jira/browse/SOLR-13211
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Junya Usui
>Priority: Major
> Attachments: SOLR-13211.patch, fix_legend_position.pdf
>
>
> This patch contains two display enhancements that make the legend easier to 
> read, especially when the number of Solr nodes is larger than 40.
>  # In the Cloud -> Graph page, it is difficult to read the server names and 
> the legend because they overlap. (Page 1)
>  # In the Cloud -> Graph (Radial) page, the horizontal distance between the 
> graph and the legend is too large. (Page 2)
> These issues have been around for a long time. 
> Ref: 
> https://issues.apache.org/jira/browse/SOLR-3915?focusedCommentId=13472876=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13472876
> This patch fixes them by modifying only cloud.css: the legend is moved 
> outside the graph so that it stays in the bottom-left corner without 
> overlapping. (Pages 3-4)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3487 - Unstable!

2019-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3487/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestCloudSearcherWarming

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestCloudSearcherWarming: 1) Thread[id=318, 
name=SyncThread:0, state=WAITING, group=TGRP-TestCloudSearcherWarming] 
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
2) Thread[id=316, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, 
state=RUNNABLE, group=TGRP-TestCloudSearcherWarming] at 
java.base@11/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)  
   at 
java.base@11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124) 
at java.base@11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:136)   
  at 
app//org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:196)
 at java.base@11/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=319, name=ProcessThread(sid:0 cport:35663):, state=WAITING, 
group=TGRP-TestCloudSearcherWarming] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)
4) Thread[id=317, name=SessionTracker, state=TIMED_WAITING, 
group=TGRP-TestCloudSearcherWarming] at 
java.base@11/java.lang.Object.wait(Native Method) at 
app//org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147)
5) Thread[id=315, name=ZkTestServer Run Thread, state=WAITING, 
group=TGRP-TestCloudSearcherWarming] at 
java.base@11/java.lang.Object.wait(Native Method) at 
java.base@11/java.lang.Thread.join(Thread.java:1305) at 
java.base@11/java.lang.Thread.join(Thread.java:1379) at 
app//org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:313)
 at 
app//org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:343)
 at app//org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:564)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestCloudSearcherWarming: 
   1) Thread[id=318, name=SyncThread:0, state=WAITING, 
group=TGRP-TestCloudSearcherWarming]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
app//org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
   2) Thread[id=316, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, 
state=RUNNABLE, group=TGRP-TestCloudSearcherWarming]
at java.base@11/sun.nio.ch.EPoll.wait(Native Method)
at 
java.base@11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
at 
java.base@11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
at java.base@11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:136)
at 
app//org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:196)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   3) Thread[id=319, name=ProcessThread(sid:0 cport:35663):, state=WAITING, 
group=TGRP-TestCloudSearcherWarming]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
app//org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)
   4) Thread[id=317, 

[jira] [Created] (SOLR-13211) Fix the position of color legend in Cloud UI.

2019-02-01 Thread Junya Usui (JIRA)
Junya Usui created SOLR-13211:
-

 Summary: Fix the position of color legend in Cloud UI.
 Key: SOLR-13211
 URL: https://issues.apache.org/jira/browse/SOLR-13211
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Reporter: Junya Usui
 Attachments: fix_legend_position.pdf

This patch contains two display enhancements that make the legend easier to 
read, especially when the number of Solr nodes is larger than 40.
 # In the Cloud -> Graph page, it is difficult to read the server names and 
the legend because they overlap. (Page 1)
 # In the Cloud -> Graph (Radial) page, the horizontal distance between the 
graph and the legend is too large. (Page 2)

These issues have been around for a long time. 
Ref: 
https://issues.apache.org/jira/browse/SOLR-3915?focusedCommentId=13472876=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13472876
This patch fixes them by modifying only cloud.css: the legend is moved outside 
the graph so that it stays in the bottom-left corner without overlapping. 
(Pages 3-4)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 279 - Unstable

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/279/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [MMapDirectory, 
MMapDirectory, MMapDirectory, SolrCore, InternalHttpClient, MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:770)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1088)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:158)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:359)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:738)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1088)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 

[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 16 - Unstable

2019-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/16/

3 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonShape.testRandomLineEncoding

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([69A925FA69EF8A01:84B09D38BD89D060]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
at 
org.apache.lucene.document.TestLatLonShape.testRandomLineEncoding(TestLatLonShape.java:716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
acoll: 1549002570721 bcoll: 1549002570771

Stack Trace:
java.lang.AssertionError: acoll: 1549002570721 bcoll: 1549002570771
at 
__randomizedtesting.SeedInfo.seed([33AA96F39669069A:BBFEA92938956B62]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:116)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:71)
at 

[jira] [Created] (LUCENE-8679) Test failure in LatLonShape

2019-02-01 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8679:


 Summary: Test failure in LatLonShape
 Key: LUCENE-8679
 URL: https://issues.apache.org/jira/browse/LUCENE-8679
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ignacio Vera


Error and reproducible seed:

 
{code:java}
[junit4] Suite: org.apache.lucene.document.TestLatLonShape
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
-Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
-Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
   [junit4]    > Throwable #1: java.lang.AssertionError
   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
   [junit4]    >        at 
org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
   [junit4]    >        at 
org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
   [junit4]    >        at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
 locale=no-NO, timezone=America/North_Dakota/Center
   [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
   [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
TestLatLonLineShapeQueries, TestLatLonShape]
   [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8678) TestIndexWriterDelete timed out on jenkins

2019-02-01 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-8678:
---

 Summary: TestIndexWriterDelete timed out on jenkins
 Key: LUCENE-8678
 URL: https://issues.apache.org/jira/browse/LUCENE-8678
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss


From: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/445/consoleText

It does reproduce for me. I vaguely recall there was another timeout failure in 
this test a while ago.

{code}
ant test  -Dtestcase=TestIndexWriterDelete -Dtests.method=testUpdatesOnDiskFull 
-Dtests.seed=DCF0B4DFB70AB6EA -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.locale=fr-BE -Dtests.timezone=Africa/Bamako 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
{code}

{code}
   [junit4] Suite: org.apache.lucene.index.TestIndexWriterDelete
   [junit4]   2> janv. 31, 2019 2:50:42 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> AVERTISSEMENT: Suite execution timed out: 
org.apache.lucene.index.TestIndexWriterDelete
   [junit4]   2>1) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1252)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1326)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:639)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:496)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:269)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:394)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)
   [junit4]   2>2) Thread[id=9, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:50)
   [junit4]   2>3) Thread[id=497, 
name=SUITE-TestIndexWriterDelete-seed#[DCF0B4DFB70AB6EA], state=RUNNABLE, 
group=TGRP-TestIndexWriterDelete]
   [junit4]   2> at java.lang.Thread.getStackTrace(Thread.java:1559)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:696)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:693)
   [junit4]   2> at java.security.AccessController.doPrivileged(Native 
Method)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getStackTrace(ThreadLeakControl.java:693)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getThreadsWithTraces(ThreadLeakControl.java:709)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.formatThreadStacksFull(ThreadLeakControl.java:689)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.access$1000(ThreadLeakControl.java:65)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:415)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:708)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:138)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:629)
   [junit4]   2>4) Thread[id=498, 
name=TEST-TestIndexWriterDelete.testUpdatesOnDiskFull-seed#[DCF0B4DFB70AB6EA], 
state=RUNNABLE, group=TGRP-TestIndexWriterDelete]
   [junit4]   2> at java.lang.Throwable.fillInStackTrace(Native Method)
   [junit4]   2> at 
java.lang.Throwable.fillInStackTrace(Throwable.java:783)
   [junit4]   2> at java.lang.Throwable.(Throwable.java:250)
   [junit4]   2> at java.lang.Exception.(Exception.java:54)
   [junit4]   2> at 
java.lang.RuntimeException.(RuntimeException.java:51)
   [junit4]   2> at 
java.lang.UnsupportedOperationException.(UnsupportedOperationException.java:42)
   [junit4]   2> at 
java.util.AbstractCollection.add(AbstractCollection.java:262)
   [junit4]   2> at 
org.apache.lucene.util.TestUtil.checkReadOnly(TestUtil.java:252)
   [junit4]   2> at 
org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat$AssertingStoredFieldsReader.getChildResources(AssertingStoredFieldsFormat.java:91)
   [junit4]   2> at 
org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat$AssertingStoredFieldsReader.(AssertingStoredFieldsFormat.java:61)
   [junit4]   2> at 

[jira] [Updated] (LUCENE-8677) JVM SIGSEGV in Node::in

2019-02-01 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8677:

Labels: jvm  (was: )

> JVM SIGSEGV in Node::in
> ---
>
> Key: LUCENE-8677
> URL: https://issues.apache.org/jira/browse/LUCENE-8677
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Minor
>  Labels: jvm
>
> Jenkins:
> https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/15
> {code}
>[junit4] # A fatal error has been detected by the Java Runtime Environment:
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x000105bee9d8, pid=85292, tid=18179
>[junit4] #
>[junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+181) (build 
> 9+181)
>[junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9+181, mixed mode, 
> tiered, concurrent mark sweep gc, bsd-amd64)
>[junit4] # Problematic frame:
>[junit4] # [thread 208539 also had an error]
>[junit4] V  [libjvm.dylib+0x4f49d8]  Node::in(unsigned int) const+0x18
>[junit4] #
>[junit4] # No core dump will be written. Core dumps have been disabled. To 
> enable core dumping, try "ulimit -c unlimited" before starting Java again
>[junit4] #
>[junit4] # An error report file with more information is saved as:
>[junit4] # 
> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J0/hs_err_pid85292.log
>[junit4] # [ timer expired, abort... ]
> {code}
> No hs_err or replay log on the jenkins page though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-repro - Build # 2756 - Unstable

2019-02-01 Thread Dawid Weiss
This is a catastrophic failure due to:
java.lang.OutOfMemoryError: GC overhead limit exceeded

The machine isn't powerful enough to keep up with gc junk created by
the test. Don't know what to do with it.
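
If it is just heap pressure in the forked JVM (the repro below runs the
children with -Xmx512M), one thing worth trying is rerunning with a larger
test heap, e.g. something like the following, assuming tests.heapsize is
still the ant property that sizes the forked test JVMs:

ant test -Dtestcase=FullSolrCloudDistribCmdsTest -Dtests.nightly=true
-Dtests.seed=24FFF8DB5625604C -Dtests.heapsize=1g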

Dawid

On Thu, Jan 31, 2019 at 2:27 AM Apache Jenkins Server
 wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-repro/2756/
>
> [...truncated 28 lines...]
> [repro] Jenkins log URL: 
> https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/46/consoleText
>
> [repro] Revision: 21d2b024f4590175f97b82839ff69f96bd022df2
>
> [repro] Ant options: -Dtests.multiplier=2 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
> [repro] Repro line:  ant test  -Dtestcase=FullSolrCloudDistribCmdsTest 
> -Dtests.method=test -Dtests.seed=24FFF8DB5625604C -Dtests.multiplier=2 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ar-JO -Dtests.timezone=America/Dominica -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>
> [repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
> -Dtests.method=test -Dtests.seed=24FFF8DB5625604C -Dtests.multiplier=2 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
>  -Dtests.locale=en-MT -Dtests.timezone=US/Arizona -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>
> [repro] git rev-parse --abbrev-ref HEAD
> [repro] git rev-parse HEAD
> [repro] Initial local git branch/revision: 
> 2beb853cb3d2c05884049719f02706e67e373234
> [repro] git fetch
> [repro] git checkout 21d2b024f4590175f97b82839ff69f96bd022df2
>
> [...truncated 2 lines...]
> [repro] git merge --ff-only
>
> [...truncated 1 lines...]
> [repro] ant clean
>
> [...truncated 6 lines...]
> [repro] Test suites by module:
> [repro]solr/core
> [repro]   HdfsRestartWhileUpdatingTest
> [repro]   FullSolrCloudDistribCmdsTest
> [repro] ant compile-test
>
> [...truncated 3583 lines...]
> [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
> -Dtests.class="*.HdfsRestartWhileUpdatingTest|*.FullSolrCloudDistribCmdsTest" 
> -Dtests.showOutput=onerror -Dtests.multiplier=2 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
>  -Dtests.seed=24FFF8DB5625604C -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
>  -Dtests.locale=en-MT -Dtests.timezone=US/Arizona -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>
> [...truncated 29594 lines...]
>[junit4] ERROR: JVM J2 ended with an exception, command line: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/bin/java -ea -esa 
> -Dtests.prefix=tests -Dtests.seed=24FFF8DB5625604C -Xmx512M -Dtests.iters= 
> -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
> -Dtests.postingsformat=random -Dtests.docvaluesformat=random 
> -Dtests.locale=en-MT -Dtests.timezone=US/Arizona -Dtests.directory=random 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
>  -Dtests.luceneMatchVersion=7.8.0 -Dtests.cleanthreads=perClass 
> -Djava.util.logging.config.file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/logging.properties
>  -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
> -Djava.io.tmpdir=./temp 
> -Dcommon.dir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene 
> -Dclover.db.dir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/build/clover/db
>  
> -Djava.security.policy=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/tools/junit4/solr-tests.policy
>  -Dtests.LUCENE_VERSION=7.8.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
> -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
> -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
> -Dtests.src.home=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/core
>  -Djava.security.egd=file:/dev/./urandom 
> -Djunit4.childvm.cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2
>  
> -Djunit4.tempDir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/temp
>  -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
> -Dtests.filterstacks=true -Dtests.maxfailures=10 -Dtests.badapples=true 
> -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
> 
