[
https://issues.apache.org/jira/browse/BEAM-6279?focusedWorklogId=185790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-185790
]
ASF GitHub Bot logged work on BEAM-6279:
----------------------------------------
Author: ASF GitHub Bot
Created on: 16/Jan/19 14:08
Start Date: 16/Jan/19 14:08
Worklog Time Spent: 10m
Work Description: echauchot commented on issue #7535: [BEAM-6279]
increase ES restClient timeouts and force embedded clusters to have only one
node.
URL: https://github.com/apache/beam/pull/7535#issuecomment-454791465
@kennknowles actually there are already ITests for the 3 ES versions that
this IO supports. They do basic split, read, and write on a real cluster with
an increased number of input docs (50,000 IIRC).
ESIO was actually the first IO to have ITests, because I discovered strange
behavior of the IO with a big number of docs at the beginning of dev.
The thing is that I don't know where the dashboards of the IO ITests are.
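For context on the timeout increase in this PR's title: the "listener timeout" errors quoted further down come from the RestClient's synchronous wrapper blocking on an asynchronous response. A minimal, hypothetical sketch of that sync-over-async pattern using only the JDK (none of these names are Beam or Elasticsearch code):

```java
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration of the sync-over-async pattern behind
// "listener timeout after waiting for [N] ms": the caller blocks on a
// latch and turns a missed deadline into an IOException.
public class ListenerTimeoutSketch {

  static String performRequest(long timeoutMillis, Runnable asyncWork) throws IOException {
    CountDownLatch latch = new CountDownLatch(1);
    StringBuilder response = new StringBuilder();
    Thread worker = new Thread(() -> {
      asyncWork.run();            // simulate the async HTTP round trip
      response.append("ok");
      latch.countDown();
    });
    worker.setDaemon(true);
    worker.start();
    try {
      if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
        throw new IOException("listener timeout after waiting for [" + timeoutMillis + "] ms");
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("interrupted while waiting for response", e);
    }
    return response.toString();
  }

  public static void main(String[] args) throws Exception {
    // Fast response: completes well within the window.
    System.out.println(performRequest(1_000, () -> {}));
    // Slow response (e.g. an overloaded embedded cluster): exceeds the window.
    try {
      performRequest(50, () -> {
        try { Thread.sleep(500); } catch (InterruptedException e) { }
      });
    } catch (IOException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Raising the deadline (in the real client this is configured through RestClientBuilder's request-config callbacks, plus a max-retry-timeout setting in the 5.x/6.x client, IIRC) simply widens that window; running the embedded test cluster with a single node reduces the load that makes responses miss it.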
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 185790)
Time Spent: 40m (was: 0.5h)
> Failures in ElasticSearchIOTest: testWriteRetryValidRequest (versions 2 & 5)
> testDefaultRetryPredicate (version 5)
> ------------------------------------------------------------------------------------------------------------------
>
> Key: BEAM-6279
> URL: https://issues.apache.org/jira/browse/BEAM-6279
> Project: Beam
> Issue Type: Bug
> Components: io-java-elasticsearch, test-failures
> Reporter: Kenneth Knowles
> Assignee: Tim Robertson
> Priority: Critical
> Labels: flake
> Fix For: 2.11.0
>
> Time Spent: 40m
> Remaining Estimate: 0h
>
> [https://builds.apache.org/job/beam_PreCommit_Java_Cron/730/]
> [https://scans.gradle.com/s/6rpbvgwx2nk7c]
> I don't see recent changes, so thinking this is a flake.
> testDefaultRetryPredicate:
> {code:java}
> java.lang.AssertionError: All incoming requests on node [node_s0] should have finished. Expected 0 but got 2540 Expected: <0L> but: was <2540L>
> at __randomizedtesting.SeedInfo.seed([4CF46FDC92DC07B8:B39A5DDFBC85274E]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.junit.Assert.assertThat(Assert.java:956)
> at org.elasticsearch.test.InternalTestCluster.lambda$assertRequestsFinished$17(InternalTestCluster.java:2086)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:705)
> at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:679)
> at org.elasticsearch.test.InternalTestCluster.assertRequestsFinished(InternalTestCluster.java:2083)
> at org.elasticsearch.test.InternalTestCluster.assertAfterTest(InternalTestCluster.java:2062)
> at org.elasticsearch.test.ESIntegTestCase.afterInternal(ESIntegTestCase.java:581)
> at org.elasticsearch.test.ESIntegTestCase.cleanUpCluster(ESIntegTestCase.java:2054)
> ...
> {code}
> This looks like an internal assertion (akin to {{checkState}}) failing and
> leaving the cluster unhealthy, maybe?
> testWriteRetryValidRequest:
> {code:java}
> org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.io.IOException: listener timeout after waiting for [90000] ms
> Caused by: java.io.IOException: listener timeout after waiting for [90000] ms
> at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:899)
> at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227)
> at org.elasticsearch.client.RestClient.performRequest(RestClient.java:321)
> at org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO$Write$WriteFn.flushBatch(ElasticsearchIO.java:1285)
> at org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO$Write$WriteFn.finishBundle(ElasticsearchIO.java:1260){code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)