Re: [IGNORE] [RESULT] [VOTE] FLIP-150: Introduce Hybrid Source

2021-07-18 Thread Thomas Weise
Hi Becket,

Thanks for taking a look at the FLIP page.

I just updated it to include the classes that are user-facing, i.e. those
that a user would interact with when using the HybridSource. I also updated
the examples in the HybridSource code block to show both scenarios:
fixed start position, and construction of the source at switch time.
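A minimal plain-Java model of those two scenarios, with illustrative names (`SimpleSource`, `Builder`, `RangeSource`) standing in for the actual Flink HybridSource API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy model of the two construction scenarios: (a) sources created up front
// with fixed start positions, (b) the next source built at switch time from
// the previous source's end position. Not the real Flink API.
public class HybridSourceSketch {
    interface SimpleSource {
        List<String> read(long startPosition);
        long endPosition();
    }

    static class RangeSource implements SimpleSource {
        final long end;
        RangeSource(long end) { this.end = end; }
        public List<String> read(long start) {
            List<String> out = new ArrayList<>();
            for (long i = start; i < end; i++) out.add("rec-" + i);
            return out;
        }
        public long endPosition() { return end; }
    }

    // Builder chains sources; a Function defers construction to switch time.
    static class Builder {
        final List<Function<Long, SimpleSource>> factories = new ArrayList<>();
        Builder addSource(SimpleSource fixed) { factories.add(pos -> fixed); return this; }
        Builder addSource(Function<Long, SimpleSource> atSwitch) { factories.add(atSwitch); return this; }
        List<String> runFrom(long start) {
            List<String> all = new ArrayList<>();
            long pos = start;
            for (Function<Long, SimpleSource> f : factories) {
                SimpleSource s = f.apply(pos);   // scenario (b): built from the switch position
                all.addAll(s.read(pos));
                pos = s.endPosition();           // hand-over position for the next source
            }
            return all;
        }
    }

    public static void main(String[] args) {
        List<String> records = new Builder()
                .addSource(new RangeSource(3))                    // fixed start position
                .addSource(endPos -> new RangeSource(endPos + 2)) // constructed at switch time
                .runFrom(0);
        System.out.println(records); // [rec-0, rec-1, rec-2, rec-3, rec-4]
    }
}
```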

Please let me know if you have other questions.

We could also use a few more binding votes:
https://lists.apache.org/thread.html/rb900fcfa4ffb81fe58efde489568b1e5cc3a02d16665679545a534c7%40%3Cdev.flink.apache.org%3E

Thanks,
Thomas



On Tue, Jul 6, 2021 at 1:18 AM Becket Qin  wrote:

> Hi Nicholas and Thomas,
>
> Thanks for pushing this FLIP. The hybrid source is quite useful. It seems
> that the FLIP wiki page did not cover all the new public interfaces we are
> introducing. For example, the interfaces of the following classes are either
> incomplete or not included in the wiki at this point.
>
> HybridSourceSplit
> HybridSourceEnumeratorState
> SourceFactory
> HybridSourceBuilder
>
> As far as I understand these are public APIs that are visible to the users.
> Not sure if they are all the public interfaces, though. Can you please
> update the FLIP to make the public interface change clear and complete?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
> On Tue, Jul 6, 2021 at 4:11 PM 蒋晓峰  wrote:
>
> > Hello everyone,
> >
> >
> >    Sorry for mistakenly reporting the result of the FLIP-150 vote with
> > only 2 binding votes. Please ignore that vote result. The vote on
> > FLIP-150 is still open; please continue to vote on FLIP-150: Introduce
> > Hybrid Source.
> >
> >
> > Thanks,
> > Nicholas Jiang
>


Re: [VOTE] Release 1.12.5, release candidate #2

2021-07-18 Thread Xintong Song
+1 (binding)

- verified checksums & signatures
- built from sources
- run example jobs with standalone and native k8s deployments

Thank you~

Xintong Song



On Thu, Jul 15, 2021 at 6:53 PM Jingsong Li  wrote:

> Hi everyone,
>
> Please review and vote on the release candidate #2 for the version 1.12.5,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint FBB83C0A4FFB9CA8 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.12.5-rc2" [5],
> * website pull request listing the new release and adding announcement blog
> post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Best,
> Jingsong Lee
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350166
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc2/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1434/
> https://repository.apache.org/content/repositories/orgapacheflink-1436/
> [5] https://github.com/apache/flink/releases/tag/release-1.12.5-rc2
> [6] https://github.com/apache/flink-web/pull/455
>


Re: [VOTE] Release 1.13.2, release candidate #2

2021-07-18 Thread Xintong Song
+1 (binding)

- verified checksums & signatures
- built from sources
- run example jobs with standalone and native k8s deployments


Thank you~

Xintong Song



On Wed, Jul 14, 2021 at 4:49 PM Yun Tang  wrote:

> Hi everyone,
> Please review and vote on the release candidate #2 for the version 1.13.2,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 78A306590F1081CC6794DC7F62DAD618E07CF996 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.13.2-rc2" [5],
> * website pull request listing the new release and adding announcement
> blog post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Best,
> Yun Tang
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350218&styleName=&projectId=12315522
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.2-rc2/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1431/
> [5] https://github.com/apache/flink/releases/tag/release-1.13.2-rc2
> [6] https://github.com/apache/flink-web/pull/453
>
>


Re: [DISCUSS] FLIP-171: Async Sink

2021-07-18 Thread Guowei Ma
Hi,
I'm very sorry for joining this discussion so late, but I have a
small question. I understand that the goal of this FLIP is to make the
`Writer` asynchronous. My question is: why not also make the `Committer`
asynchronous? If only the `Writer` is asynchronous, exactly-once
is impossible. Since I am not an expert on Kinesis, please point out
anything I may have missed.
Best,
Guowei


On Sat, Jul 17, 2021 at 12:27 AM Till Rohrmann  wrote:

> Sure, thanks for the pointers.
>
> Cheers,
> Till
>
> On Fri, Jul 16, 2021 at 6:19 PM Hausmann, Steffen wrote:
>
> > Hi Till,
> >
> > You are right, I’ve left out some implementation details, which have
> > actually changed a couple of time as part of the ongoing discussion. You
> > can find our current prototype here [1] and a sample implementation of
> the
> > KPL free Kinesis sink here [2].
> >
> > I plan to update the FLIP. But I think it would make sense to wait
> > until the implementation has stabilized before we update the FLIP to
> > its final state.
> >
> > Does that make sense?
> >
> > Cheers, Steffen
> >
> > [1]
> >
> https://github.com/sthm/flink/tree/flip-171-177/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink
> > [2]
> >
> https://github.com/sthm/flink/blob/flip-171-177/flink-connectors/flink-connector-kinesis-171/src/main/java/software/amazon/flink/connectors/AmazonKinesisDataStreamSink.java
> >
> > From: Till Rohrmann 
> > Date: Friday, 16. July 2021 at 18:10
> > To: Piotr Nowojski 
> > Cc: Steffen Hausmann , "dev@flink.apache.org" <
> > dev@flink.apache.org>, Arvid Heise 
> > Subject: RE: [EXTERNAL] [DISCUSS] FLIP-171: Async Sink
> >
> >
> > CAUTION: This email originated from outside of the organization. Do not
> > click links or open attachments unless you can confirm the sender and
> know
> > the content is safe.
> >
> >
> > Hi Steffen,
> >
> > I've taken another look at the FLIP and I stumbled across a couple of
> > inconsistencies. I think it is mainly because of the lacking code. For
> > example, it is not fully clear to me based on the current FLIP how we
> > ensure that there are no in-flight requests when
> > AsyncSinkWriter.snapshotState is called. Also the concrete implementation
> > of the AsyncSinkCommitter could be helpful for understanding how the
> > AsyncSinkWriter works in the end. Do you plan to update the FLIP
> > accordingly?
> >
> > Cheers,
> > Till
> >
> > On Wed, Jun 30, 2021 at 8:36 AM Piotr Nowojski wrote:
> > Thanks for addressing this issue :)
> >
> > Best, Piotrek
> >
> > On Tue., 29 June 2021 at 17:58, Hausmann, Steffen <shau...@amazon.de> wrote:
> > Hey Poitr,
> >
> > I've just adapted the FLIP and changed the signature for the
> > `submitRequestEntries` method:
> >
> > protected abstract void submitRequestEntries(List
> > requestEntries, ResultFuture requestResult);
> >
> > In addition, we are likely to use an AtomicLong to track the number of
> > outstanding requests, as you have proposed in 2b). I've already indicated
> > this in the FLIP, but it's not fully fleshed out. But as you have said,
> > that seems to be an implementation detail and the important part is the
> > change of the `submitRequestEntries` signature.
> >
> > Thanks for your feedback!
> >
> > Cheers, Steffen
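The two points above — the callback-style `submitRequestEntries` signature and an `AtomicLong` tracking in-flight requests that the checkpoint waits on — can be sketched in plain Java. `ResultFuture` here is a simplified stand-in for Flink's interface, and the destination call is simulated:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: illustrative names, not the actual AsyncSinkWriter implementation.
public class AsyncWriterSketch {
    // Simplified stand-in for Flink's ResultFuture callback interface.
    interface ResultFuture { void complete(); }

    final AtomicLong inFlight = new AtomicLong();

    // Callback-based signature from the FLIP discussion; the async
    // destination call is simulated with a CompletableFuture.
    void submitRequestEntries(List<String> entries, ResultFuture requestResult) {
        CompletableFuture.runAsync(() -> { /* send 'entries' to the destination here */ })
                .whenComplete((v, t) -> requestResult.complete());
    }

    void write(List<String> batch) {
        inFlight.incrementAndGet();
        // Decrement the in-flight counter when the request completes.
        submitRequestEntries(batch, inFlight::decrementAndGet);
    }

    // Block the checkpoint until all outstanding requests have completed.
    void snapshotState() throws InterruptedException {
        while (inFlight.get() > 0) Thread.sleep(1);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncWriterSketch writer = new AsyncWriterSketch();
        writer.write(Arrays.asList("a", "b"));
        writer.write(Arrays.asList("c"));
        writer.snapshotState();
        System.out.println("in-flight after snapshot: " + writer.inFlight.get()); // 0
    }
}
```

This is the property Till asks about below: no in-flight requests may remain when snapshotState is called.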
> >
> >
> > On 25.06.21, 17:05, "Hausmann, Steffen" wrote:
> >
> >
> > Hi Piotr,
> >
> > I’m happy to take your guidance on this. I need to think through your
> > proposals and I'll follow up on Monday with some more context so that we
> > can close the discussion on these details. But for now, I’ll close the
> vote.
> >
> > Thanks, Steffen
> >
> > From: Piotr Nowojski <pnowoj...@apache.org>
> > Date: Friday, 25. June 2021 at 14:48
> > To: Till Rohrmann <trohrm...@apache.org>
> > Cc: Steffen Hausmann <shau...@amazon.de>, "dev@flink.apache.org",
> > Arvid Heise <ar...@apache.org>
> > Subject: RE: [EXTERNAL] [DISCUSS] FLIP-171: Async Sink
> >
> > Hey,
> >
> > I've just synced with Arvid about a couple more remarks from my
> > side, and he shared my concerns.
> >
> > 1. I would very strongly recommend ditching `CompletableFuture`
> > from the `protected abstract CompletableFuture
> > submitRequestEntries(List requestEntries);` in favor of
> > something like the
> > `org.apache.flink.streaming.api.functions.async.ResultFuture` interface.
> > `CompletableFuture` would partially make the threading model of the

[jira] [Created] (FLINK-23421) Unify the space handling in RowData CSV parser

2021-07-18 Thread Dian Fu (Jira)
Dian Fu created FLINK-23421:
---

 Summary: Unify the space handling in RowData CSV parser
 Key: FLINK-23421
 URL: https://issues.apache.org/jira/browse/FLINK-23421
 Project: Flink
  Issue Type: Bug
Reporter: Dian Fu


Currently, it calls `trim()` for types such as LocalDateTime, Boolean, 
[Int|https://github.com/apache/flink/blob/41ce9ccbf42537a854087b6ba33a61092a04538f/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvToRowDataConverters.java#L196],
 etc.; however, it doesn't call `trim()` for other types such as 
[Decimal|https://github.com/apache/flink/blob/41ce9ccbf42537a854087b6ba33a61092a04538f/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvToRowDataConverters.java#L285].
 We should unify this behavior.
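A plain-Java illustration of why the inconsistency matters, with standard-library parsers standing in for the converter code: a whitespace-padded field parses fine on the paths that trim first, while the untrimmed decimal path rejects the same padding.

```java
import java.math.BigDecimal;

public class CsvTrimSketch {
    public static void main(String[] args) {
        String padded = " 123 ";

        // The int path trims first, so the padded input parses.
        System.out.println(Integer.parseInt(padded.trim()));   // 123

        // The decimal path does not trim; the same padding throws.
        try {
            new BigDecimal(padded);
        } catch (NumberFormatException e) {
            System.out.println("untrimmed decimal fails");
        }

        // Trimming first restores consistent behavior.
        System.out.println(new BigDecimal(padded.trim()));     // 123
    }
}
```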



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-184: Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-18 Thread XING JIN
+1 (non-binding)

Best,
Jin

Guowei Ma wrote on Mon, Jul 19, 2021 at 9:41 AM:

> +1(binding)
>
> Best,
> Guowei
>
>
> On Fri, Jul 16, 2021 at 5:36 PM Yingjie Cao wrote:
>
> > Hi all,
> >
> > I'd like to start a vote on FLIP-184 [1] which was
> > discussed in [2] [3]. The vote will be open for at least 72 hours
> > until 7.21 unless there is an objection.
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-184%3A+Refine+ShuffleMaster+lifecycle+management+for+pluggable+shuffle+service+framework
> > [2]
> >
> >
> https://lists.apache.org/thread.html/radbbabfcfb6bec305ddf7aeefb983232f96b18ba013f0ae2ee500288%40%3Cdev.flink.apache.org%3E
> > [3]
> >
> >
> https://lists.apache.org/thread.html/r93e3a72506f3e7ffd3c1ab860b5d1a21f8a47b059f2f2fdd05ca1d46%40%3Cdev.flink.apache.org%3E
> >
>


[jira] [Created] (FLINK-23420) sql stream mode lag function java.lang.NullPointerException

2021-07-18 Thread xiechenling (Jira)
xiechenling created FLINK-23420:
---

 Summary: sql stream mode lag function 
java.lang.NullPointerException
 Key: FLINK-23420
 URL: https://issues.apache.org/jira/browse/FLINK-23420
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.1
Reporter: xiechenling


flink 1.13.1  BlinkPlanner  StreamingMode  EXACTLY_ONCE

log

 
{code:java}
2021-07-15 21:07:46,328 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
checkpoint 1 (type=CHECKPOINT) @ 1626354466304 for job 
fd3c2294afe74778cb6ce3bd5d42f0c0.
2021-07-15 21:07:46,774 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - 
OverAggregate(partitionBy=[targetId], orderBy=[lastDt ASC], window=[ RANG 
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW], select=[displayId, mmsi, 
latitude, longitude, course, heading, speed, len, minLen, maxLen, wid, id, 
province, nationality, lastTm, status, vesselName, sClass, targetId, lastDt, 
$20, LAG(displayId) AS w0$o0, LAG(mmsi) AS w0$o1, LAG($20) AS w0$o2, 
LAG(latitude) AS w0$o3, LAG(longitude) AS w0$o4, LAG(course) AS w0$o5, 
LAG(heading) AS w0$o6, LAG(speed) AS w0$o7, LAG(len) AS w0$o8, LAG(minLen) AS 
w0$o9, LAG(maxLen) AS w0$o10, LAG(wid) AS w0$o11, LAG(id) AS w0$o12, 
LAG(province) AS w0$o13, LAG(nationality) AS w0$o14, LAG(lastTm) AS w0$o15, 
LAG(status) AS w0$o16, LAG(vesselName) AS w0$o17, LAG(sClass) AS w0$o18, 
LAG(targetId) AS w0$o19, LAG(lastDt) AS w0$o20]) -> Calc(select=[displayId, 
mmsi, $20 AS state, latitude, longitude, course, heading, speed, len, minLen, 
maxLen, wid, id, province, nationality, lastTm, status, vesselName, sClass, 
targetId, lastDt, w0$o0 AS previous_displayId, w0$o1 AS previous_mmsi, w0$o2 AS 
previous_state, w0$o3 AS previous_latitude, w0$o4 AS previous_longitude, w0$o5 
AS previous_course, w0$o6 AS previous_heading, w0$o7 AS previous_speed, w0$o8 
AS previous_len, w0$o9 AS previous_minLen, w0$o10 AS previous_maxLen, w0$o11 AS 
previous_wid, w0$o12 AS previous_id, w0$o13 AS previous_province, w0$o14 AS 
previous_nationality, w0$o15 AS previous_lastTm, w0$o16 AS previous_status, 
w0$o17 AS previous_vesselName, w0$o18 AS previous_sClass, w0$o19 AS 
previous_targetId, CAST(w0$o20) AS previous_lastDt], where=[(w0$o1 <> mmsi)]) 
-> TableToDataSteam(type=ROW<`displayId` INT, `mmsi` INT, `state` TINYINT, 
`latitude` DOUBLE, `longitude` DOUBLE, `course` FLOAT, `heading` FLOAT, `speed` 
FLOAT, `len` INT, `minLen` INT, `maxLen` INT, `wid` INT, `id` STRING, 
`province` STRING, `nationality` STRING, `lastTm` BIGINT, `status` STRING, 
`vesselName` STRING, `sClass` STRING, `targetId` STRING, `lastDt` TIMESTAMP(3), 
`previous_displayId` INT, `previous_mmsi` INT, `previous_state` TINYINT, 
`previous_latitude` DOUBLE, `previous_longitude` DOUBLE, `previous_course` 
FLOAT, `previous_heading` FLOAT, `previous_speed` FLOAT, `previous_len` INT, 
`previous_minLen` INT, `previous_maxLen` INT, `previous_wid` INT, `previous_id` 
STRING, `previous_province` STRING, `previous_nationality` STRING, 
`previous_lastTm` BIGINT, `previous_status` STRING, `previous_vesselName` 
STRING, `previous_sClass` STRING, `previous_targetId` STRING, `previous_lastDt` 
TIMESTAMP(3)> NOT NULL, rowtime=false) (3/3) (34f17a50932ba7852cff00dabecae88e) 
switched from RUNNING to FAILED on container_1625646226467_0291_01_05 @ 
hadoop-15 (dataPort=38082).
java.lang.NullPointerException: null
at 
org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:67)
 ~[hlx_bigdata_flink.jar:?]
at 
org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:30)
 ~[hlx_bigdata_flink.jar:?]
at 
org.apache.flink.table.runtime.typeutils.LinkedListSerializer.serialize(LinkedListSerializer.java:114)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.table.runtime.typeutils.LinkedListSerializer.serialize(LinkedListSerializer.java:39)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.util.InstantiationUtil.serializeToByteArray(InstantiationUtil.java:558)
 ~[hlx_bigdata_flink.jar:?]
at 
org.apache.flink.table.data.binary.BinaryRawValueData.materialize(BinaryRawValueData.java:113)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.table.data.binary.LazyBinaryFormat.ensureMaterialized(LazyBinaryFormat.java:126)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.table.runtime.typeutils.RawValueDataSerializer.copy(RawValueDataSerializer.java:60)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.table.runtime.typeutils.RawValueDataSerializer.copy(RawValueDataSerializer.java:36)
 ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:170)
 

Re: [VOTE] FLIP-184: Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-18 Thread Guowei Ma
+1(binding)

Best,
Guowei


On Fri, Jul 16, 2021 at 5:36 PM Yingjie Cao  wrote:

> Hi all,
>
> I'd like to start a vote on FLIP-184 [1] which was
> discussed in [2] [3]. The vote will be open for at least 72 hours
> until 7.21 unless there is an objection.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-184%3A+Refine+ShuffleMaster+lifecycle+management+for+pluggable+shuffle+service+framework
> [2]
>
> https://lists.apache.org/thread.html/radbbabfcfb6bec305ddf7aeefb983232f96b18ba013f0ae2ee500288%40%3Cdev.flink.apache.org%3E
> [3]
>
> https://lists.apache.org/thread.html/r93e3a72506f3e7ffd3c1ab860b5d1a21f8a47b059f2f2fdd05ca1d46%40%3Cdev.flink.apache.org%3E
>


Re: [RESULT][VOTE] FLIP-147: Support Checkpoint After Tasks Finished

2021-07-18 Thread Dawid Wysakowicz
I think we're all really close to the same solution.

I second Yun's thoughts that MAX_WATERMARK works well for time-based
buffering, but it does not solve flushing other operations such as
count windows or batching requests in Sinks. I'd prefer to treat
finish() as a message for the Operator to "flush all records". The
MAX_WATERMARK is, in my opinion, mostly for backwards compatibility. I
don't think operators need to get a "stop-processing" signal if they
don't need to flush records. *When* records are emitted should be
under the control of the StreamTask, by firing timers or by processing
the next record from upstream.

The only difference of my previous proposal compared to Yun's is that I
did not want to send the EndOfUserRecords event in the case of stop w/o
drain. My thinking was that we could go directly from RUNNING to
WAITING_FOR_FINAL_CP on EndOfPartitionEvent. I agree we could emit
EndOfUserRecordsEvent with an additional flag and e.g. stop firing
timers and processing events (without calling finish() on the Operator). In
my initial suggestion I thought we didn't need to care about some events
potentially being emitted after the savepoint was taken, as they would
anyway belong to the checkpoint after FINAL, which would be discarded. I
think, though, that the proposal to suspend record processing and timers
is a sensible thing to do, and I would go with the version that Yun put
into the FLIP wiki.

What do you think Till?

Best,

Dawid
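A minimal sketch of the "finish() means flush all buffered records" semantics discussed above, using a count-window-like operator. The names are illustrative, not the actual StreamOperator interface.

```java
import java.util.ArrayList;
import java.util.List;

public class FlushOnFinishSketch {
    static class CountWindowOperator {
        private final int windowSize;
        private final List<Integer> buffer = new ArrayList<>();
        private final List<List<Integer>> emitted = new ArrayList<>();

        CountWindowOperator(int windowSize) { this.windowSize = windowSize; }

        void processElement(int value) {
            buffer.add(value);
            if (buffer.size() == windowSize) {      // window full: fire it
                emitted.add(new ArrayList<>(buffer));
                buffer.clear();
            }
        }

        // finish(): no more input will ever arrive, so flush the partial window.
        void finish() {
            if (!buffer.isEmpty()) {
                emitted.add(new ArrayList<>(buffer));
                buffer.clear();
            }
        }

        List<List<Integer>> results() { return emitted; }
    }

    public static void main(String[] args) {
        CountWindowOperator op = new CountWindowOperator(3);
        for (int i = 1; i <= 5; i++) op.processElement(i);
        op.finish();  // without this, [4, 5] would be lost at end of input
        System.out.println(op.results()); // [[1, 2, 3], [4, 5]]
    }
}
```

This is the case a count window cannot solve with MAX_WATERMARK alone: no amount of watermark progress fires a partially filled count window, so an explicit flush signal is needed.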


On 16/07/2021 10:03, Yun Gao wrote:
> Hi Till, Piotr
>
> Many thanks for the comments!
>
>> 1) Does endOfInput entail sending of the MAX_WATERMARK?
> I also agree with Piotr that currently they are independent mechanisms, 
> and they are basically the same for event time. 
>
> For more details, first there are some differences among the three scenarios 
> regarding finishing: for normal finish and stop-with-savepoint --drain, the 
> job would not be expected to be restarted, and for stop-with-savepoint the 
> job would be expected to restart later. 
>
> Then for finish / stop-with-savepoint --drain, currently Flink would emit 
> MAX_WATERMARK before the EndOfPartition. Besides, as we have discussed 
> before [1], endOfInput / finish() should also only be called for finish / 
> stop-with-savepoint --drain. Thus currently they always occur at the same 
> time. After the change, we could emit MAX_WATERMARK before the endOfInput 
> event for the finish / stop-with-savepoint --drain cases.
>
>> 2) StreamOperator.finish says to flush all buffered events. Would a
>> WindowOperator close all windows and emit the results upon calling
>> finish, for example?
> As discussed above, for stop-with-savepoint we would always keep the 
> windows as they are and restore them after restart. 
> Then for finish / stop-with-savepoint --drain, I think it depends on the 
> Triggers. For event-time / processing-time triggers, it would be reasonable 
> to flush all the windows, since logically time always elapses and the 
> window would always get triggered in a logical future. But for triggers 
> like CountTrigger, no matter how much time passes logically, the windows 
> would not trigger, thus we may not flush these windows. If there are 
> requirements, we may provide additional triggers. 
>
>> It's a bit messy and I'm not sure if this should be straightened out? Each 
>> one of those has a slightly different semantic/meaning, 
>> but at the same time they are very similar. For single-input operators 
>> `endInput()` and `finish()` are actually the very same thing. 
> Currently MAX_WATERMARK / endInput / finish indeed always happen at the 
> same time, and for single-input operators `endInput()` and `finish()` 
> are indeed the same thing. During the last discussion we mentioned this 
> issue, and we thought at the time that we might deprecate `endInput()`
> in the future; then we would only have endInput(int input) and finish(). 
>
> Best,
> Yun
>
>
> [1] https://issues.apache.org/jira/browse/FLINK-21132
>
>
>
> --
> From: Piotr Nowojski 
> Send Time: 2021 Jul. 16 (Fri.) 13:48
> To: dev 
> Cc: Yun Gao 
> Subject: Re: [RESULT][VOTE] FLIP-147: Support Checkpoint After Tasks Finished
>
> Hi Till,
>
>> 1) Does endOfInput entail sending of the MAX_WATERMARK?
>>
>> 2) StreamOperator.finish says to flush all buffered events. Would a
>> WindowOperator close all windows and emit the results upon calling
>> finish, for example?
> 1) currently they are independent but parallel mechanisms. With event time, 
> they are basically the same.
> 2) it probably should for the sake of processing time windows.
>
> Here you are touching the bit of the current design that I like the least. We 
> basically have now three different ways of conveying very similar things:
> a) sending `MAX_WATERMARK`, used by event time WindowOperator (what about 
> processing time?)
> b) endInput(), used for example by AsyncWaitOperator to flush its internal 

[jira] [Created] (FLINK-23419) CheckpointBarrierTrackerTest.testNextFirstCheckpointBarrierOvertakesCancellationBarrier fails on Azure

2021-07-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23419:


 Summary: 
CheckpointBarrierTrackerTest.testNextFirstCheckpointBarrierOvertakesCancellationBarrier
 fails on Azure
 Key: FLINK-23419
 URL: https://issues.apache.org/jira/browse/FLINK-23419
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.12.4
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20620&view=logs&j=f0ac5c25-1168-55a5-07ff-0e88223afed9&t=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada&l=9236

{code}
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
CheckpointBarrierTrackerTest.testNextFirstCheckpointBarrierOvertakesCancellationBarrier:366
 
Expected: a value less than <30L>
 but: <30L> was equal to <30L>
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23418) 'Run kubernetes application HA test' fail on Azure

2021-07-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23418:


 Summary: 'Run kubernetes application HA test' fail on Azure
 Key: FLINK-23418
 URL: https://issues.apache.org/jira/browse/FLINK-23418
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes, Runtime / Coordination
Affects Versions: 1.13.1
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20589&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729&l=3747

{code}
Jul 16 23:58:49 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.actor.Actor$class.aroundReceive(Actor.scala:517) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at akka.actor.ActorCell.invoke(ActorCell.scala:561) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at akka.dispatch.Mailbox.run(Mailbox.scala:225) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at akka.dispatch.Mailbox.exec(Mailbox.scala:235) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 
[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka.tcp://flink@172.17.0.3:6123/user/rpc/jobmanager_2#2101744934]] 
after [1 ms]. Message of type 
[org.apache.flink.runtime.rpc.messages.RemoteFencedMessage]. A typical reason 
for `AskTimeoutException` is that the recipient actor didn't send a reply.
Jul 16 23:58:49 at 
akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 
scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
Jul 16 23:58:49 at 

[jira] [Created] (FLINK-23417) MiniClusterITCase.testHandleBatchJobsWhenNotEnoughSlot fails on Azure

2021-07-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23417:


 Summary: MiniClusterITCase.testHandleBatchJobsWhenNotEnoughSlot 
fails on Azure
 Key: FLINK-23417
 URL: https://issues.apache.org/jira/browse/FLINK-23417
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.12.4
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20588&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=7c61167f-30b3-5893-cc38-a9e3d057e392&l=8065

{code}
[ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 14.001 
s <<< FAILURE! - in org.apache.flink.runtime.minicluster.MiniClusterITCase
[ERROR] 
testHandleBatchJobsWhenNotEnoughSlot(org.apache.flink.runtime.minicluster.MiniClusterITCase)
  Time elapsed: 0.524 s  <<< FAILURE!
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.flink.runtime.minicluster.MiniClusterITCase.testHandleBatchJobsWhenNotEnoughSlot(MiniClusterITCase.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)