[jira] [Created] (FLINK-13035) LocalStreamEnvironment shall launch actual task slots

2019-07-01 Thread Wong (JIRA)
Wong created FLINK-13035:


 Summary: LocalStreamEnvironment shall launch actual task slots
 Key: FLINK-13035
 URL: https://issues.apache.org/jira/browse/FLINK-13035
 Project: Flink
  Issue Type: Wish
  Components: Runtime / Task
Affects Versions: 1.9.0
Reporter: Wong


When developing Flink jobs, it is sometimes useful to place operators in different 
slot sharing groups in order to get more threads. However, the MiniCluster currently 
uses the default jobGraph.getMaximumParallelism(), which can be smaller than the 
number of slots actually required, so the job cannot be launched unless 
TaskManagerOptions.NUM_TASK_SLOTS is set explicitly. Is this change needed?
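
For illustration, a minimal sketch of the workaround mentioned above: sizing the
embedded MiniCluster explicitly via TaskManagerOptions.NUM_TASK_SLOTS. The group
names, slot count and pipeline are made up for the example.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalSlotSharingGroupExample {

    public static void main(String[] args) throws Exception {
        // Give the embedded MiniCluster enough slots for operators placed in
        // different slot sharing groups.
        Configuration conf = new Configuration();
        conf.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 4);

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(2, conf);

        env.fromElements(1, 2, 3)
                .map(i -> i * 2).returns(Types.INT).slotSharingGroup("groupA")
                .map(i -> i + 1).returns(Types.INT).slotSharingGroup("groupB")
                .print();

        env.execute("slot-sharing-group example");
    }
}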

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13036) Elasticsearch ActionRequestFailureHandler can access RuntimeContext

2019-07-01 Thread FaxianZhao (JIRA)
FaxianZhao created FLINK-13036:
--

 Summary: Elasticsearch ActionRequestFailureHandler can access 
RuntimeContext
 Key: FLINK-13036
 URL: https://issues.apache.org/jira/browse/FLINK-13036
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Reporter: FaxianZhao


We cannot count exceptions via an accumulator in ActionRequestFailureHandler to 
verify end-to-end message-count consistency; the only options today are to log the 
failures or to use an external counter.
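
For context, a minimal sketch of what a failure handler can do today without a
RuntimeContext (logging only), assuming the connector's standard onFailure
signature; the class name and log message are illustrative.

import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.ActionRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingFailureHandler implements ActionRequestFailureHandler {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingFailureHandler.class);

    @Override
    public void onFailure(ActionRequest action, Throwable failure,
                          int restStatusCode, RequestIndexer indexer) throws Throwable {
        // No RuntimeContext is available here, so there is no accumulator to
        // increment; we can only log or push to an external metrics system.
        LOG.warn("Failed Elasticsearch request (status {}): {}", restStatusCode, action, failure);
    }
}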



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-01 Thread Konstantin Knauf (JIRA)
Konstantin Knauf created FLINK-13037:


 Summary: Translate "Concepts -> Glossary" page into Chinese
 Key: FLINK-13037
 URL: https://issues.apache.org/jira/browse/FLINK-13037
 Project: Flink
  Issue Type: Sub-task
Reporter: Konstantin Knauf






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.8.1, release candidate #1

2019-07-01 Thread Tzu-Li (Gordon) Tai
+1 (binding)

- checked signatures and hashes
- built from source without skipping tests (without Hadoop, Scala 2.12)
- no new dependencies were added since 1.8.0
- ran the end-to-end tests locally once, and in a loop specifically for
Kafka tests (to cover FLINK-11987)
- announcement PR for website looks good!

Cheers,
Gordon

On Mon, Jul 1, 2019 at 9:02 AM jincheng sun 
wrote:

> +1 (binding)
>
> With the following checks:
>
> - checked gpg signatures by `gpg --verify 181.asc flink-1.8.1-src.tgz`
> [success]
> - checked the hashes by `shasum -a 512 flink-1.8.1-src.tgz` [success]
> - built from source by `mvn clean package -DskipTests` [success]
> - download the `flink-core-1.8.1.jar` from `repository.apache.org`
> [success]
> - run the example(word count) in local [success]
>
> Cheers,
> Jincheng
>
> Jark Wu  wrote on Sun, Jun 30, 2019 at 9:21 PM:
>
> > +1 (non-binding)
> >
> > - built from source successfully
> > - checked signatures and hashes
> > - run a couple of end-to-end tests locally with success
> > - started a cluster both for scala-2.11 and scala-2.12, ran examples,
> WebUI
> > is accessible, no suspicious log output
> > - reviewed the release PR and left comments
> >
> > Cheers,
> > Jark
> >
> > On Thu, 27 Jun 2019 at 22:40, Hequn Cheng  wrote:
> >
> > > Hi Jincheng,
> > >
> > > Thanks a lot for the release which contains so many fixes!
> > >
> > > I have done the following checks:
> > >
> > > Local Tests
> > >   - Built from source archive successfully.
> > >   - Signatures and hash are correct.
> > >   - All artifacts have been deployed to the maven central repository.
> > >   - Run WordCount(batch&streaming) on Local cluster successfully.
> > >
> > > Cluster Tests
> > > Cluster environment: 7 nodes, jm 1024m, tm 4096m
> > > Testing Jobs: WordCount(batch&streaming), DataStreamAllroundTestProgram
> > >   - Read and write hdfs file successfully.
> > >   - Run jobs on YARN(with or without session) successfully
> > >   - Job failover and recovery successfully
> > >
> > > PR review
> > > - Left a minor comment. But I think it is not a blocker, we can just
> > update
> > > the PR directly.
> > >
> > > To sum up, I have not spotted any blockers, so +1(non-binding) from my
> > > side.
> > >
> > > Best, Hequn
> > >
> > > On Tue, Jun 25, 2019 at 4:52 PM jincheng sun  >
> > > wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > Please review and vote on the release candidate 1 for Flink 1.8.1, as
> > > > follows:
> > > >
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release and binary convenience releases
> to
> > > be
> > > > deployed to dist.apache.org [2], which are signed with the key with
> > > > fingerprint 8FEA1EE9D0048C0CCC70B7573211B0703B79EA0E [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag "release-1.8.1-rc1" [5],
> > > > * website pull request listing the new release [6]
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Cheers,
> > > > Jincheng
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345164
> > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.8.1-rc1/
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1229
> > > > [5]
> > > >
> > > >
> > >
> >
> https://github.com/apache/flink/commit/11ab983ed20068dac93efe7f234ffab9abc2926e
> > > > [6] https://github.com/apache/flink-web/pull/221
> > > >
> > >
> >
>


[jira] [Created] (FLINK-13038) CEP WITHIN doesn't support interval expressions

2019-07-01 Thread Danny (JIRA)
Danny created FLINK-13038:
-

 Summary: CEP WITHIN doesn't support interval expressions
 Key: FLINK-13038
 URL: https://issues.apache.org/jira/browse/FLINK-13038
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.8.0
Reporter: Danny


The CEP WITHIN clause does not support interval expressions such as "INTERVAL A.windowTime + 10".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13039) Clean up thrown exception lists of `ResultSubpartitionView#getNextBuffer`

2019-07-01 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-13039:
--

 Summary: Clean up thrown exception lists of 
`ResultSubpartitionView#getNextBuffer`
 Key: FLINK-13039
 URL: https://issues.apache.org/jira/browse/FLINK-13039
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.9.0
Reporter: Piotr Nowojski


After implementing https://issues.apache.org/jira/browse/FLINK-12070, the declared 
exception list of {{ResultSubpartitionView#getNextBuffer}} is no longer accurate and 
should be cleaned up. This also allows us to finally clean up the exception list of 
{{InputGate#pollNext}} and remove the confusing {{InterruptedException}} from there 
as well (the confusing part being that a supposedly non-blocking operation was 
throwing {{InterruptedException}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS]Support Upsert mode for Streaming Non-window FlatAggregate

2019-07-01 Thread Hequn Cheng
Hi Kurt,

Thanks for your questions. Here are my thoughts.

> if I want to write such a function, should I make sure that this
> function is used with some keys?
The key information may not be used. We can also use RetractSink to emit
the table directly.

> If I need a use case to calculate TopN without a key, should I write
> another function, or can I reuse the previous one?
For the TopN example, you can reuse the previous function if you don't care
about the key information.

So, the key information is only an indicator (or a description), not an
operator, as Jincheng mentioned above.
We do not need to change the function logic, and it will not add any other
aggregations.

BTW, we have three approaches in the document. Approach 1 defines keys at the
API level, as we think it is more common to define keys on a Table, while
approach 3 defines keys in the TableAggregateFunction, which is more precise
but not very clear for Table users. So, we should take all of this into
consideration and make the decision in this discussion thread.

Please take a look at the document; any suggestions or better solutions are
welcome.

Best, Hequn


On Mon, Jul 1, 2019 at 12:13 PM Kurt Young  wrote:

> Hi Jincheng,
>
> Thanks for the clarification. Take 'TopN' for example: if I want to write
> such a function, should I make sure that this function is used with some
> keys? If I need a use case to calculate TopN without a key, should I write
> another function, or can I reuse the previous one?
>
> I'm not sure about the idea that this does not involve semantic changes. To
> me, it sounds like we are doing another nested aggregation inside the table
> which the TableAggregateFunction emits.
>
> Maybe I'm not familiar enough with this function; I hope you can help me
> understand.
>
> Best,
> Kurt
>
>
> On Mon, Jul 1, 2019 at 11:59 AM jincheng sun 
> wrote:
>
> > Hi Kurt,
> >
> > Thanks for your questions, I am glad to share my thoughts here:
> >
> > > My question is, will that affect the logic of the TableAggregateFunction
> > > the user wrote? Should the user know that there will be a key and make
> > > some changes to this function?
> >
> >
> > No, the key information depends on the implementation of the
> > TableAggregateFunction.
> > For example, for a `topN` user-defined TableAggregateFunction, we can only
> > use the `keys` if the `topN` contains a `rankid` in the output. You can
> > treat the `keys` like an indicator.
> >
> > > If not, how will the framework deal with the output of the user's
> > > TableAggregateFunction? If the user outputs multiple rows with the same
> > > key, should the latter one replace the previous ones?
> >
> >
> > If a TableAggregateFunction outputs multiple rows with the same key, the
> > latter one should replace the previous one, in either upsert mode or
> > retract mode. I.e., whether the user defines the key or not, the Flink
> > framework should ensure the correctness of the semantics.
> >
> > At present, the problem we are discussing does not involve semantic
> > changes. The definition of keys is there to support non-window flatAggregate
> > in upsert mode. (Upsert mode is already supported in the Flink framework;
> > the current discussion only needs to inform the framework of the key
> > information, which is the `withKeys` API we are discussing.)
> >
> > Welcome any other feedbacks :)
> >
> > Best,
> > Jincheng
> >
> > Kurt Young  wrote on Mon, Jul 1, 2019 at 9:23 AM:
> >
> > > Hi,
> > >
> > > I have a question about the key information of TableAggregateFunction.
> > > IIUC, you need to define something like a primary key or unique key in
> > > the result table of the TableAggregateFunction, and also need a way to
> > > let the user configure this through the API. My question is, will that
> > > affect the logic of the TableAggregateFunction the user wrote? Should the
> > > user know that there will be a key and make some changes to this function?
> > >
> > > If so, what semantics should the user learn? If not, how will the
> > > framework deal with the output of the user's TableAggregateFunction? For
> > > example, if the user outputs multiple rows with the same key, should the
> > > latter one replace the previous ones?
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Mon, Jul 1, 2019 at 7:19 AM jincheng sun 
> > > wrote:
> > >
> > > > Hi Hequn, thanks for the reply! I think the `withKeys` solution is our
> > > > better choice!
> > > >
> > > >
> > > > > Hequn Cheng  wrote on Wed, Jun 26, 2019 at 5:11 PM:
> > > >
> > > > > Hi Jincheng,
> > > > >
> > > > > Thanks for raising the discussion!
> > > > > The key information is very important for query optimizations. It
> > > > > would be nice if we can use upsert mode to achieve better performance.
> > > > >
> > > > > +1 for the `withKeys` proposal. :)
> > > > >
> > > > > Best, Hequn
> > > > >
> > > > >
> > > > > On Wed, Jun 26, 2019 at 4:37 PM jincheng sun <
> > sunjincheng...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > With the continuou

[jira] [Created] (FLINK-13040) Promote Blink config options and add their descriptions to the documentation

2019-07-01 Thread XuPingyong (JIRA)
XuPingyong created FLINK-13040:
--

 Summary: Promote Blink config options and add their descriptions to 
the documentation
 Key: FLINK-13040
 URL: https://issues.apache.org/jira/browse/FLINK-13040
 Project: Flink
  Issue Type: Task
  Components: Documentation
Affects Versions: 1.9.0
Reporter: XuPingyong
 Fix For: 1.9.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13041) Make ScheduleMode configurable on ExecutionConfig

2019-07-01 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-13041:


 Summary: Make ScheduleMode configurable on ExecutionConfig
 Key: FLINK-13041
 URL: https://issues.apache.org/jira/browse/FLINK-13041
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


The new Blink-based Table API Runner currently needs a way of globally setting 
the {{ScheduleMode}} (either {{LAZY_FROM_SOURCES}} or {{EAGER}}) to support 
both batch-style and streaming-style execution.

This is a stop-gap solution. Ideally, {{Transformations}} would know whether 
they are bounded or not and the scheduler could decide the scheduling mode 
based on that information but this requires more work on both the Table API 
Runner and the DAG API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.8.1, release candidate #1

2019-07-01 Thread Aljoscha Krettek
+1 (binding)

 - I checked the diff in the POM files since 1.8.0 and they look good, i.e. no 
new dependencies that could lead to licensing problems

> On 1. Jul 2019, at 10:02, Tzu-Li (Gordon) Tai  wrote:
> 
> +1 (binding)
> 
> - checked signatures and hashes
> - built from source without skipping tests (without Hadoop, Scala 2.12)
> - no new dependencies were added since 1.8.0
> - ran the end-to-end tests locally once, and in a loop specifically for
> Kafka tests (to cover FLINK-11987)
> - announcement PR for website looks good!
> 
> Cheers,
> Gordon
> 
> On Mon, Jul 1, 2019 at 9:02 AM jincheng sun 
> wrote:
> 
>> +1 (binding)
>> 
>> With the following checks:
>> 
>> - checked gpg signatures by `gpg --verify 181.asc flink-1.8.1-src.tgz`
>> [success]
>> - checked the hashes by `shasum -a 512 flink-1.8.1-src.tgz` [success]
>> - built from source by `mvn clean package -DskipTests` [success]
>> - download the `flink-core-1.8.1.jar` from `repository.apache.org`
>> [success]
>> - run the example(word count) in local [success]
>> 
>> Cheers,
>> Jincheng
>> 
>> Jark Wu  wrote on Sun, Jun 30, 2019 at 9:21 PM:
>> 
>>> +1 (non-binding)
>>> 
>>> - built from source successfully
>>> - checked signatures and hashes
>>> - run a couple of end-to-end tests locally with success
>>> - started a cluster both for scala-2.11 and scala-2.12, ran examples,
>> WebUI
>>> is accessible, no suspicious log output
>>> - reviewed the release PR and left comments
>>> 
>>> Cheers,
>>> Jark
>>> 
>>> On Thu, 27 Jun 2019 at 22:40, Hequn Cheng  wrote:
>>> 
 Hi Jincheng,
 
 Thanks a lot for the release which contains so many fixes!
 
 I have done the following checks:
 
 Local Tests
  - Built from source archive successfully.
  - Signatures and hash are correct.
  - All artifacts have been deployed to the maven central repository.
  - Run WordCount(batch&streaming) on Local cluster successfully.
 
 Cluster Tests
 Cluster environment: 7 nodes, jm 1024m, tm 4096m
 Testing Jobs: WordCount(batch&streaming), DataStreamAllroundTestProgram
  - Read and write hdfs file successfully.
  - Run jobs on YARN(with or without session) successfully
  - Job failover and recovery successfully
 
 PR review
 - Left a minor comment. But I think it is not a blocker, we can just
>>> update
 the PR directly.
 
 To sum up, I have not spotted any blockers, so +1(non-binding) from my
 side.
 
 Best, Hequn
 
 On Tue, Jun 25, 2019 at 4:52 PM jincheng sun >> 
 wrote:
 
> Hi everyone,
> 
> Please review and vote on the release candidate 1 for Flink 1.8.1, as
> follows:
> 
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> The complete staging area is available for your review, which
>> includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases
>> to
 be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 8FEA1EE9D0048C0CCC70B7573211B0703B79EA0E [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.8.1-rc1" [5],
> * website pull request listing the new release [6]
> 
> The vote will be open for at least 72 hours. It is adopted by
>> majority
> approval, with at least 3 PMC affirmative votes.
> 
> Cheers,
> Jincheng
> 
> [1]
> 
> 
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345164
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.8.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
 https://repository.apache.org/content/repositories/orgapacheflink-1229
> [5]
> 
> 
 
>>> 
>> https://github.com/apache/flink/commit/11ab983ed20068dac93efe7f234ffab9abc2926e
> [6] https://github.com/apache/flink-web/pull/221
> 
 
>>> 
>> 



[jira] [Created] (FLINK-13042) Make slot sharing configurable on ExecutionConfig

2019-07-01 Thread Biao Liu (JIRA)
Biao Liu created FLINK-13042:


 Summary: Make slot sharing configurable on ExecutionConfig
 Key: FLINK-13042
 URL: https://issues.apache.org/jira/browse/FLINK-13042
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Reporter: Biao Liu
Assignee: Biao Liu
 Fix For: 1.9.0


The Blink batch planner requires a global setting for disabling slot sharing. To 
support that, we will expose a {{PublicEvolving}} method on {{ExecutionConfig}} to 
globally disable slot sharing.

Note that this method might be removed if a better approach to satisfying the Blink 
batch planner is found in the future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13043) Fix the bug of parsing Dewey number from string

2019-07-01 Thread Liya Fan (JIRA)
Liya Fan created FLINK-13043:


 Summary: Fix the bug of parsing Dewey number from string
 Key: FLINK-13043
 URL: https://issues.apache.org/jira/browse/FLINK-13043
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Reporter: Liya Fan
Assignee: Liya Fan


There is a bug in the current implementation for parsing the Dewey number:

String[] splits = deweyNumberString.split("\\.");

if (splits.length == 0) {
    return new DeweyNumber(Integer.parseInt(deweyNumberString));
}

The length in the if condition should be 1 instead of 0.
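
For clarity, a minimal sketch of the corrected check; the surrounding DeweyNumber
parsing code is assumed, only the condition changes.

String[] splits = deweyNumberString.split("\\.");

// split() on a non-empty string never yields a zero-length array, so the
// "plain integer without dots" case must be detected with length == 1.
if (splits.length == 1) {
    return new DeweyNumber(Integer.parseInt(deweyNumberString));
}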

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Fwd: Flink 1.6.4 Issue on RocksDB incremental checkpoints and fs.default-scheme

2019-07-01 Thread Andrea Spina
Dear community, I am running into the following issue: whenever I use
RocksDB as state backend along with incremental checkpoints, I get the
following error:

Caused by: java.lang.Exception: Could not materialize checkpoint 1 for
operator Service Join SuperService (6/8).
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:942)
    ... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:53)
    at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:853)
    ... 5 more
Caused by: java.lang.IllegalStateException
    at org.apache.flink.util.Preconditions.checkState(Preconditions.java:179)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalSnapshotOperation.runSnapshot(RocksDBKeyedStateBackend.java:2568)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:50)
    ... 7 more

In my case, I am able to use incremental checkpoints with RocksDB as long
as I disable the *fs.default-scheme* property; in any other case, I get the
above error. Is this a known issue?

Hope this can help,
-- 
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT


Contributor permission

2019-07-01 Thread qin chaos
Hi,

I want to contribute to Apache Flink. Would you please give me the contributor 
permission?
My JIRA ID is 知一, and my full name is QinChao.

Thanks

QinChao


Chao Qin

HangZhou,ZheJiang Province,China
Email:  
chaos...@outlook.com
Tel: +86-18668011290


Re: Flink 1.6.4 Issue on RocksDB incremental checkpoints and fs.default-scheme

2019-07-01 Thread Yun Tang
Hi Andrea

The error happens when Flink tries to verify whether your local backup directory 
exists [1]. If you can reproduce this, could you please share your 
RocksDBStateBackend configuration and the value of `fs.default-scheme` you have 
configured? A task manager log with more details would also be very welcome.


[1] 
https://github.com/apache/flink/blob/6f4148180ba372a2c12c1d54bea8579350af6c98/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java#L2568

Best
Yun Tang

From: Andrea Spina 
Sent: Monday, July 1, 2019 20:06
To: dev@flink.apache.org
Subject: Fwd: Flink 1.6.4 Issue on RocksDB incremental checkpoints and 
fs.default-scheme

Dear community, I am running into the following issue: whenever I use
RocksDB as state backend along with incremental checkpoints, I get the
following error:

Caused by: java.lang.Exception: Could not materialize checkpoint 1 for
operator Service Join SuperService (6/8).
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:942)
    ... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:53)
    at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:853)
    ... 5 more
Caused by: java.lang.IllegalStateException
    at org.apache.flink.util.Preconditions.checkState(Preconditions.java:179)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalSnapshotOperation.runSnapshot(RocksDBKeyedStateBackend.java:2568)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:50)
    ... 7 more

In my case, I am able to use incremental checkpoints with RocksDB as long
as I disable the *fs.default-scheme* property; in any other case, I get the
above error. Is this a known issue?

Hope this can help,
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT


[jira] [Created] (FLINK-13044) Shading of AWS SDK in flink-s3-fs-hadoop results in ClassNotFoundExceptions

2019-07-01 Thread Sebastian J. (JIRA)
Sebastian J. created FLINK-13044:


 Summary: Shading of AWS SDK in flink-s3-fs-hadoop results in 
ClassNotFoundExceptions
 Key: FLINK-13044
 URL: https://issues.apache.org/jira/browse/FLINK-13044
 Project: Flink
  Issue Type: Bug
  Components: BuildSystem / Shaded, Connectors / Hadoop Compatibility
Affects Versions: 1.8.0
Reporter: Sebastian J.


 

 

FLINK-8439



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13045) Move Scala expression DSL to flink-table-api-scala

2019-07-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-13045:


 Summary: Move Scala expression DSL to flink-table-api-scala
 Key: FLINK-13045
 URL: https://issues.apache.org/jira/browse/FLINK-13045
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Timo Walther
Assignee: Timo Walther


Moves the implicit conversions to the target module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Flink 1.6.4 Issue on RocksDB incremental checkpoints and fs.default-scheme

2019-07-01 Thread Yun Tang
Hi Andrea

Unfortunately, the TM log you provided is not from the task manager on which the 
RocksDBStateBackend first failed. All tasks on that task manager were actually 
canceled by the job manager; you can find a lot of "Attempting to cancel task" 
messages before any task failed.

From your latest description, this problem happens regardless of fs.default-scheme. 
I actually wonder whether the previous error "Could not materialize checkpoint 1 for 
operator Service Join SuperService (6/8)" was really the root cause of your job's 
first failover; it might have been caused by another task's failure, after which the 
JM cancelled this task, leading to that directory being cleaned up.

I suggest searching your job manager's log for the first failed task's exception and 
locating the task manager on which that task ran. That task manager's log should 
contain useful messages. If possible, please provide your job manager's log.

Best
Yun Tang

From: Andrea Spina 
Sent: Monday, July 1, 2019 23:14
To: dev@flink.apache.org
Subject: Re: Flink 1.6.4 Issue on RocksDB incremental checkpoints and 
fs.default-scheme

Hi Yun,
the RocksDB configuration is set as follows:
```
RocksDB write-buffer size: 512MB
RocksDB BlockSize (cache) [B/K/M]: 128MB
Checkpoints directory: hdfs:///address-to-hdfs-chkp-dir:8020/flink/checkpoints
enable Checkpoints: true
Rocksdb cache index and filters true
RocksDB thread No.: 4
Checkpoints interval: 6
RocksDB BlockSize [B/K/M]: 16KB
RocksDB write-buffer count: 5
Use incremental checkpoints: true
Rocksdb optimize hits: true
RocksDB write-buffer number to merge: 2
```

I use the RocksDBStateBackend class, but I observed the same result when using the 
configuration parameter state.backend.incremental: true.
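
For reference, a minimal sketch of how incremental checkpoints are enabled with the
RocksDBStateBackend class described above; the checkpoint URI, interval and pipeline
are placeholders.

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointSetup {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The second constructor argument enables incremental checkpoints
        // (equivalent to state.backend.incremental: true in flink-conf.yaml).
        env.setStateBackend(
                new RocksDBStateBackend("hdfs://namenode:8020/flink/checkpoints", true));
        env.enableCheckpointing(60_000);

        env.fromElements(1, 2, 3).print();
        env.execute("incremental-checkpoint example");
    }
}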

On Mon, 1 Jul 2019 at 14:41, Yun Tang <myas...@live.com> wrote:
Hi Andrea

The error happens when Flink tries to verify whether your local backup directory 
exists [1]. If you can reproduce this, could you please share your 
RocksDBStateBackend configuration and the value of `fs.default-scheme` you have 
configured? A task manager log with more details would also be very welcome.


[1] 
https://github.com/apache/flink/blob/6f4148180ba372a2c12c1d54bea8579350af6c98/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java#L2568

Best
Yun Tang

From: Andrea Spina <andrea.sp...@radicalbit.io>
Sent: Monday, July 1, 2019 20:06
To: dev@flink.apache.org
Subject: Fwd: Flink 1.6.4 Issue on RocksDB incremental checkpoints and 
fs.default-scheme

Dear community, I am running into the following issue: whenever I use
RocksDB as state backend along with incremental checkpoints, I get the
following error:

Caused by: java.lang.Exception: Could not materialize checkpoint 1 for
operator Service Join SuperService (6/8).
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:942)
    ... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:53)
    at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:853)
    ... 5 more
Caused by: java.lang.IllegalStateException
    at org.apache.flink.util.Preconditions.checkState(Preconditions.java:179)
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalSnapshotOperation.runSnapshot(RocksDBKeyedStateBackend.java:2568)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:50)
    ... 7 more

In my case, I am able to use incremental checkpoints with RocksDB as long
as I disable the *fs.default-scheme* property; in any other case, I get the
above error. Is this a known issue?

Hope this can help,
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT


--
Andrea Spina
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT


[jira] [Created] (FLINK-13046) rename hive-site-path to hive-conf-dir to be consistent with standard name in Hive

2019-07-01 Thread Bowen Li (JIRA)
Bowen Li created FLINK-13046:


 Summary: rename hive-site-path to hive-conf-dir to be consistent 
with standard name in Hive
 Key: FLINK-13046
 URL: https://issues.apache.org/jira/browse/FLINK-13046
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13047) Fix the Optional.orElse() usage issue in DatabaseCalciteSchema

2019-07-01 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created FLINK-13047:
---

 Summary: Fix the Optional.orElse() usage issue in 
DatabaseCalciteSchema 
 Key: FLINK-13047
 URL: https://issues.apache.org/jira/browse/FLINK-13047
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


It turns out that Optional.orElse() evaluates its argument eagerly, before 
returning Optional.get(). If that evaluation throws an exception, the call fails 
even when the Optional is non-empty. This is the case in 
{{DatabaseCalciteSchema.convertCatalogTable()}}.
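
A minimal standalone illustration of the difference between orElse() and orElseGet();
the class and values are made up for the example.

import java.util.Optional;

public class OrElseDemo {

    private static String expensiveOrThrowing() {
        throw new IllegalStateException("evaluated although the Optional is non-empty");
    }

    public static void main(String[] args) {
        Optional<String> table = Optional.of("t1");

        // orElse() evaluates its argument eagerly, so this line would throw
        // even though the Optional holds a value:
        // String broken = table.orElse(expensiveOrThrowing());

        // orElseGet() only invokes the supplier when the Optional is empty:
        String ok = table.orElseGet(OrElseDemo::expensiveOrThrowing);
        System.out.println(ok); // prints "t1"
    }
}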



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13048) support decimal in Flink's integration with Hive user defined functions

2019-07-01 Thread Bowen Li (JIRA)
Bowen Li created FLINK-13048:


 Summary: support decimal in Flink's integration with Hive user 
defined functions
 Key: FLINK-13048
 URL: https://issues.apache.org/jira/browse/FLINK-13048
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Contributor permission

2019-07-01 Thread ying
Dear Flink community:

Per an earlier discussion thread, I created FLINK-13027 and would like to
contribute to it.

Is it possible to grant me Contributor access? My JIRA username is yxu-apache.

Thanks in advance.

-
Ying

PS: I had contributed a few Flink JIRAs (FLINK-4582 and FLINK-10358) in the past,
using a similar but different account. Now I would like to consolidate and use a
common account for all my Apache project contributions.


[jira] [Created] (FLINK-13049) Port planner expressions to blink-planner from flink-planner

2019-07-01 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-13049:


 Summary: Port planner expressions to blink-planner from 
flink-planner
 Key: FLINK-13049
 URL: https://issues.apache.org/jira/browse/FLINK-13049
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Jingsong Lee
Assignee: Jingsong Lee






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13050) Counting more checkpoint failure reason in CheckpointFailureManager

2019-07-01 Thread vinoyang (JIRA)
vinoyang created FLINK-13050:


 Summary: Counting more checkpoint failure reason in 
CheckpointFailureManager
 Key: FLINK-13050
 URL: https://issues.apache.org/jira/browse/FLINK-13050
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: vinoyang
Assignee: vinoyang


Currently, {{CheckpointFailureManager}} only counts a few failure reasons in order 
to stay compatible with {{setFailOnCheckpointingErrors}}. However, 
{{setFailOnCheckpointingErrors}} has been deprecated in FLINK-11662. IMO, we can 
count more checkpoint failure reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] solve unstable build capacity problem on TravisCI

2019-07-01 Thread Bowen Li
By looking at the git history of the Jenkins script, its core part was
finished in March 2017 (with only two minor updates in 2017/2018), so it has
been running for over two years now and it seems the Zeppelin community has
been quite happy with it. @Jeff Zhang, can you share your insights and user
experience with the Jenkins+Travis approach?

Things like:

- has the approach completely solved the resource capacity problem for the
Zeppelin community? Is the Zeppelin community happy with the result?
- is the whole configuration chain stable enough (e.g. uptime)?
- how often do you need to maintain the Jenkins infra? How many people are
usually involved in maintenance and bug fixes?

To me, the downside of this approach seems to be mostly the maintenance:
maintaining the script and the Jenkins infra.

** Having Our Own Travis-CI.com Account **

Another alternative I've been thinking of is to have our own travis-ci.com
account with paid, dedicated resources. Note that travis-ci.org is the free
version and travis-ci.com is the commercial version. We currently use a
shared resource pool managed by the ASF INFRA team on travis-ci.org, but we
have no control over it: we can't see how it's configured, how many
resources are available, how resources are allocated among Apache projects,
etc. The nice things about having an account on travis-ci.com are:

- relatively low cost with much better resource guarantee than what we
currently have [1]: $249/month with 5 dedicated concurrency, $489/month
with 10 concurrency
- low maintenance work compared to using Jenkins
- (potentially) no migration cost according to Travis's doc [2] (pending
verification)
- full control over the build capacity/configuration compared to using ASF
INFRA's pool

I'd be surprised if we as such a vibrant community cannot find and fund
$249*12=$2988 a year in exchange for a much better developer experience and
much higher productivity.

[1] https://travis-ci.com/plans
[2] https://docs.travis-ci.com/user/migrate/open-source-repository-migration

On Sat, Jun 29, 2019 at 8:39 AM Chesnay Schepler  wrote:

> So yes, the Jenkins job keeps pulling the state from Travis until it
> finishes.
>
> Note sure I'm comfortable with the idea of using Jenkins workers just to
> idle for a several hours.
>
> On 29/06/2019 14:56, Jeff Zhang wrote:
> > Here's what the Zeppelin community did: we made a Python script to check
> > the build status of pull requests.
> > Here's the script:
> > https://github.com/apache/zeppelin/blob/master/travis_check.py
> >
> > And this is the script we used in Jenkins build job.
> >
> > if [ -f "travis_check.py" ]; then
> >git log -n 1
> >STATUS=$(curl -s $BUILD_URL | grep -e "GitHub pull request.*from.*" |
> sed
> > 's/.*GitHub pull request  > href=\"\(https[^"]*\).*from[^"]*.\(https[^"]*\).*/\1 \2/g')
> >AUTHOR=$(echo $STATUS | sed 's/.*[/]\(.*\)$/\1/g')
> >PR=$(echo $STATUS | awk '{print $1}' | sed 's/.*[/]\(.*\)$/\1/g')
> >#COMMIT=$(git log -n 1 | grep "^Merge:" | awk '{print $3}')
> >#if [ -z $COMMIT ]; then
> >#  COMMIT=$(curl -s
> https://api.github.com/repos/apache/zeppelin/pulls/$PR
> > | grep -e "\"label\":" -e "\"ref\":" -e "\"sha\":" | tr '\n' ' ' | sed
> > 's/\(.*sha[^,]*,\)\(.*ref.*\)/\1 = \2/g' | tr = '\n' | grep -v "apache:"
> |
> > sed 's/.*sha.[^"]*["]\([^"]*\).*/\1/g')
> >#fi
> >
> ># get commit hash from PR
> >COMMIT=$(curl -s
> https://api.github.com/repos/apache/zeppelin/pulls/$PR |
> > grep -e "\"label\":" -e "\"ref\":" -e "\"sha\":" | tr '\n' ' ' | sed
> > 's/\(.*sha[^,]*,\)\(.*ref.*\)/\1 = \2/g' | tr = '\n' | grep -v "apache:"
> |
> > sed 's/.*sha.[^"]*["]\([^"]*\).*/\1/g')
> >sleep 30 # sleep few moment to wait travis starts the build
> >RET_CODE=0
> >python ./travis_check.py ${AUTHOR} ${COMMIT} || RET_CODE=$?
> >if [ $RET_CODE -eq 2 ]; then # try with repository name when
> travis-ci is
> > not available in the account
> >  RET_CODE=0
> >  AUTHOR=$(curl -s
> https://api.github.com/repos/apache/zeppelin/pulls/$PR
> > | grep '"full_name":' | grep -v "apache/zeppelin" | sed
> > 's/.*[:][^"]*["]\([^/]*\).*/\1/g')
> >python ./travis_check.py ${AUTHOR} ${COMMIT} || RET_CODE=$?
> >fi
> >
> >if [ $RET_CODE -eq 2 ]; then # fail with can't find build information
> in
> > the travis
> >  set +x
> >  echo "-"
> >  echo "Looks like travis-ci is not configured for your fork."
> >  echo "Please setup by swich on 'zeppelin' repository at
> > https://travis-ci.org/profile and travis-ci."
> >  echo "And then make sure 'Build branch updates' option is enabled in
> > the settings https://travis-ci.org/${AUTHOR}/zeppelin/settings.";
> >  echo ""
> >  echo "To trigger CI after setup, you will need ammend your last
> commit
> > with"
> >  echo "git commit --amend"
> >  echo "git push your-remote HEAD --force"
> >  echo ""
> >  echo "See
> >
> http://zeppelin.apache.org/contribution/contributions.html#co

[DISCUSS] Publish the PyFlink into PyPI

2019-07-01 Thread jincheng sun
Hi all,

With the effort of FLIP-38 [1], the Python Table API (without UDF support
for now) will be supported in the coming release 1.9.
As described in "Build PyFlink"[2], if users want to use the Python Table
API, they can manually install it using the command:
"cd flink-python && python3 setup.py sdist && pip install dist/*.tar.gz".

This is non-trivial for users, and it would be better if we could follow the
Python way and publish PyFlink to PyPI, which is a repository of software for
the Python programming language. Then users could use the standard Python
package manager "pip" to install PyFlink: "pip install pyflink". So, there are
some topics that need to be discussed, as follows:

1. How to publish PyFlink to PyPI

1.1 Project Name
We need to decide which PyPI project name to use, for example apache-flink,
pyflink, etc.

Regarding the name "pyflink": it has already been registered by @ueqt, and a
package '1.0', based on flink-libraries/flink-python, has already been
released under that project.

@ueqt has kindly agreed to give this project back to the community, and he has
requested that the released package '1.0' not be removed, as it is already
used at his company.

So we need to decide whether to use the name 'pyflink'. If yes, we need to
figure out how to deal with the package '1.0' under this project.

From my point of view, "pyflink" is the better project name, and we can keep
the 1.0 release; maybe more people will want to use it.

1.2 PyPI account for release
We also need to decide which account to use to publish packages to PyPI.

There are two permissions in PyPI: owner and maintainer:

1) The owner can upload releases, delete files, releases or the entire
project.
2) The maintainer can also upload releases. However, they cannot delete
files, releases, or the project.

So there are two options in my mind:

1) Create an account such as 'pyflink' as the owner, share it with all the
release managers, and then the release managers publish the package to PyPI
using this account.
2) Create an account such as 'pyflink' as the owner (only the PMC can manage
it) and add the release managers' accounts as maintainers of the project.
Release managers then publish the package to PyPI using their own accounts.

As far as I know, PySpark takes option 1) and Apache Beam takes option 2).

From my point of view, I prefer option 2) as it is safer: it eliminates the
risk of accidentally deleting old releases and at the same time keeps a trace
of who performed each operation.

2. How to handle Scala_2.11 and Scala_2.12

The PyFlink package bundles the jars in the package. As we know, there are
two versions of the jars for each module: one for Scala 2.11 and the other for
Scala 2.12. So theoretically there will be two PyFlink packages. We need to
decide whether to publish one of them or both to PyPI. If both packages are
published to PyPI, we may need two separate projects, such as pyflink_211 and
pyflink_212, and maybe more in the future, such as pyflink_213.

(BTW, I think we should start a discussion about dropping Scala 2.11 in the
Flink 1.10 release, since Scala 2.13 became available in early June.)

From my point of view, for now we can release only the Scala 2.11 version,
since Scala 2.11 is the default version in Flink.

3. Legal problems of publishing to PyPI

As @Chesnay Schepler pointed out in FLINK-13011 [3], publishing PyFlink to
PyPI means that we will publish binaries to a distribution channel not owned
by Apache. We need to figure out whether there are legal problems. From my
point of view, there are none, as a few Apache projects such as Spark and Beam
have already done it. Frankly speaking, I am not very familiar with this
topic, so any feedback from someone more familiar with it is welcome.

Great thanks to @ueqt for being willing to donate the PyPI project name
`pyflink` to the Apache Flink community!!!
Great thanks to @Dian for the offline effort!!!

Best,
Jincheng

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API
[2]
https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/building.html#build-pyflink
[3] https://issues.apache.org/jira/browse/FLINK-13011


Re: [VOTE] Release 1.8.1, release candidate #1

2019-07-01 Thread jincheng sun
Thanks to everyone for checking the release and voting! I’ll tally the
results in a separate email.

Aljoscha Krettek  wrote on Mon, Jul 1, 2019 at 6:05 PM:

> +1 (binding)
>
>  - I checked the diff in the POM files since 1.8.0 and they look good,
> i.e. no new dependencies that could lead to licensing problems
>
> > On 1. Jul 2019, at 10:02, Tzu-Li (Gordon) Tai 
> wrote:
> >
> > +1 (binding)
> >
> > - checked signatures and hashes
> > - built from source without skipping tests (without Hadoop, Scala 2.12)
> > - no new dependencies were added since 1.8.0
> > - ran the end-to-end tests locally once, and in a loop specifically for
> > Kafka tests (to cover FLINK-11987)
> > - announcement PR for website looks good!
> >
> > Cheers,
> > Gordon
> >
> > On Mon, Jul 1, 2019 at 9:02 AM jincheng sun 
> > wrote:
> >
> >> +1 (binding)
> >>
> >> With the following checks:
> >>
> >> - checked gpg signatures by `gpg --verify 181.asc flink-1.8.1-src.tgz`
> >> [success]
> >> - checked the hashes by `shasum -a 512 flink-1.8.1-src.tgz` [success]
> >> - built from source by `mvn clean package -DskipTests` [success]
> >> - download the `flink-core-1.8.1.jar` from `repository.apache.org`
> >> [success]
> >> - run the example(word count) in local [success]
> >>
> >> Cheers,
> >> Jincheng
> >>
> >> Jark Wu  wrote on Sun, Jun 30, 2019 at 9:21 PM:
> >>
> >>> +1 (non-binding)
> >>>
> >>> - built from source successfully
> >>> - checked signatures and hashes
> >>> - run a couple of end-to-end tests locally with success
> >>> - started a cluster both for scala-2.11 and scala-2.12, ran examples,
> >> WebUI
> >>> is accessible, no suspicious log output
> >>> - reviewed the release PR and left comments
> >>>
> >>> Cheers,
> >>> Jark
> >>>
> >>> On Thu, 27 Jun 2019 at 22:40, Hequn Cheng 
> wrote:
> >>>
>  Hi Jincheng,
> 
>  Thanks a lot for the release which contains so many fixes!
> 
>  I have done the following checks:
> 
>  Local Tests
>   - Built from source archive successfully.
>   - Signatures and hash are correct.
>   - All artifacts have been deployed to the maven central repository.
>   - Run WordCount(batch&streaming) on Local cluster successfully.
> 
>  Cluster Tests
>  Cluster environment: 7 nodes, jm 1024m, tm 4096m
>  Testing Jobs: WordCount(batch&streaming),
> DataStreamAllroundTestProgram
>   - Read and write hdfs file successfully.
>   - Run jobs on YARN(with or without session) successfully
>   - Job failover and recovery successfully
> 
>  PR review
>  - Left a minor comment. But I think it is not a blocker, we can just
> >>> update
>  the PR directly.
> 
>  To sum up, I have not spotted any blockers, so +1(non-binding) from my
>  side.
> 
>  Best, Hequn
> 
>  On Tue, Jun 25, 2019 at 4:52 PM jincheng sun <
> sunjincheng...@gmail.com
> >>>
>  wrote:
> 
> > Hi everyone,
> >
> > Please review and vote on the release candidate 1 for Flink 1.8.1, as
> > follows:
> >
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which
> >> includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases
> >> to
>  be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint 8FEA1EE9D0048C0CCC70B7573211B0703B79EA0E [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.8.1-rc1" [5],
> > * website pull request listing the new release [6]
> >
> > The vote will be open for at least 72 hours. It is adopted by
> >> majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Cheers,
> > Jincheng
> >
> > [1]
> >
> >
> 
> >>>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345164
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.8.1-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> 
> https://repository.apache.org/content/repositories/orgapacheflink-1229
> > [5]
> >
> >
> 
> >>>
> >>
> https://github.com/apache/flink/commit/11ab983ed20068dac93efe7f234ffab9abc2926e
> > [6] https://github.com/apache/flink-web/pull/221
> >
> 
> >>>
> >>
>
>


[RESULT] [VOTE] Release 1.8.1, release candidate #1

2019-07-01 Thread jincheng sun
I'm happy to announce that we have unanimously approved the 1.8.1 release.

There are 5 approving votes, 3 of which are binding:
* Hequn
* Jark
* Jincheng
* Gordon
* Aljoscha

There are no disapproving votes.

Thanks everyone!


[jira] [Created] (FLINK-13051) Drop the non-selectable two-input StreamTask and Processor

2019-07-01 Thread Haibo Sun (JIRA)
Haibo Sun created FLINK-13051:
-

 Summary: Drop the non-selectable two-input StreamTask and Processor
 Key: FLINK-13051
 URL: https://issues.apache.org/jira/browse/FLINK-13051
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: Haibo Sun
Assignee: Haibo Sun


Once `StreamTwoInputSelectableProcessor` supports `CheckpointBarrierHandler`, we 
should drop the non-selectable two-input StreamTask and Processor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)