Re: apply for the flink contributor permission

2019-05-13 Thread Fabian Hueske
Hi,

Welcome to the Flink community,
I gave you contributor permissions for Jira.

Best, Fabian

Am Fr., 10. Mai 2019 um 14:13 Uhr schrieb clay clay :

> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is zhangzqit
>


Re: Apply for contributor permission

2019-05-13 Thread Fabian Hueske
Hi Jiyang,

Welcome to the Flink community!
I gave you contributor permissions for Jira.

Best, Fabian

Am Sa., 11. Mai 2019 um 05:15 Uhr schrieb bryan yan :

> Hi Flink community,
>
> This is Jiyang from Amazon applying for contributor permission. My id is
> yjiyang.
> Best,
> Jiyang
>


Re: apply for the flink contributor permission

2019-05-13 Thread Zhinan Cheng
Hello,

It seems that you replied to the wrong email.
This is Zhinan Cheng from CUHK.
I also want to contribute to Apache Flink and I have applied for the
contributor permission.
However, my JIRA ID is zncheng.
Would you please make sure that you also give me the contributor permission?

Regards,
Zhinan



On Mon, 13 May 2019 at 15:14, Fabian Hueske  wrote:

> Hi,
>
> Welcome to the Flink community,
> I gave you contributor permissions for Jira.
>
> Best, Fabian
>
> Am Fr., 10. Mai 2019 um 14:13 Uhr schrieb clay clay  >:
>
> > Hi,
> >
> > I want to contribute to Apache Flink.
> > Would you please give me the contributor permission?
> > My JIRA ID is zhangzqit
> >
>


Re: Applying for Contributor Permission

2019-05-13 Thread Fabian Hueske
Hi Zhinan,

Welcome to the Flink community!
I gave you contributor permissions for Jira.

Best, Fabian

Am So., 12. Mai 2019 um 09:48 Uhr schrieb Zhinan Cheng <
chingchi...@gmail.com>:

> Hello,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
>
> My JIRA id is zncheng.
> Thanks.
>
> Regards,
> Zhinan
>


Re: apply for the flink contributor permission

2019-05-13 Thread Fabian Hueske
Hi Zhinan,

Thanks for noticing and letting me know!
You should also have contributor permissions now.

Thanks, Fabian

Am Mo., 13. Mai 2019 um 09:23 Uhr schrieb Zhinan Cheng <
chingchi...@gmail.com>:

> Hello,
>
> It seems that you replied to the wrong email.
> This is Zhinan Cheng from CUHK.
> I also want to contribute to Apache Flink and I have applied for the
> contributor permission.
> However, my JIRA ID is zncheng.
> Would you please make sure that you also give me the contributor
> permission?
>
> Regards,
> Zhinan
>
>
>
> On Mon, 13 May 2019 at 15:14, Fabian Hueske  wrote:
>
> > Hi,
> >
> > Welcome to the Flink community,
> > I gave you contributor permissions for Jira.
> >
> > Best, Fabian
> >
> > Am Fr., 10. Mai 2019 um 14:13 Uhr schrieb clay clay <
> clay4me...@gmail.com
> > >:
> >
> > > Hi,
> > >
> > > I want to contribute to Apache Flink.
> > > Would you please give me the contributor permission?
> > > My JIRA ID is zhangzqit
> > >
> >
>


[jira] [Created] (FLINK-12496) Support translation from StreamExecGroupWindowAggregate to StreamTransformation.

2019-05-13 Thread Jing Zhang (JIRA)
Jing Zhang created FLINK-12496:
--

 Summary: Support translation from StreamExecGroupWindowAggregate 
to StreamTransformation.
 Key: FLINK-12496
 URL: https://issues.apache.org/jira/browse/FLINK-12496
 Project: Flink
  Issue Type: Task
  Components: Table SQL / Runtime
Reporter: Jing Zhang
Assignee: Jing Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Flink contributor permission application

2019-05-13 Thread Tong Sun
Hi,

I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is stong


Re: Flink contributor permission application

2019-05-13 Thread Fabian Hueske
Hi,

Welcome to the Flink community.
I've given you contributor permissions for Jira.

Best, Fabian

Am Mo., 13. Mai 2019 um 10:27 Uhr schrieb Tong Sun :

> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is stong
>


[jira] [Created] (FLINK-12497) Refactor the start method of ConnectionManager

2019-05-13 Thread zhijiang (JIRA)
zhijiang created FLINK-12497:


 Summary: Refactor the start method of ConnectionManager
 Key: FLINK-12497
 URL: https://issues.apache.org/jira/browse/FLINK-12497
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Reporter: zhijiang
Assignee: zhijiang


In the current `ConnectionManager#start(ResultPartitionProvider, 
TaskEventDispatcher)`, the start parameters are only meaningful for the 
`NettyConnectionManager` implementation and are redundant for 
`LocalConnectionManager`. 

We could instead pass these parameters to the constructor of 
`NettyConnectionManager`; `ConnectionManager#start()` would then be cleaner for 
both implementations. It would also benefit the caller in `NetworkEnvironment`, 
which would no longer need to maintain a private `TaskEventDispatcher`.
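A minimal sketch of the proposed shape, using simplified stand-ins for the Flink interfaces (the real ones live in the runtime network package); all names and signatures here are illustrative, not the actual Flink API:

```java
// Simplified stand-ins for the Flink types; only the proposed shape matters.
public class ConnectionManagerSketch {
    interface ResultPartitionProvider {}
    interface TaskEventDispatcher {}

    // After the refactor: a parameterless start() that suits both implementations.
    interface ConnectionManager {
        void start(); // was: start(ResultPartitionProvider, TaskEventDispatcher)
    }

    static class NettyConnectionManager implements ConnectionManager {
        private final ResultPartitionProvider partitionProvider;
        private final TaskEventDispatcher taskEventDispatcher;

        // The parameters move into the constructor, where only the
        // Netty implementation actually needs them.
        NettyConnectionManager(ResultPartitionProvider p, TaskEventDispatcher d) {
            this.partitionProvider = p;
            this.taskEventDispatcher = d;
        }

        @Override
        public void start() {
            // here the real class would wire partitionProvider and
            // taskEventDispatcher into the Netty server
            System.out.println("netty started");
        }
    }

    static class LocalConnectionManager implements ConnectionManager {
        // No redundant parameters to ignore any more.
        @Override
        public void start() {
            System.out.println("local started");
        }
    }

    public static void main(String[] args) {
        new NettyConnectionManager(new ResultPartitionProvider() {},
                new TaskEventDispatcher() {}).start();
        new LocalConnectionManager().start();
    }
}
```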





[jira] [Created] (FLINK-12498) java.io.IOException: Failed to fetch BLOB

2019-05-13 Thread JIRA
周瑞平 created FLINK-12498:
---

 Summary: java.io.IOException: Failed to fetch BLOB 
 Key: FLINK-12498
 URL: https://issues.apache.org/jira/browse/FLINK-12498
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.7.2
Reporter: 周瑞平


java.io.IOException: Failed to fetch BLOB 64037370bc0546e9a7b30086c0d3428f/p-89e9942f18b8597bd7b47d7f69fd12062b06cb51-392cc0951e8bb027e5ab884aa464be2a from Task-4/172.17.8.118:45039 and store it under /tmp/blobStore-231b7d54-427a-49ce-9791-d0cc42d0febf/incoming/temp-00154965
	at org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:169)
	at org.apache.flink.runtime.blob.AbstractBlobCache.getFileInternal(AbstractBlobCache.java:181)
	at org.apache.flink.runtime.blob.PermanentBlobCache.getFile(PermanentBlobCache.java:202)
	at org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerTask(BlobLibraryCacheManager.java:120)
	at org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:858)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:582)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Could not connect to BlobServer at address Task-4/172.17.8.118:45039
	at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:100)
	at org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:143)
	... 6 more
Caused by: java.net.SocketException: Too many open files
	at java.net.Socket.createImpl(Socket.java:460)
	at java.net.Socket.connect(Socket.java:587)
	at java.net.Socket.connect(Socket.java:538)
	at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:95)
	... 7 more





[jira] [Created] (FLINK-12499) JDBCAppendTableSink support retry

2019-05-13 Thread FaxianZhao (JIRA)
FaxianZhao created FLINK-12499:
--

 Summary: JDBCAppendTableSink support retry
 Key: FLINK-12499
 URL: https://issues.apache.org/jira/browse/FLINK-12499
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: FaxianZhao


JDBCOutputFormat will restart the whole job whenever it catches a SQLException, 
depending on the restart strategy. 
A network spike is therefore a serious problem for a streaming job with a JDBC 
sink; often a single additional attempt would save it. 
Since the common pattern for streaming jobs delivering messages at-least-once 
is to run idempotent queries, retrying more than once is acceptable.
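A sketch of the kind of bounded retry this issue asks for, wrapped around a flush-like action. The helper name, attempt count, and fixed backoff are illustrative assumptions, not Flink's actual JDBCOutputFormat API:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class RetrySketch {
    /** Retry the action up to maxAttempts times before giving up, so a
     *  transient network spike does not escalate into a full job restart. */
    static <T> T callWithRetry(Callable<T> action, int maxAttempts, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (SQLException e) { // only retry the recoverable case
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMs); // simple fixed backoff
                }
            }
        }
        throw last; // retries exhausted: let the restart strategy take over
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice (simulating a network spike), then succeeds.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new SQLException("connection reset");
            }
            return "flushed";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Because the queries are idempotent, re-executing a batch that may already have been partially applied stays within at-least-once semantics.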





Re: [VOTE] Release flink-shaded 7.0, release candidate 1

2019-05-13 Thread jincheng sun
Thanks for your vote, @Chesnay Schepler!

When we moved the modules, we copied the dependency versions from the
dependencyManagement section of the `flink-parent` pom into the specific
sub-modules inside `flink-shaded`. Do you mean that we should add a
`dependencyManagement` section in flink-shaded? I don't fully understand
the problem you are describing, so could you create a JIRA for it with a
description and a suggested solution? I will take the JIRA and try to
solve it :)

Best,
Jincheng

Chesnay Schepler  于2019年5月10日周五 下午8:35写道:

> -1
>
> We forgot to account for the dependencyManagement section of the root
> pom when moving the modules. Multiple dependencies are now no longer
> matching.
> Just to reduce the risk of something breaking I think we should make
> sure that the dependency set remains the same.
>
> On 08/05/2019 15:39, jincheng sun wrote:
> > Hi everyone,
> >
> > Please review and vote on the release candidate #1 for the version 7.0,
> as
> > follows:
> >
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> [2],
> > which are signed with the key with fingerprint 8FEA1EE9 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-7.0-rc1" [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > NOTE:
> > After I finished RC1, we found
> > https://issues.apache.org/jira/browse/FLINK-11580; Chesnay Schepler, Nico
> > and I agreed that it would be better to include FLINK-11580 in
> > flink-shaded-7.0.
> > But in order to find other problems earlier, we are voting on RC1 first.
> > If FLINK-11580 is completed within a week, I will be happy to prepare RC2.
> > If it takes longer, we will ship FLINK-11580 in a later release. Please
> > let me know what you think.
> >
> > Thanks,
> > Jincheng
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12345226&styleName=Html&projectId=12315522&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7C8ba061049bec0c5a72dc0191c47bb53a73b82cb4%7Clin
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-shaded-7.0-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1217
> > [5] https://github.com/apache/flink-shaded/tree/release-7.0-rc1
> > [6] https://github.com/apache/flink-web/pull/209
> >
>
>


[jira] [Created] (FLINK-12500) Flink flink-taskmanager restarts and running job also get killed

2019-05-13 Thread Chethan U (JIRA)
Chethan U created FLINK-12500:
-

 Summary: Flink flink-taskmanager restarts and running job also get 
killed 
 Key: FLINK-12500
 URL: https://issues.apache.org/jira/browse/FLINK-12500
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Cassandra, Deployment / Kubernetes
Affects Versions: 1.8.0
 Environment: GKE

Kubernetes
Reporter: Chethan U
 Attachments: jobmanager-deployment.yaml, jobmanager-service.yaml, 
taskmanager-deployment.yaml

I am running Flink on GKE

{{kubectl create -f jobmanager-service.yaml}} : [^jobmanager-service.yaml]

{{kubectl create -f jobmanager-deployment.yaml}} : 
[^jobmanager-deployment.yaml]
{{kubectl create -f taskmanager-deployment.yaml}} : 
[^taskmanager-deployment.yaml]
 
But the task manager gets restarted every now and then :(
 
||Revision||Name||Status||Restarts||Created on||
|1|flink-taskmanager-*|Running|2|May 13, 2019, 6:32:50 PM|
|1|flink-taskmanager-*|Running|1|May 13, 2019, 6:32:52 PM|
|1|flink-taskmanager-*|Running|2|May 13, 2019, 6:32:53 PM|

 

 





[jira] [Created] (FLINK-12501) AvroTypeSerializer does not work with types generated by avrohugger

2019-05-13 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-12501:


 Summary: AvroTypeSerializer does not work with types generated by 
avrohugger
 Key: FLINK-12501
 URL: https://issues.apache.org/jira/browse/FLINK-12501
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Aljoscha Krettek


The main problem is that the code in {{SpecificData.createSchema()}} tries to 
reflectively read the {{SCHEMA$}} field, that is normally there in Avro 
generated classes. However, avrohugger generates this field in a companion 
object, which the reflective Java code will therefore not find.

This is also described in these ML threads:
 * 
[https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E]
 * 
[https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E]





[jira] [Created] (FLINK-12502) Harden JobMasterTest#testRequestNextInputSplitWithDataSourceFailover

2019-05-13 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-12502:
-

 Summary: Harden 
JobMasterTest#testRequestNextInputSplitWithDataSourceFailover
 Key: FLINK-12502
 URL: https://issues.apache.org/jira/browse/FLINK-12502
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Tests
Affects Versions: 1.9.0
Reporter: Till Rohrmann


The {{JobMasterTest#testRequestNextInputSplitWithDataSourceFailover}} relies on 
how many files you have in your working directory. This assumption is quite 
brittle. Instead we should explicitly instantiate an {{InputSplitAssigner}} 
with a defined number of input splits. 

Moreover, we should make the assertions more explicit: Input split comparisons 
should not rely solely on the length of the input split data.

Maybe it is also not necessary to capture the full {{TaskDeploymentDescriptor}} 
because we could already know the producer's and consumer's {{JobVertexID}} 
when we create the {{JobGraph}}.





[jira] [Created] (FLINK-12503) Improve the exceptions [Proper Error]

2019-05-13 Thread Chethan U (JIRA)
Chethan U created FLINK-12503:
-

 Summary: Improve the exceptions [Proper Error]
 Key: FLINK-12503
 URL: https://issues.apache.org/jira/browse/FLINK-12503
 Project: Flink
  Issue Type: Bug
Reporter: Chethan U
 Fix For: 1.8.0
 Attachments: Screenshot 2019-05-13 at 10.49.50 PM.png

What does this even mean? 

```
{panel}
Root exception{panel}
{panel}
*Timestamp:* 2019-05-13, 22:49:31{panel}
{panel}
java.lang.NullPointerException{panel}
```

Yes, it says "java.lang.NullPointerException", but that doesn't explain anything...

!Screenshot 2019-05-13 at 10.49.50 PM.png!

Please improve the Exceptions and error outputs...





[jira] [Created] (FLINK-12504) NullPoint here NullPointException there.. It's every where

2019-05-13 Thread Chethan U (JIRA)
Chethan U created FLINK-12504:
-

 Summary: NullPoint here NullPointException there.. It's every where
 Key: FLINK-12504
 URL: https://issues.apache.org/jira/browse/FLINK-12504
 Project: Flink
  Issue Type: Bug
Reporter: Chethan U


I was trying to push data from Kafka to Cassandra; after around 220K, sometimes 
300K, points are pushed into C*, this java.lang.NullPointerException shows up:

```
java.lang.NullPointerException
at 
org.apache.flink.api.common.functions.util.PrintSinkOutputWriter.write(PrintSinkOutputWriter.java:73)
at 
org.apache.flink.streaming.api.functions.sink.PrintSinkFunction.invoke(PrintSinkFunction.java:81)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:649)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:602)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
at 
org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
at 
org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:711)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:97)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
```

How can normal Flink users understand these errors? The jobs keep failing, and 
it's too unstable to be considered for production... 

 

On the roadmap, are there plans to make Kotlin a supported language as well?





Adding Metrics to StreamingFileSink

2019-05-13 Thread Kailash Dayanand
Hello,

I was looking to add metrics to the StreamingFileSink. Currently the only
details available are the generic metrics reported for any operator, such as
the number of records in and out. I was looking at adding some metrics and
contributing them back, as well as enabling the metrics which are already
published by aws-hadoop. Is that something of value for the community?

Another change I am proposing is to make the constructor of
StreamingFileSink protected instead of private here:
https://tinyurl.com/y5vh4jn6. If we make it protected, it becomes possible
to extend the class and add custom metrics in the 'open' method.
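A sketch of what the proposal would enable. The Flink classes are replaced by minimal stand-ins (the real constructor is what the proposal would make protected), so every name here is illustrative rather than the actual StreamingFileSink API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class MetricSinkSketch {
    // Stand-in for Flink's MetricGroup: counters registered by name.
    static class MetricGroup {
        final Map<String, AtomicLong> counters = new HashMap<>();
        AtomicLong counter(String name) {
            return counters.computeIfAbsent(name, k -> new AtomicLong());
        }
    }

    // Stand-in for StreamingFileSink with the constructor made protected.
    static class StreamingFileSink<IN> {
        protected StreamingFileSink() {} // the change being proposed
        public void open(MetricGroup metrics) {}
        public void invoke(IN value) {}
    }

    // The kind of subclass the change would allow users to write.
    static class InstrumentedFileSink<IN> extends StreamingFileSink<IN> {
        private AtomicLong bytesWritten;

        @Override
        public void open(MetricGroup metrics) {
            // Custom metric registered once the operator starts.
            bytesWritten = metrics.counter("bytesWritten");
        }

        @Override
        public void invoke(IN value) {
            bytesWritten.addAndGet(value.toString().length());
        }
    }

    public static void main(String[] args) {
        MetricGroup metrics = new MetricGroup();
        InstrumentedFileSink<String> sink = new InstrumentedFileSink<>();
        sink.open(metrics);
        sink.invoke("hello");
        sink.invoke("world");
        System.out.println("bytesWritten=" + metrics.counter("bytesWritten").get());
    }
}
```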

Thanks
Kailash


[jira] [Created] (FLINK-12505) Unify database operations to HiveCatalogBase from its subclasses

2019-05-13 Thread Bowen Li (JIRA)
Bowen Li created FLINK-12505:


 Summary: Unify database operations to HiveCatalogBase from its 
subclasses
 Key: FLINK-12505
 URL: https://issues.apache.org/jira/browse/FLINK-12505
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.0


Currently each subclass of HiveCatalogBase has its own implementation of the 
catalog-related database APIs. We should unify them as much as possible.





[jira] [Created] (FLINK-12506) Add more over window unit tests

2019-05-13 Thread Kurt Young (JIRA)
Kurt Young created FLINK-12506:
--

 Summary: Add more over window unit tests
 Key: FLINK-12506
 URL: https://issues.apache.org/jira/browse/FLINK-12506
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime, Tests
Reporter: Kurt Young


We only have an ITCase for streaming over windows; we need to add more unit 
tests for the various process functions.





[jira] [Created] (FLINK-12507) Fix AsyncLookupJoin doesn't close all generated ResultFutures

2019-05-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-12507:
---

 Summary: Fix AsyncLookupJoin doesn't close all generated 
ResultFutures
 Key: FLINK-12507
 URL: https://issues.apache.org/jira/browse/FLINK-12507
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Reporter: Jark Wu
Assignee: Jark Wu


There is a fragile test in AsyncLookupJoinITCase: not all of the UDFs are 
closed at the end.

{code:java}
02:40:48.787 [ERROR] Tests run: 22, Failures: 2, Errors: 0, Skipped: 0, Time 
elapsed: 47.098 s <<< FAILURE! - in 
org.apache.flink.table.runtime.stream.sql.AsyncLookupJoinITCase
02:40:48.791 [ERROR] 
testAsyncJoinTemporalTableWithUdfFilter[StateBackend=HEAP](org.apache.flink.table.runtime.stream.sql.AsyncLookupJoinITCase)
  Time elapsed: 1.266 s  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<2>
at 
org.apache.flink.table.runtime.stream.sql.AsyncLookupJoinITCase.testAsyncJoinTemporalTableWithUdfFilter(AsyncLookupJoinITCase.scala:268)

02:40:48.794 [ERROR] 
testAsyncJoinTemporalTableWithUdfFilter[StateBackend=ROCKSDB](org.apache.flink.table.runtime.stream.sql.AsyncLookupJoinITCase)
  Time elapsed: 1.033 s  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<2>
at 
org.apache.flink.table.runtime.stream.sql.AsyncLookupJoinITCase.testAsyncJoinTemporalTableWithUdfFilter(AsyncLookupJoinITCase.scala:268)
{code}






Applying for permission as a contributor

2019-05-13 Thread Wei Sun
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is *Andrew Sun*

Best Regards,
Andrew