[jira] [Created] (FLINK-12542) Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka test failed

2019-05-16 Thread vinoyang (JIRA)
vinoyang created FLINK-12542:


 Summary: Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka 
test failed
 Key: FLINK-12542
 URL: https://issues.apache.org/jira/browse/FLINK-12542
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Reporter: vinoyang


{code:java}
03:06:51.127 [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 132.873 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.Kafka011ITCase
03:06:51.128 [ERROR] 
testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
  Time elapsed: 30.699 s  <<< FAILURE!
java.lang.AssertionError: expected:<50> but was:
at 
org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
{code}
Error detail:
{code:java}
Test 
testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
 failed with:
java.lang.AssertionError: expected:<50> but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAutoOffsetRetrievalAndCommitToKafka(KafkaConsumerTestBase.java:352)
at 
org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
Log detail: [https://api.travis-ci.org/v3/job/533598881/log.txt]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


contributor permission

2019-05-16 Thread 王红斌
Hi,

I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is wanghongbin

Thanks




[jira] [Created] (FLINK-12541) Add deploy a Python Flink job and session cluster on Kubernetes support.

2019-05-16 Thread sunjincheng (JIRA)
sunjincheng created FLINK-12541:
---

 Summary: Add deploy a Python Flink job and session cluster on 
Kubernetes support.
 Key: FLINK-12541
 URL: https://issues.apache.org/jira/browse/FLINK-12541
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Affects Versions: 1.9.0
Reporter: sunjincheng


Add support for deploying a Python Flink job and a session cluster on Kubernetes.

We need the same deployment steps as for the Java job. Please see:
[https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12540) Kafka011ProducerExactlyOnceITCase#testExactlyOnceCustomOperator

2019-05-16 Thread vinoyang (JIRA)
vinoyang created FLINK-12540:


 Summary: 
Kafka011ProducerExactlyOnceITCase#testExactlyOnceCustomOperator
 Key: FLINK-12540
 URL: https://issues.apache.org/jira/browse/FLINK-12540
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Reporter: vinoyang


The test keeps printing this message until the timeout is exceeded:
{code:java}
17:56:34,950 INFO  
org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper  - 
> Failing mapper  0: count=690, totalCount=1000

{code}
Log details: [https://api.travis-ci.org/v3/job/533358203/log.txt]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Job recovery with task manager restart

2019-05-16 Thread Kim, Hwanju
Hi Thomas,

I have a question regarding the class loader issue, as it seems interesting.
My understanding is that at least the user class loader is unregistered and
re-registered (from/to the library cache on the TM) across task restarts. If I
understand it correctly, the unregistered one should be GCed as long as no
object loaded by it lingers across the restart. However, there is no guarantee
that a UDF cleans up everything in close(). I've seen libraries used in UDFs
let a daemon thread outlive the task, so any object loaded by the unregistered
user class loader and still referenced by that thread causes the class loader
to be leaked (the daemon threads themselves are leaked too, since new ones keep
being spawned, albeit singletons, under each newly registered class loader). If
a job keeps restarting, this behavior leads to metaspace OOM or running out of
threads. So my first question is whether the memory issue you've seen is a
Flink issue or a side effect caused by UDFs (as described above). My second
question is whether there's anything else beyond the class loader issue. Of
course, I also wonder whether any prior discussion is going on.
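
To make the pattern concrete, here is a minimal sketch (all class and thread
names are hypothetical) of a UDF that starts a daemon thread but never stops
it in close(), which is exactly the kind of leak I mean:
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class LeakyMapper extends RichMapFunction<String, String> {

    private transient ScheduledExecutorService reporter;

    @Override
    public void open(Configuration parameters) {
        reporter = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "side-reporter");
            t.setDaemon(true); // daemon: outlives the task across restarts
            return t;
        });
        reporter.scheduleAtFixedRate(() -> { /* report something */ }, 0, 1, TimeUnit.SECONDS);
    }

    @Override
    public String map(String value) {
        return value;
    }

    @Override
    public void close() {
        // Missing reporter.shutdownNow(): the surviving thread keeps
        // referencing classes loaded by the user class loader, so that
        // class loader can never be GCed after the task restarts.
    }
}
{code}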

Best,
Hwanju

On 5/16/19, 8:01 AM, "Thomas Weise"  wrote:

Hi,

When a job fails and is recovered by Flink, task manager JVMs are reused.
That can cause problems when the failed job wasn't cleaned up properly, for
example leaving behind the user class loader. This would manifest as a rising
baseline of memory usage, leading to a death spiral.

It would be good to provide an option that guarantees isolation, by
restarting the task manager processes. Managing the processes would depend
on how Flink is deployed, but the recovery sequence would need to provide a
hook for the user.

Has there been prior discussion or related work?

Thanks,
Thomas




[jira] [Created] (FLINK-12539) StreamingFileSink: Make the class extendable to customize for different usecases

2019-05-16 Thread Kailash Hassan Dayanand (JIRA)
Kailash Hassan Dayanand created FLINK-12539:
---

 Summary: StreamingFileSink: Make the class extendable to customize 
for different usecases
 Key: FLINK-12539
 URL: https://issues.apache.org/jira/browse/FLINK-12539
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / FileSystem
Reporter: Kailash Hassan Dayanand
Assignee: Kailash Hassan Dayanand


Currently the StreamingFileSink uses a builder pattern, and the actual
constructor of StreamingFileSink is private. This makes it hard to extend the
class in order to build on top of it and customize the sink (for example, to
add new metrics). I propose making the constructor protected, and doing the
same for the Builder interface.
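
As a rough illustration of what the change would enable (constructor and
builder signatures approximated, the metric is hypothetical), a subclass
could register custom metrics in open():
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// Sketch only: assumes the constructor and the nested BucketsBuilder
// become protected; the exact signature is approximated.
public class InstrumentedFileSink<IN> extends StreamingFileSink<IN> {

    private transient Counter partFilesCreated; // hypothetical custom metric

    protected InstrumentedFileSink(
            StreamingFileSink.BucketsBuilder<IN, ?> bucketsBuilder,
            long bucketCheckInterval) {
        super(bucketsBuilder, bucketCheckInterval);
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        partFilesCreated = getRuntimeContext().getMetricGroup().counter("partFilesCreated");
    }
}
{code}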

 

Discussion is here: 
[http://mail-archives.apache.org/mod_mbox/flink-dev/201905.mbox/%3CCAC27z=phl8+gw-ugmjkxbriseky9zimi2crpqvlpcnyupt8...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Job recovery with task manager restart

2019-05-16 Thread Thomas Weise
Hi,

When a job fails and is recovered by Flink, task manager JVMs are reused.
That can cause problems when the failed job wasn't cleaned up properly, for
example leaving behind the user class loader. This would manifest as a rising
baseline of memory usage, leading to a death spiral.

It would be good to provide an option that guarantees isolation, by
restarting the task manager processes. Managing the processes would depend
on how Flink is deployed, but the recovery sequence would need to provide a
hook for the user.

Has there been prior discussion or related work?

Thanks,
Thomas


[jira] [Created] (FLINK-12538) Network notifyDataAvailable() only called after getting a new buffer

2019-05-16 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-12538:
---

 Summary: Network notifyDataAvailable() only called after getting a 
new buffer
 Key: FLINK-12538
 URL: https://issues.apache.org/jira/browse/FLINK-12538
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.8.0, 1.7.2, 1.6.3, 1.9.0
Reporter: Nico Kruber


There is a potential regression in Flink 1.5+ which came with the low-latency
changes. Whenever the {{RecordWriter}} finishes a buffer, it first asks for a
new buffer and only then adds the finished buffer to the appropriate result
subpartition, which notifies Netty of the available data.

In back-pressured scenarios where all buffers from the local pool are taken, it
may happen that you do not immediately get a new buffer and have to wait for as
long as it takes to get one before Netty can make use of the finished network
buffer. Before 1.5, Flink always notified the downstream stack immediately.
Although we do still have the output flusher notifying Netty within at most
100ms (by default), the new behaviour may actually decrease throughput and
increase latency in back-pressured scenarios.

Having had a quick look at the code, changing this behaviour is probably not
too difficult, but care must be taken not to introduce additional locking or to
lock multiple times compared to now. Things to do/consider:
* {{PipelinedSubpartition#add()}} contains some optimisations to avoid
unnecessary flushes, but these conditions are checked under a lock -> try not
to acquire it twice
* {{RecordWriter#requestNewBufferBuilder()}} could therefore use an optimised
path: first try a non-blocking buffer builder request; if that fails,
notify/flush and fall back to a blocking request (see the sketch below)
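
A minimal sketch of that optimised path (tryGetBufferBuilder() is a
hypothetical non-blocking variant; flushTargetPartition() stands in for
whatever notifies Netty of the already-finished buffer; targetPartition is a
field of the surrounding class):
{code:java}
// Sketch only, not the current implementation.
private BufferBuilder requestNewBufferBuilder(int targetChannel)
        throws IOException, InterruptedException {
    // Fast path: non-blocking request.
    BufferBuilder bufferBuilder = targetPartition.tryGetBufferBuilder();
    if (bufferBuilder == null) {
        // No buffer immediately available: let Netty consume the finished
        // buffer before blocking on a new one.
        flushTargetPartition(targetChannel);
        bufferBuilder = targetPartition.getBufferBuilder();
    }
    return bufferBuilder;
}
{code}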

After talking to [~pnowojski] offline, we are not sure how grave the issue is 
and whether we would improve by changing it. If you are willing to take a look 
and have code changing the current behaviour, please verify that it does not 
cause any performance regression itself and actually does improve some scenario 
(shown by a performance test, e.g. via 
https://github.com/dataArtisans/flink-benchmarks ).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12537) Improve Documentation Build Time

2019-05-16 Thread Seth Wiesman (JIRA)
Seth Wiesman created FLINK-12537:


 Summary: Improve Documentation Build Time
 Key: FLINK-12537
 URL: https://issues.apache.org/jira/browse/FLINK-12537
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Seth Wiesman
Assignee: Seth Wiesman


Flink's documentation today uses Jekyll and makes heavy use of Liquid tags;
building the docs from scratch takes more than 3 minutes, and incremental
updates to a single page can still take 5-10 seconds.

This is an umbrella issue to profile the documentation build and improve
render times.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12536) Make BufferOrEventSequence#getNext() non-blocking

2019-05-16 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-12536:
--

 Summary: Make BufferOrEventSequence#getNext() non-blocking
 Key: FLINK-12536
 URL: https://issues.apache.org/jira/browse/FLINK-12536
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.9.0
Reporter: Piotr Nowojski


Currently it is non-blocking in the case of credit-based flow control (the
default); however, {{SpilledBufferOrEventSequence}} blocks on reading from the
file. We might want to consider reimplementing it to be non-blocking, with a
{{CompletableFuture isAvailable()}} method.

 

Otherwise we will block mailbox processing for the duration of the file read -
for example, we will block processing-time timers and, potentially in the
future, network flushes.
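
A minimal sketch of the non-blocking contract this implies (names assumed, not
the actual Flink API):
{code:java}
import java.io.IOException;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// Sketch only: callers poll for the next element and, when nothing is
// available, wait on the returned future instead of blocking in getNext().
public interface NonBlockingSequence<T> {

    /** Completes once pollNext() has a chance of returning an element. */
    CompletableFuture<?> isAvailable();

    /** Returns the next element if one is ready; never blocks. */
    Optional<T> pollNext() throws IOException;
}
{code}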

 

This is not a high-priority change, since it affects a non-default
configuration option AND, at the moment, only processing-time timers are
planned to be moved to the mailbox for 1.9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12535) Make CheckpointBarrierHandler non-blocking

2019-05-16 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-12535:
--

 Summary: Make CheckpointBarrierHandler non-blocking
 Key: FLINK-12535
 URL: https://issues.apache.org/jira/browse/FLINK-12535
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
 Environment: Replace blocking {{CheckpointBarrierHandler.getNext()}} 
method with {{poll()}} and {{CompletableFuture isAvailable()}}.
Reporter: Piotr Nowojski
Assignee: Piotr Nowojski






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] FLIP-39: Flink ML pipeline and ML libs

2019-05-16 Thread Aljoscha Krettek
Hi,

I had a look at the document mostly from a module structure/dependency 
structure perspective.

We should make the expected dependency structure explicit in the document.

From the discussion in the doc it seems that the intention is that flink-ml-lib 
should depend on flink-table-planner (the current, pre-blink Table API planner 
that has a dependency on the DataSet API and DataStream API). I think we should 
not have this because it ties the Flink ML implementation to a module that is 
going to be deprecated. As far as I understood, the intention for this new 
Flink ML module is to be the next generation approach, based on the Table API. 
If this is true, we should make sure that this only depends on the Table API 
and is independent of the underlying planner implementation. Especially if we 
want this to work with the new Blink-based planner that is currently being 
added to Flink.

What do you think?

Best,
Aljoscha

> On 10. May 2019, at 11:22, Shaoxuan Wang  wrote:
> 
> Hi everyone,
> 
> I created umbrella Jira FLINK-12470
>  for FLIP39 and added an
> "implementation plan" section in the google doc
> (https://docs.google.com/document/d/1StObo1DLp8iiy0rbukx8kwAJb0BwDZrQrMWub3DzsEo/edit#heading=h.pggjwvwg8mrx)
> 
> .
> We need your special attention on the organization of the modules/packages of
> flink-ml. @Aljoscha, @Till, @Rong, @Jincheng, @Becket, and all.
> 
> We anticipate a quick development growth of Flink ML in the next several
> releases. Several components (for instance, pipeline, mllib, model serving,
> ml integration test) need to be separated into different submodules.
> Therefore, we propose to create a new flink-ml module at the root, and add
> sub-modules for ml-pipeline and ml-lib of FLIP39, and potentially we
> can also design FLIP23 as another sub-module under this new flink-ml
> module (I will raise a discussion in FLIP23 ML thread about this). The
> legacy flink-ml module (under flink-libraries) can remain as it is and
> await deprecation in the future, or alternatively we can move it under
> this new flink-ml module and rename it to flink-dataset-ml. What do you
> think?
> 
> Looking forward to your feedback.
> 
> Regards,
> Shaoxuan
> 
> 
> On Tue, May 7, 2019 at 8:42 AM Rong Rong  wrote:
> 
>> Thanks for following up promptly and sharing the feedback @shaoxuan.
>> 
>> Yes I share the same view with you on the convergence of these 2 FLIPs
>> eventually. I also have some questions regarding the API as well as the
>> possible convergence challenges (especially current Co-processor approach
>> vs. FLIP-39's table API approach), I will follow up on the discussion
>> thread and the PR on FLIP-23 with you and Boris :-)
>> 
>> --
>> Rong
>> 
>> On Mon, May 6, 2019 at 3:30 AM Shaoxuan Wang  wrote:
>> 
>>> 
>>> Thanks for the feedback, Rong and Flavio.
>>> 
>>> @Rong Rong
>>>> There's another thread regarding a close-to-merge FLIP-23 implementation
>>>> [1]. I agree it might still be an early stage to talk about productionizing
>>>> and model serving. But it would be nice to keep in mind in the
>>>> design/implementation that ease of use for productionizing an ML pipeline is
>>>> also very important.
>>>> And if we can leverage the implementation in FLIP-23 in the future (some
>>>> adjustment might be needed), that would be super helpful.
>>> You raised a very good point. Actually, I have been reviewing FLIP23 for
>>> a while (mostly offline to help Boris polish the PR). FMPOV, FLIP23 and
>>> FLIP39 can be well unified at some point. Model serving in FLIP23 is
>>> actually a special case of “transformer/model” proposed in FLIP39. Boris's
>>> implementation of model serving can be designed as an abstract class on top
>>> of transformer/model interface, and then can be used by ML users as a
>>> certain ML lib.  I have some other comments WRT FLIP23 x FLIP39, I will
>>> reply to the FLIP23 ML later with more details.
>>> 
>>> @Flavio
>>>> I have read many discussions about Flink ML and none of them take into
>>>> account the ongoing efforts carried out by the Streamline H2020 project
>>>> [1] on this topic.
>>>> Have you tried to ping them? I think that both projects could benefit
>>>> from a joint effort on this side.
>>>> [1] https://h2020-streamline-project.eu/objectives/
>>> Thank you for your info. I was not aware of the Streamline H2020 project
>>> before. I just did a quick look at its website and GitHub. IMO these projects
>>> could be very good Flink ecosystem projects and can be built on top of ML
>>> pipeline & ML lib interfaces introduced in FLIP39. I will try to contact
>>> the owners of these projects to understand their plans and blockers of
>>> using Flink (if there is any). In the meantime, if you have a direct
>>> contact who might be interested in the ML pipeline & ML lib, 

[jira] [Created] (FLINK-12534) Reduce the test cost for Python API

2019-05-16 Thread sunjincheng (JIRA)
sunjincheng created FLINK-12534:
---

 Summary: Reduce the test cost for Python API
 Key: FLINK-12534
 URL: https://issues.apache.org/jira/browse/FLINK-12534
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Travis
Affects Versions: 1.9.0
Reporter: sunjincheng


Currently, we run the Python API Travis tests for Scala 2.12 / Java 9 / Hadoop
2.4.1. Since the Python API uses Py4J to communicate with the JVM, the test for
Java 9 is enough, and we can remove the tests for Scala 2.12 and Hadoop 2.4.1.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Proposal for Flink job execution/availability metrics impovement

2019-05-16 Thread Chesnay Schepler

On 16/05/2019 11:34, Piotr Nowojski wrote:
> Luckily it seems like those four issues/proposals could be
> implemented/discussed independently or in stages.

I fully agree, and believe we should split this thread. We will end up
discussing too many issues at once.


Nevertheless,

On 16/05/2019 11:34, Piotr Nowojski wrote:
> 1. Do we currently account state restore as "RUNNING"? If yes, this might be
> incorrect from your perspective.

I don't believe we do.

The Task state is set to running on the TM once the Invokable has been 
instantiated, but at that point we aren't even on the Streaming API 
level and hence haven't loaded anything. AFAIK this is all done in 
StreamTask#invoke which is called afterwards.


On 16/05/2019 11:34, Piotr Nowojski wrote:
> 2a. This might be more tricky if various Tasks are in various stages. For
> example in streaming, it should be safe to assume that the state of the job is
> the "minimum" of its Tasks' states, so the Job should be accounted as RUNNING
> only if all of the Tasks are either RUNNING or COMPLETED.
> 2b. However in batch - including DataStream jobs running against bounded data
> streams, like Blink SQL - this might be more tricky, since there are ongoing
> efforts to schedule parts of the job graph in stages, for example to not
> schedule the probe side of a join until the build side is done/completed.

I have my doubts that there's anything we can/should do here. The job
state works the way it does; I'd rather not change it now with so much
work on the scheduler going on, nor would I want metrics to report
something that is not in line with what is logged.






[jira] [Created] (FLINK-12533) Introduce TABLE_AGGREGATE_FUNCTION FunctionDefinition.Type

2019-05-16 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-12533:
---

 Summary: Introduce TABLE_AGGREGATE_FUNCTION FunctionDefinition.Type
 Key: FLINK-12533
 URL: https://issues.apache.org/jira/browse/FLINK-12533
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Hequn Cheng
Assignee: Hequn Cheng


Currently, there are four kinds of {{FunctionDefinition.Type}},
{code:java}
public enum Type {
    AGGREGATE_FUNCTION,
    SCALAR_FUNCTION,
    TABLE_FUNCTION,
    OTHER_FUNCTION
}
{code}
The type AGGREGATE_FUNCTION is used to express both AggregateFunction and
TableAggregateFunction. However, the two kinds of functions have different
semantics, so it would be nice to separate them more clearly by introducing
another type, TABLE_AGGREGATE_FUNCTION.
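
With the proposed addition, the enum would look like:
{code:java}
public enum Type {
    AGGREGATE_FUNCTION,
    SCALAR_FUNCTION,
    TABLE_FUNCTION,
    TABLE_AGGREGATE_FUNCTION, // proposed: dedicated type for TableAggregateFunction
    OTHER_FUNCTION
}
{code}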



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [Discuss]: Adding Metrics to StreamingFileSink

2019-05-16 Thread Till Rohrmann
Hi Kailash,

have you seen FLIP-33 [1] and the corresponding ML thread [2]. The scope of
this improvement proposal is to extend the set of standard metrics a
connector should offer. Maybe this can already solve your problem.

Concerning your second proposal for the StreamingFileSink, I think this
should be doable and help users to build their custom StreamingFileSink.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+Connector+Metrics
[2] https://www.mail-archive.com/dev@flink.apache.org/msg25296.html

Cheers,
Till

On Thu, May 16, 2019 at 2:38 AM Thomas Weise  wrote:

> +1 to both suggestions
>
> It should be possible to extend the connector (we run into the same issues
> with KinesisConsumer).
>
> Metrics are essential to understand the performance, especially for things
> like S3 writes, error, retries, memory buffers and so on.
>
> Thomas
>
> On 2019/05/15 07:43:39, Kailash Dayanand  wrote:
> > Hello,
> >
> > I was looking to add metrics to the streaming file sink. Currently the only
> > details available are the generic per-operator information, like the number
> > of records in, the number of records out, etc. I was looking at
> > adding some metrics and contributing back as well as enabling the metrics
> > which are already getting published by the aws-hadoop. Is that something
> > which is of value for the community?
> >
> > Another change I am proposing is to make the constructor of
> > StreamingFileSink protected instead of private here:
> > https://tinyurl.com/y5vh4jn6. If we can make this protected, then it is
> > possible to extend this class, and anyone can add custom metrics in
> > the 'open' method.
> >
> > Thanks
> > Kailash
> >
>


[jira] [Created] (FLINK-12532) Upgrade Avro to version 1.9.0

2019-05-16 Thread JIRA
Ismaël Mejía created FLINK-12532:


 Summary: Upgrade Avro to version 1.9.0
 Key: FLINK-12532
 URL: https://issues.apache.org/jira/browse/FLINK-12532
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Ismaël Mejía


Avro 1.9.0 was released with many nice features, including reduced size (1 MB
less), removed dependencies (no paranamer, no shaded Guava), and security
updates, so it is probably a worthwhile upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Proposal for Flink job execution/availability metrics impovement

2019-05-16 Thread Piotr Nowojski
Hi Hwanju,

Thanks for starting the discussion. Definitely any improvement in this area 
would be very helpful and valuable. Generally speaking +1 from my side, as long 
as we make sure that either such changes do not add performance overhead (which 
I think they shouldn’t) or they are optional. 

> Firstly, we need to account time for each stage of task execution such as 
> scheduling, deploying, and running, to enable better visibility of how long a 
> job takes in which stage while not running user functions.

Couple of questions/remarks:
1. Do we currently account state restore as “RUNNING”? If yes, this might be 
incorrect from your perspective.
2a. This might be more tricky if various Tasks are in various stages. For
example in streaming, it should be safe to assume that the state of the job is
the "minimum" of its Tasks' states, so the Job should be accounted as RUNNING
only if all of the Tasks are either RUNNING or COMPLETED.
2b. However in batch - including DataStream jobs running against bounded data
streams, like Blink SQL - this might be more tricky, since there are ongoing
efforts to schedule parts of the job graph in stages, for example to not
schedule the probe side of a join until the build side is done/completed.

>  Secondly, any downtime in each stage can be associated with a failure cause,
> which could be identified by the Java exception reported to the job manager on
> task failure or by an unhealthy task manager (Flink already maintains a cause,
> but it can be associated with an execution stage for causal tracking)

What exactly would you like to report here? A list of exceptions with the
downtime caused by each, for example: exception X caused the job to be down for
13 minutes - 1 minute scheduling, 1 minute deploying, 11 minutes state restore?

>  Thirdly, the downtime reason should be classified as a user- or system-induced
> failure. This needs an exception classifier that draws the line between
> user-defined functions (or the public API) and the Flink runtime - this is
> particularly challenging to get 100% accurate in one shot, due to its empirical
> nature and custom logic injection like serialization, so pluggable classifier
> filters are a must-have to enable incremental improvement. 

Why do you think about implementing classifiers? Couldn't we classify
exceptions by type, like `FlinkUserException`, `FlinkNetworkException`,
`FlinkStateBackendException`, … and make sure that we throw the correct
exception types and handle/wrap exceptions correctly when crossing the Flink
system/user code border? This way we could know exactly whether an exception
occurred in user code or in Flink code.
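
A minimal sketch of that type-based approach (FlinkUserException is the
hypothetical marker type named above; the wrapping helper is made up for
illustration):
{code:java}
import org.apache.flink.api.common.functions.MapFunction;

// Sketch only: wrap anything thrown while crossing into user code, so
// classification reduces to a type check.
public final class UserCodeBorder {

    public static class FlinkUserException extends Exception {
        public FlinkUserException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    public static <I, O> O invokeMap(MapFunction<I, O> userFunction, I record)
            throws FlinkUserException {
        try {
            return userFunction.map(record);
        } catch (Exception e) {
            throw new FlinkUserException("Error in user-defined function", e);
        }
    }

    public static boolean isUserInduced(Throwable t) {
        return t instanceof FlinkUserException;
    }
}
{code}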

One thing that might be tricky is if error in Flink code is caused by user’s 
mistake.


>  Fourthly, stuck progress

Hmmm, this might be tricky. We can quite easily detect which exact Task is 
causing back pressure in at least couple of different ways. Tricky part would 
be to determine whether this is caused by user or not, but probably some simple 
stack trace probing on back pressured task once every N seconds should solve 
this - similar how sampling profilers work.
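
For illustration, a minimal sketch of such probing (all names assumed; the
classification heuristic is deliberately crude):
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: periodically sample the task thread's stack and record
// whether the top frame is in user code or in Flink's runtime, in the
// spirit of a sampling profiler.
public final class StuckProgressProbe {

    public static ScheduledExecutorService probe(Thread taskThread, long periodSeconds) {
        ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor();
        sampler.scheduleAtFixedRate(() -> {
            StackTraceElement[] trace = taskThread.getStackTrace();
            // Crude heuristic: classify by the top frame only.
            boolean inUserCode = trace.length > 0
                    && !trace[0].getClassName().startsWith("org.apache.flink.");
            // record the sample (user vs. framework) in a metric here
        }, 0, periodSeconds, TimeUnit.SECONDS);
        return sampler;
    }
}
{code}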

Luckily it seems like those four issues/proposals could be 
implemented/discussed independently or in stages.

Piotrek

> On 11 May 2019, at 06:50, Kim, Hwanju  wrote:
> 
> Hi,
> 
> I am Hwanju at AWS Kinesis Analytics. We would like to start a discussion 
> thread about a project we consider for Flink operational improvement in 
> production. We would like to start conversation early before detailed design, 
> so any high-level feedback would welcome.
> 
> For service providers who operate Flink in a multi-tenant environment, such 
> as AWS Kinesis Data Analytics, it is crucial to measure application health 
> and clearly differentiate application unavailability issue caused by Flink 
> framework or service environment from the ones caused by application code. 
> While the current metrics of Flink represent overall job availability over
> time, they still need to be improved to give Flink operators better insight
> into detailed application availability. The current availability metrics,
> such as uptime and downtime, measure time based on the running state of a
> job, which does not necessarily represent the actual running state (after a
> job transitions to running, each task still needs to be scheduled/deployed in
> order to run user-defined functions). The detailed view should enable 
> operators to have visibility on 1) how long each specific stage takes (e.g., 
> task scheduling or deployment), 2) what failure is introduced in which stage 
> leading to job downtime, 3) whether such failure is classified to user code 
> error (e.g., uncaught exception from user-defined function) or 
> platform/environmental errors (e.g., checkpointing issue, unhealthy nodes 
> hosting job/task managers, Flink bug). The last one is particularly needed to 
> allow Flink operators to define SLA where only a small fraction of downtime 
> should be introduced by 

[jira] [Created] (FLINK-12531) flink sql-client throw NoMatchingTableFactoryException

2019-05-16 Thread leishuiyu (JIRA)
leishuiyu created FLINK-12531:
-

 Summary: flink sql-client throw NoMatchingTableFactoryException
 Key: FLINK-12531
 URL: https://issues.apache.org/jira/browse/FLINK-12531
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.7.2
Reporter: leishuiyu


1. bin/sql-client.sh embedded -e conf/sql-client-cp.yaml

2. sql-client-cp.yaml
{code:java}
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.



# This file defines the default environment for Flink's SQL Client.
# Defaults might be overwritten by a session specific environment.


# See the Table API & SQL documentation for details about supported properties.


#==
# Tables
#==

# Define tables here such as sources, sinks, views, or temporal tables.

tables: [] # empty list
# A typical table source definition looks like:
# - name: ...
#   type: source-table
#   connector: ...
#   format: ...
#   schema: ...

# A typical view definition looks like:
# - name: ...
#   type: view
#   query: "SELECT ..."

# A typical temporal table definition looks like:
# - name: ...
#   type: temporal-table
#   history-table: ...
#   time-attribute: ...
#   primary-key: ...

#==
# User-defined functions
#==

tables:
  - name: MyUserTable     # name the new table
    type: source          # declare if the table should be "source", "sink", or "both"
    update-mode: append   # specify the update-mode for streaming tables

    # declare the external system to connect to
    connector:
      type: kafka
      version: "0.11"
      topic: test-input
      startup-mode: earliest-offset
      properties:
        - key: zookeeper.connect
          value: centos-6:2181
        - key: bootstrap.servers
          value: centos-6:9092

    # declare a format for this system
    format:
      type: avro
      avro-schema: >
        {
          "namespace": "org.myorganization",
          "type": "record",
          "name": "UserMessage",
          "fields": [
            {"name": "ts", "type": "string"},
            {"name": "user", "type": "long"},
            {"name": "message", "type": ["string", "null"]}
          ]
        }

    # declare the schema of the table
    schema:
      - name: rowtime
        type: TIMESTAMP
        rowtime:
          timestamps:
            type: from-field
            from: ts
          watermarks:
            type: periodic-bounded
            delay: "6"
      - name: user
        type: BIGINT
      - name: message
        type: VARCHAR
# Define scalar, aggregate, or table functions here.

functions: [] # empty list
# A typical function definition looks like:
# - name: ...
#   from: class
#   class: ...
#   constructor: ...

#==
# Execution properties
#==

# Execution properties allow for changing the behavior of a table program.

execution:
  # 'batch' or 'streaming' execution
  type: streaming
  # allow 'event-time' or only 'processing-time' in sources
  time-characteristic: event-time
  # interval in ms for emitting periodic watermarks
  periodic-watermarks-interval: 200
  # 'changelog' or 'table' presentation of results
  result-mode: table
  # maximum number of maintained rows in 'table' presentation of results
  max-table-result-rows: 100
  # parallelism of the program
  parallelism: 1
  # maximum parallelism
  max-parallelism: 128
  # minimum idle state retention in ms
  min-idle-state-retention: 0
  # maximum idle state retention in ms
  max-idle-state-retention: 0
  # controls how table programs are restarted in case of a failure
  restart-strategy:
    # strategy type
    # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
    type: fallback


#==
# Deployment properties
#==

# Deployment properties allow for describing the cluster to which table
# programs are submitted to.

deployment:
  # general cluster communication timeout in ms
  response-timeout: 5000
  # (optional) 

[jira] [Created] (FLINK-12530) Move Task.inputGatesById to NetworkEnvironment

2019-05-16 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12530:
---

 Summary: Move Task.inputGatesById to NetworkEnvironment
 Key: FLINK-12530
 URL: https://issues.apache.org/jira/browse/FLINK-12530
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Reporter: Andrey Zagrebin
Assignee: Andrey Zagrebin
 Fix For: 1.9.0


Task.inputGatesById indexes SingleInputGates by id. The only consumer of this
indexing is the NetworkEnvironment, in two cases:

- SingleInputGate triggers the producer partition readiness check, and the
successful result of the check is then dispatched back to this SingleInputGate
by id. We can just add an additional argument to
TaskActions.triggerPartitionProducerStateCheck: an immediate callback to that
SingleInputGate (see the sketch after this list). Then inputGatesById is not
needed for dispatching.

- TaskExecutor.updatePartitions uses inputGatesById to dispatch PartitionInfo
updates to the right SingleInputGate. If inputGatesById is moved to
NetworkEnvironment, which should be a better place for gate management, and we
add NetworkEnvironment.updatePartitionInfo, then TaskExecutor.updatePartitions
could call NetworkEnvironment.updatePartitionInfo directly.
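
A rough sketch of the callback variant from the first bullet (the first three
parameters mirror the existing method; the Consumer is the proposed addition,
and its exact type is hypothetical):
{code:java}
import java.util.function.Consumer;

import org.apache.flink.api.common.JobID;
import org.apache.flink.runtime.execution.ExecutionState;
import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;

// Sketch only: the callback is delivered straight to the requesting gate,
// so no inputGatesById lookup is needed when the check result comes back.
public interface TaskActions {
    void triggerPartitionProducerStateCheck(
            JobID jobId,
            IntermediateDataSetID intermediateDataSetId,
            ResultPartitionID resultPartitionId,
            Consumer<ExecutionState> producerStateConsumer);
}
{code}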



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release flink-shaded 7.0, release candidate 1

2019-05-16 Thread jincheng sun
Hi All,

Currently, https://issues.apache.org/jira/browse/FLINK-11580 has been merged
by @Chesnay Schepler! Many thanks!
And I have opened the PR (https://github.com/apache/flink-shaded/pull/62)
for FLINK-12515. I would appreciate it if you could review the changes.

After FLINK-12515 is merged, I'll prepare RC2 for flink-shaded 7.0. It's
better to find more potential problems before RC2, so please help check RC1
as much as possible. :)

Regards,
Jincheng





jincheng sun wrote on Wed, May 15, 2019 at 7:13 PM:

> Thanks a lot for opening the JIRA, @Chesnay Schepler. I
> will try to fix it ASAP. :)
>
> Chesnay Schepler wrote on Wed, May 15, 2019 at 5:40 PM:
>
>> I've opened a JIRA: https://issues.apache.org/jira/browse/FLINK-12515
>>
>> On 13/05/2019 12:03, jincheng sun wrote:
>> > Thanks for your vote! @Chesnay Schepler !
>> >
>> > When moving the modules, we copy the dependency version of the
>> > dependencyManagement in the `flink-parent` pom in the specific
>> sub-module
>> > inside the `flink-shaded`, do you meant that we should add
>> > `dependencyManagement` section in flink-shaded? I don't fully understand
>> > the problem you are talking about, so can you create a JIRA for the
>> problem
>> > and make a corresponding description and suggested solution? I will take
>> > the JIRA and try to solve it :)
>> >
>> > Best,
>> > Jincheng
>> >
>> > Chesnay Schepler wrote on Fri, May 10, 2019 at 8:35 PM:
>> >
>> >> -1
>> >>
>> >> We forgot to account for the dependencyManagement section of the root
>> >> pom when moving the modules. Multiple dependencies are now no longer
>> >> matching.
>> >> Just to reduce the risk of something breaking I think we should make
>> >> sure that the dependency set remains the same.
>> >>
>> >> On 08/05/2019 15:39, jincheng sun wrote:
>> >>> Hi everyone,
>> >>>
>> >>> Please review and vote on the release candidate #1 for the version
>> 7.0,
>> >> as
>> >>> follows:
>> >>>
>> >>> [ ] +1, Approve the release
>> >>> [ ] -1, Do not approve the release (please provide specific comments)
>> >>>
>> >>> The complete staging area is available for your review, which
>> includes:
>> >>> * JIRA release notes [1],
>> >>> * the official Apache source release to be deployed to
>> dist.apache.org
>> >> [2],
>> >>> which are signed with the key with fingerprint 8FEA1EE9 [3],
>> >>> * all artifacts to be deployed to the Maven Central Repository [4],
>> >>> * source code tag "release-7.0-rc1" [5],
>> >>> * website pull request listing the new release [6].
>> >>>
>> >>> The vote will be open for at least 72 hours. It is adopted by majority
>> >>> approval, with at least 3 PMC affirmative votes.
>> >>>
>> >>> NOTE:
>> >>> After I finished RC1, we found
>> >>> https://issues.apache.org/jira/browse/FLINK-11580; Chesnay Schepler,
>> >>> Nico, and I had reached an agreement that it was better to put
>> >>> FLINK-11580 in flink-shaded-7.0.
>> >>> But in order to find other problems earlier, we are voting on RC1
>> >>> first. If FLINK-11580 is completed within a week, I will be very
>> >>> willing to prepare RC2. If it takes a long time, we will ship
>> >>> FLINK-11580 in a later release. Please let me know what you think.
>> >>>
>> >>> Thanks,
>> >>> Jincheng
>> >>>
>> >>> [1]
>> >>>
>> >>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12345226=Html=12315522=Create_token=A5KQ-2QAV-T4JA-FDED%7C8ba061049bec0c5a72dc0191c47bb53a73b82cb4%7Clin
>> >>> [2]
>> https://dist.apache.org/repos/dist/dev/flink/flink-shaded-7.0-rc1/
>> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> >>> [4]
>> >> https://repository.apache.org/content/repositories/orgapacheflink-1217
>> >>> [5] https://github.com/apache/flink-shaded/tree/release-7.0-rc1
>> >>> [6] https://github.com/apache/flink-web/pull/209
>> >>>
>> >>
>>
>>


[jira] [Created] (FLINK-12529) Release buffers of the record deserializer timely to improve the efficiency of heap memory usage on taskmanager

2019-05-16 Thread Haibo Sun (JIRA)
Haibo Sun created FLINK-12529:
-

 Summary: Release buffers of the record deserializer timely to 
improve the efficiency of heap memory usage on taskmanager
 Key: FLINK-12529
 URL: https://issues.apache.org/jira/browse/FLINK-12529
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.8.0
Reporter: Haibo Sun
Assignee: Haibo Sun


In the input processors (`StreamInputProcessor` and `StreamTwoInputProcessor`),
each input channel has a corresponding record deserializer. Currently, these
record deserializers are cleaned up only at the end of the task (see
`StreamInputProcessor#cleanup()` and `StreamTwoInputProcessor#cleanup()`). This
is not a problem for unbounded streams, but it may reduce the efficiency of
heap memory usage on the taskmanager when the input is a bounded stream.

For example, in the case that all inputs are bounded streams, some may end very
early because they carry a small amount of data while others end very late
because they carry a large amount; the buffers of the record deserializers
corresponding to the channels that finished early are then idle for a long time
and no longer used.

In another case, when both unbounded and bounded streams exist in the inputs,
the buffers of the record deserializers corresponding to the bounded streams
are idle forever (no longer used) after the bounded streams finish. Especially
when the records and the upstream parallelism are large, the total size of
`SpanningWrapper#buffer` can be very large: its size is allowed to reach up to
5 MB, so if the upstream parallelism is 100, the maximum total size reaches
500 MB (in our production, there are jobs with record sizes up to hundreds of
KB and upstream parallelism up to 1000).

Overall, after receiving an `EndOfPartitionEvent` from an input channel, the
corresponding record deserializer should be cleared immediately to improve the
efficiency of heap memory usage on the taskmanager.
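
A minimal sketch of the proposed eager cleanup (a fragment; field names are
approximated from the current input processors):
{code:java}
// Sketch only: inside the input processor's event handling, release the
// deserializer of a channel as soon as its EndOfPartitionEvent arrives,
// instead of waiting for cleanup() at the end of the task.
if (bufferOrEvent.getEvent() instanceof EndOfPartitionEvent) {
    int channelIndex = bufferOrEvent.getChannelIndex();
    RecordDeserializer<DeserializationDelegate<StreamElement>> deserializer =
            recordDeserializers[channelIndex];
    if (deserializer != null) {
        deserializer.clear();                  // release the internal buffers
        recordDeserializers[channelIndex] = null;
    }
}
{code}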



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12528) Remove progressLock in ExecutionGraph

2019-05-16 Thread vinoyang (JIRA)
vinoyang created FLINK-12528:


 Summary: Remove progressLock in ExecutionGraph
 Key: FLINK-12528
 URL: https://issues.apache.org/jira/browse/FLINK-12528
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: vinoyang
Assignee: vinoyang


Since {{ExecutionGraph}} can only be accessed from a single thread
(FLINK-11417), we can remove the progressLock from {{ExecutionGraph}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Applying for permission as a contributor

2019-05-16 Thread Chesnay Schepler

Done.

On 14/05/2019 08:16, Wei Sun wrote:

Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is *Andrew Sun*

Best Regards,
Andrew





[jira] [Created] (FLINK-12527) Remove GLOBAL_VERSION_UPDATER in ExecutionGraph

2019-05-16 Thread vinoyang (JIRA)
vinoyang created FLINK-12527:


 Summary: Remove GLOBAL_VERSION_UPDATER in ExecutionGraph
 Key: FLINK-12527
 URL: https://issues.apache.org/jira/browse/FLINK-12527
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: vinoyang
Assignee: vinoyang


Since {{ExecutionGraph}} can only be accessed from a single thread, we can
remove the {{AtomicLongFieldUpdater GLOBAL_VERSION_UPDATER}} from
{{ExecutionGraph}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12526) Remove STATE_UPDATER in ExecutionGraph

2019-05-16 Thread vinoyang (JIRA)
vinoyang created FLINK-12526:


 Summary: Remove STATE_UPDATER in ExecutionGraph
 Key: FLINK-12526
 URL: https://issues.apache.org/jira/browse/FLINK-12526
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: vinoyang
Assignee: vinoyang


Since {{ExecutionGraph}} can only be accessed from a single thread
(FLINK-11417), we can remove the {{AtomicReferenceFieldUpdater STATE_UPDATER}}
from {{ExecutionGraph}}.
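
A minimal sketch of the simplification, assuming all access really does happen
from a single (main) thread as FLINK-11417 establishes (field and method names
approximated):
{code:java}
// Sketch only. Before: CAS through a static AtomicReferenceFieldUpdater,
//   STATE_UPDATER.compareAndSet(this, current, newState);
// After: with single-threaded access, a plain field suffices.
private JobStatus state = JobStatus.CREATED;

private boolean transitionState(JobStatus current, JobStatus newState) {
    if (state == current) {
        state = newState;  // no CAS needed: only one thread mutates it
        return true;
    }
    return false;
}
{code}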



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: User Interface not showing the actual count received and produced

2019-05-16 Thread Fabian Hueske
Hi Shakir,

This is a frequently reported issue with Flink's metrics collection / UI.
Sent and received records and bytes only include data that is exchanged
between Flink tasks, but not between a source system (Kafka) and Flink, or
between Flink and a sink system (Kinesis).
IIRC, there is an effort to fix this problem.

Best, Fabian

On Wed, May 15, 2019 at 17:04, PoolakkalMukkath, Shakir <
shakir_poolakkalmukk...@comcast.com> wrote:

>
>
> Hi Flink team,
>
>
>
> I am developing a flow which uses
>
> - FlinkKafkaConsumer010 to consume messages from Kafka, and
> - FlinkKinesisProducer to produce the results to Kinesis.
>
>
>
> In the user interface, I always see that the bytes and records received from
> Kafka are zero even though it is receiving and processing events. The same
> with the Kinesis sink: the bytes and records sent are always zero even though
> it is posting events to Kinesis.
>
>
>
> Any reason why my UI is not showing the actual counts? Thanks
>
>
>
>
>
> Thanks,
>
> Shakir
>


Re: FW: Apply for a Contributor permission

2019-05-16 Thread Chesnay Schepler

Done.

On 16/05/2019 08:51, Zhou, Brian wrote:

My JIRA ID: Brian Zhou

Best Regards,
Brian

From: Zhou, Brian
Sent: Thursday, May 16, 2019 13:39
To: dev@flink.apache.org
Subject: Apply for a Contributor permission

Hi,

I want to contribute to Apache Flink, starting with Chinese translation.
Would you please give me contributor permission?
The specific JIRA is FLINK-11560: Translate the "Flink Applications" page into Chinese.

Best Regards,
Brian






Re: grant jira permission

2019-05-16 Thread Chesnay Schepler

Done.

On 16/05/2019 08:15, Charoes wrote:

Hello,
 I want to contribute to the Flink project.
 Please grant me the JIRA permission.
 My Jira ID: charoes

Thanks
Charoes





Re: [DISCUSS] Clean up and reorganize the JIRA components

2019-05-16 Thread Piotr Nowojski
Just to clarify: by adding a benchmark component I meant just acknowledging that
we have some benchmarks in both the flink and flink-benchmarks repositories, plus
additional supporting infrastructure (a machine executing the benchmarks, plus
Jenkins and Codespeed services), and assigning ownership of those components in a
similar way as we do with Build System, Tests, etc.

Piotrek

> On 16 May 2019, at 03:51, JingsongLee  wrote:
> 
> Big +1 to adding a benchmark component.
> 1. Many of our code changes now require benchmarks. Having a benchmark
> component makes it much easier for us to align.
> 2. Running benchmarks regularly can also prevent performance regressions
> caused by our code changes.
> 
> Best, JingsongLee
> 
> 
> --
> From:Kurt Young 
> Send Time: Wednesday, May 15, 2019, 20:06
> To:dev 
> Subject:Re: [DISCUSS] Clean up and reorganize the JIRA components
> 
> +1 to add benchmark component.
> 
> Best,
> Kurt
> 
> 
> On Wed, May 15, 2019 at 6:13 PM Piotr Nowojski  wrote:
> 
>> Hi,
>> 
>> I would like to propose two changes:
>> 
>> 1. Renaming “Runtime / Operators” to “Runtime / Task” or something like
>> “Runtime / Processing”. “Runtime / Operators” was confusing me, since it
>> sounded like it covers concrete implementations of the operators, like
>> “WindowOperator” or various join implementations.
>> 
>> 2. I think we should add additional component for benchmarks and
>> benchmarking infrastructure. While this is more complicated topic (because
>> of the setup and how is it running), it should be on the same level as
>> correctness tests.
>> 
>> Piotrek
>> 
>>> On 20 Feb 2019, at 10:53, Robert Metzger  wrote:
>>> 
>>> Thanks a lot Timo!
>>> 
>>> I will start a vote Chesnay!
>>> 
>>> On Wed, Feb 20, 2019 at 10:11 AM Timo Walther 
>> wrote:
>>> 
 +1 for the vote. Btw I can help cleaning up the "Table API & SQL"
 component. It seems to be the biggest with 1229 Issues.
 
 Thanks,
 Timo
 
 On 20.02.19 at 10:09, Chesnay Schepler wrote:
> I would prefer if you'd start a vote with a new cleaned up proposal.
> 
> On 18.02.2019 15:23, Robert Metzger wrote:
>> I added "Runtime / Configuration" to the proposal:
>> 
 
>> https://cwiki.apache.org/confluence/display/FLINK/Proposal+for+new+JIRA+Components
>> 
>> 
>> Since this discussion has been open for 10 days, I assume we have
>> reached
>> consensus here. I will soon start renaming components.
>> 
>> On Wed, Feb 13, 2019 at 10:51 AM Chesnay Schepler >> 
>> wrote:
>> 
>>> The only parent I can think of is "Infrastructure", but I don't quite
>>> like it :/
>>> 
>>> +1 for "Runtime / Configuration"; this is too general to be placed in
>>> coordination imo.
>>> 
>>> On 12.02.2019 18:25, Robert Metzger wrote:
 Thanks a lot for your feedback Chesnay!
 
 re build/travis/release: Do you have a good idea for a common
 parent for
 "Build System", "Travis" and "Release System"?
 
 re legacy: Okay, I see your point. I will keep the Legacy Components
>>> prefix.
 re library: I think I don't have a argument here. My proposal is
 based on
 what I felt as being right :) I added the "Library / " prefix to the
 proposal.
 
 re core/config: From the proposed components, I see the best match
 with
 "Runtime / Coordination", but I agree that this example is
 difficult to
 place into my proposed scheme. Do you think we should introduce
 "Runtime
>>> /
 Configuration" as a component?
 
 
 I updated the proposal accordingly!
 
 
 
 
 
 On Tue, Feb 12, 2019 at 12:19 PM Chesnay Schepler <
>> ches...@apache.org
> 
 wrote:
 
> re build/travis/release: No, I'm against merging build system,
>> travis
> and release system.
> 
> re legacy: So going forward you're proposing to move dropped
>> features
> into the legacy bucket and make it impossible to search for
>> specific
> issues for that component? There's 0 overhead to having these
> components, so I really don't get the benefit here, but see the
>>> overhead.
> I don't buy the argument of "people will not open issues if the
> component doesn't exist", they will just leave the component field
> blank
> or add a random one (that would be wrong). In fact, if you had a
> storm/tez component (that users would adhere to) then it would be
> _easier_ to figure out whether an issue can be rejected right away.
> 
> re library: If you are against a library category, what's your
> argument
> for a connector category?
> 
> re tests: I don't mind "tests" being removed from tickets about
>> test

FW: Apply for a Contributor permission

2019-05-16 Thread Zhou, Brian
My JIRA ID: Brian Zhou

Best Regards,
Brian

From: Zhou, Brian
Sent: Thursday, May 16, 2019 13:39
To: dev@flink.apache.org
Subject: Apply for a Contributor permission

Hi,

I want to contribute to Apache Flink, starting with Chinese translation.
Would you please give me the contributor permission?
The specific JIRA is FLINK-11560: Translate the "Flink Applications" page into Chinese.

Best Regards,
Brian



grant jira permission

2019-05-16 Thread Charoes
Hello,
I want to contribute to the Flink project.
Please grant me the JIRA permission.
My Jira ID: charoes

Thanks
Charoes