[jira] [Created] (FLINK-12209) Refactor failure handling with CheckpointFailureManager

2019-04-15 Thread vinoyang (JIRA)
vinoyang created FLINK-12209:


 Summary: Refactor failure handling with CheckpointFailureManager
 Key: FLINK-12209
 URL: https://issues.apache.org/jira/browse/FLINK-12209
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: vinoyang
Assignee: vinoyang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [Discuss][FLINK-8297]A solution for FLINK-8297 Timebased RocksDBListState

2019-04-15 Thread faxianzhao
Hi Yun
Whether we use an atomically increasing number or a timestamp, the key point
is to disperse the elements across different keys.
My point is how to design a useful key.
An atomically increasing number arranges the elements one by one, but I think
such a key is otherwise useless, because the max key is not the element count
once we implement the remove method.
Currently, for the CountEvictor, TimeEvictor and TTL scenarios, we have to
iterate over all of the elements to find what we want. But if we use a
timestamp key, we can terminate the iteration early to save work, or start
from the first available timestamp and iterate over only the remaining elements.




--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/


Re: [Discuss][FLINK-8297]A solution for FLINK-8297 Timebased RocksDBListState

2019-04-15 Thread faxianzhao
Hi Andrey,
I think the mailing archive reformatted my mail and confused you.
If the elements have the same processing time, they behave the same as in the
original RocksDBListState, so the OOM issue remains. I think we can add an
inner time shift to resolve it (only put a limited number of elements under
the same key); alternatively, we can use a hash function to disperse the
keys, but that would reorder the elements.



--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/


[jira] [Created] (FLINK-12208) Introduce Sort / TemporalSort / SortLimit/ Limit operators for blink streaming runtime

2019-04-15 Thread Jing Zhang (JIRA)
Jing Zhang created FLINK-12208:
--

 Summary:  Introduce Sort / TemporalSort / SortLimit/ Limit 
operators for blink streaming runtime 
 Key: FLINK-12208
 URL: https://issues.apache.org/jira/browse/FLINK-12208
 Project: Flink
  Issue Type: Task
  Components: Table SQL / Runtime
Reporter: Jing Zhang


 Introduce Sort / TemporalSort / SortLimit/ Limit operators for blink streaming 
runtime 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Contributor permission Request

2019-04-15 Thread Kevin Tang
Hi,
I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is tangshangwen (
https://issues.apache.org/jira/secure/ViewProfile.jspa?name=tangshangwen).


Re: [DISCUSS] Backtracking for failover regions

2019-04-15 Thread Zhu Zhu
Thanks to Chesnay for bringing up this proposal.
It's good news that we will soon have applicable fine-grained recovery for
batch jobs.
+1 for this proposal.

Regards,
Zhu

Till Rohrmann wrote on Mon, Apr 15, 2019 at 5:57 PM:

> Thanks for summarizing the current state of FLIP-1 and outlining the way to
> move forward with it, Chesnay.
>
> I think we should implement the first version of the backtracking logic
> using the DataConsumptionException (FLINK-6227) to signal if an
> intermediate result partition has been lost.
>
> Moreover, I think it would be best to base the new implementation on the
> refined FailoverStrategy interface proposed by the scheduler refactorings
> [1]. We could have an adaptor to make it work with the existing code for
> testing purposes and until the scheduler interfaces have been introduced.
>
> Apart from that, +1 for completing Flink's first improvement proposal :-)
>
> [1]
>
> https://docs.google.com/document/d/1fstkML72YBO1tGD_dmG2rwvd9bklhRVauh4FSsDDwXU/edit?usp=sharing
>
> Cheers,
> Till
>
> On Sun, Apr 14, 2019 at 8:20 PM Chesnay Schepler 
> wrote:
>
> > Hello everyone,
> >
> > Till, Zhu Zhu and myself have prepared a Design Document
> > <
> >
> https://docs.google.com/document/d/1YHOpMLdC-dtgjcM-EDn6v-oXgsEQKXSoMjqRcYVbJA8
> >
> >
> > for introducing backtracking for failover regions. This is an
> > optimization of the failure handling logic for jobs with blocking result
> > partitions (which primarily exist in batch jobs), where only part of the
> > job has to be restarted.
> > This is a continuation of the FLIP-1
> > <
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures
> >
> >
> > efforts to introduce fine-grained recovery from task failures.
> > The associated JIRA can be found here
> > .
> >
> > Any feedback is highly appreciated.
> >
> > Regards,
> > Chesnay
> >
>


[jira] [Created] (FLINK-12207) JsonRowDeserializationSchema and JsonRowSerializationSchema Types "==" bug

2019-04-15 Thread lynn (JIRA)
lynn created FLINK-12207:


 Summary: JsonRowDeserializationSchema and 
JsonRowSerializationSchema Types "==" bug
 Key: FLINK-12207
 URL: https://issues.apache.org/jira/browse/FLINK-12207
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: lynn
 Attachments: image-2019-04-16-09-39-18-981.png, 
image-2019-04-16-09-42-51-471.png

When I use the Json class as the format instance object, as follows:

!image-2019-04-16-09-39-18-981.png!

I find " convert(JsonNode node, TypeInformation info) “ method in 
JsonRowDeserializationSchema use ”==“ assert equals:

!image-2019-04-16-09-42-51-471.png!

Therefore, all of the if statements return false, so I think it is a bug.

The "convert" method in JsonRowSerializationSchema  Class has the the same 
problem.
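
For illustration, a minimal sketch (not the actual Flink source) of the 
reference-equality pitfall: two logically equal TypeInformation instances are 
not necessarily the same object, so "==" can be false where equals() is true.

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class TypeInfoEqualityDemo {
    public static void main(String[] args) {
        // Two separately constructed but logically identical row types.
        TypeInformation<?> a = new RowTypeInfo(Types.STRING, Types.LONG);
        TypeInformation<?> b = new RowTypeInfo(Types.STRING, Types.LONG);

        System.out.println(a == b);      // false: distinct instances
        System.out.println(a.equals(b)); // true: logically the same type
    }
}
{code}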



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: How can I use 1.9-SNAPSHOT Blink version for MapBundleFunction?

2019-04-15 Thread Kurt Young
Replied in user mailing list.

Best,
Kurt


On Mon, Apr 15, 2019 at 11:48 PM Felipe Gutierrez <
felipe.o.gutier...@gmail.com> wrote:

> Hi,
>
> I am trying to use the Blink implementation for "MapBundleFunction
> <
> https://github.com/felipegutierrez/explore-blink/blob/master/src/main/java/org/sense/blink/examples/stream/BundleOperatorExample.java
> >".
> For some reason, my pom.xml with 1.9-SNAPSHOT is not importing this class.
> Could anyone help me find the reason why I cannot use this class from the
> Blink source code?
>
> Thanks a lot!
> Felipe
>
> *--*
> *-- Felipe Gutierrez*
>
> *-- skype: felipe.o.gutierrez*
> *--* *https://felipeogutierrez.blogspot.com
> *
>


[jira] [Created] (FLINK-12206) cannot query nested fields using Flink SQL

2019-04-15 Thread Yu Yang (JIRA)
Yu Yang created FLINK-12206:
---

 Summary: cannot query nested fields using Flink SQL
 Key: FLINK-12206
 URL: https://issues.apache.org/jira/browse/FLINK-12206
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.8.0
Reporter: Yu Yang


We feed a list of events with the following RowTypeInfo to Flink,
{code:java}
Row(
  timestamp: Long, 
  userId: Long,
  eventType: String, 
  auxData: Map, 
  userActions: 
 Map,
  diagnostics: Row(hostname: String, ipaddress: String)
)
{code}
and run the following SQL query
{code:sql}
SELECT event.userId, event.diagnostics.hostname
FROM event
WHERE event.userId < 10;
{code}
 

We are prompted "Column 'diagnostics.hostname' not found in table 'event'". Did 
I miss anything while constructing the RowTypeInfo? Or is it due to a SQL 
validation issue?
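
For reference, a minimal sketch of how the nested row type could be declared 
(the map value types here are assumptions); the field names passed to the 
nested row type are what the SQL validator resolves, so 'diagnostics.hostname' 
must be present in the nested type:

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.types.Row;

TypeInformation<Row> diagnosticsType = Types.ROW_NAMED(
        new String[] {"hostname", "ipaddress"},
        Types.STRING, Types.STRING);

TypeInformation<Row> eventType = Types.ROW_NAMED(
        new String[] {"timestamp", "userId", "eventType", "auxData", "userActions", "diagnostics"},
        Types.LONG, Types.LONG, Types.STRING,
        Types.MAP(Types.STRING, Types.STRING), // assumed value type
        Types.MAP(Types.STRING, Types.STRING), // assumed value type
        diagnosticsType);
{code}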

 

 

The following is the detailed exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: SQL validation failed. From line 1, column 28 to line 1, column 47: Column 'diagnostics.hostname' not found in table 'event'
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.table.api.ValidationException: SQL validation failed. From line 1, column 28 to line 1, column 47: Column 'diagnostics.hostname' not found in table 'event'
at org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:109)
at org.apache.flink.table.api.TableEnvironment.sqlQuery(TableEnvironment.scala:746)
at com.pinterest.flink.samples.ThriftRowSerializerSample.main(ThriftRowSerializerSample.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
... 9 more
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, column 28 to line 1, column 47: Column 'diagnostics.hostname' not found in table 'event'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:783)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:768)
at org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4764)
at org.apache.calcite.sql.validate.DelegatingScope.fullyQualify(DelegatingScope.java:439)
at org.apache.calcite.sql.validate.SqlValidatorImpl$Expander.visit(SqlValidatorImpl.java:5624)
at org.apache.calcite.sql.validate.SqlValidatorImpl$Expander.visit(SqlValidatorImpl.java:5606)
at org.apache.calcite.sql.SqlIdentifier.accept(SqlIdentifier.java:334)
at org.apache.calcite.sql.validate.SqlValidatorImpl.expand(SqlValidatorImpl.java:5213)
at org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:435)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4028)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3291)
at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:972)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:948)
at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:225)
at org.apache.calcite.sql.validate.Sql

Re: [DISCUSS] Apache Flink at Season of Docs

2019-04-15 Thread Aizhamal Nurmamat kyzy
+Konstantin Knauf  this is looking good, thanks
for sharing!

I also created a similar doc for Apache Airflow [1]. It is a bit messy, but
it has questions from the application form that you can work with.

Cheers,
Aizhamal

[1]
https://docs.google.com/document/d/1HoL_yjNYiTAP9IxSlhx3EUnPFU4l9WOT9EnwBZjCZo0/edit#


On Mon, Apr 15, 2019 at 2:24 AM Robert Metzger  wrote:

> Hi all,
> I'm very happy to see this project happening!
>
> Thank you for the proposal Konstantin! One idea for the "related
> material": we could also link to talks or blog posts about concepts /
> monitoring / operations. Potential writers could feel overwhelmed by our
> demand for improvements, without any additional material.
>
>
> On Mon, Apr 15, 2019 at 10:16 AM Konstantin Knauf <
> konstan...@ververica.com> wrote:
>
>> Hi everyone,
>>
>> thanks @Aizhamal Nurmamat kyzy . As we only have one
>> week left until the application deadline, I went ahead and created a
>> document for the project ideas [1]. I have added the description for the
>> "stream processing concepts" as well as the "deployment & operations
>> documentation" project idea. Please let me know what you think, edit &
>> comment. We also need descriptions for the other two projects (Table
>> API/SQL & Flink Internals). @Fabian/@Jark/@Stephan can you chime in?
>>
>> Any more project ideas?
>>
>> Best,
>>
>> Konstantin
>>
>>
>> [1]
>>
>> https://docs.google.com/document/d/1Up53jNsLztApn-mP76AB6xWUVGt3nwS9p6xQTiceKXo/edit?usp=sharing
>>
>>
>>
>> On Fri, Apr 12, 2019 at 6:50 PM Aizhamal Nurmamat kyzy <
>> aizha...@google.com>
>> wrote:
>>
>> > Hello everyone,
>> >
>> > @Konstantin Knauf  - yes, you are correct.
>> > Between steps 1 and 2 though, the open source organization, in this case
>> > Flink, has to be selected by SoD as one of the participating orgs
>> *fingers
>> > crossed*.
>> >
>> > One tip about organizing ideas is that you want to communicate potential
>> > projects to the tech writers that are applying. Just make sure the
>> scope of
>> > the project is clear to them. The SoD wants to set up the tech writers
>> for
>> > success by making sure the work can be done in the allotted time.
>> >
>> > Hope it helps.
>> >
>> > Aizhamal
>> >
>> >
>> >
>> > On Fri, Apr 12, 2019 at 7:37 AM Konstantin Knauf <
>> konstan...@ververica.com>
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> I read through the SoD documentation again, and now I think, it would
>> >> actually make sense to split (1) up into multiple project ideas. Let me
>> >> summarize the overall process:
>> >>
>> >> 1. We create & publish a list of project ideas, e.g. in a blog post.
>> >> (This can be any number of ideas.)
>> 2. Potential technical writers look at our list of ideas and send a
>> >> proposal for a particular project to Google. During that time they can
>> >> reach out to us for clarification.
>> 3. Google forwards all proposals for our project ideas to us and we send
>> back a prioritized list of proposals, which we would like to accept.
>> >> 4. Of all these proposals, Google accepts 50 proposals for SoD 2019.
>> Per
>> >> organization Google will only accept a maximum of two proposals.
>> >>
>> >> @Aizhamal Nurmamat kyzy  Please correct me!
>> >>
>> >> For me this means we should split this up in a way that each project
>> is
>> >> a) still relevant in September b) makes sense as a 3 month project.
>> Based
>> >> on the ideas we have right now these could for example be:
>> >>
>> >> (I) Rework/Extract/Improve the documentation of stream processing
>> concepts
>> >> (II) Improve & extend Apache Flink's documentation for deployment,
>> >> operations (incl. configuration)
>> >> (III) Add documentation for Flink internals
>> >> (IV) Rework Table API / SQL documentation
>> >>
>> >> We would then get proposals potentially for all of these topics and
>> could
>> >> decide which of these proposals we would send back to Google. My
>> feeling
>> >> is that a technical writer could easily spend three months on any of
>> these
>> >> projects. What do others think? Any other project ideas?
>> >>
>> >> Cheers,
>> >>
>> >> Konstantin
>> >>
>> >>
>> >>
>> >>
>> >> On Fri, Apr 12, 2019 at 1:47 PM Jark Wu  wrote:
>> >>
>> >>> Hi all,
>> >>>
>> >>> I'm fine with only preparing the first proposal. I think it's
>> reasonable
>> >>> because the first proposal is more attractive
>> >>> and maybe there are not enough Chinese writers. We can focus on one
>> project
>> >>> to come up with a concrete and
>> >>> attractive project plan.
>> >>>
>> >>> One possible subproject could be reworking the Table SQL docs.
>> >>> (1). Improve concepts in Table SQL.
>> >>> (2). A more detailed introduction to built-in functions; currently we
>> >>> only
>> >>> have a simple explanation for each function.
>> >>>   We should add more descriptions, especially more concrete
>> examples,
>> >>> and maybe some notes. We can take
>> >>>   MySQL doc [1] as a reference.
>> >>> (3). As Flink SQL is evolving rapidly and features from B

[jira] [Created] (FLINK-12205) Internal server error.,

2019-04-15 Thread Hobo Chen (JIRA)
Hobo Chen created FLINK-12205:
-

 Summary: Internal server error.
 Key: FLINK-12205
 URL: https://issues.apache.org/jira/browse/FLINK-12205
 Project: Flink
  Issue Type: Bug
  Components: Examples
Affects Versions: 1.8.0
 Environment: OSX
flink-1.8.0-bin-scala_2.11
Reporter: Hobo Chen


I followed the Local Setup Tutorial to start Flink.

*env*
OSX
flink-1.8.0-bin-scala_2.11

 

*Starting the Flink cluster works fine:*

 
{code:java}
$ ./bin/start-cluster.sh
$ tail log/flink-*-standalonesession-*.log
2019-04-15 23:46:14,335 INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Rest endpoint 
listening at localhost:8081
2019-04-15 23:46:14,337 INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - 
http://localhost:8081 was granted leadership with 
leaderSessionID=----
2019-04-15 23:46:14,337 INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend 
listening at http://localhost:8081.
2019-04-15 23:46:14,517 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - 
Starting RPC endpoint for 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at 
akka://flink/user/resourcemanager .
2019-04-15 23:46:14,621 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - 
Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcher .
2019-04-15 23:46:14,675 INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was 
granted leadership with fencing token 
2019-04-15 23:46:14,676 INFO 
org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager - Starting the 
SlotManager.
2019-04-15 23:46:14,698 INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership with 
fencing token ----
2019-04-15 23:46:14,701 INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
2019-04-15 23:46:15,559 INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering TaskManager with ResourceID 6e31e97e88429e4eb8a55489e7334560 
(akka.tcp://flink@192.168.1.5:65505/user/taskmanager_0) at ResourceManager
{code}
 

 

*When I run the example, I get an error.*
{code:java}
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
{code}
*client error log*
{code:java}
2019-04-15 23:53:11,171 ERROR org.apache.flink.client.cli.CliFrontend - Error 
while running the command.
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve 
the execution result. (JobID: 42cf26445edd68aef39b67a543cca421)
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:261)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:483)
at 
org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
at 
org.apache.flink.streaming.examples.socket.SocketWindowWordCount.main(SocketWindowWordCount.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
at org.apache.flink.client.cli.CliFrontend$$Lambda$5/1160003871.call(Unknown 
Source)
at 
org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:388)
at 
org.apache.flink.client.program.rest.RestClusterClient$$Lambda$17/1740223770.apply(Unknown
 Source)
at 
java.util.concurrent.CompletableFuture$ExceptionCompletion.run(CompletableFuture.java:1246)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
at 
java.util.concurrent.CompletableFuture.internalComple

[jira] [Created] (FLINK-12204) Improve JDBCOutputFormat ClassCastException

2019-04-15 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-12204:
-

 Summary: Improve JDBCOutputFormat ClassCastException
 Key: FLINK-12204
 URL: https://issues.apache.org/jira/browse/FLINK-12204
 Project: Flink
  Issue Type: Task
  Components: Connectors / JDBC
Affects Versions: 1.8.0
Reporter: Fabian Hueske
 Fix For: 1.9.0, 1.8.1


ClassCastExceptions thrown by JDBCOutputFormat are not very helpful because 
they do not provide information about which input field the cast failed for.

We should catch the exception and enrich it with information about the affected 
field to make it more useful.
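
A rough sketch of the idea (not the actual patch; the setField helper and the 
loop are illustrative only):

{code:java}
// Wrap the per-field write so a ClassCastException reports the failing position.
for (int index = 0; index < row.getArity(); index++) {
    try {
        setField(statement, index, row.getField(index)); // hypothetical per-field helper
    } catch (ClassCastException e) {
        throw new RuntimeException(
            "Could not cast value of field " + index
                + " (value: " + row.getField(index) + ")", e);
    }
}
{code}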



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12203) Refactor ResultPartitionManager to break tie with Task

2019-04-15 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12203:
---

 Summary: Refactor ResultPartitionManager to break tie with Task
 Key: FLINK-12203
 URL: https://issues.apache.org/jira/browse/FLINK-12203
 Project: Flink
  Issue Type: Sub-task
Reporter: Andrey Zagrebin


At the moment, we have ResultPartitionManager.releasePartitionsProducedBy, which 
uses indexing by task in the network environment. These methods are eventually 
used only by Task, which already knows its partitions, so Task and 
TE.failPartition could directly use 
NetworkEnvironment.releasePartition(ResultPartitionID). This also requires that 
the JM Execution sends the produced partition ids instead of just the 
ExecutionAttemptID.
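
A minimal sketch of the proposed call path (the method names follow this 
ticket; the surrounding method and the collection of produced partition ids 
are assumptions):

{code:java}
// The task, which already knows its produced partitions, releases them
// directly instead of the manager indexing partitions by producing task.
void releaseProducedPartitions(NetworkEnvironment network,
                               Collection<ResultPartitionID> produced) {
    for (ResultPartitionID partitionId : produced) {
        network.releasePartition(partitionId); // proposed replacement
    }
}
{code}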



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


How can I use 1.9-SNAPSHOT Blink version for MapBundleFunction?

2019-04-15 Thread Felipe Gutierrez
Hi,

I am trying to use the Blink implementation for "MapBundleFunction
".
For some reason, my pom.xml with 1.9-SNAPSHOT is not importing this class.
Could anyone help me find the reason why I cannot use this class from the
Blink source code?

Thanks a lot!
Felipe

*--*
*-- Felipe Gutierrez*

*-- skype: felipe.o.gutierrez*
*--* *https://felipeogutierrez.blogspot.com
*


[jira] [Created] (FLINK-12202) Consider introducing batch metric register in NetworkEnviroment

2019-04-15 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12202:
---

 Summary: Consider introducing batch metric register in 
NetworkEnviroment
 Key: FLINK-12202
 URL: https://issues.apache.org/jira/browse/FLINK-12202
 Project: Flink
  Issue Type: Sub-task
Reporter: Andrey Zagrebin
Assignee: zhijiang


As we have some network-specific metrics registered in TaskIOMetricGroup 
(In/OutputBuffersGauge, In/OutputBufferPoolUsageGauge), we can introduce batch 
metric registration via NetworkEnvironment.registerMetrics(ProxyMetricGroup, 
partitions, gates), where the task passes its TaskIOMetricGroup into the 
ProxyMetricGroup. This way we could break the tie between the task and the 
NetworkEnvironment. TaskIOMetricGroup.initializeBufferMetrics, 
In/OutputBuffersGauge and In/OutputBufferPoolUsageGauge could then be moved 
into NetworkEnvironment.registerMetrics and the network code.
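
An illustrative shape of the batch registration (the metric names match 
Flink's existing network metric names; the array-taking gauge constructors 
and everything else here are assumptions):

{code:java}
void registerMetrics(MetricGroup ioMetricGroup, ResultPartition[] partitions, InputGate[] gates) {
    // Buffer metrics move from TaskIOMetricGroup into network code.
    ioMetricGroup.gauge("outputQueueLength", new OutputBuffersGauge(partitions));
    ioMetricGroup.gauge("outPoolUsage", new OutputBufferPoolUsageGauge(partitions));
    ioMetricGroup.gauge("inputQueueLength", new InputBuffersGauge(gates));
    ioMetricGroup.gauge("inPoolUsage", new InputBufferPoolUsageGauge(gates));
}
{code}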



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12201) Introduce InputGateWithMetrics in Task to increment in/out byte metrics

2019-04-15 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12201:
---

 Summary: Introduce InputGateWithMetrics in Task to increment 
in/out byte metrics
 Key: FLINK-12201
 URL: https://issues.apache.org/jira/browse/FLINK-12201
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Reporter: Andrey Zagrebin
Assignee: zhijiang
 Fix For: 1.9.0


Incrementing the in/out byte metrics in the local/remote input channels does 
not depend on the shuffle service and can be moved out of the network internals 
into Task. Task could wrap the InputGate provided by the ShuffleService with an 
InputGateWithMetrics which would increment the in/out byte metrics.
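
A minimal sketch of the decorator idea; the real InputGate interface is 
larger, so a tiny stand-in interface is used here purely for illustration:

{code:java}
import java.util.concurrent.atomic.LongAdder;

interface SimpleGate {      // stand-in for InputGate
    byte[] pollNext();      // next serialized buffer, or null if none
}

final class GateWithMetrics implements SimpleGate {
    private final SimpleGate delegate;  // gate provided by the shuffle service
    private final LongAdder numBytesIn; // task-level in-bytes metric

    GateWithMetrics(SimpleGate delegate, LongAdder numBytesIn) {
        this.delegate = delegate;
        this.numBytesIn = numBytesIn;
    }

    @Override
    public byte[] pollNext() {
        byte[] buffer = delegate.pollNext();
        if (buffer != null) {
            numBytesIn.add(buffer.length); // counted outside the network internals
        }
        return buffer;
    }
}
{code}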



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12200) [Table API] Support UNNEST for MAP types

2019-04-15 Thread Artsem Semianenka (JIRA)
Artsem Semianenka created FLINK-12200:
-

 Summary: [Table API] Support UNNEST for MAP types 
 Key: FLINK-12200
 URL: https://issues.apache.org/jira/browse/FLINK-12200
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.8.0, 1.7.2, 1.7.3, 1.8.1
Reporter: Artsem Semianenka
Assignee: Artsem Semianenka


If the input dataset has the following schema:
Row(a: Integer, b: Long, c: Map)

I would like to have the ability to execute the SQL query like:
{code:sql}
SELECT a, k, v FROM src, UNNEST(c) as m (k,v)
{code}

Currently, the UNNEST operator is supported only for ARRAY and MULTISET.

I would like to propose adding support for UNNEST on MAP types.
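
For illustration, the expected semantics on assumed sample data:

{code:sql}
-- For a row (a = 1, b = 2, c = MAP['x', 'p', 'y', 'q']) the query above yields:
-- a | k | v
-- 1 | x | p
-- 1 | y | q
{code}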



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12199) Refactor IOMetrics to not distinguish between local/remote

2019-04-15 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12199:
---

 Summary: Refactor IOMetrics to not distinguish between 
local/remote  
 Key: FLINK-12199
 URL: https://issues.apache.org/jira/browse/FLINK-12199
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Reporter: Andrey Zagrebin
Assignee: zhijiang
 Fix For: 1.9.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12198) Flink JDBC Connector: Update existing JDBCInputFormat to support setting auto-commit mode of DB connection

2019-04-15 Thread Konstantinos Papadopoulos (JIRA)
Konstantinos Papadopoulos created FLINK-12198:
-

 Summary: Flink JDBC Connector: Update existing JDBCInputFormat to 
support setting auto-commit mode of DB connection
 Key: FLINK-12198
 URL: https://issues.apache.org/jira/browse/FLINK-12198
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Affects Versions: 1.8.0
Reporter: Konstantinos Papadopoulos
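
A sketch of why the auto-commit mode matters for streaming reads (plain JDBC; 
this assumes a PostgreSQL-style driver, where fetchSize is only honored when 
auto-commit is disabled):

{code:java}
Connection connection = DriverManager.getConnection(url, user, password);
connection.setAutoCommit(false);  // enable cursor-based fetching
try (PreparedStatement statement = connection.prepareStatement(query)) {
    statement.setFetchSize(1000); // stream rows in batches instead of materializing all
    try (ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            // emit row
        }
    }
}
{code}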






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] A more restrictive JIRA workflow

2019-04-15 Thread Timo Walther

I think this really depends on the contribution.

Sometimes "triviality" means that people just want to fix a typo in some 
docs. For this, a hotfix PR is sufficient and does not need a JIRA issue.


However, sometimes "triviality" is only trivial at first glance but 
introduces side effects. In any case, any contribution needs to be 
reviewed and merged by a committer so follow-up responses and follow-up 
work might always be required. But you are right, committers need to 
respond quicker in any case.


Timo


On 15.04.19 at 14:54, Konstantin Knauf wrote:

Hi everyone,

just my two cents: as a non-committer I appreciate a lightweight,
frictionless process for trivial changes or small fixes without the need to
approach a committer beforehand. If it takes 5 days before I can start on a
triviality, I might not bother in the first place. So, in order for this not
to backfire by making the community more exclusive, we need more timely
responses & follow-ups from committers after the change to the workflow.
Having said that, I am slightly leaning towards Andrey's interpretation of
option 2.

Cheers,

Konstantin



On Mon, Apr 15, 2019 at 1:39 PM Andrey Zagrebin 
wrote:


@Robert thanks for pointing out and sorry for confusion. The correct text:

+1 for option 1.

I also do not mind option 2, after 1-2 contributions, any contributor could
just ask the committer (who merged those contributions) about contributor
permissions.

Best,
Andrey

On Mon, Apr 15, 2019 at 10:34 AM Feng LI  wrote:


Hello there,

New to the community. Just thought you might want some inputs from new
comers too.

I prefer option 2, where you need to prove the ability and commitment with
commits before contributor permission is assigned.

Cheers,
Feng

On Mon, Apr 15, 2019 at 09:17, Robert Metzger wrote:

@Andrey: You mention "option 2" two times, I guess one of the two uses of
"option 2" contains a typo?

On Wed, Apr 10, 2019 at 10:33 AM Andrey Zagrebin wrote:

Hi all,

+1 for option 2.

I also do not mind option 2, after 1-2 contributions, any contributor could
just ask the committer (who merged those contributions) about contributor
permissions.

Best,
Andrey

On Wed, Apr 10, 2019 at 3:58 AM Robert Metzger 
wrote:


I'm +1 on option 1.

On Tue, Apr 9, 2019 at 1:58 AM Timo Walther wrote:

Hi everyone,

I'd like to bring up this discussion thread again. In summary, I think we all
agreed on improving the JIRA workflow to move design/consensus discussions
from PRs to the issues first, before implementing them.

Two options have been proposed:
1. Only committers can assign people to issues. PRs of unassigned issues are
closed automatically.
2. Committers upgrade assignable users to contributors as an intermediate
step towards committership.

I would prefer option 1, as some people also mentioned that option 2 requires
some standardized processes; otherwise it would be difficult to communicate
why somebody is a contributor and somebody else is not.

What do you think?

Regards,
Timo


On 18.03.19 at 14:25, Robert Metzger wrote:

@Fabian: I don't think this is a big problem. Moving away from "giving
everybody contributor permissions" to "giving it to some people" is not risky.
I would leave this decision to the committers who are working with a person.

We should bring this discussion to a conclusion and implement the changes to
JIRA.

Nobody has raised any objections to the overall idea.

Points raised:
1. We need to update the contribution guide and describe the workflow.
2. I brought up changing Flinkbot so that it auto-closes PRs without somebody
assigned in JIRA.

Who wants to work on an update of the contribution guide?
If there are no volunteers, I'm happy to take care of this.


On Fri, Mar 15, 2019 at 9:20 AM Fabian Hueske <fhue...@gmail.com> wrote:

Hi,

I'm not sure about adding an additional stage.
Who's going to decide when to "promote" a user to a contributor, i.e., grant
assigning permission?

Best, Fabian

On Thu., 14 March 2019 at 13:50, Timo Walther <twal...@apache.org> wrote:
Hi Robert,

I also like the idea of making every Jira user an "Assignable User", but
restricting ticket assignment to people with committer permissions.

Instead of giving contributor permissions to everyone, we could have a more
staged approach from user, to contributor, and finally to committer.

Once people have worked on a couple of JIRA issues, we can make them
contributors.

What do you think?

Regards,
Timo

On 06.03.19 at 12:33, Robert Metzger wrote:

Hi Tison,
I also thought about this.
Making a person a "Contributor" is required for being an "Assignable User",
so normal Jira accounts can't be assigned to a ticket.

We could make every Jira user an "Assignable User", but restrict assigning a
ticket to people with committer permissions.
There are some other permissions attached to the "Contributor" role, such as
"Closing" and "Editing" (including "Transition", "Logging work", et

Re: [REMINDER] Please add entries for newly added dependencies to NOTICE file

2019-04-15 Thread Robert Metzger
I will add such a feature to the bot!

On Mon, Mar 25, 2019 at 7:41 PM Stephan Ewen  wrote:

> +1 to an enhancement to the Flink bot as a simple first step.
>
> The first step could be as simple as adding a red warning message as a
> comment to the PR whenever a PR touches a POM file.
> That needs special attention for various reasons, including (but not only)
> license checks and file updates.
>
> On Mon, Mar 25, 2019 at 2:12 AM jincheng sun 
> wrote:
>
>> Hi Aljoscha,
>> Thanks for bringing this up. For release-1.8 we have prepared 4 RCs, and
>> apart from one performance issue, all of the rest were NOTICE issues. We
>> really need to pay attention to this.
>>
>> I agree with Ufuk, improving the `flink-bot` is a good idea. And the
>> committer who merges the changes also needs to pay more attention to
>> checking whether the change involves NOTICE changes.
>>
>> Thanks,
>> Jincheng
>>
>> Bowen Li wrote on Sun, Mar 24, 2019 at 2:04 PM:
>>
>> > Hi,
>> >
>> > I agree with Ufuk that we can start with something simple, achievable,
>> yet
>> > effective, like using flink-bot. The wiki that explains licensing of
>> Flink
>> > is very good but hard for new contributors to find and notice; do we
>> > have a plan to move it to a more discoverable place like flink.apache.org
>> ?
>> > Well, even with that, it may not be as directly effective as flink-bot
>> > IMO. We will of course also continue evaluating more automated ways to
>> > solve this problem.
>> >
>> > Besides, there's another proposal from Jark [1] to use flink-bot to help
>> > community keep docs in English and Chinese in sync. Looks like we have
>> > general desires for flink-bot to remind contributors of different
>> > requirements according to modules they modify, and we may want to
>> develop
>> > and adapt flink-bot to fulfill that need. I personally believe flink-bot
>> > has proven to be handy, productive and user-friendly since it's created,
>> > and we may increase investment on flink-bot for helping devs with
>> > miscellaneous issues like LICENSING and NOTICE.
>> >
>> > [1]
>> >
>> >
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Improve-the-flinkbot-tp26965p27863.html
>> >
>> > On Tue, Mar 19, 2019 at 10:50 AM Ufuk Celebi  wrote:
>> >
>> > > There are definitely license checking tools around that can generate
>> > > NOTICE files etc. I don't have the details, but Robert should have
>> > > some input here. I don't know whether they would fit our setup and how
>> > > we would integrate them or whether INFRA can get them for us.
>> > >
>> > > Note that simple things can already improve the experience going
>> > > forward. A simple thing for the flink-bot could be to require/propose
>> > > a NOTICE file check whenever a pom.xml file was modified. What do you
>> > > think?
>> > >
>> > > – Ufuk
>> > >
>> > > On Tue, Mar 19, 2019 at 6:43 PM Chesnay Schepler 
>> > > wrote:
>> > > >
>> > > > Realistically you can't automatically infer from changes to any pom
>> > > whether and what we have to change in the notice files.
>> > > > Doing this on the XML level requires a view over the entire project
>> to
>> > > detect dependency changes in parent modules / dependency management
>> and
>> > > packaging changes in downstream modules (like flink-dist); this would
>> > > be a ridiculously complex check.
>> > > >
>> > > > Whether you have to change something can be inferred from the
>> > > shade-plugin output, but not 100% reliably (for example, if a
>> dependency
>> > is
>> > > declared to be included but everything is filtered out (yeah, that
>> > > happened)).
>> > > > Theoretically it is even possible to generate the licensing files
>> from
>> > > said output, but I haven't had time yet to look into whether this is
>> truly
>> > > possible.
>> > > >
>> > > > On 19.03.2019 07:15, Ufuk Celebi wrote:
>> > > >
>> > > > Hey Aljoscha,
>> > > >
>> > > > thanks for bringing this up. I think that we should either integrate
>> > > > checks for this into our CI/CD environment (using existing tools) or
>> > > > add a conditional check for this into flink-bot in case a pom.xml
>> was
>> > > > modified. Otherwise it will be easy to forget in the future.
>> > > >
>> > > > – Ufuk
>> > > >
>> > > > On Mon, Mar 18, 2019 at 12:03 PM Aljoscha Krettek <
>> aljos...@apache.org
>> > >
>> > > wrote:
>> > > >
>> > > > Hi All,
>> > > >
>> > > > Please remember to add newly added dependencies to the NOTICE file
>> of
>> > > flink-dist (which will then end up in NOTICE-binary and so on).
>> > Discovering
>> > > this late will cause delays in releases, as it is doing now.
>> > > >
>> > > > There is a handy guide that Chesnay and Till worked on that explains
>> > > licensing for Apache projects and Flink specifically:
>> > > https://cwiki.apache.org/confluence/display/FLINK/Licensing <
>> > > https://cwiki.apache.org/confluence/display/FLINK/Licensing>
>> > > >
>> > > > Best,
>> > > > Aljoscha
>> > > >
>> > > >
>> > >
>> >
>>
>


[jira] [Created] (FLINK-12197) Avro row deser for Confluent binary format

2019-04-15 Thread eugen yushin (JIRA)
eugen yushin created FLINK-12197:


 Summary: Avro row deser for Confluent binary format
 Key: FLINK-12197
 URL: https://issues.apache.org/jira/browse/FLINK-12197
 Project: Flink
  Issue Type: New Feature
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.8.0
Reporter: eugen yushin
Assignee: eugen yushin


Currently, the following Avro deserializers are available:
- core Avro binary layout for both Specific/Generic records and Row
- Confluent Specific/Generic record layout
The intention of this task is to fill the gap and provide a Confluent Avro 
deserializer for the Row type.
This will allow harnessing the Table API in pipelines that work with Avro and 
the Confluent schema registry.
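
For reference, a minimal sketch of the existing Generic-record entry point 
(the Avro schema variable and the registry URL are placeholders); the missing 
piece is an equivalent factory producing Row:

{code:java}
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema;

DeserializationSchema<GenericRecord> existing =
    ConfluentRegistryAvroDeserializationSchema.forGeneric(
        avroSchema, "http://schema-registry:8081");
{code}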



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] A more restrictive JIRA workflow

2019-04-15 Thread Konstantin Knauf
Hi everyone,

just my two cents: as a non-committer I appreciate a lightweight,
frictionless process for trivial changes or small fixes without the need to
approach a committer beforehand. If it takes 5 days before I can start on a
triviality, I might not bother in the first place. So, in order for this not
to backfire by making the community more exclusive, we need more timely
responses & follow-ups from committers after the change to the workflow.
Having said that, I am slightly leaning towards Andrey's interpretation of
option 2.

Cheers,

Konstantin



On Mon, Apr 15, 2019 at 1:39 PM Andrey Zagrebin 
wrote:

> @Robert thanks for pointing out and sorry for confusion. The correct text:
>
> +1 for option 1.
>
> I also do not mind option 2, after 1-2 contributions, any contributor could
> just ask the committer (who merged those contributions) about contributor
> permissions.
>
> Best,
> Andrey
>
> On Mon, Apr 15, 2019 at 10:34 AM Feng LI  wrote:
>
> > Hello there,
> >
> > New to the community. Just thought you might want some inputs from new
> > comers too.
> >
> > I prefer option 2, where you need to prove the ability and commitment
> with
> > commits  before contributor permission is assigned.
> >
> > Cheers,
> > Feng
> >
> > On Mon, Apr 15, 2019 at 09:17, Robert Metzger wrote:
> >
> > > @Andrey: You mention "option 2" two times, I guess one of the two uses
> of
> > > "option 2" contains a typo?
> > >
> > > On Wed, Apr 10, 2019 at 10:33 AM Andrey Zagrebin  >
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > +1 for option 2.
> > > >
> > > > I also do not mind option 2, after 1-2 contributions, any contributor
> > > could
> > > > just ask the committer (who merged those contributions) about
> > contributor
> > > > permissions.
> > > >
> > > > Best,
> > > > Andrey
> > > >
> > > > On Wed, Apr 10, 2019 at 3:58 AM Robert Metzger 
> > > > wrote:
> > > >
> > > > > I'm +1 on option 1.
> > > > >
> > > > > On Tue, Apr 9, 2019 at 1:58 AM Timo Walther 
> > > wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > I'd like to bring up this discussion thread again. In summary, I
> > > think
> > > > > > we all agreed on improving the JIRA workflow to move
> > design/consensus
> > > > > > discussions from PRs to the issues first, before implementing
> them.
> > > > > >
> > > > > > Two options have been proposed:
> > > > > > 1. Only committers can assign people to issues. PRs of unassigned
> > > > issues
> > > > > > are closed automatically.
> > > > > > 2. Committers upgrade assignable users to contributors as an
> > > > > > intermediate step towards committership.
> > > > > >
> > > > > > I would prefer option 1 as some people also mentioned that
> option 2
> > > > > > requires some standardized processes otherwise it would be
> difficult
> > > to
> > > > > > communicate why somebody is a contributor and somebody else is
> not.
> > > > > >
> > > > > > What do you think?
> > > > > >
> > > > > > Regards,
> > > > > > Timo
> > > > > >
> > > > > >
> > > > > > On 18.03.19 at 14:25, Robert Metzger wrote:
> > > > > > > @Fabian: I don't think this is a big problem. Moving away from
> > > > "giving
> > > > > > > everybody contributor permissions" to "giving it to some
> people"
> > is
> > > > not
> > > > > > > risky.
> > > > > > > I would leave this decision to the committers who are working
> > with
> > > a
> > > > > > person.
> > > > > > >
> > > > > > >
> > > > > > > We should bring this discussion to a conclusion and implement
> the
> > > > > changes
> > > > > > > to JIRA.
> > > > > > >
> > > > > > >
> > > > > > > Nobody has raised any objections to the overall idea.
> > > > > > >
> > > > > > > Points raised:
> > > > > > > 1. We need to update the contribution guide and describe the
> > > > workflow.
> > > > > > > 2. I brought up changing Flinkbot so that it auto-closes PRs
> > > without
> > > > > > > somebody assigned in JIRA.
> > > > > > >
> > > > > > > Who wants to work on an update of the contribution guide?
> > > > > > > If there's no volunteers, I'm happy to take care of this.
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Mar 15, 2019 at 9:20 AM Fabian Hueske <
> fhue...@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > >> Hi,
> > > > > > >>
> > > > > > >> I'm not sure about adding an additional stage.
> > > > > > >> Who's going to decide when to "promote" a user to a
> contributor,
> > > > i.e.,
> > > > > > >> grant assigning permission?
> > > > > > >>
> > > > > > >> Best, Fabian
> > > > > > >>
> > > > > > >> On Thu., 14 March 2019 at 13:50, Timo Walther <
> > > > > > >> twal...@apache.org> wrote:
> > > > > > >>> Hi Robert,
> > > > > > >>>
> > > > > > >>> I also like the idea of making every Jira user an "Assignable
> > > > User",
> > > > > > but
> > > > > > >>> restrict assigning a ticket to people with committer
> > permissions.
> > > > > > >>>
> > > > > > >>> Instead of giving contributor permissions to everyone, we
> could
> > > > have
> > > > > a
> > > > > > >>> 

Re: [jira] [Created] (FLINK-12196) FlinkKafkaProducerITCase#testRunOutOfProducersInThePool throws NPE

2019-04-15 Thread Alexander “Imtec” Chermenin
!

Regards, Alex. Sent from my BlackBerry Passport.
  Original Message  
From: vinoyang (JIRA)
Sent: Monday, April 15, 2019 4:08 PM
To: dev@flink.apache.org
Reply To: dev@flink.apache.org
Subject: [jira] [Created] (FLINK-12196) 
FlinkKafkaProducerITCase#testRunOutOfProducersInThePool throws NPE

vinoyang created FLINK-12196:


Summary: FlinkKafkaProducerITCase#testRunOutOfProducersInThePool throws NPE
Key: FLINK-12196
URL: https://issues.apache.org/jira/browse/FLINK-12196
Project: Flink
Issue Type: Test
Components: Connectors / Kafka, Tests
Reporter: vinoyang


{code:java}
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
09:25:28.562 [ERROR] 
testRunOutOfProducersInThePool(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
 Time elapsed: 65.393 s <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRunOutOfProducersInThePool(FlinkKafkaProducerITCase.java:514)
{code}
log detail : [https://api.travis-ci.org/v3/job/520162132/log.txt]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] A more restrictive JIRA workflow

2019-04-15 Thread Andrey Zagrebin
@Robert thanks for pointing out and sorry for confusion. The correct text:

+1 for option 1.

I also do not mind option 2, after 1-2 contributions, any contributor could
just ask the committer (who merged those contributions) about contributor
permissions.

Best,
Andrey

On Mon, Apr 15, 2019 at 10:34 AM Feng LI  wrote:

> Hello there,
>
> New to the community. Just thought you might want some inputs from new
> comers too.
>
> I prefer option 2, where you need to prove the ability and commitment with
> commits  before contributor permission is assigned.
>
> Cheers,
> Feng
>
> On Mon, Apr 15, 2019 at 09:17, Robert Metzger wrote:
>
> > @Andrey: You mention "option 2" two times, I guess one of the two uses of
> > "option 2" contains a typo?
> >
> > On Wed, Apr 10, 2019 at 10:33 AM Andrey Zagrebin 
> > wrote:
> >
> > > Hi all,
> > >
> > > +1 for option 2.
> > >
> > > I also do not mind option 2, after 1-2 contributions, any contributor
> > could
> > > just ask the committer (who merged those contributions) about
> contributor
> > > permissions.
> > >
> > > Best,
> > > Andrey
> > >
> > > On Wed, Apr 10, 2019 at 3:58 AM Robert Metzger 
> > > wrote:
> > >
> > > > I'm +1 on option 1.
> > > >
> > > > On Tue, Apr 9, 2019 at 1:58 AM Timo Walther 
> > wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > I'd like to bring up this discussion thread again. In summary, I
> > think
> > > > > we all agreed on improving the JIRA workflow to move
> design/consensus
> > > > > discussions from PRs to the issues first, before implementing them.
> > > > >
> > > > > Two options have been proposed:
> > > > > 1. Only committers can assign people to issues. PRs of unassigned
> > > issues
> > > > > are closed automatically.
> > > > > 2. Committers upgrade assignable users to contributors as an
> > > > > intermediate step towards committership.
> > > > >
> > > > > I would prefer option 1 as some people also mentioned that option 2
> > > > > requires some standardized processes otherwise it would be difficult
> > to
> > > > > communicate why somebody is a contributor and somebody else is not.
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Regards,
> > > > > Timo
> > > > >
> > > > >
> > > > > On 18.03.19 at 14:25, Robert Metzger wrote:
> > > > > > @Fabian: I don't think this is a big problem. Moving away from
> > > "giving
> > > > > > everybody contributor permissions" to "giving it to some people"
> is
> > > not
> > > > > > risky.
> > > > > > I would leave this decision to the committers who are working
> with
> > a
> > > > > person.
> > > > > >
> > > > > >
> > > > > > We should bring this discussion to a conclusion and implement the
> > > > changes
> > > > > > to JIRA.
> > > > > >
> > > > > >
> > > > > > Nobody has raised any objections to the overall idea.
> > > > > >
> > > > > > Points raised:
> > > > > > 1. We need to update the contribution guide and describe the
> > > workflow.
> > > > > > 2. I brought up changing Flinkbot so that it auto-closes PRs
> > without
> > > > > > somebody assigned in JIRA.
> > > > > >
> > > > > > Who wants to work on an update of the contribution guide?
> > > > > > If there's no volunteers, I'm happy to take care of this.
> > > > > >
> > > > > >
> > > > > > On Fri, Mar 15, 2019 at 9:20 AM Fabian Hueske  >
> > > > wrote:
> > > > > >
> > > > > >> Hi,
> > > > > >>
> > > > > >> I'm not sure about adding an additional stage.
> > > > > >> Who's going to decide when to "promote" a user to a contributor,
> > > i.e.,
> > > > > >> grant assigning permission?
> > > > > >>
> > > > > >> Best, Fabian
> > > > > >>
> > > > > >> On Thu., 14 March 2019 at 13:50, Timo Walther <
> > > > > >> twal...@apache.org> wrote:
> > > > > >>> Hi Robert,
> > > > > >>>
> > > > > >>> I also like the idea of making every Jira user an "Assignable
> > > User",
> > > > > but
> > > > > >>> restrict assigning a ticket to people with committer
> permissions.
> > > > > >>>
> > > > > >>> Instead of giving contributor permissions to everyone, we could
> > > have
> > > > a
> > > > > >>> more staged approach from user, to contributor, and finally to
> > > > > committer.
> > > > > >>>
> > > > > >>> Once people worked on a couple of JIRA issues, we can make them
> > > > > >>> contributors.
> > > > > >>>
> > > > > >>> What do you think?
> > > > > >>>
> > > > > >>> Regards,
> > > > > >>> Timo
> > > > > >>>
> > > > > >>> On 06.03.19 at 12:33, Robert Metzger wrote:
> > > > >  Hi Tison,
> > > > >  I also thought about this.
> > > > >  Making a person a "Contributor" is required for being an
> > > "Assignable
> > > > > >>> User",
> > > > >  so normal Jira accounts can't be assigned to a ticket.
> > > > > 
> > > > >  We could make every Jira user an "Assignable User", but
> restrict
> > > > > >>> assigning
> > > > >  a ticket to people with committer permissions.
> > > > >  There are some other permissions attached to the "Contributor"
> > > role,
> > > > > >> such
> > > > >  as "Clo

[jira] [Created] (FLINK-12196) FlinkKafkaProducerITCase#testRunOutOfProducersInThePool throws NPE

2019-04-15 Thread vinoyang (JIRA)
vinoyang created FLINK-12196:


 Summary: FlinkKafkaProducerITCase#testRunOutOfProducersInThePool 
throws NPE
 Key: FLINK-12196
 URL: https://issues.apache.org/jira/browse/FLINK-12196
 Project: Flink
  Issue Type: Test
  Components: Connectors / Kafka, Tests
Reporter: vinoyang


{code:java}
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
09:25:28.562 [ERROR] 
testRunOutOfProducersInThePool(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
  Time elapsed: 65.393 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRunOutOfProducersInThePool(FlinkKafkaProducerITCase.java:514)
{code}
log detail : [https://api.travis-ci.org/v3/job/520162132/log.txt]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Backtracking for failover regions

2019-04-15 Thread Till Rohrmann
Thanks for summarizing the current state of FLIP-1 and outlining the way to
move forward with it, Chesnay.

I think we should implement the first version of the backtracking logic
using the DataConsumptionException (FLINK-6227) to signal if an
intermediate result partition has been lost.

Moreover, I think it would be best to base the new implementation on the
refined FailoverStrategy interface proposed by the scheduler refactorings
[1]. We could have an adaptor to make it work with the existing code for
testing purposes and until the scheduler interfaces have been introduced.

Apart from that, +1 for completing Flink's first improvement proposal :-)

[1]
https://docs.google.com/document/d/1fstkML72YBO1tGD_dmG2rwvd9bklhRVauh4FSsDDwXU/edit?usp=sharing

Cheers,
Till

On Sun, Apr 14, 2019 at 8:20 PM Chesnay Schepler  wrote:

> Hello everyone,
>
> Till, Zhu Zhu and myself have prepared a Design Document
> <
> https://docs.google.com/document/d/1YHOpMLdC-dtgjcM-EDn6v-oXgsEQKXSoMjqRcYVbJA8>
>
> for introducing backtracking for failover regions. This is an
> optimization of the failure handling logic for jobs with blocking result
> partitions (which primarily exist in batch jobs), where only part of the
> job has to be restarted.
> This is a continuation of the FLIP-1
> <
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures>
>
> efforts to introduce fine-grained recovery from task failures.
> The associated JIRA can be found here
> .
>
> Any feedback is highly appreciated.
>
> Regards,
> Chesnay
>


[jira] [Created] (FLINK-12195) Incorrect resource time setting causes flink to fail to submit

2019-04-15 Thread tangshangwen (JIRA)
tangshangwen created FLINK-12195:


 Summary: Incorrect resource time setting causes flink to fail to 
submit
 Key: FLINK-12195
 URL: https://issues.apache.org/jira/browse/FLINK-12195
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.6.3
Reporter: tangshangwen


We use Tencent COS as the defaultFS, and when we submitted the job, we ran into 
a YARN resource-timestamp check mismatch that prevented the job from being 
submitted:

 

{{2019-04-15 14:45:47,683 DEBUG 
org.apache.hadoop.security.UserGroupInformation: PrivilegedActionException 
as:hadoop (auth:SIMPLE) cause:java.io.IOException: Resource 
cosn://xxx-xxx/user/hadoop/.flink/application_1555078596113_0014/logback.xml 
changed on src filesystem (expected 1555259286000, was 1555310742000}}

 

 

I found that Flink uses the lastModified timestamp of the local file; why is it 
not the modification time from the remote file system?
{code:java}
LOG.debug("Copying from {} to {}", localSrcPath, dst);

fs.copyFromLocalFile(false, true, localSrcPath, dst);

// Note: If we used registerLocalResource(FileSystem, Path) here, we would 
access the remote
// file once again which has problems with eventually consistent 
read-after-write file
// systems. Instead, we decide to preserve the modification time at the remote
// location because this and the size of the resource will be checked by YARN 
based on
// the values we provide to #registerLocalResource() below.
fs.setTimes(dst, localFile.lastModified(), -1);

// now create the resource instance
LocalResource resource = registerLocalResource(dst, localFile.length(), 
localFile.lastModified());

return Tuple2.of(dst, resource);
{code}
Maybe it should be
{code:java}
// now create the resource instance
LocalResource resource = registerLocalResource(dst, localFile.length(), 
fs.getFileStatus(dst).getModificationTime());{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [Discuss][FLINK-8297]A solution for FLINK-8297 Timebased RocksDBListState

2019-04-15 Thread Yun Tang
Hi Faxian

We also tried to fix the single-large-list-state problem in RocksDB and had a 
private solution that adds an atomically increasing number to the RocksDB key 
bytes. We would keep the number in the checkpoint so that the order would not 
be broken after restoring from a checkpoint.
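
For illustration, a minimal sketch of that key layout (the 4-byte key-group 
prefix and the other details are assumptions, not our exact implementation):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

final class ListElementKey {
    // In the real design this counter is persisted with checkpoints.
    private static final AtomicLong SEQUENCE = new AtomicLong();

    static byte[] build(int keyGroup, byte[] keyAndNamespace) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(keyGroup);                    // key-group prefix
        out.write(keyAndNamespace);                // serialized key + namespace
        out.writeLong(SEQUENCE.getAndIncrement()); // big-endian, preserves insertion order
        return bytes.toByteArray();
    }
}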

I think FLINK-8297 mainly focuses on resolving large list storage in RocksDB, 
and the timestamp is just one solution. Actually, I did not get your point 
about why we should use the record's timestamp.

After we implemented the element-wise RocksDB list state in our environment, we 
found that, as expected, it behaves much worse than the original list state, 
and we do not recommend that users directly use this feature unless they are 
sure the list state is really large.

Best
Yun Tang


From: Andrey Zagrebin 
Sent: Monday, April 15, 2019 17:09
To: dev
Subject: Re: [Discuss][FLINK-8297]A solution for FLINK-8297 Timebased 
RocksDBListState

Hi Faxian,

Thanks for thinking on this new approach. Here are my thoughts:

- In the case of event time, although this approach changes the semantics of
the original list state, it could be a good fit for certain use cases. The main
advantage is that it is deterministic in event time: the list should always
end up in the same order.

- In the case of processing time, time skew might be a problem. If the task
executor's clock jumps back for some reason, or it fails and another TE with a
shifted clock takes over, this can potentially reorder list elements. If we
rather think of the list state as a bag, reordering might be ok, but there is
also a risk that different elements end up with the same processing time and
overwrite each other.

- In general, a larger storage size is a trade-off for more scalability of the
list state and should be ok as long as we do not degrade the existing approach.

Let's see other opinions.

Best,
Andrey

On Fri, Apr 12, 2019 at 10:34 AM Faxian Zhao  wrote:

> Referring to PR#5185, I think we can use a time-based RocksDBListState to
> resolve it.
> A time-based RocksDBListState stores list entries dispersed in RocksDB like
> RocksDBMapState.
> Key pair:
> For time-based Flink inner classes like StreamRecord (with
> event/ingestion time enabled), the RocksDB key is
> #KeyGroup#Key#Namespace#StreamRecord.getTimestamp().
> Otherwise, the key is the current processing time.
> Value pair:
> The rocksdb value is the entries which have the same
> timestamp(event/ingestion/processing time), like the original
> RocksDBListState.
>
> The ListState.get() implement like
> org.apache.flink.contrib.streaming.state.RocksDBMapState#iterator.
> Generally, it won't load all entries one time.
>
> The rocksdb store structure.
> ---Key--- Value-
> #KeyGroup#Key#Namespace #KeyGroup#Key#Namespace#ts3 (max lexicographically
> key)
> #KeyGroup#Key#Namespace#ts1value1,value2,value7
> #KeyGroup#Key#Namespace#ts2value4,value6
> #KeyGroup#Key#Namespace#ts3value3,value5
>
>
> Advantage:
> 1. Due to the rocksdb store key with lexicographically order, so the
> entries is monotonous by time. It's friendly to event time records
> processing.
> 2. We can store the max timestamp key in the rocksdb default
> key(#KeyGroup#Key#Namespace), then we can reverse iterate the stored list.
> 3. For the CountEvictor and TimeEvictor, we can stop the iteration early
> instead of read all of them into memory.
> 4. This ListState is monotonous by time, we can provide some more methods
> for event time records processing.
> 5. I think it resolve the ttl issue naturally.
>
> Disadvantage:
> 1. It will add 8 bytes cost to store extended timestamp in key part, and
> I'm not good at rocksdb, I don't know the performance affect.
> 2. For the event time StreamRecord, it will reorder the entries by event
> time. This behavior is not align with other ListState implement.
> 3. For other records, the key is useless useless overhead.
> 4. If all of the entries have the same timestamp, the store structure is
> almost same as the original RocksDBListState.
> 5. We can't easily implement remove, size method for ListState yet.
>
> Implement:
> We can abstract a new class which is the parent of Time based
> RocksDBListState and RocksDBMapState, but we should modify
> InternalLargeListState.
> I draft some code for this in PR#7675
>
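
For illustration, the early-termination read from advantage 3 in the quoted proposal 
could look roughly like this (a rough sketch only; startsWith and consume are assumed 
helpers, and windowEndTimestamp would come from the evictor):
{code:java}
// Sketch: prefix-seek to the state's key prefix and stop as soon as the
// element timestamp encoded in the key exceeds the evictor's bound,
// instead of loading the whole list into memory.
try (RocksIterator it = db.newIterator(columnFamily)) {
    it.seek(prefix); // jump to the first entry of this key/namespace
    while (it.isValid() && startsWith(it.key(), prefix)) {
        long ts = ByteBuffer.wrap(it.key(), prefix.length, Long.BYTES).getLong();
        if (ts > windowEndTimestamp) {
            break; // later entries can only carry larger timestamps
        }
        consume(it.value()); // deserialize and emit the elements at this timestamp
        it.next();
    }
}
{code}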


Re: [DISCUSS] Create a Flink ecosystem website

2019-04-15 Thread Robert Metzger
Hey Daryl,

thanks a lot for posting a link to this first prototype on the mailing
list! I really like it!

Becket: Our plan forward is that Congxian is implementing the backend for
the website. He has already started with the work, but needs at least one
more week.


[Re-sending this email because the first one was blocked on dev@f.a.o]


On Mon, Apr 15, 2019 at 7:59 AM Becket Qin  wrote:

> Hi Daryl,
>
> Thanks a lot for the update. The site looks awesome! This is great
> progress. I really like the conciseness of the GUI.
>
> One minor suggestion: for the same library, there might be multiple
> versions compatible with different Flink versions. It would be good to show
> that somewhere on the project page, as it seems important to users.
>
> BTW, will you share the plan to move forward? Would additional hands help?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Sat, Apr 13, 2019 at 7:10 PM Daryl Roberts  wrote:
>
>> > Shall we add a guide page to show people how to publish their projects
>> to the website? The exact rules can be discussed and drafted in a separate
>> email thread IMO
>>
>> This is a good idea (both the guide and the separate thread). I think once
>> there is an actual package in place, we'll be in a much better position to
>> discuss this.
>>
>> > The "Log in with Github" link doesn't seem to work yet. Will it only
>> allow login for admins and publishers, or for everyone?
>>
>> Correct, all the oauth stuff requires a real server. We are currently
>> just faking everything.
>>
>> I will add a mock-login page (username/password that just accepts
>> anything and displays whatever username you type in) so we can see the
>> add-comment field and add-packages page once they exist.
>>
>>
>>
>>


Re: [DISCUSS] Apache Flink at Season of Docs

2019-04-15 Thread Robert Metzger
Hi all,
I'm very happy to see this project happening!

Thank you for the proposal Konstantin! One idea for the "related material":
we could also link to talks or blog posts about concepts / monitoring /
operations. Without such additional material, potential writers could feel
overwhelmed by our demands for improvements.


On Mon, Apr 15, 2019 at 10:16 AM Konstantin Knauf 
wrote:

> Hi everyone,
>
> thanks @Aizhamal Nurmamat kyzy . As we only have one
> week left until the application deadline, I went ahead and created a
> document for the project ideas [1]. I have added the description for the
> "stream processing concepts" as well as the "deployment & operations
> documentation" project idea. Please let me know what you think, edit &
> comment. We also need descriptions for the other two projects (Table
> API/SQL & Flink Internals). @Fabian/@Jark/@Stephan can you chime in?
>
> Any more project ideas?
>
> Best,
>
> Konstantin
>
>
> [1]
>
> https://docs.google.com/document/d/1Up53jNsLztApn-mP76AB6xWUVGt3nwS9p6xQTiceKXo/edit?usp=sharing
>
>
>
> On Fri, Apr 12, 2019 at 6:50 PM Aizhamal Nurmamat kyzy <
> aizha...@google.com>
> wrote:
>
> > Hello everyone,
> >
> > @Konstantin Knauf  - yes, you are correct.
> > Between steps 1 and 2 though, the open source organization, in this case
> > Flink, has to be selected by SoD as one of the participating orgs
> *fingers
> > crossed*.
> >
> > One tip about organizing ideas is that you want to communicate potential
> > projects to the tech writers that are applying. Just make sure the scope
> of
> > the project is clear to them. The SoD wants to set up the tech writers
> for
> > success by making sure the work can be done in the allotted time.
> >
> > Hope it helps.
> >
> > Aizhamal
> >
> >
> >
> > On Fri, Apr 12, 2019 at 7:37 AM Konstantin Knauf <
> konstan...@ververica.com>
> > wrote:
> >
> >> Hi all,
> >>
> >> I read through the SoD documentation again, and now I think it would
> >> actually make sense to split (1) up into multiple project ideas. Let me
> >> summarize the overall process:
> >>
> >> 1. We create & publish a list of project ideas, e.g. in a blog post.
> >> (This can be any number of ideas.)
> >> 2. Potential technical writers look at our list of ideas and send a
> >> proposal for a particular project to Google. During that time they can
> >> reach out to us for clarification.
> >> 3. Google forwards all proposals for our project ideas to us and we send
> >> back a prioritized list of proposals, which we would like to accept.
> >> 4. Of all these proposals, Google accepts 50 proposals for SoD 2019. Per
> >> organization Google will only accept a maximum of two proposals.
> >>
> >> @Aizhamal Nurmamat kyzy  Please correct me!
> >>
> >> For me this means we should split this up in a way that each project is
> >> a) still relevant in September and b) makes sense as a 3-month project.
> >> Based on the ideas we have right now, these could for example be:
> >>
> >> (I) Rework/Extract/Improve the documentation of stream processing
> concepts
> >> (II) Improve & extend Apache Flink's documentation for deployment,
> >> operations (incl. configuration)
> >> (III) Add documentation for Flink internals
> >> (IV) Rework Table API / SQL documentation
> >>
> >> We would then get proposals potentially for all of these topics and
> >> could decide which of these proposals we would send back to Google. My
> >> feeling is that a technical writer could easily spend three months on
> >> any of these projects. What do others think? Any other project ideas?
> >>
> >> Cheers,
> >>
> >> Konstantin
> >>
> >>
> >>
> >>
> >> On Fri, Apr 12, 2019 at 1:47 PM Jark Wu  wrote:
> >>
> >>> Hi all,
> >>>
> >>> I'm fine with only preparing the first proposal. I think it's
> reasonable
> >>> because the first proposal is more attractive
> >>> and maybe there are not enough Chinese writers. We can focus on one
> project
> >>> to come up with a concrete and
> >>> attractive project plan.
> >>>
> >>> One possible subproject could be rework Table SQL docs.
> >>> (1). Improve concepts in Table SQL.
> >>> (2). A more detailed introduction of built-in functions; currently we
> >>> only
> >>> have a simple explanation for each function.
> >>>   We should add more descriptions, especially more concrete
> examples,
> >>> and maybe some notes. We can take
> >>>   MySQL doc [1] as a reference.
> >>> (3). As Flink SQL is evolving rapidly and features from Blink are being
> >>> merged, for example, SQL DDL, Hive integration,
> >>>   Python Table API, Interactive Programming, SQL optimization and
> >>> tuning, etc., we can redesign the doc structure of
> >>>   Table SQL from a higher-level perspective.
> >>>
> >>> Cheers,
> >>> Jark
> >>>
> >>> [1]:
> >>>
> >>>
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bin
> >>>
> >>>
> >>>
> >>> On Fri, 12 Apr 2019 at 18:19, jincheng sun 
> >>> wrote:
> >>>
> >>> > I am honored to have the opportunity to do the organization
> >>> > administrator's work a second time!

[jira] [Created] (FLINK-12194) Enable setting failOnCheckpointingErrors in DataStreamAllroundTestProgram

2019-04-15 Thread Gary Yao (JIRA)
Gary Yao created FLINK-12194:


 Summary: Enable setting failOnCheckpointingErrors in DataStreamAllroundTestProgram
 Key: FLINK-12194
 URL: https://issues.apache.org/jira/browse/FLINK-12194
 Project: Flink
  Issue Type: New Feature
Reporter: Gary Yao
Assignee: Gary Yao


Enable setting the {{failOnCheckpointingErrors}} flag in the checkpoint 
configuration of the {{DataStreamAllroundTestProgram}}.
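
For illustration, the wiring could look roughly like this (a sketch; the parameter 
name and default are assumptions, not the final implementation):
{code:java}
// Sketch: expose the flag as a program parameter of the test job.
ParameterTool params = ParameterTool.fromArgs(args);
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(1000L);
env.getCheckpointConfig().setFailOnCheckpointingErrors(
        params.getBoolean("failOnCheckpointingErrors", true));
{code}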



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [Discuss][FLINK-8297]A solution for FLINK-8297 Timebased RocksDBListState

2019-04-15 Thread Andrey Zagrebin
Hi Faxian,

Thanks for thinking about this new approach. Here are my thoughts:

- In case of event time, although this approach changes the semantics of the
original list state, it could be a good fit for certain use cases. The main
advantage is that it is deterministic in event time: the list should always
end up in the same order.

- In case of processing time, time skew might be a problem. If a task
executor's clock jumps back for some reason, or it fails and another TE with
a shifted clock takes over, this can potentially reorder list elements. If we
rather think of the list state as a bag, reordering might be acceptable, but
there is also a risk that different elements end up with the same
processing time and overwrite each other.

- In general, exploding the storage size is a trade-off to achieve more
scalability for list state and should be fine as long as we do not degrade the
existing approach.

Let's see other opinions.

Best,
Andrey

On Fri, Apr 12, 2019 at 10:34 AM Faxian Zhao  wrote:

> Referring to PR#5185, I think we can use a time-based RocksDBListState to
> resolve it.
> The time-based RocksDBListState stores list entries dispersed in RocksDB,
> like RocksDBMapState.
> Key pair:
> For time-based Flink inner classes like StreamRecord (with event/ingestion
> time enabled), the RocksDB key is
> #KeyGroup#Key#Namespace#StreamRecord.getTimestamp().
> Otherwise, the key is the current processing time.
> Value pair:
> The RocksDB value holds the entries that share the same
> timestamp (event/ingestion/processing time), like the original
> RocksDBListState.
>
> The ListState.get() implementation works like
> org.apache.flink.contrib.streaming.state.RocksDBMapState#iterator.
> Generally, it won't load all entries at once.
>
> The RocksDB store structure:
> --- Key ---                     --- Value ---
> #KeyGroup#Key#Namespace         #KeyGroup#Key#Namespace#ts3 (max lexicographic key)
> #KeyGroup#Key#Namespace#ts1     value1,value2,value7
> #KeyGroup#Key#Namespace#ts2     value4,value6
> #KeyGroup#Key#Namespace#ts3     value3,value5
>
>
> Advantages:
> 1. Because RocksDB stores keys in lexicographic order, the entries are
> monotonically ordered by time. This is friendly to event-time record
> processing.
> 2. We can store the max timestamp key under the RocksDB default
> key (#KeyGroup#Key#Namespace); then we can reverse-iterate the stored list.
> 3. For the CountEvictor and TimeEvictor, we can stop the iteration early
> instead of reading all entries into memory.
> 4. This ListState is monotonic in time, so we can provide some more methods
> for event-time record processing.
> 5. I think it resolves the TTL issue naturally.
>
> Disadvantages:
> 1. It adds an 8-byte cost to store the timestamp in the key part; I'm not
> familiar enough with RocksDB to judge the performance impact.
> 2. For event-time StreamRecords, it will reorder the entries by event
> time. This behavior is not aligned with the other ListState implementations.
> 3. For other records, the key is useless overhead.
> 4. If all of the entries have the same timestamp, the store structure is
> almost the same as the original RocksDBListState.
> 5. We can't easily implement the remove and size methods for ListState yet.
>
> Implementation:
> We can abstract a new class that is the parent of the time-based
> RocksDBListState and RocksDBMapState, but we would need to modify
> InternalLargeListState.
> I drafted some code for this in PR#7675.
>


Re: contributor permission

2019-04-15 Thread Fabian Hueske
Hi Francis,

Welcome to the Flink community!
I've given you contributor permissions for Jira.

Best, Fabian

On Mon, Apr 15, 2019 at 03:49, du francis <
francisdu...@gmail.com> wrote:

> Hello Flink Community:
>
> My name is Francis Du ,I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is francisdu
> Thanks!
>
> Best Regards,
> Francis
>


Re: SQL CLI and JDBC

2019-04-15 Thread Fabian Hueske
Hi,

I don't have much experience with Calcite connectors.

One potential problem might be fetching the results. The CLI client uses
the DataSet.collect() method, which collects all results from all TMs at the
JM and (AFAIK) transfers them in a single RPC message back to the client.
Hence, this only works for small results (a few MBs) and breaks if the
result size exceeds the max message size of RPC calls. For even larger
results, it might even crash the JM.
You would need a robust mechanism to collect results from multiple TMs.
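
For illustration, the pattern in question (a minimal sketch; the payload limit is
governed by the RPC frame size, e.g. akka.framesize):
{code:java}
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Tuple2<String, Integer>> result =
        env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2));
// collect() gathers the complete result at the JM and ships it to the
// client in a single RPC payload, so it only suits small result sets.
List<Tuple2<String, Integer>> rows = result.collect();
{code}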

Best, Fabian


On Sun, Apr 14, 2019 at 09:28, Hanan Yehudai <
hanan.yehu...@radcom.com> wrote:

> Fabian, looking at the response below again:
>
> As I'm currently looking into the Batch mode only (execution result mode
> = table),
> I was thinking that wrapping the SQL CLI code with a Calcite adapter might
> do the trick.
>
> I don't want to have a different execution engine (like DRILL) just to
> allow ad hoc queries. And JDBC would allow me to use a lot of 3rd-party
> front ends (BI tools, notebooks, etc.).
>
> Do you believe it's a viable solution while the JDBC driver and SQL gateway
> are still work in progress?
>
>
> -Original Message-
> From: Fabian Hueske 
> Sent: 8 April 2019 11:18
> To: dev 
> Subject: Re: SQL CLI and JDBC
>
> Hi Hanan,
>
> I'm not aware of any plans to add a JDBC Driver.
>
> One issue with the JDBC interface is that it only works well for queries
> on batch data and a subset of queries on streaming data.
>
> Many streaming SQL queries are not able to emit final results (or need to
> update previously emitted results).
> Take for instance a query like
>
> SELECT colA, COUNT(*)
> FROM tab
> GROUP BY colA;
>
> If tab is a continuously growing table, no row of the query's result will
> ever be final because a new row with any value of colA can be added at any
> point in time.
> JDBC does not support retracting or updating result rows that were emitted
> before.
>
> Best, Fabian
>
>
> On Sun, Apr 7, 2019 at 11:31, Hanan Yehudai <
> hanan.yehu...@radcom.com> wrote:
>
> > I didn't see any docs on this: is there a JDBC driver that allows
> > the same functionality as the SQL CLI?
> > If not, is it on the roadmap?
> >
> >
>


Re: [PROGRESS-UPDATE] Redesign Flink Scheduling, introducing dedicated Scheduler Component

2019-04-15 Thread Zhu Zhu
The new interfaces will be very helpful for extending the scheduling strategy.
They also make the scheduling and failover process much cleaner.
Thanks, Gary, for bringing up this proposal. I really like it.
+1 for this proposal.

Regards,
Zhu

zhijiang  wrote on Monday, April 15, 2019 at 2:45 PM:

> Thanks for sharing the latest updates on the scheduler refactoring, Gary.
> It looks really good to me, and I hope to see the new interfaces ready soon.
>
> Best,
> Zhijiang
> --
> From:Guowei Ma 
> Send Time: Monday, April 15, 2019 09:25
> To:dev 
> Subject:Re: [PROGRESS-UPDATE] Redesign Flink Scheduling, introducing
> dedicated Scheduler Component
>
> Thanks, Gary, for sharing the documents with the community.
> The idea makes the scheduler more flexible.
>
> Best,
> Guowei
>
>
> Till Rohrmann  wrote on Saturday, April 13, 2019 at 12:21 AM:
>
> > Thanks for sharing the current state of the scheduler refactorings with
> the
> > community Gary. The proposed changes look good to me and, hence, +1 for
> > proceeding with bringing the interfaces into place.
> >
> > Cheers,
> > Till
> >
> > On Fri, Apr 12, 2019 at 5:53 PM Gary Yao  wrote:
> >
> > > Hi all,
> > >
> > > As you might have seen already, we are currently reworking Flink's
> > > scheduling.
> > > At the moment, scheduling is a concern that is scattered across
> > > different components, such as ExecutionGraph, Execution and SlotPool.
> > > Scheduling also happens only at the granularity of individual tasks,
> > > which makes holistic scheduling strategies hard to implement. For more
> > > details on our motivation, see [1]. To track the progress, we have
> > > created the umbrella issue
> > > FLINK-10429
> > > [2] (bear in mind that the current sub-tasks are still subject to
> > change).
> > >
> > > We are currently in the process of finalizing the scheduler interfaces.
> > Our
> > > current state can be found in [3]. Feel free to review and comment on
> our
> > > design proposal.
> > >
> > > Best,
> > > Gary
> > >
> > > [1]
> > >
> > >
> >
> https://docs.google.com/document/d/1q7NOqt05HIN-PlKEEPB36JiuU1Iu9fnxxVGJzylhsxU/edit
> > > [2] https://issues.apache.org/jira/browse/FLINK-10429
> > > [3]
> > >
> > >
> >
> https://docs.google.com/document/d/1fstkML72YBO1tGD_dmG2rwvd9bklhRVauh4FSsDDwXU/edit?usp=sharing
> > >
> >
>
>


[jira] [Created] (FLINK-12193) [Distributed Coordination] Send TM "can be released status" with RM heartbeat

2019-04-15 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-12193:
---

 Summary: [Distributed Coordination] Send TM "can be released 
status" with RM heartbeat
 Key: FLINK-12193
 URL: https://issues.apache.org/jira/browse/FLINK-12193
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Andrey Zagrebin
 Fix For: 1.9.0


We introduced a conditional release of the TaskExecutor in the ResourceManager in 
FLINK-10941. At the moment, the RM directly asks the TE on every release timeout whether it 
can be released (i.e., all depending consumers are done). We can piggyback the TE/RM 
heartbeats for this purpose. In that case, we do not need an additional RPC call 
to the TE gateway and could potentially release the TE more quickly.
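
For illustration, a minimal sketch of such a heartbeat payload (class and accessor 
names are assumptions, not the final API):
{code:java}
// Sketch: the TE attaches its release status to every heartbeat, so the RM
// can release it without issuing a dedicated canBeReleased RPC.
public class TaskExecutorHeartbeatPayload implements java.io.Serializable {

    private final boolean canBeReleased; // true once all produced partitions are consumed

    public TaskExecutorHeartbeatPayload(boolean canBeReleased) {
        this.canBeReleased = canBeReleased;
    }

    public boolean canBeReleased() {
        return canBeReleased;
    }
}
{code}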



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] A more restrictive JIRA workflow

2019-04-15 Thread Feng LI
Hello there,

I am new to the community. Just thought you might want some input from
newcomers too.

I prefer option 2, where you need to prove ability and commitment with
commits before contributor permissions are assigned.

Cheers,
Feng

On Mon, Apr 15, 2019 at 09:17, Robert Metzger  wrote:

> @Andrey: You mention "option 2" two times, I guess one of the two uses of
> "option 2" contains a typo?
>
> On Wed, Apr 10, 2019 at 10:33 AM Andrey Zagrebin 
> wrote:
>
> > Hi all,
> >
> > +1 for option 2.
> >
> > I also do not mind option 2, after 1-2 contributions, any contributor
> could
> > just ask the committer (who merged those contributions) about contributor
> > permissions.
> >
> > Best,
> > Andrey
> >
> > On Wed, Apr 10, 2019 at 3:58 AM Robert Metzger 
> > wrote:
> >
> > > I'm +1 on option 1.
> > >
> > > On Tue, Apr 9, 2019 at 1:58 AM Timo Walther 
> wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I'd like to bring up this discussion thread again. In summary, I
> think
> > > > we all agreed on improving the JIRA workflow to move design/consensus
> > > > discussions from PRs to the issues first, before implementing them.
> > > >
> > > > Two options have been proposed:
> > > > 1. Only committers can assign people to issues. PRs of unassigned
> > issues
> > > > are closed automatically.
> > > > 2. Committers upgrade assignable users to contributors as an
> > > > intermediate step towards committership.
> > > >
> > > > I would prefer option 1 as some people also mentioned that option 2
> > > > requires some standardized processes; otherwise it would be difficult
> > > > to communicate why somebody is a contributor and somebody else is not.
> > > >
> > > > What do you think?
> > > >
> > > > Regards,
> > > > Timo
> > > >
> > > >
> > > > On 18.03.19 at 14:25, Robert Metzger wrote:
> > > > > @Fabian: I don't think this is a big problem. Moving away from
> > "giving
> > > > > everybody contributor permissions" to "giving it to some people" is
> > not
> > > > > risky.
> > > > > I would leave this decision to the committers who are working with
> a
> > > > person.
> > > > >
> > > > >
> > > > > We should bring this discussion to a conclusion and implement the
> > > changes
> > > > > to JIRA.
> > > > >
> > > > >
> > > > > Nobody has raised any objections to the overall idea.
> > > > >
> > > > > Points raised:
> > > > > 1. We need to update the contribution guide and describe the
> > workflow.
> > > > > 2. I brought up changing Flinkbot so that it auto-closes PRs
> without
> > > > > somebody assigned in JIRA.
> > > > >
> > > > > Who wants to work on an update of the contribution guide?
> > > > > If there's no volunteers, I'm happy to take care of this.
> > > > >
> > > > >
> > > > > On Fri, Mar 15, 2019 at 9:20 AM Fabian Hueske 
> > > wrote:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> I'm not sure about adding an additional stage.
> > > > >> Who's going to decide when to "promote" a user to a contributor,
> > i.e.,
> > > > >> grant assigning permission?
> > > > >>
> > > > >> Best, Fabian
> > > > >>
> > > > >> On Thu, Mar 14, 2019 at 13:50, Timo Walther <
> > > > >> twal...@apache.org
> > > > >>> wrote:
> > > > >>> Hi Robert,
> > > > >>>
> > > > >>> I also like the idea of making every Jira user an "Assignable
> > User",
> > > > but
> > > > >>> restrict assigning a ticket to people with committer permissions.
> > > > >>>
> > > > >>> Instead of giving contributor permissions to everyone, we could
> > have
> > > a
> > > > >>> more staged approach from user, to contributor, and finally to
> > > > committer.
> > > > >>>
> > > > >>> Once people worked on a couple of JIRA issues, we can make them
> > > > >>> contributors.
> > > > >>>
> > > > >>> What do you think?
> > > > >>>
> > > > >>> Regards,
> > > > >>> Timo
> > > > >>>
> > > > >>> On 06.03.19 at 12:33, Robert Metzger wrote:
> > > >  Hi Tison,
> > > >  I also thought about this.
> > > >  Making a person a "Contributor" is required for being an
> > "Assignable
> > > > >>> User",
> > > >  so normal Jira accounts can't be assigned to a ticket.
> > > > 
> > > >  We could make every Jira user an "Assignable User", but restrict
> > > > >>> assigning
> > > >  a ticket to people with committer permissions.
> > > >  There are some other permissions attached to the "Contributor"
> > role,
> > > > >> such
> > > >  as "Closing" and "Editing" (including "Transition", "Logging
> > work",
> > > > >>> etc.).
> > > >  I think we should keep the "Contributor" role, but we could (as you
> > > >  propose) make it more restrictive. Maybe "invite only" for people who
> > > >  are apparently active in our Jira.
> > > > 
> > > >  Best,
> > > >  Robert
> > > > 
> > > > 
> > > > 
> > > >  On Wed, Mar 6, 2019 at 11:02 AM ZiLi Chen  >
> > > > >> wrote:
> > > > > Hi devs,
> > > > >
> > > > > Just now I find that one who is not a contributor can file issues and
> > > > > participate in discussions.

[jira] [Created] (FLINK-12192) Add support for generating optimized logical plan for grouping sets and distinct aggregate

2019-04-15 Thread godfrey he (JIRA)
godfrey he created FLINK-12192:
--

 Summary: Add support for generating optimized logical plan for 
grouping sets and distinct aggregate
 Key: FLINK-12192
 URL: https://issues.apache.org/jira/browse/FLINK-12192
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: godfrey he
Assignee: godfrey he


This issue aims to support generating optimized logical plans for grouping sets 
and distinct aggregates (mentioned in 
[FLINK-12076|https://issues.apache.org/jira/browse/FLINK-12076] and 
[FLINK-12098|https://issues.apache.org/jira/browse/FLINK-12098])

For batch, a query with distinct aggregates will be rewritten into two 
non-distinct aggregates by an extended 
[AggregateExpandDistinctAggregatesRule|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/rel/rules/AggregateExpandDistinctAggregatesRule.java]:
 the first aggregate computes the distinct fields and the non-distinct aggregate 
function results, and the second aggregate computes the distinct aggregate 
function results based on the first aggregate's output. The first aggregate uses 
grouping sets if there is more than one distinct aggregate on different fields.
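
For illustration, the batch rewrite could look roughly like this, using the MyTable 
schema below (a sketch of the resulting plan shape in SQL form, not the exact rule 
output):
{noformat}
SELECT a, COUNT(DISTINCT c), SUM(b) FROM MyTable GROUP BY a

-- rewritten (approximately) into two non-distinct aggregates:
SELECT a, COUNT(c) AS cnt_distinct_c, SUM(s) AS sum_b
FROM (
  SELECT a, c, SUM(b) AS s
  FROM MyTable
  GROUP BY a, c
) t
GROUP BY a
{noformat}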

For stream, a query with distinct aggregates is handled by the SplitAggregateRule in 
[FLINK-12161|https://issues.apache.org/jira/browse/FLINK-12161].

A query with grouping sets (or cube, rollup) will be rewritten into a regular 
aggregate with an expand node.
The expand node duplicates the input data for each simple group,
e.g.:
{noformat}
schema:
MyTable: a: INT, b: BIGINT, c: VARCHAR(32), d: VARCHAR(32)

 Original records:
+-----+-----+-----+-----+
|  a  |  b  |  c  |  d  |
+-----+-----+-----+-----+
|  1  |  1  |  c1 |  d1 |
+-----+-----+-----+-----+
|  1  |  2  |  c1 |  d2 |
+-----+-----+-----+-----+
|  2  |  1  |  c1 |  d1 |
+-----+-----+-----+-----+

SELECT a, c, SUM(b) as b FROM MyTable GROUP BY GROUPING SETS (a, c)

logical plan after expand:
LogicalCalc(expr#0..3=[{inputs}], proj#0..1=[{exprs}], b=[$t3])
  LogicalAggregate(group=[{0, 2, 3}], groups=[[]], b=[SUM($1)])
    LogicalExpand(projects=[{a=[$0], b=[$1], c=[null], $e=[1]}, {a=[null], b=[$1], c=[$2], $e=[2]}])
      LogicalNativeTableScan(table=[[builtin, default, MyTable]])

notes:
'$e = 1' is equivalent to 'group by a'
'$e = 2' is equivalent to 'group by c'

expanded records:
+------+-----+------+-----+
|  a   |  b  |  c   | $e  |
+------+-----+------+-----+-----+-----
|  1   |  1  | null |  1  |     |
+------+-----+------+-----+  records expanded by record1
| null |  1  |  c1  |  2  |     |
+------+-----+------+-----+-----+-----
|  1   |  2  | null |  1  |     |
+------+-----+------+-----+  records expanded by record2
| null |  2  |  c1  |  2  |     |
+------+-----+------+-----+-----+-----
|  2   |  1  | null |  1  |     |
+------+-----+------+-----+  records expanded by record3
| null |  1  |  c1  |  2  |     |
+------+-----+------+-----+-----+-----
{noformat}







--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Apache Flink at Season of Docs

2019-04-15 Thread Konstantin Knauf
Hi everyone,

thanks @Aizhamal Nurmamat kyzy . As we only have one
week left until the application deadline, I went ahead and created a
document for the project ideas [1]. I have added the description for the
"stream processing concepts" as well as the "deployment & operations
documentation" project idea. Please let me know what you think, edit &
comment. We also need descriptions for the other two projects (Table
API/SQL & Flink Internals). @Fabian/@Jark/@Stephan can you chime in?

Any more project ideas?

Best,

Konstantin


[1]
https://docs.google.com/document/d/1Up53jNsLztApn-mP76AB6xWUVGt3nwS9p6xQTiceKXo/edit?usp=sharing



On Fri, Apr 12, 2019 at 6:50 PM Aizhamal Nurmamat kyzy 
wrote:

> Hello everyone,
>
> @Konstantin Knauf  - yes, you are correct.
> Between steps 1 and 2 though, the open source organization, in this case
> Flink, has to be selected by SoD as one of the participating orgs *fingers
> crossed*.
>
> One tip about organizing ideas is that you want to communicate potential
> projects to the tech writers that are applying. Just make sure the scope of
> the project is clear to them. The SoD wants to set up the tech writers for
> success by making sure the work can be done in the allotted time.
>
> Hope it helps.
>
> Aizhamal
>
>
>
> On Fri, Apr 12, 2019 at 7:37 AM Konstantin Knauf 
> wrote:
>
>> Hi all,
>>
>> I read through the SoD documentation again, and now I think it would
>> actually make sense to split (1) up into multiple project ideas. Let me
>> summarize the overall process:
>>
>> 1. We create & publish a list of project ideas, e.g. in a blog post.
>> (This can be any number of ideas.)
>> 2. Potential technical writers look at our list of ideas and send a
>> proposal for a particular project to Google. During that time they can
>> reach out to us for clarification.
>> 3. Google forwards all proposals for our project ideas to us and we send
>> back a prioritized list of proposals, which we would like to accept.
>> 4. Of all these proposals, Google accepts 50 proposals for SoD 2019. Per
>> organization Google will only accept a maximum of two proposals.
>>
>> @Aizhamal Nurmamat kyzy  Please correct me!
>>
>> For me this means we should split this up in a way that each project is
>> a) still relevant in September b) makes sense as a 3 month project. Based
>> on the ideas we have right now these could for example be:
>>
>> (I) Rework/Extract/Improve the documentation of stream processing concepts
>> (II) Improve & extend Apache Flink's documentation for deployment,
>> operations (incl. configuration)
>> (III) Add documentation for Flink internals
>> (IV) Rework Table API / SQL documentation
>>
>> We would then get proposals potentially for all of these topics and could
>> decide which of these proposals we would send back to Google. My feeling
>> is that a technical writer could easily spend three months on any of these
>> projects. What do others think? Any other project ideas?
>>
>> Cheers,
>>
>> Konstantin
>>
>>
>>
>>
>> On Fri, Apr 12, 2019 at 1:47 PM Jark Wu  wrote:
>>
>>> Hi all,
>>>
>>> I'm fine with only preparing the first proposal. I think it's reasonable
>>> because the first proposal is more attractive
>>> and maybe there are not enough Chinese writers. We can focus on one project
>>> to come up with a concrete and
>>> attractive project plan.
>>>
>>> One possible subproject could be rework Table SQL docs.
>>> (1). Improve concepts in Table SQL.
>>> (2). A more detailed introduction of built-in functions; currently we
>>> only
>>> have a simple explanation for each function.
>>>   We should add more descriptions, especially more concrete examples,
>>> and maybe some notes. We can take
>>>   MySQL doc [1] as a reference.
>>> (3). As Flink SQL is evolving rapidly and features from Blink are being
>>> merged, for example, SQL DDL, Hive integration,
>>>   Python Table API, Interactive Programming, SQL optimization and
>>> tuning, etc., we can redesign the doc structure of
>>>   Table SQL from a higher-level perspective.
>>>
>>> Cheers,
>>> Jark
>>>
>>> [1]:
>>>
>>> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_bin
>>>
>>>
>>>
>>> On Fri, 12 Apr 2019 at 18:19, jincheng sun 
>>> wrote:
>>>
>>> > I am honored to have the opportunity to do the organization
>>> > administrator's work a second time!
>>> >
>>> > It seems that one project and multiple projects have their own
>>> advantages.
>>> >
>>> > My understanding is that even if we only have one project, we can still
>>> > have multiple mentors and recruit enough writers.
>>> >
>>> > Fabian Hueske  wrote on Friday, April 12, 2019 at 5:57 PM:
>>> >
>>> > > Yes, I think we would get at most one project accepted.
>>> > > Having all options in a rather generic proposal gives us the most
>>> > > flexibility to decide what to work on once the proposal is accepted.
>>> > > On the other hand, a more concrete proposal might look more
>>> attractive
>>> > for
>>> > > candidates.
>>> > > I'm fine either way, but my gut feeling is that

[jira] [Created] (FLINK-12191) Flink SVGs on "Material" page broken, render incorrectly on Firefox

2019-04-15 Thread Patrick Lucas (JIRA)
Patrick Lucas created FLINK-12191:
-

 Summary: Flink SVGs on "Material" page broken, render incorrectly 
on Firefox
 Key: FLINK-12191
 URL: https://issues.apache.org/jira/browse/FLINK-12191
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Patrick Lucas
Assignee: Patrick Lucas
 Attachments: Screen Shot 2019-04-15 at 09.48.15.png

Like FLINK-11043, the Flink SVGs on the [Material page of the Flink 
website|https://flink.apache.org/material.html] are invalid and do not render 
correctly on Firefox.

I'm not sure if there is an additional source of truth for these images, or if 
the ones hosted on the website are canonical, but I can fix them nonetheless.

I also noticed that one of the squirrels in both {{color_black.svg}} and 
{{color_white.svg}} is missing the eye gradient, which can also be easily fixed.

 !Screen Shot 2019-04-15 at 09.48.15.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12190) Fix NPE thrown by FlinkKinesisConsumerMigrationTest#writeSnapshot

2019-04-15 Thread vinoyang (JIRA)
vinoyang created FLINK-12190:


 Summary: Fix NPE thrown by 
FlinkKinesisConsumerMigrationTest#writeSnapshot
 Key: FLINK-12190
 URL: https://issues.apache.org/jira/browse/FLINK-12190
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis, Tests
Reporter: vinoyang
Assignee: vinoyang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] A more restrictive JIRA workflow

2019-04-15 Thread Robert Metzger
@Andrey: You mention "option 2" two times, I guess one of the two uses of
"option 2" contains a typo?

On Wed, Apr 10, 2019 at 10:33 AM Andrey Zagrebin 
wrote:

> Hi all,
>
> +1 for option 2.
>
> I also do not mind option 2, after 1-2 contributions, any contributor could
> just ask the committer (who merged those contributions) about contributor
> permissions.
>
> Best,
> Andrey
>
> On Wed, Apr 10, 2019 at 3:58 AM Robert Metzger 
> wrote:
>
> > I'm +1 on option 1.
> >
> > On Tue, Apr 9, 2019 at 1:58 AM Timo Walther  wrote:
> >
> > > Hi everyone,
> > >
> > > I'd like to bring up this discussion thread again. In summary, I think
> > > we all agreed on improving the JIRA workflow to move design/consensus
> > > discussions from PRs to the issues first, before implementing them.
> > >
> > > Two options have been proposed:
> > > 1. Only committers can assign people to issues. PRs of unassigned
> issues
> > > are closed automatically.
> > > 2. Committers upgrade assignable users to contributors as an
> > > intermediate step towards committership.
> > >
> > > I would prefer option 1 as some people also mentioned that option 2
> > > requires some standardized processes; otherwise it would be difficult to
> > > communicate why somebody is a contributor and somebody else is not.
> > >
> > > What do you think?
> > >
> > > Regards,
> > > Timo
> > >
> > >
> > > On 18.03.19 at 14:25, Robert Metzger wrote:
> > > > @Fabian: I don't think this is a big problem. Moving away from
> "giving
> > > > everybody contributor permissions" to "giving it to some people" is
> not
> > > > risky.
> > > > I would leave this decision to the committers who are working with a
> > > person.
> > > >
> > > >
> > > > We should bring this discussion to a conclusion and implement the
> > changes
> > > > to JIRA.
> > > >
> > > >
> > > > Nobody has raised any objections to the overall idea.
> > > >
> > > > Points raised:
> > > > 1. We need to update the contribution guide and describe the
> workflow.
> > > > 2. I brought up changing Flinkbot so that it auto-closes PRs without
> > > > somebody assigned in JIRA.
> > > >
> > > > Who wants to work on an update of the contribution guide?
> > > > If there's no volunteers, I'm happy to take care of this.
> > > >
> > > >
> > > > On Fri, Mar 15, 2019 at 9:20 AM Fabian Hueske 
> > wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> I'm not sure about adding an additional stage.
> > > >> Who's going to decide when to "promote" a user to a contributor,
> i.e.,
> > > >> grant assigning permission?
> > > >>
> > > >> Best, Fabian
> > > >>
> > > >> On Thu, Mar 14, 2019 at 13:50, Timo Walther <
> > > >> twal...@apache.org
> > > >>> wrote:
> > > >>> Hi Robert,
> > > >>>
> > > >>> I also like the idea of making every Jira user an "Assignable
> User",
> > > but
> > > >>> restrict assigning a ticket to people with committer permissions.
> > > >>>
> > > >>> Instead of giving contributor permissions to everyone, we could
> have
> > a
> > > >>> more staged approach from user, to contributor, and finally to
> > > committer.
> > > >>>
> > > >>> Once people worked on a couple of JIRA issues, we can make them
> > > >>> contributors.
> > > >>>
> > > >>> What do you think?
> > > >>>
> > > >>> Regards,
> > > >>> Timo
> > > >>>
> > > >>> On 06.03.19 at 12:33, Robert Metzger wrote:
> > >  Hi Tison,
> > >  I also thought about this.
> > >  Making a person a "Contributor" is required for being an
> "Assignable
> > > >>> User",
> > >  so normal Jira accounts can't be assigned to a ticket.
> > > 
> > >  We could make every Jira user an "Assignable User", but restrict
> > > >>> assigning
> > >  a ticket to people with committer permissions.
> > >  There are some other permissions attached to the "Contributor"
> role,
> > > >> such
> > >  as "Closing" and "Editing" (including "Transition", "Logging
> work",
> > > >>> etc.).
> > >  I think we should keep the "Contributor" role, but we could (as you
> > >  propose) make it more restrictive. Maybe "invite only" for people who
> > >  are apparently active in our Jira.
> > > 
> > >  Best,
> > >  Robert
> > > 
> > > 
> > > 
> > >  On Wed, Mar 6, 2019 at 11:02 AM ZiLi Chen 
> > > >> wrote:
> > > > Hi devs,
> > > >
> > > > Just now I find that one who is not a contributor can file issues and
> > > > participate in discussions.
> > > > One who becomes a contributor can additionally assign an issue to a
> > > > person and modify fields of any issue.
> > > >
> > > > For a more restrictive JIRA workflow, maybe we can achieve it by
> > > > granting contributor permissions a bit more restrictively?
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Robert Metzger  wrote on Wednesday, February 27, 2019 at 9:53 PM:
> > > >
> > > >> I like this idea and I would like to try it to see if it solves
> > the
> > > >> problem.
>