[jira] [Created] (FLINK-7661) Add credit field in PartitionRequest message

2017-09-21 Thread zhijiang (JIRA)
zhijiang created FLINK-7661:
---

 Summary: Add credit field in PartitionRequest message
 Key: FLINK-7661
 URL: https://issues.apache.org/jira/browse/FLINK-7661
 Project: Flink
  Issue Type: Sub-task
  Components: Network
Affects Versions: 1.4.0
Reporter: zhijiang
Assignee: zhijiang


Currently the {{PartitionRequest}} message contains the {{ResultPartitionID}}, 
{{queueIndex}}, and {{InputChannelID}} fields.

We will add a new {{credit}} field indicating the initial credit of the 
{{InputChannel}}. This value can be obtained from the {{InputChannel}} directly 
after exclusive buffers have been assigned to it.
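
For illustration, a rough sketch of what the extended message could look like. 
The {{UUID}} stand-ins and the getter below are assumptions for this sketch, 
not the actual Flink {{PartitionRequest}} class:
{code}
import java.io.Serializable;
import java.util.UUID;

// Hypothetical sketch only; field types are placeholders for the real
// ResultPartitionID / InputChannelID classes.
public class PartitionRequest implements Serializable {

    private final UUID resultPartitionId; // stand-in for ResultPartitionID
    private final int queueIndex;         // index of the requested subpartition
    private final UUID inputChannelId;    // stand-in for InputChannelID
    private final int credit;             // new: initial credit of the InputChannel

    public PartitionRequest(
            UUID resultPartitionId, int queueIndex, UUID inputChannelId, int credit) {
        this.resultPartitionId = resultPartitionId;
        this.queueIndex = queueIndex;
        this.inputChannelId = inputChannelId;
        this.credit = credit;
    }

    public int getCredit() {
        return credit;
    }
}
{code}
With this in place, the receiving side can initialize its credit bookkeeping for 
the channel directly from the request.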



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-21 Thread Stephan Ewen
+1

@ Federico: I think Aljoscha is right - Flink only executes Kafka client
code, which is Scala-independent from 0.9 on. Do you still use Kafka 0.8?

On Wed, Sep 20, 2017 at 10:00 PM, Aljoscha Krettek 
wrote:

> Hi Federico,
>
> As far as I know, the Kafka client code has been rewritten in Java for
> version 0.9, meaning there is no more Scala dependency in there. Only the
> server (broker) code still contains Scala but it doesn't matter what Scala
> version a client uses, if any.
>
> Best,
> Aljoscha
>
> On 20. Sep 2017, at 14:32, Federico D'Ambrosio  wrote:
>
> Hi, as far as I know some vendors like Hortonworks still use Kafka_2.10 as
> part of their Hadoop distribution.
> Could the use of a different Scala version cause issues with the Kafka
> connector? I'm asking because we are using HDP 2.6 and we already ran into
> an issue with conflicting Scala versions concerning Kafka (though we were
> using Storm at the time; I still haven't tested the Flink connector in this
> context).
>
> Regards,
> Federico
>
> 2017-09-20 14:19 GMT+02:00 Ted Yu :
>
>> +1
>>
>>  Original message 
>> From: Hai Zhou 
>> Date: 9/20/17 12:44 AM (GMT-08:00)
>> To: Aljoscha Krettek , dev@flink.apache.org, user <
>> u...@flink.apache.org>
>> Subject: Re: [DISCUSS] Dropping Scala 2.10
>>
>> +1
>>
>> > On 19 Sep 2017, at 17:56, Aljoscha Krettek  wrote:
>> >
>> > Hi,
>> >
>> > Talking to some people I get the impression that Scala 2.10 is quite
>> outdated by now. I would like to drop support for Scala 2.10 and my main
>> motivation is that this would allow us to drop our custom Flakka build of
>> Akka that we use because newer Akka versions only support Scala 2.11/2.12
>> and we need a backported feature.
>> >
>> > Are there any concerns about this?
>> >
>> > Best,
>> > Aljoscha
>>
>>
>
>
> --
> Federico D'Ambrosio
>
>
>


[jira] [Created] (FLINK-7662) Remove unnecessary packaged licenses

2017-09-21 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-7662:
---

 Summary: Remove unnecessary packaged licenses
 Key: FLINK-7662
 URL: https://issues.apache.org/jira/browse/FLINK-7662
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.4.0


With the new shading approach, we no longer shade ASM into Flink artifacts, so 
we do not need to package the ASM license into those artifacts any more.

Instead, a shaded ASM artifact already containing a packaged license is used in 
the distribution build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-21 Thread Federico D'Ambrosio
Ok, thank you all for the clarification.

@Stephan: I'm using Kafka 0.10, so I guess the problem I had back then was
actually unrelated to the specific Kafka version.

Federico D'Ambrosio

On 21 Sep 2017 at 16:30, "Stephan Ewen"  wrote:

> +1
>
> @ Federico: I think Aljoscha is right - Flink only executes Kafka client
> code, which is Scala-independent from 0.9 on. Do you still use Kafka 0.8?
>
> On Wed, Sep 20, 2017 at 10:00 PM, Aljoscha Krettek 
> wrote:
>
>> Hi Federico,
>>
>> As far as I know, the Kafka client code has been rewritten in Java for
>> version 0.9, meaning there is no more Scala dependency in there. Only the
>> server (broker) code still contains Scala but it doesn't matter what Scala
>> version a client uses, if any.
>>
>> Best,
>> Aljoscha
>>
>> On 20. Sep 2017, at 14:32, Federico D'Ambrosio 
>> wrote:
>>
>> Hi, as far as I know some vendors like Hortonworks still use Kafka_2.10
>> as part of their Hadoop distribution.
>> Could the use of a different Scala version cause issues with the Kafka
>> connector? I'm asking because we are using HDP 2.6 and we already ran into
>> an issue with conflicting Scala versions concerning Kafka (though we were
>> using Storm at the time; I still haven't tested the Flink connector in
>> this context).
>>
>> Regards,
>> Federico
>>
>> 2017-09-20 14:19 GMT+02:00 Ted Yu :
>>
>>> +1
>>>
>>>  Original message 
>>> From: Hai Zhou 
>>> Date: 9/20/17 12:44 AM (GMT-08:00)
>>> To: Aljoscha Krettek , dev@flink.apache.org, user <
>>> u...@flink.apache.org>
>>> Subject: Re: [DISCUSS] Dropping Scala 2.10
>>>
>>> +1
>>>
>>> > On 19 Sep 2017, at 17:56, Aljoscha Krettek  wrote:
>>> >
>>> > Hi,
>>> >
>>> > Talking to some people I get the impression that Scala 2.10 is quite
>>> outdated by now. I would like to drop support for Scala 2.10 and my main
>>> motivation is that this would allow us to drop our custom Flakka build of
>>> Akka that we use because newer Akka versions only support Scala 2.11/2.12
>>> and we need a backported feature.
>>> >
>>> > Are there any concerns about this?
>>> >
>>> > Best,
>>> > Aljoscha
>>>
>>>
>>
>>
>> --
>> Federico D'Ambrosio
>>
>>
>>
>


[jira] [Created] (FLINK-7663) Allow AbstractRestHandler to handle bad requests

2017-09-21 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7663:


 Summary: Allow AbstractRestHandler to handle bad requests
 Key: FLINK-7663
 URL: https://issues.apache.org/jira/browse/FLINK-7663
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{AbstractRestHandler}} parses the request and tries to generate a 
{{HandlerRequest}}. If this fails, the server currently answers with an internal 
server error. Instead, the {{AbstractRestHandler}} should be able to return a 
{{BAD_REQUEST}} status code. In order to do that, I would like to introduce a 
{{HandlerRequestException}} which can be thrown while creating the 
{{HandlerRequest}}. If this exception is thrown, we return an error message with 
a {{BAD_REQUEST}} {{HttpResponseStatus}}.
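
A minimal sketch of the intended control flow, with a hypothetical 
{{HandlerRequest}} parser and plain strings standing in for HTTP responses; the 
real {{AbstractRestHandler}} API differs:
{code}
// Hypothetical sketch; only the exception-to-status-code mapping is the point.
public class RestHandlerSketch {

    static class HandlerRequestException extends Exception {
        HandlerRequestException(String message) {
            super(message);
        }
    }

    static final class HandlerRequest {
        final String body;

        private HandlerRequest(String body) {
            this.body = body;
        }

        // Throws instead of letting parse failures surface as a 500.
        static HandlerRequest parse(String raw) throws HandlerRequestException {
            if (raw == null || raw.isEmpty()) {
                throw new HandlerRequestException("Malformed request.");
            }
            return new HandlerRequest(raw);
        }
    }

    // Maps parse failures to 400 BAD_REQUEST: the client sent a bad
    // request, so an internal server error (500) is the wrong signal.
    static String handle(String raw) {
        try {
            HandlerRequest request = HandlerRequest.parse(raw);
            return "200 OK: " + request.body;
        } catch (HandlerRequestException e) {
            return "400 BAD_REQUEST: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(handle("listJobs"));
        System.out.println(handle(""));
    }
}
{code}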



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7664) Replace FlinkFutureException by CompletionException

2017-09-21 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7664:


 Summary: Replace FlinkFutureException by CompletionException
 Key: FLINK-7664
 URL: https://issues.apache.org/jira/browse/FLINK-7664
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{FlinkFutureException}} was introduced to fail a {{CompletableFuture}} from 
within a callback. This is, however, better done via the standard 
{{CompletionException}}. Therefore, we should remove {{FlinkFutureException}} 
and use {{CompletionException}} instead.
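
For reference, a small self-contained example of the standard pattern: throwing 
a {{CompletionException}} inside a callback completes the dependent future 
exceptionally with that exception, which is exactly what 
{{FlinkFutureException}} was used for:
{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class CompletionExceptionExample {

    public static void main(String[] args) {
        CompletableFuture<Integer> future = CompletableFuture
            .supplyAsync(() -> 21)
            .thenApply(value -> {
                if (value < 42) {
                    // Throwing a CompletionException inside a callback
                    // completes the dependent future exceptionally with
                    // exactly this exception.
                    throw new CompletionException(
                        new IllegalStateException("value too small: " + value));
                }
                return value;
            });

        // The fallback recovers from the exceptional completion.
        int result = future.exceptionally(t -> {
            System.out.println("failed with: " + t.getCause());
            return -1;
        }).join();

        System.out.println("result: " + result); // prints "result: -1"
    }
}
{code}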



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7665) Use wait/notify in ContinuousFileReaderOperator

2017-09-21 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-7665:
--

 Summary: Use wait/notify in ContinuousFileReaderOperator
 Key: FLINK-7665
 URL: https://issues.apache.org/jira/browse/FLINK-7665
 Project: Flink
  Issue Type: Improvement
  Components: Streaming Connectors
Affects Versions: 1.4.0
Reporter: Ufuk Celebi
Priority: Minor


{{ContinuousFileReaderOperator}} has the following loop to receive input splits:
{code}
synchronized (checkpointLock) {
  if (currentSplit == null) {
    // Try to fetch the next pending split.
    currentSplit = this.pendingSplits.poll();
    if (currentSplit == null) {
      if (this.shouldClose) {
        // No further splits will arrive; shut the reader down.
        isRunning = false;
      } else {
        // Busy-wait: re-check for new splits every 50 ms.
        checkpointLock.wait(50);
      }
      continue;
    }
  }
}
{code}

I think we can replace this timed wait with an indefinite {{wait()}} plus a 
{{notify()}} in {{addSplit}} and {{close}}.
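
A rough sketch of what that could look like, using hypothetical types and 
method names rather than the actual operator code:
{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the proposed coordination; not the actual
// ContinuousFileReaderOperator.
public class SplitQueueSketch {

    private final Object checkpointLock = new Object();
    private final Queue<String> pendingSplits = new ArrayDeque<>();
    private boolean shouldClose;

    // Blocks until a split arrives or close() is called;
    // returns null once the queue is drained and we are closing.
    String nextSplit() throws InterruptedException {
        synchronized (checkpointLock) {
            while (pendingSplits.isEmpty() && !shouldClose) {
                checkpointLock.wait(); // no timeout: woken by notifyAll()
            }
            return pendingSplits.poll();
        }
    }

    void addSplit(String split) {
        synchronized (checkpointLock) {
            pendingSplits.add(split);
            checkpointLock.notifyAll(); // wake the waiting reader
        }
    }

    void close() {
        synchronized (checkpointLock) {
            shouldClose = true;
            checkpointLock.notifyAll(); // let the reader observe shouldClose
        }
    }
}
{code}
The {{while}} guard around {{wait()}} also protects against spurious wakeups.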

If there is a reason to keep the {{wait(50)}}, feel free to close this issue. 
:-)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7666) ContinuousFileReaderOperator swallows chained watermarks

2017-09-21 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-7666:
--

 Summary: ContinuousFileReaderOperator swallows chained watermarks
 Key: FLINK-7666
 URL: https://issues.apache.org/jira/browse/FLINK-7666
 Project: Flink
  Issue Type: Improvement
  Components: Streaming Connectors
Affects Versions: 1.3.2
Reporter: Ufuk Celebi


I use event time and read from a (finite) file. I assign watermarks right after 
the {{ContinuousFileReaderOperator}} with parallelism 1.

{code}
env
  .readFile(new TextInputFormat(...), ...)
  .setParallelism(1)
  .assignTimestampsAndWatermarks(...)
  .setParallelism(1)
  .map()...
{code}

The watermarks I assign never progress through the pipeline.

I can work around this by inserting a {{shuffle()}} after the file reader or by 
starting a new chain at the assigner:
{code}
env
  .readFile(new TextInputFormat(...), ...)
  .setParallelism(1)
  .shuffle() // breaks the chain between reader and assigner
  .assignTimestampsAndWatermarks(...)
  .setParallelism(1)
  .map()...
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)