Sorry Robert and all, pressed Send button too early =(
One of the main reasons to keep the maximum line length at 100 characters (or
120) is to make sure that the code stays readable and understandable; in
Scala it is easy to end up with complicated code crammed into a single line.
- Henry
[1] http://www.scalastyle
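As a hedged illustration of the readability argument (the data and names below are made up for the example, not taken from Flink's code base), here is the same Scala pipeline written as one dense line versus split so every line stays well under 100 characters:

```scala
// Legal, but hard to review as a single dense line:
// val result = data.filter(_.nonEmpty).map(s => (s, s.length)).groupBy(_._2).map { case (k, v) => k -> v.map(_._1) }

object LineLengthExample {
  val data = List("flink", "scala", "spark", "", "storm")

  // The same pipeline, one transformation per line, each under 100 chars:
  val result: Map[Int, List[String]] =
    data
      .filter(_.nonEmpty)              // drop empty strings
      .map(s => (s, s.length))         // pair each word with its length
      .groupBy(_._2)                   // group by length
      .map { case (len, pairs) => len -> pairs.map(_._1) }
}
```

The second form carries exactly the same logic, which is the point of the length rule: the limit forces the structure of the expression to be visible.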
Oh, since we are using tabs, I think they are counted as one character?
On Tuesday, February 17, 2015, Robert Metzger wrote:
> I agree with Stephan that we should remove the scalastyle rule enforcing
> lines of 100 characters length.
>
>
>
> On Mon, Jan 5, 2015 at 10:21 AM, Henry Saputra
> wrote:
Stephan was talking about import statements.
I want to keep the line length at 100 or 120.
Code longer than 100 characters per line needs to be revisited.
On Tuesday, February 17, 2015, Robert Metzger wrote:
> I agree with Stephan that we should remove the scalastyle rule enforcing
> lines of 100 characters length.
+1
Checked signatures, checksums, and the pom. Built from source, ran local examples.
On Tue, Feb 17, 2015 at 11:59 PM, Robert Metzger
wrote:
> +1
>
> I've checked the RC on a HDP 2.2 sandbox (using Flink on YARN). Also ran
> wordcount on it.
> The hadoop1 quickstarts have the correct version set (that was an issue
> with the RC before)
+1
I've checked the RC on a HDP 2.2 sandbox (using Flink on YARN). Also ran
wordcount on it.
The hadoop1 quickstarts have the correct version set (that was an issue
with the RC before)
On Mon, Feb 16, 2015 at 5:59 PM, Fabian Hueske wrote:
> +1 (forgot that earlier)
>
> 2015-02-16 17:03 GMT+01:00
Andra Lungu created FLINK-1576:
--
Summary: Change the examples to be consistent with the other Flink
examples
Key: FLINK-1576
URL: https://issues.apache.org/jira/browse/FLINK-1576
Project: Flink
Robert Metzger created FLINK-1575:
-
Summary:
JobManagerConnectionTest.testResolveUnreachableActorRemoteHost times out on
travis
Key: FLINK-1575
URL: https://issues.apache.org/jira/browse/FLINK-1575
Project: Flink
Fabian Hueske created FLINK-1574:
Summary: Flink fails due to non-initialized RuntimeContext in
CombiningUnilateralSortMerger
Key: FLINK-1574
URL: https://issues.apache.org/jira/browse/FLINK-1574
Project: Flink
Robert Metzger created FLINK-1573:
-
Summary: Add per-job metrics to flink.
Key: FLINK-1573
URL: https://issues.apache.org/jira/browse/FLINK-1573
Project: Flink
Issue Type: Sub-task
Affect
Robert Metzger created FLINK-1572:
-
Summary: Output directories are created before input paths are
checked
Key: FLINK-1572
URL: https://issues.apache.org/jira/browse/FLINK-1572
Project: Flink
I agree with Stephan that we should remove the scalastyle rule enforcing
lines of 100 characters length.
On Mon, Jan 5, 2015 at 10:21 AM, Henry Saputra
wrote:
> @Stephan - sure I could work on it. Been wanting to do it for a while.
> No, it is not the checkstyle issue.
>
> - Henry
>
> On Mon,
I think the way we shaded Guava is a problem for the way IntelliJ uses
Maven (it compiles dependency projects rather than packaging them).
Since we do not apply relocation to our own code for this, it should have no
effect on IDE usability.
On Tue, Feb 17, 2015 at 3:37 PM, Ufuk Celebi wrote:
>
> On 17 Feb 2
On 17 Feb 2015, at 09:40, Stephan Ewen wrote:
> Hi everyone!
>
> We have been time and time again struck by the problem that Hadoop bundles
> many dependencies in certain versions, that conflict either with versions
> of the dependencies we use, or with versions that users use.
>
> The most prominent examples are Guava and Protobuf.
+1
On Tue, Feb 17, 2015 at 1:34 PM, Till Rohrmann wrote:
> +1
>
> On Tue, Feb 17, 2015 at 1:34 PM, Kostas Tzoumas wrote:
>
>> +1
>>
>> On Tue, Feb 17, 2015 at 12:14 PM, Márton Balassi
>> wrote:
>>
>> > When it comes to the current use cases I'm for this separation.
>> > @Ufuk: As Gyula has already pointed out with the current design of
>> > integration it should not be a problem.
Ufuk Celebi created FLINK-1571:
--
Summary: Add failure-case tests for blocking data exchange
Key: FLINK-1571
URL: https://issues.apache.org/jira/browse/FLINK-1571
Project: Flink
Issue Type: Sub-task
Ufuk Celebi created FLINK-1570:
--
Summary: Add failure-case tests for pipelined data exchange
Key: FLINK-1570
URL: https://issues.apache.org/jira/browse/FLINK-1570
Project: Flink
Issue Type: Sub-task
Robert Metzger created FLINK-1569:
-
Summary: Object reuse mode is not working with KeySelector
functions.
Key: FLINK-1569
URL: https://issues.apache.org/jira/browse/FLINK-1569
Project: Flink
+1
On Tue, Feb 17, 2015 at 1:34 PM, Kostas Tzoumas wrote:
> +1
>
> On Tue, Feb 17, 2015 at 12:14 PM, Márton Balassi
> wrote:
>
> > When it comes to the current use cases I'm for this separation.
> > @Ufuk: As Gyula has already pointed out with the current design of
> > integration it should not be a problem.
+1
On Tue, Feb 17, 2015 at 12:14 PM, Márton Balassi
wrote:
> When it comes to the current use cases I'm for this separation.
> @Ufuk: As Gyula has already pointed out with the current design of
> integration it should not be a problem. Even if we submitted programs to
> the wrong clusters it would only cause performance issues.
Ufuk Celebi created FLINK-1568:
--
Summary: Add failure-case tests for data exchange
Key: FLINK-1568
URL: https://issues.apache.org/jira/browse/FLINK-1568
Project: Flink
Issue Type: Improvement
Robert Metzger created FLINK-1567:
-
Summary: Add option to switch between Avro and Kryo serialization
for GenericTypes
Key: FLINK-1567
URL: https://issues.apache.org/jira/browse/FLINK-1567
Project: Flink
When it comes to the current use cases I'm for this separation.
@Ufuk: As Gyula has already pointed out with the current design of
integration it should not be a problem. Even if we submitted programs to
the wrong clusters it would only cause performance issues.
Eventually it would be nice to have
Till Rohrmann created FLINK-1566:
Summary: WindowIntegrationTest fails
Key: FLINK-1566
URL: https://issues.apache.org/jira/browse/FLINK-1566
Project: Flink
Issue Type: Bug
Component
So the current setup is to share results between the two APIs by files, so
I don't see any reason why this couldn't work with the two-cluster setup. It
makes deployment a little trickier, but it is still feasible.
On Tue, Feb 17, 2015 at 11:55 AM, Ufuk Celebi wrote:
> I think this separation reflects the w
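The file-based hand-off described above can be sketched as follows. This is a generic illustration of the idea only; the object and method names are hypothetical and this is not a Flink API:

```scala
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

// Hypothetical sketch: one cluster's job writes its result lines to a
// shared file, and the job on the other cluster reads them back. In a
// real deployment the "shared location" would be a distributed file
// system path both clusters can reach.
object FileHandOff {

  // One side writes its result to the shared location ...
  def writeResult(path: Path, lines: Seq[String]): Unit = {
    Files.write(path, lines.asJava)
    ()
  }

  // ... and the other side reads it back when it starts.
  def readResult(path: Path): Seq[String] =
    Files.readAllLines(path).asScala.toSeq
}
```

This is what makes the two-cluster setup workable: neither side needs a live connection to the other, only access to the same storage.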
Fabian Hueske created FLINK-1565:
Summary: Document object reuse behavior
Key: FLINK-1565
URL: https://issues.apache.org/jira/browse/FLINK-1565
Project: Flink
Issue Type: Improvement
I think this separation reflects the way that Flink is used currently
anyway. I would be in favor of it as well.
- What about the ongoing efforts (I think by Gyula) to combine both the
batch and stream processing APIs? I assume that this would only affect the
performance and wouldn't pose a funda
+1
Let's do this soon to avoid performance issues for streaming.
On Tue, Feb 17, 2015 at 11:39 AM, Fabian Hueske wrote:
> sounds like a good idea to me.
> +1
>
> 2015-02-17 11:28 GMT+01:00 Stephan Ewen :
>
> > Hi everyone!
> >
> > What do you think about making the streaming execution mode of the
> > system explicit?
Stephan Ewen created FLINK-1564:
---
Summary: Make sure BLOB client downloads files only once
Key: FLINK-1564
URL: https://issues.apache.org/jira/browse/FLINK-1564
Project: Flink
Issue Type: Bug
sounds like a good idea to me.
+1
2015-02-17 11:28 GMT+01:00 Stephan Ewen :
> Hi everyone!
>
> What do you think about making the streaming execution mode of the system
> explicit? That means that people start a Flink cluster explicitly in Batch
> mode or in Streaming mode.
>
> The rationale behind this idea is that I am not sure how batch and
> streaming clusters are really shared in a
Hi everyone!
What do you think about making the streaming execution mode of the system
explicit? That means that people start a Flink cluster explicitly in Batch
mode or in Streaming mode.
The rationale behind this idea is that I am not sure how batch and streaming
clusters are really shared in a
Stephan Ewen created FLINK-1563:
---
Summary: NullPointer during state update call
Key: FLINK-1563
URL: https://issues.apache.org/jira/browse/FLINK-1563
Project: Flink
Issue Type: Bug
Co
Stephan Ewen created FLINK-1562:
---
Summary: Introduce retries for fetching data from the BLOB manager
Key: FLINK-1562
URL: https://issues.apache.org/jira/browse/FLINK-1562
Project: Flink
Issue T
Stephan Ewen created FLINK-1561:
---
Summary: Improve build server robustness by not reusing JVMs in
integration tests
Key: FLINK-1561
URL: https://issues.apache.org/jira/browse/FLINK-1561
Project: Flink
Hi everyone!
We have been time and time again struck by the problem that Hadoop bundles
many dependencies in certain versions, that conflict either with versions
of the dependencies we use, or with versions that users use.
The most prominent examples are Guava and Protobuf.
One way to solve this
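The shading approach under discussion is typically done with the maven-shade-plugin's relocation feature. A minimal sketch is below; the shaded package prefix shown is an assumption for illustration, not necessarily the pattern the project settled on:

```xml
<!-- Sketch: rewrite Guava's packages inside the shaded jar so they no
     longer collide with the Guava version Hadoop (or a user) brings in.
     The shadedPattern value here is hypothetical. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.flink.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After relocation, the bundled Guava classes live under a private package name, so whichever Guava version Hadoop or the user puts on the classpath cannot conflict with it.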