Robert Metzger created FLINK-3151:
------------------------------
Summary: YARN kills Flink TM containers due to memory overuse (outside heap/offheap)
Key: FLINK-3151
URL: https://issues.apache.org/jira/browse/FLINK-3151
Project:
Timo Walther created FLINK-3152:
------------------------------
Summary: Support all comparisons for Date type
Key: FLINK-3152
URL: https://issues.apache.org/jira/browse/FLINK-3152
Project: Flink
Issue Type: Improvement
Hi everyone,
Just a reminder, the community vote for the Hadoop Summit Europe 2016 talks
in Dublin is still open until December 15.
There is a very good number of talks around Flink submitted, here are the
ones that mention "flink" in their abstract:
Timo Walther created FLINK-3153:
------------------------------
Summary: Support all comparisons for String type
Key: FLINK-3153
URL: https://issues.apache.org/jira/browse/FLINK-3153
Project: Flink
Issue Type: Improvement
Hi,
after talking to several people and getting some feedback already, I
would like to suggest a new blog post for the project web site about the
Storm compatibility layer.
You can find the draft here:
Robert Metzger created FLINK-3150:
------------------------------
Summary: Make YARN container invocation configurable
Key: FLINK-3150
URL: https://issues.apache.org/jira/browse/FLINK-3150
Project: Flink
Issue Type:
Maximilian Michels created FLINK-3155:
------------------------------
Summary: Update Flink docker version to latest stable Flink version
Key: FLINK-3155
URL: https://issues.apache.org/jira/browse/FLINK-3155
Project: Flink
Matthias,
This is a great blog! I would like to suggest the following:
- Change the title to: "How to run your existing Storm applications on the
  Apache Flink stream processing engine?"
- Fix the few typos:
  - For this reasons -> For these reasons
  - Storm compatibility package which allows users -> Storm
Maximilian Michels created FLINK-3154:
------------------------------
Summary: Update Kryo version from 2.24.0 to 3.0.3
Key: FLINK-3154
URL: https://issues.apache.org/jira/browse/FLINK-3154
Project: Flink
Issue Type:
Hi!
Did you change anything in the POM files, with respect to Guava, or add
another dependency that might transitively pull Guava?
Stephan
On Tue, Dec 8, 2015 at 9:25 PM, Nick Dimiduk wrote:
> Hi there,
>
> I'm attempting to build locally a flink based on release-0.10.0
Hi Matthias,
Thank you for the blog post. You had already shared a first draft with
me. This one looks even better!
I've made some minor comments. +1 to merge if these are addressed.
Cheers,
Max
On Wed, Dec 9, 2015 at 1:20 PM, Matthias J. Sax wrote:
> Just updated the draft
Great, thank you for writing the article.
I like the general idea, but I've found some small typos.
Can you open a pull request against the "flink-web" repo to make reviewing
it easier?
On Wed, Dec 9, 2015 at 11:32 AM, Matthias J. Sax wrote:
> Hi,
>
> after talking to several
Just updated the draft (thanks to Till and Slim for feedback) and opened
a PR.
https://github.com/apache/flink-web/pull/15
@Slim: we discussed the benchmark results beforehand and decided to do
a second blog post later on.
-Matthias
On 12/09/2015 12:14 PM, Slim Baltagi wrote:
> Matthias,
>
>
Hi Stephan,
That was my original understanding, until I realized that I was not using
a parallel socket source. I had a custom source that extended
SourceFunction, which always runs with parallelism = 1. I looked through
the API and found the ParallelSourceFunction interface so I implemented
that
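For context, the distinction described above can be sketched like this (class name and emitted records are illustrative, not from the original thread; Flink API as of the 0.10 line):

```java
import org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Implementing ParallelSourceFunction (a marker interface extending
// SourceFunction) tells Flink the source may run with parallelism > 1.
// A plain SourceFunction is always executed with parallelism = 1.
public class ParallelLineSource implements ParallelSourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceFunction.SourceContext<String> ctx) throws Exception {
        while (running) {
            // Emit records here, e.g. lines read from a socket connection.
            ctx.collect("line");
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
```

With this interface in place, calling setParallelism(n) on the source operator actually spawns n source instances, instead of being capped at 1.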
Thanks Matthias! This is a very nice blog post and reads easily.
On 9 December 2015 at 19:21, Ufuk Celebi wrote:
> Great post! Thanks!
>
> I have also made some comments in the commit.
>
> – Ufuk
>
> > On 09 Dec 2015, at 14:19, Maximilian Michels wrote:
> >
>
Great post! Thanks!
I have also made some comments in the commit.
– Ufuk
> On 09 Dec 2015, at 14:19, Maximilian Michels wrote:
>
> Hi Matthias,
>
> Thank you for the blog post. You had already shared a first draft with
> me. This one looks even better!
>
> I've made some
mvn dependency:tree from flink-dist module does not include any mention of
guava. When I build (mvn clean package -DskipTests) vs master (fc8be1c) I
see the same packaging problem.
On Wed, Dec 9, 2015 at 9:29 AM, Stephan Ewen wrote:
> Usually, no command line magic is needed.
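One way to double-check where Guava stands in a local build (paths and module name are illustrative for a source checkout; adjust to your build output):

```shell
# List any unshaded Guava classes that ended up in the flink-dist fat jar.
jar tf flink-dist/target/flink-dist-*.jar | grep 'com/google/common' \
  || echo "no unshaded Guava classes found"

# Ask Maven which dependency (transitively) pulls Guava into flink-dist.
mvn dependency:tree -pl flink-dist -Dincludes=com.google.guava:guava
```

If the grep finds classes but dependency:tree shows nothing, the classes are coming in through shading/packaging rather than a declared dependency.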
Ufuk Celebi created FLINK-3157:
------------------------------
Summary: Web frontend json files contain author attribution
Key: FLINK-3157
URL: https://issues.apache.org/jira/browse/FLINK-3157
Project: Flink
Issue Type:
I can confirm that Guava is part of the fat jar for the 2.7.0, Scala 2.11
distribution.
I'll look into the issue tomorrow
On Wed, Dec 9, 2015 at 7:58 PM, Nick Dimiduk wrote:
> mvn dependency:tree from flink-dist module does not include any mention of
> guava. When I build
Thanks, I appreciate it.
On Wed, Dec 9, 2015 at 12:50 PM, Robert Metzger wrote:
> I can confirm that guava is part of the fat jar for the 2.7.0, scala 2.11
> distribution.
>
> I'll look into the issue tomorrow
>
> On Wed, Dec 9, 2015 at 7:58 PM, Nick Dimiduk
Hello squirrels,
I have been discussing with the Apache Tinkerpop [1] community regarding an
integration with Flink/Gelly.
You can read our discussion in [2].
Tinkerpop has a graph traversal machine called Gremlin, which supports many
high-level graph processing languages and runs on top of
Hi Stephan,
Here’s a link to the screenshot I tried to attach earlier:
https://drive.google.com/open?id=0B0_jTR8-IvUcMEdjWGFmYXJYS28
It looks to me like the distribution is fairly skewed across the nodes,
even though they’re executing the same pipeline.
Thanks,
Ali
On 2015-12-09, 12:36 PM,
Till Rohrmann created FLINK-3156:
Summary: FlinkKafkaConsumer fails with NPE on notifyCheckpointComplete
Key: FLINK-3156
URL: https://issues.apache.org/jira/browse/FLINK-3156
Project: Flink
I did not. All I did was apply the PR from FLINK-3147. I thought perhaps
there's some command line incantation I'm missing.
On Wed, Dec 9, 2015 at 3:29 AM, Stephan Ewen wrote:
> Hi!
>
> Did you change anything in the POM files, with respect to Guava, or add
> another
Hi!
The parallel socket source looks good.
I think you forgot to attach the screenshot, or the mailing list dropped
the attachment...
Not sure if I can diagnose that without more details. The sources all do
the same. Assuming that the server distributes the data evenly across all
connected