Ah, that is actually a good argument...
On Mon, Nov 24, 2014 at 11:50 PM, Till Rohrmann
wrote:
> The latter version would allow using the apply method in Scala
> without naming it explicitly, whereas in the first case the user would
> have to spell it out.
>
> On Mon, Nov 24, 2014 at 8:56 PM, Fa
Hi Viktor,
I had a look at your branch.
First of all, it looks like very good work! Good code quality, lots of
tests, well documented, nice!
I like the first approach (ds.aggregate(min(1), max(2), count())) much
better than the other one. It basically shows how the result tuple is
constructed.
I a
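The composable style above can be sketched in plain Scala. This is an illustrative sketch only; the names `Agg`, `min`, `max`, and `count` are hypothetical, not Flink's actual aggregation API, but it shows how each aggregation contributes one slot to the result tuple, in the order given:

```scala
// Hypothetical sketch of composable aggregations; not Flink's real API.
object AggSketch {
  // One aggregation = an initial accumulator plus a fold step over rows.
  case class Agg(init: Double, step: (Double, Seq[Double]) => Double)

  def min(field: Int): Agg = Agg(Double.MaxValue, (acc, row) => math.min(acc, row(field)))
  def max(field: Int): Agg = Agg(Double.MinValue, (acc, row) => math.max(acc, row(field)))
  def count(): Agg = Agg(0.0, (acc, _) => acc + 1)

  // The result "tuple" has one slot per aggregation, in call order.
  def aggregate(rows: Seq[Seq[Double]], aggs: Agg*): Seq[Double] =
    aggs.map(a => rows.foldLeft(a.init)(a.step))

  def main(args: Array[String]): Unit = {
    val data = Seq(Seq(1.0, 5.0, 2.0), Seq(3.0, 1.0, 9.0), Seq(2.0, 7.0, 4.0))
    // min of field 1, max of field 2, then the row count
    println(aggregate(data, min(1), max(2), count()))
  }
}
```

Reading the call site left to right directly mirrors the shape of the result, which is the readability argument made above.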
+1 for Marton and the maintenance release.
On Mon, Nov 24, 2014 at 6:52 PM, Henry Saputra wrote:
> +1
>
> Would be good to have a different RM to give feedback on how the existing
> process is working.
>
> Thanks for volunteering.
>
> - Henry
>
> On Mon, Nov 24, 2014 at 1:51 AM, Márton Balassi
> wrote:
>> +
The latter version would allow using the apply method in Scala
without naming it explicitly, whereas in the first case the user would
have to spell it out.
On Mon, Nov 24, 2014 at 8:56 PM, Fabian Hueske wrote:
> I prefer the first option where partitioning (assigning keys to partitions)
> follows
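The Scala `apply` sugar referred to above can be shown with a self-contained snippet (the `KeySelector` name here is made up for illustration): an object whose behaviour lives in `apply` can be invoked like a function, while any other method name must be written out at the call site.

```scala
// Illustration of Scala's apply-method call sugar; KeySelector is a
// hypothetical name, not a Flink type.
object ApplySketch {
  class KeySelector(f: String => Int) {
    def apply(s: String): Int = f(s)   // callable as selector(s)
    def select(s: String): Int = f(s)  // must be spelled out as selector.select(s)
  }

  def main(args: Array[String]): Unit = {
    val selector = new KeySelector(_.length)
    println(selector("flink"))         // sugar: expands to selector.apply("flink")
    println(selector.select("flink"))  // explicit method name
  }
}
```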
I actually prefer the first one as well...
On 24.11.2014 20:56, "Fabian Hueske" wrote:
> I prefer the first option where partitioning (assigning keys to partitions)
> follows key selection.
>
>
> 2014-11-24 19:52 GMT+01:00 Stephan Ewen :
>
> > Hi all!
> >
> > Custom partitioners allow you manual
I prefer the first option where partitioning (assigning keys to partitions)
follows key selection.
2014-11-24 19:52 GMT+01:00 Stephan Ewen :
> Hi all!
>
> > Custom partitioners allow you to manually define the assignment of keys to
> > partitions, for cases that have special constraints.
>
> This is a
Hi all!
Custom partitioners allow you to manually define the assignment of keys to
partitions, for cases that have special constraints.
This is a call for opinions on the syntax for custom partitioners in the
case of Join and CoGroup.
Option 1:
input1
.join(input2)
.where("key1").equalTo("
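Independent of which syntax wins, the underlying contract can be sketched in plain Scala. The trait below is illustrative, not Flink's actual `Partitioner` interface: a custom partitioner manually maps each key to a partition index, for example to pin known hot keys to a dedicated partition.

```scala
// Sketch of the custom-partitioner contract; trait and names are
// illustrative, not Flink's real interface.
object PartitionerSketch {
  trait Partitioner[K] {
    def partition(key: K, numPartitions: Int): Int
  }

  // Example constraint: route "hot" keys to a dedicated partition so they
  // do not skew the others; spread everything else over the rest.
  val hotKeyAware = new Partitioner[String] {
    def partition(key: String, numPartitions: Int): Int =
      if (key.startsWith("hot")) 0
      else 1 + math.floorMod(key.hashCode, numPartitions - 1)
  }

  def main(args: Array[String]): Unit = {
    println(hotKeyAware.partition("hot-user", 4)) // always partition 0
    println(hotKeyAware.partition("regular", 4))  // one of partitions 1..3
  }
}
```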
+1
Would be good to have a different RM to give feedback on how the existing
process is working.
Thanks for volunteering.
- Henry
On Mon, Nov 24, 2014 at 1:51 AM, Márton Balassi
wrote:
> +1 There are a couple of streaming bugfix commits that I'd like to push
> there.
>
> I would also like to volunteer as
Dear all,
we should be able to host and organize that via the Berlin Big Data Center, as
we will use Apache Flink as a platform there and plan to develop an open,
extensible repository of data analysis algorithms in this context anyway.
In this context, I would indeed suggest some form of Githu
In some sense the wiki would be nice, but to me, wiki pages feel like "we
did not find a better place for that"...
On Mon, Nov 24, 2014 at 6:00 PM, Robert Metzger wrote:
> Hi,
>
> great idea!
> Maybe we should use the Wiki for such a list? It would make it easier for
> users to just drop a link t
Hi,
great idea!
Maybe we should use the Wiki for such a list? It would make it easier for
users to just drop a link to a github repo of an algorithm implementation.
On Mon, Nov 24, 2014 at 4:05 PM, Kruse, Sebastian
wrote:
> Hi everyone,
>
> at HPI, we recently had the idea of a projects page
Till Rohrmann created FLINK-1277:
Summary: Support sort order for coGroup inputs
Key: FLINK-1277
URL: https://issues.apache.org/jira/browse/FLINK-1277
Project: Flink
Issue Type: Improvement
Thanks Marton for volunteering!
2014-11-24 15:18 GMT+01:00 Ufuk Celebi :
> @Marton: +1 :)
>
> On Mon, Nov 24, 2014 at 1:57 PM, Stephan Ewen wrote:
>
> > +1
> >
> > I can help assembling a list of commits to cherry-pick from 0.8-SNAPSHOT
> to
> > the 0.7.1 release branch tomorrow or on Wednesday.
Hi everyone,
at HPI, we recently had the idea of a projects page on the Flink web site. Such
a page could present programs that have been implemented on top of Flink, ideally
with a link to a github repository of the respective project's code. This could
bring benefits for both the Flink maintain
@Marton: +1 :)
On Mon, Nov 24, 2014 at 1:57 PM, Stephan Ewen wrote:
> +1
>
> I can help assembling a list of commits to cherry-pick from 0.8-SNAPSHOT to
> the 0.7.1 release branch tomorrow or on Wednesday.
>
> Stephan
>
>
> On Mon, Nov 24, 2014 at 11:37 AM, Robert Metzger
> wrote:
>
> > +1 for
+1
I can help assembling a list of commits to cherry-pick from 0.8-SNAPSHOT to
the 0.7.1 release branch tomorrow or on Wednesday.
Stephan
On Mon, Nov 24, 2014 at 11:37 AM, Robert Metzger
wrote:
> +1 for Marton as a release manager. Let me know if you need any help.
>
> I'm trying to find some
+1 for Marton as a release manager. Let me know if you need any help.
I'm trying to find some time today to collect a list of commits I'd like to
include.
On Mon, Nov 24, 2014 at 10:51 AM, Márton Balassi
wrote:
> +1 There are a couple of streaming bugfix commits that I'd like to push
> there.
>
+1 There are a couple of streaming bugfix commits that I'd like to push
there.
I would also like to volunteer as release manager.
Best,
Marton
On Mon, Nov 24, 2014 at 10:39 AM, Ufuk Celebi wrote:
> Hey all,
>
> I would like to discuss your view on having a 0.7.1 maintenance release.
>
> Altho
Hey all,
I would like to discuss your view on having a 0.7.1 maintenance release.
Although there are no commits in the respective branches (except
documentation updates), I think we already have a set of issues/fixes,
which would be beneficial to have in a release.
I vote to start collecting/che
On Mon, Nov 24, 2014 at 10:20 AM, Stephan Ewen wrote:
> Will the compression codec be inserted in the Netty loops, or before
> that?
>
In the current master, I would say that it makes sense to do it in the
Netty loops during shuffling. The compression would then be totally
transparent to th
It always depends on the data - we need to measure that. From what I have
heard about Hadoop, anywhere from no reduction to a factor of 3-4 may
happen, especially if the records contain repetitive strings...
On 24.11.2014 10:24, "Sebastian Schelter" wrote:
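The "depends on the data" point is easy to check with a few lines of plain Scala using the JDK's `Deflater`. The record format below is made up, and real ratios will differ per dataset; this is a measurement sketch, not a benchmark:

```scala
import java.util.zip.Deflater

// Measurement sketch: deflate a payload of repetitive records and compare
// sizes. The record format is invented; actual ratios depend on the data.
object CompressionSketch {
  def deflate(bytes: Array[Byte]): Array[Byte] = {
    val d = new Deflater()
    d.setInput(bytes)
    d.finish()
    val buf = new Array[Byte](8192)
    val out = scala.collection.mutable.ArrayBuffer.empty[Byte]
    while (!d.finished()) {
      val n = d.deflate(buf)        // returns number of bytes written to buf
      out ++= buf.take(n)
    }
    d.end()
    out.toArray
  }

  def main(args: Array[String]): Unit = {
    val payload = ("user=alice;country=DE;status=ok\n" * 1000).getBytes("UTF-8")
    val compressed = deflate(payload)
    println(f"raw=${payload.length} compressed=${compressed.length} " +
            f"ratio=${payload.length.toDouble / compressed.length}%.1fx")
  }
}
```

Highly repetitive records compress far better than the 3-4x figure, while already-dense binary data may not shrink at all, which is why measuring on real workloads matters.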
What reduction in network traffic can we expect from using compression?
-s
On 11/24/2014 10:20 AM, Stephan Ewen wrote:
> Will the compression codec be inserted in the Netty loops, or
> before that?
>
> In any case, would it make sense to pr
Will the compression codec be inserted in the Netty loops, or before
that?
In any case, would it make sense to prototype this on the current code and
forward port this to the new network stack later? I assume the code would
mostly be similar, especially all the JNI vs. Java considerations and