Perhaps we should have both code paths (Spargel and Spargel inside Graph
API) in the code for a while to enable current Spargel users to migrate
their applications before the breaking change.
This means that you have Spargel inside Graph API as you plan to, but you
do not remove the current Spargel.
+1 makes sense :)
On 7 January 2015 at 09:51, Kostas Tzoumas wrote:
> Perhaps we should have both code paths (Spargel and Spargel inside Graph
> API) in the code for a while to enable current Spargel users to migrate
> their applications before the breaking change.
>
> This means that you have S
+1 as well
On 07.01.2015 at 10:04, "Vasiliki Kalavri" wrote:
> +1 makes sense :)
>
> On 7 January 2015 at 09:51, Kostas Tzoumas wrote:
>
> > Perhaps we should have both code paths (Spargel and Spargel inside Graph
> > API) in the code for a while to enable current Spargel users to migrate
> > thei
Till Rohrmann created FLINK-1363:
Summary: Race condition in
ExecutionVertexCancelTest.testSendCancelAndReceiveFail test
Key: FLINK-1363
URL: https://issues.apache.org/jira/browse/FLINK-1363
Project:
Just FYI, the svnpubsub for the website is currently not working.
This is the respective issue for the website migration:
https://issues.apache.org/jira/browse/INFRA-8915
On Wed, Jan 7, 2015 at 11:40 AM, wrote:
> Author: ktzoumas
> Date: Wed Jan 7 10:40:31 2015
> New Revision: 1650029
>
> URL:
Hi,
I feel we never really talked about this. So, should we open Jira issues
even for very small fixes and then add the ticket number to the commit? Or
should we just commit those small fixes. Right now, I have two small fixes
(one is 4 lines, the other one is two lines) for the ValueTypeInfo and
T
+1
On Wed, Jan 7, 2015 at 10:12 AM, Stephan Ewen wrote:
> +1 as well
> On 07.01.2015 at 10:04, "Vasiliki Kalavri" wrote:
>
> > +1 makes sense :)
> >
> > On 7 January 2015 at 09:51, Kostas Tzoumas wrote:
> >
> > > Perhaps we should have both code paths (Spargel and Spargel inside
> Graph
> > >
Thanks Henry!
Do you know of a good source that gives pointers or examples how to
interact with H2O ?
Stephan
On Sun, Jan 4, 2015 at 7:14 PM, Till Rohrmann wrote:
> The idea to work with H2O sounds really interesting.
>
> In terms of the Mahout DSL this would mean that we have to translate a
We have not exactly defined this so far, but it is a good point to do so.
I personally find it good to have changes associated with an issue, because
it allows you to trace back why the change was done.
To make sure we do not overdo this and impose totally unnecessary overhead,
I would suggest the
Yes, we should have a guide like that somewhere.
On Wed, Jan 7, 2015 at 12:33 PM, Stephan Ewen wrote:
> We have not exactly defined this so far, but it is a good point to do so.
>
> I personally find it good to have changes associated with an issue, because
> it allows you to trace back why the
Gyula Fora created FLINK-1364:
-
Summary: No simple way to group on the whole Tuple/element for
DataSets/Streams
Key: FLINK-1364
URL: https://issues.apache.org/jira/browse/FLINK-1364
Project: Flink
+1
On Wed, Jan 7, 2015 at 12:41 PM, Aljoscha Krettek
wrote:
> Yes, we should have a guide like that somewhere.
>
>
> On Wed, Jan 7, 2015 at 12:33 PM, Stephan Ewen wrote:
>
> > We have not exactly defined this so far, but it is a good point to do so.
> >
> > I personally find it good to have cha
+1 for the guide, thanks for clarifying the issue
On Wed, Jan 7, 2015 at 2:30 PM, Till Rohrmann wrote:
> +1
>
> On Wed, Jan 7, 2015 at 12:41 PM, Aljoscha Krettek
> wrote:
>
> > Yes, we should have a guide like that somewhere.
> >
> >
> > On Wed, Jan 7, 2015 at 12:33 PM, Stephan Ewen wrote:
> >
+1
Perhaps declaring the component should be optional because it leaves
less space for the actual commit message. We should definitely add
this guide to the wiki.
On Wed, Jan 7, 2015 at 2:34 PM, Márton Balassi wrote:
> +1 for the guide, thanks for clarifying the issue
>
> On Wed, Jan 7, 2015 at
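To make the convention concrete, a commit subject combining a JIRA reference with an optional component tag might look like this (issue number and component are made up for illustration):

```
[FLINK-1234] [streaming] Fix window state handling in grouped reduce
```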
Aljoscha Krettek created FLINK-1365:
---
Summary: ValueTypeInfo is not Creating Correct TypeComparator
Key: FLINK-1365
URL: https://issues.apache.org/jira/browse/FLINK-1365
Project: Flink
Issu
Should we put that to an official vote, or wait for people to comment and
(if nobody objects) consider it as agreed on through lazy consensus?
On Wed, Jan 7, 2015 at 2:34 PM, Márton Balassi
wrote:
> +1 for the guide, thanks for clarifying the issue
>
> On Wed, Jan 7, 2015 at 2:30 PM, Till Rohrma
Aljoscha Krettek created FLINK-1366:
---
Summary: TextValueInputFormat ASCII FastPath is never used
Key: FLINK-1366
URL: https://issues.apache.org/jira/browse/FLINK-1366
Project: Flink
Issue T
I would only start a vote if somebody objects.
How about adding this rule to our website, to make it even more official? I
would like to establish a document that contains all the rules we agreed on.
Similarly to the coding guidelines (
http://flink.apache.org/coding_guidelines.html) we could esta
I prefer component declarations; the current best practice comes in handy
when searching through commits. Answering a "when did key selection change
for streaming?" type question, which I just had to answer, would have been a
bit more difficult without it - manageable, though.
In case of streaming it does
I personally like the tags very much. I think the streaming component was
the first to introduce it and it struck me as a very good idea.
+1 to stick with them
On Wed, Jan 7, 2015 at 3:03 PM, Márton Balassi
wrote:
> I prefer component declarations, the current best practice comes in handy
> when
+1 for the guide and JIRA references.
I'd keep the component tags optional though.
As Max said, there is less space to display a meaningful message if a commit
addresses several components, and separating changes into commits by
component does not sound very practical to me.
Also without a clear definition
+1
Let's encourage the use of component tags, I don't see the need for
enforcing it. For commits that affect one component, I expect people will
use it.
On Wed, Jan 7, 2015 at 3:40 PM, Fabian Hueske wrote:
> +1 for the guide and JIRA references.
>
> I'd keep the component tags optional though.
Gyula Fora created FLINK-1367:
-
Summary: Add field aggregations to Streaming Scala api
Key: FLINK-1367
URL: https://issues.apache.org/jira/browse/FLINK-1367
Project: Flink
Issue Type: New Feature
Hi everyone,
as some of you may already know, I have worked on a patch for OpenJDK
that introduces a compiler option for javac which allows to compile and
run programs with Java 8 Lambda Expressions in Flink without loss of
type information and user inconvenience.
I have published my patch t
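As a hedged illustration of the underlying problem (not of Timo's patch itself): with anonymous classes the generic type arguments are recorded in the class file and recoverable via reflection, while the class the JVM synthesizes for a lambda implements only the raw interface. The `Mapper` interface below is a stand-in for Flink's function interfaces, not an actual Flink type.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ErasureDemo {
    interface Mapper<T, R> { R map(T value); }

    public static void main(String[] args) {
        // Anonymous class: the generic supertype survives in the class file,
        // so frameworks can recover String/Integer via reflection.
        Mapper<String, Integer> anon = new Mapper<String, Integer>() {
            public Integer map(String s) { return s.length(); }
        };
        Type anonIface = anon.getClass().getGenericInterfaces()[0];
        System.out.println(anonIface instanceof ParameterizedType);   // true

        // Lambda: the synthesized class implements only the raw interface,
        // so the type arguments are gone at runtime.
        Mapper<String, Integer> lambda = s -> s.length();
        Type lambdaIface = lambda.getClass().getGenericInterfaces()[0];
        System.out.println(lambdaIface instanceof ParameterizedType); // false
    }
}
```

This loss of type information for lambdas is what forces frameworks to ask users for explicit type hints, and what a compiler-side fix can avoid.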
Hi all!
Since the feedback was positive, I added the guidelines to the wiki, with a
disclaimer that this is being refined.
https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+development+guidelines
Stephan
On Wed, Jan 7, 2015 at 4:13 PM, Kostas Tzoumas wrote:
> +1
>
> Let's encour
Amazing work, thank you!
On Wed, Jan 7, 2015 at 4:43 PM, Timo Walther wrote:
> Hi everyone,
>
> as some of you may already know, I have worked on a patch for OpenJDK that
> introduces a compiler option for javac which allows to compile and run
> programs with Java 8 Lambda Expressions in Flink w
Great Timo!
On Wed, Jan 7, 2015 at 4:47 PM, Robert Metzger wrote:
> Amazing work, thank you!
>
> On Wed, Jan 7, 2015 at 4:43 PM, Timo Walther wrote:
>
> > Hi everyone,
> >
> > as some of you may already know, I have worked on a patch for OpenJDK
> that
> > introduces a compiler option for javac
+1
@Stephan: thanks! :-)
On Wed, Jan 7, 2015 at 4:44 PM, Stephan Ewen wrote:
> Hi all!
>
> Since the feedback was positive, I added the guidelines to the wiki, with a
> disclaimer that this is being refined.
>
>
> https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+development+guidel
Very nice :)
On Wed, Jan 7, 2015 at 5:07 PM, Till Rohrmann wrote:
> Great Timo!
>
> On Wed, Jan 7, 2015 at 4:47 PM, Robert Metzger
> wrote:
>
> > Amazing work, thank you!
> >
> > On Wed, Jan 7, 2015 at 4:43 PM, Timo Walther wrote:
> >
> > > Hi everyone,
> > >
> > > as some of you may already k
Yes, great work.
I am subscribed to the OpenJDK compiler-dev list and they are discussing
the patch already actively...
On Wed, Jan 7, 2015 at 5:55 PM, Ufuk Celebi wrote:
> Very nice :)
>
> On Wed, Jan 7, 2015 at 5:07 PM, Till Rohrmann
> wrote:
>
> > Great Timo!
> >
> > On Wed, Jan 7, 2015 at
Nice.
On Wed, Jan 7, 2015 at 5:51 PM, Ufuk Celebi wrote:
> +1
>
> @Stephan: thanks! :-)
>
> On Wed, Jan 7, 2015 at 4:44 PM, Stephan Ewen wrote:
>
>> Hi all!
>>
>> Since the feedback was positive, I added the guidelines to the wiki, with a
>> disclaimer that this is being refined.
>>
>>
>> https
+1
This was much needed :)
On 2015.01.07. at 18:10, "Max Michels" wrote:
> Nice.
>
>
> On Wed, Jan 7, 2015 at 5:51 PM, Ufuk Celebi wrote:
> > +1
> >
> > @Stephan: thanks! :-)
> >
> > On Wed, Jan 7, 2015 at 4:44 PM, Stephan Ewen wrote:
> >
> >> Hi all!
> >>
> >> Since the feedback was positive, I
0xdata (now is called H2O) is developing integration with Spark with
the project called Sparkling Water [1].
It creates new RDD that could connect to H2O cluster to pass the
higher order function to execute in the ML flow.
The easiest way to use H2O is with R binding [2][3] but I think we
would wa
On Wednesday, January 7, 2015, StephanEwen wrote:
> Github user StephanEwen commented on the pull request:
>
> https://github.com/apache/flink/pull/236#issuecomment-69058288
>
> I think we initially need to use the same mechanism for Kryo registration
> as you used for Pojos.
>
> With `r
Please vote on releasing the following candidate as Apache Flink version
0.8.0
This release will be the first major release for Flink as a top level
project.
-
The commit to be voted on is in the branch "release-0.8.0-rc1"
(commit 8c30f6
-1
There is a major issue with the hadoop1 and hadoop2 versions: We made
hadoop2 the default profile, so the 0.8.0 version will pull the hadoop2
dependencies.
We basically need an explicit 0.8.0-hadoop1 version now.
The release candidate contains a 0.8.0 version which has hadoop2 activated
by defa
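A sketch of the profile situation in Maven terms (element names are standard Maven, not copied from Flink's actual pom): whichever profile is marked `activeByDefault` determines which Hadoop dependencies a plain `mvn install` pulls in.

```xml
<!-- Sketch: the activeByDefault profile wins when no -P flag is given,
     so a plain build resolves the hadoop2 dependencies. -->
<profiles>
  <profile>
    <id>hadoop-2</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <!-- hadoop2 dependency versions here -->
  </profile>
  <profile>
    <id>hadoop-1</id>
    <!-- only active with: mvn install -Phadoop-1 -->
    <!-- hadoop1 dependency versions here -->
  </profile>
</profiles>
```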
... I did some more checks:
The hadoop200alpha version has some issues with the hadoop dependency
exclusions. I fixed the issues in this pull request:
https://github.com/apache/flink/pull/268. The dependency exclusions for
hadoop are basically disabled because they are overwritten by the
hadoop200
I also see a warning when building the yarn fat jar.
This is a consequence of the duplicate dependencies.
[WARNING] servlet-api-3.0.20100224.jar, javax.servlet-api-3.0.1.jar,
javax.servlet-3.1.jar, servlet-api-2.5.jar define 42 overlapping classes:
[WARNING] - javax.servlet.http.Cookie
[WARNIN
Gyula Fora created FLINK-1368:
-
Summary: Change memory management settings for Streaming programs
Key: FLINK-1368
URL: https://issues.apache.org/jira/browse/FLINK-1368
Project: Flink
Issue Type:
To all people interested in streaming and streaming algorithms:
Here is a great collection of algorithms and data structures that are
useful for streaming data analysis, both for users and system developers.
https://gist.github.com/debasishg/8172796
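For a taste of what is in that collection, here is a minimal Count-Min Sketch (one of the structures listed) in plain Java; the depth/width values are arbitrary illustration choices, and this is a sketch of the technique, not Flink code.

```java
import java.util.Random;

// Minimal Count-Min Sketch: approximate counts over a stream using
// sub-linear memory. Estimates may overestimate but never underestimate.
public class CountMinSketch {
    private final int depth;
    private final int width;
    private final long[][] counts;
    private final int[] seeds;

    public CountMinSketch(int depth, int width) {
        this.depth = depth;
        this.width = width;
        this.counts = new long[depth][width];
        this.seeds = new int[depth];
        Random rnd = new Random(42);
        for (int i = 0; i < depth; i++) seeds[i] = rnd.nextInt();
    }

    // One independent-ish hash per row, derived from hashCode and a seed.
    private int bucket(int row, Object item) {
        int h = item.hashCode() ^ seeds[row];
        h ^= (h >>> 16);
        return Math.floorMod(h, width);
    }

    public void add(Object item) {
        for (int i = 0; i < depth; i++) counts[i][bucket(i, item)]++;
    }

    // Minimum over rows: collisions only inflate counts, so min is tightest.
    public long estimate(Object item) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < depth; i++) {
            min = Math.min(min, counts[i][bucket(i, item)]);
        }
        return min;
    }

    public static void main(String[] args) {
        CountMinSketch cms = new CountMinSketch(4, 256);
        for (int i = 0; i < 100; i++) cms.add("flink");
        cms.add("spark");
        System.out.println(cms.estimate("flink") >= 100); // true
    }
}
```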
Awesome collection, thanks!
On Wed, Jan 7, 2015 at 11:06 PM, Stephan Ewen wrote:
> To all people interested in streaming and streaming algorithms:
>
> Here is a great collection of algorithms and data structures that are
> useful for streaming data analysis, both for users and system developers.
Pretty nice find indeed. Thanks for sharing Stephan!
Paris
From: Gyula Fóra [gyf...@apache.org]
Sent: Wednesday, January 07, 2015 11:15 PM
To: dev@flink.incubator.apache.org
Subject: Re: Streaming data structures and algorithms
Awesome collection, thanks!
Hi everyone!
It is time we bring the Flink roadmap up to speed with what has happened in
the last months and what further goals, features, and ideas have come up.
The link below leads to a Google Doc that contains an initial set of
suggestions that some of the committers have come up with. Please share
Thanks for sharing, Stephan!
Just tweeted this =)
- Henry
On Wed, Jan 7, 2015 at 2:06 PM, Stephan Ewen wrote:
> To all people interested in streaming and streaming algorithms:
>
> Here is a great collection of algorithms and data structures that are
> useful for streaming data analysis, both fo
Using JAX-RS isn't an issue here for me. The problem is that the JobManager
fields [2] are initialized in the Servlet [1] via the constructor. If I were
to use a JAX-RS endpoint, I wouldn't be able to use those, unless the fields
resided in a configuration file or could be passed around in some other
way.
I added some text about my work on the Logical Query feature.
On Thu, Jan 8, 2015 at 12:42 AM, Stephan Ewen wrote:
> Hi everyone!
>
> It is time we bring the Flink roadmap up to speed with what has happened in
> the last months and what further goals, features, and ideas have come up.
>
> The link bel
Aljoscha Krettek created FLINK-1369:
---
Summary: The Pojo Serializers/Comparators fail when using
Subclasses or Interfaces
Key: FLINK-1369
URL: https://issues.apache.org/jira/browse/FLINK-1369
Project