+1
On Wed, Apr 1, 2015 at 4:58 PM, Henry Saputra
wrote:
> +1
>
> Superbly needed :)
>
> On Wednesday, April 1, 2015, Ufuk Celebi wrote:
>
> > Hey all,
> >
> > I think our documentation has grown to a point where we need to think
> about
> > how to make it more accessible.
> >
> > I would like t
Vasia Kalavri created FLINK-1815:
Summary: Add methods to read and write a Graph as adjacency list
Key: FLINK-1815
URL: https://issues.apache.org/jira/browse/FLINK-1815
Project: Flink
Issue T
+1
On Apr 2, 2015 9:02 AM, "Kostas Tzoumas" wrote:
> +1
>
> On Wed, Apr 1, 2015 at 4:58 PM, Henry Saputra
> wrote:
>
> > +1
> >
> > Superbly needed :)
> >
> > On Wednesday, April 1, 2015, Ufuk Celebi wrote:
> >
> > > Hey all,
> > >
> > > I think our documentation has grown to a point where we n
Till Rohrmann created FLINK-1816:
Summary: DegreesWithExceptionITCase.testGetDegreesInvalidEdgeSrcId
fails with wrong exception
Key: FLINK-1816
URL: https://issues.apache.org/jira/browse/FLINK-1816
Pr
Hi,
I want to cancel a running job from a Java program, but I don't know
how to do it. The Client class
(org.apache.flink.client.program.Client) that is used to submit a job
does not provide a method for this (a "Client.cancel(JobId)" would be nice).
There is also the Scala JobClient classm t
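Just to illustrate the API shape being asked for, here is a self-contained toy sketch. ToyClient, submit(), and status() are invented names for illustration only; this is not Flink's Client class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of the requested API shape: a client that can cancel a
// running job by its id. (Illustration only; not Flink's Client.)
class ToyClient {
    enum JobStatus { RUNNING, CANCELED }

    private final Map<String, JobStatus> jobs = new ConcurrentHashMap<>();

    // Submitting a job returns a job id that can later be used to cancel it.
    String submit(String jobName) {
        String jobId = jobName + "-1";  // toy id scheme
        jobs.put(jobId, JobStatus.RUNNING);
        return jobId;
    }

    // The method the mail asks for: cancel by job id.
    void cancel(String jobId) {
        jobs.computeIfPresent(jobId, (id, s) -> JobStatus.CANCELED);
    }

    JobStatus status(String jobId) {
        return jobs.get(jobId);
    }
}

public class CancelDemo {
    public static void main(String[] args) {
        ToyClient client = new ToyClient();
        String id = client.submit("wordcount");
        System.out.println(client.status(id));  // RUNNING
        client.cancel(id);
        System.out.println(client.status(id));  // CANCELED
    }
}
```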
Fabian Hueske created FLINK-1817:
Summary: ClassLoaderObjectInputStream fails with
ClassNotFoundException for primitive classes
Key: FLINK-1817
URL: https://issues.apache.org/jira/browse/FLINK-1817
Pr
OK, I've added the change as PRs. Nothing fancy. It would still be nice if
someone checked them out locally and made sure that the search results refer
to the correct doc version.
https://github.com/apache/flink/pull/563
https://github.com/apache/flink/pull/564
– Ufuk
Works really nicely.
Two things:
- Formatting issues: http://i.imgur.com/AUy53Oj.png
- I don't like the placement. How about we place it in the navigation bar
at the top? There is enough space.
On Thu, Apr 2, 2015 at 11:49 AM, Ufuk Celebi wrote:
> OK, I've added the change as PRs. Nothing fancy
Hi Matthias,
I think there is no utility method right now to cancel a running job. I'll
file a JIRA for this.
Have a look at how the CLI frontend is cancelling a job:
https://github.com/apache/flink/blob/6b0d40764da9dce2e2d21882e9a03a21c6783ff0/flink-clients/src/main/java/org/apache/flink/client/
On 02 Apr 2015, at 12:08, Maximilian Michels wrote:
> Works really nicely.
>
> Two things:
> - Formatting issues: http://i.imgur.com/AUy53Oj.png
I'll see what can be done about this. But as you see from the commit, it's
essentially just a JavaScript include.
> - I don't like the placement
Thanks for the answer. It works! :)
-Matthias
On 04/02/2015 12:27 PM, Robert Metzger wrote:
> Hi Matthias,
>
> I think there is no utility method right now to cancel a running job. I'll
> file a JIRA for this.
>
> Have a look at how the CLI frontend is cancelling a job:
> https://github.com/apa
Robert Metzger created FLINK-1818:
-
Summary: Provide API to cancel running job
Key: FLINK-1818
URL: https://issues.apache.org/jira/browse/FLINK-1818
Project: Flink
Issue Type: Improvement
I just found out that the execution time limit for the container-based
infra (the one we're using) is 120 minutes ;)
So we have some room left to write more tests ;) (But please don't overdo
it)
On Tue, Mar 31, 2015 at 11:32 AM, Maximilian Michels wrote:
> Very nice. Thanks Robert!
>
> On Mon, M
Fabian Hueske created FLINK-1819:
Summary: Allow access to RuntimeContext from Input and
OutputFormats
Key: FLINK-1819
URL: https://issues.apache.org/jira/browse/FLINK-1819
Project: Flink
Is
Hi,
I have run the following program:
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
List<Tuple1<Long>> l = Arrays.asList(new Tuple1<Long>(1L));
TypeInformation<Tuple1<Long>> t = TypeInfoParser.parse("Tuple1<Long>");
DataSet<Tuple1<Long>> data = env.fromCollection(l, t);
long value = data.count();
System.out.prin
I have a similar issue here:
I would like to run a dataflow up to a particular point and materialize (in
memory) the intermediate result. Is this possible at the moment?
Regards,
Alex
2015-04-02 17:33 GMT+02:00 Felix Neutatz :
> Hi,
>
> I have run the following program:
>
> final ExecutionEnvir
I'll post the checklist along with the preview release candidate (RC0).
I actually wanted to create it today (as announced three days ago) but
apache's git repositories are down :(
I'll try again tomorrow.
On Tue, Mar 31, 2015 at 11:35 AM, Maximilian Michels wrote:
> +1
>
> Would be great if we
Hi Felix,
count() defines a sink through the DiscardingOutputFormat. The error you're
seeing is because the execution of the plan is already triggered within the
count() method. When you call env.execute() again, the plan has been
already cleared from the ExecutionEnvironment and it fails to execu
Hey guys,
As Aljoscha has highlighted earlier the current window join semantics in
the streaming api doesn't follow the changes in the windowing api. More
precisely, we currently only support joins over time windows of equal size
on both streams. The reason for this is that we now take a window of
Hi @all,
I started to work on a compatibility layer to run Storm Topologies on
Flink. I just pushed a first beta:
https://github.com/mjsax/flink/tree/flink-storm-compatibility
Please check it out, and let me know how you like it. In this first
version, I tried to code without changing too many t
In my opinion it should not be handled like print. The idea behind
count()/collect() is that they immediately return the result which can
then be used in further flink operations.
Right now, this is not properly/efficiently implemented but once we
have support for intermediate results on this leve
Hey Matthias,
a Storm compatibility layer sounds really great!
I'll soon take a closer look into the code, but the features you're listing
sound really amazing! Since the code has already testcases included, I'm
open to merging a first stable version and then continue the development of
the featu
Hi Matthias,
Where do you put the code for the Storm compatibility? Under streams
module directory?
- Henry
On Thu, Apr 2, 2015 at 10:31 AM, Matthias J. Sax
wrote:
> Hi @all,
>
> I started to work on a compatibility layer to run Storm Topologies on
> Flink. I just pushed a first beta:
> https:
Hey Henry,
you can check out the files here:
https://github.com/mjsax/flink/tree/flink-storm-compatibility/flink-staging/flink-streaming/flink-storm-compatibility
... so yes, they are located in the flink-streaming directory .. which is a
good place for now.
Once we move flink-streaming out of sta
Hi Matthias,
this is really cool! I especially like that you can use Storm code within a
Flink streaming program :-)
One thing that might be good to do rather soon is to collect all your
commits and put them on top of a fresh forked Flink master branch.
When merging we cannot change the history an
Big +1 for the proposal from Peter and Gyula. I'm really in favor of bringing
the windowing and window join API in sync.
On Thu, Apr 2, 2015 at 6:32 PM, Gyula Fóra wrote:
> Hey guys,
>
> As Aljoscha has highlighted earlier the current window join semantics in
> the streaming api doesn't follow the change
Hey Matthias,
Thanks, this is a really nice contribution. I just scrolled through the
code, but I really like it, and big thanks for the tests for the
examples.
The rebase Fabian suggested would help a lot when merging.
On Thu, Apr 2, 2015 at 9:19 PM, Fabian Hueske wrote:
> Hi Matthias,
>
This sounds amazing :) thanks Matthias!
Tomorrow I will spend some time to look through your work and give some
comments.
Also, I would love to help with this effort, so once we merge an initial
prototype let's open some JIRAs and I will pick some up :)
Gyula
On Thursday, April 2, 2015, Márton Ba
That’s pretty nice Matthias, we could use a compositional API in streaming that
many people are familiar with.
I can also help in some parts; I see some issues we already encountered while
creating the SAMOA adapter (e.g. dealing with cycles in the topology). Thanks
again for initiating this!
P
Felix Neutatz created FLINK-1820:
Summary: Bug in DoubleParser and FloatParser - empty String is not
casted to 0
Key: FLINK-1820
URL: https://issues.apache.org/jira/browse/FLINK-1820
Project: Flink
Hi to all,
I was trying to compile Flink 0.9 skipping test compilation
(-Dmaven.test.skip=true) but this is not possible because there are
projects like flink-test-utils (for example) that require test classes at
compile scope. Wouldn't it be better to keep the test source files in the
test folder
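For the record, standard Maven behavior already distinguishes two flags here: -DskipTests compiles test sources but skips running them, while -Dmaven.test.skip=true skips compiling them entirely, which is what breaks modules that depend on other modules' test classes (e.g. via test-jar artifacts). A build that keeps those dependencies resolvable could look like:

```shell
# Compile everything, including test sources, but do not run the tests.
# Test classes stay available to modules such as flink-test-utils.
mvn clean install -DskipTests

# By contrast, the following skips test compilation entirely and breaks
# modules that depend on other modules' test classes:
# mvn clean install -Dmaven.test.skip=true
```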
Now I made my fork (https://github.com/fpompermaier/flink) but when I run
the application I get this error:
java.io.NotSerializableException: org.apache.hadoop.hbase.client.Put
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
at java.io.ObjectOutputStream.writeObject(Objec
If Put is not Serializable it cannot be serialized and shipped.
Is it possible to make that field transient and initialize Put in configure()?
From: Flavio Pompermaier
Sent: Friday, 3. April, 2015 01:42
To: dev@flink.apache.org
Now I made my fork (https://github.com/fpompermai
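The transient-field pattern suggested above can be demonstrated with plain Java serialization. Mutator and SinkFunction below are invented stand-ins for HBase's Put and a Flink function; this is not HBase or Flink code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for a non-serializable dependency like HBase's Put.
class Mutator {
    void apply(String row) { /* ... */ }
}

// Marking the field transient excludes it from serialization;
// it is then re-created after the object has been shipped.
class SinkFunction implements Serializable {
    private transient Mutator mutator;

    // Plays the role of configure(): called again after deserialization.
    void configure() {
        mutator = new Mutator();
    }

    boolean isReady() {
        return mutator != null;
    }
}

public class TransientFieldDemo {
    public static void main(String[] args) throws Exception {
        SinkFunction fn = new SinkFunction();
        fn.configure();

        // Simulate shipping the function to a worker.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(fn);  // works: the transient mutator is skipped
        oos.flush();
        SinkFunction shipped = (SinkFunction) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();

        System.out.println(shipped.isReady());  // false: field was not shipped
        shipped.configure();                    // re-initialize on the "worker"
        System.out.println(shipped.isReady());  // true
    }
}
```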
Which field? The Tuple2? I use it with Flink 0.8.1 without errors.
On Apr 3, 2015 2:27 AM, wrote:
> If Put is not Serializable it cannot be serialized and shipped.
>
> Is it possible to make that field transient and initialize Put in
> configure()?
>
>
>
>
>
>
> From: Flavio Pompermaier
> Sent: Fri
Or you could define it like this:
stream_A = a.window(...)
stream_B = b.window(...)
stream_A.join(stream_B).where().equals().with()
So a join would just be a join of two WindowedDataStreams. This would
neatly move the windowing stuff into one place.
On Thu, Apr 2, 2015 at 9:54 PM, Márton Balass