Yes, I agree that the Avro serializer should be available by default. Avro
is a typical example of a type that should work out of the box, given that
we support Avro file formats.
Let me summarize how I understood that suggestion:
- We make Avro available by default by registering a default serializer
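For illustration only, a default registration along these lines could look roughly like the sketch below. It assumes the Kryo hooks on Flink's ExecutionConfig (addDefaultKryoSerializer) and hand-rolls a GenericRecord serializer on top of Avro's GenericDatumWriter/GenericDatumReader; this is a sketch of the idea, not the implementation from the pull request.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.Serializer;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class AvroByDefaultSketch {

        // A simplistic Kryo serializer for Avro GenericRecord: the schema goes in as
        // a JSON string, the record as Avro binary. Re-writing the schema per record
        // is wasteful, but it shows the shape of a serializer shipped by default.
        public static class GenericRecordSerializer extends Serializer<GenericRecord> {
            @Override
            public void write(Kryo kryo, Output output, GenericRecord record) {
                try {
                    output.writeString(record.getSchema().toString());
                    ByteArrayOutputStream bos = new ByteArrayOutputStream();
                    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(bos, null);
                    new GenericDatumWriter<GenericRecord>(record.getSchema()).write(record, encoder);
                    encoder.flush();
                    byte[] bytes = bos.toByteArray();
                    output.writeInt(bytes.length, true);
                    output.writeBytes(bytes);
                } catch (IOException e) {
                    throw new RuntimeException("Avro serialization failed", e);
                }
            }

            @Override
            public GenericRecord read(Kryo kryo, Input input, Class<GenericRecord> type) {
                try {
                    Schema schema = new Schema.Parser().parse(input.readString());
                    byte[] bytes = input.readBytes(input.readInt(true));
                    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
                    return new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
                } catch (IOException e) {
                    throw new RuntimeException("Avro deserialization failed", e);
                }
            }
        }

        public static void main(String[] args) {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // "Available by default" would mean Flink performs a registration like this
            // one internally, so users never have to do it themselves.
            env.getConfig().addDefaultKryoSerializer(GenericRecord.class, GenericRecordSerializer.class);
        }
    }

Registering against the GenericRecord base type means any generic Avro record falls back to this serializer without per-job registration.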
Hi,
thank you for putting our discussion to the mailing list. This is indeed
where such discussions belong. For the others, we started discussing here:
https://github.com/apache/flink/pull/304
I think there is one additional approach, which is probably close to (1):
We only register those serializers
Paris Carbone created FLINK-1421:
Summary: Implement a SAMOA Adapter for Flink Streaming
Key: FLINK-1421
URL: https://issues.apache.org/jira/browse/FLINK-1421
Project: Flink
Issue Type: New Feature
Hi all!
We have various pending pull requests that add support for certain types by
adding extra Kryo serializers.
I think we need to decide how we want to handle the support for extra
types, because more are certain to come.
As I understand it, we have three broad options:
(1)
Add as many se
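To make concrete what such a pull request typically adds, here is a hedged sketch of a hand-written Kryo serializer for a made-up value type, plus the per-job registration a user would otherwise write; the registerTypeWithKryoSerializer hook on ExecutionConfig is assumed, Vector2 is purely illustrative, and Kryo 2.x Serializer signatures are used.

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.Serializer;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class ManualRegistrationSketch {

        // A made-up value type that Flink's built-in serializers do not treat specially.
        public static class Vector2 {
            public final double x, y;
            public Vector2(double x, double y) { this.x = x; this.y = y; }
        }

        // The kind of "extra serializer" the pending pull requests add: a small
        // Kryo Serializer that writes the fields explicitly.
        public static class Vector2Serializer extends Serializer<Vector2> {
            @Override
            public void write(Kryo kryo, Output output, Vector2 v) {
                output.writeDouble(v.x);
                output.writeDouble(v.y);
            }

            @Override
            public Vector2 read(Kryo kryo, Input input, Class<Vector2> type) {
                return new Vector2(input.readDouble(), input.readDouble());
            }
        }

        public static void main(String[] args) {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // Without built-in support, every job has to register the serializer itself.
            env.getConfig().registerTypeWithKryoSerializer(Vector2.class, Vector2Serializer.class);
        }
    }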
Henry Saputra created FLINK-1420:
Summary: Small cleanup on code after 0.8 release
Key: FLINK-1420
URL: https://issues.apache.org/jira/browse/FLINK-1420
Project: Flink
Issue Type: Bug
Hi everyone,
I started a wiki page about this:
https://cwiki.apache.org/confluence/display/FLINK/Flink+Roadmap
If you are working on one of these features, could you insert the
corresponding JIRA ticket and expand the description if you think it's not
informative enough?
I saw that there is a st
Robert helped me out big time with the process.
The release is out, in 24 hours it will be synced to all mirrors. [1]
[1] https://dist.apache.org/repos/dist/release/flink/
On Mon, Jan 19, 2015 at 12:22 PM, Till Rohrmann
wrote:
> Great Marton,
>
> thanks for being our release manager and all your efforts :-)
Excellent! :-)
Have a great time and enjoy the snow!
2015-01-19 19:17 GMT+01:00 Aljoscha Krettek :
> Yes, Java support is working in my head.
>
> P. S. Sorry for my slow reaction times. I'm in my winter holidays. 😀
> On Jan 19, 2015 5:37 PM, "Kostas Tzoumas" wrote:
>
> > I think the plan is to add this for both Scala and Java (starting with
> > Scala)
Yes, Java support is working in my head.
P. S. Sorry for my slow reaction times. I'm in my winter holidays. 😀
On Jan 19, 2015 5:37 PM, "Kostas Tzoumas" wrote:
> I think the plan is to add this for both Scala and Java (starting with
> Scala)
>
> On Mon, Jan 19, 2015 at 1:33 AM, Fabian Hueske wrote:
I think the plan is to add this for both Scala and Java (starting with
Scala)
On Mon, Jan 19, 2015 at 1:33 AM, Fabian Hueske wrote:
> This is great!
> Will this be exclusive for the Scala API or are we adding this (or similar)
> functionality to the Java API as well?
>
> 2015-01-16 17:30 GMT+01:00 Stephan Ewen :
Great Marton,
thanks for being our release manager and all your efforts :-)
On Mon, Jan 19, 2015 at 12:06 PM, Márton Balassi
wrote:
> Maven dependencies are already out. [1]
> Dist publish is unfortunately blocked by infra, having fun with the first
> top level release. :) [2]
>
> [1] http://search.maven.org/#search%7Cga%7C1%7Cflink
Maven dependencies are already out. [1]
Dist publish is unfortunately blocked by infra, having fun with the first
top level release. :) [2]
[1] http://search.maven.org/#search%7Cga%7C1%7Cflink
[2] https://issues.apache.org/jira/browse/INFRA-9039
On Sun, Jan 18, 2015 at 6:45 PM, Henry Saputra wrote:
This is a difficult question.
A program might also later refer to some intermediate data set that would
already have been computed if the sinks were executed together with the
count() call, and would then need to be computed again.
Also, what do we do with sinks that are not connected with the collected or
counted data sets?
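As a small illustration of the concern above, assuming the eager count()/collect() semantics under discussion (not a statement about current behaviour) and the standard DataSet API calls:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class RecomputationSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Stand-in for an expensive intermediate result.
            DataSet<Integer> intermediate = env.fromElements(1, 2, 3, 4, 5)
                    .filter(i -> i % 2 == 1);

            // If count() triggers its own execution, the pipeline up to
            // 'intermediate' runs here (execution #1).
            long n = intermediate.count();

            // The program later refers to the same intermediate data set again.
            // Without caching the earlier result, this sink forces the pipeline
            // to be computed a second time (execution #2).
            intermediate.writeAsText("/tmp/odd-numbers");
            env.execute();

            System.out.println("count = " + n);
        }
    }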
I agree with Ufuk that it depends on how much both subgraphs and also
future subgraphs overlap. It is conceivable that the user will reuse
subgraphs of an already computed data sink after calling collect(). Then
we would also have to re-execute parts of the dataflow graph. I guess we
easily find e
I think this question depends on how much the two subgraphs overlap. But in
general, I agree that the first approach seems more desirable from the
runtime view (multiple consumers at the branch point).
On Mon, Jan 19, 2015 at 10:59 AM, Robert Metzger wrote:
> I would also execute the sinks immediately.
Great!
Devan M.S. | Research Associate | Cyber Security | AMRITA VISHWA
VIDYAPEETHAM | Amritapuri | Cell +919946535290 |
On Mon, Jan 19, 2015 at 3:25 PM, Till Rohrmann wrote:
> I agree that this looks awesome. I'm looking forward to writing new jobs
> with it.
>
> On Mon, Jan 19, 2015 at 10:33
I would also execute the sinks immediately. I think it's a corner case
because the sinks are usually the last thing in a plan and all print() or
collect() statements are earlier in the plan.
print() should go to the client command line, yes.
On Mon, Jan 19, 2015 at 1:42 AM, Stephan Ewen wrote:
>
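For reference, a hedged sketch of the program shape described here: print() sits earlier in the plan and, under the proposal, executes immediately with its output shown on the client, while the regular sink comes last and runs with execute().

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class TypicalPlanShape {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<String> words = env.fromElements("to", "be", "or", "not", "to", "be");

            // print()/collect() calls tend to appear earlier, for inspection; under the
            // proposal they execute immediately and the output goes to the client.
            words.print();

            // The "real" sinks usually come last and run with the final execute(),
            // which is why executing them immediately is mostly a corner case.
            words.writeAsText("/tmp/words");
            env.execute();
        }
    }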
I agree that this looks awesome. I'm looking forward to writing new jobs
with it.
On Mon, Jan 19, 2015 at 10:33 AM, Fabian Hueske wrote:
> This is great!
> Will this be exclusive for the Scala API or are we adding this (or similar)
> functionality to the Java API as well?
>
> 2015-01-16 17:30 GMT+01:00 Stephan Ewen :
Chesnay Schepler created FLINK-1419:
---
Summary: DistributedCache doesn't preserve files for subsequent
operations
Key: FLINK-1419
URL: https://issues.apache.org/jira/browse/FLINK-1419
Project: Flink
This is great!
Will this be exclusive for the Scala API or are we adding this (or similar)
functionality to the Java API as well?
2015-01-16 17:30 GMT+01:00 Stephan Ewen :
> Very exciting!
>
> This looks amazing. It almost looks like half a SQL interface ;-)
>
> On Fri, Jan 16, 2015 at 11:04 AM,