Gyula Fora created FLINK-1280:
---
Summary: Rework StreamInvokables to use the PactDriver interface
for better future integration
Key: FLINK-1280
URL: https://issues.apache.org/jira/browse/FLINK-1280
Project: Flink
Gyula Fora created FLINK-1279:
---
Summary: Change default partitioning setting for low parallelism
stream sources from forward to distribute
Key: FLINK-1279
URL: https://issues.apache.org/jira/browse/FLINK-1279
+1
Let's do a wiki page to get things up quickly, then either provide links
or migrate to the website once it settles.
- Henry
On Tue, Nov 25, 2014 at 6:34 AM, Kostas Tzoumas wrote:
> Very nice idea!
>
> How about starting with a wiki page and move/mirror to the website once
> some content is there?
Hi,
if you really want to add compression on the data path, I would
encourage you to choose something as lightweight as possible. 10 GBit
Ethernet is becoming pretty much a commodity these days in the server
space, and it is not easy to saturate such a link even without compression.
Snappy is n
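For illustration, blob-style compression of one serialized buffer with the
snappy-java binding (a sketch; the library choice is an assumption, since the
thread only names Snappy as a candidate codec):
[CODE]
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import org.xerial.snappy.Snappy;

public class BufferCompression {
    public static void main(String[] args) throws IOException {
        // Stand-in for a serialized network buffer.
        byte[] buffer = "some serialized records, some serialized records"
                .getBytes(StandardCharsets.UTF_8);

        byte[] compressed = Snappy.compress(buffer);
        byte[] restored = Snappy.uncompress(compressed);

        System.out.println(buffer.length + " -> " + compressed.length + " bytes");
        System.out.println("round-trip ok: " + Arrays.equals(buffer, restored));
    }
}
[/CODE]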
Thanks to Aljoscha and Stefano for pointing out the flaw.
We corrected the issue as follows:
[CODE]
import org.apache.flink.api.java.tuple.Tuple4;
import org.apache.flink.util.Collector;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
imp
I would start with a simple compression of network buffers as a blob.
At some point, Flink's internal data layout may become columnar, which
should also help the blob-style compression, because more similar strings
will be within one window...
On Tue, Nov 25, 2014 at 11:26 AM, Viktor Rosenfeld <
Very quickly: it seems you are trying to sum on Strings.
Caused by: org.apache.flink.api.java.aggregation.UnsupportedAggregationTypeException:
The type java.lang.String has currently not supported for built-in sum aggregations.
Check your tuple types and make sure that you are not summing on a String field.
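For reference, a minimal sketch of the fix, with a hypothetical
Tuple3<String, Integer, Double> layout: point the aggregation at a numeric
field position instead of the String at position 0.
[CODE]
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.aggregation.Aggregations;
import org.apache.flink.api.java.tuple.Tuple3;

public class SumOnNumericField {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple3<String, Integer, Double>> data = env.fromElements(
                new Tuple3<String, Integer, Double>("a", 1, 0.5),
                new Tuple3<String, Integer, Double>("b", 2, 1.5));

        // Fine: field 1 is an Integer.
        data.aggregate(Aggregations.SUM, 1).print();

        // This is what raises UnsupportedAggregationTypeException,
        // because field 0 is a String:
        // data.aggregate(Aggregations.SUM, 0);

        env.execute(); // needed on 0.7, where print() only registers a sink
    }
}
[/CODE]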
Hello all,
We are using Flink 0.7 and trying to read a large JSON file, reading some
fields into a Flink (3-tuple-based) DataSet, and then performing some operations.
We encountered the following runtime error:
[QUOTE]
Error: The main method caused an error.
org.apache.flink.client.program.Pro
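The quoted error is cut off above, so here is only a sketch of the general
setup: newline-delimited JSON read as text and mapped into 3-tuples with
Jackson. The input path and field names are hypothetical.
[CODE]
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple3;

public class JsonToTuples {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple3<String, Long, Double>> tuples = env
                .readTextFile("hdfs:///path/to/input.json") // hypothetical path
                .map(new MapFunction<String, Tuple3<String, Long, Double>>() {
                    // Created lazily on the worker to avoid serializing it
                    // with the closure.
                    private transient ObjectMapper mapper;

                    @Override
                    public Tuple3<String, Long, Double> map(String line) throws Exception {
                        if (mapper == null) {
                            mapper = new ObjectMapper();
                        }
                        JsonNode node = mapper.readTree(line);
                        return new Tuple3<String, Long, Double>(
                                node.get("id").asText(),     // hypothetical fields
                                node.get("count").asLong(),
                                node.get("score").asDouble());
                    }
                });

        tuples.print();
        env.execute(); // needed on 0.7, where print() only registers a sink
    }
}
[/CODE]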
Sean, which API are you referring to? I am actually looking for a
similar API for memory optimization but wasn't able to find it. JavaDoubleRDD
doesn't serve the purpose. I am looking for an object-to-double sort of primitive map.
Koloboke is the fastest, and speeds may improve further.
Maybe you can ask the library author to implement some of the features you
want, as he invites exactly such requests. See:
https://github.com/OpenHFT/Koloboke/wiki/Koloboke:-roll-the-collection-implementation-with-features-you-need
FWIW I've been happy with Carrot HPPC in Java:
https://github.com/carrotsearch/hppc
On Tue, Nov 25, 2014 at 3:24 PM, sirinath wrote:
> There is also https://github.com/OpenHFT/Koloboke
>
> But I feel Flink can have its own collections which are more optimized for
> Flink use cases. You can bench
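For the object-to-double case asked about earlier in the thread, HPPC usage
looks roughly like this (a sketch; in HPPC releases of this era the class is
ObjectDoubleOpenHashMap, later renamed ObjectDoubleHashMap):
[CODE]
import com.carrotsearch.hppc.ObjectDoubleOpenHashMap;

public class PrimitiveMapExample {
    public static void main(String[] args) {
        // Values are stored as primitive doubles, so no Double boxing.
        ObjectDoubleOpenHashMap<String> scores =
                new ObjectDoubleOpenHashMap<String>();
        scores.put("a", 1.5);
        scores.addTo("a", 2.0);              // in-place accumulation
        System.out.println(scores.get("a")); // 3.5; missing keys return 0.0
    }
}
[/CODE]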
There is also https://github.com/OpenHFT/Koloboke
But I feel Flink can have its own collections that are more optimized for
Flink use cases. You can benchmark and see what works best.
Stephan Ewen created FLINK-1278:
---
Summary: Remove the Record special code paths
Key: FLINK-1278
URL: https://issues.apache.org/jira/browse/FLINK-1278
Project: Flink
Issue Type: Bug
Co
Very nice idea!
How about starting with a wiki page and move/mirror to the website once
some content is there?
Asking people to push their stuff to one GitHub repository will probably
not work, IMO.
On Mon, Nov 24, 2014 at 6:23 PM, Markl, Volker, Prof. Dr. <
volker.ma...@tu-berlin.de> wrote:
> De
+1, thanks Marton!
On Mon, Nov 24, 2014 at 11:51 PM, Till Rohrmann
wrote:
> +1 for Marton and the maintenance release.
>
> On Mon, Nov 24, 2014 at 6:52 PM, Henry Saputra
> wrote:
> > +1
> >
> > Would be good to have a different RM give feedback on how the
> > existing process is working.
> >
> > Thanks
Hi,
A codec like Snappy would work on an entire network buffer as one big blob,
right? I was more thinking along the lines of compressing individual tuple
fields by treating them as columns, e.g., using frame-of-reference encoding
and bit packing. Compression on tuple fields should yield much bet
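To make that concrete, a toy frame-of-reference pass over one integer column:
store the minimum once and keep only small offsets, which can then be
bit-packed into far fewer than 32 bits each (illustrative only, not Flink code):
[CODE]
public class FrameOfReference {
    public static void main(String[] args) {
        int[] column = {1003, 1001, 1007, 1002};

        // The "frame of reference" is the column minimum.
        int reference = Integer.MAX_VALUE;
        for (int v : column) {
            reference = Math.min(reference, v);
        }

        // Encode each value as a small non-negative offset.
        int[] offsets = new int[column.length];
        int maxOffset = 0;
        for (int i = 0; i < column.length; i++) {
            offsets[i] = column[i] - reference;
            maxOffset = Math.max(maxOffset, offsets[i]);
        }

        // Bits needed per offset instead of 32 bits per raw value.
        int bits = 32 - Integer.numberOfLeadingZeros(Math.max(maxOffset, 1));
        System.out.println("reference=" + reference + ", bits/value=" + bits);
        // Decoding is just: column[i] = reference + offsets[i]
    }
}
[/CODE]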