Hi again,
I implemented the scenario with the combiner and answered my 3rd question
myself: the combiners start after the mappers have finished, so the reducer
will not start processing partial results until the mappers are completely
done.
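For later readers of this thread, here is a minimal sketch of the combine-then-reduce pattern discussed above. The data, class name, and summing logic are illustrative toy examples (not the actual job from this thread); it assumes a Flink DataSet setup where `combineGroup` is available:

```java
import java.util.List;

import org.apache.flink.api.common.functions.GroupCombineFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class PartialResultSketch {

    static List<Tuple2<String, Integer>> run() throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<String, Integer>> data = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3));

        // Combiner: emits a partial sum per key on each parallel instance.
        // It runs on the producing side, but the final reduce only starts
        // processing once the producing tasks have finished.
        DataSet<Tuple2<String, Integer>> partial = data
                .groupBy(0)
                .combineGroup(new GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() {
                    @Override
                    public void combine(Iterable<Tuple2<String, Integer>> in,
                                        Collector<Tuple2<String, Integer>> out) {
                        String key = null;
                        int sum = 0;
                        for (Tuple2<String, Integer> t : in) {
                            key = t.f0;
                            sum += t.f1;
                        }
                        out.collect(Tuple2.of(key, sum));
                    }
                });

        // Final reduce over the partial results.
        return partial
                .groupBy(0)
                .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))
                .collect();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```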
Regards
Robert
From: Paschek, Robert
Hi Mailing List,
following up on my questions (and your answers!) in this thread
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Performance-issues-with-GroupBy-td8130.html
I have eliminated the ArrayList in my collect methods. Additionally, I want to
emit partial results. My mapper
eGroup` that might also improve the performance.
> Also, if you are using a `reduce`, then you can try calling
> `.setCombineHint(CombineHint.HASH)` after the reduce. (The combine
> hint is a relatively new feature, so you need the current master for
> this.)
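For anyone reading this later, a minimal sketch of applying that hint (toy word-count data with illustrative names; it assumes a Flink version where `CombineHint` is available, i.e. the then-current master as noted above):

```java
import java.util.List;

import org.apache.flink.api.common.operators.base.ReduceOperatorBase.CombineHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class CombineHintSketch {

    static List<Tuple2<String, Integer>> run() throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<String, Integer>> words = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("b", 1), Tuple2.of("a", 1));

        return words
                .groupBy(0)
                .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))
                // Ask for a hash-based combine strategy instead of sort-based.
                .setCombineHint(CombineHint.HASH)
                .collect();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```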
>
> B
M, Chesnay Schepler
<ches...@apache.org> wrote:
Within the mapper you cannot access the parallelism of the following or the
preceding operation.
On 20.06.2016 15:56, Paschek, Robert wrote:
Hi Mailing list,
using a RichMapPartitionFunction, I can access the total number m of parallel
instances of this mapper in my job with
int m = getRuntimeContext().getNumberOfParallelSubtasks();
I think that would be - in general - the total number of CPU cores used by
Apache Flink across the cluster.
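A small self-contained sketch of what that call returns, per Chesnay's answer above: it reports only the parallelism of the operator it runs in, not that of up- or downstream operators. The class name and toy data are illustrative:

```java
import java.util.List;

import org.apache.flink.api.common.functions.RichMapPartitionFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.util.Collector;

public class SubtaskInfoSketch {

    static List<String> run() throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);

        return env.fromElements(1, 2, 3, 4)
                .mapPartition(new RichMapPartitionFunction<Integer, String>() {
                    @Override
                    public void mapPartition(Iterable<Integer> values, Collector<String> out) {
                        // m is the parallelism of THIS operator only; the
                        // parallelism of preceding or following operators
                        // is not visible from here.
                        int m = getRuntimeContext().getNumberOfParallelSubtasks();
                        int i = getRuntimeContext().getIndexOfThisSubtask();
                        out.collect("subtask " + i + " of " + m);
                    }
                })
                .collect();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```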
Is ther
n't get the fine-grained
control you probably need for your use case.
- Ufuk
On Thu, May 5, 2016 at 3:29 PM, Paschek, Robert
<robert.pasc...@tu-berlin.de> wrote:
> Hi Mailing List,
>
> I want to write and read intermediates to/from disk.
>
> The f
where all
variables in the return type can be deduced from the input type(s).
After adding ".returns(input.getType())" to my transformation,
everything works great now :-)
Many thanks to the developers who added these messages in the recent versions!
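For later readers, a minimal sketch of the fix in context. The generic pass-through mapper is a hypothetical stand-in for my transformation: its output type is erased, so Flink cannot infer it and asks for an explicit type hint via `returns(...)`:

```java
import java.util.List;

import org.apache.flink.api.common.functions.MapPartitionFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ReturnsHintSketch {

    // Hypothetical generic pass-through mapper: its output type T is erased,
    // so Flink's type extraction cannot determine it automatically.
    static class PassThrough<T> implements MapPartitionFunction<T, T> {
        @Override
        public void mapPartition(Iterable<T> values, Collector<T> out) {
            for (T value : values) {
                out.collect(value);
            }
        }
    }

    static List<String> run() throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<String> input = env.fromElements("a", "b");

        return input
                .mapPartition(new PassThrough<String>())
                // Output type equals input type here, so its TypeInformation
                // can simply be reused as the hint.
                .returns(input.getType())
                .collect();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```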
Best,
Robert
__
Hi Mailing List,
I probably have a problem with the lazy evaluation. Depending on the return
datatype of my last transformation (GroupReduce), the integrated Flink mini
cluster does not start. I have done the following:
// Configuration
Configuration parameters = new Configuration();
paramet
Hi Mailing List,
I want to write and read intermediates to/from disk.
The following foo code snippet may illustrate my intention:
public void mapPartition(Iterable<T> tuples, Collector<T> out) {
    for (T tuple : tuples) {
        if (Condition)