Hi Stephan, Vasiliki, Fabian,
Thanks so much for getting back to me so quickly. I apologize that I wasn't
able to reply with the same speed. I have been waiting until I had a large
enough chunk of time to go through each of your suggestions, to make sure I
could provide useful feedback.
Unfortunately
Hey Ufuk,
thank you for this. I have not yet taken a look at this hash table; I will
look into it tomorrow at the office.
Sebastian
2014-07-27 21:02 GMT+02:00 Ufuk Celebi:
> Thanks! I didn't take a look at the plan or code yet, but the call to
> joinFunction.join(probeSideRecord, null, collector) in
Thanks! I didn't take a look at the plan or code yet, but the call to
joinFunction.join(probeSideRecord, null, collector) in
JoinWithSolutionSetSecondDriver.java:143 is the root of the problem. It is
taking that branch because the solution set join is *not* finding a match in
the hash table for
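In case it helps anyone following this thread: because the driver can call
join(probeSideRecord, null, collector) when no solution-set match exists, a
join function has to guard against the null build-side record. A minimal
sketch (the tuple types and class name are illustrative, not taken from the
actual ConnectedComponents test):

import org.apache.flink.api.common.functions.FlatJoinFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class GuardedSolutionSetJoin
        implements FlatJoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>> {

    @Override
    public void join(Tuple2<Long, Long> probeSide,
                     Tuple2<Long, Long> solutionSetMatch,
                     Collector<Tuple2<Long, Long>> out) {
        // The driver passes null here when the solution-set hash table has no
        // entry for the probe-side key; dereferencing it unguarded is the NPE.
        if (solutionSetMatch == null) {
            return; // no match: emit nothing (or handle however the job needs)
        }
        out.collect(new Tuple2<Long, Long>(probeSide.f0, solutionSetMatch.f1));
    }
}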
Hi,
attached is the whole stack trace. I am working in this branch:
https://github.com/skunert/incubator-flink/tree/constantFields_renamed . The
question is whether the plan is incorrectly built due to my changes, or
whether there is maybe an optimizer bug which only comes into effect because
of my changes. T
Hey Sebastian,
Could you also post the exception?
Thanks!
Ufuk
> On 27 Jul 2014, at 18:23, Sebastian Kunert wrote:
>
> Hey guys,
>
> I am currently working on optimizer integration of forwarded fields. I get
> NullPointerExceptions during the execution of our ConnectedComponentITCase
> in the NeighborWithComponentIDJoin.
Gomathivinayagam Muthuvinayagam created FLINK-1037:
--------------------------------------------------
Summary: Projects wiki page link in contribution page is broken
Key: FLINK-1037
URL: https://issues.apache.org/jira/browse/FLINK-1037
Hey guys,
I am currently working on optimizer integration of forwarded fields. I get
NullPointerExceptions during the execution of our ConnectedComponentITCase
in the NeighborWithComponentIDJoin. A second pair of eyes on this plan
would help me; maybe there are some obvious problems with it that I
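For readers of the archive, a rough sketch of what such a forwarded-fields
declaration looks like in the Java API (class and field names are
illustrative, and the annotation was still being renamed on the
constantFields_renamed branch, so treat this as a sketch rather than the
branch's actual code):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.functions.FunctionAnnotation.ForwardedFields;
import org.apache.flink.api.java.tuple.Tuple2;

// Tells the optimizer that field 0 passes through unchanged, so existing
// partitioning or sort properties on it can be reused downstream.
@ForwardedFields("f0")
public class UpdateComponent
        implements MapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>> {

    @Override
    public Tuple2<Long, Long> map(Tuple2<Long, Long> vertex) {
        // f0 (the vertex id) is forwarded untouched; only f1 changes.
        return new Tuple2<Long, Long>(vertex.f0, vertex.f1 + 1);
    }
}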
I think this is what Martin is currently doing:
StringIDs --map-> (StringIDs, LongIDs) --map-> LongIDs
and he wants to use both the second and the third set. He is asking for a
way to replace the second map operation (since it seems unnecessary to
create an extra map just for that).
I believe the appropriate
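Concretely, no explicit split operator is needed: the intermediate
(StringID, LongID) set can be bound to a variable and consumed by both
downstream operators. A sketch, assuming a DataSet<String> named stringIds
(the variable names are illustrative):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;

// Second set: (StringID, LongID) pairs derived from the string ids.
DataSet<Tuple2<String, Long>> mapping = stringIds
        .map(new MapFunction<String, Tuple2<String, Long>>() {
            @Override
            public Tuple2<String, Long> map(String id) {
                return new Tuple2<String, Long>(id, (long) id.hashCode());
            }
        });

// Third set: just the LongIDs, read from the SAME intermediate set; the
// dataflow simply branches, so no extra "split" operator is required.
DataSet<Long> longIds = mapping
        .map(new MapFunction<Tuple2<String, Long>, Long>() {
            @Override
            public Long map(Tuple2<String, Long> pair) {
                return pair.f1;
            }
        });

Note that String.hashCode() is of course not collision-free; it only stands
in here for whatever hash function is actually used.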
Hey Martin,
On 27 Jul 2014, at 12:56, Martin Neumann wrote:
> Is there a way to do an operation that allows for more than one output
> set (basically splitting one set into two)? This would reduce the
> complexity of the code a lot.
What exactly do you mean by split?
I am not sure if this is what
Hi,
I have a dataset of StringIDs and I want to map them to Longs by using a
hash function. I will use the LongIDs in a series of iterative
computations and then map back to the StringIDs.
Currently I have a map operation that creates tuples with the string and
the long. I have another mapper cle
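One way to do the mapping back after the iterations (a sketch, assuming the
iteration emits Tuple2<Long, Double> records in a variable named
iterationResult, and that the (StringID, LongID) set from the first map is
kept in a variable named mapping, as in the sketch further up) is a join on
the LongID:

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;

// Recover the StringIDs by joining the iteration output (keyed by LongID)
// against the retained (StringID, LongID) mapping set.
DataSet<Tuple2<String, Double>> withStringIds = iterationResult
        .join(mapping)
        .where(0)    // LongID in the iteration result
        .equalTo(1)  // LongID in the mapping
        .with(new JoinFunction<Tuple2<Long, Double>, Tuple2<String, Long>,
                               Tuple2<String, Double>>() {
            @Override
            public Tuple2<String, Double> join(Tuple2<Long, Double> result,
                                               Tuple2<String, Long> idPair) {
                return new Tuple2<String, Double>(idPair.f0, result.f1);
            }
        });

This keeps the mapping set around between the two phases instead of
recomputing it, at the cost of one extra join at the end.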