Looks good; I made a comment replying to one of Mircea's comments.
Cheers,
On Dec 12, 2013, at 6:57 PM, Vladimir Blagojevic wrote:
> On 12/6/2013, 11:40 AM, Mircea Markus wrote:
>> Hmm I think you could leverage the parallel iteration from the
>> EquivalentConcurrentHashMapV8 there instead of writing it yourself ;)
> On 9 Dec 2013, at 08:10, Radim Vansa wrote:
>
> There is one thing I really don't like about the current implementation:
> DefaultCollector, and any other collection that keeps one (or more)
> objects per entry.
> We can't assume that if you double the number of objects in memory (and
> in f
>> On 9 Dec 2013, at 16:45, Radim Vansa wrote:
>>
>> On 12/09/2013 04:21 PM, Vladimir Blagojevic wrote:
>> Radim, these are some very good ideas. And I think we should put them on
>> the roadmap.
> Do you have any JIRA where this could be marked down?
>>
>> Also, I like your
On 12/6/2013, 11:40 AM, Mircea Markus wrote:
> Hmm I think you could leverage the parallel iteration from the
> EquivalentConcurrentHashMapV8 there instead of writing it yourself ;)
>
Hi, for those interested in parallel M/R, I have uploaded my first
proposal that will hopefully, with your input,
On 12/09/2013 04:21 PM, Vladimir Blagojevic wrote:
> Radim, these are some very good ideas. And I think we should put them on
> the roadmap.
Do you have any JIRA where this could be marked down?
>
> Also, I like your ExecutorAllCompletionService, however, I think it will
> not work in this case as
On 12/09/2013 05:33 PM, Radim Vansa wrote:
> On 12/09/2013 04:21 PM, Vladimir Blagojevic wrote:
>> Radim, these are some very good ideas. And I think we should put them on
>> the roadmap.
> Do you have any JIRA where this could be marked down?
>> Also, I like your ExecutorAllCompletionService, howe
Radim, these are some very good ideas. And I think we should put them on
the roadmap.
Also, I like your ExecutorAllCompletionService; however, I think it will
not work in this case, as we often do not have exclusive access to the
underlying executor service used in ExecutorAllCompletionService.
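To make the mechanism under discussion concrete, here is a rough sketch of what an all-tasks completion service does in general terms, written against plain java.util.concurrent; it is illustrative only, not the actual ExecutorAllCompletionService, and the class and method names are made up:

    import java.util.concurrent.CompletionService;
    import java.util.concurrent.Executor;
    import java.util.concurrent.ExecutorCompletionService;

    // Illustrative sketch, not the actual Infinispan class: track and await
    // only the tasks submitted through this instance, so the shared executor
    // itself never has to be shut down or owned exclusively.
    class BatchCompletionSketch {
        private final CompletionService<Void> completion;
        private int submitted;

        BatchCompletionSketch(Executor sharedExecutor) {
            this.completion = new ExecutorCompletionService<Void>(sharedExecutor);
        }

        void submit(Runnable task) {
            completion.submit(task, null);
            submitted++;
        }

        // Blocks until every task submitted above has completed.
        void awaitAll() throws InterruptedException {
            for (int i = 0; i < submitted; i++) {
                completion.take();
            }
        }
    }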
There is one thing I really don't like about the current implementation:
DefaultCollector, and any other collection that keeps one (or more)
objects per entry.
We can't assume that if you double the number of objects in memory (and
in fact, if you map entry to bigger object, you do that), they'd
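To make the memory concern concrete, here is a minimal sketch of the kind of collector being described, assuming a single emit(key, value) method; the class name is made up and this is not the actual DefaultCollector code:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch only: every emitted (key, value) pair is buffered
    // on the heap until the reduce phase runs, so mapping N entries keeps
    // roughly N extra objects alive.
    class BufferingCollectorSketch<KOut, VOut> {
        private final Map<KOut, List<VOut>> emitted = new HashMap<KOut, List<VOut>>();

        public void emit(KOut key, VOut value) {
            List<VOut> values = emitted.get(key);
            if (values == null) {
                values = new ArrayList<VOut>();
                emitted.put(key, values);
            }
            values.add(value);
        }

        public Map<KOut, List<VOut>> collected() {
            return emitted;
        }
    }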
Hey Mircea,
On 12/6/2013, 11:40 AM, Mircea Markus wrote:
> Ah right. Still for each key a StatelessTask instance is created though.
I don't think this is the case. There is one StatelessTask per
map/reduce/combine invocation.
>> What I intend to do is move this code to DataContainer because t
On 12/06/2013 04:18 PM, Mircea Markus wrote:
> - the DefaultDataContainer uses an EquivalentConcurrentHashMapV8 for holding
> the entries, which already supports parallel iteration so the heavy lifting
> is already in place
>
not entirely true. If we configure size-based eviction, the
Defaul
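For reference, the parallel iteration being referred to looks roughly like the snippet below when written against java.util.concurrent.ConcurrentHashMap from Java 8, into which ConcurrentHashMapV8 was merged (EquivalentConcurrentHashMapV8 is Infinispan's port of that class); this is a sketch, not Infinispan code:

    import java.util.concurrent.ConcurrentHashMap;

    public class ParallelIterationSketch {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> entries =
                    new ConcurrentHashMap<String, Integer>();
            entries.put("a", 1);
            entries.put("b", 2);

            // A parallelism threshold of 1 lets the map split the traversal
            // across the common fork/join pool rather than iterating on the
            // calling thread only.
            entries.forEach(1, (key, value) ->
                    System.out.println("map(" + key + ", " + value + ")"));
        }
    }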
On Dec 6, 2013, at 4:36 PM, Vladimir Blagojevic wrote:
> Hey Mircea,
>
> On the right track but not exactly. I do not use a separate thread per
> key. Input keys for map/combine/reduce are split into a List of Lists
> and then each list is submitted to the Executor as a separate Runnable.
> Have
Hey Mircea,
On the right track but not exactly. I do not use a separate thread per
key. Input keys for map/combine/reduce are split into a List of Lists
and then each list is submitted to the Executor as a separate Runnable.
Have a look at submitToExecutor method on
https://github.com/vblagoje/infi
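Here is a minimal sketch of the partitioning scheme described above; apart from the submitToExecutor name taken from the mail, the types and helper names are made up for illustration, and the real code is in the branch linked above:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.function.Consumer;

    // Illustrative sketch only: split the input keys into sublists and hand
    // each sublist to the shared executor as a single Runnable, rather than
    // submitting one task per key.
    class PartitionedSubmitSketch {
        // Splits keys into at most `parts` contiguous sublists; assumes parts >= 1.
        static <K> List<List<K>> partition(List<K> keys, int parts) {
            List<List<K>> partitions = new ArrayList<List<K>>();
            int chunk = Math.max(1, (keys.size() + parts - 1) / parts);
            for (int i = 0; i < keys.size(); i += chunk) {
                partitions.add(keys.subList(i, Math.min(keys.size(), i + chunk)));
            }
            return partitions;
        }

        static <K> void submitToExecutor(ExecutorService executor, List<K> keys,
                                         int parts, Consumer<K> mapOneKey) {
            for (List<K> sublist : partition(keys, parts)) {
                // One Runnable per sublist of keys, not one per key.
                executor.execute(() -> sublist.forEach(mapOneKey));
            }
        }
    }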
Thanks Vladimir, I like the hands on approach!
Adding -dev, there's a lot of interest around the parallel M/R so I think
others will have some thoughts on it as well.
So what you're basically doing in your branch is iterating over all the keys in
the cache and then for each key invoking the mapping