Hi Naveen,

In our case the biggest performance gain happened when we started adding data
to the IgniteDataStreamer in parallel.


Earlier we were doing:
entryMapToBeStreamed.entrySet().stream().forEach(dataStreamer::addData);

Perf improved tremendously when we did something like this:
entryMapToBeStreamed.entrySet().parallelStream().forEach(dataStreamer::addData);
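
In a fuller form it is roughly the below. Just a sketch, not our production
code - the cache name "myCache", the key/value types and the sample data are
placeholders:

// Minimal sketch; IgniteDataStreamer is thread-safe, so addData() can be
// called concurrently from the parallel stream's worker threads.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class ParallelStreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            // Entries that would normally come from the file-reading step.
            Map<Long, String> entryMapToBeStreamed = new ConcurrentHashMap<>();
            for (long i = 0; i < 1_000_000; i++)
                entryMapToBeStreamed.put(i, "value-" + i);

            try (IgniteDataStreamer<Long, String> dataStreamer =
                     ignite.dataStreamer("myCache")) {
                // addData() called from multiple threads via parallelStream().
                entryMapToBeStreamed.entrySet()
                    .parallelStream()
                    .forEach(dataStreamer::addData);

                dataStreamer.flush(); // push anything still buffered
            }
        }
    }
}

The flush() (and the close() done by try-with-resources) only make sure the
last buffered entries actually reach the cache.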


If you are not doing this (calling addData() in multiple threads), I would
do that first and check performance.
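
On your decoupling question: if you want to separate the file-reading part
from the cache-streaming part without an external messaging layer, an
in-process hand-off is usually enough, e.g. one reader thread filling a
bounded queue with batches and a few worker threads draining it into the
streamer. Again only a rough sketch, and the file path, cache name, batch
size and thread count are made-up placeholders:

// Rough sketch of in-process decoupling: one reader thread fills a bounded
// queue with batches, a few worker threads drain it into the streamer.
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class DecoupledLoaderSketch {
    // Sentinel batch telling the workers to stop.
    private static final List<Map.Entry<Long, String>> POISON = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        int workers = 4;
        int batchSize = 10_000;
        BlockingQueue<List<Map.Entry<Long, String>>> queue = new ArrayBlockingQueue<>(32);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                ExecutorService pool = Executors.newFixedThreadPool(workers);

                // Workers: take a batch off the queue and stream it (addData is thread-safe).
                for (int i = 0; i < workers; i++) {
                    pool.submit(() -> {
                        List<Map.Entry<Long, String>> b;
                        while ((b = queue.take()) != POISON)
                            streamer.addData(b);
                        return null;
                    });
                }

                // Reader: file -> queue, one batch at a time.
                long key = 0;
                List<Map.Entry<Long, String>> batch = new ArrayList<>(batchSize);
                try (BufferedReader in = Files.newBufferedReader(Paths.get("/tmp/data.csv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        batch.add(new AbstractMap.SimpleEntry<>(key++, line));
                        if (batch.size() == batchSize) {
                            queue.put(batch);
                            batch = new ArrayList<>(batchSize);
                        }
                    }
                }
                if (!batch.isEmpty())
                    queue.put(batch);

                for (int i = 0; i < workers; i++)
                    queue.put(POISON); // tell the workers to stop

                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
                streamer.flush();
            }
        }
    }
}

In my view an external queue like Solace only becomes necessary when the
reader and the loader have to run in different processes.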


Best Regards,
Gaurav


On Mon, Mar 12, 2018 at 9:55 AM, Naveen <[email protected]> wrote:

> Hi Gaurav
>
> Decoupling file reading and cache streaming requires a kind of messaging
> layer in between, right? Initially, since it's a bulk activity we will be
> doing, I did not want to have additional memory and system resources
> consumed by the introduction of a messaging layer.
>
> But since the whole purpose of using DataStreamer for bulk loading is being
> defeated, I may have to go for it.
>
> If my understanding is correct, are we trying to implement the solution
> below?
>
> 1. Read in batches into a collection object, let's say for every 10K records
> 2. Publish the collection object to Solace (messaging)
> 3. Consume the collection object from the Solace queue and use DataStreamer
> <Map.Entry> to load it into the cache.
>
> OR, through StreamReceivers and DataStreamers, can we avoid the messaging
> layer for decoupling the read part from the cache streaming part?
>
> If you have any code snippet which does the same, would you be able to
> share it?
>
>
> Thanks
> Naveen
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
