Hi,


In our checkpoint application we handle a huge amount of data; some sections
hold up to 500MB. While testing the bulk-sync scenario I have observed that
performance is very poor.



Steps (active-standby model):

1. Start the active node and checkpoint 500MB of data (an illustrative write sketch follows these steps)

2. Start the standby node
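
For context, the active side in step 1 looks roughly like the sketch below. This is only an illustrative outline of the SAF CKPT B.02.02 calls, not our production code; the checkpoint name (safCkpt=bulkSyncDemo), the single named section "sec1", the 10MB write chunk size, and the SA_CKPT_WR_ACTIVE_REPLICA flag are assumptions made for the example.

/* Active side: create the checkpoint/section and fill ~500MB in 10MB chunks.
 * Sketch only; names, sizes and flags are assumptions for illustration.     */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <saCkpt.h>

#define TOTAL_SIZE ((SaSizeT)500 * 1024 * 1024)  /* 500MB section (assumed)  */
#define CHUNK_SIZE ((SaSizeT)10 * 1024 * 1024)   /* write 10MB at a time     */

int main(void)
{
    SaCkptHandleT ckpt_hdl;
    SaCkptCheckpointHandleT chk_hdl;
    SaVersionT version = { 'B', 2, 2 };
    SaNameT name;
    SaCkptCheckpointCreationAttributesT attr;
    SaCkptSectionIdT sec_id = { 4, (SaUint8T *)"sec1" };        /* assumed id */
    SaCkptSectionCreationAttributesT sec_attr = { &sec_id, SA_TIME_END };
    SaCkptIOVectorElementT iov;
    SaUint32T err_idx;
    SaTimeT timeout = 10LL * 1000000000LL;       /* 10s, in nanoseconds      */
    SaSizeT offset;
    char *chunk = malloc(CHUNK_SIZE);
    SaAisErrorT rc;

    memset(chunk, 0xAB, CHUNK_SIZE);             /* dummy payload            */

    strcpy((char *)name.value, "safCkpt=bulkSyncDemo");       /* assumed name */
    name.length = strlen((char *)name.value);

    attr.creationFlags     = SA_CKPT_WR_ACTIVE_REPLICA;       /* assumed      */
    attr.checkpointSize    = TOTAL_SIZE;
    attr.retentionDuration = SA_TIME_END;
    attr.maxSections       = 2;                  /* >1 so we can name a section */
    attr.maxSectionSize    = TOTAL_SIZE;
    attr.maxSectionIdSize  = sec_id.idLen;

    rc = saCkptInitialize(&ckpt_hdl, NULL, &version);
    if (rc != SA_AIS_OK) { fprintf(stderr, "saCkptInitialize: %d\n", rc); return 1; }

    rc = saCkptCheckpointOpen(ckpt_hdl, &name, &attr,
                              SA_CKPT_CHECKPOINT_CREATE | SA_CKPT_CHECKPOINT_WRITE,
                              timeout, &chk_hdl);
    if (rc != SA_AIS_OK) { fprintf(stderr, "saCkptCheckpointOpen: %d\n", rc); return 1; }

    rc = saCkptSectionCreate(chk_hdl, &sec_attr, NULL, 0);
    if (rc != SA_AIS_OK) { fprintf(stderr, "saCkptSectionCreate: %d\n", rc); return 1; }

    /* Fill the 500MB section chunk by chunk */
    for (offset = 0; offset < TOTAL_SIZE; offset += CHUNK_SIZE) {
        iov.sectionId  = sec_id;
        iov.dataBuffer = chunk;
        iov.dataSize   = CHUNK_SIZE;
        iov.dataOffset = offset;
        iov.readSize   = 0;                      /* unused for writes        */

        rc = saCkptCheckpointWrite(chk_hdl, &iov, 1, &err_idx);
        if (rc != SA_AIS_OK) { fprintf(stderr, "write @%llu: %d\n",
                                       (unsigned long long)offset, rc); return 1; }
    }

    saCkptCheckpointClose(chk_hdl);
    saCkptFinalize(ckpt_hdl);
    free(chunk);
    return 0;
}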



In step 2 the data is synced from the active node to the standby node during
the saCkptCheckpointOpen() call. After that, the application reads data from
the checkpoint database (10MB at a time) using saCkptCheckpointRead() and
applies it, so that the application on the standby node reaches the same
state as the active one. We have observed that the saCkptCheckpointRead()
call takes a lot of time (it appears to increase exponentially once the total
section size is >= 400MB). The read loop in question is sketched below.
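
The standby-side read loop is essentially the following. Again this is just a sketch under the same assumptions (checkpoint name, section id, and a total section size known in advance to be 500MB are made up for the example):

/* Standby side: open the checkpoint (bulk sync happens here) and read the
 * section back 10MB at a time. Sketch only; names/sizes are assumptions.    */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <saCkpt.h>

#define TOTAL_SIZE ((SaSizeT)500 * 1024 * 1024)  /* known section size (assumed) */
#define CHUNK_SIZE ((SaSizeT)10 * 1024 * 1024)   /* read 10MB per call           */

int main(void)
{
    SaCkptHandleT ckpt_hdl;
    SaCkptCheckpointHandleT chk_hdl;
    SaVersionT version = { 'B', 2, 2 };
    SaNameT name;
    SaCkptSectionIdT sec_id = { 4, (SaUint8T *)"sec1" };        /* assumed id */
    SaCkptIOVectorElementT iov;
    SaUint32T err_idx;
    SaTimeT timeout = 10LL * 1000000000LL;       /* 10s, in nanoseconds      */
    SaSizeT offset;
    char *chunk = malloc(CHUNK_SIZE);
    SaAisErrorT rc;

    strcpy((char *)name.value, "safCkpt=bulkSyncDemo");       /* assumed name */
    name.length = strlen((char *)name.value);

    rc = saCkptInitialize(&ckpt_hdl, NULL, &version);
    if (rc != SA_AIS_OK) { fprintf(stderr, "saCkptInitialize: %d\n", rc); return 1; }

    /* This open is where the bulk sync to the standby node takes place */
    rc = saCkptCheckpointOpen(ckpt_hdl, &name, NULL,
                              SA_CKPT_CHECKPOINT_READ, timeout, &chk_hdl);
    if (rc != SA_AIS_OK) { fprintf(stderr, "saCkptCheckpointOpen: %d\n", rc); return 1; }

    /* Read the 500MB section in 10MB slices; these are the calls whose
     * latency grows sharply once the section size reaches ~400MB            */
    for (offset = 0; offset < TOTAL_SIZE; offset += CHUNK_SIZE) {
        iov.sectionId  = sec_id;
        iov.dataBuffer = chunk;
        iov.dataSize   = CHUNK_SIZE;
        iov.dataOffset = offset;
        iov.readSize   = 0;                      /* filled in by the service */

        rc = saCkptCheckpointRead(chk_hdl, &iov, 1, &err_idx);
        if (rc != SA_AIS_OK) { fprintf(stderr, "read @%llu: %d\n",
                                       (unsigned long long)offset, rc); return 1; }

        /* hand iov.readSize bytes of chunk over to the application here */
    }

    saCkptCheckpointClose(chk_hdl);
    saCkptFinalize(ckpt_hdl);
    free(chunk);
    return 0;
}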



Is there any explanation for this, or are any performance test results
available for the checkpoint read/write calls?



How is checkpoint data transferred between the osafckptnd process and the
application? Do the agent and osafckptnd share a common shared-memory segment?



Regards,

Girish
