Hi Neel,

Here are some replies to your comments on the pre-review patches for 2PBE:

Neelakanta Reddy wrote:
> Hi Anders,
>
> Following are the comments/questions for the given patch set:
>
> 1. check has to be there if  OPENSAF_IMM_PBE_RT_CLASS_NAME is not found
>
> @@ -942,6 +942,14 @@ int loadImmFromPbe(void* pbeHandle, bool
>         TRACE_2("Successfully Applied the CCB ");
>         saImmOmCcbFinalize(ccbHandle);
>
> +       cimi = classInfoMap.find(std::string(OPENSAF_IMM_PBE_RT_CLASS_NAME));
> +       LOG_NO("Removed class %s RC:%u", (char *) OPENSAF_IMM_PBE_RT_CLASS_NAME,
> +               saImmOmClassDelete(immHandle, (char *) OPENSAF_IMM_PBE_RT_CLASS_NAME));
> +
> +       /*      if(cimi == classInfoMap.end()) {*/
> +       opensafPbeRtClassCreate(immHandle);
> +       /*      }*/
> +
The class OPENSAF_IMM_PBE_RT_CLASS_NAME is the name of a new runtime
class defining runtime objects (non-persistent, of course) used by the
new PBE implementation. There is one runtime object of this class for
each PBE, and each PBE creates a separate thread for handling the
runtime object.
The new runtime object has two purposes. The first and main one is to
allow the primary and secondary PBE to synchronize persistent writes,
where the sqlite CCB prepare is the most important step. It allows the
CCB prepare to execute in parallel at both PBEs. The slave will reply
OK on the prepare if it has received the same number of operations for
the CCB as the primary has. If that is the case, it replies OK
immediately and then executes the sqlite calls for the CCB operations,
but not the commit. The primary, on receiving OK on the prepare, will
execute the sqlite calls for the CCB operations and the sqlite commit.
The primary replies to the imm-ram in the same way as 1PBE today, so
the CCB commits in imm-ram. Finally, the slave PBE, as an applier,
will receive the completed and apply callbacks for the CCB, which
trigger the sqlite commit in the slave.

If the slave has not yet received all operations (as an applier) when
a prepare request arrives from the primary, or if the slave is still
busy committing the previous CCB, then the rt-b-thread will do a
*local* TRY_AGAIN wait in the slave for a few seconds. If that local
TRY_AGAIN expires, the prepare admin-op replies with TRY_AGAIN to the
primary, allowing the primary to decide whether to TRY_AGAIN remotely
or to abort the CCB.

The above is not visible in the pre-review patches though. The second
purpose of this class and its associated runtime threads is to allow
the operator to check the current state of each PBE. Even if the PBE
is busy/blocked in sqlite commit processing against a sluggish or
blocked file system, the real-time thread can still reply with the
PBE's state. There are some pure runtime attributes defined in the
class for this. This should be useful even for regular 1PBE.

The incomplete code above, where the class is deleted and re-created
at loading, has been cleaned up. The point of it is to allow simple
upgrades (changes) to the definition of this runtime class between
releases. The imm can do this (avoiding having to go via an upgrade
campaign) because it is in the special position of being in control of
imm loading. Right after the actual loading has been successfully
completed, but before the loader has closed and signaled the loading
as completed to the rest of the system, we know there can be no
instances of any non-persistent runtime objects yet. So we know that
the class-delete should succeed (if the class exists) and that the
create of the new version of the class should succeed.

Of course, this sneaky way of upgrading a class for non-persistent
runtime data does not work for normal real upgrades, which should not
normally involve a cluster reload. So any change to this class would
still have to be carefully managed by the imm implementation.

> 2. creating of OsafImmPbeRt should not have been allowed in the 
> pre-load time.
>
> osafimmloadd: ER Failed to create the class OsafImmPbeRt err:6
> osafimmloadd: NO The class OsafImmPbeRt has been created since it was 
> missing from the imm.xml load file

Fixed.
>
>
> 3. when the pbe tries to update the epoch immeadietly after loading 
> following error is returned from the slave:
>
> osafimmpbed: NO Got ERR_NOT_EXIST on atempt to update epoch towards 
> slave PBE
> osafimmpbed: NO Update epoch 4 committing with ccbId:4294967299
> osafimmnd[4773]: NO Implementer (applier) connected: 15 
> (@OpenSafImmPBE) <0, 2010f>
> osafimmnd[4773]: NO Implementer connected: 16 (OsafImmPbeRt_B) <0, 2010f>
This log printout has been suppressed (converted from ER (error) to
IN (info)) and is actually harmless.
>
> 4. The Adminoperation towards slave PBE for class create/delete and 
> ccb apply may effect the IMMSV synchronous timeout.
I am not sure what you mean by that.
If you mean that the class-create operation may take longer to execute
with 2PBE than with 1PBE, then that is not yet known.
I have not done any measurements, but I don't notice any difference
when testing. So if there is a difference, it is normally less than
human cognition can detect (at least for me when testing on UML).

Remember that 2PBE executes two PBEs that write towards local file
systems, compared to regular 1PBE, which writes to a shared file
system where each sqlite commit involves a duplicated fsync, each of
which has to be secured remotely. So for 1PBE there would be two
remote (blocking) turnarounds, whereas here we have one. The admop
requests here go via fevs, but that still amounts to one blocking
turnaround. The reply is direct. The fact that fevs is a broadcast
should not significantly slow down the turnaround.
In any case, class-create and class-delete are rare operations and
thus not at the top of the list of things to optimize for performance,
even if it would turn out that response time is affected by 2PBE.

I should also say generally that the goal of 2PBE is not to get better
performance.
The goal is to allow deployment of OpenSAF on systems that do not have
any shared file system.
I expect performance for persistent writes to be "roughly the same".
It could even be improved compared with some setups that currently
overload the shared file system.
At least 2PBE will be less prone to the complete choke-ups for
persistent imm writes that we have seen on occasion with DRBD. For
non-persistent writes and for reads there should of course be no
difference with 2PBE.

>
> 5. creating OPENSAF_IMM_PBE_RT_CLASS_NAME while loading if 2PBE is not 
> set must be avoided.
Why?
I see the runtime class as useful also for 1PBE.
There will of course then only be one instance of the class.
>
> 6. what is the use of creating OPENSAF_IMM_PBE_RT_IMPL_NAME_A
This is the instance that is owned by the primary PBE, or by the only
PBE in regular 1PBE.
So far in the 2PBE implementation it is only used to allow probing the
PBE for current status.
Currently this just shows the last committed ccb-id, commit time and
epoch.
>
> 7. If the callback functions are same, what is the use of creating 
> OPENSAF_IMM_PBE_RT_IMPL_NAME_A and OPENSAF_IMM_PBE_RT_IMPL_NAME_B.
Currently the callback functions are implemented as shared. Instead of
having separate function implementations for the A/primary and
B/slave, they use the same function bodies but branch on their role
(primary/A or slave/B).
This is an advantage when there is a lot of shared code that needs to
be executed by both sides.

>
> manipulating of present pbe callback functions may be considered. what 
> is the real use of these implementers.
See above in the reply to question (1).
>
> 8. when the same class created/deleted and PRTO create/update is 
> performed, the ccb ID is more in the slave PBE. once the cluster is 
> restarted always slave PBE will have greater ccbid which is chosen by 
> IMMD arbiteration.
This has been fixed. Class-create (and PRTO/PRTA operations) are not
real CCBs. But in the fixed implementation, the primary sends its
pseudo-ccb-id to the slave for these operations, so the slave can
commit the same logical operation using the same pseudo-ccb-id.
A pseudo-ccb-id is recognizable by having a value in the high range of
the 64-bit value.
>
> 9. while continuely creating ccb-creates, remove the imm.db(a case 
> like nfs failure/stop) then slave PBE tries to recreate the db but fails.
>
> where In the primary PBE failed to aplly the PBE because slave PBE is 
> not there:
>
> osafimmpbed: WA Failed to find CCB object for 18407
> osafimmpbed: WA Start prepare for ccb: 18408 towards slave PBE 
> returned: '12' from Immsv
> osafimmpbed: WA Ccb:18408 failed to prepare towards slave PBE
> osafimmnd[30144]: NO Ccb 18408 ABORTED (immcfg_Slot-3_330)
> osafimmpbed: WA Failed to find CCB object for 18408
> osafimmpbed: WA Start prepare for ccb: 18409 towards slave PBE 
> returned: '12' from Immsv
> osafimmpbed: WA Ccb:18409 failed to prepare towards slave PBE
> osafimmnd[30144]: NO Ccb 18409 ABORTED (immcfg_Slot-3_333)
> osafimmpbed: WA Failed to find CCB object for 18409

Well, this is stress testing.
The primary requirement here is that nothing incorrect happens, that
is, nothing that would violate the transaction ACID properties. It
seems that you are seeing problems with progress here, that is, you
seem to end up getting stuck. But for how long? The problem should
resolve itself.

I am not convinced that removing the imm.db is equivalent to an NFS
choke-up.
As long as the file is still open (the sqlite handle is not closed),
the file will actually not be removed by the kernel, even if it is no
longer visible in the directory.

If you want to simulate a choked or slow file system, then inserting a
sleep in the PBE slave would be a better simulation. Removing the
imm.db.xxxx file I would not consider a "fair" test. Although I don't
quite understand why you see any problems at all. Perhaps the sqlite
library logic gets upset/confused in handling the journal file etc.?
But the sqlite library still has access to the file.

> The slave PBE can not be able to do classimplementerset
>
> Oct  4 15:05:58 Slot-4 osafimmnd[3039]: NO ERR_TRY_AGAIN: ccb 9657 is 
> active on object cscfRdn=75387 of class neNumber. Can not add class 
> applier
> Oct  4 15:06:04 Slot-4 last message repeated 10 times
> Oct  4 15:06:04 Slot-4 osafimmpbed: ER saImmOiClassImplementerSet for 
> neNumber failed 6
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: NO Implementer locally 
> disconnected. Marking it as doomed 131 <429, 2020f> (@OpenSafImmPBE)
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: NO Implementer locally 
> disconnected. Marking it as doomed 132 <430, 2020f> (OsafImmPbeRt_B)
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: NO Implementer disconnected 
> 131 <429, 2020f> (@OpenSafImmPBE)
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: NO Implementer disconnected 
> 132 <430, 2020f> (OsafImmPbeRt_B)
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: WA SLAVE PBE process has 
> apparently died at non coord
> Oct  4 15:06:04 Slot-4 osafimmnd[3039]: NO STARTING SLAVE PBE process.
Not a serious problem, assuming it does not happen often.
That is, this is a performance problem.
The slave will restart and should hopefully succeed in initializing
the next time.
New CCBs will not be generated when the imm is not
persistent-writable, and the imm is not persistent-writable in 2PBE
when both PBEs are not available.
So this problem should disappear once ccb 9657 has been aborted.
>
>
> 10. In the master-slave of 2PBE approach, another level of time 
> dependecy is created(by calling admin operations). using atomic 
> approach for both PBE's may be faster because, PBEB (@OpenSafImmPBE) 
> receives the callbacks same as master(PBEA).
I don't know what you mean by "another level of time dependency".
Again, remember that regular old 1PBE depends on DRBD, which has
essentially the same "time dependency", i.e. a synchronous wait on
remote operations.

Thanks for the early testing and thanks for the feedback!

/AndersBj
>
> Thanks,
> Neel.
>
> On Friday 19 July 2013 03:25 PM, Anders Bjornerstedt wrote:
>> Summary: Pre-review of 2PBE complete dataflow.
>> Review request for Trac Ticket(s): (#21)
>> Peer Reviewer(s): Neel
>> Pull request to:
>> Affected branch(es): devel(4.4)
>> Development branch:
>>
>> --------------------------------
>> Impacted area       Impact y/n
>> --------------------------------
>>   Docs                    n
>>   Build system            n
>>   RPM/packaging           n
>>   Configuration files     n
>>   Startup scripts         n
>>   SAF services            n
>>   OpenSAF services        n
>>   Core libraries          n
>>   Samples                 n
>>   Tests                   n
>>   Other                   n
>>
>>
>> Comments (indicate scope for each "y" above):
>> ---------------------------------------------
>>
>> This is a pre-review of the basic data flow solution that I propose
>> for 2PBE.
>> By pre-review I mean:
>> These specific patches will not be pushed.
>> Thus I am not waiting for an "ack" on these patches.
>> The intention is to communicate the development of 2PBE so far and to 
>> allow
>> anyone to test and experiment with it, or just inspect the code to get
>> an understanding of how it will work.
>>
>> This is not a complete solution for 2PBE.
>> There is still work to be done to get tighter synchronisation in the
>> commit of an imm-transaction between the two PBEs. With regular
>> single PBE, the commit of one imm-transaction was atomic with the
>> commit to the sqlite file. In fact, the commit of the sqlite
>> transaction *was* the commit of the imm-transaction.
>>
>> With 2PBE, the commit of the transaction to PBE is still the commit 
>> of the imm-transaction.
>> But now there is also the issue of if and how to get the two sqlite 
>> files to commit atomically.
>> My current stance on this is that I will not attempt to make the 
>> commit to the two sqlite files
>> 100% atomic. This would either require at least two sqlite 
>> transaction commits/writes for every
>> one imm-transaction, or require the utilization of some open 2pc 
>> interface in sqlite if that
>> is available. I believe there is some kind of callback hook
>> available for sqlite transaction prepare. But I will not attempt to
>> use that for now. The basic idea is instead to keep one primary PBE
>> and then add a slave PBE, making each imm-transaction commit with
>> synchronization between the two PBEs before sqlite commit,
>> minimizing the probability that one sqlite instance succeeds in
>> committing while the other sqlite instance fails.
>> Still they may of course diverge, due to file system problems or 
>> crashes.
>> The first thing to note here is that a cluster start, after such a
>> broken PBE commit, will be handled by the loading arbitration, and
>> that arbitration will choose to load from the sqlite file that
>> succeeded in the commit (the latest file) as long as it is
>> available. And even if the cluster restart should come up with one
>> SC holding the file that failed to commit the last ccb, time out
>> waiting for the other, and thus load from the older file, there is
>> still no problem, because this only means that the transaction will
>> have been aborted. Note that the user/client of the transaction will
>> not have obtained any answer on the outcome unless both PBEs
>> succeeded in their commit.
>>
>> For regular CCBs, the primary PBE will prepare by executing all the
>> sqlite calls needed to build the transaction, but before committing
>> it sends a message to the slave PBE asking if it has received all
>> requests to be part of the ccb and, if so, to also execute all the
>> sqlite calls to prepare the ccb. When/if the standby replies with
>> ok, the primary PBE will commit its sqlite transaction and reply to
>> immsv. The immsv will then commit the ccb in imm-cluster-ram.
>> Finally, as part of sending messages about the commit of the ccb to
>> all appliers, the slave PBE will get both completed and apply
>> callbacks and will (hopefully) commit its sqlite transaction.
>>
>> In general, the slave PBE is tightly controlled by the primary PBE
>> (an asymmetric solution).
>> For class management and PRTO/PRTA handling the solution is slightly 
>> different.
>> In general I have also tried to leverage from existing mechanisms, 
>> such as making the
>> slave PBE an applier of all config classes, thus avoiding additional 
>> and unnecessary
>> distributed messaging for the payload of a ccb.
>>
>> changeset 131d635a526a0e894245ecb08b630268f9035976
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 08:39:25 +0200
>>
>>     IMM: 2PBE loading (test patch-1) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE loading 
>> mechanism.
>>     This patch will not be pushed. The intent with this patch is to 
>> allow
>>     testing and obtaining feedback on the 2PBE loading.
>>
>> This first patch has already been sent out once earlier for pre-review.
>> Keep in mind that it is for cluster restarts that all this is intended.
>> So in one sense this is the most important patch to be as reliable as 
>> possible.
>> --------------------------------------------------------------
>>
>> changeset f240bedd6aa2b92bdea38d1c6532ac8097ac9444
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 08:45:07 +0200
>>
>>     IMM: 2PBE ccb-handling (test patch-2) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE 
>> ccb-handling. This
>>     patch will not be pushed. The intent with this patch is to allow 
>> testing and
>>     obtaining feedback on the 2PBE ccb-handling. This test-patch goes 
>> on top of
>>     the "2PBE loading" testpatch.
>>
>> Contains the process management changes to start and restart 2 PBEs, 
>> including failover
>> handling. Plus the data flow solution for CCBs. The detailed
>> synchronisation of ccb commit between the two PBEs is still missing.
>> -------------------------------------------------------------
>>
>> changeset 421acf69d0765d0860bfe726b8f1ba1ccb519e6b
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 08:55:23 +0200
>>
>>     IMM: 2PBE class-create/delete/schema change handling (test 
>> patch-3) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE 
>> class-handling.
>>     This patch will not be pushed. The intent with this patch is to 
>> allow
>>     testing and obtaining feedback on the 2PBE handling of imm 
>> classes. This
>>     test-patch goes on top of the "2PBE ccb-handling" testpatch.
>>
>> Provides 2PBE persistification of class create/delete/schema-change.
>> Extends the existing PBE solution that uses an admin-op from immsv
>> to PBE so that the primary PBE invokes the same admin-op towards the
>> slave.
>>
>> ---------------------------------------------------------------
>>
>> changeset d6f91100b51e2b7215bf5fab95e10d0c069d14db
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 09:10:43 +0200
>>
>>     IMM: 2PBE PRTO-create handling (test patch-4) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE 
>> handling of
>>     creates of persistent runtime objects (PRTOs). This patch will 
>> not be
>>     pushed. The intent with this patch is to allow testing and obtaining
>>     feedback on the 2PBE handling of PRTO creates This test-patch 
>> goes on top of
>>     the "2PBE class-handling" testpatch.
>>
>> Provides 2PBE persistification of PRTO create.
>> In this case the immsv directly sends the same payload callbacks to 
>> both the primary
>> and slave PBE. The primary will invoke an admin-op towards the slave
>> to synchronize (not implemented).
>>
>> changeset 416b82d2d116c2a9ede63b507c72c10ee2126f42
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 09:16:17 +0200
>>
>>     IMM: 2PBE PRTO-delete handling (test patch-5) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE 
>> handling of
>>     deletes of persistent runtime objects (PRTOs). This patch will 
>> not be
>>     pushed. The intent with this patch is to allow testing and obtaining
>>     feedback on the 2PBE handling of PRTO deletes This test-patch 
>> goes on top of
>>     the "2PBE PRTO-create" testpatch.
>>
>> Provides 2PBE persistification of PRTO delete. This operation is 
>> quite different
>> from PRTO create because PRTO delete can in general be the delete of 
>> a subtree,
>> i.e. several PRTOs that have to be deleted as one transaction. Again 
>> the immsv
>> sends the same messages that it sent to the PBE, now also to the 
>> slave PBE.
>>
>> changeset 894df553ef7e1b829db43329a5bafe4ce7fb2cbd
>> Author:    Anders Bjornerstedt <[email protected]>
>> Date:    Fri, 19 Jul 2013 09:20:45 +0200
>>
>>     IMM: 2PBE PRTA-update handling (test patch-6) [#21]
>>
>>     This is a testpatch containing a test version of the 2PBE 
>> handling of
>>     updates of persistent runtime attributes (PRTAs). PRTAs can exist 
>> in either
>>     PRTOs or config objects. This patch will not be pushed. The 
>> intent with this
>>     patch is to allow testing and obtaining feedback on the 2PBE 
>> handling of
>>     PRTA updates This test-patch goes on top of the "2PBE PRTO-delete"
>>
>> Provides 2PBE persistification of PRTA updates. This is more
>> similar to PRTO create than to PRTO delete, but PRTA updates can be
>> an update in a config object.
>>
>>
>>
>> Complete diffstat:
>> ------------------
>>   osaf/libs/agents/saf/imma/imma_oi_api.c          |    4 +-
>>   osaf/libs/agents/saf/imma/imma_proc.c            |   34 +++-
>>   osaf/libs/common/immsv/immpbe_dump.cc            |   62 ++++++-
>>   osaf/libs/common/immsv/immsv_evt.c               |   55 ++++++-
>>   osaf/libs/common/immsv/include/immpbe_dump.hh    |    6 +-
>>   osaf/libs/common/immsv/include/immsv_api.h       |   17 +-
>>   osaf/libs/common/immsv/include/immsv_evt.h       |   18 +-
>>   osaf/libs/common/immsv/include/immsv_evt_model.h |    4 +
>>   osaf/services/saf/immsv/immd/immd_amf.c          |    5 +-
>>   osaf/services/saf/immsv/immd/immd_cb.h           |    6 +-
>>   osaf/services/saf/immsv/immd/immd_db.c           |    2 +
>>   osaf/services/saf/immsv/immd/immd_evt.c          |   82 ++++++++++-
>>   osaf/services/saf/immsv/immd/immd_main.c         |   40 +++++-
>>   osaf/services/saf/immsv/immd/immd_proc.c         |  229 
>> +++++++++++++++++++++++++++++-
>>   osaf/services/saf/immsv/immd/immd_proc.h         |    3 +-
>>   osaf/services/saf/immsv/immd/immd_sbevt.c        |   23 ++-
>>   osaf/services/saf/immsv/immloadd/imm_loader.cc   |  315 
>> ++++++++++++++++++++++++++++++++++++------
>>   osaf/services/saf/immsv/immloadd/imm_loader.hh   |    8 +-
>>   osaf/services/saf/immsv/immloadd/imm_pbe_load.cc |  196 
>> ++++++++++++++++++++++++-
>>   osaf/services/saf/immsv/immnd/ImmModel.cc        |  333 
>> ++++++++++++++++++++++++++++++++++---------
>>   osaf/services/saf/immsv/immnd/ImmModel.hh        |   17 +-
>>   osaf/services/saf/immsv/immnd/ImmSearchOp.cc     |    5 -
>>   osaf/services/saf/immsv/immnd/immnd_cb.h         |    6 +-
>>   osaf/services/saf/immsv/immnd/immnd_evt.c        |  345 
>> +++++++++++++++++++++++++++++++++++++++++++---
>>   osaf/services/saf/immsv/immnd/immnd_init.h       |    8 +-
>>   osaf/services/saf/immsv/immnd/immnd_main.c       |    3 +-
>>   osaf/services/saf/immsv/immnd/immnd_proc.c       |  246 
>> +++++++++++++++++++++++++++------
>>   osaf/services/saf/immsv/immpbed/immpbe.cc        |   77 +++++----
>>   osaf/services/saf/immsv/immpbed/immpbe.hh        |    4 +
>>   osaf/services/saf/immsv/immpbed/immpbe_daemon.cc |  677 
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
>>  
>>
>>   tests/immsv/implementer/applier.c                |  121 
>> ++++++++++++---
>>   31 files changed, 2559 insertions(+), 392 deletions(-)
>>
>>
>> Testing Commands:
>> -----------------
>>
>>
>> Testing, Expected Results:
>> --------------------------
>> The basic normal positive use cases should work,
>> including the persistification of data to *both* pbe files.
>>
>> Most negative test cases (failover, killing of processes) should also work.
>>
>> The main risk to watch for is when/if a PBE restarts with
>> '--recover', which means it simply re-attaches to the sqlite file
>> without regenerating a fresh file dumped from imm-ram. Then there is
>> a risk that a discrepancy has been introduced between the files. In
>> particular, the last transaction being processed at the time of the
>> crash could have committed at the other PBE, yet have been rolled
>> back at the restarted PBE.
>>
>> Persistent writes are blocked as long as not both PBEs are available.
>>
>> Conditions of Submission:
>> -------------------------
>> These patches will not be pushed.
>>
>>
>> Arch      Built     Started    Linux distro
>> -------------------------------------------
>> mips        n          n
>> mips64      n          n
>> x86         n          n
>> x86_64      n          n
>> powerpc     n          n
>> powerpc64   n          n
>>
>>
>> Reviewer Checklist:
>> -------------------
>> [Submitters: make sure that your review doesn't trigger any checkmarks!]
>>
>>
>> Your checkin has not passed review because (see checked entries):
>>
>> ___ Your RR template is generally incomplete; it has too many blank 
>> entries
>>      that need proper data filled in.
>>
>> ___ You have failed to nominate the proper persons for review and push.
>>
>> ___ Your patches do not have proper short+long header
>>
>> ___ You have grammar/spelling in your header that is unacceptable.
>>
>> ___ You have exceeded a sensible line length in your 
>> headers/comments/text.
>>
>> ___ You have failed to put in a proper Trac Ticket # into your commits.
>>
>> ___ You have incorrectly put/left internal data in your comments/files
>>      (i.e. internal bug tracking tool IDs, product names etc)
>>
>> ___ You have not given any evidence of testing beyond basic build tests.
>>      Demonstrate some level of runtime or other sanity testing.
>>
>> ___ You have ^M present in some of your files. These have to be removed.
>>
>> ___ You have needlessly changed whitespace or added whitespace crimes
>>      like trailing spaces, or spaces before tabs.
>>
>> ___ You have mixed real technical changes with whitespace and other
>>      cosmetic code cleanup changes. These have to be separate commits.
>>
>> ___ You need to refactor your submission into logical chunks; there is
>>      too much content into a single commit.
>>
>> ___ You have extraneous garbage in your review (merge commits etc)
>>
>> ___ You have giant attachments which should never have been sent;
>>      Instead you should place your content in a public tree to be 
>> pulled.
>>
>> ___ You have too many commits attached to an e-mail; resend as threaded
>>      commits, or place in a public tree for a pull.
>>
>> ___ You have resent this content multiple times without a clear 
>> indication
>>      of what has changed between each re-send.
>>
>> ___ You have failed to adequately and individually address all of the
>>      comments and change requests that were proposed in the initial 
>> review.
>>
>> ___ You have a misconfigured ~/.hgrc file (i.e. username, email etc)
>>
>> ___ Your computer have a badly configured date and time; confusing the
>>      the threaded patch review.
>>
>> ___ Your changes affect IPC mechanism, and you don't present any results
>>      for in-service upgradability test.
>>
>> ___ Your changes affect user manual and documentation, your patch series
>>      do not contain the patch that updates the Doxygen manual.
>>
>



_______________________________________________
Opensaf-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensaf-devel
