Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-19 Thread Chuck Hill


On 2015-05-17, 3:12 AM, ocs.cz wrote:

Samuel,

On 14 5 2015, at 2:12 am, Samuel Pelletier sam...@samkar.com wrote:
I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.

Right; but does it potentially mess up PK generation? I thought it should not, 
but of course, as so often, I can be wrong.

I think that switching to pessimistic locking will help this situation

Originally, we used pessimistic locking, but we found it gets a bit slow. Correct me please if I am overlooking something of importance, but as far as I know (and by my testing) it seems that when locking is pessimistic, even _reads_ are locked out and wait till the first transaction is committed.

We sort of need that anybody can read, and that any locking and induced delays happen only for those who edit, leaving readers unaffected.

for a multiple instance setup

Actually, the problem did happen in single-instance, but with concurrent 
requests on.

the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.

Hmmm I wonder.

What if I generated all the PKs myself programmatically, using something like a 
milli- or even microsecond timestamp?

EOF has a 40 byte UUID PK type that, in theory, would avoid this problem.  But 
you would have to convert all of the existing keys.


That should make clashes possible, but extremely improbable. I might even 
dedicate some bits to encode the thread number into the PK; in that case the 
clashes should be nearly impossible.

For the extremely rare cases when they would happen, I suppose it should be 
somewhat hairy: I would have to change the affected PK to the current 
timestamp, and go through all the relationships to change the appropriate 
FKs... ick, that could get ugly, but I suppose it should happen _extremely_ 
rarely?

This current problem was extremely rare too, no?  :-)


(I know EOF can do better itself, but not with INTEGER PKs.)

That said, it might be better to let EOF do its best with optimistic locking, 
and if the PK clash -- again very improbable, but possible -- happens, send to 
the DB SET UNIQUE FOR offending table(its PK), and retry?

That would work too, but I am surprised that this is needed.  I really am not 
grasping what could have happened here.

Chuck


Thanks a big lot,
OC

On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
Samuel,
On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
Sequence generation for concurrent access may be tricky to do right, especially 
if the system is tuned for performance. There is a confrontation between the 
sequence integrity and the concurrent access. It is easy to use a sequence 
table wrong...
Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
to be reasonably well tested.
Also, I do not use a separate sequence table; my approach is much simpler: 
there is a sequential attribute guarded by a UNIQUE constraint, and the saving 
code simply detects that this constraint failed, and if so, increments the 
value of the attribute and tries again.
That is far from efficient in case there is a lot of clashes, but they happen 
to be reasonably rare; and it should be pretty fail-proof, or am I overlooking 
something of importance?
OC, which database are you using
FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty old 
one.
Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
Java 1.6.0_65 / Mac OS X 10.6.8.
with which connection settings for isolation and locking
Read-committed, optimistic.
and how your primary key are generated ?
Standard untouched EOF approach. All my PKs are INTEGERs.
Thanks a lot,
OC
On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
You really do come up with the absolute best problems!  :-)
www.youtube.com/watch?v=otCpCn0l4Wo
My guess is that somehow the database failed to record the update to the 
sequence number.  Every time you ran it after that, it generated the used one 
and then failed. When you added logging, something that you added caused two to 
get generated with the first not used.  Then everything worked again.
Except... sequences should be generated outside of the ACID transaction, so I can't see how this could happen once, let alone multiple times.
Chuck
On 2015-05-12, 1:56 PM, OC wrote:
Hello there,
my application, among others, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like
===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
try {

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-19 Thread Chuck Hill
On 2015-05-17, 3:19 AM, ocs.cz wrote:

Chuck,

On 14 5 2015, at 2:22 am, Chuck Hill ch...@gevityinc.com wrote:
FrontBase will return the sequence number if the transaction is rolled back, 
but I am pretty sure that EOF does a commit immediately after selecting for a 
PK.
It is possible that somehow the commit after the PK select failed and the 
exception got eaten, I suppose.  That seems a bit far fetched.

Hmmm here I might possibly see a way to prevent the problem in future: 
correct me please if I am wrong, but I understand permanentGlobalID causes this 
generation (and commit), right?

IIRC, the generation and commit are from EOF.  permanentGlobalID calls into 
that code to get the ID.



Well then, what if I, at the moment any EO gets inserted into an EC, immediately called permanentGlobalID for it?

The original problem was caused, as best I can tell, by FrontBase vending the same sequence number twice.  Doing what you describe won't change or avoid that underlying problem.  It will just change when it happens.


Chuck



Unless I am overlooking something, it should get, commit and assign a safe PK 
for the EO. Later, when the EO gets saved, no PK clash would be possible.

About the only drawback I can see is that when generating lots of new EOs, 
there would be many unnecessary roundtrips to the DB and it would be sloow. But 
normally I create at worst tens (normally just a couple) of EOs inside a r/r 
loop, and batch imports etc. need to be optimised separately anyway.

Might this be a solution? Or am I overlooking something of importance, as so 
often?

Thanks a lot,
OC




Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-19 Thread ocs.cz
Chuck,

 On 19 5 2015, at 11:13 pm, Chuck Hill ch...@gevityinc.com wrote:
 
 Well then, what if I, at the moment any EO gets inserted into an EC, immediately called permanentGlobalID for it?
 
 The original problem was caused, as best I can tell, by FrontBase vending the same sequence number twice.

Which itself was (probably, as far as I can say) caused by an exception during a transaction (namely, an exception triggered by a UNIQUE constraint) and a rollback.

As always, I might be overlooking something of importance, but it seemed to me that a simple permanentGlobalID-triggered get-me-next-PK roundtrip would never ever cause an exception. The UNIQUE thing of course might cause an exception essentially any time -- *but*, when this happens, the PK will already be assigned, committed and safe. Thus it seemed to me...

 Doing what you describe won’t change or avoid that underlying problem.  It 
 will just change when it happens.

... it actually would avoid the problem -- by separating “a transaction during which a PK gets assigned” from “a transaction which might be aborted by the UNIQUE exception”.

But of course I might be missing some important point?

Thanks a big lot for all the help,
OC



Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-19 Thread Chuck Hill
Turn on SQL logging and see what happens.  I don't recall if the PKs are generated in their own transaction or as part of the saveChanges() transaction.  If they are generated and committed in their own transaction (which is my guess), then your proposal won't help.

Chuck
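
(For reference, a minimal sketch of one way to turn that logging on at startup, assuming the standard com.webobjects.foundation.NSLog API; the debug groups chosen here are just the usual ones for adaptor/SQL output and can be adjusted:)

===
import com.webobjects.foundation.NSLog;

// Somewhere early in application startup, e.g. the Application constructor:
// log the SQL that EOF sends, including the PK "SELECT UNIQUE" statements, so the
// transaction boundaries around PK generation become visible in the log.
NSLog.debug.setAllowedDebugLevel(NSLog.DebugLevelInformational);
NSLog.allowDebugLoggingForGroups(NSLog.DebugGroupSQLGeneration
        | NSLog.DebugGroupDatabaseAccess
        | NSLog.DebugGroupEnterpriseObjects);
===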


On 2015-05-19, 2:24 PM, ocs.cz wrote:

Chuck,

On 19 5 2015, at 11:13 pm, Chuck Hill ch...@gevityinc.com wrote:
Well then, what if I, at the moment any EO gets inserted into an EC, immediately called permanentGlobalID for it?
The original problem was caused, as best I can tell, by FrontBase vending the same sequence number twice.

Which itself was (probably, as far as I can say) caused by an exception during a transaction (namely, an exception triggered by a UNIQUE constraint) and a rollback.

As always, I might be overlooking something of importance, but it seemed to me that a simple permanentGlobalID-triggered get-me-next-PK roundtrip would never ever cause an exception. The UNIQUE thing of course might cause an exception essentially any time -- *but*, when this happens, the PK will already be assigned, committed and safe. Thus it seemed to me...

Doing what you describe won't change or avoid that underlying problem.  It will 
just change when it happens.

... it actually would avoid the problem -- by separating a transaction during 
which a PK gets assigned from a transaction which might be aborted by the 
UNIQUE exception.

But of course I might be missing some important point?

Thanks a big lot for all the help,
OC



Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-17 Thread ocs.cz
Samuel,

 On 14 5 2015, at 2:12 am, Samuel Pelletier sam...@samkar.com wrote:
 I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.

Right; but does it potentially mess up PK generation? I thought it should not, 
but of course, as so often, I can be wrong.

 I think that switching to pessimistic locking will help this situation

Originally, we used pessimistic locking, but we found it gets a bit slow. Correct me please if I am overlooking something of importance, but as far as I know (and by my testing) it seems that when locking is pessimistic, even _reads_ are locked out and wait till the first transaction is committed.

We sort of need that anybody can read, and that any locking and induced delays happen only for those who edit, leaving readers unaffected.

 for a multiple instance setup

Actually, the problem did happen in single-instance, but with concurrent 
requests on.

 the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.

Hmmm I wonder.

What if I generated all the PKs myself programmatically, using something like a 
milli- or even microsecond timestamp?

That should make clashes possible, but extremely improbable. I might even 
dedicate some bits to encode the thread number into the PK; in that case the 
clashes should be nearly impossible.
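
(Purely as an illustration of that idea -- a hypothetical helper, assuming the PK column could hold a 64-bit value; the bit split is arbitrary:)

===
// Hypothetical sketch: millisecond-timestamp PK with a few low bits reserved for a
// "thread number", so two threads generating a PK in the same millisecond still get
// distinct values. Clashes remain improbable rather than impossible, as noted above.
public final class TimestampPKGenerator {
    private static final int THREAD_BITS = 6;                  // room for 64 thread numbers
    private static final long THREAD_MASK = (1L << THREAD_BITS) - 1;

    public static long nextPK(int threadNumber) {
        long millis = System.currentTimeMillis();
        return (millis << THREAD_BITS) | (threadNumber & THREAD_MASK);
    }
}
===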

For the extremely rare cases when they would happen, I suppose it should be 
somewhat hairy: I would have to change the affected PK to the current 
timestamp, and go through all the relationships to change the appropriate 
FKs... ick, that could get ugly, but I suppose it should happen _extremely_ 
rarely?

(I know EOF can do better itself, but not with INTEGER PKs.)

That said, it might be better to let EOF do its best with optimistic locking, 
and if the PK clash -- again very improbable, but possible -- happens, send to 
the DB SET UNIQUE FOR offending table(its PK), and retry?
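
(Again only a rough sketch of that recovery path, assuming FrontBase's SET UNIQUE statement and a plain JDBC connection; the table and column names are placeholders:)

===
// Hypothetical sketch: after a PK-collision failure, push FrontBase's internal sequence
// for the offending table past the highest PK actually stored, then let the caller retry
// saveChanges(). Assumes the "SET UNIQUE = <value> FOR <table>" FrontBase statement.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class FrontBaseSequenceRepair {
    public static void resetSequence(Connection con, String table, String pkColumn) throws SQLException {
        Statement st = con.createStatement();
        try {
            long maxPK = 0;
            ResultSet rs = st.executeQuery("SELECT MAX(\"" + pkColumn + "\") FROM \"" + table + "\"");
            if (rs.next()) maxPK = rs.getLong(1);
            rs.close();
            // Move the sequence just past the highest key present, so the next PK fetch is safe.
            st.executeUpdate("SET UNIQUE = " + (maxPK + 1) + " FOR \"" + table + "\"");
            con.commit();
        } finally {
            st.close();
        }
    }
}
===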

Thanks a big lot,
OC

 On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
 
 Samuel,
 
 On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
 
 Sequence generation for concurrent access may be tricky to do right, 
 especially if the system is tuned for performance. There is a confrontation 
 between the sequence integrity and the concurrent access. It is easy to use 
 a sequence table wrong...
 
 Definitely, and I am far from sure I am doing it right. Nevertheless it 
 seems to be reasonably well tested.
 
 Also, I do not use a separate sequence table; my approach is much simpler: 
 there is a sequential attribute guarded by a UNIQUE constraint, and the 
 saving code simply detects that this constraint failed, and if so, 
 increments the value of the attribute and tries again.
 
 That is far from efficient in case there is a lot of clashes, but they 
 happen to be reasonably rare; and it should be pretty fail-proof, or am I 
 overlooking something of importance?
 
 OC, which database are you using
 
 FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty 
 old one.
 
 Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT 
 / Java 1.6.0_65 / Mac OS X 10.6.8.
 
 with which connection settings for isolation and locking
 
 Read-committed, optimistic.
 
 and how your primary key are generated ?
 
 Standard untouched EOF approach. All my PKs are INTEGERs.
 
 Thanks a lot,
 OC
 
 On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used 
 one and then failed. When you added logging, something that you added 
 caused two to get generated with the first not used.  Then everything 
 worked again.
 
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
 
 Chuck
 
 On 2015-05-12, 1:56 PM, OC wrote:
 
 Hello there,
 
 my application, among others, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to 
 something like
 
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
 try {
   ec.saveChanges()
 } catch (exception) {
   // EC might contain an object which needs a sequentially numbered 
 attribute
   // it should be reliable through all instances
   // there is a DB unique constraint to ensure that
   // the constraint exception is detected and served essentially this way:
   if 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-17 Thread ocs.cz
Chuck,

 On 14 5 2015, at 2:22 am, Chuck Hill ch...@gevityinc.com wrote:
 
 FrontBase will “return” the sequence number if the transaction is rolled 
 back, but I am pretty sure that EOF does a commit immediately after selecting 
 for a PK.
 
 It is possible that somehow the commit after the PK select failed and the 
 exception got eaten, I suppose.  That seems a bit far fetched.

Hmmm here I might possibly see a way to prevent the problem in future: 
correct me please if I am wrong, but I understand permanentGlobalID causes this 
generation (and commit), right?

Well then, what if I, at the moment any EO gets inserted into an EC, immediately called permanentGlobalID for it?

Unless I am overlooking something, it should get, commit and assign a safe PK 
for the EO. Later, when the EO gets saved, no PK clash would be possible.

About the only drawback I can see is that when generating lots of new EOs, 
there would be many unnecessary roundtrips to the DB and it would be sloow. But 
normally I create at worst tens (normally just a couple) of EOs inside a r/r 
loop, and batch imports etc. need to be optimised separately anyway.
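
(A minimal sketch of what that could look like, assuming DBAudit extends Project Wonder's ERXGenericRecord, which is what provides permanentGlobalID(); ec and DBAudit are the names from the code quoted earlier in the thread:)

===
DBAudit audit = new DBAudit();
ec.insertObject(audit);
// Trigger EOF's "fetch the next PK" roundtrip right away, in its own small transaction,
// instead of leaving it to saveChanges(); whether that actually sidesteps the duplicate-PK
// behaviour seen here is exactly what the rest of this thread debates.
audit.permanentGlobalID();
===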

Might this be a solution? Or am I overlooking something of importance, as so 
often?

Thanks a lot,
OC




Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-17 Thread ocs.cz
Samuel,

 On 14 5 2015, at 2:30 pm, Samuel Pelletier sam...@samkar.com wrote:
 
 I just tested with my local FB 5.2.14 and it behaves like Oracle: the current transaction state or settings do not affect the unique sequence; it always increments and returns the next value.
 
 OC, I suggest you upgrade your FB version, it is very easy, just update the 
 binaries, no need to migrate the data.

That looks like the easiest cure :)

Myself, I use 7.2.18, but the admins at the server have their own priorities 
(notably they still stick with Java 6, which already caused a lot of problems) 
-- anyway, hopefully they won't mind upgrading; I'll send them the suggestion, 
thanks a lot!

OC




Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-16 Thread Samuel Pelletier
Chuck,

I used FrontBase Manager with 2 connections with these settings and discrete commits.
set transaction isolation level read committed, locking optimistic, read write;

I tried different rollback scenarios but the sequence was always correct. I 
used FB 5.2.14 on OS X with a database created on the same version.

Samuel
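
(For anyone who wants to script the same check outside FrontBase Manager, a rough JDBC sketch; the driver class, URL, credentials and table name T are assumptions to adapt:)

===
// Hypothetical test: pull a value from a table's unique sequence on two connections,
// roll one back, and see whether the next SELECT UNIQUE hands the rolled-back value out again.
import java.sql.*;

public class SequenceRollbackTest {
    static long selectUnique(Connection con, String table) throws SQLException {
        Statement st = con.createStatement();
        try {
            ResultSet rs = st.executeQuery("SELECT UNIQUE FROM \"" + table + "\"");
            rs.next();
            return rs.getLong(1);
        } finally {
            st.close();
        }
    }

    public static void main(String[] args) throws Exception {
        Class.forName("com.frontbase.jdbc.FBJDriver");   // assumed FrontBase JDBC driver class
        Connection a = DriverManager.getConnection("jdbc:FrontBase://localhost/TestDB", "_system", "");
        Connection b = DriverManager.getConnection("jdbc:FrontBase://localhost/TestDB", "_system", "");
        a.setAutoCommit(false);
        b.setAutoCommit(false);

        System.out.println("A got " + selectUnique(a, "T"));
        System.out.println("B got " + selectUnique(b, "T"));
        a.rollback();                                     // is A's value handed out again afterwards?
        System.out.println("Next value after rollback: " + selectUnique(b, "T"));
        b.commit();
        a.close();
        b.close();
    }
}
===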

 On 2015-05-14 at 16:21, Chuck Hill ch...@gevityinc.com wrote:
 
 Hi Samuel,
 
 What did you do to test FrontBase?  I tried this in FrontBaseManager with “discrete commit”, and rolling back the transaction caused the already-generated sequence numbers to be generated again in the next transaction.  If you commit (or auto commit) then it behaves as you describe.  It also does that for EOF, which does a commit after the “SELECT UNIQUE FROM table
 
 
 Chuck
 
 
 On 2015-05-14, 5:30 AM, Samuel Pelletier wrote:
 
 In FB, they used to be inside the transaction (if I remember correctly), and with Read Committed locking optimistic, the server could return the same value to both connections if the second select overlaps the first.
 
 I just tested with my local FB 5.2.14 and it behaves like Oracle: the current transaction state or settings do not affect the unique sequence; it always increments and returns the next value.
 
 OC, I suggest you upgrade your FB version, it is very easy, just update the 
 binaries, no need to migrate the data.
 
 Samuel
 
 On 2015-05-13 at 20:22, Chuck Hill ch...@gevityinc.com wrote:
 
 It depends on the database.  The Oracle sequence generation is outside of 
 the ACID transaction and is not affected by transactions or commits.  Once 
 Oracle has returned a number from a sequence it won’t do so again* 
 regardless of any transactions getting rolled back or committed.   
 
 * assuming that the sequence is not configured to CYCLE. 
 
 FrontBase will “return” the sequence number if the transaction is rolled 
 back, but I am pretty sure that EOF does a commit immediately after 
 selecting for a PK.
 
 It is possible that somehow the commit after the PK select failed and the 
 exception got eaten, I suppose.  That seems a bit far fetched.
 
 Chuck
 
 On 2015-05-13, 5:12 PM, Samuel Pelletier wrote:
 
 OC,
 
 I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.
 
 I think that switching to pessimistic locking will help this situation. For a multiple-instance setup, the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.
 
 This applies to all databases to my knowledge; I just googled and it seems Oracle behaves the same way.
 
 Samuel
 
 
 On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
 Samuel,
 On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
 Sequence generation for concurrent access may be tricky to do right, 
 especially if the system is tuned for performance. There is a confrontation 
 between the sequence integrity and the concurrent access. It is easy to use 
 a sequence table wrong...
 Definitely, and I am far from sure I am doing it right. Nevertheless it 
 seems to be reasonably well tested.
 Also, I do not use a separate sequence table; my approach is much simpler: 
 there is a sequential attribute guarded by a UNIQUE constraint, and the 
 saving code simply detects that this constraint failed, and if so, 
 increments the value of the attribute and tries again.
 That is far from efficient in case there is a lot of clashes, but they 
 happen to be reasonably rare; and it should be pretty fail-proof, or am I 
 overlooking something of importance?
 OC, which database are you using
 FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty 
 old one.
 Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT 
 / Java 1.6.0_65 / Mac OS X 10.6.8.
 with which connection settings for isolation and locking
 Read-committed, optimistic.
 and how your primary key are generated ?
 Standard untouched EOF approach. All my PKs are INTEGERs.
 Thanks a lot,
 OC
 On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used 
 one and then failed. When you added logging, something that you added caused 
 two to get generated with the first not used.  Then everything worked again.
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
 Chuck
 On 2015-05-12, 1:56 PM, OC wrote:
 Hello there,
 my application, 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-14 Thread Samuel Pelletier
In FB, they used to be inside the transaction (if I remember correctly), and with Read Committed locking optimistic, the server could return the same value to both connections if the second select overlaps the first.

I just tested with my local FB 5.2.14 and it behaves like Oracle: the current transaction state or settings do not affect the unique sequence; it always increments and returns the next value.

OC, I suggest you upgrade your FB version, it is very easy, just update the 
binaries, no need to migrate the data.

Samuel

 On 2015-05-13 at 20:22, Chuck Hill ch...@gevityinc.com wrote:
 
 It depends on the database.  The Oracle sequence generation is outside of the 
 ACID transaction and is not affected by transactions or commits.  Once Oracle 
 has returned a number from a sequence it won’t do so again* regardless of any 
 transactions getting rolled back or committed.   
 
 * assuming that the sequence is not configured to CYCLE. 
 
 FrontBase will “return” the sequence number if the transaction is rolled 
 back, but I am pretty sure that EOF does a commit immediately after selecting 
 for a PK.
 
 It is possible that somehow the commit after the PK select failed and the 
 exception got eaten, I suppose.  That seems a bit far fetched.
 
 Chuck
 
 On 2015-05-13, 5:12 PM, Samuel Pelletier wrote:
 
 OC,
 
 I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.
 
 I think that switching to pessimistic locking will help this situation. For a multiple-instance setup, the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.
 
 This applies to all databases to my knowledge; I just googled and it seems Oracle behaves the same way.
 
 Samuel
 
 
 On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
 Samuel,
 On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
 Sequence generation for concurrent access may be tricky to do right, 
 especially if the system is tuned for performance. There is a confrontation 
 between the sequence integrity and the concurrent access. It is easy to use a 
 sequence table wrong...
 Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
 to be reasonably well tested.
 Also, I do not use a separate sequence table; my approach is much simpler: 
 there is a sequential attribute guarded by a UNIQUE constraint, and the 
 saving code simply detects that this constraint failed, and if so, increments 
 the value of the attribute and tries again.
 That is far from efficient in case there is a lot of clashes, but they happen 
 to be reasonably rare; and it should be pretty fail-proof, or am I 
 overlooking something of importance?
 OC, which database are you using
 FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty 
 old one.
 Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
 Java 1.6.0_65 / Mac OS X 10.6.8.
 with which connection settings for isolation and locking
 Read-committed, optimistic.
 and how your primary key are generated ?
 Standard untouched EOF approach. All my PKs are INTEGERs.
 Thanks a lot,
 OC
 On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used one 
 and then failed. When you added logging, something that you added caused two 
 to get generated with the first not used.  Then everything worked again.
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
 Chuck
 On 2015-05-12, 1:56 PM, OC wrote:
 Hello there,
 my application, among others, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to something 
 like
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
   try {
 ec.saveChanges()
   } catch (exception) {
 // EC might contain an object which needs a sequentially numbered 
 attribute
 // it should be reliable through all instances
 // there is a DB unique constraint to ensure that
 // the constraint exception is detected and served essentially this way:
 if (exceptionIsNotUniqueConstraint(exception)) throw exception
 SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
 culprit.theSequentialNumber++
 // and try again...
   }
 }
 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-14 Thread Chuck Hill
Hi Samuel,

What did you do to test FrontBase?  I tried this in FrontBaseManager with “discrete commit”, and rolling back the transaction caused the already-generated sequence numbers to be generated again in the next transaction.  If you commit (or auto commit) then it behaves as you describe.  It also does that for EOF, which does a commit after the “SELECT UNIQUE FROM table


Chuck


On 2015-05-14, 5:30 AM, Samuel Pelletier wrote:

In FB, they used to be inside the transaction (if I remember correctly), and with Read Committed locking optimistic, the server could return the same value to both connections if the second select overlaps the first.

I just tested with my local FB 5.2.14 and it behaves like Oracle: the current transaction state or settings do not affect the unique sequence; it always increments and returns the next value.

OC, I suggest you upgrade your FB version, it is very easy, just update the 
binaries, no need to migrate the data.

Samuel

On 2015-05-13 at 20:22, Chuck Hill ch...@gevityinc.com wrote:

It depends on the database.  The Oracle sequence generation is outside of the 
ACID transaction and is not affected by transactions or commits.  Once Oracle 
has returned a number from a sequence it won’t do so again* regardless of any 
transactions getting rolled back or committed.

* assuming that the sequence is not configured to CYCLE.

FrontBase will “return” the sequence number if the transaction is rolled back, 
but I am pretty sure that EOF does a commit immediately after selecting for a 
PK.

It is possible that somehow the commit after the PK select failed and the 
exception got eaten, I suppose.  That seems a bit far fetched.

Chuck

On 2015-05-13, 5:12 PM, Samuel Pelletier wrote:

OC,

I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.

I think that switching to pessimistic locking will help this situation. For a multiple-instance setup, the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.

This applies to all databases to my knowledge; I just googled and it seems Oracle behaves the same way.

Samuel


On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
Samuel,
On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
Sequence generation for concurrent access may be tricky to do right, especially 
if the system is tuned for performance. There is a confrontation between the 
sequence integrity and the concurrent access. It is easy to use a sequence 
table wrong...
Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
to be reasonably well tested.
Also, I do not use a separate sequence table; my approach is much simpler: 
there is a sequential attribute guarded by a UNIQUE constraint, and the saving 
code simply detects that this constraint failed, and if so, increments the 
value of the attribute and tries again.
That is far from efficient in case there is a lot of clashes, but they happen 
to be reasonably rare; and it should be pretty fail-proof, or am I overlooking 
something of importance?
OC, which database are you using
FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty old 
one.
Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
Java 1.6.0_65 / Mac OS X 10.6.8.
with which connection settings for isolation and locking
Read-committed, optimistic.
and how your primary key are generated ?
Standard untouched EOF approach. All my PKs are INTEGERs.
Thanks a lot,
OC
On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
You really do come up with the absolute best problems!  :-)
www.youtube.com/watch?v=otCpCn0l4Wo
My guess is that somehow the database failed to record the update to the 
sequence number.  Every time you ran it after that, it generated the used one 
and then failed. When you added logging, something that you added caused two to 
get generated with the first not used.  Then everything worked again.
Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
Chuck
On 2015-05-12, 1:56 PM, OC wrote:
Hello there,
my application, among others, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like
===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
  try {
ec.saveChanges()
  } catch (exception) {
// EC might contain an object which needs a sequentially numbered attribute
   

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-13 Thread Samuel Pelletier
OC,

I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.

I think that switching to pessimistic locking will help this situation. For a multiple-instance setup, the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.

This applies to all databases to my knowledge; I just googled and it seems Oracle behaves the same way.

Samuel


 On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
 
 Samuel,
 
 On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
 
 Sequence generation for concurrent access may be tricky to do right, 
 especially if the system is tuned for performance. There is a confrontation 
 between the sequence integrity and the concurrent access. It is easy to use 
 a sequence table wrong...
 
 Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
 to be reasonably well tested.
 
 Also, I do not use a separate sequence table; my approach is much simpler: 
 there is a sequential attribute guarded by a UNIQUE constraint, and the 
 saving code simply detects that this constraint failed, and if so, increments 
 the value of the attribute and tries again.
 
 That is far from efficient in case there is a lot of clashes, but they happen 
 to be reasonably rare; and it should be pretty fail-proof, or am I 
 overlooking something of importance?
 
 OC, which database are you using
 
 FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty 
 old one.
 
 Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
 Java 1.6.0_65 / Mac OS X 10.6.8.
 
 with which connection settings for isolation and locking
 
 Read-committed, optimistic.
 
 and how your primary key are generated ?
 
 Standard untouched EOF approach. All my PKs are INTEGERs.
 
 Thanks a lot,
 OC
 
 On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used 
 one and then failed. When you added logging, something that you added 
 caused two to get generated with the first not used.  Then everything 
 worked again.
 
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
 
 Chuck
 
 On 2015-05-12, 1:56 PM, OC wrote:
 
 Hello there,
 
 my application, among others, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to 
 something like
 
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
  try {
ec.saveChanges()
  } catch (exception) {
// EC might contain an object which needs a sequentially numbered 
 attribute
// it should be reliable through all instances
// there is a DB unique constraint to ensure that
// the constraint exception is detected and served essentially this way:
if (exceptionIsNotUniqueConstraint(exception)) throw exception
SomeClass 
 culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
culprit.theSequentialNumber++
// and try again...
  }
 }
 ===
 
 It might be somewhat convoluted way to solve that (though I am afraid I 
 can't see any easier), but it worked for many months, about a zillion times 
 without the exception, sometimes with the exception and retry, always 
 without the slightest glitch.
 
 Then it so happened that
 
 - the EC indeed did contain an object with wrong (already occupied) 
 sequential number
 - a DBAudit with PK=1015164 was inserted
 - first time saveChanges did throw and the transaction was rolled back; the 
 second time (with incremented sequential number) it saved all right.
 
 So far so good, this did happen before often and never led to problems.
 
 This time though it did. The next time the above code was performed (no 
 sequentials, just the audit), the newly created audit was assigned _again_ 
 PK=1015164! Of course it failed. Well, we thought, perhaps there's some 
 ugly mess inside the EO stack; let's restart the application!
 
 After restart, the very first time the above code was called -- which is 
 pretty soon -- it happened again: regardless there was properly saved row 
 with PK=1015164 in the DB, EOF again assigned the same PK to the newly 
 created EO. I've tried it about five times (at first I did not believe my 
 eyes), it behaved consistently: restart, first time a DBAudit is created, 
 it gets PK=1015164 and saving (naturally) fails.
 
 Then I've 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-13 Thread Chuck Hill
It depends on the database.  The Oracle sequence generation is outside of the 
ACID transaction and is not affected by transactions or commits.  Once Oracle 
has returned a number from a sequence it won’t do so again* regardless of any 
transactions getting rolled back or committed.

* assuming that the sequence is not configured to CYCLE.

FrontBase will “return” the sequence number if the transaction is rolled back, 
but I am pretty sure that EOF does a commit immediately after selecting for a 
PK.

It is possible that somehow the commit after the PK select failed and the 
exception got eaten, I suppose.  That seems a bit far fetched.

Chuck

On 2015-05-13, 5:12 PM, Samuel Pelletier wrote:

OC,

I think your problem is with the locking. Optimistic locking does not lock anything; it checks on commit whether things have changed.

I think that switching to pessimistic locking will help this situation. For a multiple-instance setup, the sequence will be locked for the remaining transaction time. This will prevent other instances from obtaining primary keys for the remainder of the transaction, but will keep your primary key generator safe.

This applies to all databases to my knowledge; I just googled and it seems Oracle behaves the same way.

Samuel


On 2015-05-13 at 13:05, OC o...@ocs.cz wrote:
Samuel,
On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:
Sequence generation for concurrent access may be tricky to do right, especially 
if the system is tuned for performance. There is a confrontation between the 
sequence integrity and the concurrent access. It is easy to use a sequence 
table wrong...
Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
to be reasonably well tested.
Also, I do not use a separate sequence table; my approach is much simpler: 
there is a sequential attribute guarded by a UNIQUE constraint, and the saving 
code simply detects that this constraint failed, and if so, increments the 
value of the attribute and tries again.
That is far from efficient in case there is a lot of clashes, but they happen 
to be reasonably rare; and it should be pretty fail-proof, or am I overlooking 
something of importance?
OC, which database are you using
FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty old 
one.
Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
Java 1.6.0_65 / Mac OS X 10.6.8.
with which connection settings for isolation and locking
Read-committed, optimistic.
and how your primary key are generated ?
Standard untouched EOF approach. All my PKs are INTEGERs.
Thanks a lot,
OC
On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
You really do come up with the absolute best problems!  :-)
www.youtube.com/watch?v=otCpCn0l4Wo
My guess is that somehow the database failed to record the update to the 
sequence number.  Every time you ran it after that, it generated the used one 
and then failed. When you added logging, something that you added caused two to 
get generated with the first not used.  Then everything worked again.
Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
Chuck
On 2015-05-12, 1:56 PM, OC wrote:
Hello there,
my application, among others, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like
===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
  try {
ec.saveChanges()
  } catch (exception) {
// EC might contain an object which needs a sequentially numbered attribute
// it should be reliable through all instances
// there is a DB unique constraint to ensure that
// the constraint exception is detected and served essentially this way:
if (exceptionIsNotUniqueConstraint(exception)) throw exception
SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
culprit.theSequentialNumber++
// and try again...
  }
}
===
It might be somewhat convoluted way to solve that (though I am afraid I can't 
see any easier), but it worked for many months, about a zillion times without 
the exception, sometimes with the exception and retry, always without the 
slightest glitch.
Then it so happened that
- the EC indeed did contain an object with wrong (already occupied) sequential 
number
- a DBAudit with PK=1015164 was inserted
- first time saveChanges did throw and the transaction was rolled back; the 
second time (with incremented sequential number) it saved all right.
So far so good, this did happen before often and never led to problems.
This time though it did. The next time the above code was performed (no 
sequentials, 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-13 Thread OC
Chuck,

On 12. 5. 2015, at 23:09, Chuck Hill ch...@gevityinc.com wrote:

 You really do come up with the absolute best problems!  :-)  

Well it's great if one's best in something, is it not? ;)

 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used one 
 and then failed. When you added logging, something that you added caused two 
 to get generated with the first not used.  Then everything worked again.
 
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.

If that indeed was the culprit, is there a way to prevent the same problem if 
it occurs again?

Thanks,
OC

 On 2015-05-12, 1:56 PM, OC wrote:
 
 Hello there,
 
 my application, among others, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to something 
 like
 
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
   try {
 ec.saveChanges()
   } catch (exception) {
 // EC might contain an object which needs a sequentially numbered 
 attribute
 // it should be reliable through all instances
 // there is a DB unique constraint to ensure that
 // the constraint exception is detected and served essentially this way:
 if (exceptionIsNotUniqueConstraint(exception)) throw exception
 SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
 culprit.theSequentialNumber++
 // and try again...
   }
 }
 ===
 
 It might be somewhat convoluted way to solve that (though I am afraid I can't 
 see any easier), but it worked for many months, about a zillion times without 
 the exception, sometimes with the exception and retry, always without the 
 slightest glitch.
 
 Then it so happened that
 
 - the EC indeed did contain an object with wrong (already occupied) 
 sequential number
 - a DBAudit with PK=1015164 was inserted
 - first time saveChanges did throw and the transaction was rolled back; the 
 second time (with incremented sequential number) it saved all right.
 
 So far so good, this did happen before often and never led to problems.
 
 This time though it did. The next time the above code was performed (no 
 sequentials, just the audit), the newly created audit was assigned _again_ 
 PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
 mess inside the EO stack; let's restart the application!
 
 After restart, the very first time the above code was called -- which is 
 pretty soon -- it happened again: regardless there was properly saved row 
 with PK=1015164 in the DB, EOF again assigned the same PK to the newly 
 created EO. I've tried it about five times (at first I did not believe my 
 eyes), it behaved consistently: restart, first time a DBAudit is created, it 
 gets PK=1015164 and saving (naturally) fails.
 
 Then I've prepared a version with extended logs; for start, I've simply added 
 a log of audit.permanentGlobalID() just before saveChanges.
 
 It worked without a glitch, assigning (and logging) PK=1015165, and 
 (naturally) saving without a problem.
 
 I have immediately stopped the app, returned to the original version -- the 
 one which used to consistently fail -- and from that moment on, it worked all 
 right too, assigning PK=1015166, and then PK=1015167, and so forth, as it 
 should. Without a need to log audit.permanentGlobalID() first.
 
 Well. Gremlins?
 
 Might perhaps anyone have the slightest glitch of an idea what the b. h. might have been the culprit, and how to prevent the problem from occurring again in the future?
 
 Thanks a lot,
 OC
 
 



Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-13 Thread OC
Samuel,

On 12. 5. 2015, at 23:49, Samuel Pelletier sam...@samkar.com wrote:

 Sequence generation for concurrent access may be tricky to do right, 
 especially if the system is tuned for performance. There is a confrontation 
 between the sequence integrity and the concurrent access. It is easy to use a 
 sequence table wrong...

Definitely, and I am far from sure I am doing it right. Nevertheless it seems 
to be reasonably well tested.

Also, I do not use a separate sequence table; my approach is much simpler: 
there is a sequential attribute guarded by a UNIQUE constraint, and the saving 
code simply detects that this constraint failed, and if so, increments the 
value of the attribute and tries again.

That is far from efficient in case there is a lot of clashes, but they happen 
to be reasonably rare; and it should be pretty fail-proof, or am I overlooking 
something of importance?

 OC, which database are you using

FrontBase. Let me see the logs... at the server, there is 5.2.1g, a pretty old 
one.

Other sw versions: Groovy 2.3.8 / WebObjects 5.4.3 / ERExt's 6.1.3-SNAPSHOT / 
Java 1.6.0_65 / Mac OS X 10.6.8.

 with which connection settings for isolation and locking

Read-committed, optimistic.

 and how your primary key are generated ?

Standard untouched EOF approach. All my PKs are INTEGERs.

Thanks a lot,
OC

 On 2015-05-12 at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used 
 one and then failed. When you added logging, something that you added caused 
 two to get generated with the first not used.  Then everything worked again.
 
 Except… sequences should be generated outside of the ACID transaction, so I can’t see how this could happen once, let alone multiple times.
 
 Chuck
 
 On 2015-05-12, 1:56 PM, OC wrote:
 
 Hello there,
 
 my application, among others, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to 
 something like
 
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
   try {
 ec.saveChanges()
   } catch (exception) {
 // EC might contain an object which needs a sequentially numbered 
 attribute
 // it should be reliable through all instances
 // there is a DB unique constraint to ensure that
 // the constraint exception is detected and served essentially this way:
 if (exceptionIsNotUniqueConstraint(exception)) throw exception
 SomeClass 
 culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
 culprit.theSequentialNumber++
 // and try again...
   }
 }
 ===
 
 It might be somewhat convoluted way to solve that (though I am afraid I 
 can't see any easier), but it worked for many months, about a zillion times 
 without the exception, sometimes with the exception and retry, always 
 without the slightest glitch.
 
 Then it so happened that
 
 - the EC indeed did contain an object with wrong (already occupied) 
 sequential number
 - a DBAudit with PK=1015164 was inserted
 - first time saveChanges did throw and the transaction was rolled back; the 
 second time (with incremented sequential number) it saved all right.
 
 So far so good, this did happen before often and never led to problems.
 
 This time though it did. The next time the above code was performed (no 
 sequentials, just the audit), the newly created audit was assigned _again_ 
 PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
 mess inside the EO stack; let's restart the application!
 
 After restart, the very first time the above code was called -- which is 
 pretty soon -- it happened again: regardless there was properly saved row 
 with PK=1015164 in the DB, EOF again assigned the same PK to the newly 
 created EO. I've tried it about five times (at first I did not believe my 
 eyes), it behaved consistently: restart, first time a DBAudit is created, it 
 gets PK=1015164 and saving (naturally) fails.
 
 Then I've prepared a version with extended logs; for start, I've simply 
 added a log of audit.permanentGlobalID() just before saveChanges.
 
 It worked without a glitch, assigning (and logging) PK=1015165, and 
 (naturally) saving without a problem.
 
 I have immediately stopped the app, returned to the original version -- the 
 one which used to consistently fail -- and from that moment on, it worked 
 all right too, assigning PK=1015166, and then PK=1015167, and so forth, as 
 it should. Without a need to log audit.permanentGlobalID() first.
 
 Well. Gremlins?
 
 Might perhaps anyone have the slightest glitch of an idea what the b. h. 
 

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-13 Thread Chuck Hill
On 2015-05-13, 9:56 AM, OC wrote:

Chuck,

On 12. 5. 2015, at 23:09, Chuck Hill ch...@gevityinc.com wrote:

You really do come up with the absolute best problems!  :-)

Well it's great if one's best in something, is it not? ;)

True that!


My guess is that somehow the database failed to record the update to the 
sequence number.  Every time you ran it after that, it generated the used one 
and then failed. When you added logging, something that you added caused two to 
get generated with the first not used.  Then everything worked again.
Except... sequences should be generated outside of the ACID transaction, so I can't see how this could happen once, let alone multiple times.

If that indeed was the culprit, is there a way to prevent the same problem if 
it occurs again?

Assuming that you are using the FrontBase sequences, then no, I don't think so. 
 If you are using the EO_PK_TABLE approach then I am not sure.  Is replication 
involved?  I have had issues with that and the sequences before.

Chuck


Thanks,
OC

On 2015-05-12, 1:56 PM, OC wrote:
Hello there,
my application, among others, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like
===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
   try {
 ec.saveChanges()
   } catch (exception) {
 // EC might contain an object which needs a sequentially numbered attribute
 // it should be reliable through all instances
 // there is a DB unique constraint to ensure that
 // the constraint exception is detected and served essentially this way:
 if (exceptionIsNotUniqueConstraint(exception)) throw exception
 SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
 culprit.theSequentialNumber++
 // and try again...
   }
}
===
It might be a somewhat convoluted way to solve this (though I am afraid I can't 
see an easier one), but it worked for many months, about a zillion times without 
the exception, sometimes with the exception and a retry, always without the 
slightest glitch.
Then it so happened that
- the EC indeed did contain an object with a wrong (already occupied) sequential 
number
- a DBAudit with PK=1015164 was inserted
- the first time saveChanges threw and the transaction was rolled back; the 
second time (with the incremented sequential number) it saved all right.
So far so good; this had happened often before and never led to problems.
This time, though, it did. The next time the above code was performed (no 
sequentials, just the audit), the newly created audit was assigned _again_ 
PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
mess inside the EO stack; let's restart the application!
After restart, the very first time the above code was called -- which is pretty 
soon -- it happened again: even though there was a properly saved row with 
PK=1015164 in the DB, EOF again assigned the same PK to the newly created EO. 
I tried it about five times (at first I did not believe my eyes), and it behaved 
consistently: restart, the first time a DBAudit is created it gets PK=1015164, 
and saving (naturally) fails.
Then I prepared a version with extended logging; for a start, I simply added a 
log of audit.permanentGlobalID() just before saveChanges.
It worked without a glitch, assigning (and logging) PK=1015165, and (naturally) 
saving without a problem.
I immediately stopped the app and returned to the original version -- the one 
which used to consistently fail -- and from that moment on, it worked all right 
too, assigning PK=1015166, and then PK=1015167, and so forth, as it should. 
Without a need to log audit.permanentGlobalID() first.
Well. Gremlins?
Might anyone perhaps have the slightest glimmer of an idea what the b. h. the 
culprit might have been, and how to prevent the problem from occurring again in 
the future?
Thanks a lot,
OC

Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-12 Thread Chuck Hill
You really do come up with the absolute best problems!  :-)
www.youtube.com/watch?v=otCpCn0l4Wo

My guess is that somehow the database failed to record the update to the 
sequence number.  Every time you ran it after that, it generated the used one 
and then failed. When you added logging, something that you added caused two to 
get generated with the first not used.  Then everything worked again.

Except... sequences should be generated outside of the ACID transaction, so I 
can't see how this could happen once, let alone multiple times.
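If one wanted to check whether the database-side generator really does advance 
outside of any transaction, a rough diagnostic could ask the adaptor channel 
for a fresh primary key directly. This is only a sketch (and note that it 
consumes one key value each time it runs); the entity name is the one from the 
thread, and the plumbing is standard EOF access-layer calls:

===
// Rough diagnostic sketch only: pull a primary key straight from the adaptor
// channel, bypassing the editing context, to see what the DB-side generator
// would assign next. Each call consumes one key value.
import com.webobjects.eoaccess.EOAdaptorChannel;
import com.webobjects.eoaccess.EODatabaseContext;
import com.webobjects.eoaccess.EOEntity;
import com.webobjects.eoaccess.EOModelGroup;
import com.webobjects.eocontrol.EOEditingContext;
import com.webobjects.foundation.NSArray;
import com.webobjects.foundation.NSLog;

public static void logNextRawPK(EOEditingContext ec) {
    EOEntity entity = EOModelGroup.defaultGroup().entityNamed("DBAudit");
    EODatabaseContext dbc =
            EODatabaseContext.registeredDatabaseContextForModel(entity.model(), ec);
    dbc.lock();
    try {
        EOAdaptorChannel channel = dbc.availableChannel().adaptorChannel();
        if (!channel.isOpen()) {
            channel.openChannel();
        }
        NSArray keys = channel.primaryKeysForNewRowsWithEntity(1, entity);
        NSLog.out.appendln("Next PK the adaptor would assign for DBAudit: " + keys);
    }
    finally {
        dbc.unlock();
    }
}
===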

Chuck

On 2015-05-12, 1:56 PM, OC wrote:

Hello there,

my application, among other things, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like

===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
  try {
ec.saveChanges()
  } catch (exception) {
// EC might contain an object which needs a sequentially numbered attribute
// it should be reliable through all instances
// there is a DB unique constraint to ensure that
// the constraint exception is detected and served essentially this way:
if (exceptionIsNotUniqueConstraint(exception)) throw exception
SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
culprit.theSequentialNumber++
// and try again...
  }
}
===

It might be a somewhat convoluted way to solve this (though I am afraid I can't 
see an easier one), but it worked for many months, about a zillion times without 
the exception, sometimes with the exception and a retry, always without the 
slightest glitch.

Then it so happened that

- the EC indeed did contain an object with a wrong (already occupied) sequential 
number
- a DBAudit with PK=1015164 was inserted
- the first time saveChanges threw and the transaction was rolled back; the 
second time (with the incremented sequential number) it saved all right.

So far so good; this had happened often before and never led to problems.

This time, though, it did. The next time the above code was performed (no 
sequentials, just the audit), the newly created audit was assigned _again_ 
PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
mess inside the EO stack; let's restart the application!

After restart, the very first time the above code was called -- which is pretty 
soon -- it happened again: even though there was a properly saved row with 
PK=1015164 in the DB, EOF again assigned the same PK to the newly created EO. 
I tried it about five times (at first I did not believe my eyes), and it behaved 
consistently: restart, the first time a DBAudit is created it gets PK=1015164, 
and saving (naturally) fails.

Then I prepared a version with extended logging; for a start, I simply added a 
log of audit.permanentGlobalID() just before saveChanges.

It worked without a glitch, assigning (and logging) PK=1015165, and (naturally) 
saving without a problem.

I immediately stopped the app and returned to the original version -- the one 
which used to consistently fail -- and from that moment on, it worked all right 
too, assigning PK=1015166, and then PK=1015167, and so forth, as it should. 
Without a need to log audit.permanentGlobalID() first.

Well. Gremlins?

Might anyone perhaps have the slightest glimmer of an idea what the b. h. the 
culprit might have been, and how to prevent the problem from occurring again in 
the future?

Thanks a lot,
OC



Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-12 Thread OC
Hello there,

my application, among other things, generates and stores audit records. The 
appropriate code is comparatively straightforward; it boils down to something 
like

===
... ec might contain unsaved objects at this moment ...
DBAudit audit=new DBAudit()
ec.insertObject(audit)
audit.takeValuesFromDictionary(... couple of plain attributes ...)
for (;;) { // see below the specific situation which causes a retry
  try {
ec.saveChanges()
  } catch (exception) {
// EC might contain an object which needs a sequentially numbered attribute
// it should be reliable through all instances
// there is a DB unique constraint to ensure that
// the constraint exception is detected and served essentially this way:
if (exceptionIsNotUniqueConstraint(exception)) throw exception
SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
culprit.theSequentialNumber++
// and try again...
  }
} 
===

It might be a somewhat convoluted way to solve this (though I am afraid I can't 
see an easier one), but it worked for many months, about a zillion times without 
the exception, sometimes with the exception and a retry, always without the 
slightest glitch.
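For readers skimming the pseudocode above, a fleshed-out Java sketch of the 
same retry loop might look roughly like the following. The two helper methods 
and the SomeClass entity are the ones named in the pseudocode and are not 
defined here; the accessor names and the bounded retry count are added 
assumptions, not part of the original code.

===
// Hedged sketch only: same structure as the pseudocode above, with a retry cap
// added so a persistent clash cannot loop forever. The two helpers and the
// SomeClass entity are assumed to exist as in the original code.
import com.webobjects.eocontrol.EOEditingContext;

private static final int MAX_RETRIES = 10;   // assumption; the original loops forever

void saveWithSequentialRetry(EOEditingContext ec) {
    for (int attempt = 0; ; attempt++) {
        try {
            ec.saveChanges();
            return;                                        // saved fine, done
        }
        catch (RuntimeException exception) {
            // Anything that is not a unique-constraint violation is re-thrown as-is.
            if (exceptionIsNotUniqueConstraint(exception)) throw exception;
            if (attempt >= MAX_RETRIES) throw exception;   // give up rather than spin
            // Bump the sequential number of the object that caused the clash and
            // let the loop call saveChanges() again; the EC keeps its changes
            // after a failed save, so nothing needs to be re-inserted.
            SomeClass culprit = findTheObjectWhichCausedTheUniqueException(ec, exception);
            culprit.setTheSequentialNumber(culprit.theSequentialNumber() + 1);
        }
    }
}
===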

Then it so happened that

- the EC indeed did contain an object with a wrong (already occupied) sequential 
number
- a DBAudit with PK=1015164 was inserted
- the first time saveChanges threw and the transaction was rolled back; the 
second time (with the incremented sequential number) it saved all right.

So far so good; this had happened often before and never led to problems.

This time, though, it did. The next time the above code was performed (no 
sequentials, just the audit), the newly created audit was assigned _again_ 
PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
mess inside the EO stack; let's restart the application!

After restart, the very first time the above code was called -- which is pretty 
soon -- it happened again: even though there was a properly saved row with 
PK=1015164 in the DB, EOF again assigned the same PK to the newly created EO. 
I tried it about five times (at first I did not believe my eyes), and it behaved 
consistently: restart, the first time a DBAudit is created it gets PK=1015164, 
and saving (naturally) fails.

Then I prepared a version with extended logging; for a start, I simply added a 
log of audit.permanentGlobalID() just before saveChanges.

It worked without a glitch, assigning (and logging) PK=1015165, and (naturally) 
saving without a problem.

I immediately stopped the app and returned to the original version -- the one 
which used to consistently fail -- and from that moment on, it worked all right 
too, assigning PK=1015166, and then PK=1015167, and so forth, as it should. 
Without a need to log audit.permanentGlobalID() first.

Well. Gremlins?

Might anyone perhaps have the slightest glimmer of an idea what the b. h. the 
culprit might have been, and how to prevent the problem from occurring again in 
the future?

Thanks a lot,
OC



Re: Back with weird problems: PK generation keeps generating same PK... up to a moment.

2015-05-12 Thread Samuel Pelletier
Sequence generation for concurrent access may be tricky to get right, especially 
if the system is tuned for performance. There is a tension between sequence 
integrity and concurrent access. It is easy to use a sequence table wrong...
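To make the "easy to use wrong" point concrete, here is a minimal, purely 
illustrative JDBC sketch of the usual safe pattern: the read and the increment 
of the sequence row happen inside one short transaction while the row is 
locked. The table and column names are made up, and the FOR UPDATE / locking 
syntax differs between databases.

===
// Illustrative only: allocate the next value from a hypothetical SEQUENCE_TABLE
// by locking its row for the duration of a short, dedicated transaction.
// Syntax such as FOR UPDATE differs between databases; adjust accordingly.
import java.sql.*;

public static long nextValue(Connection con, String sequenceName) throws SQLException {
    boolean oldAutoCommit = con.getAutoCommit();
    con.setAutoCommit(false);
    try (PreparedStatement select = con.prepareStatement(
             "SELECT NEXT_VALUE FROM SEQUENCE_TABLE WHERE NAME = ? FOR UPDATE");
         PreparedStatement update = con.prepareStatement(
             "UPDATE SEQUENCE_TABLE SET NEXT_VALUE = ? WHERE NAME = ?")) {
        select.setString(1, sequenceName);
        ResultSet rs = select.executeQuery();
        if (!rs.next()) {
            throw new SQLException("No such sequence: " + sequenceName);
        }
        long value = rs.getLong(1);            // the value we hand out
        update.setLong(1, value + 1);          // and immediately reserve the next one
        update.setString(2, sequenceName);
        update.executeUpdate();
        con.commit();                          // nobody else ever saw the old value
        return value;
    } catch (SQLException e) {
        con.rollback();
        throw e;
    } finally {
        con.setAutoCommit(oldAutoCommit);
    }
}
===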

OC, which database are you using, with which connection settings for isolation 
and locking, and how are your primary keys generated?

Samuel

 On 2015-05-12, at 17:09, Chuck Hill ch...@gevityinc.com wrote:
 
 You really do come up with the absolute best problems!  :-)  
 www.youtube.com/watch?v=otCpCn0l4Wo
 
 My guess is that somehow the database failed to record the update to the 
 sequence number.  Every time you ran it after that, it generated the used one 
 and then failed. When you added logging, something that you added caused two 
 to get generated with the first not used.  Then everything worked again.
 
 Except… sequences should be generated outside of the ACID transaction, so I 
 can’t see how this could happen once, let alone multiple times.
 
 Chuck
 
 On 2015-05-12, 1:56 PM, OC wrote:
 
 Hello there,
 
 my application, among other things, generates and stores audit records. The 
 appropriate code is comparatively straightforward; it boils down to something 
 like
 
 ===
 ... ec might contain unsaved objects at this moment ...
 DBAudit audit=new DBAudit()
 ec.insertObject(audit)
 audit.takeValuesFromDictionary(... couple of plain attributes ...)
 for (;;) { // see below the specific situation which causes a retry
   try {
 ec.saveChanges()
   } catch (exception) {
 // EC might contain an object which needs a sequentially numbered 
 attribute
 // it should be reliable through all instances
 // there is a DB unique constraint to ensure that
 // the constraint exception is detected and served essentially this way:
 if (exceptionIsNotUniqueConstraint(exception)) throw exception
 SomeClass culprit=findTheObjectWhichCausedTheUniqueException(ec,exception)
 culprit.theSequentialNumber++
 // and try again...
   }
 }
 ===
 
 It might be a somewhat convoluted way to solve this (though I am afraid I 
 can't see an easier one), but it worked for many months, about a zillion times 
 without the exception, sometimes with the exception and a retry, always 
 without the slightest glitch.
 
 Then it so happened that
 
 - the EC indeed did contain an object with a wrong (already occupied) 
 sequential number
 - a DBAudit with PK=1015164 was inserted
 - the first time saveChanges threw and the transaction was rolled back; the 
 second time (with the incremented sequential number) it saved all right.
 
 So far so good; this had happened often before and never led to problems.
 
 This time, though, it did. The next time the above code was performed (no 
 sequentials, just the audit), the newly created audit was assigned _again_ 
 PK=1015164! Of course it failed. Well, we thought, perhaps there's some ugly 
 mess inside the EO stack; let's restart the application!
 
 After restart, the very first time the above code was called -- which is 
 pretty soon -- it happened again: even though there was a properly saved row 
 with PK=1015164 in the DB, EOF again assigned the same PK to the newly 
 created EO. I tried it about five times (at first I did not believe my eyes), 
 and it behaved consistently: restart, the first time a DBAudit is created it 
 gets PK=1015164, and saving (naturally) fails.
 
 Then I prepared a version with extended logging; for a start, I simply added 
 a log of audit.permanentGlobalID() just before saveChanges.
 
 It worked without a glitch, assigning (and logging) PK=1015165, and 
 (naturally) saving without a problem.
 
 I immediately stopped the app and returned to the original version -- the 
 one which used to consistently fail -- and from that moment on, it worked all 
 right too, assigning PK=1015166, and then PK=1015167, and so forth, as it 
 should. Without a need to log audit.permanentGlobalID() first.
 
 Well. Gremlins?
 
 Might anyone perhaps have the slightest glimmer of an idea what the b. h. 
 the culprit might have been, and how to prevent the problem from occurring 
 again in the future?
 
 Thanks a lot,
 OC
 
 