RE: long transactions

2003-02-27 Thread Mahler Thomas
Hi all,

 
 I have just started converting our system to use OJB instead 
 of Castor, so I
 cannot speak to the PB v. ODMG approach.  However, our 
 preferred approach
 has been to implement long transactions using what are named
 DataTransferObjects.  Especially if you are using a session 
 bean layer, the
 DTOs can be looked at as snapshots of your domain model 
 highly tuned for a
 particular client for a particular use-case. 

That's the default approach.

 During save 
 or the long
 transaction, simply lock the appropriate domain object(s) and 
 merge in the
 data from the DTO.

This process is called swizzling. We have implemented it in the new OTM
layer. But for ODMG you have to do the merging on your own.
With the PB api you can ignore swizzling, as you can use optimistic locking
with timestamp or version labels to detect write conflicts.
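For the ODMG case, the manual merge might look like the following minimal sketch. Customer and CustomerDTO are illustrative names for this thread's DTO idea, not OJB classes:

```java
// Hypothetical sketch of the manual merge an ODMG caller would do:
// lock the domain object inside the transaction, then copy the DTO
// snapshot's fields back into it before commit.
class Customer {
    String name;
    String email;
}

class CustomerDTO {
    String name;
    String email;

    // Merge the snapshot back into the (locked) domain object.
    void mergeInto(Customer target) {
        target.name = this.name;
        target.email = this.email;
    }
}
```

In a session bean you would look up the domain object inside the transaction, lock it for write, and call mergeInto() before commit.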

cheers,
Thomas

 Of course writing all those DTOs is not fun, but pick your poison.
 
 
 |-Original Message-
 |From: Phil Warrick [mailto:[EMAIL PROTECTED]
 |Sent: Thursday, February 27, 2003 9:33 AM
 |To: OJB Users List
 |Subject: Re: long transactions
 |
 |
 |Hi again,
 |
 |One reason I ask is that long transactions seem to imply 
 |_stateful_ 
 |Session beans + OJB, and I haven't seen much discussion or 
 |examples 
 |relating to this combination (although there are lots of 
 |_stateless_ 
 |Session bean + OJB discussion/examples).
 |
 |My core data is essentially a tree, so updates (performed 
 |on a remote 
 |client) implicate a large graph of persistent objects.
 |
 |Does anyone have ideas/experiences to share?  Pitfalls to avoid?
 |
 |Thanks,
 |
 |Phil
 |
 |Phil Warrick wrote:
 | Hi Thomas,
 | 
 | A while ago, you mentioned that although long 
 |transactions are not 
 | currently supported, there are a few possible approaches:
 | 
 |  If you want to use the ODMG in your SessionBean 
 |scenario you have to
 |   implement your own simple long transaction mechanism.
 |   (In the OTM package we will have such a feature 
 implemented.)
 |   On the other hand ODMG is not the most natural fit 
 |for such a scenario.
 |   I recommend using the PersistenceBroker API for such cases!
 | 
 | Can you outline what the ODMG long transaction would 
 |look like in a 
 | SessionBean scenario?  And how PB is perhaps a better 
 |approach?  What 
 | will the OTM approach look like?
 | 
 | Over the last few months, I have been trying several 
 avenues for 
 | modifying persistent objects on remote clients and then 
 |updating them on 
 | the server, and I still haven't arrived at a 
 |satisfactory approach.
 | 
 | Any thoughts and experiences in this area would be most 
 |appreciated.
 | 
 | Thanks,
 | 
 | Phil
 |
 |
 |
 
 
 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: long transactions

2003-02-27 Thread Ebersole, Steven
Actually the problem we were trying to get around wasn't so much the long
transactions (as Castor does have support for that), but the performance hit
of serialization and de-serialization and network communication of
large/deep object graphs inherent in EJB environments.  The platform-neutral
long transaction support was just a superfluous benefit.



|snip quote of the previous two messages




Re: long transactions

2003-02-27 Thread Phil Warrick
Hi again (again),


snip

With the PB api you can ignore swizzling as you can use
optimistic locking with timestamp or version labels to detect
write conflicts.

Can you expand on the PB approach a little?  How is it that
no merge is necessary with optimistic locking?


With OL you have a version column for each row.
Say you load an object with version=15.
You send it to the client.
The client works on it and posts the modified object back to the server.
The server then updates the database.
With OL it checks if the version field in the instance still matches the
version entry in the database row.
In our case we will know that some other process updated the object if the
version column is now greater than 15.
The server will then signal a problem to the client.

If the version still matches, the object is written to the db and the
version field is incremented, to inform other processes about the update.

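The check described above can be sketched in a few lines. The "database row" here is simulated by two plain fields; in real OJB the broker performs the comparison against the mapped version column:

```java
// Minimal sketch of an optimistic-lock check: the write succeeds only
// if the row still carries the version the caller originally loaded.
class VersionedRow {
    int version = 15;        // version currently stored in the db row
    String data = "old";

    // Try to write 'newData' that was read at 'loadedVersion'.
    // Returns true on success, false on a write conflict.
    boolean update(String newData, int loadedVersion) {
        if (version != loadedVersion) {
            return false;    // another process bumped the version
        }
        data = newData;
        version++;           // inform other processes of the update
        return true;
    }
}
```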
Right, write conflicts are detected with OL.  But will there be an 
efficient merge of the updated graph?  Say only one of the graph's n 
objects was modified. Is OJB's use of its cache going to compare the 
before/after status of each graph object and perform a db update on only 
the one truly changed object?

Are we close to being a FAQ item?

Thanks a lot,

Phil





Re: long transactions

2003-02-27 Thread Thomas Mahler
Hi again

snip OL stuff
Right, write conflicts are detected with OL.  But will there be an 
efficient merge of the updated graph?  
No.

Say only one of the graph's n 
objects was modified. Is OJB's use of its cache going to compare the 
before/after status of each graph object and perform a db update on only 
the one truly changed object?

The PB does not track object state.

In ODMG there is a mechanism that tracks the object state during a 
transaction.
Have a look at the ObjectEnvelope class.
This mechanism could be modified with moderate effort to perform the 
swizzling you'd like to see.
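As a rough illustration of what such state tracking does (a hypothetical sketch, not the ObjectEnvelope implementation): keep a before-image of each object at load time and compare it at commit time to decide which objects actually need a db UPDATE.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical before-image tracking: snapshot each object's state when
// it is loaded, then update only the objects whose state really changed.
class DirtyTracker {
    private final Map<Object, String> beforeImages = new HashMap<>();

    // Remember the state an object had when it was loaded.
    void register(Object obj, String stateSnapshot) {
        beforeImages.put(obj, stateSnapshot);
    }

    // True if the current state differs from the before-image,
    // i.e. the object needs a db UPDATE at commit.
    boolean isDirty(Object obj, String currentState) {
        return !Objects.equals(beforeImages.get(obj), currentState);
    }
}
```

With n objects in the graph, only the dirty ones would be written back, which is the efficient merge asked about above.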


Are we close to being a FAQ item?

I'd prefer a real solution instead of a FAQ item :-)

cu,
Thomas





Re: long transactions

2003-02-27 Thread Thomas Phan
  snip OL explanation

 Right, write conflicts are detected with OL.  But will there be an
 efficient merge of the updated graph?  Say only one of the graph's n
 objects was modified. Is OJB's use of its cache going to compare the
 before/after status of each graph object and perform a db update on only
 the one truly changed object?

 Are we close to being a FAQ item?

Consider the case written in the OJB FAQ page:

Say you use the PB to query an object O that has a collection attribute col
with five elements a,b,c,d,e. Next you delete objects d and e from col and
store O again with PersistenceBroker.store(O);

PB will store the remaining objects a,b,c. But it will not delete d and e!
If you then requery object O it will again contain a,b,c,d,e!

The PB keeps no transactional state of the persistent objects, thus it does
not know that d and e have to be deleted. (As a side note: deletion of d and
e could also be an error, as there might be references to them from other
objects!)


Will OL keep the transactional state?

If I want to use PB, after posting the modified object, i.e. O with a, b, c,
back to the server, what's the easiest way to make PB delete d and e
around the call to PersistenceBroker.store(O)? Should I query the existing
O, and compare a, b, c, d, e with a, b, c before I call store()?

Is deleting the existing O by its primary key and inserting the modified
O the easiest approach?
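A sketch of the query-and-compare idea: requery the stored collection, diff it against the posted one, and whatever is missing must be deleted explicitly (since store() will not do it). The names here are illustrative; the actual deletes would go through broker.delete(...):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: find the children that were removed on the
// client by diffing the stored collection against the posted one.
class CollectionDiff {
    // Elements present in 'stored' but missing from 'posted' must be
    // deleted explicitly, since PersistenceBroker.store() only does
    // inserts and updates.
    static Set<String> toDelete(List<String> stored, List<String> posted) {
        Set<String> stale = new HashSet<>(stored);
        stale.removeAll(posted);
        return stale;
    }
}
```

For the FAQ case, diffing a,b,c,d,e against a,b,c yields {d, e}, which would then be deleted around the call to store(O).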

thanks





RE: long transactions

2003-02-27 Thread Ebersole, Steven
Just to make certain that I understand this correctly (as I asked this very
question in a separate thread and got the opposite answer):

If I map a m:n relationship letting OJB manage the association table
(documentation refers to this as non-decomposed m:n), can I use the
PersistenceBroker API and still be able to manage the inclusion of instances
in the collection?

The example I gave was a Customer and a Product.  The Customer has a
collection of Products, the relation of which is maintained in say a
CUSTOMER_PRODUCT table.  Because it is non-decomposed, I do not create a
CustomerProduct class.  Now in client code, I do:

Customer customer = // lookup customer
Product firstProduct = // lookup Product #1
Product secondProduct = // lookup Product #2
customer.getProducts().add( firstProduct );
customer.getProducts().add( secondProduct );
pb.store( customer );


Later, accounts receivable realizes they made a mistake, so:
Customer customer = // lookup customer
Product secondProduct = // lookup Product #2
Product thirdProduct = // lookup Product #3
customer.getProducts().remove( secondProduct );
customer.getProducts().add( thirdProduct );
pb.store( customer );


After this second section, what will be left in the CUSTOMER_PRODUCT table?
The FAQ case quoted below makes it seem like all three will be there.  If the
above does not work (i.e., if all three rows are still present in the
association table), then it does not seem possible to use non-decomposed m:n
mappings with the PersistenceBroker, as there would be no way for me to
access the association table to even clean it up manually, aside from
direct DB access.
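If the rows do linger, one workaround (an assumption on my part, not a documented OJB feature) is direct SQL against the association table. The sketch below only builds the statement; the table and column names CUSTOMER_PRODUCT, CUSTOMER_ID, and PRODUCT_ID follow the example above:

```java
// Hypothetical cleanup of a stale non-decomposed m:n row: without a
// CustomerProduct class there is no object to delete through the
// broker, so the association row is removed with plain SQL.
class AssociationCleanup {
    static String deleteSql(int customerId, int productId) {
        return "DELETE FROM CUSTOMER_PRODUCT"
             + " WHERE CUSTOMER_ID = " + customerId
             + " AND PRODUCT_ID = " + productId;
    }
}
```

The resulting string would be executed over a plain JDBC statement after removing secondProduct from the collection.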



|snip quote of Thomas Phan's message




Re: long transactions

2003-02-27 Thread Thomas Phan
 The example I gave was a Customer and a Product.  The Customer has a
 collection of Products, the relation of which is maintained in say a
 CUSTOMER_PRODUCT table.  Because it is non-decomposed, I do not create a
 CustomerProduct class.  Now in client code, I do:

 Customer customer = // lookup customer
 Product firstProduct = // lookup Product #1
 Product secondProduct = // lookup Product #2
 customer.getProducts().add( firstProduct );
 customer.getProducts().add( secondProduct );
 pb.store( customer );


 Later, account receivable realizes they made a mistake, so:
 Customer customer = // lookup customer
 Product secondProduct = // lookup Product #2
 Product thirdProduct = // lookup Product #3
 customer.getProducts().remove( secondProduct );
 customer.getProducts().add( thirdProduct );
 pb.store( customer );

From my experience with PB, you will get the firstProduct and the
thirdProduct in the cache, but your db will still have all three products.
Once you clear the cache, you'll see three again!

Seems that store() only does insert and update queries. It would be perfect
if it did delete queries too, by comparing the graph in the old cache with
the new object before updating.

