Ok, this makes sense to me now.
Is it possible to prevent creating ObjectIds with a singleKeyValue of
the wrong data type? Does anyone depend on this behavior?
For ObjectId itself - no, because an ObjectId can potentially be created
outside of any mapping context. But we should probably add type-safety
checking somewhere between DataObjectUtils, ObjectIdQuery and
DataContext to ensure that only valid ids are processed. I'd say this
would be a very important improvement. There have been a number of other
cases where people ran into obscure problems because of this.
Care to open a bug report in Jira?
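Something along these lines, as a very rough standalone sketch - the class
and method names here are made up for illustration, and in a real patch the
expected PK class would come from the mapping rather than being passed in:

public class PkTypeGuard {

    // Sketch of a check that could run before an ObjectId is built in
    // DataObjectUtils.objectForPK() and similar entry points.
    public static void assertPkType(String entityName, Object pkValue, Class expectedType) {
        if (pkValue == null) {
            throw new IllegalArgumentException("Null PK for entity '" + entityName + "'");
        }
        if (!expectedType.isInstance(pkValue)) {
            throw new IllegalArgumentException(
                "Invalid PK for entity '" + entityName + "': expected "
                    + expectedType.getName() + ", got " + pkValue.getClass().getName());
        }
    }

    public static void main(String[] args) {
        // This is exactly the mistake described below: a String where the
        // mapping expects a Short.
        assertPkType("Object", new Short((short) 200), Short.class); // passes
        assertPkType("Object", "200", Short.class);                  // throws
    }
}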
Andrus
On Nov 12, 2006, at 7:57 AM, Damir Bijuklic wrote:
Hi,
I understand my post wasn't very clear. I was going to explain it
further but I have found the cause of my problems.
DataRowStore has a snapshots map keyed on ObjectId. It contained
'duplicate' ObjectIds with Short and String values for the PK: one
coming from a SelectQuery (with the correct Short singleKeyValue), and
the other from DataObjectUtils.objectForPK, where I was mistakenly
passing a String (held in an Object variable, so I didn't notice it for
a while).
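Here is a tiny standalone demo of what I think was happening in the cache
(assuming the three-argument ObjectId(String, String, Object) constructor;
the entity and key names are placeholders):

import java.util.HashMap;
import java.util.Map;

import org.objectstyle.cayenne.ObjectId;

public class DuplicateIdDemo {

    public static void main(String[] args) {
        // One id as a SELECT would build it (Short PK), one as my code
        // accidentally built it (String PK). They are not equal, so the
        // snapshot map ends up with two entries for the same logical row.
        ObjectId fromQuery = new ObjectId("Object", "ID", new Short((short) 200));
        ObjectId fromString = new ObjectId("Object", "ID", "200");

        Map snapshots = new HashMap(); // stand-in for the DataRowStore map
        snapshots.put(fromQuery, "row from SelectQuery");
        snapshots.put(fromString, "row from objectForPK(\"200\")");

        System.out.println(fromQuery.equals(fromString)); // false
        System.out.println(snapshots.size());             // 2
    }
}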
Is it possible to prevent creating ObjectIds with a singleKeyValue of
the wrong data type? Does anyone depend on this behavior?
Damir
----- Original Message ----
From: Andrus Adamchik <[EMAIL PROTECTED]>
To: [email protected]
Sent: Thursday, 9 November, 2006 11:24:38 PM
Subject: Re: Nested data context problems
I don't think I fully follow the sequence of events here, so my
comments are a bit random...
3. I commit it on the first page, so it goes to the database - here I
can see the SQL updates, but also an event that I think is used for
cache synchronisation:
DataRowStore [DEBUG] postSnapshotsChangeEvent: [SnapshotEvent]
source: [EMAIL PROTECTED],
modified 1 id(s)
This message is only related to synching between peer DataContexts at
the top level, so it is benign.
4. I go to the second page and it gets rendered with the current
(modified) version of the DataObjects.
- When I debug the conversion of DataObjects to strings, I notice
that my DataObject is in the same, new, state as I would expect; I
get the expected data with
DataObjectUtils.objectForPK(object.getDataContext(),
"Object", DataObjectUtils.pkForObject(object))
This looks suspicious. No objects should be in the "new" state after
commit.
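A quick way to verify this, as a sketch (PersistenceState.persistenceStateName()
is from memory, so treat that call as an assumption):

import org.objectstyle.cayenne.DataObject;
import org.objectstyle.cayenne.PersistenceState;

public class StateCheck {

    // After commitChanges() a registered object should report COMMITTED;
    // seeing NEW or MODIFIED here would confirm something is wrong with
    // the context that was committed.
    public static void assertCommitted(DataObject object) {
        int state = object.getPersistenceState();
        if (state != PersistenceState.COMMITTED) {
            throw new IllegalStateException(
                "Unexpected state after commit: "
                    + PersistenceState.persistenceStateName(state));
        }
    }
}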
Andrus
On Nov 8, 2006, at 8:53 AM, Damir Bijuklic wrote:
Hi,
I'm playing around with nested DataContexts and Tapestry, using
Tapestry 4.1.1 and Cayenne 1.2.1.
I have encountered an issue with Cayenne that I don't quite
understand. I will describe my setup and what I'm trying to do.
I'm trying to use nested DataContexts on subforms/subpages.
This way I can discard changes made inside a subform even though the
DataObject rendered on the form has been updated (I either call
commitChangesToParent on the nested DataContext, or I don't).
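Roughly what I mean, as a sketch of the pattern (plain Cayenne calls; the
class and variable names are just placeholders):

import org.objectstyle.cayenne.access.DataContext;

public class PageContexts {

    public static void example(DataContext sessionContext) {
        // Second page works in a child of the session-bound context,
        // third page in a child of the second page's context.
        DataContext secondPageContext = sessionContext.createChildDataContext();
        DataContext thirdPageContext = secondPageContext.createChildDataContext();

        // ... the user edits objects registered in thirdPageContext ...

        boolean keepChanges = true;
        if (keepChanges) {
            // Push changes one level up without touching the database.
            thirdPageContext.commitChangesToParent();
        }
        else {
            // Discard the subform edits; the parent context is untouched.
            thirdPageContext.rollbackChanges();
        }

        // Only the top-level commit actually writes to the database.
        secondPageContext.commitChangesToParent();
        sessionContext.commitChanges();
    }
}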
I'm also using the Tapestry data squeezer, which is a mechanism in
Tapestry for converting a data object to a string and vice versa.
DataSqueezer has two methods that look roughly like this: String
squeeze(Object) and Object unsqueeze(String).
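For reference, my squeeze/unsqueeze pair does roughly this (a simplified
sketch assuming an integer PK; the class name is just a placeholder):

import org.objectstyle.cayenne.DataObject;
import org.objectstyle.cayenne.DataObjectUtils;
import org.objectstyle.cayenne.access.DataContext;

public class DataObjectSqueezer {

    // Converts a committed DataObject to something like "Object:200".
    public static String squeeze(DataObject object) {
        return object.getObjectId().getEntityName()
            + ":" + DataObjectUtils.pkForObject(object);
    }

    // Resolves "Object:200" back to a DataObject in the given context.
    public static Object unsqueeze(DataContext context, String value) {
        int split = value.indexOf(':');
        String entityName = value.substring(0, split);

        // Convert the PK back to a number so the lookup matches the type
        // used by the mapping.
        int pk = Integer.parseInt(value.substring(split + 1));
        return DataObjectUtils.objectForPK(context, entityName, pk);
    }
}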
In testing I have encountered an issue which I have tried to
reproduce with a test case, but have not been able to so far.
It happens when I navigate between three pages, each having its own
DataContext: the top page uses the session-bound DataContext, and the
subpages use contexts derived with createChildDataContext (so the
third page has a DataContext derived from the second page's derived
DataContext). The sequence is:
1. I go to the third page, change some data and commit it.
2. I commit it on the second page as well.
3. I commit it on the first page, so it goes to the database - here I
can see the SQL updates, but also an event that I think is used for
cache synchronisation:
DataRowStore [DEBUG] postSnapshotsChangeEvent: [SnapshotEvent]
source: [EMAIL PROTECTED],
modified 1 id(s)
4. I go to the second page and it gets rendered with the current
(modified) version of the DataObjects.
- When I debug the conversion of DataObjects to strings, I notice
that my DataObject is in the same, new, state as I would expect; I
get the expected data with
DataObjectUtils.objectForPK(object.getDataContext(),
"Object", DataObjectUtils.pkForObject(object))
- It gets converted to something like "Object:200", where Object
is the entity name and 200 is the PK.
5. During the next request I try to parse the string back into the
object, and here the funny stuff happens:
- I parse "Object:200" and use DataObjectUtils.objectForPK
(dataContext, "Object", 200) to fetch the object.
- I have checked a few times that the DataContext I use here is
the same as the one above.
- The object I receive is NOT the one I have just rendered to
the string; its properties are changed back to what they were before
step 1.
- During this step I notice a message in the log:
DataRowStore [DEBUG] postSnapshotsChangeEvent: [SnapshotEvent]
source: [EMAIL PROTECTED],
modified 1 id(s)
- Tracing objectForPK shows that it reads the old values from
the cache, even though they should not be there, and it seems they
weren't there a few moments ago??
I'm using a shared cache in a single JVM. What else could be wrong?
If I restart the app to reload the caches, everything is fine. I would
also like to load this object from the cache, because it should have
the correct values in it.
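For reference, here is how I would try to re-read a single id past whatever
snapshot the shared cache currently holds, as a rough sketch (the
ObjectIdQuery cache-policy constant and the "ID" key name are assumptions
on my part):

import java.util.List;

import org.objectstyle.cayenne.ObjectId;
import org.objectstyle.cayenne.access.DataContext;
import org.objectstyle.cayenne.query.ObjectIdQuery;

public class RefetchById {

    // Forces a read through to the database for one id, replacing
    // whatever snapshot the shared DataRowStore currently holds.
    public static Object refetch(DataContext context, String entityName, int pk) {
        ObjectId id = new ObjectId(entityName, "ID", pk);
        ObjectIdQuery query = new ObjectIdQuery(id, false, ObjectIdQuery.CACHE_REFRESH);
        List result = context.performQuery(query);
        return result.isEmpty() ? null : result.get(0);
    }
}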
Any ideas what I'm missing? Any further info I could post?
Damir