Armin Waibel wrote:

Guido Beutler wrote:

Hi,

changing

public boolean representsNull(FieldDescriptor fld, Object aValue)
{
    ...
    if ((aValue instanceof Number) && (((Number) aValue).longValue() == 0))
    {
        // >>> the line in question:
        result = fld.getPersistentField().getType().isPrimitive() && fld.isPrimaryKey();
    }
    ...
    return result;
}


into

result = fld.getPersistentField().getType().isPrimitive() && ! fld.isPrimaryKey();
seems to fix my problem: updates are generated now and inserts work too.
One of our OJB gurus like Armin should take a look at it. ;-)
I only made some small tests and cannot be sure that this doesn't produce side effects.
If my fix is correct, updates never worked for objects with 0 values in primary key fields.



Maybe I'm on the wrong tack, but I can't figure out the change. This method should return 'true'
- when the field has a numeric value of '0'
- and the field is primitive (short in your case)
- and the field-descriptor is declared as PK

Yes, and I think the second point is not correct. 0 in primary keys, especially if the column is only one of a set of PK columns,
is a legal value; maybe I'm wrong, of course. I like using primitive fields in my data classes because they are fast
and small, and I can avoid converting everything from int to Integer and back again. Of course this is only one (my) opinion.



In your patch you return true when the field was not declared as PK. But in that case all values are valid ('0' too). Your patch does the opposite of what I expected. Maybe I'm completely confused ;-)

You're right, this is nonsense. :-) Throwing away the rule would work for me too.


Again, if you have a class with a primitive short field declared as PK with value '0', OJB assumes that the field is nullified, and because a PK cannot be 'null', OJB assumes the given object is new.
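The rule described above can be sketched as a standalone method. This is a minimal re-implementation of the numeric branch of BrokerHelper.representsNull for illustration only; the parameter list is simplified and is not the real OJB signature.

```java
// Minimal standalone sketch of the RC6 "0 in a primitive PK means null" rule.
// Names and signatures are simplified; this is not the real OJB code.
public class RepresentsNullSketch {

    static boolean representsNull(Class<?> fieldType, boolean isPrimaryKey, Object value) {
        if (value == null) {
            return true;
        }
        boolean result = false;
        if (value instanceof Number && ((Number) value).longValue() == 0) {
            // a primitive field can never hold null, so OJB treats 0 in a
            // primitive PK column as "not yet assigned"
            result = fieldType.isPrimitive() && isPrimaryKey;
        }
        return result;
    }

    public static void main(String[] args) {
        // primitive short PK with value 0 -> treated as null -> forces an INSERT
        System.out.println(representsNull(short.class, true, (short) 0));   // true
        // wrapper type or non-PK field: 0 stays a legal value
        System.out.println(representsNull(Short.class, true, (short) 0));   // false
        System.out.println(representsNull(short.class, false, (short) 0));  // false
    }
}
```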

In repository_user.xml I can define attributes like:


<field-descriptor
    id="1"
    name="col"
    column="col"
    jdbc-type="INTEGER"
    conversion="org.apache.ojb.broker.accesslayer.conversions.Int2IntegerFieldConversion"
/>

I thought this was legal for PK fields too. Am I wrong about that?
The behavior of handling int 0 values as null values is new in RC6. In RC5, OJB stored a 0 if no additional conversion was defined.
I thought that the additional conversion was built to handle exactly this behavior.
In relational databases 0 is a legal PK value too and may exist in many databases. Why restrict this when using OJB?
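For illustration, a conversion of this kind can be sketched as follows. The interface below only mirrors the javaToSql/sqlToJava contract of OJB's FieldConversion so the example compiles without the OJB jar; it is not the real org.apache.ojb.broker.accesslayer.conversions class.

```java
// Standalone sketch in the spirit of Int2IntegerFieldConversion.
// The local interface stands in for OJB's FieldConversion contract;
// this is an assumption-laden illustration, not OJB source code.
interface FieldConversionSketch {
    Object javaToSql(Object source);
    Object sqlToJava(Object source);
}

public class Int2IntegerConversionSketch implements FieldConversionSketch {

    // pass the value through as a wrapper Integer: a primitive 0 in the
    // persistent object becomes Integer 0 in SQL, never null
    public Object javaToSql(Object source) {
        return source; // already boxed to Integer by reflection
    }

    public Object sqlToJava(Object source) {
        return source;
    }

    public static void main(String[] args) {
        Object sql = new Int2IntegerConversionSketch().javaToSql(0);
        System.out.println(sql); // 0, not null
    }
}
```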


Edson is right with his suggestion that testing for auto-increment would solve my problem too,
but I still don't understand why it is necessary to test for 0 in primitive types. If I define a primitive type and don't
add a special conversion, I have documented in the repository that I know 0 is treated as 0 (like in RC5).
Maybe I have documented that this is a bad mapping too :-)
I thought using primitive types means that you can never write a null value. In my current application I can live with that
and prefer the performance benefit of primitives. In other applications other mappings are better.


best regards,

Guido

Hope I'm not making a fool of myself ;-)

regards,
Armin

best regards,

Guido


Guido Beutler wrote:


Guido Beutler wrote:

Armin Waibel wrote:

do you use anonymous keys? If so where?

Do you remember our DataSource problem with 3.2.2.RC3 with missing results?

No, can you describe this problem again? Should I update to JBoss 3.2.3 and run the tests again?






we had the problem that not all objects were returned by OJB. This seemed to be a side effect of the eager-release
flag. After updating to JBoss 3.2.3 the problem disappeared. Maybe the behavior of our current problem is different
in 3.2.3.
I'll put some debug code into PersistenceBroker and see what's going on during insert/update.


best regards,

Guido




Hi,

I added some debug code to

PersistenceBrokerImpl:

   public void store(Object obj) throws PersistenceBrokerException
   {

...

boolean doInsert = serviceBrokerHelper().hasNullPKField(cld, obj);

returns true. The reason seems to be BrokerHelper.representsNull:

public boolean representsNull(FieldDescriptor fld, Object aValue)
{
    ...
    if ((aValue instanceof Number) && (((Number) aValue).longValue() == 0))
    {
        result = fld.getPersistentField().getType().isPrimitive() && fld.isPrimaryKey();
    }
    ...
    return result;
}


returns true for my SMALLINT objects if the value is 0. But 0 is a legal value for SMALLINT PK attributes.
After that PersistenceBrokerImpl.store checks the cache:


/*
if PK values are set, lookup cache or db to see whether object
needs insert or update
*/
if (!doInsert)
{
doInsert = objectCache.lookup(oid) == null
&& !serviceBrokerHelper().doesExist(cld, oid, obj);
}


Because doInsert is already true (I checked it), the cache is never checked for the object. doInsert stays true, and
a few lines later


           // now store it:
           store(obj, oid, cld, doInsert);

generates the insert statement. Maybe I'm wrong, but to me it looks like a 0 (not null) in any PK field causes an insert statement.
In my case it is immediately the first object. Is it a good idea to check the cache regardless of whether doInsert is true, or is the implementation
of representsNull the cause and should it be changed?
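The decision path described above can be reduced to two inputs. This is a simplified sketch, not the real PersistenceBrokerImpl code; decideInsert and its boolean parameters are hypothetical names standing in for hasNullPKField and the combined cache/db existence check.

```java
// Sketch of the insert/update decision in PersistenceBrokerImpl.store,
// reduced to the two inputs that matter in this thread. Names are
// simplified and are not the real OJB API.
public class StoreFlowSketch {

    static boolean decideInsert(boolean hasNullPKField, boolean knownToExist) {
        boolean doInsert = hasNullPKField;
        if (!doInsert) {
            // the cache/db lookup only runs when the PK looks assigned
            doInsert = !knownToExist;
        }
        return doInsert;
    }

    public static void main(String[] args) {
        // primitive PK of 0 -> hasNullPKField is true -> the existence check
        // is skipped and an INSERT is generated even for an existing object
        System.out.println(decideInsert(true, true));   // true  (INSERT)
        // without the 0-as-null rule, the lookup would decide:
        System.out.println(decideInsert(false, true));  // false (UPDATE)
    }
}
```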


best regards,

Guido





---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]










