RE: xdoclet module with Java 1.5

2005-09-01 Thread Clute, Andrew
It is an issue with the xjavadoc module. You need to update that library
to one of the snapshots that has the generics parsing support. I can
email the file I am using, if you want.

To answer your second question, yes, you can use generics just fine in
OJB (due to erasure). We are using them here without issue.

-Andrew



 -Original Message-
 From: Daniel Perry [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 01, 2005 10:38 AM
 To: OJB Users List
 Subject: xdoclet module with Java 1.5
 
 Hi,
 
 I'm trying to use the OJB xdoclet module (from 1.0.3 
 download) with java 5 generics.
 
 The following:
  ArrayList<Integer> generic = new ArrayList<Integer>();
 
 gives an error:
 [ojbdoclet] Error parsing File C:\java\pol\src\java\Test.java: Encountered "<"
  at line 11, column 27.
 [ojbdoclet] Was expecting one of: ...
 
 
 Is there a 1.5 compatible version?
 
 Also, can I use generics for collections in OJB classes?
 
 Thanks,
 
 Daniel.
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Dynamic Proxy Question

2005-08-23 Thread Clute, Andrew
My guess is that you are seeing the Collection Prefetcher in action. 

Take a look at:
http://db.apache.org/ojb/docu/guides/repository.html#collection-descriptor-N10611

proxy-prefetch-limit



 -Original Message-
 From: Bogdan Daniliuc [mailto:[EMAIL PROTECTED] 
 Sent: Friday, August 19, 2005 10:41 AM
 To: OJB Users List
 Subject: Dynamic Proxy Question
 
   Dear All, 
 
   Due to the fact that some of our queries might return a
 large number of objects (50.000+), I'm trying to use dynamic
 proxies. It works well for small collections (around 1000
 items), however for the large ones the returned collections
 contain proxies with already instantiated subjects. Is there
 some kind of limitation? I'm using OJB 1.0.3.
 
   Best Regards
 
   Bogdan Daniliuc
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: paging results

2005-08-23 Thread Clute, Andrew
Yep,

Take a look at:

Query.setStartAtIndex
Query.setEndAtIndex
OJBIterator.fullSize
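
For illustration, a rough sketch of how these calls fit together in the PB API; Foo, the
criteria, and the page bounds are placeholders, and the surrounding broker handling is an
assumption rather than something taken from this thread:

// uses org.apache.ojb.broker.*, org.apache.ojb.broker.query.* and
// org.apache.ojb.broker.accesslayer.OJBIterator
void showPage() {
    PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
    try {
        QueryByCriteria query = QueryFactory.newQuery(Foo.class, new Criteria());
        query.setStartAtIndex(1);    // first row of the page (1-based)
        query.setEndAtIndex(10);     // last row of the page

        Iterator it = broker.getIteratorByQuery(query);
        int total = 0;
        if (it instanceof OJBIterator) {
            // fullSize() reports how many rows the unrestricted query would return
            total = ((OJBIterator) it).fullSize();
        }
        while (it.hasNext()) {
            Foo foo = (Foo) it.next();
            // render one row of the "Results 1 - 10 of <total>" page
        }
    } finally {
        broker.close();
    }
}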

 

 -Original Message-
 From: Laran Evans [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, August 23, 2005 10:42 AM
 To: OJB Users List
 Subject: paging results
 
 Is there a way to tell OJB to return results x through y of
 all z rows?
 What I'm looking for here is a way to get a "Results 1 - 10
 of about 10,000" type behavior.
 
 Is such a thing possible with OJB?
 
 - laran
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: OJB question - reference counter field

2005-08-09 Thread Clute, Andrew
The RowReader approach might not work if you are not willing to store
the collection on the object; I was thinking you could interrogate the
collection after retrieval, before the object is passed to the
application, and set that number from its size.

 

 -Original Message-
 From: Andrey Shulinskiy [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, August 09, 2005 10:49 AM
 To: OJB Users List
 Subject: RE: OJB question - reference counter field
 
 Andrew,
 
 thanks a lot for the prompt reply.
 
  -Original Message-
  From: Clute, Andrew [mailto:[EMAIL PROTECTED]
  Sent: Monday, August 08, 2005 9:21 PM
  To: OJB Users List
  Subject: RE: OJB question - reference counter field
 
 
  In a word: no. There is no straight forward way to do what 
 you want to 
  accomplish.
 
  Obviously you can manage that yourself by setting the value to the 
  size of the collection before saving, and then having that field in 
  the database.
 
 I don't want the collection of Bs to be in the A class at all 
 - just the size.
 
  At first I thought a field conversion might work, but it has no 
  context to look at other fields.
 
  You can write a custom row reader for this object type that 
 looks at 
  the other field and populates the size variable. But you are still 
  going to have to manage that field yourself when the collection is 
  mutated.
 
  Take a look at:
 
  org.apache.ojb.broker.accesslayer.RowReader
 
 OK, thanks, it should be a solution.
 
 Yours sincerely,
 Andrey.
 
 
  -Andrew
 
 
  -Original Message-
  From: Andrey Shulinskiy [mailto:[EMAIL PROTECTED]
  Sent: Mon 8/8/2005 9:01 PM
  To: ojb-user@db.apache.org
  Subject: OJB  question - reference counter field
 
  Hi there!
 
  I am quite new to OJB and I'd really appreciate if anybody 
 could help 
  me with the following issue.
 
  The case is rather simple:
  tables A and B have the 1 - N relationship. And I want the 
 objects of 
  the class A have not the collection of the related objects 
 B but just 
  the number of such objects:
 
  class A {
  ...
  private Integer bCounter;
  ...
  }
 
  So the questions are - is it possible in OJB and if it is then how 
  could it be done? Is it just some kind of mapping or 
 something more 
  complicated?
  Thanks.
 
  Yours sincerely,
  Andrey Shulinskiy.
 
 
  
 -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 
 
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: OJB 1.0.x and Java5.0

2005-08-08 Thread Clute, Andrew
Just to confirm, what you are doing works without issue.

We are currently doing in production exactly what you are asking here. As 
Thomas states, due to erasure, the same types are being placed into your 
persistent objects.

As for mutating OJB to use generics -- there really isn't much benefit that can 
be derived. Almost none of the PB-API methods pass in a class object to 
specify the type (it is inside an Identity, or a Query), so it cannot be used to 
specify a typed Collection.

It's actually kind of nice -- we have a wrapper API around OJB that takes in a 
criteria and spits back out a collection, and with generics we are now able to 
'type' it.

So, what used to look like this:

List l = ps.findCollectionByCriteria(Foo.class, crit);
Iterator it = l.iterator();
while (it.hasNext()) {
    Foo foo = (Foo) it.next();
    foo.bar();
}

With typed generics and the class argument, it now looks like this:

for (Foo foo : ps.findCollectionByCriteria(Foo.class, crit)) {
   foo.bar();
}

Our API declaration for this method looks like this:

public <T extends BusinessObject> List<T> findCollectionByCriteria(Class<T> clazz, Criteria crit);
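
A minimal sketch of how such a wrapper method might be implemented on top of the PB API;
the 'broker' field, the BusinessObject bound, and the unchecked cast are assumptions for
illustration, not the poster's actual code:

// assumes a PersistenceBroker field named 'broker' and the usual
// org.apache.ojb.broker.query imports (Criteria, Query, QueryFactory)
public <T extends BusinessObject> List<T> findCollectionByCriteria(Class<T> clazz, Criteria crit) {
    Query query = QueryFactory.newQuery(clazz, crit);
    // the PB API returns a raw Collection; the cast is unchecked but harmless here
    // because the query was built from clazz (and erasure means no runtime check anyway)
    @SuppressWarnings("unchecked")
    List<T> result = new ArrayList<T>(broker.getCollectionByQuery(query));
    return result;
}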

While there isn't as much stuff that can be done with OJB itself, it sure made 
our wrapper classes nicer.

-Andrew

 -Original Message-
 From: Edson Carlos Ericksson Richter 
 [mailto:[EMAIL PROTECTED] 
 Sent: Monday, August 08, 2005 3:14 PM
 To: OJB Users List
 Subject: Re: OJB 1.0.x and Java5.0
 
 Let me expand my idea (sorry if I get boring).
 I have a User object. Each user has a LoginHour list. So, 
 using JDK 5.0 could I declare
 
 public class User {
   private String username;
   private String password;
   private List<LoginHour> loginHours;
   public void setUsername(String newUsername) {...}
   public String getUsername() {...}
   public void setPassword(String newPassword) {...}
   public String getPassword() {...}
   public void setLoginHours(List<LoginHour> loginHours) {
     this.loginHours = loginHours;
   }
   public List<LoginHour> getLoginHours() {
     return this.loginHours;
   }
 }
 
 And this will work fine? There is nothing to be changed in the 
 class-descriptor, neither in the collection-descriptor?
 
 
 TIA,
 
 Edson Richter
 
 
 Thomas Dudziak escreveu:
 
 On 8/8/05, Edson Carlos Ericksson Richter
 [EMAIL PROTECTED] wrote:
   
 
 Does anyone have an example of how OJB could be used with Generics?
 Will this not affect the class-mapping descriptor?
 
 
 
 Since Java generics will be compiled to non-generic bytecode, it does
 not really affect classloading etc. Hence it should not matter when
 running OJB, you simply specify the collection-descriptor etc. as you
 would for non-generic code. The only differences are that 
 OJB does not
 (yet) support enums, and that the XDoclet module might not work with
 generic code (you'll at least need a CVS build of the XDoclet code to
 be able to parse 1.5 code).
 
 Tom
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 
   
 
 
 
 -- 
 Edson Carlos Ericksson Richter
 MGR Informática Ltda.
 Fones: 3347-0446 / 9259-2993
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: OJB question - reference counter field

2005-08-08 Thread Clute, Andrew
In a word: no. There is no straightforward way to do what you want to 
accomplish.

Obviously you can manage that yourself by setting the value to the size of the 
collection before saving, and then having that field in the database.

At first I thought a field conversion might work, but it has no context to look 
at other fields.

You can write a custom row reader for this object type that looks at the other 
field and populates the size variable. But you are still going to have to 
manage that field yourself when the collection is mutated.

Take a look at:

org.apache.ojb.broker.accesslayer.RowReader
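
A rough sketch of what such a row reader could look like. The B_COUNT column, the
setBCounter call, and the readObjectFrom(Map)/constructor signatures are assumptions based
on RowReaderDefaultImpl, so treat this as an outline rather than working code from this thread:

import java.util.Map;
import org.apache.ojb.broker.accesslayer.RowReaderDefaultImpl;
import org.apache.ojb.broker.metadata.ClassDescriptor;

public class ACounterRowReader extends RowReaderDefaultImpl {

    public ACounterRowReader(ClassDescriptor cld) {
        super(cld);
    }

    public Object readObjectFrom(Map row) {
        A result = (A) super.readObjectFrom(row);
        // hypothetical extra column (e.g. from a view or report query) holding the B count
        Number count = (Number) row.get("B_COUNT");
        if (count != null) {
            result.setBCounter(new Integer(count.intValue()));
        }
        return result;
    }
}

It would then be wired up via the row-reader attribute on A's class-descriptor, and the
counter still has to be maintained by hand whenever the collection is mutated, as noted above.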

-Andrew


-Original Message-
From: Andrey Shulinskiy [mailto:[EMAIL PROTECTED]
Sent: Mon 8/8/2005 9:01 PM
To: ojb-user@db.apache.org
Subject: OJB  question - reference counter field
 
Hi there!

I am quite new to OJB and I'd really appreciate if anybody could help me
with the following issue.

The case is rather simple:
tables A and B have a 1 - N relationship, and I want the objects of
class A to hold not the collection of the related B objects but just the
number of such objects:

class A {
...
private Integer bCounter;
...
}

So the questions are: is it possible in OJB, and if it is, how could it
be done? Is it just some kind of mapping or something more complicated?
Thanks.

Yours sincerely,
Andrey Shulinskiy.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

RE: orderby column in indirection-table

2005-07-28 Thread Clute, Andrew
FYI... there is a bug with MtoNCollectionsPrefetcher that caused certain
indirect M:N collections to ignore their order-by clauses.

I committed a patch for that about 2 months ago, you might want to build
the latest of the 1_0 release line and see if that fixes your issue.

-Andrew

 

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, July 27, 2005 4:01 PM
 To: OJB Users List
 Subject: Re: orderby column in indirection-table
 
 Hi Armin:
 
 Your suggestion is basically what I started with except for 
 using the orderby element instead of attribute. It still 
 doesn't work. The Collection is not ordered and no order by 
 statement is generated (this is using 1.0.3) in the log 
 output from SqlGeneratorDefaultImpl.
 
 Here is my actual collection descriptor:
 
   <collection-descriptor
        name="categories"
        collection-class="org.apache.ojb.broker.util.collections.ManageableArrayList"
        element-class-ref="gov.doi.fis.dataobjects.WorkMeasureCategory"
        indirection-table="CATEGORY_SOURCE_CROSS"
        auto-update="none"
        auto-delete="none"
        proxy="false">
     <orderby name="CATEGORY_SOURCE_CROSS.CATEGORY_ORDER" sort="ASC"/>
     <fk-pointing-to-this-class column="SOURCE_ID"/>
     <fk-pointing-to-element-class column="CATEGORY_ID"/>
   </collection-descriptor>
 
 The DDL for CATEGORY_SOURCE_CROSS is:
 
 CREATE TABLE FIS.CATEGORY_SOURCE_CROSS
 (
     SOURCE_ID        NUMBER(8)   NOT NULL
   , CATEGORY_ID      NUMBER(8)   NOT NULL
   , CATEGORY_ORDER   NUMBER(4)   NOT NULL
   , PRIMARY KEY (SOURCE_ID, CATEGORY_ID)
   , UNIQUE (SOURCE_ID, CATEGORY_ORDER)
   , FOREIGN KEY (CATEGORY_ID) REFERENCES WORK_MEASURE_CATEGORY(CATEGORY_ID)
   , FOREIGN KEY (SOURCE_ID) REFERENCES WORK_MEASURE_SOURCE(SOURCE_ID)
 );
 
 Thanks,
 
 Jon French
 Programmer
 ECOS Development Team
 [EMAIL PROTECTED]
 970-226-9290
 
 
 
  Armin Waibel [EMAIL PROTECTED]
  07/26/2005 04:47 PM
  To: OJB Users List ojb-user@db.apache.org
  Subject: Re: orderby column in indirection-table
 
 Hi Jon,
 
 isn't it possible to do something like this:
 
 <collection-descriptor name="authors"
     collection-class="org.apache.ojb.broker.util.collections.ManageableArrayList"
     element-class-ref="package.name.Author"
     indirection-table="BOOK_AUTHOR_CROSS"
     auto-update="none"
     auto-delete="none"
     proxy="true">

   <orderby name="BOOK_AUTHOR_CROSS.AUTHOR_ORDER" sort="ASC"/>

   <fk-pointing-to-this-class column="BOOK_ID"/>
   <fk-pointing-to-element-class column="AUTHOR_ID"/>
 </collection-descriptor>
 
 this should add an order by using the unchanged column name (don't 
 forget to remove the old orderby attribute).
 
 regards,
 Armin
 
  [EMAIL PROTECTED] wrote:
   Thanks for your reply Armin:

   In the test-case you gave me, the values of both orderby name
   attributes (name and MOVIE_ID_INT) are valid identifiers for
   field-descriptor on the M2NTest$Actor class. The first is a property
   name of the object and the second is a table column of a property.

   This case is a bit different than what I need because the column for
   which I would like to orderby is on the indirection-table, not on the
   element-class table.

   Your statement that my indirection table isn't a pure indirection
   table is true. I'm definitely stretching the definition by adding an
   additional attribute. I would still like to avoid mapping a
   class-descriptor for the m:n association if possible.

   Right now, I have to use release 1.0.3 and moving the orderby
   attribute to an orderby element didn't change the generated sql.

  How is the AUTHOR_ORDER column populated?

   In my case, the m:n association is relatively static and will be
   populated by hand external to OJB. I'll never have a need to add an
   AUTHOR to a BOOK and thus don't need to worry about insertions into
   the indirection-table.

   I'll look into the 1.0.3 source code further.

   Best,

   Jon French
   Programmer
   ECOS Development Team
   [EMAIL PROTECTED]
   970-226-9290

   Armin Waibel [EMAIL PROTECTED]
   07/26/2005 12:32 PM
   To: OJB Users List ojb-user@db.apache.org
   Subject: Re: orderby column in indirection-table

   Hi Jon,

   In your case the indirection table isn't a real indirection table,
   because you store additional information in column AUTHOR_ORDER. How is
   the AUTHOR_ORDER column populated?
   Think OJB will ignore this column when handling the m:n relation between
   Book and Author - or am I wrong?

   Anyway you should use the new 'orderby' element to specify the order by
   fields in your reference, the 'orderby-attribute' is deprecated now.

RE: OJB cache

2005-05-09 Thread Clute, Andrew
 


 
 Hi,
 
 1) I need to completly clear the cache. How can I clear OJB 
 cache without restarting Tomcat?

pb.clearCache() (PersistenceBroker)


 
 2) Is there any way to control maximum cache size?


None of the current cache implementations have any size parameters. However, assuming you 
are worried about memory usage: they all use SoftReferences to store the persistent objects 
in the cache, so if the VM needs to garbage collect to free up memory, cached objects can 
be reclaimed.
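
For reference, a minimal sketch of clearing the cache through the PB API (the helper class
is made up; only clearCache() itself comes from the answer above):

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;

public final class CacheUtil {
    private CacheUtil() {}

    /** Clears the ObjectCache without restarting Tomcat. */
    public static void clearOjbCache() {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try {
            broker.clearCache();
        } finally {
            broker.close();
        }
    }
}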

-Andrew



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: lazy materialization problem

2005-04-29 Thread Clute, Andrew

 If I could say 'OJB, prefetch parent and children 
 references in all folders which I materialize by this call' it 
 would be more productive
 

I am slightly confused by this statement. Are you saying you want all
the children to be materialized when the parent is created, thus
populating all of the children's back-references to its parent with the
same instance? If so, how is what you are asking for different from just

The problem you are experiencing is something that we would like to fix,
so I would love to hear any, and all suggestions on work arounds for it.
But I am having trouble understanding what exactly your solution is.

On a side note: one quick, dirty workaround is to save your 'folder'
object inside the new PersistenceBroker before calling any method on the
children collection to make it be materialized. This will 1) save the
object to the DB, and 2) place that object in the cache, so the same
reference will be used for your back-references. Now, this might not
work in your scenario, especially if your 'folder' object has been
mutated, and you are not ready to persist it yet. However, just a
thought.
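
A sketch of that workaround in PB terms; Folder/getChildren are the poster's names, and the
transaction handling here is an assumption:

void storeAndMaterialize(PersistenceBroker broker, Folder folder) {
    broker.beginTransaction();
    broker.store(folder);           // 1) writes the folder and 2) registers it in the cache
    folder.getChildren().size();    // materializes the proxy collection; the children's
                                    // back-references now resolve to the cached 'folder'
    broker.commitTransaction();
}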

-Andrew



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: lazy materialization problem

2005-04-29 Thread Clute, Andrew

 
  If I could say 'OJB, prefetch parent and children 
  references in all folders which I materialize by this call' 
  it would be 
  more productive
 
 I am slightly confused by this statement. Are you saying you 
 want all 
 the children to be materialized when the parent is created, thus 
 populating all of the children's back-references to its parent with 
 the same instance?
 
 
 yes! That's what I want.
 In many cases proxies are fine. In 99% of cases I load only one 
 folder, sometimes view its children or parent. So this 
 problem in 99% of cases doesn't bother me.
 But for one particular case I need to work with the whole 
 tree. So I want to change the proxy attribute only for this 
 case, when I want to load the whole folder tree. Is it possible?
 


Yeah, take a look at:

http://db.apache.org/ojb/docu/guides/metadata.html#Per+thread+metadata+changes

The basic concept is that you can get a copy of the
DescriptorRepository, and then get the ClassDescriptor for your 'folder'
object, and then get the CollectionDescriptor for your 'children'
collection. Then call CollectionDescriptor#setLazy(false) for the duration of
your call, and set it back afterwards.

Make sure to read about doing MetaData changes per Thread -- otherwise
your changes will affect the entire application, and not just this
particular instance (all instance of that collection will be loaded, not
just the one you care about).
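
A rough sketch of that sequence; the Folder class and 'children' collection name come from
this thread, while the exact MetadataManager calls are my recollection of the API described
in the link above, so verify them against the metadata guide:

// classes from org.apache.ojb.broker.metadata
MetadataManager mm = MetadataManager.getInstance();
mm.setEnablePerThreadChanges(true);                 // keep the change local to this thread

DescriptorRepository dr = mm.copyOfGlobalRepository();
ClassDescriptor cld = dr.getDescriptorFor(Folder.class);
CollectionDescriptor cod = cld.getCollectionDescriptorByName("children");
cod.setLazy(false);                                 // materialize children eagerly for this call
mm.setDescriptor(dr);

// ... load the whole folder tree here ...

mm.setEnablePerThreadChanges(false);                // revert to the global metadata afterwards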

Hope this helps.

-Andrew 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: lazy materialization problem

2005-04-28 Thread Clute, Andrew
What version of OJB are you using (1.0, 1.0.2?), and which type of cache are 
you using? (ObjectCacheTwoLevel, ObjectCacheDefaultImpl)?

The issue stems from the fact that your proxy collection is being materialized 
with 1) a different PersistenceBroker instance than the parent object, and 2) 
you are using a cache implementation that either creates copies 
(ObjectCacheTwoLevel) or is short-lived (ObjectCachePerBroker).

This is a known issue, especially with all the cache implementations other than 
ObjectCacheDefault.

For instance, if you are using ObjectCacheTwoLevel, it guarantees detached 
objects from the application cache, while guaranteeing the same reference for 
logically identical objects, but only for the life of the PersistenceBroker. 
So, if the parent object is created in one PB instance, and the collection 
proxy is materialized in another PB instance, you will get different references 
for your 'folder' object.

Can you tell me a little bit more about your application: Is this a web app? 
What is the lifecycle of your PersistenceBrokers (do you use one for each 
action? Keep one open for the entire request?)? Does the materialization of your 
parent 'folder' object happen in the same request as when the collection proxy 
is 'touched' and forced to be materialized?

-Andrew







 -Original Message-
 From: Maksimenko Alexander [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, April 28, 2005 8:03 AM
 To: OJB Users List
 Subject: Re: lazy materialization problem
 
 Martin Kalén wrote:
 
  Maksimenko Alexander wrote:
 
  I have tree like structure (with parent,children 
 relationships). I'm 
  using proxies to lazy materialize them. Everything works 
 well but in 
  particular cases I have to materialize the whole tree because
  folder.getChildren().get(0).getParent() is not the same as 
 folder and 
  sometimes it makes the problem. How can I tell OJB to prefetch 
  references parent and children for all folders? Or is there 
  simpler way to do this?
 
 
  I am not sure I understand your question correctly, but it 
 sounds like
  you (possibly among other things) are interested in getting a 
  deterministic
  order on your Collection references?
 
  If that is the case, have a look at the orderby and sort attributes
  of collection-descriptor [1].
 
 Sorry, I was not clear enough :(
 No, I mean that due to lazy initialization, folder and 
 folder.getChildren().get(0).getParent() are equal (by id) but not the 
 same objects. Let's say we have:
 Folder folder = ... // get from OJB
 Folder child = folder.getChildren().get(0);
 folder.changeName("new-name"); // but child.getParent().getName() doesn't 
 equal "new-name"
 
 It's okay in most cases, but in this one case I need to prevent it. I 
 think this problem will disappear if I can tell OJB not to use proxies 
 for particular objects when getting them from the database, but I don't 
 know how I can do this.
 Hope I was clear now ;)
 Thanks
 Alexander
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: partial materialization?..

2005-04-12 Thread Clute, Andrew
The simplest way to handle this now in the current OJB framework is to
intercept the writeObject() serialization method, and then decide whether
you want to swap your proxies out for real objects.

For example, we do this as well to serialize out full graphs that are in
HttpSession. Our base object has implemented the writeObject() method:

private void writeObject(ObjectOutputStream out) throws IOException {
   ProxyUtil.backfillProixes(this);
   out.defaultWriteObject();
} 

Our helper method does the following:

 public static void backfillProixes(BusinessObjectProxyInterface bizObj)
 {
     Class cls = bizObj.getClass();
     while (!cls.getName().equals(Object.class.getName())) {
         // Need to backfill in all real objects as opposed to proxies.
         try {
             Field[] fields = cls.getDeclaredFields();
             for (int i = 0; i < fields.length; i++) {
                 Field field = fields[i];
                 field.setAccessible(true);
                 if (BusinessObjectProxyInterface.class.isAssignableFrom(field.getType())) {
                     Object object = field.get(bizObj);
                     if (object instanceof Proxy) {
                         PersistenceServicableIndirectionHandler ih =
                             (PersistenceServicableIndirectionHandler) Proxy.getInvocationHandler(object);
                         field.set(bizObj, ih.getRealSubject());
                     }
                 }
             }
             cls = cls.getSuperclass();
         } catch (SecurityException e) {
             e.printStackTrace();
         } catch (IllegalArgumentException e) {
             e.printStackTrace();
         } catch (IllegalAccessException e) {
             e.printStackTrace();
         }
     }
 }


In this case BusinessObjectProxyInterface is a base interface class that
all of our proxy interfaces extend from. You will want to customize it
for your own use.

Hope this helps.

-Andrew

 -Original Message-
 From: Kirill Petrov [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, April 12, 2005 1:50 AM
 To: ojb-user@db.apache.org
 Subject: partial materialization?..
 
 Hello everybody,
 
 I have a database that has very complex objects called 
 Models. From time to time I need to present the user only 
 with the names of those models. 
 Since I don't want to instantiate every model only for that, 
 I used dynamic proxies for all the classes that comprise a 
 particular Model object.
 
 However, sometimes I need to instantiate a whole model, 
 serialize it and send it through the web service. In this 
 case, if I just get a model from the database and send it 
 over the wire, I end up sending a bunch of proxy objects 
 instead of a real model.
 
 What's the right solution for this problem?
 
 Kirill
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Will a two-level cache solve this problem?

2005-03-20 Thread Clute, Andrew
The Two Level Cache is part of the OJB 1.0.2 release, which should be released 
any day now.
 
You can upgrade then and change your Cache declaration in your repository.xml 
file to specify that you want the TLCache.
 
I am using it now (with a release build), and it solves this problem. There is a 
slight performance degradation, but it is not significant, and it is definitely worth it to 
ensure clean objects from the cache.
 
-Andrew



From: Ziv Yankowitz [mailto:[EMAIL PROTECTED]
Sent: Sun 3/20/2005 2:51 AM
To: OJB Users List
Subject: RE: Will a two-level cache solve this problem?



Hi All,

We are new to OJB and we have the same problem.
We are using OJB 1.1; can someone please explain how we can implement the 
two-level cache?

Thanks

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 09, 2005 5:16 PM
To: OJB Users List
Subject: Re: Will a two-level cache solve this problem?


Clute, Andrew wrote:
 Good news, I think.

 Just so I can understand, I want to clarify: The global cache will be
 the same as the cache today, and will contain full graphs.

The second level cache only contains flat objects, but as the second level
cache you can declare any of the ObjectCache implementations.
When OJB looks up an object from the TwoLevelCache, the second level looks up
the flat objects and materializes the full object graph. Here is the
only performance drawback: to materialize the 1:n and m:n relations OJB
has to query for the references' ids.

Maybe you could mix the used cache strategies. In the
jdbc-connection-descriptor declare the TLCache, and for read-only objects
(or rarely updated objects) declare the default cache in the class-descriptor.


 When an
 object is retrieved, a copy of the object is returned to the client, and
 that copy is placed into the second-level global cache?

Right.


So, any object
 that is used from a retrieve mechanism is dereferenced from the objects
 that are in the cache, and whatever the client does to them is not
 affecting the cache?


That's the theory ;-)


 If so, that is very cool! I don't really want to worry about a locking
 strategy, because it seems to be overhead that we don't need -- using
 optimistic locking works well enough for us. This seems like it gives me
 the best of both worlds -- I don't have to worry about read locks, but I
 also don't have to worry about mutating the global cache until my TX
 commits.

 I would assume that the second-level cache doesn't commit to the global
 cache until the Tx commits, right?

Right, except for new materialized objects. These objects will be
immediately pushed to the second level cache (flat copies). I introduce
this to populate the cache, otherwise only changed objects will be put
in the cache.


 I would also assume that JTA based TX
 won't make a difference?


Yep, should work in non- and managed environments.


 All very cool stuff!


Wait and see ;-)

 You mentioned that this will be included in the next release, but I
 assume you mean 1.1, and not 1.0.2, right?

No, it will be part of the upcoming 1.0.2 release (scheduled for Sunday).

Armin

 If it is meant for 1.1, is
 there a release that is stable enough if all I care to do is add this
 caching-strategy to a 1.0.X release featureset?

 Thanks for all the help!

 -Andrew



 

 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 09, 2005 9:40 AM
 To: OJB Users List
 Subject: Re: Will a two-level cache solve this problem?

 Hi Andrew,

   So, my question is will the introduction of a two-level cache isolate

clients of OJB from mutating the object that is in the real cache?


 yep!

   Are
   the objects that are in the local cache versus the global cache  
 different references, or are they the same?
  

 They are different, the second level cache only deal with flat (no
 references populated) copies of the persistent class objects. The used
 CopyStrategy is pluggable.

 In OJB_1_0_RELEASE branch the first version of the two-level cache work
 this way (will be included in next release).


   Is my only true option to go with an ODMG/OTM locking strategy to
   isloate my reads from writes?
  

 You could write an thin layer above the PB-api using the kernel locking
 api in org.apache.ojb.broker.locking (OJB_1_0_RELEASE branch).

 regards,
 Armin


 Clute, Andrew wrote:

Hello all!

I have a standard 3-tier webapp back with OJB in my business layer. We
are using the PB API. We have a host of domain objects, that is passed
up to the web tier and used for form manipulation.

The standard pattern for us when editing an object is:

1) Retrieve business object from PersistenceService
2) Use object and integrate it to set form elements
3) Place object into HttpSession for later
4) On submit pass, take object out of HttpSession, and then populate
date from form back into object
5) Save object through PB

We are using the default caching strategy as it provides us with the
most

RE: VARCHAR columns?

2005-03-14 Thread Clute, Andrew
Which API are you using? What does your criteria look like?

Without knowing much about how you are selecting your data, I am going
to guess that you are doing an 'addEqualToColumn' as opposed to an
'addEqualTo'. The first one assumes the String you pass in is a column
name (hence the lack of quotes around it), versus the value of a column.
-Andrew

 

-Original Message-
From: Bobby Lawrence [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 10, 2005 5:20 PM
To: ojb-user@db.apache.org
Subject: VARCHAR columns?

I have a table called projects.
Given a project_id (VARCHAR) in one table, I want to look up the project
object from the projects table.
The projects table has a primary key (project_id) that is also defined
as a VARCHAR.
Problem is, when OJB writes the select statement, it doesn't include the
single quotes in the where clause.
It creates the SQL like this:
--
SELECT
    A0.PROJECT_ID,
    A0.LAST_NAME,
    A0.FIRST_NAME,
    A0.ORG_ID,
    A0.EMAIL_ADDRESS,
    A0.USER_NAME
FROM PROJECTS A0
WHERE A0.PROJECT_ID = SDC010
--

Is this a bug?  Am I doing something wrong?

Repo.xml 

  <class-descriptor
      class="org.jlab.mis.services.mics.client.generated.ComputingJob"
      table="jobs">
    <field-descriptor name="beginTime" column="STIME"
        jdbc-type="TIMESTAMP" primarykey="true"
        conversion="org.apache.ojb.broker.accesslayer.conversions.Calendar2TimestampFieldConversion"/>
    <field-descriptor name="endTime" column="ETIME"
        jdbc-type="TIMESTAMP"
        conversion="org.apache.ojb.broker.accesslayer.conversions.Calendar2TimestampFieldConversion"/>
    <field-descriptor name="queuedTime" column="QUEUED_TIME"
        jdbc-type="TIMESTAMP"
        conversion="org.apache.ojb.broker.accesslayer.conversions.Calendar2TimestampFieldConversion"/>
    <field-descriptor name="numNodesUsed" column="NODE_COUNT" jdbc-type="INTEGER"/>
    <field-descriptor name="numCpusUsed" column="CPU_COUNT" jdbc-type="INTEGER"/>
    <field-descriptor name="chargeFactor" column="CHARGE_FACTOR" jdbc-type="DOUBLE"/>
    <field-descriptor name="projectId" column="PROJECT_ID" jdbc-type="VARCHAR" access="anonymous"/>
    <reference-descriptor name="projectJobAllocatedAgainst"
        class-ref="org.jlab.mis.services.mics.client.generated.Project">
      <foreignkey field-ref="projectId"/>
    </reference-descriptor>
  </class-descriptor>

  <class-descriptor
      class="org.jlab.mis.services.mics.client.generated.Project"
      table="projects">
    <field-descriptor name="id" column="PROJECT_ID" jdbc-type="VARCHAR" primarykey="true"/>
    <field-descriptor name="name" column="PROJECT_NAME" jdbc-type="VARCHAR"/>
    <field-descriptor name="initialAllocation" column="TOTAL_HOURS" jdbc-type="BIGINT"/>
    <field-descriptor name="fiscalYear" column="FISCAL_YEAR" jdbc-type="VARCHAR"/>
  </class-descriptor>


Has anyone else experienced this?

--

Bobby Lawrence
MIS Application Developer

Jefferson Lab (www.jlab.org)

 Email: [EMAIL PROTECTED]
Office: (757) 269-5818
 Pager: (757) 584-5818






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Will a two-level cache solve this problem?

2005-03-09 Thread Clute, Andrew
Hello all!
 
I have a standard 3-tier webapp backed by OJB in my business layer. We
are using the PB API. We have a host of domain objects that are passed
up to the web tier and used for form manipulation.
 
The standard pattern for us when editing an object is:
 
1) Retrieve business object from PersistenceService
2) Use object and integrate it to set form elements
3) Place object into HttpSession for later
4) On submit pass, take object out of HttpSession, and then populate
date from form back into object
5) Save object through PB
 
We are using the default caching strategy as it provides us with the
most amount of performance increase. A lot of our objects are static (we
are 90% read, 10% write) so we really want to keep that in place.
 
However, the problem arises with the fact that the web app is munging
with the same object reference that is in the cache! So, in my pattern
above, while we are updating the object in Session, we are also updating
the object in the cache. We have gotten around it by cloning every object
we return from OJB. I really don't like that and want to get away
from it.
 
I know that one solution to this is ODMG and to implement read/write
locks. I have been trying to stay away from that, only because it seems
like I can't find a clean pattern to establish a write lock on the
submit pass of a form when the object is in HttpSession.
 
So, my question is will the introduction of a two-level cache isolate
clients of OJB from mutating the object that is in the real cache? Are
the objects that are in the local cache versus the global cache
different references, or are they the same?

Is my only true option to go with an ODMG/OTM locking strategy to
isolate my reads from writes?


RE: Will a two-level cache solve this problem?

2005-03-09 Thread Clute, Andrew
Good news, I think.

Just so I can understand, I want to clarify: The global cache will be
the same as the cache today, and will contain full graphs. When an
object is retrieved, a copy of the object is returned to the client, and
that copy is placed into the second-level global cache? So, any object
that is used from a retrieve mechanism is dereferenced from the objects
that are in the cache, and whatever the client does to them is not
affecting the cache?

If so, that is very cool! I don't really want to worry about a locking
strategy, because it seems to be overhead that we don't need -- using
optimistic locking works well enough for us. This seems like it gives me
the best of both worlds -- I don't have to worry about read locks, but I
also don't have to worry about mutating the global cache until my TX
commits.

I would assume that the second-level cache doesn't commit to the global
cache until the Tx commits, right? I would also assume that JTA based TX
won't make a difference?

All very cool stuff!

You mentioned that this will be included in the next release, but I
assume you mean 1.1, and not 1.0.2, right? If it is meant for 1.1, is
there a release that is stable enough if all I care to do is add this
caching-strategy to a 1.0.X release featureset?

Thanks for all the help!

-Andrew



 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 09, 2005 9:40 AM
To: OJB Users List
Subject: Re: Will a two-level cache solve this problem?

Hi Andrew,

  So, my question is will the introduction of a two-level cache isolate
 clients of OJB from mutating the object that is in the real cache?

yep!

  Are
  the objects that are in the local cache versus the global cache  
different references, or are they the same?
 

They are different, the second level cache only deal with flat (no
references populated) copies of the persistent class objects. The used
CopyStrategy is pluggable.

In OJB_1_0_RELEASE branch the first version of the two-level cache work
this way (will be included in next release).


  Is my only true option to go with an ODMG/OTM locking strategy to
  isloate my reads from writes?
 

You could write an thin layer above the PB-api using the kernel locking 
api in org.apache.ojb.broker.locking (OJB_1_0_RELEASE branch).

regards,
Armin


Clute, Andrew wrote:
 Hello all!
  
 I have a standard 3-tier webapp back with OJB in my business layer. We
 are using the PB API. We have a host of domain objects, that is passed
 up to the web tier and used for form manipulation.
  
 The standard pattern for us when editing an object is:
  
 1) Retrieve business object from PersistenceService
 2) Use object and integrate it to set form elements
 3) Place object into HttpSession for later
 4) On submit pass, take object out of HttpSession, and then populate
 date from form back into object
 5) Save object through PB
  
 We are using the default caching strategy as it provides us with the
 most amount of performance increase. A lot of our objects are static
(we
 are 90% read, 10% write) so we really want to keep that in place.
  
 However, the problem arises with the fact that the web app is munging
 with the same object reference that is in the cache! So, in my pattern
 above, while we are updating the object in Session, we are also
updating
 the object in the cache. We have gotten around it by every object we
 return from OJB we clone. I really don't like that and want to get
away
 from it.
  
 I know that one solution to this is ODMG and to implement read/write
 locks. I have been trying to stay away from that, only because it
seems
 like I can't find a clean pattern to establish a write lock on the
 submit pass of a form when the object is in HttpSession.
  
 So, my question is will the introduction of a two-level cache isolate
 clients of OJB from mutating the object that is in the real cache? Are
 the objects that are in the local cache versus the global cache
 different references, or are they the same?
 
 Is my only true option to go with an ODMG/OTM locking strategy to
 isloate my reads from writes?
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Will a two-level cache solve this problem?

2005-03-09 Thread Clute, Andrew
Armin, you are a god! ;)

Thanks for the information, all very cool!

I do have one more question, which I assume a lot of people will have:
If I am reading this right, when using a TLCache, objects that are
returned to a client that have 1:N or M:N relationships will require
another query to get the FKs for those collections? How does this work
when using Proxies for collections?

Am I only having to pay the hit of a single query to get all the FKs,
and *not* having to pay for retrieving the full objects (which would make
proxied collections worthless)? I assume you need the FKs to accurately
handle the copying between cache levels. Does this all sound accurate?

Thanks again, this 1.0.2 is really shaping up to be more than a point
release!

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 09, 2005 10:16 AM
To: OJB Users List
Subject: Re: Will a two-level cache solve this problem?

Clute, Andrew wrote:
 Good news, I think.
 
 Just so I can understand, I want to clarify: The global cache will be 
 the same as the cache today, and will contain full graphs.

The second level cache only contains flat objects, but as the second level
cache you can declare any of the ObjectCache implementations.
When OJB looks up an object from the TwoLevelCache, the second level looks up
the flat objects and materializes the full object graph. Here is the
only performance drawback: to materialize the 1:n and m:n relations OJB
has to query for the references' ids.

Maybe you could mix the used cache strategies. In the
jdbc-connection-descriptor declare the TLCache, and for read-only objects
(or rarely updated objects) declare the default cache in the class-descriptor.


 When an
 object is retrieved, a copy of the object is returned to the client, 
 and that copy is placed into the second-level global cache?

Right.


So, any object
 that is used from a retrieve mechanism is dereferenced from the 
 objects that are in the cache, and whatever the client does to them is

 not affecting the cache?
 

That's the theory ;-)


 If so, that is very cool! I don't really want to worry about a locking

 strategy, because it seems to be overhead that we don't need -- using 
 optimistic locking works well enough for us. This seems like it gives 
 me the best of both worlds -- I don't have to worry about read locks, 
 but I also don't have to worry about mutating the global cache until 
 my TX commits.
 
 I would assume that the second-level cache doesn't commit to the 
 global cache until the Tx commits, right?

Right, except for new materialized objects. These objects will be
immediately pushed to the second level cache (flat copies). I introduce
this to populate the cache, otherwise only changed objects will be put
in the cache.


 I would also assume that JTA based TX
 won't make a difference?
 

Yep, should work in non- and managed environments.


 All very cool stuff!


Wait and see ;-)

 You mentioned that this will be included in the next release, but I 
 assume you mean 1.1, and not 1.0.2, right?

No, it will be part of the upcoming 1.0.2 release (scheduled for
Sunday).

Armin

 If it is meant for 1.1, is
 there a release that is stable enough if all I care to do is add this 
 caching-strategy to a 1.0.X release featureset?
 
 Thanks for all the help!
 
 -Andrew
 
 
 
  
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 09, 2005 9:40 AM
 To: OJB Users List
 Subject: Re: Will a two-level cache solve this problem?
 
 Hi Andrew,
 
   So, my question is will the introduction of a two-level cache 
 isolate
 
clients of OJB from mutating the object that is in the real cache?
 
 
 yep!
 
   Are
   the objects that are in the local cache versus the global cache   
 different references, or are they the same?
  
 
 They are different, the second level cache only deal with flat (no 
 references populated) copies of the persistent class objects. The used

 CopyStrategy is pluggable.
 
 In OJB_1_0_RELEASE branch the first version of the two-level cache 
 work this way (will be included in next release).
 
 
   Is my only true option to go with an ODMG/OTM locking strategy to  
  isloate my reads from writes?
  
 
 You could write an thin layer above the PB-api using the kernel 
 locking api in org.apache.ojb.broker.locking (OJB_1_0_RELEASE branch).
 
 regards,
 Armin
 
 
 Clute, Andrew wrote:
 
Hello all!
 
I have a standard 3-tier webapp back with OJB in my business layer. We

are using the PB API. We have a host of domain objects, that is passed

up to the web tier and used for form manipulation.
 
The standard pattern for us when editing an object is:
 
1) Retrieve business object from PersistenceService
2) Use object and integrate it to set form elements
3) Place object into HttpSession for later
4) On submit pass, take object out of HttpSession, and then populate 
date from form back into object
5) Save object through PB
 
We are using the default caching

Decomposed Collection and isNull() -- How to make it work?

2005-01-11 Thread Clute, Andrew
If I have a collection descriptor that is mapped via a decomposed M:N
relationship, it errors out when I attempt to add an isNull criteria for
the relationship.
 
For example:
 
I have an object 'foo'
It has a collection of objects 'bar' that is mapped via an
indirection-table and an M:N relationship, and the collection field name
on 'foo' is called 'bars'.
When I attempt to do criteria.addEqualTo("bars.name", "Cool") -- that
works fine. It will add the inner join correctly.
 
However, when I do criteria.isNull("bars"), it pukes and attempts to add
the field name into the generated SQL query. It doesn't add the inner
join to the indirection table and then add the IS NULL criteria on
that.
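
For reference, a sketch of the two cases being compared (Foo and 'bars' are the names from
this mail; the broker/query plumbing is assumed):

Criteria crit = new Criteria();
// works: OJB adds the inner join through the indirection table for the path expression
crit.addEqualTo("bars.name", "Cool");
// the IS NULL criteria on "bars" described above is the part that fails
Query query = QueryFactory.newQuery(Foo.class, crit);
Collection foos = broker.getCollectionByQuery(query);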
 
 
Am I missing something, or is this a legit bug?
 
-Andrew
 
 
 


RE: Way to do Outer Joins for orderby's in Collection descriptor?

2004-12-14 Thread Clute, Andrew
Thanks! That does exactly what I need. I forgot about that feature!

These OJB guys have thought of everything!

-Andrew 

-Original Message-
From: Daniel Perry [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 14, 2004 6:18 AM
To: OJB Users List
Subject: RE: Way to do Outer Joins for orderby's in Collection
descriptor?

Not sure if there's an easier way, but you can modify the criteria/query
used in collection descriptors by writing a class that implements
QueryCustomizer.  This class can do anything with the
Criteria/QueryByCriteria in use.

Then specify it in your collection-descriptor:
<query-customizer class="myquerycustomizerclass">
  <attribute attribute-name="something" attribute-value="value"/>
</query-customizer>

Note you can pass attributes to customizers.
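
A sketch of such a customizer forcing an outer join on a configured path. The class name and
attribute name are made up, and the customizeQuery signature is my recollection of the
QueryCustomizer/QueryCustomizerDefaultImpl API, so check it against the Javadoc for your OJB
version:

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.accesslayer.QueryCustomizerDefaultImpl;
import org.apache.ojb.broker.metadata.CollectionDescriptor;
import org.apache.ojb.broker.query.Query;
import org.apache.ojb.broker.query.QueryByCriteria;

public class OuterJoinOrderByCustomizer extends QueryCustomizerDefaultImpl {

    public Query customizeQuery(Object owner, PersistenceBroker broker,
                                CollectionDescriptor cod, QueryByCriteria query) {
        // path supplied via an attribute element in the collection-descriptor
        String path = getAttribute("outer-join-path");
        if (path != null) {
            query.setPathOuterJoin(path);
        }
        return query;
    }
}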

Daniel.

 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: 13 December 2004 21:33
 To: OJB Users List
 Subject: Way to do Outer Joins for orderby's in Collection descriptor?


 When you define an orderby clause inside a collection descriptor, it 
 seems to default to doing an INNER JOIN.

 I know about the ability to do QueryByCriteria.setPathOuterJoin, 
 but that assumes you have a criteria. How do you (or is it even 
 possible to) set that path for order-by statements inside of a 
 collection descriptor?

 Thanks!

 -Andrew





-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


BUG: I thought auto-update was suppose to default to NONE?

2004-10-18 Thread Clute, Andrew
For collection descriptors, I thought that if you left the auto-update
field off, it was supposed to default to false -- and thus CASCADE_NONE.

However, I am seeing where that is not the case, but instead the code
inside of ObjectReferenceDescriptor is defaulting 'false' to
CASCADE_LINK, with a note that CollectionDescriptor needs to override
and set 'false' equal to CASCADE_NONE.

However, that code is not written.

Am I missing something here?

-Andrew



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: JBoss have to close connection in managed environment and nes ted EJB calls

2004-09-09 Thread Clute, Andrew
I don't know if this was ever acknowledged on the list, but JBoss recognizes that they 
do it this way, and it is how they want to do it.

You can turn off the Connection checking.

http://www.jboss.org/wiki/Wiki.jsp?page=WhatDoesTheMessageDoYourOwnHousekeepingMean

See Thread Local Pattern

-Andrew

 

-Original Message-
From: André Markwalder [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 05, 2004 5:09 AM
To: OJB Users List
Subject: RE: JBoss have to close connection in managed environment and nes ted EJB 
calls

Hi Armin,

Thanks for spending hours of investigation.

I think it is absolutely correct, that OJB uses only one PersistenceBroker and as you 
described it seems that it is a problem of JBoss.

Again thanks a lot for your detailed description.

regards,
André



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Samstag, 3. Juli 2004 17:45
To: OJB Users List
Subject: Re: JBoss have to close connection in managed environment and nes ted EJB 
calls


Hi Andre,

after spending hours of investigation, I think I found the reason for the warning.

Seems JBoss doesn't recognize a connection.close() call when:
- bean 1 creates a PB instance and does some work
- in bean 1 another bean (bean 2) is used
- bean 2 creates a PB instance. Internally OJB uses the same PB instance, thus both use 
the same internal PB instance wrapped by different handles. 
The used PB was already associated with a connection in bean 1, thus bean
2 uses the same connection handle.
- now bean 2 closes the used PB handle; internally the PB instance only releases/closes the 
used connection
- now bean 1 performs additional work, thus the PB instance creates a new connection 
(because bean 2 closed it) and closes it after use (PB close call in bean 1)
- bean 1's method ends and the container commits the transaction

Now the problem occurs, because JBoss does not recognize that the first connection 
created in bean 1 was closed in bean 2 and logs a warning about the unclosed connection.

If you comment out line 110 in PersistenceBrokerFactorySyncImpl or use version 1.5 of 
PersistenceBrokerFactorySyncImpl the warning does not occur.

In version 1.6 I introduced the behaviour that different beans in the same tx internally use 
the same PB instance (I think this is similar to DataSource
handling) to avoid massive PB instance creation for bean-in-bean calls.

See
http://nagoya.apache.org/eyebrowse/[EMAIL PROTECTED]msgId=1693533

The question now is: is OJB wrong in handling connections, or should JBoss allow this? 
I don't know the answer.


**
Here is my test:

Have a look how OJB handles connection '[EMAIL PROTECTED]'
(the bean source code can be found below)

--- test start an lookup first PB in first bean
...
16:53:17,625 INFO  [CacheDistributor] Create new 
ObjectCacheImplementation for 'default'
16:53:17,625 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]: false
16:53:17,625 INFO  [STDOUT] ## broker1: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter.
[EMAIL PROTECTED]

--- query in first bean, connection was created, now we call the nested 
bean

16:53:17,625 INFO  [STDOUT] ### DO nested bean call
16:53:17,625 INFO  [PersonArticleManagerPBBean] ** Found bean: 
org.apache.ojb.ejb.pb.ArticleManagerPBBeanLocal:Stateless
16:53:17,640 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]: false
16:53:17,640 INFO  [PersistenceBrokerImpl] Cascade store for this 
reference-descriptor (category) was set to false.
...

16:53:17,656 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]: false
16:53:17,656 INFO  [PersistenceBrokerImpl] Cascade store for this 
reference-descriptor (category) was set to false.
16:53:17,656 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]: false
16:53:17,671 ERROR [STDERR]  release connection: 
[EMAIL PROTECTED] 
connection=org.jboss.resource
[EMAIL PROTECTED] thread: Thread[RMI TCP 
Connection(2)-217.224.94.148,5,RMI Runtime]
16:53:17,671 INFO  [STDOUT]  close con: 
[EMAIL PROTECTED]
16:53:17,671 INFO  [STDOUT]  is closed: true
16:53:17,671 INFO  [STDOUT] ### END nested bean call

--- nested bean call is finished and '[EMAIL PROTECTED]' is 
closed!! But the second bean closed the connection created by the first bean.
bean1 now starts to insert objects and creates a new connection, because 
the first one was closed by the nested bean


16:53:17,671 INFO  [STDOUT] ## broker1: now store objects
16:53:17,671 INFO  [STDOUT]  create con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]
16:53:17,671 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 
connection=org.jboss.resource.adapter
[EMAIL PROTECTED]: false
16:53:17,671 INFO  [STDOUT]  lookup con: 
[EMAIL PROTECTED] 

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-16 Thread Clute, Andrew
Well, I believe I have found the crux of the issue.

I currently have two things deployed to my JBoss server, both of which use
commons-lang (my non-OJB app is a Tapestry app and is using lang 1.0).
When my non-OJB app is deployed, I get the issue. However, when I
undeploy that app and my OJB app is the only one deployed, I can
redeploy as often as I would like.

So, obviously this is one of those infamous JBoss ClassLoader issues
(flat classloader space), and as such, I am trying to figure out a
workaround. So, it seems like OJB really has no issue; it was just
bearing the brunt of the JBoss issues.

Thanks for all the help...and once I have found a working solution, I
will post it for all to see.

-Andrew

 

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 5:29 PM
To: OJB Users List
Subject: RE: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

I am wondering if it has something to do with the fact that
SerializationUtils uses ObjectInputStream to serialize/deserialize the
objects, and ObjectInputStream on deserialization does a
Class.forName() to create the new object -- which in the J2EE
classloader world can cause problems. I think that would explain why it
would use the previous versions. I am posting a message to the JBoss
group to see if my hypothesis is correct.

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 5:25 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

  So, now I need to figure out why this is happening. Something
 looks weird for the after-serialization version after redeploying, since

 the url for that class is null. Not sure where it is loading it from,
or   why it has a stored copy of it.
 

I must admit that I don't have a clue...

Did you check commons-lang.jar? SerializationUtils is part of
commons-lang and if this jar wasn't redeployed it will use the 'old' 
class-loader. Or is commons-lang duplicated in the classpath?

regards,
Armin



Clute, Andrew wrote:
 Well, I have narrowed the issue down further, but still do not have a 
 solution yet. In ConnectionRepository.getAllDescriptor(), the 
 JdbcConnectionDescriptors that are in the current repository are cloned 
 (serialized) into another list and returned. I made the guess (and I was 
 right) that when this error is exposed, the JdbcConnectionDescriptors 
 that are returned from the serialization are loaded in a different 
 classloader than the ones that OJB creates!
 
 To prove this, I changed the code for that method from:
 
 [code]
 public List getAllDescriptor()
 {
     return (List) SerializationUtils.clone(new ArrayList(jcdMap.values()));
 }
 [/code]
 
 To:
 
 [code]
 public List getAllDescriptor()
 {
     Iterator it = jcdMap.values().iterator();
     while (it.hasNext()) {
         Object o = it.next();
         System.out.println("ClassLoader for " + o.getClass().getName()
                 + " before Serialization: " + o.getClass().getClassLoader());
     }
 
     List returnList = (List) SerializationUtils.clone(new ArrayList(jcdMap.values()));
     it = returnList.iterator();
     while (it.hasNext()) {
         Object o = it.next();
         System.out.println("ClassLoader for " + o.getClass().getName()
                 + " after Serialization: " + o.getClass().getClassLoader());
     }
 
     return returnList;
 }
 [/code]
 
 And as I assumed, the first time my application is deployed, the 
 classloader for the Connection is the same for both what OJB uses, and

 what SerializationUtils uses:
 
 17:02:09,592 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
 ,addedOrder=37}
 17:02:18,811 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
 ,addedOrder=37}
 
 
 But, after redeploying it, the classloader for OJB changes (as I would

 assume is correct), but the classloader for SerializationUtils stays 
 the same as the previous version! Oops!
 
 17:03:04,780 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56537OSNCore.ear
 ,addedOrder=38}
 17:03:11,280 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
 Serialization: [EMAIL PROTECTED]
 url=null ,addedOrder=37}
 
 So, now I need to figure out why this is happening. Something thing 
 looks weird for the after-serilization

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
I am almost certain that is a ClassLoader issue. 

Yes, my deployment looks almost exactly the same as Stephen's (in fact, I
chimed in when he first posted it, stating that is already how I was
doing it, and it worked fine).

Now, something I forgot to mention: We have only started seeing this
since we upgraded to 1.0 from 1.0RC6. We see the problem on both our dev
server that is on Jboss 3.2.3, and on my development machine that is on
Jboss 3.2.5.

Are there any known parts to the OJB Metadata and Configuration stuff
that live through redeployments (i.e. are static)?
-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 2:14 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Hi Andrew,

think this is a ClassLoader problem. Maybe ojb.jar itself or one of the
jars OJB depends on is not correctly reloaded.

Did you follow the instructions made by Stephen Ting

http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+fil
e

regards,
Armin


Clute, Andrew wrote:
 I am running OJB 1.0 with JBoss 3.2.5.
 
 On *occasional* redeployments of my EAR file (with nested Jars and 
 Wars) I will get a nasty ClassCastException that is only fixable by 
 restarting Jboss. This happens in the
MetadataManager.buildDefaultKey() method.
 
 The top part of the stack trace is posted below. From what I can tell,

 the exception stems from not that it is the wrong class attempting to 
 be casted, but it is an instance of a class that is from a previous 
 deployment (and thus classloader) that is trying to be casted in to 
 the same class type in a new class loader.
 
 I have taken a quick look at MetadataManager, and don't see anything 
 terribly obvious as to the cause -- which I would assume is a static 
 instance to the Collection of JdbcConnectionsDescriptors. There is a a

 ThreadLocal variable, but I don't think that is the cause.
 
 So, my question is: has anyone else seen this? Can anyone think of why

 on a undeployment that not all of the OJB classes are removed from the

 VM?
 
 Thanks!
 
 Here is the stacktrace:
 
 2004-08-11 13:24:22,923 ERROR [org.jboss.ejb.plugins.LogInterceptor]
 RuntimeException:
 java.lang.ClassCastException
   at
 org.apache.ojb.broker.metadata.MetadataManager.buildDefaultKey(Unknown
 Source)
   at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
 Source)
   at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
 Source)
   at
 org.apache.ojb.broker.metadata.MetadataManager.getInstance(Unknown
 Source)
   at
 org.apache.ojb.broker.core.PersistenceBrokerFactoryBaseImpl.getDefault
 Ke
 y(Unknown Source)
   at
 org.apache.ojb.broker.core.PersistenceBrokerFactoryBaseImpl.defaultPer
 si
 stenceBroker(Unknown Source)
   at
 org.apache.ojb.broker.PersistenceBrokerFactory.defaultPersistenceBroke
 r(
 Unknown Source)
   at
 org.osn.persistence.PersistenceSessionPBImpl.getBroker(PersistenceSess
 io
 nPBImpl.java:79)
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
Armin,

Could you clarify for me what you mean by "I think that some jar files
changed between rc6 and 1.0"? Are you saying that dependencies were
removed that rc6 had but 1.0 doesn't need? My Class-Path entry from my
EJB jar file contains the following entries:

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.1
Created-By: 1.4.2-b28 (Sun Microsystems Inc.)
Built-By: andrew.clute
Class-Path: Merlia.jar OSNHtml.jar antlr.jar commons-beanutils.jar com
 mons-collections.jar commons-dbcp.jar commons-digester.jar commons-fi
 leupload.jar commons-lang.jar commons-logging.jar commons-pool.jar co
 mmons-validator.jar db-ojb-1.0.0-src.jar db-ojb-1.0.0.jar jakarta-poi
 -1.5.1.jar p6spy.jar

Are you thinking that there are unnecessary entries in it? I guess I am
not sure, based on your statement, what cause or solution I should be
looking for. Thanks!

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 2:34 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 I am almost certain that is a ClassLoader issue. 
 
 Yes, my deployment looks almost the exact same as Stephen's (in fact, 
 I chimed in when he first posted that stating that is already how I 
 was doing it, and it worked fine).
 
 Now, something I forgot to mention: We have only started seeing this 
 since we upgraded to 1.0 from 1.0RC6. We see the problem on both our 
 dev server that is on Jboss 3.2.3, and on my development machine that 
 is on Jboss 3.2.5.
 
 Are there any known parts to the OJB Metadata and Configuration stuff 
 that lives through redeployments (i.e. is static)?

As far as I know the ClassLoader take care of static instances too.
Did you check all jar names and Class-Path entries in your config files?

I think that some jar files changed between rc6 and 1.0

Armin


 -Andrew
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:14 PM
 To: OJB Users List
 Subject: Re: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Hi Andrew,
 
 think this is a ClassLoader problem. Maybe ojb.jar itself or one of 
 the jars OJB depends on is not correctly reloaded.
 
 Did you follow the instructions made by Stephen Ting
 
 http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+f
 il
 e
 
 regards,
 Armin
 
 
 Clute, Andrew wrote:
 
I am running OJB 1.0 with JBoss 3.2.5.

On *occasional* redeployments of my EAR file (with nested Jars and
Wars) I will get a nasty ClassCastException that is only fixable by 
restarting Jboss. This happens in the
 
 MetadataManager.buildDefaultKey() method.
 
The top part of the stack trace is posted below. From what I can tell,
 
 
the exception stems from not that it is the wrong class attempting to 
be casted, but it is an instance of a class that is from a previous 
deployment (and thus classloader) that is trying to be casted in to 
the same class type in a new class loader.

I have taken a quick look at MetadataManager, and don't see anything 
terribly obvious as to the cause -- which I would assume is a static 
instance to the Collection of JdbcConnectionsDescriptors. There is a a
 
 
ThreadLocal variable, but I don't think that is the cause.

So, my question is: has anyone else seen this? Can anyone think of why
 
 
on a undeployment that not all of the OJB classes are removed from the
 
 
VM?

Thanks!

Here is the stacktrace:

2004-08-11 13:24:22,923 ERROR [org.jboss.ejb.plugins.LogInterceptor]
RuntimeException:
java.lang.ClassCastException
  at
org.apache.ojb.broker.metadata.MetadataManager.buildDefaultKey(Unknown
Source)
  at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
Source)
  at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
Source)
  at
org.apache.ojb.broker.metadata.MetadataManager.getInstance(Unknown
Source)
  at
org.apache.ojb.broker.core.PersistenceBrokerFactoryBaseImpl.getDefault
Ke
y(Unknown Source)
  at
org.apache.ojb.broker.core.PersistenceBrokerFactoryBaseImpl.defaultPer
si
stenceBroker(Unknown Source)
  at
org.apache.ojb.broker.PersistenceBrokerFactory.defaultPersistenceBroke
r(
Unknown Source)
  at
org.osn.persistence.PersistenceSessionPBImpl.getBroker(PersistenceSess
io
nPBImpl.java:79)

 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
Ahh, I don't think that is the case, since my Class-Path setting is
dynamically generated when I produce the EAR by taking all of the jars
in my lib directory and adding them to that setting.

Now, I did not update my commons-* jar files for 1.0 -- and you are
saying that there were some upgrades? I wonder if that could be the
issue.

Thanks!

-Andrew 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 2:48 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 Armin,
 
 Could you clarify for me what you mean by I think that some jar files

 changed between rc6 and 1.0.

sorry, my bad English ;-)
I mean the names of some jars are changed, e.g. 
commons-collections-2.1.1.jar
instead of commons-collections.jar.
Maybe you have a jar in classpath that doesn't match the Class-Path
setting.

regards
Armin

Are you saying that dependencies were
 removed that rc6 had that 1.0 doesn't need? My Class-Path entry from 
 my EJB jar file contains the following entries:
 
 Manifest-Version: 1.0
 Ant-Version: Apache Ant 1.6.1
 Created-By: 1.4.2-b28 (Sun Microsystems Inc.)
 Built-By: andrew.clute
 Class-Path: Merlia.jar OSNHtml.jar antlr.jar commons-beanutils.jar com

 mons-collections.jar commons-dbcp.jar commons-digester.jar commons-fi

 leupload.jar commons-lang.jar commons-logging.jar commons-pool.jar co

 mmons-validator.jar db-ojb-1.0.0-src.jar db-ojb-1.0.0.jar jakarta-poi

 -1.5.1.jar p6spy.jar
 
 Are you thinking that there are unnesscary entries in it? I guess am 
 not sure what the cause or solution would be based on your statement 
 to look for. Thanks!
 
 -Andrew
 
 
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:34 PM
 To: OJB Users List
 Subject: Re: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Clute, Andrew wrote:
 
 
I am almost certain that is a ClassLoader issue. 

Yes, my deployment looks almost the exact same as Stephen's (in fact, 
I chimed in when he first posted that stating that is already how I 
was doing it, and it worked fine).

Now, something I forgot to mention: We have only started seeing this 
since we upgraded to 1.0 from 1.0RC6. We see the problem on both our 
dev server that is on Jboss 3.2.3, and on my development machine that 
is on Jboss 3.2.5.

Are there any known parts to the OJB Metadata and Configuration stuff 
that lives through redeployments (i.e. is static)?
 
 
 As far as I know the ClassLoader take care of static instances too.
 Did you check all jar names and Class-Path entries in your config
files?
 
 I think that some jar files changed between rc6 and 1.0
 
 Armin
 
 
 
-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:14 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Hi Andrew,

think this is a ClassLoader problem. Maybe ojb.jar itself or one of 
the jars OJB depends on is not correctly reloaded.

Did you follow the instructions made by Stephen Ting

http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+f
il
e

regards,
Armin


Clute, Andrew wrote:


I am running OJB 1.0 with JBoss 3.2.5.

On *occasional* redeployments of my EAR file (with nested Jars and
Wars) I will get a nasty ClassCastException that is only fixable by 
restarting Jboss. This happens in the

MetadataManager.buildDefaultKey() method.


The top part of the stack trace is posted below. From what I can 
tell,


the exception stems from not that it is the wrong class attempting to

be casted, but it is an instance of a class that is from a previous 
deployment (and thus classloader) that is trying to be casted in to 
the same class type in a new class loader.

I have taken a quick look at MetadataManager, and don't see anything 
terribly obvious as to the cause -- which I would assume is a static 
instance to the Collection of JdbcConnectionsDescriptors. There is a 
a


ThreadLocal variable, but I don't think that is the cause.

So, my question is: has anyone else seen this? Can anyone think of 
why


on a undeployment that not all of the OJB classes are removed from 
the


VM?

Thanks!

Here is the stacktrace:

2004-08-11 13:24:22,923 ERROR [org.jboss.ejb.plugins.LogInterceptor]
RuntimeException:
java.lang.ClassCastException
 at
org.apache.ojb.broker.metadata.MetadataManager.buildDefaultKey(Unknow
n
Source)
 at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
Source)
 at org.apache.ojb.broker.metadata.MetadataManager.init(Unknown
Source)
 at
org.apache.ojb.broker.metadata.MetadataManager.getInstance(Unknown
Source)
 at
org.apache.ojb.broker.core.PersistenceBrokerFactoryBaseImpl.getDefaul
t
Ke
y(Unknown Source

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
Upgrading to the newest versions of the lib files for OJB did not fix
the problem.

I wish there was some way I could figure out what was keeping the
references to the previous classes around that conflict with the
new classloader. Ugh!

-Andrew

 

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 2:50 PM
To: OJB Users List
Subject: RE: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Ahh, I don't think that is the case, since my Class-Path setting is
dynamically generated when I produce the EAR by taking all of the jars
in my lib directory and adding it to that setting.

Now, I did not update my commons-* jar file for 1.0 -- and you are
saying that there was some upgrades? I wonder if that could be the
issue.

Thanks!

-Andrew 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:48 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 Armin,
 
 Could you clarify for me what you mean by I think that some jar files

 changed between rc6 and 1.0.

sorry, my bad English ;-)
I mean the names of some jars are changed, e.g. 
commons-collections-2.1.1.jar
instead of commons-collections.jar.
Maybe you have a jar in classpath that doesn't match the Class-Path
setting.

regards
Armin

Are you saying that dependencies were
 removed that rc6 had that 1.0 doesn't need? My Class-Path entry from 
 my EJB jar file contains the following entries:
 
 Manifest-Version: 1.0
 Ant-Version: Apache Ant 1.6.1
 Created-By: 1.4.2-b28 (Sun Microsystems Inc.)
 Built-By: andrew.clute
 Class-Path: Merlia.jar OSNHtml.jar antlr.jar commons-beanutils.jar com

 mons-collections.jar commons-dbcp.jar commons-digester.jar commons-fi

 leupload.jar commons-lang.jar commons-logging.jar commons-pool.jar co

 mmons-validator.jar db-ojb-1.0.0-src.jar db-ojb-1.0.0.jar jakarta-poi

 -1.5.1.jar p6spy.jar
 
 Are you thinking that there are unnesscary entries in it? I guess am 
 not sure what the cause or solution would be based on your statement 
 to look for. Thanks!
 
 -Andrew
 
 
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:34 PM
 To: OJB Users List
 Subject: Re: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Clute, Andrew wrote:
 
 
I am almost certain that is a ClassLoader issue. 

Yes, my deployment looks almost the exact same as Stephen's (in fact, 
I chimed in when he first posted that stating that is already how I 
was doing it, and it worked fine).

Now, something I forgot to mention: We have only started seeing this 
since we upgraded to 1.0 from 1.0RC6. We see the problem on both our 
dev server that is on Jboss 3.2.3, and on my development machine that 
is on Jboss 3.2.5.

Are there any known parts to the OJB Metadata and Configuration stuff 
that lives through redeployments (i.e. is static)?
 
 
 As far as I know the ClassLoader take care of static instances too.
 Did you check all jar names and Class-Path entries in your config
files?
 
 I think that some jar files changed between rc6 and 1.0
 
 Armin
 
 
 
-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:14 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Hi Andrew,

think this is a ClassLoader problem. Maybe ojb.jar itself or one of 
the jars OJB depends on is not correctly reloaded.

Did you follow the instructions made by Stephen Ting

http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+f
il
e

regards,
Armin


Clute, Andrew wrote:


I am running OJB 1.0 with JBoss 3.2.5.

On *occasional* redeployments of my EAR file (with nested Jars and
Wars) I will get a nasty ClassCastException that is only fixable by 
restarting Jboss. This happens in the

MetadataManager.buildDefaultKey() method.


The top part of the stack trace is posted below. From what I can 
tell,


the exception stems from not that it is the wrong class attempting to

be casted, but it is an instance of a class that is from a previous 
deployment (and thus classloader) that is trying to be casted in to 
the same class type in a new class loader.

I have taken a quick look at MetadataManager, and don't see anything 
terribly obvious as to the cause -- which I would assume is a static 
instance to the Collection of JdbcConnectionsDescriptors. There is a 
a


ThreadLocal variable, but I don't think that is the cause.

So, my question is: has anyone else seen this? Can anyone think of 
why


on a undeployment that not all of the OJB classes are removed from 
the


VM?

Thanks!

Here is the stacktrace:

2004-08-11 13:24:22,923 ERROR [org.jboss.ejb.plugins.LogInterceptor

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
I don't fill out the application.xml entries, since I thought it was an
either-or situation (either Class-Path in the manifest file, or entries
in application.xml).

 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 3:18 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 Upgrading to the newest versions of the lib files for OJB did not fix 
 the problem.
 
 I wish there was someway I could figure out what was keeping the 
 reference to the previous classes around that would conflict with the 
 new classloader. Ugh!


last-ditch attempt ;-)
Did you check the entries in application.xml too? Or was this file
auto-generated too?

Armin

 -Andrew
 
  
 
 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:50 PM
 To: OJB Users List
 Subject: RE: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Ahh, I don't think that is the case, since my Class-Path setting is 
 dynamically generated when I produce the EAR by taking all of the jars

 in my lib directory and adding it to that setting.
 
 Now, I did not update my commons-* jar file for 1.0 -- and you are 
 saying that there was some upgrades? I wonder if that could be the 
 issue.
 
 Thanks!
 
 -Andrew
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:48 PM
 To: OJB Users List
 Subject: Re: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Clute, Andrew wrote:
 
 
Armin,

Could you clarify for me what you mean by I think that some jar files
 
 
changed between rc6 and 1.0.
 
 
 sorry, my bad English ;-)
 I mean the names of some jars are changed, e.g. 
 commons-collections-2.1.1.jar
 instead of commons-collections.jar.
 Maybe you have a jar in classpath that doesn't match the Class-Path 
 setting.
 
 regards
 Armin
 
 Are you saying that dependencies were
 
removed that rc6 had that 1.0 doesn't need? My Class-Path entry from 
my EJB jar file contains the following entries:

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.1
Created-By: 1.4.2-b28 (Sun Microsystems Inc.)
Built-By: andrew.clute
Class-Path: Merlia.jar OSNHtml.jar antlr.jar commons-beanutils.jar com
 
 
mons-collections.jar commons-dbcp.jar commons-digester.jar commons-fi
 
 
leupload.jar commons-lang.jar commons-logging.jar commons-pool.jar co
 
 
mmons-validator.jar db-ojb-1.0.0-src.jar db-ojb-1.0.0.jar jakarta-poi
 
 
-1.5.1.jar p6spy.jar

Are you thinking that there are unnesscary entries in it? I guess am 
not sure what the cause or solution would be based on your statement 
to look for. Thanks!

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:34 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:



I am almost certain that is a ClassLoader issue. 

Yes, my deployment looks almost the exact same as Stephen's (in fact,

I chimed in when he first posted that stating that is already how I 
was doing it, and it worked fine).

Now, something I forgot to mention: We have only started seeing this 
since we upgraded to 1.0 from 1.0RC6. We see the problem on both our 
dev server that is on Jboss 3.2.3, and on my development machine that

is on Jboss 3.2.5.

Are there any known parts to the OJB Metadata and Configuration stuff

that lives through redeployments (i.e. is static)?


As far as I know the ClassLoader take care of static instances too.
Did you check all jar names and Class-Path entries in your config
 
 files?
 
I think that some jar files changed between rc6 and 1.0

Armin




-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:14 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Hi Andrew,

think this is a ClassLoader problem. Maybe ojb.jar itself or one of 
the jars OJB depends on is not correctly reloaded.

Did you follow the instructions made by Stephen Ting

http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+
f
il
e

regards,
Armin


Clute, Andrew wrote:



I am running OJB 1.0 with JBoss 3.2.5.

On *occasional* redeployments of my EAR file (with nested Jars and
Wars) I will get a nasty ClassCastException that is only fixable by 
restarting Jboss. This happens in the

MetadataManager.buildDefaultKey() method.



The top part of the stack trace is posted below. From what I can 
tell,


the exception stems from not that it is the wrong class attempting 
to
 
 
be casted, but it is an instance of a class that is from a previous 
deployment (and thus classloader) that is trying

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
Just for giggles, I changed my EAR to use the application.xml file to
denote the dependent jar files, and took it out of the Manifest file for
my EJB jar, and it is still causing the issue!

Ughh. Might be time to post this to the Jboss forums -- but they are not
nearly as helpful! :)

-Andrew

 

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 3:22 PM
To: OJB Users List
Subject: RE: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

I don't fill out the application.xml entries, since I Thought it was an
either-or situation (either Class-Path in the manifest file, or entries
in Application.xml)

 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 3:18 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 Upgrading to the newest versions of the lib files for OJB did not fix 
 the problem.
 
 I wish there was someway I could figure out what was keeping the 
 reference to the previous classes around that would conflict with the 
 new classloader. Ugh!


last-ditch attempt ;-)
Did you check the entries in application.xml too? Or was this file
auto-generated too?

Armin

 -Andrew
 
  
 
 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:50 PM
 To: OJB Users List
 Subject: RE: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Ahh, I don't think that is the case, since my Class-Path setting is 
 dynamically generated when I produce the EAR by taking all of the jars

 in my lib directory and adding it to that setting.
 
 Now, I did not update my commons-* jar file for 1.0 -- and you are 
 saying that there was some upgrades? I wonder if that could be the 
 issue.
 
 Thanks!
 
 -Andrew
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:48 PM
 To: OJB Users List
 Subject: Re: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Clute, Andrew wrote:
 
 
Armin,

Could you clarify for me what you mean by I think that some jar files
 
 
changed between rc6 and 1.0.
 
 
 sorry, my bad English ;-)
 I mean the names of some jars are changed, e.g. 
 commons-collections-2.1.1.jar
 instead of commons-collections.jar.
 Maybe you have a jar in classpath that doesn't match the Class-Path 
 setting.
 
 regards
 Armin
 
 Are you saying that dependencies were
 
removed that rc6 had that 1.0 doesn't need? My Class-Path entry from 
my EJB jar file contains the following entries:

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.1
Created-By: 1.4.2-b28 (Sun Microsystems Inc.)
Built-By: andrew.clute
Class-Path: Merlia.jar OSNHtml.jar antlr.jar commons-beanutils.jar com
 
 
mons-collections.jar commons-dbcp.jar commons-digester.jar commons-fi
 
 
leupload.jar commons-lang.jar commons-logging.jar commons-pool.jar co
 
 
mmons-validator.jar db-ojb-1.0.0-src.jar db-ojb-1.0.0.jar jakarta-poi
 
 
-1.5.1.jar p6spy.jar

Are you thinking that there are unnesscary entries in it? I guess am 
not sure what the cause or solution would be based on your statement 
to look for. Thanks!

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:34 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:



I am almost certain that is a ClassLoader issue. 

Yes, my deployment looks almost the exact same as Stephen's (in fact,

I chimed in when he first posted that stating that is already how I 
was doing it, and it worked fine).

Now, something I forgot to mention: We have only started seeing this 
since we upgraded to 1.0 from 1.0RC6. We see the problem on both our 
dev server that is on Jboss 3.2.3, and on my development machine that

is on Jboss 3.2.5.

Are there any known parts to the OJB Metadata and Configuration stuff

that lives through redeployments (i.e. is static)?


As far as I know the ClassLoader take care of static instances too.
Did you check all jar names and Class-Path entries in your config
 
 files?
 
I think that some jar files changed between rc6 and 1.0

Armin




-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 2:14 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Hi Andrew,

think this is a ClassLoader problem. Maybe ojb.jar itself or one of 
the jars OJB depends on is not correctly reloaded.

Did you follow the instructions made by Stephen Ting

http://db.apache.org/ojb/docu/guides/deployment.html#Packing+an+.ear+
f
il
e

regards,
Armin


Clute

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
Well, I have narrowed the issue down further, but still do not have a
solution yet. In ConnectionRepository.getAllDescriptor(), the
JdbcConnectionDescriptors that are in the current repository are cloned
(serialized) into another list and returned. I made the guess (and I was
right) that when this error is exposed, the JdbcConnectionDescriptors
that are returned from the serialization are loaded in a different
classloader than the ones that OJB creates!

To prove this, I changed the code for that method from:

[code]
public List getAllDescriptor()
{
    return (List) SerializationUtils.clone(new ArrayList(jcdMap.values()));
}
[/code]

To:

[code]
public List getAllDescriptor()
{
    Iterator it = jcdMap.values().iterator();
    while (it.hasNext())
    {
        Object o = it.next();
        System.out.println("ClassLoader for " + o.getClass().getName()
                + " before Serialization: " + o.getClass().getClassLoader());
    }

    List returnList = (List) SerializationUtils.clone(new ArrayList(jcdMap.values()));
    it = returnList.iterator();
    while (it.hasNext())
    {
        Object o = it.next();
        System.out.println("ClassLoader for " + o.getClass().getName()
                + " after Serialization: " + o.getClass().getClassLoader());
    }

    return returnList;
}
[/code]

And as I assumed, the first time my application is deployed, the
classloader for the descriptor is the same for both what OJB uses and
what SerializationUtils produces:

17:02:09,592 INFO  [STDOUT] ClassLoader for
org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
Serialization: [EMAIL PROTECTED]
url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
,addedOrder=37}
17:02:18,811 INFO  [STDOUT] ClassLoader for
org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
Serialization: [EMAIL PROTECTED]
url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
,addedOrder=37}


But, after redeploying it, the classloader for OJB changes (as I would
assume is correct), but the classloader for what SerializationUtils
returns stays the same as the previous version! Oops!

17:03:04,780 INFO  [STDOUT] ClassLoader for
org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
Serialization: [EMAIL PROTECTED]
url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56537OSNCore.ear
,addedOrder=38}
17:03:11,280 INFO  [STDOUT] ClassLoader for
org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
Serialization: [EMAIL PROTECTED]
url=null ,addedOrder=37} 

So, now I need to figure out why this is happening. Something looks
weird about the after-serialization version after redeploying, since
the url for that class is null. Not sure where it is loading it from, or
why it has a stored copy of it.

-Andrew

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 3:53 PM
To: OJB Users List
Subject: RE: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Just for giggles, I changed my EAR to use the Application.xml file to
denote the dependant jar files, and took it out of the Manifest file for
my Ejb jar, and it still is causing the issue!

Ughh. Might be time to post this to the Jboss forums -- but they are not
nearly as helpful! :)

-Andrew

 

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 3:22 PM
To: OJB Users List
Subject: RE: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

I don't fill out the application.xml entries, since I Thought it was an
either-or situation (either Class-Path in the manifest file, or entries
in Application.xml)

 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Friday, August 13, 2004 3:18 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

Clute, Andrew wrote:

 Upgrading to the newest versions of the lib files for OJB did not fix 
 the problem.
 
 I wish there was someway I could figure out what was keeping the 
 reference to the previous classes around that would conflict with the 
 new classloader. Ugh!


last-ditch attempt ;-)
Did you check the entries in application.xml too? Or was this file
auto-generated too?

Armin

 -Andrew
 
  
 
 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 2:50 PM
 To: OJB Users List
 Subject: RE: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Ahh, I don't think that is the case, since my Class-Path setting is 
 dynamically generated when I produce the EAR by taking all of the jars

 in my lib directory and adding it to that setting.
 
 Now, I did not update my commons-* jar file for 1.0

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
I am wondering if it has something to do with the fact that
SerializationUtils uses ObjectInputStream to serialize/deserialize the
objects, and ObjectInputStream on deserialization does a
Class.forName() to create the new object -- which in the J2EE
classloader world can cause problems. I think that would explain why it
would use the previous versions. I am posting a message to the Jboss
group to see if my hypothesis is correct.
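
If that is what is going on, the fix on our side would probably look
something like the sketch below -- a clone helper that resolves classes
against the current thread's context classloader instead of whatever
loader ObjectInputStream falls back to. This is only a sketch
(ContextClassLoaderCloner is a made-up name, not OJB or commons-lang code):

[code]
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class ContextClassLoaderCloner
{
    public static Serializable clone(Serializable source) throws Exception
    {
        // serialize to an in-memory buffer, just like SerializationUtils does
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buffer);
        out.writeObject(source);
        out.close();

        // deserialize, but resolve classes against the context classloader of
        // the current thread (the one set up for the freshly deployed EAR)
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))
        {
            protected Class resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException
            {
                ClassLoader loader = Thread.currentThread().getContextClassLoader();
                try
                {
                    return Class.forName(desc.getName(), false, loader);
                }
                catch (ClassNotFoundException e)
                {
                    // fall back to the default behaviour
                    return super.resolveClass(desc);
                }
            }
        };
        return (Serializable) in.readObject();
    }
}
[/code]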

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 13, 2004 5:25 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and
JdbcConnectionDescriptor) -- anyone else have it?

  So, now I need to figure out why this is happening. Something thing
 looks weird for the after-serilization version after redploying, since
 the url for that class is null. Not sure where it is loading it from,
or   why it has a stored copy of it.
 

I must admit that I don't have a clue...

Did you check commons-lang.jar? SerializationUtils is part of
commons-lang and if this jar wasn't redeployed it will use the 'old' 
class-loader. Or is commons-lang duplicated in classpath?

regards,
Armin



Clute, Andrew wrote:
 Well, I have narrowed the issue down further, but still do not have a 
 solution yet. In ConnectionRepository.getAllDescriptor(), the 
 JdbcConnectionDescriptor's that are in the current repository are 
 cloned
 (seralized) into another list and returned. I made the guess (and I 
 was
 right) that when this error is exposed, the JdbcConnectionDescriptor's

 that are returned from the Serilization are loaded in a different 
 classloader than the ones that OJB creates!
 
 To prove this, I changed the code for that method from:
 
 [code]
 public List getAllDescriptor()
 {
 return (List) SerializationUtils.clone(new 
 ArrayList(jcdMap.values()));
 }
 [/code]
 
 To:
 
 [code]
 public List getAllDescriptor()
 {
 
   Iterator it = jcdMap.values().iterator();
   while (it.hasNext()){
   Object o = it.next();
   System.out.println(ClassLoader for  +
 o.getClass().getName() + before Serialization: 
 +o.getClass().getClassLoader());
   }
 
   List returnList = (List) SerializationUtils.clone(new 
 ArrayList(jcdMap.values()));
   it = returnList.iterator();
   while (it.hasNext()){
   Object o = it.next();
   System.out.println(ClassLoader for  +
 o.getClass().getName() + after Serialization: 
 +o.getClass().getClassLoader());
   }
 
 return returnList;
 }
 [/code]
 
 And as I assumed, the first time my application is deployed, the 
 classloader for the Connection is the same for both what OJB uses, and

 what SerilizationUtils uses:
 
 17:02:09,592 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
 ,addedOrder=37}
 17:02:18,811 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56536OSNCore.ear
 ,addedOrder=37}
 
 
 But, after redeploying it, the classloader for OJB changes (as I would

 assume is correct), but the classloader for SerilizationUtils stays 
 the same as the previous version! Oops!
 
 17:03:04,780 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor before
 Serialization: [EMAIL PROTECTED]
 url=file:/C:/jboss-3.2.5/server/default/tmp/deploy/tmp56537OSNCore.ear
 ,addedOrder=38}
 17:03:11,280 INFO  [STDOUT] ClassLoader for 
 org.apache.ojb.broker.metadata.JdbcConnectionDescriptor after
 Serialization: [EMAIL PROTECTED]
 url=null ,addedOrder=37}
 
 So, now I need to figure out why this is happening. Something thing 
 looks weird for the after-serilization version after redploying, since

 the url for that class is null. Not sure where it is loading it from, 
 or why it has a stored copy of it.
 
 -Andrew
 
 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 3:53 PM
 To: OJB Users List
 Subject: RE: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have it?
 
 Just for giggles, I changed my EAR to use the Application.xml file to 
 denote the dependant jar files, and took it out of the Manifest file 
 for my Ejb jar, and it still is causing the issue!
 
 Ughh. Might be time to post this to the Jboss forums -- but they are 
 not nearly as helpful! :)
 
 -Andrew
 
  
 
 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 13, 2004 3:22 PM
 To: OJB Users List
 Subject: RE: Jboss and ClassCastException (MetadataManager and
 JdbcConnectionDescriptor) -- anyone else have

RE: Jboss and ClassCastException (MetadataManager and JdbcConnectionDescriptor) -- anyone else have it?

2004-08-13 Thread Clute, Andrew
That's a good idea about trying a modified version of commons-lang. However, I am not 
sure what they will be able to do about it since they are using ObjectInputStream to 
do the serialization, and that is Sun's code. Either way, I will see if there is a 
workaround.

On the other hand -- why is OJB using this method to do what looks like a simple 
clone routine? If the commons-lang method is known to be non-compliant with J2EE 
(especially JBoss) classloading, wouldn't OJB want to change the way it clones those 
descriptors?

-Andrew


-Original Message-
From: Thomas Dudziak [mailto:[EMAIL PROTECTED]
Sent: Fri 8/13/2004 6:34 PM
To: OJB Users List
Subject: Re: Jboss and ClassCastException (MetadataManager and 
JdbcConnectionDescriptor) -- anyone else have it?
 
Clute, Andrew wrote:

 I am wondering if it has something to do with the fact that
 SerilizationUtils uses ObjectInputStream to serialize/desearlize the
 objects, and ObjectInputStream on the deserialization does a
 Class.forName() to create the new object -- which in the J2EE
 classloader world can cause problems. I think that would explain why it
 would use the previous versions. I am posting a message to the Jboss
 group to see if my hypothesis is correct.

Hope you don't mind if I hop in :-)
A couple of weeks ago we unified class and resource loading in OJB into 
the ClassHelper class, which by default uses the class loader of the 
current thread. So perhaps the problem here is that the 
SerializationUtils class does not use this class loader (it is known to 
happen that the classloader that Class.forName uses is not the same as 
the one of the current thread, e.g. when writing Ant tasks).
However, in OJB we cannot change this, so perhaps you could create a 
modified version of commons-lang to verify this, and if this is true, 
then you probably should file a feature request with the commons-lang 
folks?

Tom

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

RE: How do you map a reference to an interface-backed class? (non-trivial)

2004-06-21 Thread Clute, Andrew
That's what I assumed, and was afraid of.

I think I can get around it temporarily by actually creating a composite
field that contains the classname and the PK in the same field (e.g.
foo.bar.CatalogItem:12345), and then creating a custom Conversion that
will take that string and instantiate the object.
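
Roughly what I have in mind for that conversion is sketched below. It
assumes OJB's FieldConversion interface (javaToSql/sqlToJava -- I'm
writing these from memory), and SellableRef is just a made-up little
holder; the actual load of the Course or CatalogItem by class name and
PK would still happen outside the conversion:

[code]
import org.apache.ojb.broker.accesslayer.conversions.ConversionException;
import org.apache.ojb.broker.accesslayer.conversions.FieldConversion;

public class ClassNamePkConversion implements FieldConversion
{
    /** tiny holder for the two halves of the composite column (made-up type) */
    public static class SellableRef implements java.io.Serializable
    {
        public String className;
        public String pk;
    }

    // Java -> SQL: store "foo.bar.CatalogItem:12345" in a single VARCHAR column
    public Object javaToSql(Object source) throws ConversionException
    {
        SellableRef ref = (SellableRef) source;
        return ref.className + ":" + ref.pk;
    }

    // SQL -> Java: split the composite value back into class name and PK;
    // looking up the real Course/CatalogItem by PK happens in the service layer
    public Object sqlToJava(Object source) throws ConversionException
    {
        String value = (String) source;
        int idx = value.indexOf(':');
        if (idx < 0)
        {
            throw new ConversionException("expected 'className:pk' but got " + value);
        }
        SellableRef ref = new SellableRef();
        ref.className = value.substring(0, idx);
        ref.pk = value.substring(idx + 1);
        return ref;
    }
}
[/code]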

Now, is there any technical reason why there isn't a strategy to map a
class that has a reference to an interface-backed class where the
concrete classes are not mapped in the same table? It would seem to me
that such a feature could be implemented pretty straightforwardly in the
current architecture (and I would be willing to take that on), but I
want to make sure I understand all of the ramifications of why this
isn't done at this time (feature never implemented? won't work with
what we currently have? etc.).

One straightforward approach would be to modify the repository_user.xml
definitions to allow the following type of mapping:

<reference-descriptor
    name="interfaceItem"
    class-name-field-ref="interfaceClassName">
    <foreignkey field-ref="interfaceGuid"/>
</reference-descriptor>

<field-descriptor
    name="interfaceClassName"
    column="interface_class_name"
    jdbc-type="VARCHAR"
/>
<field-descriptor
    name="interfaceGuid"
    column="interface_guid"
    jdbc-type="VARCHAR"
    access="anonymous"
/>

-Andrew


-Original Message-
From: Thomas Dudziak [mailto:[EMAIL PROTECTED] 
Sent: Sunday, June 20, 2004 10:17 AM
To: OJB Users List
Subject: Re: How do you map a reference to an interface-backed class?
(non-trivial)


 To give some more concrete to the example, here is what I have...I 
 have two different objects that already exist: Course and CatalogItem.

 Now at this point we need to start accepting payment for them, so I 
 have created an Order and OrderItem, and I want the OrderItem to be 
 able to contain either one of the objects. So I created a Sellable 
 interface, and Course and CatalogItem now implement them. So, I now 
 need to figure out how to map OrderItem so that when it is restored, 
 the right object (Course or CatalogItem) is created.

This will work if both Course and CatalogItem map to the same table. If
you require them to be in different tables, then you'll probably have to
do the loading manually. This means that you have the basic parts
(primary key, ojbConcreteClass) in the same table, and load the other
fields/references/collections on your own in the constructor of the
concrete subclass. This is a bit more involved though.

Tom


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: How do you map a reference to an interface-backed class? (non-trivial)

2004-06-20 Thread Clute, Andrew
So, let's say that the classes that implement this interface have nothing in common 
other than the fact that they implement the same interface (existing classes that are 
modified to implement this interface to give some commonality to them), so there are 
no common properties between them.

In this case, would that mean the only get/set I would need to define on the interface 
is for ojbConcreteClass?
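
In other words, would the right shape be something like the sketch below? (Just a 
guess at what is needed -- Sellable and Course are my classes, each in its own source 
file.)

[code]
public interface Sellable
{
    String getOjbConcreteClass();
    void setOjbConcreteClass(String ojbConcreteClass);
}

public class Course implements Sellable
{
    private String ojbConcreteClass = Course.class.getName();

    public String getOjbConcreteClass()
    {
        return ojbConcreteClass;
    }

    public void setOjbConcreteClass(String ojbConcreteClass)
    {
        this.ojbConcreteClass = ojbConcreteClass;
    }

    // ... existing Course properties (CatalogItem would look the same) ...
}
[/code]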

I am just not sure how the reference would work... the class that has the reference 
has a column for the FK to the referenced object, but should it also contain the 
ojbConcreteClass field for the class-type of the referenced object? This is where my 
confusion is.

To give some more concrete detail to the example, here is what I have... I have two 
different objects that already exist: Course and CatalogItem. Now at this point we 
need to start accepting payment for them, so I have created an Order and OrderItem, 
and I want the OrderItem to be able to contain either one of the objects. So I created 
a Sellable interface, and Course and CatalogItem now implement it. So, I now need to 
figure out how to map OrderItem so that when it is restored, the right object (Course 
or CatalogItem) is created.

Thanks!


-Original Message-
From: Thomas Dudziak [mailto:[EMAIL PROTECTED]
Sent: Sun 6/20/2004 4:31 AM
To: OJB Users List
Subject: Re: How do you map a reference to an interface-backed class? (non-trivial)
 
Andrew Clute wrote:

 I have looked at all of the examples, I can't seem to make heads or 
 tails of how to handle my situation. I want to map a class that has a 
 reference to an object, that can one of many different objects, that all 
 implement the same interface.
 
 Example
 
 Interface A{
 
 }
 
 class B implements A
 {}
 
 class C implements A
 {}
 
 class Z
 {
 private A myA;
 }
 
 So, in this case, how do I map class Z since it's member variable is of 
 Type A, but I want the concrete object, either of Type B or C to be 
 filled in.
 
 Seems like there is no known strategy to do this: Using Interfaces With 
 OJB talks about a similar situation, but assumes that there is only one 
 concrete class that can be instantiated for the reference (the 
 factory-method assumes that it will only return one concrete class type).
 
 Seems like I need a combination of the Interface mapping, and the 
 'ojbConcreteClass' property to have the appropriate concrete class 
 instantiated.

Yes, that should work. You define all common fields in the interface 
including ojbConcreteClass using getter/setter methods, and you need the 
PersistentFieldIntrospectorImpl or PersistentFieldAutoProxyImpl for the 
field access (OJB.properties) as there are no fields only bean methods.
You also should not need factory-class/factory-method now because OJB 
shouldn't try to instantiate the interface if ojbConcreteClass does not 
refer to it.

Tom


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

RE: How to use DUAL table for multiple types of queries

2004-05-20 Thread Clute, Andrew
Based on looking in
org.apache.ojb.broker.accesslayer.sql.SqlSelectStatement.getStatement()
-- it looks like if no columns are specified, then it will do the
multimapped object select, which you are seeing.

It is clearer what to do in the HEAD version, but in RC5 it looks
like if you change it over to a ReportQuery and ask for only the one
column that you need, it would work.


if (columns == null || columns.length == 0)
{
    /**
     * MBAIRD: use the appendListofColumnsForSelect, as it finds
     * the union of select items for all object mapped to the same table. This
     * will allow us to load objects with unique mapping fields that are mapped
     * to the same table.
     */
    columnList = appendListOfColumnsForSelect(getSearchClassDescriptor(), stmt);
}
else
{
    columnList = appendListOfColumns(columns, stmt);
}


In this case, columns are the ones you specify, which is what a
ReportQuery is used for.
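
Something along these lines should take the "columns specified" branch above
(just a sketch -- I'm writing the ReportQueryByCriteria and
getReportQueryIteratorByQuery calls from memory, so double-check them against
your RC5 sources):

[code]
import java.util.Iterator;

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.ReportQueryByCriteria;

public class DualFunctionCall
{
    // returns the single pseudo-column mapped for ClassName1 on DUAL
    public static String fetchFieldName1(PersistenceBroker broker)
    {
        Criteria crit = new Criteria(); // add selection criteria if the function needs any
        String[] cols = new String[] { "fieldName1" };
        ReportQueryByCriteria query = new ReportQueryByCriteria(
                com.business.model.ClassName1.class, cols, crit);

        Iterator rows = broker.getReportQueryIteratorByQuery(query);
        if (rows.hasNext())
        {
            Object[] row = (Object[]) rows.next();
            return (String) row[0];
        }
        return null;
    }
}
[/code]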

-Andrew



-Original Message-
From: Glenn Barnard [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 2:10 PM
To: [EMAIL PROTECTED]
Subject: How to use DUAL table for multiple types of queries


I posted this yesterday. Would someone PLEASE help me. Am running out
of time.


We use Oracle 9 and have several different functions we can call. In
OJB, they are all mapped with the table name of DUAL. For example:

   SELECT function(args) AS column FROM DUAL

Our repository.xml has a table entry for each function so that the
result is kept in it's own class. For example:

<class-descriptor
    class="com.business.model.ClassName1"
    table="dual"
    refresh="true">

    <field-descriptor id="1"
        name="fieldName1"
        column="fieldName1"
        primarykey="true"
        jdbc-type="VARCHAR"
        nullable="false"/>

</class-descriptor>

and:

<class-descriptor
    class="com.business.model.ClassName2"
    table="dual"
    refresh="true">

    <field-descriptor id="1"
        name="fieldName2"
        column="fieldName2"
        primarykey="true"
        jdbc-type="VARCHAR"
        nullable="false"/>

</class-descriptor>

The problem is that when OJB goes to extract the values from the
result set, it tries to do so for 2 columns, fieldName1 and
fieldName2.

I thought that by specifying the class name I wanted (e.g., Class1)
that OJB would only get the fields for that class. Can anyone advise
me how I can do this without resorting to having only one class for
all function calls? Oh, I'm using a customized rc5 and cannot upgrade
until after this release ships (a timing/resource issue).

Thanks ya'll!

_
Stop worrying about overloading your inbox - get MSN Hotmail Extra
Storage! 
http://join.msn.click-url.com/go/onm00200362ave/direct/01/


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Bug in QueryReferenceBroker?

2004-05-13 Thread Clute, Andrew
Thanks for the quick response! I thought I was going crazy when I saw
that method and couldn't figure out why it was doing that. I assumed I
was just missing something in the bigger picture.

Glad we could find that before the next RC.

-Andrew

 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 13, 2004 6:24 AM
To: OJB Users List
Subject: Re: Bug in QueryReferenceBroker?

Hi Andrew,

I checked in a fix (similar to your patch + minor modifications in
method) and a new test in AnonymousFieldsTest. The test fails with NPE
when using QueryReferenceBroker version 1.15 and pass with latest
version.
Hope this will solve your problem too.
Thank you very much!

regards,
java-dumbhead Armin

Armin Waibel wrote:
 Hi Andrew,
 
 seems you patch will do the job.
 I will check this ASAP.
 Thanks!
 
 regards,
 Armin
 
 Clute, Andrew wrote:
 
 I created what I think is an appropriate patch -- it fixed my issue.
 Here it is.

 Index: QueryReferenceBroker.java
 ===
 RCS file:
 /home/cvspublic/db-ojb/src/java/org/apache/ojb/broker/core/QueryRefer
 enc
 eBroker.java,v
 retrieving revision 1.15
 diff -u -r1.15 QueryReferenceBroker.java
 --- QueryReferenceBroker.java6 May 2004 19:45:57 -1.15
 +++ QueryReferenceBroker.java12 May 2004 21:47:44 -
 @@ -425,6 +425,7 @@
  {
  return new Identity(referencedObject, pb);
  }
 +return null;
  }
  else
  {

  

 -Original Message-
 From: Clute, Andrew [mailto:[EMAIL PROTECTED] Sent: 
 Wednesday, May 12, 2004 5:37 PM
 To: OJB Users List; OJB Developers List
 Subject: Bug in QueryReferenceBroker?

 I recently updated to HEAD and am finding a weird issue now.

 I have an object Session, that has a reference to an Object called 
 Person. Now Person is a proxy object. I am using an Anonymous FK to 
 reference Person from Session.

 When I try to restore Session when it has no Person hanging on it, it

 restores the Session object with a Person Proxy object hanging off of

 it (it shouldn't!), and the Proxy's PK being a collection of null.

 I think I might have narrowed down why it is happening:

 Method getReferencedObjectIdentity(), here is a code snipet:

if (hasNullifiedFKValue)
{
    if (isAnonymousKeyReference(cld, rds))
    {
        Object referencedObject = rds.getPersistentField().get(obj);
        if (referencedObject != null)
        {
            return new Identity(referencedObject, pb);
        }
    }
    else
    {
        return null;
    }
}

// ensure that top-level extents are used for Identities
return new Identity(rds.getItemClass(),
        pb.getTopLevelClass(rds.getItemClass()), fkValues);

 In my case, I have a nullifiedFKValue, so it goes into the first if 
 block, and then it sees that it is an AnonymousKeyReference, but then
 my referencedObject is null (like it should be). But instead of 
 returning null, it jumps out to the bottom where it returns a new 
 Identity!! Why is it doing that? I can see where Armin has made some 
 changes to better handle AnonymousFKs; is this a side-effect of
 that?

 -Andrew


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Trying to return an unknown connection2!

2004-05-12 Thread Clute, Andrew
Wondering if any work has been done on this. I am now getting the same
error and would like to test the changes.

-Andrew





-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 27, 2004 12:09 PM
To: OJB Users List
Subject: Re: Trying to return an unknown connection2!



  For managed environments this seems to be better. ;-) Is it a
difference or disadvantage for unmanaged environments?

of course it's different in non managed environments, because we only
can close a connection after PB.commit/abortTx when using PB-tx.

We have to take care of side-effects in non-managed environments when we
decouple connectionManager.isInLocalTx from PB.

  Of course I'm willing to test a fix.
  I'm currently a litte bit bussy too so impementing a fix on our own
 maybe difficult but I'll check it.

Great! I will contact you when the enhancement is in CVS. Please don't
hesitate to contact me if you have the feeling that I forgot it ;-)

regards,
Armin

Guido Beutler wrote:

 Hi Armin,
 
 Armin Waibel wrote:
 
 Hi Guido,

 we can try to release the used connection on PB.close() call instead 
 of Synchronization#beforeCompleation.
 
 
 For managed enviroment this seems to be better.  ;-) Is it a 
 difference or disadvantage for unmanaged enviroments?
 

 In PBFSyncImpl line 227 the close() method of PBImpl is overridden. 
 If we are in local-tx we don't really close the used PB handle and 
 thus do not release the used connection (it's done in
#beforeCompleation).

 To do so we have to make PB.isInTransaction method independed from 
 ConnectionManager.isInLocalTransaction method. After that we can 
 release the used connection (via connectionManager) in 
 PBSyncImpl.close method and keep PBSyncImpl still in PB-tx.
 
 
 Sounds like I have to take a look on it to understand what's to
change.
 

 Currently I'm busy with other OJB stuff, but I will try this ASAP. 
 Are you willing to test my changes or do you want to start this 
 refactoring by your own?
 
 
 Of course I'm willing to test a fix.
 I'm currently a litte bit bussy too so impementing a fix on our own 
 maybe difficult but I'll check it.
 
 thanks for the help and best regards,
 
 Guido
 

 regards,
 Armin

 Guido Beutler wrote:

 Hi Armin,

 sorry for the delay!
 Because nobody else had an answer I spent some time to get closer to

 the problem.
 After that I posted my question at jboss. Here's the thread:

 http://www.jboss.org/index.html?module=bbop=viewtopict=49041

 I don't know if I am allowed to repost the answer here (copyrights 
 etc. ) Please use the link above. I'm curious about the replies 
 here.

 best regards,

 Guido

 Armin Waibel wrote:

 Hi Guido,

 
  Any ideas what's going on there?

 I only answer to say No, I don't have a clue.

 I assume (maybe I'm completely wrong ;-)) that JBoss has problems 
 in handling the connections/DataSources associated with the running

 tx in a proper way. Your direct connection instance will be 
 associated with the suspended tx, within the new tx OJB lookup a 
 new connection, do all work and close the connection. It seems that

 the used connection is not vaild in jboss 
 TxConnectionManager...bla, bla

 Reached the line count for a do my best answer ;-)

 regards,
 Armin

 Guido Beutler wrote:

 Hello,

 I've got a strange problem with RC6 at JBoss 3.2.3.

I've got a stateful and a stateless session bean. The stateless 
 session bean contains all OJB stuff.
 The statefull facade accesses some tables via JDBC directly.
 That stateless session OJB bean has transaction attribute
RequiresNew.
 The facade runs with Required.
 Both ejb's are container managed.

 If a method allocates a JDBC Connection from data source and then 
 access the OJB EJB the following exception is thrown.

 java.lang.IllegalStateException: Trying to return an unknown connection2! [EMAIL PROTECTED]
     at org.jboss.resource.connectionmanager.CachedConnectionManager.unregisterConnection(CachedConnectionManager.java:330)
     at org.jboss.resource.connectionmanager.TxConnectionManager$TxConnectionEventListener.connectionClosed(TxConnectionManager.java:539)
     at org.jboss.resource.adapter.jdbc.BaseWrapperManagedConnection.closeHandle(BaseWrapperManagedConnection.java:296)
     at org.jboss.resource.adapter.jdbc.WrappedConnection.close(WrappedConnection.java:117)
     at org.apache.ojb.broker.util.WrappedConnection.close(WrappedConnection.java:124)
     at org.apache.ojb.broker.util.pooling.ByPassConnection.close(ByPassConnection.java:64)
     at org.apache.ojb.broker.accesslayer.ConnectionFactoryAbstractImpl.releaseConnection(ConnectionFactoryAbstractImpl.java:79)
     at org.apache.ojb.broker.accesslayer.ConnectionManagerImpl.releaseConnection(ConnectionManagerImpl.java:286)
     at org.apache.ojb.broker.core.PersistenceBrokerFactorySyncImpl$PersistenceBrokerSyncImpl.beforeCompletion(PersistenceBrokerFactorySyncImpl.java:177)

RE: Trying to return an unknown connection2!

2004-05-12 Thread Clute, Andrew
You know, I have a bad habit of asking questions before looking at
the CVS log. ;)

I have downloaded the new changes, and they are working perfectly. I am
using PB-Api. My method that does non-CMT work, and also calls EJB's
that have CMT is working properly.

Thanks

-Andrew

 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 12, 2004 10:42 AM
To: OJB Users List
Subject: Re: Trying to return an unknown connection2!

Hi Andrew,

Clute, Andrew wrote:
 Wondering if any work has been done on this. I am now getting the same

 error, and wondering if I could test the changes.
 

Did you try the latest from CVS? I checked in the changes a few days ago.
The optimizations were only made for the PB-api. Do you have problems with
the PB-api or the ODMG-api?

regards,
Armin

 -Andrew
 
 
 
 
 
 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, April 27, 2004 12:09 PM
 To: OJB Users List
 Subject: Re: Trying to return an unknown connection2!
 
 
 
   For managed enviroment this seems to be better.  ;-)   Is it a 
 difference or disadvantage for unmanaged enviroments?
 
 of course it's different in non managed environments, because we only 
 can close a connection after PB.commit/abortTx when using PB-tx.
 
 We have to take of side-effects in non-managed environments when 
 decouple connectionManager.isInLocalTx from PB.
 
   Of course I'm willing to test a fix.
   I'm currently a litte bit bussy too so impementing a fix on our own
 
maybe difficult but I'll check it.
 
 
 Great! I will contact you when the enhancement is in CVS. Please don't

 hesitate to contact me if you have the feeling that I forgot it ;-)
 
 regards,
 Armin
 
 Guido Beutler wrote:
 
 
Hi Armin,

Armin Waibel wrote:


Hi Guido,

we can try to release the used connection on PB.close() call instead 
of Synchronization#beforeCompleation.


For managed enviroment this seems to be better.  ;-) Is it a 
difference or disadvantage for unmanaged enviroments?


In PBFSyncImpl line 227 the close() method of PBImpl is overridden. 
If we are in local-tx we don't really close the used PB handle and 
thus do not release the used connection (it's done in
 
 #beforeCompleation).
 
To do so we have to make PB.isInTransaction method independed from 
ConnectionManager.isInLocalTransaction method. After that we can 
release the used connection (via connectionManager) in 
PBSyncImpl.close method and keep PBSyncImpl still in PB-tx.


Sounds like I have to take a look on it to understand what's to
 
 change.
 
Currently I'm busy with other OJB stuff, but I will try this ASAP. 
Are you willing to test my changes or do you want to start this 
refactoring by your own?


Of course I'm willing to test a fix.
I'm currently a litte bit bussy too so impementing a fix on our own 
maybe difficult but I'll check it.

thanks for the help and best regards,

Guido


regards,
Armin

Guido Beutler wrote:


Hi Armin,

sorry for the delay!
Because nobody else had an answer I spent some time to get closer to
 
 
the problem.
After that I posted my question at jboss. Here's the thread:

http://www.jboss.org/index.html?module=bbop=viewtopict=49041

I don't know if I am allowed to repost the answer here (copyrights 
etc. ) Please use the link above. I'm curious about the replies 
here.

best regards,

Guido

Armin Waibel wrote:


Hi Guido,


Any ideas what's going on there?

I only answer to say No, I don't have a clue.

I assume (maybe I'm completely wrong ;-)) that JBoss has problems 
in handling the connections/DataSources associated with the running
 
 
tx in a proper way. Your direct connection instance will be 
associated with the suspended tx, within the new tx OJB lookup a 
new connection, do all work and close the connection. It seems that
 
 
the used connection is not vaild in jboss 
TxConnectionManager...bla, bla

Reached the line count for a do my best answer ;-)

regards,
Armin

Guido Beutler wrote:


Hello,

I've got a strange problem with RC6 at JBoss 3.2.3.

I've got a statefull and a stateless session bean. The stateless 
session bean contains all OJB stuff.
The statefull facade accesses some tables via JDBC directly.
That stateless session OJB bean has transaction attribute
 
 RequiresNew.
 
The facade runs with Required.
Both ejb's are container managed.

If a method allocates a JDBC Connection from data source and then 
access the OJB EJB the following exception is thrown.

 java.lang.IllegalStateException: Trying to return an unknown connection2! [EMAIL PROTECTED]
     at org.jboss.resource.connectionmanager.CachedConnectionManager.unregisterConnection(CachedConnectionManager.java:330)
     at org.jboss.resource.connectionmanager.TxConnectionManager$TxConnectionEventListener.connectionClosed(TxConnectionManager.java:539)
     at org.jboss.resource.adapter.jdbc.BaseWrapperManagedConnection.closeHandle(BaseWrapperManagedConnection.java:296

Bug in QueryReferenceBroker?

2004-05-12 Thread Clute, Andrew
I recently updated to HEAD and am finding a weird issue now.

I have an object Session, that has a reference to an Object called
Person. Now Person is a proxy object. I am using an Anonymous FK to
reference Person from Session.

When I try to restore Session when it has no Person hanging on it, it
restores the Session object with a Person Proxy object hanging off of it
(it shouldn't!), and the Proxy's PK being a collection of null.

I think I might have narrowed down why it is happening:

Method getReferencedObjectIdentity(), here is a code snippet:

if (hasNullifiedFKValue)
{
    if (isAnonymousKeyReference(cld, rds))
    {
        Object referencedObject = rds.getPersistentField().get(obj);
        if (referencedObject != null)
        {
            return new Identity(referencedObject, pb);
        }
    }
    else
    {
        return null;
    }
}

// ensure that top-level extents are used for Identities
return new Identity(rds.getItemClass(),
        pb.getTopLevelClass(rds.getItemClass()), fkValues);

In my case, I have a nullifiedFKValue, so it goes into the first if
block, and then it sees that it is an AnonymousKeyReference, but then my
referencedObject is null (like it should be). But instead of returning
null, it jumps out to the bottom where it returns a new Identity!! Why is
it doing that? I can see where Armin has made some changes to better handle
anonymous FKs; is this a side-effect of that?

-ANdrew



RE: Bug in QueryReferenceBroker?

2004-05-12 Thread Clute, Andrew
I created what I think is an appropriate patch -- it fixed my issue.
Here it is.

Index: QueryReferenceBroker.java
===
RCS file:
/home/cvspublic/db-ojb/src/java/org/apache/ojb/broker/core/QueryReferenc
eBroker.java,v
retrieving revision 1.15
diff -u -r1.15 QueryReferenceBroker.java
--- QueryReferenceBroker.java   6 May 2004 19:45:57 -   1.15
+++ QueryReferenceBroker.java   12 May 2004 21:47:44 -
@@ -425,6 +425,7 @@
 {
 return new Identity(referencedObject, pb);
 }
+return null;
 }
 else
 {

 

-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 12, 2004 5:37 PM
To: OJB Users List; OJB Developers List
Subject: Bug in QueryReferenceBroker?

I recently updated to HEAD and am finding a weird issue now.

I have an object Session, that has a reference to an Object called
Person. Now Person is a proxy object. I am using an Anonymous FK to
reference Person from Session.

When I try to restore Session when it has no Person hanging on it, it
restores the Session object with a Person Proxy object hanging off of it
(it shouldn't!), and the Proxy's PK being a collection of null.

I think I might have narrowed down why it is happening:

Method getReferencedObjectIdentity(), here is a code snipet:

 if (hasNullifiedFKValue)
 {
  if(isAnonymousKeyReference(cld, rds))
   {
Object referencedObject = rds.getPersistentField().get(obj);
 if(referencedObject != null)
  {
return new Identity(referencedObject, pb);
 }
 }
  else
  {
  return null;
  }
  }

 // ensure that top-level extents are used for Identities
 return new Identity(rds.getItemClass(),
pb.getTopLevelClass(rds.getItemClass()), fkValues);

In my case, I have a nullifiedFKValue, so it goes into the first If
block, and then it sees that it is an AnonymousKeyReference, but then my
referencesObject us null (like it should be). But instead of returning
null, it jums out to the bottom where it returns a new Identity!! Why is
it doing that? I can see where Armin has made some changes to handle
better AnonymousFK's, is this a side-effect of that?

-ANdrew


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Issue with Iterator.hasNext() and custom RowReader

2004-04-01 Thread Clute, Andrew
As a little background, I have created a custom RowReader that allows me
to filter out objects based upon some criteria on that object. More
specifically, if a deleted date exists on the object, I filter it out and
return null.

The idea behind this is that I want to maintain some history in my
database, but not even have the filtered object available to be used
outside of OJB. The custom rowreader works pretty well in doing that,
except I have run into an issue with Iterators returned from PB, and
also internal functions inside of OJB that use these iterators.

In my application, I know that the Iterator might have a spot that
contains a null object, but it.hasNext() will return true, because
the null object was placed in there.

However, there are places inside of OJB where the assumption has been
made that an object coming out of an Iterator will always be non-null.
This is probably not a bad assumption as you would assume this would
never be the case. But, as you can probably tell, what is happening is
the Iterator thinks it has a next value, but doesn't really know that it
doesn't until the next() method is called, and the RowReader filters it
out.

A good example of where OJB makes this assumption is in
ReferencePrefetcher.associateBatched(). But this is just one of many.


Now, I see a couple options:
1)Stop using custom RowReaders, and make it known via documentation that
a RowReader must still return a non-null objects, otherwise other code
will break
2) Fix all the points in other OJB methods where the assumption is made
that a value that comes out the Iterator is not null, and make it do a
null-check
3) Change RsIterator.next() to be smarter and give the RowReader a chance
to filter out the objects. However, the problem with this is if the
'filtered' object exists in the middle of the chain, the rs.next() would
return false, and the rest of the non-filtered objects would be lost.

I would love to get some thoughts on this. My guess is that number 1 or
2 is the best answer, although 1 seems to be pretty harsh.
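
To make option (2) concrete, the fix at each call site is just a guard
around it.next(); a minimal sketch of what such a defensive loop could look
like (names are illustrative, not the actual OJB code):

while (it.hasNext())
{
    Object candidate = it.next();
    if (candidate == null)
    {
        // the custom RowReader filtered this row out -- just skip it
        continue;
    }
    // ... normal processing of 'candidate' goes here ...
}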

Thanks

-Andrew




RE: soft-deleting objects

2004-03-23 Thread Clute, Andrew
I have implemented the very same thing via a custom RowReader. I just extended the
default RowReader, read in the value via super, and after that applied my
criteria to see if I should avoid it, and returned based on that.

Here is the method I implemented when extending RowReaderDefaultImpl:

public Object readObjectFrom(Map row) throws PersistenceBrokerException
{
    Object o = super.readObjectFrom(row);
    if (o instanceof AuditableBusinessObject)
    {
        if (((AuditableBusinessObject) o).getDeletedDate() != null)
            return null;
    }

    return o;
}

Hope this helps.
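
In case it saves someone a trip to the docs: the reader gets hooked in per
class through the row-reader attribute on the class-descriptor in the
repository file. Roughly like this (class and package names are made up for
the example; the attribute name is from the repository DTD as I remember it):

<class-descriptor
    class="com.example.model.Invoice"
    table="INVOICE"
    row-reader="com.example.ojb.AuditableRowReader">
    <!-- field-descriptors etc. as usual -->
</class-descriptor>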

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Tino Schöllhorn
Sent: Tuesday, March 23, 2004 6:21 AM
To: [EMAIL PROTECTED]
Subject: soft-deleting objects

Hi,

I want to implement something like a soft-delete:

Objects should be marked as deleted in their corresponding table and OJB should just
ignore them when it is materializing or querying them.

Where would be the best point to start when I want to implement this feature? I just 
played around with the RowReader-Concept - but I have the feeling that this is not the 
right place to start, because I think I have to modify the queries OJB is submitting 
to the database.

Any ideas?

Regards
Tino



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: OJB + MSSQL sp_executesql problem

2004-02-26 Thread Clute, Andrew
I am not sure what you mean by 'displaytag' -- could you explain that more?
I am curious how that limits the resultset returned from the database.

As for the driver, here it is:

http://www.inetsoftware.de/English/produkte/JDBC_Overview/ms.htm

-Andrew

 

-Original Message-
From: Robert S. Sfeir [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 26, 2004 10:10 AM
To: OJB Users List
Subject: Re: OJB + MSSQL sp_executesql problem

Clute, Andrew wrote:

We were using the jTDS driver for awhile to, with decent results -- 
except that it is not JDBC 2.0 compliant. It states that it is, but it 
is missing quite a few features -- mostly notably scrollable
resultsets.
So, if you do any paging work at all, I have found the jTDS driver to 
be orders of magnitude slower because it has to iterate though the 
entire result set.

We are now using the Merila driver from i-net, to much success.
  

I do most of my paging stuff using displaytag, no need for scrollable
resultsets.  That said I would love to get a link to that driver to
evaluate it.  MSSQL is not my favorite DB, but hey people use it, and
who am I to tell them not to.

R

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: OJB + MSSQL sp_executesql problem

2004-02-26 Thread Clute, Andrew
We were using the jTDS driver for a while too, with decent results --
except that it is not JDBC 2.0 compliant. It states that it is, but it
is missing quite a few features -- most notably scrollable resultsets.
So, if you do any paging work at all, I have found the jTDS driver to be
orders of magnitude slower because it has to iterate through the entire
result set.

We are now using the Merila driver from i-net, to much success.

-Andrew



-Original Message-
From: Robert S. Sfeir [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 26, 2004 8:47 AM
To: OJB Users List
Subject: Re: OJB + MSSQL sp_executesql problem

Yeah, and further, it seems that the MSSQL MS Driver can't deal with
sets in an unordered order of fields, meaning if the DB has id, name, age,
height, and you do sets in the order of id, age, name, height, the driver
will choke and complain.  How lame is that?

You might want to take a look at this driver, granted it's still beta,
but we've had good results with it:

http://sourceforge.net/projects/jtds/

R

Charles Anthony wrote:

Hi Alex,

Very simply, OJB does not issue the sp_executesql statement; the 
Microsoft JDBC driver does ! OJB just issues the SELECT statement.

I would strongly suggest that you look to using a different MSSQl JDBC 
Driver[1]; about a year ago I did a comparative benchmark of JDBC 
Drivers for MS SQL, looking at Microsoft, DataDirect, JSQLConnect and 
Opta2000. For the area of code in our app that I benchmarked, the 
Microsoft driver was by far the slowest, and Opta2000 was 50% faster. 
[2] I posted my results to the list, so they should be in the archive
somewhere.

The indexed attribute in the XML repository has no significance to 
the OJB runtime; it is there so that table schemas (or DDL) can be 
generated from the repository.

In short, if you have to use the Microsoft driver, it's probably worth 
asking around on their forums to see if anyone there has encountered 
this issue.


Cheers,

Charles


[1] It's advice my employer doesn't actually follow ! 
[2] As with all benchmarks, your mileage WILL vary in your app; don't 
rely on my comparisons, do your own benchmarks.

  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: 26 February 2004 02:13
To: [EMAIL PROTECTED]
Subject: OJB + MSSQL sp_executesql problem


Hello everyone ! 

I have been using OJB for the last several months in several projects 
and have had no problems at all - great product ! However, in the last

project, I am having problems with the sp_executesql statement that is

generated by OJB in the queries.  It apparently is a problem with my 
mssql

installation, but I am looking for a workaround without having to do 
anything major with the database.

A couple of lines about my db setup:
1. OJB rc5, jdk 1.4.2
2. mssql database - the table that I am having a problem with is quite

large - 60 million rows.

The problem is that it takes about 20 seconds to run a query to 
retrieve a

record from the database. When I look at the generated code, the query

is of the form 'sp_executesql SELECT ... from ... WHERE DCN=..'. 
If I run

the query directly in the MSSQL query analyzer, it takes just as  long

(so

apparently the problem is not with anything in OJB). However, if I 
take the query out of the 'sp_executesql..' statement, and run it as a

regular select query (e.g. only SELECT ... from ... WHERE 
DCN=..), it takes less than a second to run.  I investigated my 
set up and it appears that for some reason mssql messes up the indexes

on the table - instead of using the clustered index that is specified 
on the field on which I specify the WHERE condition (e.g. 'DCN' in the

sample query snippet above), it uses the index on the primary key 
(e.g. the Id field). When the

query is run as a literal (second example below) - everything works 
like a

charm and mssql selects the correct index. 

As you can see below, I thought that if I indicated in the repository 
that

the DCN column was indexed, it would resolve the issue; however, the 
indexed=true property does not seem to change the generated 
sp_executesql statement in any way.

So, my question is, is there a way to make mssql use the right index 
with some property in the configuration (e.g. that would possibly pass

an index

hint to the query)  ? Has anyone else encountered similar behaviour ? 

Sample code: 

The repository-user.xml
... 
   <class-descriptor
       class="com.divintech.cigna.printrejects.valueobjects.ScanClaimVO"
       table="Claim_Export_Summary">
       <field-descriptor id="1" name="id" column="ID" jdbc-type="INTEGER"
           access="readonly" autoincrement="true" primarykey="true"/>
       <field-descriptor id="2" name="dcn" column="DCN" jdbc-type="CHAR"
           access="readonly" indexed="true"/>
       <field-descriptor id="3" name="batchName" column="Batch_Name_IA"
           jdbc-type="VARCHAR" access="readonly"/>
       <field-descriptor id="4" name="exportDate" column="CreateDate"
           jdbc-type="DATE" access="readonly"/>
       <field-descriptor id="5"

RE: how to keep repository-user.xml mappings separate from ojb.sar?

2004-02-25 Thread Clute, Andrew
There really is no reason to deploy OJB as a SAR (I would love to hear
from anyone else who can tell me what a good use case would be for deploying
it as a SAR).

Just create a facade EJB that will create your PersistenceBroker and do
your find/save work there. Your client apps will not access the
broker, but instead the EJB's methods. In that case, your EAR will
contain your EJB-JARs, probably with your DAOs, and then you can
include the repository files in the root of that EAR -- works like a
charm. You now have one self-contained application, with its own OJB
'space'.
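
To be concrete, the facade method itself is nothing fancy -- a plain
stateless session bean method that opens a broker, does the work and closes
it again. A rough sketch (the Product class and query are just examples,
error handling is trimmed, and the imports come from org.apache.ojb.broker
and org.apache.ojb.broker.query):

public Collection findProductsByName(String name)
{
    PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
    try
    {
        Criteria crit = new Criteria();
        crit.addEqualTo("name", name);
        Query query = QueryFactory.newQuery(Product.class, crit);
        // the web app only ever sees the returned objects, never the broker
        return broker.getCollectionByQuery(query);
    }
    finally
    {
        broker.close();
    }
}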

You can even leverage the EJB examples that OJB ships with as your
pass-through EJB.

Let me know if this answers your problem, or if you have more questions.
I have a current deployment that is doing exactly this right now.

BTW, yes, this means that each EAR has its own OJB version -- but if
you have a unified data model, you really only want one persistence
layer across an enterprise.

-Andrew





-Original Message-
From: Michael Mogley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 25, 2004 12:47 PM
To: OJB Users List
Subject: Re: how to keep repository-user.xml mappings separate from
ojb.sar?

Hi Armin,

So in this case, how would I access the broker?  I assume through JNDI
somehow?  Would I have to setup the PersistenceBrokerFactory as a
resource adaptor?  How would I do that?

Michael

- Original Message -
From: Armin Waibel [EMAIL PROTECTED]
To: OJB Users List [EMAIL PROTECTED]
Sent: Wednesday, February 25, 2004 5:21 AM
Subject: Re: how to keep repository-user.xml mappings separate from
ojb.sar?


 Hi Michael,

 Michael Mogley wrote:
  Hi all,
 
  I'm trying to deploy an application on JBoss 3.2.3 using latest OJB.
I've followed the steps to create an ojb.sar in the deployment dir.
 
  I would like to keep the xml mapping definitions and DAOs local to
the
specific .ear
 I'm deploying.  Is this possible?  Or must I keep all the mappings for
 all applications
 in one repository-user.xml in the ojb.sar?
 

 Don't use .sar file in this case (AFAIK it's JBoss specific and shared
 among the deployed application), try to include all OJB jar (with used
 libraries) + beans in each application .ear file?
 (Don't ask for details ;-))

 regards,
 Armin

  Thanks for any help/advice.
 
  Michael

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Bug (and Fix) with a custom RowReader and PB.getObjectByQuery()

2004-02-24 Thread Clute, Andrew
I have implemented a custom RowReader that filters out deleted items for
me. I have a deleted_date column on all my tables, and this RowReader
checks to see if this column is not null, and if it is null, returns the
object, otherwise returns null.

Easy and effective way for me to soft delete items -- still in the
database, but not in my object model.

However, in the getObjectByQuery() call, it assumes the first result
that is returned from the RsIterator is the only result, and returns
that.

I have a situation where an item with certain criteria is in my
database twice -- once deleted, and then a non-deleted version of it.
When I do a PB.getObjectByQuery(), the RsIterator gets both results
from the database, but the first row is the deleted row, so my RowReader
filters it out, and I do not get the right result.

The current code in PersistenceBrokerImpl looks like this:

OJBIterator it = getIteratorFromQuery(query, cld);
Object result = null;
if (it.hasNext())
{
    result = it.next();
}
it.releaseDbResources();
return result;

As you can see, there are distinct cases where the user will not get the
right results if using a custom RowReader, since the assumption is that the
first row will always be the one wanted.

I would like to change the code to the following:

OJBIterator it = getIteratorFromQuery(query, cld);
Object result = null;
while (result == null && it.hasNext())
{
    result = it.next();
}
it.releaseDbResources();
return result;

This will ensure that if there are multiple possible results for a
query, that a custom RowReader will have a chance to find the first
instance of a result that will work.

Any thoughts or reasons why we could not change this?

-Andrew

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Xdoclet: Why the default size of 24 on VARCHAR's in the new version?

2004-02-12 Thread Clute, Andrew
I see in the documentation, and the new build of my repository file from
the 1.2 Release of the OJB Xdoclet module, that if you do not specify a
length for your varchar's, it defaults to a value of 24.

Why was this change necessary? I see that it says because of MySQL that
it now defaults to that, but I am confused as to the ramification for
the rest of us. Up to this point, at least on SQL Server, having no
length specified worked great. And actually, I would prefer not to have
to maintain a length field in my java files, if it is not necessary.

Any idea of what will happen if I leave out the length, it defaults to
24, and my column has data longer than 24? Will it truncate it?

Since this setting was for MySQL only, could we have a flag on the
Xdoclet tag indicating that it is a MySQL run, and have it apply the default
length only in that case; otherwise, don't default to a length?
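
In the meantime, fields that need more than 24 characters can state the
length explicitly on the field tag so the default never applies; from memory
the tag looks roughly like this (double-check the attribute names against the
xdoclet module documentation):

/**
 * @ojb.field column="description"
 *            jdbc-type="VARCHAR"
 *            length="255"
 */
private String description;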

Thanks

-Andrew

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Bug in AbstractSequenceManager found!!

2004-02-12 Thread Clute, Andrew
Armin,

First off, yes, it does work. Thanks! Feel free to commit to CVS.

Second, I want to apologize. I get frustrated with people at my work
who don't attempt to solve their own problems, and it seems that I did
the same thing to you. I was on my way out the door from work yesterday
when this crept up in my upgrade to the latest from HEAD. I was in a
rush to get an email to the list before I left so you would see it first
thing in the morning, and I didn't spend any time actually finding the
solution.

As soon as I got home from my commute, I looked at it, and came to the
same fix that you did, about 10 minutes before you sent your email out. 

So, once again, thanks!

-Andrew



-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 11, 2004 7:04 PM
To: OJB Users List
Subject: Re: Bug in AbstractSequenceManager found!!

Hi Andrew,

oops, my fault! I can't test this sequence manager implementation,
because I don't use MSSQL.
Seems we have to override getUniqueValue method of AbstractSM..

I attached a new version of SequenceManagerMSSQLGuidImpl - hope it will
pass the apache server ;-) Can you verify this class?

regards,
Armin

Clute, Andrew wrote:

 I recently updated to HEAD of CVS, and it exposed an issue I have 
 within the AbstractSequenceManager.
 
 I was the original author of the SequenceManagerMSSQLGuidImpl sequence

 manager, with it's goal to be to allow the use of MSSQL's unique 
 identifier as their primary key. When the SequenceManagerMSSQLGuidImpl

 was written, the code inside of AbstractSequenceManager (v 1.10) would

 do a check of the JDBC type of the primary key field, and the call the

 apporpriate getUniqueX field.
 
 Because the MSSQL unique identifiers come back best as VARCHAR's (they

 look like this '3C6D40F6-F961-49F2-B7F4-BAAA48B8F1F3'), I had them 
 defiend as varchar's in the repository, and AbstractSequenceManager 
 would see that and call the getUniqueString() method on the sequence 
 manager, and all would be fine. Worked like a charm.
 
 However, the code in AbstractSequenceManager (as of v1.11) now no 
 longer does that type check, and just assumes that getUniqueLong() 
 will work and blindly calls that! Opps! That won't work for the 
 SequenceManagerMSSQLGuidImpl because there is no way to return a long 
 representation of the GUID string that is returned from MSSQL.
 
 So, that makes SequenceManagerMSSQLGuidImpl broken, as it has no way 
 to return a valid PrimaryKey.
 
 Is this something we should fix, or am I out of luck now with my 
 SequenceManagerMSSQLGuidImpl?
 
 Thanks
 -Andrew
 
 
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Bug in AbstractSequenceManager found!!

2004-02-11 Thread Clute, Andrew
I recently updated to HEAD of CVS, and it exposed an issue I have within
the AbstractSequenceManager.

I was the original author of the SequenceManagerMSSQLGuidImpl sequence
manager, with its goal being to allow the use of MSSQL's unique
identifier as the primary key. When the SequenceManagerMSSQLGuidImpl
was written, the code inside of AbstractSequenceManager (v 1.10) would
do a check of the JDBC type of the primary key field, and then call the
appropriate getUniqueX method.

Because the MSSQL unique identifiers come back best as VARCHARs (they
look like this '3C6D40F6-F961-49F2-B7F4-BAAA48B8F1F3'), I had them
defined as varchars in the repository, and AbstractSequenceManager
would see that and call the getUniqueString() method on the sequence
manager, and all would be fine. Worked like a charm.

However, the code in AbstractSequenceManager (as of v1.11) now no longer
does that type check, and just assumes that getUniqueLong() will work
and blindly calls that! Oops! That won't work for the
SequenceManagerMSSQLGuidImpl because there is no way to return a long
representation of the GUID string that is returned from MSSQL.

So, that makes SequenceManagerMSSQLGuidImpl broken, as it has no way to
return a valid PrimaryKey.
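
For what it's worth, the GUID manager only ever has to produce character
keys, so one possible fix is to override the generic dispatch inside
SequenceManagerMSSQLGuidImpl itself and route everything through the string
path. A sketch only -- the method and exception names follow the discussion
here, and the exact signature of getUniqueValue in your version of
AbstractSequenceManager should be checked first:

// sketch: GUID primary keys are character columns, never numeric,
// so skip the getUniqueLong() path entirely
protected Object getUniqueValue(FieldDescriptor field) throws SequenceManagerException
{
    return getUniqueString(field);
}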

Is this something we should fix, or am I out of luck now with my
SequenceManagerMSSQLGuidImpl?

Thanks
-Andrew




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Status on 1.0 release?

2004-01-29 Thread Clute, Andrew
I haven't seen any talk recently on the list about the progress for when
the 1.0 release will be labeled. Is there a plan now for when that might
happen? Are there certain bugs outstanding that are keeping this from
happening that we can help chip in to fix?

I will admit my curiosity is for a somewhat selfish reason -- I am still
using RC4 and have been wanting to hold off on the transition until 1.0
came out, versus RC5.

Thanks

-Andrew


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [PB API] concurrency problems

2004-01-29 Thread Clute, Andrew
Maybe I am missing something... but why not wait until the entire tree
(collections and references) has been materialized before pushing the
object on the cache?

Or, is there a problem with circular references by doing that?

-Andrew



 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 29, 2004 1:31 PM
To: OJB Users List
Subject: Re: [PB API] concurrency problems

Hi Sven,

I can reproduce your problem. It's a really nasty concurrency
materialization problem when using global/shared caches and an object
is materialized for the first time:

Say we have a class Account with reference to Buyer, Buyer has a
reference to Address and Address has a reference to AddressType (similar
to your test case)

Account -1:1- Buyer -1:1- Address -1:1- AddressType

We set autoretrieve true.

Thread_1: PB looks up the Account object. The object was not found in the cache, so PB
starts to materialize Account and pushes it to the cache, then the first
reference is materialized and pushed to the cache,...

Thread_2: PB looks up the Account object. The object was found in the cache and
returned. But the found object is the same one pushed by thread_1, thus PB
returns a not fully materialized Account object.

If a local cache was used, e.g. ObjectCachePerBrokerImpl this situation
will not arise.

The solution is to set the attribute 'refresh=true' in all
reference-descriptors. This forces OJB to look up each reference for an
object whether or not it was found in the cache.
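
For anyone hitting the same thing, the attribute goes on the
reference-descriptor in the repository file, for example (class and field
names invented for the example):

<reference-descriptor name="buyer"
                      class-ref="com.example.Buyer"
                      auto-retrieve="true"
                      refresh="true">
    <foreignkey field-ref="buyerId"/>
</reference-descriptor>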

regards,
Armin

Sven Efftinge wrote:

 I just changed ObjectCacheClass from DefaultImpl to EmptyImpl.
 Without caching the test doesn't fail.
 I think this happens because the threads don't have to share the 
 objects in this case.
 Sven
 
 Sven Efftinge wrote:
 
 Hi Armin,
 I wrote a test for this, now.
 Unfortunately I wrote it against my application, because I haven't 
 installed the OJB test suite yet ;( The test starts n threads. Each 
 thread creates a PersistentBroker, retrieves 4 instances of an entity

 and then checks some references for each entity.
 When I run the test for 1 thread the test passes.
 When I run it for e.g. 30 threads the test fails(most of the time) 
 because some references were null.
 Mmh... the test fails to RC4, also.
 I'm not sure if this is really the same problem because I don't get 
 any errors from OJB directly. Only the null references.

 Here is the test (I used GroboUtils from SF):

public void testConcurrentRead() throws Throwable {
    int numthreads = 30;
    TestRunnable[] tests = new TestRunnable[numthreads];

    for (int i = 0; i < tests.length; i++) {
        tests[i] = new FetchPersistentObjects();
    }
    MultiThreadedTestRunner testRunner =
            new MultiThreadedTestRunner(tests);

    testRunner.runTestRunnables();
}

class FetchPersistentObjects extends TestRunnable {
    /* (non-Javadoc)
     * @see net.sourceforge.groboutils.junit.v1.TestRunnable#runTest()
     */
    public void runTest() throws Throwable {
        PersistenceBroker broker = null;
        KontoIF konto = null;
        try {
            broker = PersistenceBrokerFactory.createPersistenceBroker(pbKey);
            Criteria crit = new Criteria();
            crit.addEqualTo(KONTO.NAME, "test");
            QueryByCriteria query = new QueryByCriteria(Konto.class, crit);
            // we have 4 kontos with name="test" in the database
            List kontos = (List) broker.getCollectionByQuery(query);
            for (Iterator iter = kontos.iterator(); iter.hasNext();) {
                konto = (KontoIF) iter.next();
                assertEquals("test", konto.getName());
                assertNotNull("All kontos have a reference to an interessent",
                        konto.getInteressent());
                assertNotNull("All interessents have a reference to an adresse",
                        konto.getInteressent().getAdresse());
                assertNotNull("All adresses have a reference to an adresseart",
                        konto.getInteressent().getAdresse().getAdresseart());
                assertNotNull("All adressearts have a varname",
                        konto.getInteressent().getAdresse().getAdresseart().getVarname());
            }
        } finally {
            broker.close();
        }
    }
}

 Armin Waibel wrote:

 Hi Sven,

 I don't know if your problem rely on a concurrency problem. Between
 rc4 and rc5 I changed handling of DB resources in RsIterator class.
 Now OJB is very strict in closing used resources. All resources will

 be released when
 - PB instance was closed
 - PB commit call is done
 - PB abort call is done
 This helps to avoid abandoned Statement/ResultSet instances.

  org.apache.ojb.broker.PersistenceBrokerException:
 
org.apache.ojb.broker.accesslayer.RsIterator$ResourceClosedException:
  Resources no longer reachable, RsIterator will be automatic 
  cleaned up on PB.close/.commitTransaction/.abortTransaction

 The exception says that OJB has 

RE: [PB API] concurrency problems

2004-01-29 Thread Clute, Andrew
I realized about 5 minutes after I sent this that it *would* in fact
cause circular reference problems. So, that wouldn't work.

This is a pretty tricky issue. The other solution I had would be to
'soft' commit the parent object to the cache until the entire map below
it is materialized, and then hard commit it to the cache -- with only
the methods that are materializing the references and collections having
access to it (or you could put it to some temporary cache that is only
available to that thread for materializing that map).

-Andrew





-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 29, 2004 3:00 PM
To: OJB Users List
Subject: RE: [PB API] concurrency problems

Maybe I am missing somethingbut why not wait until the entire tree
(collections and references) has been materialized before pushing the
object on the cache?

Or, is there a problem with circular references by doing that?

-Andrew



 

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 29, 2004 1:31 PM
To: OJB Users List
Subject: Re: [PB API] concurrency problems

Hi Sven,

I can reproduce your problem. It's a really nasty concurrency
materialization problem when using global/shared caches and an object
was materialized first time:

Say we have a class Account with reference to Buyer, Buyer has a
reference to Address and Address has a reference to AddressType (similar
to your test case)

Account -1:1- Buyer -1:1- Address -1:1- AddressType

We set autoretrieve true.

Thread_1: PB lookup Account object. Object was not found in cache, PB
start to materialize Account and push this to cache, then first
reference was materialized and pushed to cache,...

Thread_2: PB lookup Account object. Object was found in cache and
returned. But the found object is the same pushed by thread_1, thus PB
returns a not full materialized Account object.

If a local cache was used, e.g. ObjectCachePerBrokerImpl this situation
will not arise.

Solution is to set attribute 'refresh=true' in all
reference-descriptor. This force OJB to lookup each reference for an
object whether or not it was found in cache.

regards,
Armin

Sven Efftinge wrote:

 I just changed ObjectCacheClass from DefaultImpl to EmptyImpl.
 Without caching the test doesn't fail.
 I think this happens because the threads don't have to share the 
 objects in this case.
 Sven
 
 Sven Efftinge wrote:
 
 Hi Armin,
 I wrote a test for this, now.
 Unfortunately I wrote it against my application, because I haven't 
 installed the OJB test suite yet ;( The test starts n threads. Each 
 thread creates a PersistentBroker, retrieves 4 instances of an entity

 and then checks some references for each entity.
 When I run the test for 1 thread the test passes.
 When I run it for e.g. 30 threads the test fails(most of the time) 
 because some references were null.
 Mmh... the test fails to RC4, also.
 I'm not sure if this is really the same problem because I don't get 
 any errors from OJB directly. Only the null references.

 Here is the test (I used GroboUtils from SF):

public void testConcurrentRead() throws Throwable {
int numthreads = 30;
TestRunnable[] tests = new TestRunnable[numthreads];

for (int i = 0; i  tests.length; i++) {
tests[i] = new FetchPersistentObjects();
}
MultiThreadedTestRunner testRunner =
new MultiThreadedTestRunner(tests);

testRunner.runTestRunnables();

}

class FetchPersistentObjects extends TestRunnable {
  /* (non-Javadoc)
 * @see
net.sourceforge.groboutils.junit.v1.TestRunnable#runTest()
 */
public void runTest() throws Throwable {
PersistenceBroker broker = null;
KontoIF konto = null;
try {
broker =
 PersistenceBrokerFactory.createPersistenceBroker(pbKey);
Criteria crit = new Criteria();
crit.addEqualTo(KONTO.NAME, test);
QueryByCriteria query = new 
 QueryByCriteria(Konto.class, crit);
//we have 4 kontos with name=test in the
database
List kontos = (List)
broker.getCollectionByQuery(query);
for (Iterator iter = kontos.iterator();
iter.hasNext();) {
konto = (KontoIF) iter.next();
assertEquals(test, konto.getName());
assertNotNull(All kontos have a reference to an 
 interessent, konto.getInteressent());
assertNotNull(All interessents have a reference 
 to an adresse, konto.getInteressent().getAdresse());
assertNotNull(All adresses have a reference to an

 adresseart, konto.getInteressent().getAdresse().getAdresseart());
assertNotNull(All adressearts have a varname, 
 konto.getInteressent().getAdresse().getAdresseart().getVarname());
}
} finally

Migration from RC4 - RC5?

2003-12-15 Thread Clute, Andrew
Sorry if this has been addressed, but has there been any discussion about
steps necessary for transition from RC4 to RC5?

More specifically, have there been any fundamental changes to API calls, or
to the OJB.properties file that precludes someone from just taking the new
RC5 jar file and dropping it in place of the RC4 jar file?

Thanks



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [OFF-TOPIC] JAVA JOB QUESTIONs

2003-12-12 Thread Clute, Andrew
Yeah, sorry, I guess I assumed that it was known the cost of living was
higher in the bigger cities, thus justifying the increase in salary.

I will say one thing for being a Java developer -- the starting pay, while
fair, is not extremely high. But as you become more and more experienced,
and have a proven track record, the pay range can scale up pretty fast.

I know some good 5-8 year guys who can make 75-100k, with 10+ year
architects being 100k+ -- YMMV.

 

-Original Message-
From: Gus Heck [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 12, 2003 2:10 PM
To: OJB Users List
Subject: Re: [OFF-TOPIC] JAVA JOB QUESTIONs


Clute, Andrew wrote:

Depends on where in the country you want to live.

In the Midwest, I usually see (and hire) junior level guys in the 
40-55k range.

If you can find a job on either of the coasts (NYC,SF,Boston) you are 
looking at probably another 15% on top of that.

  

But the cost of living is at least 15% higher too unless you commute 
over an hour a day. I'm in the lower half of that range and about 50% of 
my income (after taxes) goes to rent and utilities. I do get very good 
benefits however.


-Original Message-
From: Tiago Henrique Costa Rodrigues Alves 
[mailto:[EMAIL PROTECTED]
Sent: Friday, December 12, 2003 2:29 PM
To: OJB Users List (E-mail)
Subject: [OFF-TOPIC] JAVA JOB QUESTIONs


Hi,

I am wondering, how much an ordinary Java/JSP developer with 2 years 
experience can make a year? ( Minimal / USA )

Tiago Henrique C. R. Alves


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

  




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Best practice for using ODMG with EJB? (Cache also)

2003-10-17 Thread Clute, Andrew
I currently have our application running using OJB. I am using the PB
interface because it was the easiest to prototype and get up and running.

We have a Struts application that calls a collection of EJB services for
retrieving specific object-trees that the web app needs, along with
Add/Update/Delete methods on the EJB's. One of my main selling points for
convincing the team to move away from PHP to Java/J2EE was the strengths of
O/R tools like OJB, specifically the cache -- I think it is a strong seller,
especially in a 80% read-only application.

So, to facilitate that, I constructed a Façade wrapper around the
PersistenceBroker (so, if I wanted to, I could swap it to ODMG/JDO), and it
seems to work well. I have deployed our 'Core' application as a collection
of EJB's that make use of OJB under the hood, and then our web application
as separate war file. But, because they are in the same container (Jboss),
it makes use of the Local versus Remote interfaces -- which is desired.
However, when using the cache, and the local interface, any manipulation
done by the web application on it's objects is manipulating the object in
cache.

I always thought of the cache as a 'clean' representation of what was in the
database -- so in all of my retrieve methods in my EJBs, I return clones
of the DataObjects. This allows the client applications to manipulate
them without affecting the cached objects, and to send them back for committing,
also updating the cache.
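
A deep copy via a serialization round-trip is one simple way to produce such
a clone, assuming the persistent classes are Serializable; commons-lang's
SerializationUtils is used here purely as one convenient option, and Product
stands in for whatever persistent class is returned:

import org.apache.commons.lang.SerializationUtils;

// hand out a copy so callers can modify it freely without dirtying the cache
public Product copyForClient(Product cached)
{
    return (Product) SerializationUtils.clone(cached);
}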

But because the PB API is not a full persistence API, I am starting to hit the
issues that APIs like ODMG fix (deleted objects in collections, object
locking, etc) -- and want to get a feel for how best to use something like
ODMG in my situation.

My goals are:

1) To have a centralized application that handles all database and service
level transactions. It would hand out objects from the cache (preferably
clones) and receive objects to store them. We only have one client
application that would be using this, but down the road we will have many
more
2) Move to an ODMG-like API that can manage locking and whatnot, freeing me
from having to manage object locking, deletion, etc.
3) For Goal 1 to make use of the cache -- most of our applications are
read-only. So that makes sense to make heavy use of the cache -- but at the
same time we do have update scenarios that I would like to be 'atomic'.

Is there a pattern that facilitates these goals?

Thanks!

-Andrew

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Best practice for using ODMG with EJB? (Cache also)

2003-10-17 Thread Clute, Andrew
Thanks for the offer...I can definitely peer around in the code to get a
feel for it.

However let me ask you this...how would I best use OTM in this situation?
Would OTM solve my problem?

Let me rephrase, assuming I could use OTM, would the EJB service continue to
pass out clones to client applications, and then later on handle the
merging, and object deletion etc? Or instead would I be passing out straight
objects from the cache, and OTM manages the changes to those objects?

Let me ask this different question -- is it a bad assumption on my part if I
am going to use the cache to assume the cache will only contain 'committed'
data, and not transient data that client applications may or may not be
updating to it? If the later is the case, and there is much more to be
gained by passing out the original objects from the cache so I can use
things like ODMG, I would assume then I need to put into place safeguards to
make sure that any changes that my client applications make to those objects
are thrown out if not committed within a reasonable time. I know that
optimistic locking will handle the scenario when two clients are
manipulating the same object at the same time -- but I can't get past the
case in my head where a client application dirties the object, but never
saves it -- and now the cache is dirty.

I guess in the immediate future, I can code around that by convention since
I control all of the client applications, and these client apps live in the
same container so they are all pass-by-reference. And when we do allow other
outside applications to hit this service it will be in a different
container, so now they are pass-by-value, so there is some sanity in that.
Our client apps would be using cached objects and would be careful with
them.

All that leads me to another question that I don't seem to find an answer
to: if a remote EJB passes off an OJB dataobject, I would assume you cannot
use any proxies because it would not be able to walk back across the wire,
correct? I am currently not making use of any proxies because of that
assumption. Seems to me that proxies don't fit well at all with EJB's.

Have I rambled enough? :)

-Andrew

-Original Message-
From: Brian McCallister [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 17, 2003 10:18 AM
To: OJB Users List
Subject: Re: Best practice for using ODMG with EJB? (Cache also)


There is no documentation outside of the javadocs for it yet. I can 
happily provide pointers and samples for you, if you like, from my 
learn the OTM project. I am working on learning my way around it well 
enough to write decent docs on it -- just a matter of finding the time.

I wouldn't call the OTM production-ready yet (which is why I asked 
about your timeline), and it won't be production-ready in 1.0 so real 
OTM docs probably won't be published outside of CVS for a while (I 
could be wrong on this, but in general introducing a whole new API 
while trying to lock down a 1.0 release is a bad idea).

-Brian

On Friday, October 17, 2003, at 10:07 AM, Clute, Andrew wrote:

 Thanks for the response...you added some interesting point.

 I knew somewhat that the OJB cache only worked with QueryByIdentity(),
 but
 the gains are still significant for us. Our object model has some 
 rather
 deep and wide trees (for better or worse), and the gain on object
 materialization on the hanging objects is a real positive. Maybe I am
 overstating it's benefit, but I do see a significant real-world 
 difference
 when the cache is on and off -- however I am not above attributing 
 that to
 bad design on my part for the object model.

 As for OTM -- I will look into that. However, I don't see any
 documentation,
 or for that matter mention, of it on the OJB site. Am I missing 
 something?

 -Andrew

 -Original Message-
 From: Brian McCallister [mailto:[EMAIL PROTECTED]
 Sent: Friday, October 17, 2003 9:47 AM
 To: OJB Users List
 Subject: Re: Best practice for using ODMG with EJB? (Cache also)


 Depending on your deployment timeline you might want to look into the 
 OTM for this type of deployment. It provides the high-level 
 functionality you are looking for from ODMG, but it also knows how to 
 play very nicely with JTA transactions -- a big plus in EJB 
 containers. The OTM is the least mature of OJB's API's, however. That 
 said, it is my favorite by a long shot.

 An important note about OJB's cache -- the only query type that 
 completely reads from the cache as compared to querying is the 
 QueryByIdentity type query. The cache is primarily used to avoid 
 object materialization and maintain reference integrity.

 Think for a moment about the query select products from 
 org.apache.ojb.tutorials.Product where cost  10.0. The query has to 
 be executed as the cache cannot know that it has all the satisfied 
 objects. Objects in the cache won't be re-materialized at least, but 
 the query still needs to be run against the database.

 It *is* possible to get major

RE: Best practice for using ODMG with EJB? (Cache also)

2003-10-17 Thread Clute, Andrew
That is kind of what I assumed. It makes using proxies somewhat worthless in
an EJB environment, at least in my opinion. Proxies to me seemed to be a
great way to do lazy instantiation, and keep the client ignorant of
persistence mechanisms.

So, if you have to materialize the entire graph before serialization, then
you really don't gain anything.

Thanks for the response!



-Original Message-
From: Phil Warrick [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 17, 2003 11:01 AM
To: OJB Users List
Subject: Re: Best practice for using ODMG with EJB? (Cache also)


Hi Andrew,

 All that leads me to another question that I don't seem to find an 
 answer
 to: if a remote EJB passes off an OJB dataobject, I would assume you
cannot
 use any proxies because it would not be able to walk back across the wire,
 correct? I am currently not making use of any proxies because of that
 assumption. Seems to me that proxies don't fit well at all with EJB's.

You can use proxies with EJB across the wire (I do, in ODMG mode), but 
the downside is that the client must never try to materialize them and 
the client needs to be OJB-aware.  Before dereferencing a proxied object 
on the client side, care must be taken to ensure that they have been 
materialized on the server before serialization. There is a tradeoff 
here in controlling object-graph size with proxies on the one hand, and 
transparency of access to persistent classes--regardless of whether the 
JVM is local or remote--on the other.
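
A low-tech way to ensure that is to 'touch' whatever parts of the graph the
client will need while still inside the bean, so the proxies materialize
server-side before serialization. Purely illustrative -- the getters and
classes here are invented for the example:

// inside the session bean method, before returning 'order' to a remote client
order.getCustomer().getName();      // dereference the 1:1 proxy so it loads now
order.getLineItems().size();        // dereference the collection proxy
for (Iterator it = order.getLineItems().iterator(); it.hasNext();)
{
    // walk one level deeper if the client needs it
    ((LineItem) it.next()).getProduct().getName();
}
return order;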

Removing the need for the client to be OJB-aware could be resolved at 
either the OJB level or in your application by altering serialization 
code (writeObject()/readObject() methods I believe) for your persistent 
classes, to replace any proxies with something else.  But in this 
scenario the client still needs to be concerned with delegating to the 
server any materialization of associated objects that must be accessed.

There has been some discussion from time to time on improving this.

Phil


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Connection failed if DB restarted?

2003-10-17 Thread Clute, Andrew
I apologize in advance for the increase in mail traffic today. I have
checked the archives for an answer to this, to no avail.

I am seeing, as expected, that if our database has been restarted, any db
connections in the pool are becoming invalid. I understand how this is
happening -- the connection doesn't realize the socket connection has been
reset, and so it fails to persist any data. I have simulated the situation,
and I can see that the connection, when it is retrieved from the pool, is still
marked as active.

I am using ConnectionFactoryPooledImpl. What seems odd to me, is it never
self-corrects. It looks to me like when the connection failed, it is still
in the pool, and I never get a valid connection.

OJB is being used by EJB's sitting in Jboss. I am not using OJB as a
deployed EJB, but calling directly. I am also not using a DataSource inside
of Jboss, but instead the connection pool from OJB.

It seems to me using the pooled connections, that when the database is
restarted, then I have to take down my Jboss server and restart it to
refresh the pool. Am I missing something here? Now, I could use no pool, and
then the problem never arises. Would using the DataSource from jBoss and
ConnectionFactoryManagedImpl make it better?

What should my expectations be for db connections when the database has been
restarted? (Bigger issue here is our SQL Server cluster -- all the
connections become invalid when the server fails over to a different
server).

-Andrew


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Connection failed if DB restarted?

2003-10-17 Thread Clute, Andrew
Thanks for the answer...worked the way I wanted it to for my immediate
problem in production.

Not keen on the fact that I now have a ton of extra SQL calls--but I guess
that is the price I pay right now! :)

Thanks again!

-Andrew

-Original Message-
From: Armin Waibel [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 17, 2003 11:30 AM
To: OJB Users List
Subject: Re: Connection failed if DB restarted?


Hi Andrew,

you can use the 'validationQuery' attribute of the connection-pool element to
specify a validation query. This query will be executed each time before a
connection is delivered by the pool.
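
For reference, the attribute sits on the connection-pool element inside the
jdbc-connection-descriptor; something along these lines (the values are only
an example -- pick a query that is cheap on your database):

<!-- inside the jdbc-connection-descriptor element of the repository file -->
<connection-pool
    maxActive="30"
    validationQuery="select 1"/>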

Any proposals to make this more sophisticated are
welcome.

regards,
Armin

On Fri, 17 Oct 2003 11:20:02 -0400, Clute, Andrew 
[EMAIL PROTECTED] wrote:

 I apologized in advance for the increase in mail traffic today. I have 
 checked the archives for an answer to this, to no avail.

 I am seeing, as expected, if our Database has been restarted that any 
 db connections in the pool as becoming invalid. I understand how this 
 is happening -- the connection doesn't realize the socket connection 
 has been reset, and so it fails to persist any data. I have simulated 
 the situation,
 and I can see the connection when it is retrieved from the pool is still
 marked as active.

 I am using ConnectionFactoryPooledImpl. What seems odd to me, is it 
 never self-corrects. It looks to me like when the connection failed, 
 it is still in the pool, and I never get a valid connection.

 OJB is being used by EJB's sitting in Jboss. I am not using OJB as a 
 deployed EJB, but calling directly. I am also not using a DataSource 
 inside of Jboss, but instead the connection pool from OJB.

 It seems to me using the pooled connections, that when the database is 
 restarted, then I have to take down my Jboss server and restart it to 
 refresh the pool. Am I missing something here? Now, I could use no 
 pool, and then the problem never arises. Would using the DataSource 
 from jBoss and ConnectionFactoryManagedImpl make it better.

 What should me expectations be for db connections when the database 
 has
 been
 restarted? (Bigger issue here is our SQL Server cluster -- all the
 connections become invalid when the server fails over to a different
 server).

 -Andrew


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Problem with refreshing references; Possible bug?

2003-10-16 Thread Clute, Andrew
I have found an issue that I think is a bug, but I want to make sure that is
the case.

Given the following scenario:

Object A has 'refresh=true' on its descriptor, and has a reference to
Object B, and in the reference descriptor for Object B, 'refresh=true' is
set as well.

However, Object B's class descriptor does not have the 'refresh=true' set.

In my scenario, when Object A is retrieved, I want to always refresh, and to
refresh Object B...but I don't want Object B to be refreshed in every
situation, only when Object A is getting refreshed.

Based upon having refresh attributes available on both the references and
class descriptors, I would assume this would work.

However, looking through the code, when refreshing Object A, it will look
and correctly realize that it needs to refresh Object B, but when Object B
is then being refreshed, it looks at its class descriptor and
sees that it does not need to be refreshed, and returns the cached object.

So, even though the reference descriptor says to refresh, the object ignores
it and doesn't, because the class descriptor for that class says it does
not need to be refreshed.

Is this expected behavior?

For those who want to look at the code..

In the method PersistenceBrokerImpl.getReferencedObject() -- if the object
is already in the cache, it will blindly call
PersistenceBrokerImpl.getObjectByIdentity(), which will only look at the
ClassDescriptor to determine whether or not to refresh it, and not the
ReferenceDescriptor that determined that it needs to be refreshed.
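
In the meantime, a possible workaround on my side (just a sketch, not verified
against the OJB internals) is to evict the referenced object from the cache
before touching the reference, so the lookup cannot be satisfied from the
cache. ObjectA/ObjectB, getObjectB() and the "objectB" attribute name are the
placeholder names from this mail, and queryForA is whatever query normally
loads Object A:

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;

PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
try
{
    ObjectA a = (ObjectA) broker.getObjectByQuery(queryForA);
    broker.removeFromCache(a.getObjectB());  // evict the stale cached copy of B
    broker.retrieveReference(a, "objectB");  // force a re-read of B from the db
}
finally
{
    broker.close();
}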

-Andrew

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Problem with refreshing references; Possible bug?

2003-10-16 Thread Clute, Andrew
I should also mention this seems to be an issue with Collections as
well... if a collection descriptor declares that the collection objects
should be refreshed, it will not do so if the class descriptor for the member
class does not allow it to be refreshed.



-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 16, 2003 12:31 PM
To: 'OJB Users List'
Subject: Problem with refreshing references; Possible bug?


I have found an issue that I think is a bug, but I want to make sure that is
the case.

Given the following scenario:

Object A has 'refresh=true' on its descriptor, and has a reference to
Object B, and in the reference descriptor for Object B, 'refresh=true' is
set as well.

However, Object B's class descriptor does not have the 'refresh=true' set.

In my scenario, when Object A is retrieved, I want to always refresh, and to
refresh Object B...but I don't want Object B to be refreshed in every
situation, only when Object A is getting refreshed.

Based upon having refresh attributes available on both the references and
class descriptors, I would assume this would work.

However, looking through the code, when refreshing Object A, it will look
and correctly realize that it needs to refresh Object B, but when Object B
is then being refreshed, it looks at its class descriptor and
sees that it does not need to be refreshed, and returns the cached object.

So, even though the reference descriptor says to refresh, the object ignores
it and doesn't, because the class descriptor for that class says it does
not need to be refreshed.

Is this expected behavior?

For those who want to look at the code..

In the method PersistenceBrokerImpl.getReferencedObject() -- if the object
is already in the cache, it will blindly call
PersistenceBrokerImpl.getObjectByIdentity(), which will only look at the
ClassDescriptor to determine whether or not to refresh it, and not the
ReferenceDescriptor that determined that it needs to be refreshed.

-Andrew

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




OJB Xdoclet: Is there a way to get class-descriptor attributes to extend to subclasses?

2003-09-18 Thread Clute, Andrew
I have a model like this:

BusinessObject -- AuditableBusinessObject --  Dog

Dog extends AuditableBusinessObject -- along with 30 or so other classes,
while 15 or so only extend BusinessObject.

AuditableBusinessObjects all have created/modified/deleted dates, and are
never deleted, only 'soft' deleted -- the deleted date is set, but
the record is not removed from the db -- and any retrieves from the database
need to filter these out. The AuditableBusinessObject class has no
corresponding table -- all it adds is references to 3 fields that exist
in the subclasses' tables.
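
Roughly, the base class looks like this (the exact field and accessor names
here are illustrative, not lifted from our code):

import java.sql.Timestamp;

public abstract class AuditableBusinessObject extends BusinessObject
{
    protected Timestamp createdDate;
    protected Timestamp modifiedDate;
    protected Timestamp deletedDate;   // stays null while the record is 'live'

    public boolean isDeleted()
    {
        return deletedDate != null;
    }
}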

Well, I have written my own RowReader to filter these out, and it works
great when I put this in the class comments for my subclass, like the
following:

/**
 * @ojb.class   table=tbl_dog
 *  include-inherited=true
 *  row-reader=org.foo.ojb.RowReaderFilterDeletedImpl
 *  
 */
public class Dog extends AuditableBusinessObject


Now, I would like to make it so that all classes that extend
AuditableBusinessObject automatically get this row-reader, and I don't need
to set it in the individual sub-classes. I would assume something like this
would work, but it doesn't:


/**
 * @ojb.class   row-reader=org.foo.ojb.RowReaderFilterDeletedImpl
 *
 */
public class AuditableBusinessObject extends BusinessObject
{}

/**
 * @ojb.class   table=tbl_dog
 *  include-inherited=true
 *  
 */
public class Dog extends AuditableBusinessObject
{}

When doing this, it will create an entry in my repository_user.xml for
AuditableBusinessObject with the row-reader set, the default table name
(which doesn't matter, since no table exists for it and we never read ABO from
a table), and the extent information for it. However, it *does not* place the
row-reader attribute in the class-descriptor for my Dog class --
thus making my Dog class load using the default row-reader, and not my
custom one.

Is there a way to do this, or am I asking for too much? Or, is my design
wrong, and I *should* specify the row-reader in each class?
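
For what it's worth, I know the filter could also be pushed into each query
instead of a RowReader -- roughly like the sketch below, assuming the deleted
date is mapped as an attribute named 'deletedDate' and 'broker' is an open
PersistenceBroker -- but I'd rather not repeat that in every single query,
which is why I want the row-reader inherited.

import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.Query;
import org.apache.ojb.broker.query.QueryFactory;

// Per-query soft-delete filter (sketch; 'deletedDate' is an assumed attribute name)
Criteria crit = new Criteria();
crit.addIsNull("deletedDate");
Query query = QueryFactory.newQuery(Dog.class, crit);
java.util.Collection liveDogs = broker.getCollectionByQuery(query);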

Thanks!
-Andrew


RE: Xdoclet: How to use base-class properties with different column names in children?

2003-09-12 Thread Clute, Andrew
Ok, great. That is exactly how I have it. As we discussed before, that is not
working, so I will patiently wait for your fix for this.

I really do appreciate your work and effort in this!

Thanks again!
-Andrew

-Original Message-
From: Thomas Dudziak [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 12, 2003 5:14 AM
To: OJB Users List
Subject: RE: Xdoclet: How to use base-class properties with different column
names in children?


A field declared in the class javadoc comment is non-anonymous only if
there is a tagged field with that name in a base class of that class. So,
you simply tag the field with @ojb.field in your base class (BusinessObject
if I remember correctly) - this does not require that you also tag the class
- and then the @ojb.field in the class javadoc comment of the subclass will
automatically override the base class tag for that subclass.

E.g.

class BusinessObject
{
  /**
   * @ojb.field column=guid
   */
  protected String guid;
}

/**
 * @ojb.class
 * @ojb.field name=guid
 *column=dog_guid
 */
class Dog extends BusinessObject
{
}
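
The class-descriptor that gets generated for Dog should then pick up the
overridden column, roughly like this (class and table names shortened, other
attributes omitted):

<class-descriptor class="Dog" table="tbl_dog">
    <field-descriptor name="guid" column="dog_guid" jdbc-type="VARCHAR"/>
</class-descriptor>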

Tom


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Xdoclet: How to use base-class properties with different column names in children?

2003-09-10 Thread Clute, Andrew
So, let's say I have a base class 'BusinessObject' with a property called
'guid'

So, I have two classes, both extend BusinessObject, named 'dog' and 'cat'.

My database (which already exists) has a field in the tbl_dog table called
dog_guid, and the tbl_cat has cat_guid.

Before using Xdoclet to generate my repository_user.xml, I would define the
mapping for both dog's and cat's guid by hand, giving the appropriate column
name for each table and the name of the field in the base class.

I can't seem to find a way to do the same in Xdoclet-ojb -- it seems I have
to have a property named guid in each of my child classes to attach an
xdoclet tag to it.

I have tried this:

code
/**
 * @ojb.class table=tbl_cat
 *include-inherited=true  
 * @ojb.field name=guid
 *column=cat_guid
 *jdbc-type=VARCHAR
*/
public class Cat extends BusinessObject
/code

But it doesn't create the mapping in the xml file. However, if I put a
property called guid in the dog and cat classes, and then tag that, it works
-- but that stinks because I am killing OO.

Thoughts?


RE: Xdoclet: How to use base-class properties with different column names in children?

2003-09-10 Thread Clute, Andrew
Ok, well I figured out that the version of Xdoclet-ojb that comes compiled
with RC4 does not have this feature in it.

I downloaded the most recent jar from CVS, and now it works, sort of.

If I put my @ojb.field description in the class comment, it will now add
the field-descriptor, but it always sets access=anonymous, even
if I specify access=readwrite for that field.

Ways around this?
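
To make the problem concrete, here is roughly the difference in the generated
field-descriptor (simplified, other attributes left out):

<!-- what currently gets generated -->
<field-descriptor name="guid" column="cat_guid" jdbc-type="VARCHAR" access="anonymous"/>

<!-- what I actually want -->
<field-descriptor name="guid" column="cat_guid" jdbc-type="VARCHAR" access="readwrite"/>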



-Original Message-
From: Clute, Andrew [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 10, 2003 3:45 PM
To: '[EMAIL PROTECTED]'
Subject: Xdoclet: How to use base-class properties with different column
names in children?


So, let's say I have a base class 'BusinessObject' with a property called
'guid'

So, I have two classes, both extend BusinessObject, named 'dog' and 'cat'.

My database (which already exists) has a field in the tbl_dog table called
dog_guid, and the tbl_cat has cat_guid.

Before using Xdoclet to generate my repository_user.xml, I would define the
mapping for both dog's and cat's guid by hand, giving the appropriate column
name for each table and the name of the field in the base class.

I can't seem to find a way to do the same in Xdoclet-ojb -- it seems I have
to have a property named guid in each of my child classes to attach an
xdoclet tag to it.

I have tried this:

code
/**
 * @ojb.class table=tbl_cat
 *include-inherited=true  
 * @ojb.field name=guid
 *column=cat_guid
 *jdbc-type=VARCHAR
*/
public class Cat extends BusinessObject
/code

But it doesn't create the mapping in the xml file. However, if I put a
property called guid in the dog and cat classes, and then tag that, it works
-- but that stinks because I am killing OO.

Thoughts?

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]