Re: Shared classloader and subclasses

2007-03-20 Thread roger.keays


I've been able to patch MetaDataRepository#getPCSubclasses() to work around
this problem (see below), but I'm a bit unsure about whether statics should
be used in PCRegistry or not. The listeners are stored in a static field,
which means they collect information about every application which loads
PCRegistry with the same class loader. Wouldn't the Broker object be a more
appropriate location for a PCRegistry without static fields?

Roger


Begin patch (from 0.9.6 source)

Index: src/main/java/org/apache/openjpa/meta/MetaDataRepository.java
===================================================================
--- src/main/java/org/apache/openjpa/meta/MetaDataRepository.java  (revision 474176)
+++ src/main/java/org/apache/openjpa/meta/MetaDataRepository.java  (working copy)
@@ -1243,7 +1243,16 @@
         Collection subs = (Collection) _subs.get(cls);
         if (subs == null)
             return Collections.EMPTY_LIST;
-        return subs;
+
+        /* only return subclasses we have metadata for */
+        Collection result = new LinkedList();
+        for (Iterator i = subs.iterator(); i.hasNext();) {
+            Class c = (Class) i.next();
+            if (_metas.containsKey(c) && _metas.get(c) != null) {
+                result.add(c);
+            }
+        }
+        return result;
 }



roger.keays wrote:
 
 Hi there,
 
 I'm trying to move my openjpa libs from WEB-INF/lib to Tomcat's
 shared/lib, but it seems I have an edge case which makes this difficult.
 
 The situation is that each instance of the webapp loads between 5 - 10
 subclasses of an abstract Entity. Which classes are loaded is specified by
 that instance's configuration file. This is done with some custom
 bootstrapping code which ignores the normal JPA persistence.xml
 mechanisms. Everything works fine when one instance is loaded, but when
 another is loaded with a different set of subclasses, things get a bit
 hairy.
 
 AFAICT, the problem is that the openjpa.enhance.PCRegistry class uses
 static fields to store Meta information. When the second instance is
 loaded, the PCRegistry has been initialized, but doesn't contain that
 instance's subclasses, so an exception is thrown. This is not a problem
 using the WEB-INF/lib classloader, of course, because each instance has
 its own PCRegistry class.
 
 I'm wondering if anybody might be able to suggest a workaround. I'd be
 happy if there was a way to load all the subclass metadata into the
 PCRegistry, but I still need each instance of the webapp to only be aware
 of its own subclasses.
 
 Cheers,
 
 Roger
 

-- 
View this message in context: 
http://www.nabble.com/Shared-classloader-and-subclasses-tf3431312.html#a9567668
Sent from the open-jpa-dev mailing list archive at Nabble.com.



[jira] Commented: (OPENJPA-175) Eager selects by PagingResultObjectProvider may not use the FetchBatchSize

2007-03-20 Thread Abe White (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482409
 ] 

Abe White commented on OPENJPA-175:
---

+1, with some caveats:

- The proposed patch doesn't handle the common cases where the FetchBatchSize 
is -1 (unlimited) or 0 (driver default); see the sketch after this list.
- I'm a little nervous about defaulting the IN clause limit to unlimited when 
we don't have much info on actual database limits other than Oracle.
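
For context, here is a minimal, self-contained sketch of the distinction the first caveat is about: an eager-select page size derived from the fetch batch size while still treating -1 (unlimited) and 0 (driver default) specially. The method and constant names are illustrative assumptions, not the actual PagingResultObjectProvider code.

public class PageSizeSketch {
    // Illustrative only: pick an eager-select page size from the configured fetch batch size.
    static int choosePageSize(int fetchBatchSize, int resultSize) {
        final int cap = 50;                      // historical cap mentioned in the issue description
        if (fetchBatchSize > 0)                  // explicit batch size: honor it, bounded by the result size
            return Math.min(fetchBatchSize, resultSize);
        // -1 (unlimited) and 0 (driver default): fall back to the old capped heuristic
        return Math.min(cap, resultSize);
    }

    public static void main(String[] args) {
        System.out.println(choosePageSize(500, 1000));  // 500, not 50
        System.out.println(choosePageSize(-1, 1000));   // 50: unlimited falls back to the heuristic
    }
}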

 Eager selects by PagingResultObjectProvider may not use the FetchBatchSize
 --

 Key: OPENJPA-175
 URL: https://issues.apache.org/jira/browse/OPENJPA-175
 Project: OpenJPA
  Issue Type: Bug
Affects Versions: 0.9.0, 0.9.6
Reporter: Srinivasa Segu
 Attachments: OPENJPA-175-patch.txt


 The PagingResultObjectProvider, during initialization, does checks to determine 
 the appropriate pageSize. While this logic caps the size at 50 and addresses 
 determining an appropriate page size, it doesn't always conform to the configured 
 batch size. For example, with the size being 1000 and FetchBatchSize set to, 
 say, 500, the page size is determined to be 50, resulting in eager selects 
 happening in batches of 50 when the user expects them to be in batches of 500. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (OPENJPA-168) sql optimize n rows query hint

2007-03-20 Thread Abe White (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482421
 ] 

Abe White commented on OPENJPA-168:
---

Comments on the proposed patch:

- I don't like the whole scheme of setting the expected result count to -1 for 
anything artificial.  It's confusing and unnecessary.  Just set it to the 
number of expected primary results, and the DBDictionary can invoke 
sel.hasEagerJoin(true) to figure out if the expected count can be used.  Or 
just have the getter for the expected count always return 0 if there is an 
eager to-many join (or better yet, turn -1 into a value meaning unknown and 
have it return -1, which would then also be the default when no expected count 
is set).  

- I still think there should be a way to get rid of Union.is/setSingleResult by 
moving the expected result property to SelectExecutor -- which both Select and 
Union extend -- and taking advantage of the new expected result (1 obviously 
indicates a single result).  

- If you're going to validate the value of the user-supplied hint in the JPA 
QueryImpl, you might as well transform it into a Number at that point before 
setting it into the FetchConfiguration.  Also, I'd accept any Number, not just 
an Integer (technically we should accept any whole number, but that's a pain to 
implement).  Then in the JDBC layer, you can just cast the hint value directly 
to a Number and forgo validating it and checking for String values a second 
time.

- DB2 really cares whether you use "optimize for 1 row" vs. "optimize for 1 
rows"?  That's ugly.

- We should probably generalize the configuration of row optimization to the 
base DBDictionary with an override mechanism.

- If you're going to invoke setUnique(true) on the underlying query from the 
JPA QueryImpl's getSingleResult (see the sketch after this list), you need to 
do three things:
  1. Unset it in a finally clause, because the very next call might be to 
getResultList, and in general getSingleResult shouldn't have stateful side 
effects.
  2. Change the kernel's QueryImpl to throw an exception when unique is set but 
the query doesn't return any results.  Right now it allows 0 results and will 
return null, which is indistinguishable from a projection on a null field that 
returned 1 result.
  3. Get rid of the code immediately following in getSingleResult that extracts 
the value if a List is returned, because after setting the unique flag on the 
underlying query, it will never return a List.  

- The hint key should be a constant in the kernel's Query interface or 
somewhere like that.
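
To illustrate the getSingleResult point above, here is a rough, self-contained sketch of the suggested try/finally pattern. The KernelQuery interface below is a stand-in invented for the example; it is not the actual OpenJPA kernel Query API.

// Hypothetical stand-in for the kernel query; not the real OpenJPA interface.
interface KernelQuery {
    void setUnique(boolean unique);
    Object execute();
}

class QueryImplSketch {
    private final KernelQuery kernelQuery;

    QueryImplSketch(KernelQuery kernelQuery) {
        this.kernelQuery = kernelQuery;
    }

    public Object getSingleResult() {
        kernelQuery.setUnique(true);       // request exactly one result
        try {
            // With unique set, the kernel is expected to throw on 0 or >1 results,
            // so no List-unwrapping is needed here.
            return kernelQuery.execute();
        } finally {
            kernelQuery.setUnique(false);  // undo the stateful change so a later
                                           // getResultList() call is unaffected
        }
    }
}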

 sql optimize n rows query hint
 --

 Key: OPENJPA-168
 URL: https://issues.apache.org/jira/browse/OPENJPA-168
 Project: OpenJPA
  Issue Type: New Feature
Reporter: David Wisneski
 Assigned To: David Wisneski
 Attachments: OPENJPA-168.patch.txt


 There were various comments from Patrick, Abe and Kevin Sutter about the 
 code that I checked in related to the Optimize hint.  So I have gone back and 
 re-examined this and will be making some changes.  At Kevin's suggestion I 
 will do this through a JIRA feature so that folks will have an opportunity to 
 comment on this before the code is actually done and checked in.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Shared classloader and subclasses

2007-03-20 Thread Abe White
AFAICT, the problem is that the openjpa.enhance.PCRegistry class uses static
fields to store Meta information. When the second instance is loaded, the
PCRegistry has been initialized, but doesn't contain that instance's
subclasses and an exception is thrown


The PCRegistry has to use static members because each persistent  
class registers itself with the registry in its static initializer.   
There is no way for a persistent class to access a specific registry  
instance when it is loaded into the JVM.
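
For illustration, here is a deliberately simplified, hypothetical registry that shows why the registration has to go through static state: the static initializer that enhancement adds to each persistent class has no registry instance in scope, so it can only call a static method. This is not the real org.apache.openjpa.enhance.PCRegistry API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified registry for illustration only.
final class SimplePCRegistry {
    private static final Map<Class<?>, String> META =
        new ConcurrentHashMap<Class<?>, String>();

    // Static because the caller (a static initializer) has no registry instance to talk to.
    static void register(Class<?> pcClass, String alias) {
        META.put(pcClass, alias);
    }

    static String aliasFor(Class<?> pcClass) {
        return META.get(pcClass);
    }
}

// What enhancement conceptually adds to each persistent class:
class SomeEntity {
    static {
        SimplePCRegistry.register(SomeEntity.class, "SomeEntity");
    }
}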


I don't think the proposed patch is viable, because there are cases  
where we lazily-load metadata, and we don't want to leave out  
subclasses just because we haven't parsed their metadata yet.  What  
is the exception you're seeing?



[jira] Commented: (OPENJPA-132) java.lang.NoSuchMethodError for entity with ID of type java.sql.Date

2007-03-20 Thread Michael Dick (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482448
 ] 

Michael Dick commented on OPENJPA-132:
--

I'm fine using Abe's patch. The patch I submitted was just focussed on 
java.sql.Date, not the other java.sql classes. A simpler fix which adds more 
function is usually a good thing. 

 java.lang.NoSuchMethodError for entity with ID of type java.sql.Date
 

 Key: OPENJPA-132
 URL: https://issues.apache.org/jira/browse/OPENJPA-132
 Project: OpenJPA
  Issue Type: Bug
  Components: kernel
Reporter: Michael Dick
Priority: Minor
 Fix For: 0.9.7

 Attachments: OpenJPA-132.patch.txt


 Opening JIRA report to track the following problem (posted to development 
 forum 
 http://www.nabble.com/Exception-when-using-java.sql.Date-as-an-id-tf3189597.html)
  
 I'm getting the following exception when I try to fetch an entity with a 
 java.sql.Date as the id :
 java.lang.NoSuchMethodError: 
 org.apache.openjpa.util.DateId.getId()Ljava/sql/Date;
 at mikedd.entities.SqlDatePK.pcCopyKeyFieldsFromObjectId (SqlDatePK.java)
 at mikedd.entities.SqlDatePK.pcNewInstance(SqlDatePK.java)
 at org.apache.openjpa.enhance.PCRegistry.newInstance(PCRegistry.java:118)
 at org.apache.openjpa.kernel.StateManagerImpl.initialize 
 (StateManagerImpl.java:247)
 at 
 org.apache.openjpa.jdbc.kernel.JDBCStoreManager.initializeState(JDBCStoreManager.java:327)
 at 
 org.apache.openjpa.jdbc.kernel.JDBCStoreManager.initialize(JDBCStoreManager.java:252)
 at 
 org.apache.openjpa.kernel.DelegatingStoreManager.initialize(DelegatingStoreManager.java:108)
 at 
 org.apache.openjpa.kernel.ROPStoreManager.initialize(ROPStoreManager.java:54)
 at org.apache.openjpa.kernel.BrokerImpl.initialize (BrokerImpl.java:868)
 at org.apache.openjpa.kernel.BrokerImpl.find(BrokerImpl.java:826)
 at org.apache.openjpa.kernel.BrokerImpl.find(BrokerImpl.java:743)
 at org.apache.openjpa.kernel.DelegatingBroker.find 
 (DelegatingBroker.java:169)
 at 
 org.apache.openjpa.persistence.EntityManagerImpl.find(EntityManagerImpl.java:346)
 at mikedd.tests.TestSqlDateId.testFindAfterClear(TestSqlDateId.java:25)
 at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke (Method.java:585)
 at junit.framework.TestCase.runTest(TestCase.java:154)
 . . .
 It's coming from the generated bytecode, which expects there to be a getId 
 method that returns the same type as the Id; however, java.sql.Date is using 
 the same ID class as java.util.Date. Do we need a separate class for 
 java.sql.Date? 
 Responses from Patrick and Craig follow. The consensus so far is to provide 
 separate ID classes for java.sql.Date and java.util.Date. 
 It looks like we either need a separate type for java.sql.Date (and
 presumably java.sql.Timestamp), or we need to change the logic to accept
 a getId() method that returns a type that is assignable from the id
 field's type.
 -Patrick
 It's probably cleaner if we have separate classes for the different
 types. That is, have the getId method in the new
 org.apache.openjpa.util.SQLDateId return the proper type
 (java.sql.Date). After all, java.sql.{Date, Time, Timestamp} are not
 really the same as java.util.Date.
 -Craig
 FTR, I think that I prefer separate classes as well; it's clearer, and
 avoids any ambiguity with other subclasses in the future.
 -Patrick
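
A minimal sketch of the separate-id-class idea discussed above: a distinct id holder whose getId() is declared to return java.sql.Date, so the generated pcCopyKeyFieldsFromObjectId code finds the getId()Ljava/sql/Date; signature it expects. The field and constructor shapes are assumptions for illustration, not the committed OpenJPA class.

import java.sql.Date;

// Sketch only; the real org.apache.openjpa.util id classes carry more state and behavior.
public class SQLDateIdSketch {
    private final Class<?> type;   // the persistent class this id belongs to
    private final Date key;        // java.sql.Date, not java.util.Date

    public SQLDateIdSketch(Class<?> type, Date key) {
        this.type = type;
        this.key = key;
    }

    // Declared return type is java.sql.Date, matching the signature the enhanced bytecode expects.
    public Date getId() {
        return key;
    }

    public Class<?> getType() {
        return type;
    }
}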

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (OPENJPA-132) java.lang.NoSuchMethodError for entity with ID of type java.sql.Date

2007-03-20 Thread Kevin Sutter (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482462
 ] 

Kevin Sutter commented on OPENJPA-132:
--

Abe,
Can you post your patch so that we can see how the two approaches differ?  
Thanks.

Kevin


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (OPENJPA-132) java.lang.NoSuchMethodError for entity with ID of type java.sql.Date

2007-03-20 Thread Abe White (JIRA)

 [ 
https://issues.apache.org/jira/browse/OPENJPA-132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abe White resolved OPENJPA-132.
---

Resolution: Fixed

Fixed in SVN revision 520522.  We can back out if we decide to use an 
alternative fix strategy in the future.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (OPENJPA-132) java.lang.NoSuchMethodError for entity with ID of type java.sql.Date

2007-03-20 Thread Abe White (JIRA)

[ 
https://issues.apache.org/jira/browse/OPENJPA-132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482484
 ] 

Abe White commented on OPENJPA-132:
---

Sorry Kevin; I didn't see your comment before committing.  As my resolution 
comment states, though, I can back my fix out if we decide we don't like it.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (OPENJPA-175) Eager selects by PagingResultObjectProvider may not use the FetchBatchSize

2007-03-20 Thread Srinivasa Segu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OPENJPA-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasa Segu updated OPENJPA-175:
---

Attachment: OPENJPA-175-patch.txt

Patch with fixes to address the FetchBatchSize values of -1 and 0. When 
FetchBatchSize is 0, or a negative value other than -1, the patch falls back 
to the earlier logic of determining a pageSize capped at 50 based on the size.

 Eager selects by PagingResultObjectProvider may not use the FetchBatchSize
 --

 Key: OPENJPA-175
 URL: https://issues.apache.org/jira/browse/OPENJPA-175
 Project: OpenJPA
  Issue Type: Bug
Affects Versions: 0.9.0, 0.9.6
Reporter: Srinivasa Segu
 Attachments: OPENJPA-175-patch.txt, OPENJPA-175-patch.txt



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Shared classloader and subclasses

2007-03-20 Thread roger.keays



Abe White wrote:
 
 AFAICT, the problem is that the openjpa.enhance.PCRegistry class uses static
 fields to store Meta information. When the second instance is loaded, the
 PCRegistry has been initialized, but doesn't contain that instance's
 subclasses and an exception is thrown
 
 The PCRegistry has to use static members because each persistent  
 class registers itself with the registry in its static initializer.   
 There is no way for a persistent class to access a specific registry  
 instance when it is loaded into the JVM.
 
 I don't think the proposed patch is viable, because there are cases  
 where we lazily-load metadata, and we don't want to leave out  
 subclasses just because we haven't parsed their metadata yet.  What  
 is the exception you're seeing?
 
4|true|0.9.6-incubating org.apache.openjpa.persistence.ArgumentException:
No metadata was found for type class figbird.forums.entities.Topic. The class
does not appear in the list of persistent types: [figbird.lists.entities.Newsletter,
figbird.lists.entities.MailingList, mlynch.entities.Flyer, figbird.cms.entities.File,
figbird.lists.entities.Email, figbird.cms.entities.News, figbird.cms.entities.Blob,
figbird.cms.entities.IFrame, figbird.cms.entities.Redirect, figbird.cms.entities.Article,
figbird.cms.entities.Comment, figbird.cms.entities.Fragment, figbird.cms.entities.Content,
figbird.lists.entities.Delivery, figbird.cms.entities.Privilege, figbird.cms.entities.Page].
    at org.apache.openjpa.meta.MetaDataRepository.getMetaData(MetaDataRepository.java:278)
    at org.apache.openjpa.meta.ClassMetaData.getPCSubclassMetaDatas(ClassMetaData.java:337)
    at org.apache.openjpa.meta.ClassMetaData.getMappedPCSubclassMetaDatas(ClassMetaData.java:351)
    at org.apache.openjpa.jdbc.meta.ClassMapping.getMappedPCSubclassMappings(ClassMapping.java:575)
    at org.apache.openjpa.jdbc.meta.ClassMapping.getIndependentAssignableMappings(ClassMapping.java:614)
    at org.apache.openjpa.jdbc.meta.ValueMappingImpl.getIndependentTypeMappings(ValueMappingImpl.java:345)
    at org.apache.openjpa.jdbc.meta.FieldMapping.getIndependentTypeMappings(FieldMapping.java:964)
    at org.apache.openjpa.jdbc.meta.strats.RelationFieldStrategy.supportsSelect(RelationFieldStrategy.java:351)
    at org.apache.openjpa.jdbc.meta.FieldMapping.supportsSelect(FieldMapping.java:692)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.createEagerSelects(JDBCStoreManager.java:928)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.createEagerSelects(JDBCStoreManager.java:910)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.select(JDBCStoreManager.java:876)
    at org.apache.openjpa.jdbc.sql.SelectImpl.select(SelectImpl.java:762)
    at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.select(LogicalUnion.java:585)
    at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.selectIdentifier
    at org.apache.openjpa.jdbc.kernel.exps.SelectConstructor.select(SelectConstructor.java:263)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreQuery.populateSelect(JDBCStoreQuery.java:265)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreQuery.access$000(JDBCStoreQuery.java:70)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreQuery$1.select(JDBCStoreQuery.java:237)
    at org.apache.openjpa.jdbc.sql.LogicalUnion.select(LogicalUnion.java:280)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreQuery.populateUnion(JDBCStoreQuery.java:235)
    at org.apache.openjpa.jdbc.kernel.JDBCStoreQuery.executeQuery(JDBCStoreQuery.java:183)
    at org.apache.openjpa.kernel.ExpressionStoreQuery$DataStoreExecutor.executeQuery(ExpressionStoreQuery.java:672)
    at org.apache.openjpa.datacache.QueryCacheStoreQuery$QueryCacheExecutor.executeQuery(QueryCacheStoreQuery.java:305)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:977)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:789)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:759)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:755)
    at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:512)
    at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:213)
    at org.apache.openjpa.persistence.QueryImpl.getSingleResult(QueryImpl.java:268)
    at figbird.cms.application.DAO.getRootItem(DAO.java:149)
 
In the case above, another webapp has loaded the forums module, causing this
webapp to look for the mappings in that module even though they aren't
available.

I had difficulty trying to figure out how to restrict which subclasses are
'seen'. Ideally I think it'd be done in the MetaDataRepository#register()
method, but no metadata seems to be available at this time.

Thanks for your help,

Roger

Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marc Prud'hommeaux

Marina-

On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set of  
questions?


1. The doc requires that "In order to enable automatic runtime  
mapping, you must first list all your persistent classes." Is this  
true for the EE case also?


Yes. People usually list them all in the <class> tags in the  
persistence.xml file.



2. Section 1.2, "Generating DDL SQL", talks about .sql files, but  
what I am looking for are .jdbc files, i.e. files with lines  
that can be used directly as java.sql statements to be executed  
against the database.


The output should be sufficient. Try it out and see if the format is  
something you can use.



3. Is there a document that describes all possible values for the  
openjpa.jdbc.SynchronizeMappings property?


Unfortunately, no. Basically, the setting of the  
SynchronizeMappings property will be of the form  
"action(Bean1=value1,Bean2=value2)", where the bean values are those  
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc you  
can see at  
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).
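
For example, here is a sketch of setting that property programmatically in SE-style bootstrap code. The value shown, buildSchema(ForeignKeys=true), is the commonly documented action; the persistence unit name is a placeholder, and the exact bean properties should be checked against the MappingTool javadoc for your version.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class SynchronizeMappingsExample {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<String, Object>();
        // action(Bean1=value1,Bean2=value2) form described above.
        props.put("openjpa.jdbc.SynchronizeMappings", "buildSchema(ForeignKeys=true)");
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("myPersistenceUnit", props);
        emf.createEntityManager().close();  // mapping synchronization runs as the mappings are first used
        emf.close();
    }
}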





thank you,
-marina

Marc Prud'hommeaux wrote:

Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:

Hi,

I am part of the GlassFish persistence team and was wondering  
how  does OpenJPA support JPA auto DDL generation (we call it  
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets of   
files for each PU: a ...dropDDL.jdbc and a ...createDDL.jdbc  
file  on deploy (i.e. before the application  is actually loaded  
into the  container) and then executing 'create' file as the last  
step in  deployment, and 'drop' file on undeploy or the 1st step  
in  redeploy. This allows us to drop tables created by the  
previous  deploy operation.


This approach is done for both, the CMP and the default JPA   
provider. It would be nice to add java2db support for OpenJPA as   
well, and I'm wondering if we need to do anything special, or  
it'll  all work just by itself?
We do have support for runtime creation of the schema via the   
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/  
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also  
described  in the documentation) at runtime against all the  
registered  persistent classes.

Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct  
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?

b

2. How would a user drop the tables in such environment?
I don't think it can be used to automatically drop then create   
tables. The mappingtool can be executed manually twice, the  
first  time to drop all the tables, and the second time to re- 
create them,  but I don't think it can be automatically done at  
runtime with the  SynchronizeMappings property.
3. If the answer to either 1a or 1b is yes, how does the code   
distinguish between the server startup time and the application   
being loaded for the 1st time?
That is one of the reasons why we think it would be inadvisable  
to  automatically drop tables at runtime :)
4. Is there a mode that allows creating a file with the jdbc   
statements to create or drop the tables and constraints?

Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/  
manual.html#ref_guide_ddl_examples

thank you,
-marina







Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina

Marc,

Marc Prud'hommeaux wrote:

Marina-

On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set of  
questions?


1. The doc requires that In order to enable automatic runtime  
mapping, you must first list all your persistent classes. Is this  
true for EE case also?



Yes. People usually list them all in the class tags in the  
persistence.xml file.


They do in SE, but as there is no requirement to do it in EE, people try to 
reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do the same for 
the schema generation?


I'll check the rest.

thanks,
-marina



2. Section 1.2.Generating DDL SQL talks about .sql files, but  what 
I am looking for are jdbc files, i.e. files with the lines  that can 
be used directly as java.sql statements to be executed  against database.



The output should be sufficient. Try it out and see if the format is  
something you can use.



3. Is there a document that describes all possible values for the  
openjpa.jdbc.SynchronizeMappings property?



Unfortunately, no. Basically, the setting of the  SynchronizeMappings 
property will be of the form action (Bean1=value1,Bean2=value2), where 
the bean values are those  listed in 
org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc you  can see 
http://incubator.apache.org/openjpa/docs/latest/javadoc/org/ 
apache/openjpa/jdbc/meta/MappingTool.html ).





thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering  how  
does OpenJPA support JPA auto DDL generation (we call it  
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets of   
files for each PU: a ...dropDDL.jdbc and a ...createDDL.jdbc  file  
on deploy (i.e. before the application  is actually loaded  into 
the  container) and then executing 'create' file as the last  step 
in  deployment, and 'drop' file on undeploy or the 1st step  in  
redeploy. This allows us to drop tables created by the  previous  
deploy operation.


This approach is done for both, the CMP and the default JPA   
provider. It would be nice to add java2db support for OpenJPA as   
well, and I'm wondering if we need to do anything special, or  
it'll  all work just by itself?


We do have support for runtime creation of the schema via the   
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/  
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also  
described  in the documentation) at runtime against all the  
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct  
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?


b


2. How would a user drop the tables in such environment?


I don't think it can be used to automatically drop then create   
tables. The mappingtool can be executed manually twice, the  first  
time to drop all the tables, and the second time to re- create them,  
but I don't think it can be automatically done at  runtime with the  
SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code   
distinguish between the server startup time and the application   
being loaded for the 1st time?


That is one of the reasons why we think it would be inadvisable  to  
automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc   
statements to create or drop the tables and constraints?


Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/  
manual.html#ref_guide_ddl_examples



thank you,
-marina









Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marc Prud'hommeaux

Marina-

They do in SE, but as there is no requirement to do it in EE,  
people try to reduce the amount of typing ;).


Hmm ... we might not actually require it in EE, since we do examine  
the ejb jar to look for persistent classes. I'm not sure though.


You should test with both listing them and not listing them. I'd be  
interested to know if it works without.




On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:

Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:

Marc,

Thanks for the pointers. Can you please answer the following set  
of  questions?


1. The doc requires that In order to enable automatic runtime   
mapping, you must first list all your persistent classes. Is  
this  true for EE case also?
Yes. People usually list them all in the class tags in the   
persistence.xml file.


They do in SE, but as there is no requirement to do it in EE,  
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do  
the same for the schema generation?


I'll check the rest.

thanks,
-marina
2. Section 1.2.Generating DDL SQL talks about .sql files, but   
what I am looking for are jdbc files, i.e. files with the  
lines  that can be used directly as java.sql statements to be  
executed  against database.
The output should be sufficient. Try it out and see if the format  
is  something you can use.
3. Is there a document that describes all possible values for  
the  openjpa.jdbc.SynchronizeMappings property?
Unfortunately, no. Basically, the setting of the   
SynchronizeMappings property will be of the form action  
(Bean1=value1,Bean2=value2), where the bean values are those   
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc  
you  can see http://incubator.apache.org/openjpa/docs/latest/ 
javadoc/org/ apache/openjpa/jdbc/meta/MappingTool.html ).

thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering   
how  does OpenJPA support JPA auto DDL generation (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets  
of   files for each PU: a ...dropDDL.jdbc and  
a ...createDDL.jdbc  file  on deploy (i.e. before the  
application  is actually loaded  into the  container) and then  
executing 'create' file as the last  step in  deployment, and  
'drop' file on undeploy or the 1st step  in  redeploy. This  
allows us to drop tables created by the  previous  deploy  
operation.


This approach is done for both, the CMP and the default JPA
provider. It would be nice to add java2db support for OpenJPA  
as   well, and I'm wondering if we need to do anything special,  
or  it'll  all work just by itself?


We do have support for runtime creation of the schema via the
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also   
described  in the documentation) at runtime against all the   
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct   
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?


b


2. How would a user drop the tables in such environment?


I don't think it can be used to automatically drop then create
tables. The mappingtool can be executed manually twice, the   
first  time to drop all the tables, and the second time to re-  
create them,  but I don't think it can be automatically done at   
runtime with the  SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code
distinguish between the server startup time and the  
application   being loaded for the 1st time?


That is one of the reasons why we think it would be inadvisable   
to  automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc
statements to create or drop the tables and constraints?


Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_ddl_examples



thank you,
-marina









RE: Using DDL generation in a Java EE environment?

2007-03-20 Thread Pinaki Poddar
  They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

In EE, persistent classes can be specified:
a) explicitly, via <class> elements
b) via one or more <jar-file> elements
c) via one or more <mapping-file> elements
d) by leaving everything unspecified, in which case OpenJPA will scan for
@Entity-annotated classes in the deployed unit 


Pinaki Poddar
BEA Systems
415.402.7317  


-Original Message-
From: Marc Prud'hommeaux [mailto:[EMAIL PROTECTED] On Behalf Of
Marc Prud'hommeaux
Sent: Tuesday, March 20, 2007 6:22 PM
To: open-jpa-dev@incubator.apache.org
Subject: Re: Using DDL generation in a Java EE environment?

Marina-

 They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

Hmm ... we might not actually require it in EE, since we do examine the
ejb jar to look for persistent classes. I'm not sure though.

You should test with both listing them and not listing them. I'd be
interested to know if it works without.



On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:

 Marc,

 Marc Prud'hommeaux wrote:
 Marina-
 On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:
 Marc,

 Thanks for the pointers. Can you please answer the following set of

 questions?

 1. The doc requires that In order to enable automatic runtime   
 mapping, you must first list all your persistent classes. Is this  
 true for EE case also?
 Yes. People usually list them all in the class tags in the   
 persistence.xml file.

 They do in SE, but as there is no requirement to do it in EE, people 
 try to reduce the amount of typing ;).

 If OpenJPA can identify all entities in EE world, why can't it do the 
 same for the schema generation?

 I'll check the rest.

 thanks,
 -marina
 2. Section 1.2.Generating DDL SQL talks about .sql files, but   
 what I am looking for are jdbc files, i.e. files with the lines  
 that can be used directly as java.sql statements to be executed  
 against database.
 The output should be sufficient. Try it out and see if the format is

 something you can use.
 3. Is there a document that describes all possible values for the  
 openjpa.jdbc.SynchronizeMappings property?
 Unfortunately, no. Basically, the setting of the   
 SynchronizeMappings property will be of the form action  
 (Bean1=value1,Bean2=value2), where the bean values are those   
 listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc you

 can see http://incubator.apache.org/openjpa/docs/latest/
 javadoc/org/ apache/openjpa/jdbc/meta/MappingTool.html ).
 thank you,
 -marina

 Marc Prud'hommeaux wrote:

 Marina-
 On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:

 Hi,

 I am part of the GlassFish persistence team and was wondering   
 how  does OpenJPA support JPA auto DDL generation (we call it   
 java2db)  in a Java EE application server.

 Our application server supports java2db via creating two sets  
 of   files for each PU: a ...dropDDL.jdbc and  
 a ...createDDL.jdbc  file  on deploy (i.e. before the application

 is actually loaded  into the  container) and then executing 
 'create' file as the last  step in  deployment, and 'drop' file on

 undeploy or the 1st step  in  redeploy. This allows us to drop 
 tables created by the  previous  deploy operation.

 This approach is done for both, the CMP and the default JPA
 provider. It would be nice to add java2db support for OpenJPA  
 as   well, and I'm wondering if we need to do anything special,  
 or  it'll  all work just by itself?

 We do have support for runtime creation of the schema via the
 openjpa.jdbc.SynchronizeMappings property. It is described at:
   http://incubator.apache.org/openjpa/docs/latest/manual/   
 manual.html#ref_guide_mapping_synch
 The property can be configured to run the mappingtool (also   
 described  in the documentation) at runtime against all the   
 registered  persistent classes.

 Here are my 1st set of questions:

 1. Which API would trigger the process, assuming the correct   
 values  are specified in the persistence.xml file? Is it:
 a) provider.createContainerEntityManagerFactory(...)? or
 b) the 1st call to emf.createEntityManager() in this VM?
 c) something else?

 b

 2. How would a user drop the tables in such environment?

 I don't think it can be used to automatically drop then create
 tables. The mappingtool can be executed manually twice, the   
 first  time to drop all the tables, and the second time to re-  
 create them,  but I don't think it can be automatically done at   
 runtime with the  SynchronizeMappings property.

 3. If the answer to either 1a or 1b is yes, how does the code
 distinguish between the server startup time and the  
 application   being loaded for the 1st time?

 That is one of the reasons why we think it would be inadvisable   
 to  automatically drop tables at runtime :)

 4. Is there a mode that allows creating a file with the jdbc
 statements to create or drop the tables and constraints?

 Yes. See:
   

Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina

Marc,

Marc Prud'hommeaux wrote:

Marina-

They do in SE, but as there is no requirement to do it in EE,  people 
try to reduce the amount of typing ;).



Hmm ... we might not actually require it in EE, since we do examine  the 
ejb jar to look for persistent classes. I'm not sure though.


You should test with both listing them and not listing them. I'd be  
interested to know if it works without.


Let me give it a try. What would the persistence.xml property look like to 
generate a .sql file? Where will it be placed in an EE environment?  Does it 
use the name as-is or prepend it with some path?


thanks.





On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following set  
of  questions?


1. The doc requires that In order to enable automatic runtime   
mapping, you must first list all your persistent classes. Is  this  
true for EE case also?


Yes. People usually list them all in the class tags in the   
persistence.xml file.



They do in SE, but as there is no requirement to do it in EE,  people 
try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it do  the 
same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2.Generating DDL SQL talks about .sql files, but   
what I am looking for are jdbc files, i.e. files with the  lines  
that can be used directly as java.sql statements to be  executed  
against database.


The output should be sufficient. Try it out and see if the format  
is  something you can use.


3. Is there a document that describes all possible values for  the  
openjpa.jdbc.SynchronizeMappings property?


Unfortunately, no. Basically, the setting of the   
SynchronizeMappings property will be of the form action  
(Bean1=value1,Bean2=value2), where the bean values are those   
listed in org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc  
you  can see http://incubator.apache.org/openjpa/docs/latest/ 
javadoc/org/ apache/openjpa/jdbc/meta/MappingTool.html ).



thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was wondering   
how  does OpenJPA support JPA auto DDL generation (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two sets  
of   files for each PU: a ...dropDDL.jdbc and  a 
...createDDL.jdbc  file  on deploy (i.e. before the  application  
is actually loaded  into the  container) and then  executing 
'create' file as the last  step in  deployment, and  'drop' file 
on undeploy or the 1st step  in  redeploy. This  allows us to drop 
tables created by the  previous  deploy  operation.


This approach is done for both, the CMP and the default JPA
provider. It would be nice to add java2db support for OpenJPA  
as   well, and I'm wondering if we need to do anything special,  
or  it'll  all work just by itself?



We do have support for runtime creation of the schema via the
openjpa.jdbc.SynchronizeMappings property. It is described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also   
described  in the documentation) at runtime against all the   
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the correct   
values  are specified in the persistence.xml file? Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?



b


2. How would a user drop the tables in such environment?



I don't think it can be used to automatically drop then create
tables. The mappingtool can be executed manually twice, the   
first  time to drop all the tables, and the second time to re-  
create them,  but I don't think it can be automatically done at   
runtime with the  SynchronizeMappings property.


3. If the answer to either 1a or 1b is yes, how does the code
distinguish between the server startup time and the  application   
being loaded for the 1st time?



That is one of the reasons why we think it would be inadvisable   
to  automatically drop tables at runtime :)


4. Is there a mode that allows creating a file with the jdbc
statements to create or drop the tables and constraints?



Yes. See:
  http://incubator.apache.org/openjpa/docs/latest/manual/   
manual.html#ref_guide_ddl_examples



thank you,
-marina











[jira] Created: (OPENJPA-176) Exception prefixes should be human-readable

2007-03-20 Thread Marc Prud'hommeaux (JIRA)
Exception prefixes should be human-readable
---

 Key: OPENJPA-176
 URL: https://issues.apache.org/jira/browse/OPENJPA-176
 Project: OpenJPA
  Issue Type: Improvement
  Components: diagnostics
Affects Versions: 0.9.6, 0.9.0
Reporter: Marc Prud'hommeaux
Priority: Trivial


OpenJPA prefixes all exception messages with a string of the form "exception 
type|is fatal|version", resulting in strings like "4|false|0.9.6-incubating 
org.apache.openjpa.persistence.PersistenceException". This isn't very useful to 
the casual observer, since no translation of the meaning of the fields is done.

It would be nice if we translated the fatal and type parameters, so that the 
string looked like "user-error|recoverable|0.9.6-incubating".
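
A small, self-contained sketch of the suggested translation; the numeric-code-to-label mapping below is a placeholder for illustration, not the real OpenJPA exception-type constants.

public class ExceptionPrefixSketch {
    // Turn a raw prefix such as 4|false|0.9.6-incubating into a readable one.
    static String readablePrefix(int type, boolean fatal, String version) {
        String typeLabel;
        switch (type) {
            case 4:  typeLabel = "user-error"; break;     // placeholder code-to-label mapping
            default: typeLabel = "general-error"; break;
        }
        String fatalLabel = fatal ? "fatal" : "recoverable";
        return typeLabel + "|" + fatalLabel + "|" + version;
    }

    public static void main(String[] args) {
        // Prints: user-error|recoverable|0.9.6-incubating
        System.out.println(readablePrefix(4, false, "0.9.6-incubating"));
    }
}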


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Using DDL generation in a Java EE environment?

2007-03-20 Thread Marina Vatkina
Then I'll first start with an easier task - check what happens in EE if entities 
are not explicitly listed in the persistence.xml file :).


thanks,
-marina

Marc Prud'hommeaux wrote:

Marina-

Let me give it a try. How would the persistence.xml property look  
like to generate .sql file?



Actually, I just took a look at this, and it looks like it isn't  
possible to use the SynchronizeMappings property to automatically  
output a SQL file. The reason is that the property takes a standard  
OpenJPA plugin string that configures an instance of MappingTool, but 
the MappingTool class doesn't have a setter for the SQL file to write 
out to.


So I think your only recourse would be to write your own adapter to do 
this that manually creates a MappingTool instance and runs it with the 
correct flags for outputting a SQL file. Take a look at the javadocs 
for the MappingTool to get started, and let us know if you have any 
questions about proceeding.
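
A very rough sketch of the kind of adapter described above, driving the tool through its command-line entry point (assuming MappingTool exposes the usual main method). The flag names and argument forms below (-properties, -action, -sqlFile), the persistence unit name, and the entity class are all assumptions to be verified against the MappingTool javadoc, not confirmed API.

public class GenerateDdlSketch {
    public static void main(String[] args) throws Exception {
        String[] toolArgs = {
            "-properties", "META-INF/persistence.xml#myPersistenceUnit",  // hypothetical unit name
            "-action", "buildSchema",
            "-sqlFile", "create.sql",                                     // write DDL instead of executing it
            "com.example.SomeEntity"                                      // hypothetical entity class
        };
        // Delegates to the tool's command-line entry point.
        org.apache.openjpa.jdbc.meta.MappingTool.main(toolArgs);
    }
}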




On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-

They do in SE, but as there is no requirement to do it in EE,   
people try to reduce the amount of typing ;).


Hmm ... we might not actually require it in EE, since we do  examine  
the ejb jar to look for persistent classes. I'm not sure  though.
You should test with both listing them and not listing them. I'd  be  
interested to know if it works without.



Let me give it a try. How would the persistence.xml property look  
like to generate .sql file? Where will it be placed in EE  
environment?  Does it use use the name as-is or prepend it with  some 
path?


thanks.


On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:


Marc,

Marc Prud'hommeaux wrote:


Marina-
On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:


Marc,

Thanks for the pointers. Can you please answer the following  set  
of  questions?


1. The doc requires that In order to enable automatic  runtime   
mapping, you must first list all your persistent  classes. Is  
this  true for EE case also?



Yes. People usually list them all in the class tags in the
persistence.xml file.




They do in SE, but as there is no requirement to do it in EE,   
people try to reduce the amount of typing ;).


If OpenJPA can identify all entities in EE world, why can't it  do  
the same for the schema generation?


I'll check the rest.

thanks,
-marina

2. Section 1.2.Generating DDL SQL talks about .sql files,  but   
what I am looking for are jdbc files, i.e. files with  the  
lines  that can be used directly as java.sql statements to  be  
executed  against database.



The output should be sufficient. Try it out and see if the  format  
is  something you can use.


3. Is there a document that describes all possible values for   
the  openjpa.jdbc.SynchronizeMappings property?



Unfortunately, no. Basically, the setting of the
SynchronizeMappings property will be of the form action   
(Bean1=value1,Bean2=value2), where the bean values are  those   
listed in org.apache.openjpa.jdbc.meta.MappingTool  (whose javadoc  
you  can see http://incubator.apache.org/openjpa/ docs/latest/ 
javadoc/org/ apache/openjpa/jdbc/meta/ MappingTool.html ).



thank you,
-marina

Marc Prud'hommeaux wrote:


Marina-
On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:


Hi,

I am part of the GlassFish persistence team and was  wondering   
how  does OpenJPA support JPA auto DDL generation  (we call it   
java2db)  in a Java EE application server.


Our application server supports java2db via creating two  sets  
of   files for each PU: a ...dropDDL.jdbc and   a 
...createDDL.jdbc  file  on deploy (i.e. before the   
application  is actually loaded  into the  container) and  then  
executing 'create' file as the last  step in   deployment, and  
'drop' file on undeploy or the 1st step  in   redeploy. This  
allows us to drop tables created by the   previous  deploy  
operation.


This approach is done for both, the CMP and the default  JPA
provider. It would be nice to add java2db support for  OpenJPA  
as   well, and I'm wondering if we need to do  anything 
special,  or  it'll  all work just by itself?




We do have support for runtime creation of the schema via  the
openjpa.jdbc.SynchronizeMappings property. It is  described at:
  http://incubator.apache.org/openjpa/docs/latest/manual/
manual.html#ref_guide_mapping_synch
The property can be configured to run the mappingtool (also
described  in the documentation) at runtime against all the
registered  persistent classes.



Here are my 1st set of questions:

1. Which API would trigger the process, assuming the  correct   
values  are specified in the persistence.xml file?  Is it:

a) provider.createContainerEntityManagerFactory(...)? or
b) the 1st call to emf.createEntityManager() in this VM?
c) something else?




b


2. How would a user drop the tables in such environment?




I don't think it can be used to automatically drop then  
create