PersistentCollection bug. OpenJPA version to use.
Hi, I'm facing a ClassCastException in the context of a PersistentCollection. It sounds exactly like the behavior described here: http://issues.apache.org/jira/browse/OPENJPA-1020 The bug looks fixed, but no OpenJPA fix version is listed. Any news? I'm using a 1.2.2 snapshot from September. I noticed the latest snapshot (according to the file dates) is from December: http://people.apache.org/repo/m2-snapshot-repository/org/apache/openjpa/apache-openjpa/1.2.2-SNAPSHOT/ Is there any benefit for us in upgrading from the September snapshot to the current one? If so, what is the difference between a snapshot and a nightly build? I tested the 1.2.2 snapshot from the 17th of December: nothing changed, the bug is still there. We are in a delivery phase for a customer, so I don't think I should move to 2.0, since there are too many changes. Any ideas or advice? Thanks.
Re: problem with NChar in Oracle 10g
Hi Mohammad, I don't think we can set a system property via persistence.xml, and even if we could, it would affect your entire app server anyway: system properties apply to the whole JVM even when set at runtime. The Oracle doc you mentioned [1] suggests that you can also use oracle.jdbc.defaultNChar as a connection property. JDBC drivers usually allow you to set connection properties by appending them to the connection string. I am not sure whether the Oracle driver allows this, but if it does, you could modify the connection string in the data source definition and you should be done. Also, some app servers can call connProps.setProperty(..., ...) when a connection is taken from the pool. I suspect that your problem is not specific to NCHAR but applies to CHAR columns in general. It might be that reading a CHAR column through JDBC returns its value right-padded with spaces to the length of the column. Could you try your test case with CHAR instead of NCHAR and confirm whether the behaviour is similar? As for the strange characters instead of spaces, I would try another version of the JDBC driver. For example, if your driver's major version is higher than your database's major version, I would test the driver that matches the major version of the database. Cheers, Milosz [1] http://www.oracle.com/technology/sample_code/tech/java/codesnippet/jdbc/nchar/readme.html Hello When a field of type NCHAR is read by OpenJPA, the length of the value read is constant and equal to the length of the field; if the value is shorter than the field, the remainder is filled with spaces or strange characters, and we can't even trim the string to remove them. The Oracle readme about NCHAR says to use setFormOfUse, and the OpenJPA manual says that OpenJPA uses this method automatically when it finds a field of type NCHAR or NVARCHAR, but we still have the problem.
And for your information, we didn't insert any international characters into the field. It is also said that we can use the system property -Doracle.jdbc.defaultNChar=true, but I am reluctant to set this value in the app server. I am eager to know whether we can set a system property in persistence.xml. I would appreciate it if you shared your experience with me. Thanks -- Regards Mohammad http://pixelshot.wordpress.com Pixelshot -- View this message in context: http://n2.nabble.com/problem-with-NChar-in-Oracle-10g-tp4231056p4231056.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
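A minimal sketch of the connection-property route suggested above, assuming the Oracle thin driver accepts oracle.jdbc.defaultNChar among the properties passed to getConnection (the credentials and URL below are placeholders, not from the thread):

```java
import java.util.Properties;

public class NCharProps {
    // Builds the connection properties that would be handed to, e.g.,
    // DriverManager.getConnection("jdbc:oracle:thin:@//host:1521/SID", props)
    // instead of setting -Doracle.jdbc.defaultNChar=true JVM-wide.
    static Properties connectionProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // Asks the driver to bind String parameters as NCHAR/NVARCHAR2
        props.setProperty("oracle.jdbc.defaultNChar", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = connectionProps("scott", "tiger"); // placeholder credentials
        System.out.println(props.getProperty("oracle.jdbc.defaultNChar"));
    }
}
```

The same key/value pairs can usually also be configured on the data source definition in the app server, which keeps the setting scoped to one connection pool rather than the whole JVM.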
Re: Best way to prime OpenJPA before first request?
I'm building a REST-based app using CXF and OpenJPA 1.2.1. The app works fine, but I've noticed that the first request after startup takes quite a while, and most of the time is spent on the first JPA request. Following requests, even for different objects and classes, go much faster. What are my options for priming JPA at startup so that the first request doesn't take so long? 1. If you are using servlets, configure web.xml to load your servlet at start-up, or add a *Listener. 2. Your application server might provide a scheduler API - try to configure it so that it issues a request to your app after the app is loaded. 3. The openjpa.InitializeEagerly property, but I'm afraid it requires a newer OpenJPA than 1.2.1. See also this post: http://n2.nabble.com/EntityManager-used-in-multiple-threads-td3662432.html#a3687617 Greetings, Milosz
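Option 1 can be sketched in web.xml as below. The servlet name is an example; CXF's servlet class is shown since the app uses CXF, but any servlet whose init path touches JPA will do:

```xml
<servlet>
  <servlet-name>cxf</servlet-name>
  <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
  <!-- load at startup so initialization cost is paid before the first request -->
  <load-on-startup>1</load-on-startup>
</servlet>
```

Note that load-on-startup only initializes the servlet; to warm OpenJPA itself you would still want the startup path (for example a ServletContextListener) to create an EntityManager and run a cheap query so that metadata parsing and enhancement checks happen eagerly.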
RE: XML overrides annotations, except when it doesn't
-Original Message- From: KARR, DAVID (ATTCINW) Sent: Tuesday, December 29, 2009 9:05 AM To: users@openjpa.apache.org Subject: XML overrides annotations, except when it doesn't Both the JPA spec and the OpenJPA doc imply that if I have annotations on a field and a definition for the same field in my orm.xml file, the annotations for that field will be ignored and only the XML definition will be used. Is that correct? I believe I've found a situation where that isn't exactly true, and I'd like to understand whether this is a bug or a misunderstanding. I'd really like to get some clarification on this point, since I've discovered a feature related to this that I like, and I'd like to know whether I can depend on it. As I said, the spec implies that XML overriding is on a per-field basis, so that anything specified for a field in the XML should override everything in the annotations for that field. OpenJPA appears to be using a more fine-grained approach instead of what the spec says. For instance, if I have an enum-typed field where the XML specifies the column name but the annotations do not, and the annotations specify the OpenJPA-specific Strategy annotation, the annotations are essentially merged with the XML, so that everything works. Similarly, if I have a OneToMany and I want to use the OrderColumn annotation (I'm not using the 2.0 implementation), I appear to be able to use OrderColumn even though the XML for the field specifies the column. I like this fine-grained approach, as it's really more logical and functional. However, if I'm understanding the spec correctly, it isn't portable to another JPA implementation. I hope I'm wrong. I prefer to define logical annotations and physical XML. This effectively means that many annotations would be ignored, but I like seeing the logical relationships defined in the entity class. I have numerous OneToMany annotations defined in my entities, with the corresponding physical XML.
What I accidentally discovered is that I had some entities where I had added (fetch = FetchType.EAGER) to the annotation but never added the corresponding setting to the XML. I found those relationships were being eagerly fetched. When I removed the fetch setting from the annotation, the relationship became lazily fetched. So, although the docs say the overriding is at the field level, not at the level of individual attributes of a field, it appears that the settings in the annotation and the XML are being merged in some way. Is the fetch setting an exception to the rule as I understand it? Are there other exceptions? Is this a bug, according to the spec?
RE: XML overrides annotations, except when it doesn't
-Original Message- From: KARR, DAVID (ATTCINW) Sent: Wednesday, December 30, 2009 10:58 AM To: users@openjpa.apache.org Subject: RE: XML overrides annotations, except when it doesn't -Original Message- From: KARR, DAVID (ATTCINW) Sent: Tuesday, December 29, 2009 9:05 AM To: users@openjpa.apache.org Subject: XML overrides annotations, except when it doesn't Both the JPA spec and the OpenJPA doc implies that if I have annotations on a field and a definition for the field in my orm.xml file, the annotations for that field will be ignored and only the XML definition will be used. Is that correct? I believe I've found a situation where that isn't exactly true, and I'd like to understand whether this is a bug or a misunderstanding. I'd really like to get some clarification on this point. I've discovered a feature related to this that I like, and I'd like to know whether I can depend on it. As I said, the spec implies that the XML overriding is on a field basis, so that if you have anything specified for a field in the XML, it should override anything in the annotations for that field. OpenJPA appears to be using a more fine-grained approach, instead of what the spec says. For instance, if I have an enum type where the XML specifies the column name, but the annotations for the field do not, and the annotations specify the OpenJPA-specific Strategy annotation, this essentially merges the annotations with the XML, so that everything works. Similarly, if I have a OneToMany and I want to use the OrderColumn annotation (I'm not using the 2.0 implementation), I appear to be able to use the OrderColumn annotation even if I have the XML for the field specifying the column. Sigh. I spoke too soon. The OrderColumn annotation is merged, but not the Strategy annotation. I have two fields, one using OrderColumn and the other using Strategy. Both have XML definitions for the fields. I'm finding that the OrderColumn setting is being observed, but not the Strategy setting. 
If I then comment out the XML definition for the fields that use Strategy, but not the ones that use OrderColumn, my Strategy works, and so does my list ordering. I guess I can now conclude there is a bug here, as I've determined the treatment is inconsistent, but I don't know which one is wrong. I like this fine-grained approach, as it's really more logical and functional. However, if I'm understanding the spec, this isn't portable to another JPA implementation. I hope I'm wrong. I prefer to define logical annotations and physical XML. This effectively means that many annotations would be ignored, but I like seeing the logical relationships defined in the entity class. I have numerous OneToMany annotations defined in my entities, with the corresponding physical XML. What I accidentally discovered is that I had some entities where I had added (fetch = FetchType.EAGER) to the annotation, but I never added the corresponding setting to the XML. I found those relationships were being eagerly fetched. When I tried removing the fetch setting in the annotation, the relationship became lazily fetched. So, although the docs say that the overriding is at the field level, and not the piece of a field level, it appears that the settings in the annotation and the XML have been merged in some way. Is the fetch setting an exception to the rule as I understand it? Are there other exceptions to this? Is this a bug, according to the spec?
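A minimal pair illustrating the merge being described (entity and field names hypothetical): the field carries @OneToMany(mappedBy = "parent", fetch = FetchType.EAGER) in the entity class, while orm.xml redefines the same field with no fetch attribute at all:

```xml
<!-- orm.xml: same field, no fetch attribute -->
<one-to-many name="children" mapped-by="parent"/>
```

Under a strict per-field override, the XML entry should win outright and fetch should fall back to its LAZY default; if the relationship is still eagerly fetched with this XML in place, the provider is merging annotation and XML attribute by attribute rather than replacing the whole field definition.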
error from openjpa - BLOB/CLOB's transaction may be committed
I am using OpenJPA 1.2.0 and Derby. I am getting this error message once in a while. Does anybody know why? Is there something wrong with the Derby table, or something else?

org.apache.derby.client.am.SqlException: The data in this BLOB or CLOB is no longer available. The BLOB/CLOB's transaction may be committed, or its connection is closed.
    at org.apache.derby.client.am.ResultSet.completeSqlca(Unknown Source)
    at org.apache.derby.client.net.NetResultSetReply.parseFetchError(Unknown Source)
    at org.apache.derby.client.net.NetResultSetReply.parseCNTQRYreply(Unknown Source)
    at org.apache.derby.client.net.NetResultSetReply.readFetch(Unknown Source)
    at org.apache.derby.client.net.ResultSetReply.readFetch(Unknown Source)
    at org.apache.derby.client.net.NetResultSet.readFetch_(Unknown Source)
    at org.apache.derby.client.am.ResultSet.flowFetch(Unknown Source)
    at org.apache.derby.client.net.NetCursor.getMoreData_(Unknown Source)
    at org.apache.derby.client.am.Cursor.stepNext(Unknown Source)
    at org.apache.derby.client.am.Cursor.next(Unknown Source)
    at org.apache.derby.client.am.ResultSet.nextX(Unknown Source)
    at org.apache.derby.client.am.ResultSet.next(Unknown Source)
    at org.tranql.connector.jdbc.ResultSetHandle.next(ResultSetHandle.java:791)
    at org.apache.openjpa.lib.jdbc.DelegatingResultSet.next(DelegatingResultSet.java:106)
    at org.apache.openjpa.jdbc.sql.ResultSetResult.nextInternal(ResultSetResult.java:222)
    at org.apache.openjpa.jdbc.sql.AbstractResult.next(AbstractResult.java:173)
    at org.apache.openjpa.jdbc.kernel.GenericResultObjectProvider.next(GenericResultObjectProvider.java:99)
    at org.apache.openjpa.lib.rop.EagerResultList.init(EagerResultList.java:35)
    at org.apache.openjpa.kernel.QueryImpl.toResult(QueryImpl.java:1228)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:990)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:805)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:775)
    at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:771)
    at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:517)
    at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:254)
    at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:293)

-- View this message in context: http://n2.nabble.com/error-from-openjpa-BLOB-CLOB-s-transaction-may-be-committed-tp4233854p4233854.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
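This Derby message typically means a LOB locator was used after its transaction ended, since Derby only keeps BLOB/CLOB data available while the transaction and connection that produced it are open. A hedged sketch of the access pattern that avoids it (entity and field names are hypothetical, not from the thread):

```java
// Materialize LOB values while the transaction is still active.
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
List docs = em.createQuery("SELECT d FROM Document d").getResultList();
for (Object o : docs) {
    Document d = (Document) o;
    byte[] bytes = d.getContent(); // touch the BLOB field before commit
}
em.getTransaction().commit();      // after this point Derby may invalidate the locator
em.close();
```

If the query runs outside a transaction, or the result list is iterated lazily after commit, the fetch can reach the driver once the underlying connection has been returned to the pool, which matches the stack trace above.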
Re: How to implement a map where the key is in the join table, not in the target table?
For JPA 2.0 (OpenJPA trunk), the map key does not need to be a field in the target entity. For example, please see the test cases in org.apache.openjpa.persistence.jdbc.maps.*. - Original Message From: KARR, DAVID (ATTCINW) dk0...@att.com To: users@openjpa.apache.org Sent: Tue, December 29, 2009 9:36:53 AM Subject: How to implement a map where the key is in the join table, not in the target table? The information I've read about the map construct is that the key used in the map is taken from a field in the target entity. I have a situation where I need to define a map where the key is a column in a join table, not the target table. Here's an example of the structure I have:

Table FOO:
  VARCHAR FOO_ID
  INT TYPE

Table FOO_BILLING_INFO:
  VARCHAR FOO_ID
  VARCHAR BILLING_SYSTEM
  VARCHAR BILLING_INFO_ID

Table BILLING_INFO:
  VARCHAR BILLING_INFO_ID
  VARCHAR BILL_CODE
  VARCHAR SYSTEM_NAME

There is a OneToMany relationship from FOO to BILLING_INFO. The key for the map is intended to be the BILLING_SYSTEM value in the join table. Also note that the values of BILLING_SYSTEM are not the same as or related to the values of SYSTEM_NAME in BILLING_INFO (that was my first guess). For each unique value of FOO_ID in FOO, there will be two FOO_BILLING_INFO rows with different BILLING_SYSTEM values, and each of those two rows will point to a BILLING_INFO row - sometimes the same one for both, but sometimes not.
RE: How to implement a map where the key is in the join table, not in the target table?
-Original Message- From: Fay Wang [mailto:fyw...@yahoo.com] Sent: Wednesday, December 30, 2009 1:47 PM To: users@openjpa.apache.org Subject: Re: How to implement a map where the key is in the join table, not in the target table? For JPA 2.0 (OpenJPA trunk), the map key does not need to be a field in the target entity. For example, please see the test case in org.apache.openjpa.persistence.jdbc.maps.*. I assume the following is the relevant excerpt? It appeared to be the only example of this in that package, although I'm looking at the M3 distribution, not the trunk.

@ManyToMany
@JoinTable(name="CENROLLS",
    joinColumns=@JoinColumn(name="STUDENT"),
    inverseJoinColumns=@JoinColumn(name="SEMESTER"))
@MapKeyJoinColumn(name="COURSE")
Map<Course, Semester> enrollment = new HashMap<Course, Semester>();

- Original Message From: KARR, DAVID (ATTCINW) dk0...@att.com To: users@openjpa.apache.org Sent: Tue, December 29, 2009 9:36:53 AM Subject: How to implement a map where the key is in the join table, not in the target table? The information I've read about the map construct is that the key used in the map is taken from a field in the target entity. I have a situation where I need to define a map where the key is a column in a join table, not the target table. Here's an example of the structure I have:

Table FOO:
  VARCHAR FOO_ID
  INT TYPE

Table FOO_BILLING_INFO:
  VARCHAR FOO_ID
  VARCHAR BILLING_SYSTEM
  VARCHAR BILLING_INFO_ID

Table BILLING_INFO:
  VARCHAR BILLING_INFO_ID
  VARCHAR BILL_CODE
  VARCHAR SYSTEM_NAME

There is a OneToMany relationship from FOO to BILLING_INFO. The key for the map is intended to be the BILLING_SYSTEM value in the join table. Also note that the values of BILLING_SYSTEM are not the same as or related to the values of SYSTEM_NAME in BILLING_INFO (that was my first guess).
For each unique value of FOO_ID in FOO, there will be two FOO_BILLING_INFO rows with different BILLING_SYSTEM values, and each of those two rows will point to a BILLING_INFO row, sometimes the same one for both FOO_BILLING_INFO rows, but sometimes not.
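For the FOO/BILLING_INFO schema described in this thread, a JPA 2.0 mapping might look like the sketch below (untested against the M3 build; the Java field names are hypothetical). Since two FOO_BILLING_INFO rows can point at the same BILLING_INFO row, @ManyToMany is used rather than @OneToMany:

```java
@Entity
public class Foo {
    @Id
    @Column(name = "FOO_ID")
    private String fooId;

    // The map key is taken from the BILLING_SYSTEM column of the join
    // table itself, not from any field of BillingInfo.
    @ManyToMany
    @JoinTable(name = "FOO_BILLING_INFO",
        joinColumns = @JoinColumn(name = "FOO_ID"),
        inverseJoinColumns = @JoinColumn(name = "BILLING_INFO_ID"))
    @MapKeyColumn(name = "BILLING_SYSTEM")
    private Map<String, BillingInfo> billingInfo =
        new HashMap<String, BillingInfo>();
}
```

@MapKeyColumn (for a basic-typed key stored in the join table) is the 2.0 counterpart of the @MapKeyJoinColumn example above, which is for keys that are themselves entities.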
Can I map between multiple datasources?
I've been assembling an app using OpenJPA 1.2.1 for a few weeks now, and I've just concluded that it is going to require reading data from more than one database datasource. One user schema has most of the tables, but another user owns some other tables, and there are references between those groups of tables. There's no way to specify multiple datasources in a single persistence unit, but I suppose I could define multiple persistence units. Is it possible for an entity in one persistence unit to reference an entity in another persistence unit?
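Defining the two units themselves is straightforward (unit and datasource names below are hypothetical); what persistence.xml has no syntax for is a mapped relationship that crosses units, so as far as the spec goes, cross-schema references would have to be carried as plain foreign-key values and resolved by a second lookup in the other unit:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="mainPU">
    <jta-data-source>jdbc/MainDS</jta-data-source>
    <!-- entities from the primary user schema -->
  </persistence-unit>
  <persistence-unit name="secondaryPU">
    <jta-data-source>jdbc/SecondaryDS</jta-data-source>
    <!-- entities from the other user's schema -->
  </persistence-unit>
</persistence>
```

Whether the database itself can bridge the schemas (for example via synonyms or views visible to a single datasource user) is a separate option worth checking before splitting the model.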