Re: Strange behaviour on ManyToMany relationship using RCP client
Hi Rick, we found out another interesting thing. Under some conditions (i.e. using entities on the RCP client side) it matters which entity you save. We called persist on the inverse-side entity of the ManyToMany relationship, which led to only one UPDATE statement for the inverse entity. The expected INSERT statement on the join table wasn't executed. When we switched the owning and inverse sides, everything worked as expected: two UPDATE and one INSERT statements were executed. But again there is something strange: for the JUnit integration test this apparently does NOT matter. Calling persist on the inverse side of the relationship executed all statements necessary to create the new ManyToMany relationship. Do you have an explanation for this behaviour? -- View this message in context: http://openjpa.208410.n2.nabble.com/Strange-behaviour-on-ManyToMany-relationship-using-RCP-client-tp6985422p6996020.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
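[Editor's sketch] The asymmetry described above follows from JPA's owning-side rule: only changes made through the owning side (the side without mappedBy) are written to the join table. A minimal illustration with invented entity names (JPA annotations shown as comments so the snippet is self-contained); the helper keeps both sides consistent in memory:

```java
import java.util.HashSet;
import java.util.Set;

class Course {
    // Owning side: @ManyToMany @JoinTable(name = "COURSE_STUDENT")
    final Set<Student> students = new HashSet<>();
}

class Student {
    // Inverse side: @ManyToMany(mappedBy = "students")
    final Set<Course> courses = new HashSet<>();
}

class Enrollment {
    /** Update both sides; only the change to Course.students reaches the join table. */
    static void enroll(Course c, Student s) {
        c.students.add(s);   // owning side: the provider persists this change
        s.courses.add(c);    // inverse side: in-memory consistency only
    }
}
```

If only `s.courses` were modified before persist, a provider following the spec would write no join-table INSERT, which matches the symptom reported above.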
Re: Should OpenJPA initialise empty collections?
Hi areider, we just came across the problem you described. We too use 1.2.3-SNAPSHOT with WebSphere 7. Have a look here: http://openjpa.208410.n2.nabble.com/Strange-behaviour-on-ManyToMany-relationship-using-RCP-client-td6985422.html
Re: AW: Constraint violation using OneToOne relationship
Boblitz John wrote:
> I might be wrong here - but it would seem to me that you are trying to persist the child, not the parent. In your definition A has a foreign key to B, which means B must exist prior to A and is thus the parent in the relationship. This is also what the error message said: "parent key not found". Have you tried persisting B instead of A?

I agree that B must exist prior to A. But in my understanding the parent is referenced by the mappedBy attribute, which we have defined in entity B, so A is the parent. I had expected JPA to recognize this relationship and execute the statements in the correct order. But as Kevin explained, OpenJPA unfortunately tried and failed to detect the relationship. I don't think that persisting B instead of A is the right way (though I haven't tried it). IMHO, one advantage of using JPA is that it simplifies working with data and takes on the burden of generating the proper SQL statements. Otherwise developers could use plain JDBC and take care of all relationships on their own; there would be no reason to use JPA.
Re: Strange behaviour on ManyToMany relationship using RCP client
Hi Rick, sorry, I wasn't right. The two entities are NOT being created on the client side; they are read from the server via RMI. Then the add() method of one of them is called in order to create a new ManyToMany relationship between these two entities. That add() method calls the other entity's add() method, and there the NPE is raised (I showed you the line where the exception occurs). We cast the entity to PersistenceCapable as you requested. In our JUnit integration test a com.ibm.ws.persistence.kernel.WsJpaStateManagerImpl state manager is used (this works). In our RCP client an org.apache.openjpa.kernel.DetachedStateManager state manager is used (this fails).
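[Editor's sketch] One common cause of an NPE inside a bidirectional add() helper on a detached instance is that the collection field came back null because it wasn't part of the detached state. Whether that is the cause here is not confirmed by the thread; the field and method names below mirror the stack trace but the bodies are invented. A defensive version of the helper pair (JPA annotations elided) guards against a null collection:

```java
import java.util.ArrayList;
import java.util.List;

class MetaBp {
    List<Bp> bps;                      // may be null on a detached instance

    void addBp(Bp bp) {
        if (bps == null) {             // guard against an unfetched/null field
            bps = new ArrayList<>();
        }
        if (!bps.contains(bp)) {
            bps.add(bp);
            bp.addMetaBp(this);        // keep the other side in sync
        }
    }
}

class Bp {
    List<MetaBp> metaBps;              // may be null on a detached instance

    void addMetaBp(MetaBp m) {
        if (metaBps == null) {
            metaBps = new ArrayList<>();
        }
        if (!metaBps.contains(m)) {
            metaBps.add(m);
            m.addBp(this);             // recursion stops: the contains() check fails
        }
    }
}
```

Note the guard only avoids the NPE; if the detached state really lacks the collection, initializing it locally may still not produce the intended join-table change on merge.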
Re: Strange behaviour on ManyToMany relationship using RCP client
Hi Rick, the detach state property configuration is:

<property name="openjpa.DetachState" value="fetch-groups(DetachedStateField=true)"/>

We need this because we would like to delete field values (i.e. set them to null in the database explicitly). Here's the stack trace:

java.lang.NullPointerException
	at xx.xxx.x.common.entity.stammdaten.Bp.addMetaBp(Bp.java:804)
	at xx.xxx.x.common.entity.stammdaten.MetaBp.addBp(MetaBp.java:145)
	at xx.xxx.x.rcp.stammdaten.internal.ui.editors.metabp.MetaBpEditor$1.run(MetaBpEditor.java:209)
	at org.eclipse.jface.action.Action.runWithEvent(Action.java:498)
	at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:584)
	at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:501)
	at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:411)
	at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
	at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1053)
	at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4066)
	at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3657)
	at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2640)
	at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2604)
	at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2438)
	at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:671)
	at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
	at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:664)
	at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
	at xx.xxx.x.rcp.internal.ui.application.Application.start(Application.java:56)
	at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
	at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
	at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
	at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
	at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
	at java.lang.reflect.Method.invoke(Method.java:600)
	at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:620)
	at org.eclipse.equinox.launcher.Main.basicRun(Main.java:575)
	at org.eclipse.equinox.launcher.Main.run(Main.java:1408)
	at org.eclipse.equinox.launcher.Main.main(Main.java:1384)

And here's the requested file: http://openjpa.208410.n2.nabble.com/file/n6985542/Bp.class (Bp.class)
Re: Strange behaviour on ManyToMany relationship using RCP client
Well, no, because both entities are created on the client side and sent to the server for persisting later. Just for this case we implemented a JUnit test and an integration test which run against our WebSphere development server, and everything works fine. But as soon as the RCP client comes into play, we get the exception. It's weird, isn't it?
Re: Constraint violation using OneToOne relationship
This is just unbelievable! As I expected, using this property has side effects. We have boolean fields defined in our database as number(1,0) because Oracle does not have a boolean type. Now we get ArgumentExceptions saying that we declare columns that are not compatible with the expected type "bit". These errors never showed up before we added the openjpa.jdbc.SchemaFactory property. Now what? Remove the property and use OneToMany instead of OneToOne relationships, programmatically limiting the number of child elements to one? That would be very awful. Are there any other options?
Constraint violation using OneToOne relationship
We have an entity A with a unidirectional OneToOne relationship to entity B, configured like this in entity A:

@OneToOne(cascade = CascadeType.ALL)
@JoinColumn(name = "B_ID")
private B b;

Now we create new entities A and B and would like to persist them:

final A a = new A();
final B b = new B();
a.setB(b);
em.persist(a);

We get a PersistenceException because OpenJPA tries to insert the entity with the foreign key first:

openjpa-1.2.3-SNAPSHOT-r422266:1053401 nonfatal general error org.apache.openjpa.persistence.PersistenceException: ORA-02291: integrity constraint violated - parent key not found {prepstmnt 1708156368 INSERT INTO A (ID, ROW_ERF_TSTAMP, ROW_ERF_USER, ROW_MUT_VERSION, B_ID) VALUES (?, ?, ?, ?, ?) [params=(long) 396, (Timestamp) 2011-11-09 15:38:12.048, (String) TEST, (int) 1, (long) 2772]} [code=2291, state=23000]

Why does OpenJPA first try to insert entity A and not B? And how can we force OpenJPA to execute the statements in the right order? Interestingly, this works for our OneToMany relationships. On the internet I found these: https://issues.apache.org/jira/browse/OPENJPA-1961 and http://www-01.ibm.com/support/docview.wss?uid=swg1PK74266 . But those issues concern OneToMany relationships. IBM provides a solution using the OpenJPA property openjpa.jdbc.SchemaFactory, but we would like to understand why persisting does not work in the OneToOne case above. Does somebody have an explanation for our issue? Thank you in advance! We use WebSphere 7.0.0.19 and OpenJPA 1.2.3.
Re: Constraint violation using OneToOne relationship
Hi Rick, these are the properties for the unit test PU:

<properties>
    <property name="openjpa.Log" value="DefaultLevel=INFO, SQL=WARN, JDBC=WARN, Query=TRACE, Schema=ERROR, Runtime=WARN"/>
    <property name="openjpa.ConnectionFactoryProperties" value="PrintParameters=true, PrettyPrint=true"/>
    <property name="openjpa.LockManager" value="version"/>
    <property name="openjpa.jdbc.TransactionIsolation" value="read-committed"/>
    <property name="openjpa.DetachState" value="fetch-groups(DetachedStateField=true)"/>
</properties>

And these are the properties for the production environment:

<properties>
    <property name="openjpa.ConnectionFactoryProperties" value="PrintParameters=true"/>
    <property name="openjpa.LockManager" value="version"/>
    <property name="openjpa.jdbc.TransactionIsolation" value="read-committed"/>
    <property name="openjpa.DetachState" value="fetch-groups(DetachedStateField=true)"/>
</properties>
Re: Constraint violation using OneToOne relationship
Hi Rick, thank you for the fast responses! Yes, I successfully tried the solution IBM proposed. But why do I have to add this property just for OneToOne relationships in order to make them work properly? For all other relationships it is not necessary. I would really like to understand this difference. Are there any side effects on the persistence mappings from adding the openjpa.jdbc.SchemaFactory property to the PU? I'm suspicious of adding properties to get things working which, in my opinion, should work without further configuration. I wonder if creating both sides of a OneToOne relationship and persisting them by saving the parent entity is a special case.
Re: Using FetchPlan.setFetchBatchSize() feature leads to java.io.NotSerializableException
Okay, using the method setFetchBatchSize(int size) works as desired, but the result list is not serializable, so an exception occurs when trying to send it to a remote client. The simple solution to this problem: copy the result list's content into a new list and send the copy to the client. That's it.
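[Editor's sketch] The copy step described above amounts to one line; here it is as a small helper (the method name is invented, and a plain list stands in for the provider-backed result, which cannot be reproduced without a database):

```java
import java.util.ArrayList;
import java.util.List;

class ResultCopier {
    /** Copy a provider-backed result list into a plain, serializable ArrayList. */
    static <T> List<T> toSerializableCopy(List<T> providerBacked) {
        // ArrayList implements java.io.Serializable, so the copy can cross RMI/IIOP
        return new ArrayList<>(providerBacked);
    }
}
```

The copy also detaches the result from the non-serializable result-object provider that appears in the exception further down this thread.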
Re: Join fetch does not work if data cache is enabled
I wonder if I'm the first developer stumbling upon this one...
Re: AW: Speed of fetching simple entities using OpenJPA
In the meantime we were able to speed up the loading at application start by selecting only the fields from the database that the application really needs. You can do this with constructor expressions; you probably know that. It took approximately 50 seconds to read 60'000 entities and deliver them to the remote client. Now this is done in 2 seconds, by reading only two fields instead of everything and using the setFetchBatchSize() feature!

Michael Pflueger wrote:
> Hi, thanks for your comment. Well, in your case it seems to be a DB problem though? Have you tried reading without cursors or using a different database?
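[Editor's sketch] The constructor-expression pattern mentioned above targets a small DTO class via JPQL's NEW operator. The class and query below are illustrative, not the poster's actual code; the constructor's parameter list must match the expression's argument list:

```java
// Targeted by a JPQL constructor expression such as:
//   SELECT NEW com.example.BpView(b.id, b.name) FROM Bp b
// (class name, fields, and query are invented for illustration)
class BpView {
    final long id;
    final String name;

    BpView(long id, String name) {   // must match the expression's arguments
        this.id = id;
        this.name = name;
    }
}
```

The provider then selects only the listed columns and invokes this constructor per row, which is why it avoids materializing full entities.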
Re: Speed of fetching simple entities using OpenJPA
Hi Michael, I had a similar question about this topic. Reading 6 rows and converting them into entities takes 24 seconds (using OpenJPA 1.2.3). That is quite a long time in my opinion. Take a look: http://openjpa.208410.n2.nabble.com/Entity-generation-is-very-time-consuming-tp6579389p6579389.html ("Entity generation is very time consuming"). Unfortunately I haven't found a way to speed things up yet...
Join fetch does not work if data cache is enabled
In my persistence.xml I enabled the data cache:

<property name="openjpa.DataCache" value="true(CacheSize=1)"/>

I used a named query, SELECT p FROM Parent p LEFT JOIN FETCH p.children, to eagerly load the children list, which is configured lazy in the Parent entity. The Parent entity was then requested from a remote client. The first time the query was executed, the client received the parent with the list of children; in the trace I could see OpenJPA executing a SELECT against the database. The second time the query was executed, the client received the parent but the list of children was null (and stayed null for all subsequent requests). No SELECT was fired anymore.

openjpa.jdbc.JDBC: Trace: Initial connection autoCommit: true, holdability: 1, TransactionIsolation: 2
openjpa.jdbc.JDBC: Trace: t 1391284973, conn 2070182756 [0 ms] close
openjpa.jdbc.JDBC: Trace: t 1391284973, conn 711993968 [0 ms] close
openjpa.Query: Trace: Executing query: SELECT p FROM Parent p LEFT JOIN FETCH p.childrenList
openjpa.jdbc.SQL: Trace: t 1391284973, conn 2114092546 executing prepstmnt 54657858 SELECT [lots of attributes] FROM PARENT t0, LINIE t1 WHERE t0.ID = t1.PARENT_ID(+)
openjpa.jdbc.SQL: Trace: t 1391284973, conn 2114092546 [187 ms] spent
openjpa.jdbc.JDBC: Trace: t 1391284973, conn 2114092546 [15 ms] close
openjpa.Query: Trace: Executing query: SELECT p FROM Parent p LEFT JOIN FETCH p.childrenList
openjpa.Query: Trace: Executing query: SELECT p FROM Parent p LEFT JOIN FETCH p.childrenList

This behaviour went away as soon as I removed the data cache property from persistence.xml. Is this a bug in the data cache? It seems to forget to fill the list after the first request. OpenJPA 1.2.3 is used.
Re: Using FetchPlan.setFetchBatchSize() feature leads to java.io.NotSerializableException
How do I set the fetch size in JPA? I would like the cursor to read 2000 rows at once to speed up entity generation. Thanks for your help!
Using Oracle hint generates SQL syntax error
I used an Oracle query hint as described in the OpenJPA documentation:

final Query query = em.createQuery("SELECT b FROM Bp b");
query.setHint("openjpa.hint.OracleSelectHint", new Integer(2000));
final List<Bp> result = query.getResultList();

Executing this query fails with org.apache.openjpa.lib.jdbc.ReportingSQLException: ORA-00923: FROM keyword not found where expected (java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected). This is the generated query:

SELECT 2000 t0.ID [some other attributes left out here] FROM BP t0

Why is OpenJPA generating invalid SQL statements? I also tried to set the query hint in a named query:

<named-query name="FIND_ALL_BP">
    <query></query>
    <hint name="openjpa.hint.OptimizeResultCount" value="2000"/>
</named-query>

final Query namedQuery = em.createNamedQuery(MyQueryNames.FIND_ALL_BP.name());
final List<Bp> result = namedQuery.getResultList();

This query was executed but did not show any hint. It was simply SELECT b FROM Bp b, so I guess the hint was just ignored, because the SELECT did not perform any faster. Any suggestions on how to successfully use query hints? I'm using Oracle 11g and OpenJPA 1.2.3.
Using FetchPlan.setFetchBatchSize() feature leads to java.io.NotSerializableException
In my code I would like to set the fetch batch size in order to speed up loading entities:

final OpenJPAQuery ojpaQuery = OpenJPAPersistence.cast(em.createNamedQuery("myNiceQuery"));
ojpaQuery.getFetchPlan().setFetchBatchSize(2000);
final List result = ojpaQuery.getResultList();

The result list should be sent to an RCP client. Now I wonder why I get java.io.NotSerializableException on the client side when using this feature:

Exception in thread P=573122:O=0:CT java.rmi.MarshalException: CORBA BAD_PARAM 0x4f4d0006 Maybe; nested exception is:
java.io.NotSerializableException:
SERVER (id=4773e3aa, host=myhost.mycompany.com) TRACE START:
org.omg.CORBA.BAD_PARAM: org.apache.openjpa.datacache.QueryCacheStoreQuery$CachingResultObjectProvider is not serializable vmcid: OMG minor code: 6 completed: Maybe
	at com.ibm.rmi.util.Utility.throwNotSerializableForCorba(Utility.java:1661)
	at com.ibm.rmi.io.IIOPOutputStream.writeValueType(IIOPOutputStream.java:1142)
	at com.ibm.rmi.io.IIOPOutputStream.writeObjectField(IIOPOutputStream.java:1082)
	at com.ibm.rmi.io.IIOPOutputStream.outputClassFields(IIOPOutputStream.java:1013)
	at com.ibm.rmi.io.IIOPOutputStream.outputObject(IIOPOutputStream.java:997)
	at com.ibm.rmi.io.IIOPOutputStream.continueSimpleWriteObject(IIOPOutputStream.java:484)
	at com.ibm.rmi.io.IIOPOutputStream.simpleWriteObjectLoop(IIOPOutputStream.java:464)
	at com.ibm.rmi.io.IIOPOutputStream.simpleWriteObject(IIOPOutputStream.java:528)
	at com.ibm.rmi.io.ValueHandlerImpl.writeValue(ValueHandlerImpl.java:168)
	at com.ibm.rmi.iiop.CDRWriter.write_value(CDRWriter.java:1195)
	at com.ibm.rmi.iiop.CDRWriter.write_value(CDRWriter.java:1213)
	at com.ibm.rmi.iiop.CDRWriter.write_abstract_interface(CDRWriter.java:1118)
	at com.ibm.CORBA.iiop.UtilDelegateImpl.writeAbstractObject(UtilDelegateImpl.java:483)
	at javax.rmi.CORBA.Util.writeAbstractObject(Util.java:148)
	at ...
	at com.ibm.CORBA.iiop.ServerDelegate.dispatchInvokeHandler(ServerDelegate.java:623)
	at com.ibm.CORBA.iiop.ServerDelegate.dispatch(ServerDelegate.java:476)
	at com.ibm.rmi.iiop.ORB.process(ORB.java:513)
	at com.ibm.CORBA.iiop.ORB.process(ORB.java:1574)
	at com.ibm.rmi.iiop.Connection.respondTo(Connection.java:2845)
	at com.ibm.rmi.iiop.Connection.doWork(Connection.java:2718)
	at com.ibm.rmi.iiop.WorkUnitImpl.doWork(WorkUnitImpl.java:63)
	at com.ibm.ejs.oa.pool.PooledThread.run(ThreadPool.java:118)
	at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1604)
SERVER (id=4773e3aa, host=myhost.mycompany.com) TRACE END.

Please can someone explain what I'm doing wrong? Thank you. I'm using OpenJPA 1.2.3.
Re: Using FetchPlan.setFetchBatchSize() feature leads to java.io.NotSerializableException
Yes, the problem only occurs if I set the fetch batch size. I was very astonished to get a QueryCacheStoreQuery$CachingResultObjectProvider object as the result. I would have expected an OpenJPA implementation of the List interface containing my entities... So what is going on here? I don't understand. Do I have to convert something manually?
Re: Using FetchPlan.setFetchBatchSize() feature leads to java.io.NotSerializableException
Okay, thanks. What I was trying to do was set the fetch size as in java.sql.ResultSet, so that rows are read many at a time instead of one at a time. We experienced higher performance when executing a query with JDBC and using the fetch size feature. Without the fetch size, the JDBC query was only as fast as the JPA query. In other words, setting the fetch size in JDBC can speed up queries so that JDBC is faster than JPA. So I searched the OpenJPA documentation for how to set the fetch size and thought FetchPlan.setFetchBatchSize() was what I needed. But maybe this is something different? Isn't it possible to set the fetch size in OpenJPA 1.2.3?
Best practice: Overriding equals() in entities?
We experienced some problems implementing our own equals() methods. We accidentally forgot to compare a relationship field in an entity and got some nasty OpenJPA errors like this:

"org.apache.openjpa.persistence.ArgumentException: Encountered new object in persistent field MyEntity.myField during attach. However, this field does not allow cascade attach. Set the cascade attribute for this field to CascadeType.MERGE or CascadeType.ALL (JPA annotations) or merge or all (JPA orm.xml). You cannot attach a reference to a new object without cascading."

We then deleted all equals() methods in our entities and DTOs, and the error was gone. Well, we could have fixed the method instead. ;-) But now we wonder whether equals() is needed at all. Apparently OpenJPA does not need equals() to check for new or changed entities. So does overriding equals() in JPA entities have any advantages? Would you recommend overriding equals(), or leaving it up to OpenJPA's internal checks? Are there any fundamental reasons to use equals()? What are your experiences and best practices? Thanks a lot for your input!
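[Editor's sketch] A commonly recommended pattern, if equals() is overridden at all, is to base it on the persistent identity rather than on mutable state or relationship fields (which triggered the error above). This is general JPA advice, not something the thread settles; the class below is illustrative. Note the two deliberate choices: two entities compare equal only once both have an id, and hashCode() is constant so it does not change when the id is assigned after insertion into a hash-based collection:

```java
class MyEntity {
    Long id;   // persistent identity; null until the provider assigns a key

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyEntity)) return false;
        MyEntity other = (MyEntity) o;
        // Equal only when both sides have a persistent identity and it matches.
        return id != null && id.equals(other.id);
    }

    @Override
    public int hashCode() {
        // Constant hash: the id may be assigned later, and an object's hash
        // must not change while it sits in a HashSet/HashMap.
        return 31;
    }
}
```

The trade-off of the constant hash is degraded hash-collection performance for large sets of entities; an immutable business key, where one exists, avoids that.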
Re: Version of all children is incremented in OneToMany when merging parent entity
Please can anyone explain to me when "batching prepstmnt" occurs? What exactly causes it? What are the reasons for OpenJPA to batch statements?
Re: Version of all children is incremented in OneToMany when merging parent entity
Well, I'm not able to get a test working. :-( In the JUnit test only the changed child's version is increased, as it should be. The main difference is that my test uses the resource-local transaction type with Derby instead of JTA and Oracle. The OpenJPA properties in the persistence.xml files are the same. What I do NOT see in my test are those strange "batching prepstmnt" UPDATE statements in the logs. I wonder what JPA is doing here; it looks like it is stockpiling UPDATE statements for later use. And what exactly is "executing batch prepstmnt" for? As you can see, JPA sets the MUT_VERSION of all children to 28, but only child 328241 has actually changed its MUT_USER field:

openjpa.jdbc.JDBC: Trace: The batch limit is set to 100.
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 2077588437 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328238, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 2077588437 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328239, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 2077588437 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328240, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 executing batch prepstmnt 2077588437 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328240, (int) 27]
openjpa.jdbc.JDBC: Trace: ExecuteBatch command returns update success count 3
openjpa.jdbc.JDBC: Trace: ExecuteBatch command returns update count -2 for statement UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ?.
openjpa.jdbc.JDBC: Trace: ExecuteBatch command returns update count -2 for statement UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ?.
openjpa.jdbc.JDBC: Trace: ExecuteBatch command returns update count -2 for statement UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ?.
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 executing prepstmnt 628303219 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_USER = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (String) JOHN, (int) 28, (long) 13260, (long) 328241, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328242, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328243, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328244, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328245, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328246, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328247, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ? [params=(Timestamp) 2011-08-04 15:36:37.496, (int) 28, (long) 13260, (long) 328232, (int) 27]
openjpa.jdbc.SQL: Trace: t 1494112526, conn 1933144889 batching prepstmnt 904345063 UPDATE CHILD SET MUT_TSTAMP = ?, MUT_VERSION = ?, PARENT_ID = ? WHERE ID = ? AND CHILD.MUT_VERSION = ?
Re: Version of all children is incremented in OneToMany when merging parent entity
Hi Rick, when I run the application server, entities are loaded and converted to DTOs, which are serialized and transported to the client over the network. Next the client changes some values (one value in this case). Then the DTOs are transferred back to the server and converted into JPA entities again, and a merge is done to persist the changes. As I said before, the default LockManager (version) is responsible for incrementing the version of all children even if the objects are not modified. (I used this LockManager in my test and only the modified object's version was touched, so there has to be a difference between resource-local and JTA handling.) Incrementing all children's version fields is a VERY strict behaviour of OpenJPA, and in my opinion it does not make sense in every case. But how can I switch it off? If I set LockManager to none, there is no difference from version. And pessimistic with VersionUpdateOnWriteLock set to false or none or whatever (it is not documented which value switches it off) didn't help either. So what can I do?
Re: Version of all children is incremented in OneToMany when merging parent entity
Okay, reading the manual sometimes helps... ;-) Increasing the version on all children is the OpenJPA default:

"This lock manager does not perform any exclusive locking, but instead ensures read consistency by verifying that the version of all read-locked instances is unchanged at the end of the transaction. Furthermore, a write lock will force an increment to the version at the end of the transaction, even if the object is not otherwise modified. This ensures read consistency with non-blocking behavior. This is the default openjpa.LockManager setting in JPA."

This setting can be overridden by using the pessimistic lock manager and its properties:

"The pessimistic LockManager can be configured to additionally perform the version checking and incrementing behavior of the version lock manager described below by setting its VersionCheckOnReadLock and VersionUpdateOnWriteLock properties."

So I configured OpenJPA to not change the version on update:

<property name="openjpa.LockManager" value="pessimistic(VersionCheckOnReadLock=true,VersionUpdateOnWriteLock=false)"/>

But it does not work; the version fields of all children are still incremented. Am I missing something? What do I have to configure in order to have OpenJPA update only the changed entities' version fields?
Re: Version of all children is incremented in OneToMany when merging parent entity
Hi Rick, did you read my second posting? -- View this message in context: http://openjpa.208410.n2.nabble.com/Version-of-all-children-is-incremented-in-OneToMany-when-merging-parent-entity-tp6645128p6649130.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Version of all children is incremented in OneToMany when merging parent entity
I have a OneToMany relationship defined like this:

@Entity
public class Parent {
    @OneToMany(mappedBy = "parent", fetch = FetchType.LAZY,
               cascade = { CascadeType.PERSIST, CascadeType.REFRESH, CascadeType.MERGE })
    private List<Child> childList;
    // ...
    @Version
    private int version;
}

@Entity
public class Child {
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "PARENT_ID")
    private Parent parent;
    // ...
    @Version
    private int version;
}

Now when I change ONE of the child elements and do a merge by executing em.merge(parent), the version of ALL children is incremented by one! I expected that only the version of the changed child would be incremented. Is this a bug? I could not find anything about this behaviour in the documentation... -- View this message in context: http://openjpa.208410.n2.nabble.com/Version-of-all-children-is-incremented-in-OneToMany-when-merging-parent-entity-tp6645128p6645128.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: openjpa.Runtime Unable to locate Transaction Synchronization Registry
No, there are no exceptions. And enabling openjpa.Log Enhance=TRACE did not help either. But I did some debugging and could isolate the problem. When I persist my entities I do a flush and a refresh in order to return the entity instance to the client right after the INSERT. This is done so that the client does not have to explicitly call a business entity finder method again to get the entity back with the now-set primary key. So my persist method does four things:

em.persist(myEntity);
em.flush();
em.refresh(myEntity);
return myEntity;

It turns out that this is the problem. If I remove the flush and refresh calls, the SEVERE: javaAccessorNotSet errors disappear in my JUnit tests, but now I get a stale entity back (meaning the entity has the state BEFORE the INSERT - no primary key is set). I'm not sure whether this is good practice because I do a commit in the middle of the business transaction. But the RCP client needs the primary key because after saving the data it has to execute further business methods which need the primary key. A new remote call to the service facade - in order to get the whole entity object tree with all relationships converted to DTOs first, serialized, shipped over the network and then deserialized again - is very costly. But maybe you have a nice solution for me? :-) -- View this message in context: http://openjpa.208410.n2.nabble.com/openjpa-Runtime-Unable-to-locate-Transaction-Synchronization-Registry-tp6626607p6632770.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Best practice: Using fetch groups or a simple DTO?
Pinaki Poddar wrote: You can switch on SQL logging by <property name="openjpa.Log" value="SQL=TRACE"/> Unfortunately I can't, because I'm using WebSphere Application Server V7. As you probably know, WebSphere simply ignores OpenJPA log properties. You have to enable WebSphere tracing to see the OpenJPA logs in the trace.log file ( http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.base.iseries.doc/info/iseries/ae/tejb_loggingwjpa.html ). This is really a pain and I don't know why IBM does such nonsense. Maybe just to annoy software developers... But I owe you the SQL log. Here it is:

SELECT t0.ID, t0.ROW_MUT_VERSION, t0.ROW_ERF_TSTAMP, t0.ROW_ERF_USER, t0.ROW_MUT_TSTAMP, t0.ROW_MUT_USER, t0.ABKUERZUNG, t0.BEZEICHNUNG, t0.BEZEICHNUNG_LANG, t0.BPUIC, t1.ID, t1.ROW_MUT_VERSION, t1.ROW_ERF_TSTAMP, t1.ROW_ERF_USER, t1.ROW_MUT_TSTAMP, t1.ROW_MUT_USER, t1.CHECK_URL, t1.FORMAT, t1.FTP_CLIENT_FACTORY, t1.INTERVALL, t1.NAME, t1.TYP, t1.VERZEICHNIS_ARCHIV, t1.VERZEICHNIS_DB, t1.VERZEICHNIS_IN, t0.FIKTIV_BP_TF, t0.GUELTIG_BIS, t0.GUELTIG_VON, t0.HALT_AUF_VERLANGEN_TF, t2.ID, t2.ROW_MUT_VERSION, t2.ROW_ERF_TSTAMP, t2.ROW_ERF_USER, t2.ROW_MUT_TSTAMP, t2.ROW_MUT_USER, t2.ABKUERZUNG, t2.ROW_GUELTIG_BIS, t2.ROW_GUELTIG_VON, t3.ID, t3.ROW_MUT_VERSION, t3.ROW_ERF_TSTAMP, t3.ROW_ERF_USER, t3.ROW_MUT_TSTAMP, t3.ROW_MUT_USER, t3.BESCHREIBUNG_DE, t3.BESCHREIBUNG_EN, t3.BESCHREIBUNG_FR, t3.BESCHREIBUNG_IT, t3.ROW_GUELTIG_BIS, t3.ROW_GUELTIG_VON, t3.TEXT_DE, t3.TEXT_EN, t3.TEXT_FR, t3.TEXT_IT, t2.UIC_LAND, t2.WAEHRUNG, t2.ZEITZONE, t0.PRIO, t0.REGION, t0.ROW_GUELTIG_BIS, t0.ROW_GUELTIG_VON, t0.UIC_BP, t0.UIC_KONTROLLZIFFER, t0.X_GEO, t0.X_SWISS_GRID, t0.Y_GEO, t0.Y_SWISS_GRID, t0.Z_GEO, t0.Z_SWISS_GRID FROM BP t0, DATENQUELLE t1, LAND t2, TEXT_SD t3 WHERE t0.DATENQUELLE_ID = t1.ID(+) AND t0.LAND_ID = t2.ID(+) AND t2.TEXT_SD_ID = t3.ID(+)
As you can see, everything is selected although I'm using the fetch group. There are joins to three other tables because some fields are eagerly fetched, but I will set them to lazy. Maybe the fetch plan does not work when some relationships are set to eager fetch type. I will create some JUnit tests as you proposed, but I don't know if this gets me further. -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6625117.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
404 while trying to download OpenJPA sources
It is not possible to download the pre-packaged binaries for SNAPSHOT releases. I'm getting 404 Not Found errors when trying to download *apache-openjpa-1.2.3-SNAPSHOT-binary.zip* and *apache-openjpa-1.2.3-SNAPSHOT-source.zip*. Could you please fix the links? Thank you! -- View this message in context: http://openjpa.208410.n2.nabble.com/404-while-trying-to-download-OpenJPA-sources-tp6625220p6625220.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
openjpa.Runtime Unable to locate Transaction Synchronization Registry
Does anyone know what's wrong when this error occurs and how I can fix it? I get this message when I test my EJB DAO with JUnit. I create a new entity and persist it using an H2 in-memory database. Next I load the entity from the database again. When I invoke a getter method on the entity, the message is printed out. The JUnit test is executed with javaagent runtime enhancement. Here are some loggings:

218 resource-local-pu INFO [main] openjpa.Runtime - Starting OpenJPA 1.2.3-SNAPSHOT
331 resource-local-pu INFO [main] openjpa.jdbc.JDBC - Using dictionary class org.apache.openjpa.jdbc.sql.H2Dictionary.
0 [main] INFO server.persistence.dao.MyDaoBean - Persisting new entity with number [638].
260 [main] INFO server.persistence.dao.MyDaoBean - Finding entity with primary key [1].
now I invoke entity.getNumber()
Jul 27, 2011 5:32:19 PM null null
SEVERE: javaAccessorNotSet
4725 resource-local-pu INFO [main] openjpa.Runtime - Unable to locate Transaction Synchronization Registry.

-- View this message in context: http://openjpa.208410.n2.nabble.com/openjpa-Runtime-Unable-to-locate-Transaction-Synchronization-Registry-tp6626607p6626607.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Best practice: Using fetch groups or a simple DTO?
Rick Curtis wrote: Try removing the default fetchgroup? Hi Rick, thank you for your input! I did the following:

final OpenJPAQuery ojpaQuery = OpenJPAPersistence.cast(em.createNamedQuery(StammdatenQueryNames.FIND_ALL_BP.name()));
ojpaQuery.getFetchPlan().removeFetchGroup(FetchGroup.NAME_DEFAULT);
ojpaQuery.getFetchPlan().addFetchGroup("short");
final List<Bp> result = ojpaQuery.getResultList();

All fields and all relationships are loaded. Another idea? -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6621633.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Best practice: Using fetch groups or a simple DTO?
Pinaki Poddar wrote: How are you verifying whether a field's data has been loaded in an entity instance or not? Well, I have a breakpoint on the line final List<Bp> result = ojpaQuery.getResultList(); where I can inspect the result list just after the SELECT was executed. By the way, this is the SELECT:

[7/26/11 17:49:29:904 CEST] 000e Query 3 openjpa.Query: Trace: Executing query: SELECT b FROM Bp b

I would have expected something like this using my fetch group: SELECT b.id, b.bezeichnung FROM Bp b Okay, I have some relationships declared in Bp with fetch type EAGER, so I understand why OpenJPA is loading those fields with joins. But I thought that with OpenJPA's FetchPlan functionality I could select the fields I would like to load in a fine-grained way. The method clearFetchGroups() did not solve my issue either. :-( -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6622771.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Best practice: Using fetch groups or a simple DTO?
It does not work as described in the documentation. Or am I missing something? Here is my entity:

@Entity
@Table(name = "BP")
@FetchGroup(name = "short", attributes = {
    @FetchAttribute(name = "id"),
    @FetchAttribute(name = "bezeichnung") })
public class Bp extends BaseEntity implements Serializable {

    public static final long serialVersionUID = -8334035710155503058L;

    @Id
    @SequenceGenerator(name = "SeqBp", sequenceName = "SEQ_BP")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SeqBp")
    @Column(name = "ID")
    private Long id;

    @Column(name = "ABKUERZUNG")
    private String abkuerzung;

    @Column(name = "REGION")
    private String region;

    @Column(name = "BEZEICHNUNG")
    private String bezeichnung;

    // lots of other fields and relations here
}

The code to load the entities with the named FetchGroup "short":

final OpenJPAQuery ojpaQuery = OpenJPAPersistence.cast(em.createNamedQuery(StammdatenQueryNames.FIND_ALL_BP.name()));
ojpaQuery.getFetchPlan().addFetchGroup("short");
final List<Bp> result = ojpaQuery.getResultList();

In the list I get, the entities are loaded completely, containing all fields and relations. Why? -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6617686.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Best practice: Using fetch groups or a simple DTO?
Pinaki Poddar wrote: FetchPlan Great! But now how does it work? Do you have a detailed example? The OpenJPA documentation is very sparse about it (chapter 5.7 of the reference guide just lists the interface's methods - there are no hints about the results, and the Magazine entity is not complete). My questions: 1) Is there no way to dynamically create a new FetchPlan at runtime? Do I have to declare a FetchGroup via annotations first in order to use the FetchPlan feature? 2) When I have an entity with 15 fields and I would like to load only two of them (no relations), I have to create a FetchGroup accordingly. In the next step I execute OpenJPAPersistence.cast(...), add my FetchGroup, and then what exactly do I get? My entity with the two fields only? My entity with the two fields filled and the remaining ones initialized to default values? 3) What will the result be if the entity has some relations? Are they initialized to null if they are not defined in my custom FetchGroup? I'm using OpenJPA 1.2.3, so please keep that in mind when answering. Thank you! -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6601589.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Best practice: Using fetch groups or a simple DTO?
I have a big entity with lots of data which I don't need to send to my client. Only the primary key and a string should be delivered. All in all I have to send 6 entities. Now I have two possibilities: 1) I create a simple DTO with just the two required fields and fill it up with JPA's constructor expression. 2) I use a fetch group which defines only the two required attributes on my big entity. But I guess all other attributes are then initialized with default values and transported over the network as well. What is best practice? What would you recommend? Maybe there are other solutions? Thanks for your input! -- View this message in context: http://openjpa.208410.n2.nabble.com/Best-practice-Using-fetch-groups-or-a-simple-DTO-tp6598057p6598057.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
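Option 1 might look like the following minimal sketch (the class name, field names, and package in the JPQL string are hypothetical; the constructor expression assumes an entity named Bp with matching id and bezeichnung fields):

```java
import java.io.Serializable;

// Minimal DTO carrying only the two fields the client needs.
// It is Serializable for the network transport, and its constructor
// signature matches the JPQL constructor expression below.
class BpDto implements Serializable {
    private static final long serialVersionUID = 1L;

    // JPQL constructor expression (available in JPA 1.0). Note that the
    // DTO class name must be fully qualified; com.example.dto is a placeholder.
    static final String QUERY =
        "SELECT NEW com.example.dto.BpDto(b.id, b.bezeichnung) FROM Bp b";

    private final Long id;
    private final String bezeichnung;

    BpDto(Long id, String bezeichnung) {
        this.id = id;
        this.bezeichnung = bezeichnung;
    }

    Long getId() { return id; }
    String getBezeichnung() { return bezeichnung; }
}
```

Only the two fields ever cross the wire this way, which sidesteps the question of what the fetch group does with the unloaded attributes.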
Re: Entity generation is very time consuming
Okay, I tried loading the data with plain JDBC. I found out that moving the cursor over the 60.000 rows takes 14 seconds! So I can't blame JPA for taking so much time. Well, at least that's how Oracle 11g performs. Maybe there are other, faster databases. -- View this message in context: http://openjpa.208410.n2.nabble.com/Entity-generation-is-very-time-consuming-tp6579389p6594377.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
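One thing that might be worth ruling out in such a JDBC test (an assumption on my part, not verified against this setup): the Oracle JDBC driver fetches only 10 rows per network round trip by default, which can dominate the time spent iterating a large result set. A sketch of raising it through the standard JDBC API:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class FetchSizeSketch {
    // Only the two small columns from the thread are read.
    static final String SQL = "SELECT t0.ID, t0.BEZEICHNUNG FROM BP t0";

    // Returns a ResultSet that fetches 500 rows per round trip instead of
    // the driver default, cutting the number of round trips roughly 50-fold.
    static ResultSet queryBp(Connection conn) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(SQL);
        ps.setFetchSize(500); // standard JDBC hint; Oracle's driver honors it
        return ps.executeQuery();
    }
}
```

If raising the fetch size closes most of the gap, the 14 seconds would be network round-trip latency rather than row processing; OpenJPA exposes a similar knob as the openjpa.FetchBatchSize property.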
Re: Entity generation is very time consuming
I have one single select query, and the connection pool shows one connection being obtained. The entity does indeed have many one-to-many and many-to-one relationships. In case that was the problem, I created a DTO which has only the primary key (Long) and a string. Then I used a constructor expression to fill up this DTO instead of the entity with all its dependencies (SELECT NEW MyDto(t.id, t.name) FROM Table t). I was very astonished by the fact that it took just as long as the original entity creation! How can this be explained? Nearly 27 seconds for reading two fields from the database table and generating 60.000 DTOs? This is awful performance! Are the JPA entities still generated with all relationships if constructor expressions are used? One thing: I'm using Oracle 11g, not 10g as written in my first posting. -- View this message in context: http://openjpa.208410.n2.nabble.com/Entity-generation-is-very-time-consuming-tp6579389p6582447.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Re: Entity generation is very time consuming
Well, I can't use typed queries because I use JPA 1.0. This is the result of your suggestion:

[7/14/11 16:08:37:205 CEST] 0013 Query 3 openjpa.Query: Trace: Executing query: SELECT b.id, b.bezeichnung FROM Bp b
[7/14/11 16:08:37:205 CEST] 0013 jdbc_SQL 3 openjpa.jdbc.SQL: Trace: t 1528126229, conn 690825517 executing prepstmnt 819474648 SELECT t0.ID, t0.BEZEICHNUNG FROM BP t0
[7/14/11 16:08:37:221 CEST] 0013 jdbc_SQL 3 openjpa.jdbc.SQL: Trace: t 1528126229, conn 690825517 [16 ms] spent
[7/14/11 16:08:51:784 CEST] 0013 jdbc_JDBC 3 openjpa.jdbc.JDBC: Trace: t 1528126229, conn 690825517 [0 ms] close

14 seconds. Better, but still far from fast for reading two small fields... Maybe JPA isn't made for reading tables with many rows. I wonder what developers and users do in the meantime when they have to read and process, let's say, 80.000 or more data rows. Meet the girlfriend in the park? Go on holidays? Hey, we are living in the 21st century. Machines have power. ;-) I will try a plain old JDBC database access next and see how it performs. -- View this message in context: http://openjpa.208410.n2.nabble.com/Entity-generation-is-very-time-consuming-tp6579389p6583445.html Sent from the OpenJPA Users mailing list archive at Nabble.com.
Entity generation is very time consuming
I have an Oracle 10g database, and in the JEE application I use OpenJPA 1.2.3-SNAPSHOT (which is shipped with the IBM WebSphere 7 application server). I'm loading 60.000 rows from a table with some booleans, strings, numbers and timestamps. Nothing big - no BLOBs or anything like that. The generation of the corresponding JPA entities takes 24 seconds, which is very long in my opinion. Is there any possibility to tune and speed things up considerably? I don't think 60.000 entities should be that much data, but as it is I have to get a cup of coffee every time I trigger the select... Do you have any hints for me on how to increase performance? Thank you! P.S.: Unfortunately I can't upgrade to JPA 2.0; only JPA 1.0 features are allowed. -- View this message in context: http://openjpa.208410.n2.nabble.com/Entity-generation-is-very-time-consuming-tp6579389p6579389.html Sent from the OpenJPA Users mailing list archive at Nabble.com.